International Nuclear Information System (INIS)
The inhibition effect of glycine (Gly) on the corrosion of low-alloy ASTM A213 grade T22 boiler steel was studied in aerated stagnant 0.50 M HCl solutions in the temperature range 20-60 deg. C using potentiodynamic polarization (Tafel polarization and linear polarization) and impedance techniques, complemented by scanning electron microscopy (SEM) and energy dispersive X-ray (EDX) analysis. Electrochemical frequency modulation (EFM), a non-destructive corrosion measurement technique that yields corrosion current values directly, without prior knowledge of Tafel constants, is also presented here. Corrosion rates determined by the Tafel extrapolation method are compared with those obtained by an electrochemical method, namely the EFM technique, and by a chemical (i.e., non-electrochemical) method for steel in HCl. The chemical method of confirming the corrosion rates involved determination of the dissolved cation using ICP-AES (inductively coupled plasma atomic emission spectrometry). Corrosion rates (in mm y-1) obtained from the electrochemical methods (Tafel extrapolation and EFM) and the chemical method (ICP) are in good agreement. Polarization studies have shown that Gly is a good 'green', mixed-type inhibitor with cathodic predominance. The inhibition process was attributed to the formation of an adsorbed film on the metal surface that protects the metal against corrosive agents. SEM and EDX examinations of the electrode surface confirmed the existence of such an adsorbed film. The inhibition efficiency increases with increasing Gly concentration and decreases with increasing solution temperature. The Temkin isotherm is successfully applied to describe the adsorption process, and thermodynamic functions for the adsorption process were determined.
Directory of Open Access Journals (Sweden)
Nicola Oprea
2012-11-01
Full Text Available Computational methods for ordinary differential equations, although constituting one of the older established areas of numerical analysis, have been the subject of a great deal of research in recent years. It is hoped that this article will provide postgraduate students and general users of numerical analysis with a readable account of these developments. The only prerequisites required of the reader are a sound course in calculus and some acquaintance with complex numbers, matrices and vectors. Some see "analysis" as the keyword, and wish to embed the subject entirely in rigorous modern analysis. To others, "numerical" is the vital word, and the algorithm the only respectable product. In this paper I have tried to take a middle course between these two extremes. Using polynomial extrapolation, we solve an initial value problem in ordinary differential equations. The aim of this paper is to compare this approach with the fourth-order Runge-Kutta method on the basis of accuracy for a given number of function evaluations.
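The comparison described above can be illustrated with a minimal sketch: Euler's method combined with Richardson (polynomial) extrapolation over halved step sizes, applied to the test problem y' = -y, y(0) = 1. The step counts and test problem are illustrative choices of ours, not taken from the paper.

```python
import math

def euler(f, y0, t1, n):
    """Integrate y' = f(t, y) from t = 0 to t1 with n Euler steps."""
    h, t, y = t1 / n, 0.0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

def poly_extrapolate(f, y0, t1, levels=4):
    """Richardson (polynomial) extrapolation of Euler results at
    step counts 10, 20, 40, ... using a Neville-type tableau."""
    T = [euler(f, y0, t1, 10 * 2**i) for i in range(levels)]
    for k in range(1, levels):
        for i in range(levels - 1, k - 1, -1):
            # Euler's global error expands in powers of h, so each
            # tableau column cancels one more power of h.
            T[i] = T[i] + (T[i] - T[i - 1]) / (2**k - 1)
    return T[-1]

f = lambda t, y: -y
approx = poly_extrapolate(f, 1.0, 1.0)
exact = math.exp(-1.0)
```

For this smooth problem, the extrapolated value is far more accurate than the finest plain Euler result, at modest extra cost.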
Extrapolation methods theory and practice
Brezinski, C
1991-01-01
This volume is a self-contained, exhaustive exposition of extrapolation methods theory, and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of the use of extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a subsequent chapter on applications are also provided - including some never before published results and applications.
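As a minimal illustration of the sequence-acceleration techniques the book covers, the following sketch applies Aitken's delta-squared process, one of the simplest extrapolation methods, to the partial sums of the alternating harmonic series. The example is ours, not from the book, and is in Python rather than the book's FORTRAN 77.

```python
def aitken(s):
    """Aitken's delta-squared transform of a scalar sequence s."""
    return [s[n] - (s[n + 1] - s[n]) ** 2
            / (s[n + 2] - 2 * s[n + 1] + s[n])
            for n in range(len(s) - 2)]

# Partial sums of the slowly convergent series ln 2 = 1 - 1/2 + 1/3 - ...
partial = []
total = 0.0
for n in range(1, 12):
    total += (-1) ** (n + 1) / n
    partial.append(total)

accelerated = aitken(partial)
```

The transformed sequence converges to ln 2 much faster than the raw partial sums, which is the basic phenomenon the book's algorithms generalize.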
Squared polynomial extrapolation methods with cycling
Roland, Ch.; Varadhan, R.; Frangakis, C.
2007-02-01
Roland and Varadhan (Appl. Numer. Math., 55:215-226, 2005) presented a new idea called "squaring" to improve the convergence of Lemaréchal's scheme for solving nonlinear fixed-point problems. Varadhan and Roland (Squared extrapolation methods: A new class of simple and efficient numerical schemes for accelerating the convergence of the EM algorithm, Department of Biostatistics Working Paper, Johns Hopkins University, http://www.bepress.com/jhubiostat/paper63, 2004) noted that Lemaréchal's scheme can be viewed as a member of the class of polynomial extrapolation methods with cycling that uses two fixed-point iterations per cycle. Here we combine these two ideas, cycled extrapolation and squaring, and construct a new class of methods, called squared polynomial methods (SQUAREM), for accelerating the convergence of fixed-point iterations. Our main goal is to evaluate whether the squaring device is effective in improving the rate of convergence of cycled extrapolation methods that use more than two fixed-point iterations per cycle. We study the behavior of the new schemes on an image reconstruction problem for positron emission tomography (PET) using simulated data. Our numerical experiments show the effectiveness of first- and higher-order squared polynomial extrapolation methods in accelerating image reconstruction, and also their relative superiority compared to the classical, "unsquared" vector polynomial methods.
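A minimal sketch of the squared-update idea for a generic fixed-point map, assuming a SqS1-type steplength; the toy linear contraction and all variable names are illustrative, not from the paper.

```python
import numpy as np

def squarem_step(F, x):
    """One cycle of a squared extrapolation scheme: two fixed-point
    evaluations followed by a 'squared' update (SqS1-type steplength)."""
    x1 = F(x)
    x2 = F(x1)
    r = x1 - x                 # change after one application of F
    v = (x2 - x1) - r          # second difference (curvature)
    denom = np.dot(r, v)
    if abs(denom) < 1e-30:     # effectively converged
        return x2
    alpha = np.dot(r, r) / denom
    return x - 2.0 * alpha * r + alpha**2 * v

# Toy linear contraction F(x) = Ax + b with fixed point (I - A)^{-1} b.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
b = np.array([1.0, 1.0])
F = lambda x: A @ x + b
x_star = np.linalg.solve(np.eye(2) - A, b)

x = np.zeros(2)
for _ in range(10):
    x = squarem_step(F, x)
```

On this two-dimensional linear problem the adaptive steplength eliminates one eigendirection per cycle, so the squared scheme converges in a handful of cycles while the plain iteration x <- F(x) would still be far from the fixed point.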
The optimized expansion method for wavefield extrapolation
Wu, Z.
2013-01-01
Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they provide dispersion- and artifact-free, as well as highly accurate, solutions of the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain operator. In this abstract, we propose an optimized expansion method that can approximate this operator with a low-rank representation. The rank defines the number of inverse FFTs required per time extrapolation step, and thus a lower rank admits faster extrapolations. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its low-rank representation. Thus, we obtain more accurate wavefields using a lower-rank representation, and hence cheaper extrapolations. The optimization that defines the low-rank representation depends only on the velocity model; it is done only once, and remains valid for a full reverse time migration (many shots) or one iteration of full waveform inversion. Applications to the BP model yielded superior results compared with those obtained using the decomposition approach. For transversely isotropic media, the solutions were free of shear-wave artifacts, and the method does not require that eta > 0.
Implicit extrapolation methods for multilevel finite element computations
Energy Technology Data Exchange (ETDEWEB)
Jung, M.; Ruede, U. [Technische Universitaet Chemnitz-Zwickau (Germany)
1994-12-31
The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that it differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore requires neither uniform meshes nor global regularity assumptions. In the paper the authors analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results are compared to higher-order finite element solutions.
Extrapolation methods for the Dirac inverter in hybrid Monte Carlo
International Nuclear Information System (INIS)
In Hybrid Monte Carlo (HMC) simulations for full QCD, the gauge fields evolve smoothly as a function of Molecular Dynamics (MD) time. Thus we investigate improved methods of estimating the trial solutions to the Dirac propagator as superpositions of the solutions in the recent past. So far our best extrapolation method reduces the number of Conjugate Gradient iterations per unit MD time by about a factor of 4. Further improvements should be forthcoming as we further exploit the information of past trajectories. ((orig.))
Efficient extrapolation methods for electro- and magnetoquasistatic field simulations
Directory of Open Access Journals (Sweden)
M. Clemens
2003-01-01
Full Text Available In magneto- and electroquasi-static time domain simulations with implicit time stepping schemes, the iterative solvers applied to the large sparse (non-linear) systems of equations are observed to converge faster if more accurate start solutions are available. Different extrapolation techniques for such new time step solutions are compared in combination with the preconditioned conjugate gradient algorithm. Simple extrapolation schemes based on Taylor series expansion are used, as well as schemes derived especially for multi-stage implicit Runge-Kutta time stepping methods. With several initial guesses available, a new subspace projection extrapolation technique is proven to produce an optimal initial value vector. Numerical tests show the resulting improvements in terms of computational efficiency for several test problems.
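The subspace projection extrapolation idea, an optimal start vector from the span of earlier time-step solutions, can be sketched as a small least-squares problem; the toy system and all values below are illustrative, not from the paper.

```python
import numpy as np

def subspace_start(A, b, snapshots):
    """Least-squares optimal linear combination of previous time-step
    solutions, used as the initial guess for an iterative solver:
    minimize || b - A * (X c) || over the coefficients c."""
    X = np.column_stack(snapshots)   # basis of old solutions
    AX = A @ X
    c, *_ = np.linalg.lstsq(AX, b, rcond=None)
    return X @ c

# Toy SPD system whose right-hand side drifts over 'time steps'.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b_old1, b_old2 = rng.standard_normal(50), rng.standard_normal(50)
x_old = [np.linalg.solve(A, b_old1), np.linalg.solve(A, b_old2)]

b_new = 0.7 * b_old1 + 0.3 * b_old2   # new RHS close to the old span
x0 = subspace_start(A, b_new, x_old)
```

When the new right-hand side lies (nearly) in the span of the old ones, the projected start vector already has a (nearly) zero residual, so the subsequent conjugate gradient iteration needs few steps.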
Assessment of Load Extrapolation Methods for Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2011-01-01
In the present paper, methods for statistical load extrapolation of wind turbine response are studied using a stationary Gaussian process model, which has approximately the same spectral properties as the response for the out-of-plane bending moment of a wind turbine blade. For a Gaussian process, an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper, three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima, and the peak-over-threshold method with two different threshold values. The comparisons show that the goodness of fit for the local distribution has a significant influence on the results, but the peak-over-threshold method with a threshold value of the mean plus 1.4 standard deviations generally gives the best results. By considering Gaussian processes for 12 mean wind speeds, the “fitting before aggregation” and “aggregation before fitting” approaches are studied. The results show that the fitting before aggregation approach gives the best results.
The absolute determination of activity by the efficiency extrapolation method
International Nuclear Information System (INIS)
As agent for the Commonwealth Scientific and Industrial Research Organisation, the Australian Atomic Energy Commission is responsible for the maintenance of the Australian standard of activity. The standard comprises activity measurement procedures involving the operation of 4π β-γ coincidence counting equipment. The coincidence method requires the application of correction factors which depend on detection efficiency, such as arise for complex decay schemes and internal conversion. These corrections approach unity as the detection efficiency in the β-channel approaches 100 per cent. By performing activity determinations for a range of β detection efficiencies, an 'efficiency extrapolation' analysis can be applied which eliminates the need to determine the absolute detection efficiency for each channel.
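A schematic of the efficiency extrapolation analysis, assuming the apparent activity varies linearly in (1 - ε)/ε over the measured range; all numbers are synthetic, for illustration only.

```python
import numpy as np

# Synthetic coincidence-counting example: the apparent activity
# N_beta * N_gamma / N_c varies (here linearly) with (1 - eff)/eff,
# where eff is the beta-channel efficiency.  The absolute activity N0
# is the intercept at (1 - eff)/eff = 0, i.e. 100 % beta efficiency.
eff_beta = np.array([0.95, 0.90, 0.85, 0.80, 0.75])
x = (1 - eff_beta) / eff_beta
N0_true = 1000.0                       # hypothetical activity (Bq)
apparent = N0_true * (1 + 0.18 * x)    # hypothetical linear response

slope, intercept = np.polyfit(x, apparent, 1)
N0_est = intercept                     # extrapolated absolute activity
```

In practice the response need not be exactly linear and the fit would carry measurement uncertainties; the point is only that the extrapolation to 100 % efficiency removes the need to know the absolute efficiencies themselves.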
Comparison of methods for extrapolating creep-rupture results
International Nuclear Information System (INIS)
Among the methods of extrapolation, the following were selected: parametric methods (Larson-Miller, Dorn, Manson-Haferd); a numerical and parametric method (minimum commitment); a numerical method (finite differences); and a descriptive method (Givar). The Larson-Miller, Dorn and Manson-Haferd methods are commonly used for analysing the creep-rupture results of materials for which the master curves can be described simply. The other methods have been developed in order to analyse the creep-rupture results of materials where structural changes over time modify the creep behaviour. In each case the parameters are estimated by the least squares method. These methods were compared with each other on two steels, namely Z6 CND 17-12 (316) and Z4 CND 35-20 (800 alloy). The various analyses performed show that (a) the predictions made by the different methods are in good agreement with each other when there is a sufficient number of experimental values, and (b) the predictions of the rupture times in the case of the 800 alloy differ from one method to the next. This result is due to the limited sampling data and to the complex behaviour of this alloy, the properties of which change with ageing
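As a sketch of the parametric approach, the following fits a Larson-Miller master curve to hypothetical short-term rupture data and extrapolates a rupture time at a service condition. The constant C = 20 is a typical textbook value and all data are illustrative, not from the study.

```python
import numpy as np

C = 20.0  # Larson-Miller constant, a typical value for steels

def lmp(T_kelvin, t_rupture_h):
    """Larson-Miller parameter P = T * (C + log10(t_rupture))."""
    return T_kelvin * (C + np.log10(t_rupture_h))

# Hypothetical short-term data: (temperature K, rupture time h, stress MPa)
data = [(923.0, 100.0, 200.0), (923.0, 1000.0, 160.0),
        (973.0, 100.0, 140.0), (973.0, 1000.0, 110.0)]
P = np.array([lmp(T, t) for T, t, s in data])
stress = np.array([s for T, t, s in data])

# Master curve: log10(stress) as a linear function of P (least squares).
a, b = np.polyfit(P, np.log10(stress), 1)

def rupture_time(T_kelvin, stress_mpa):
    """Rupture time extrapolated from the fitted master curve."""
    P_service = (np.log10(stress_mpa) - b) / a
    return 10.0 ** (P_service / T_kelvin - C)
```

The extrapolation step is exactly the inversion of P = T(C + log10 t): once the stress-P master curve is fitted, any (temperature, stress) pair maps to a predicted rupture time.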
Extrapolation Method for System Reliability Assessment : A New Scheme
DEFF Research Database (Denmark)
Qin, Jianjun; Nishijima, Kazuyoshi
2012-01-01
The present paper presents a new scheme for the probability integral solution in system reliability analysis, building on the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of Monte Carlo approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here based on the theory of asymptotic solutions to multinormal probability integrals. The scheme is extended so that it can be applied to cases where the asymptotic property may not be valid and/or the random variables are not normally distributed. The performance of the scheme is investigated by four principal series and parallel systems and some practical examples. The results indicate that the proposed scheme is efficient and adds generality to this class of approximations for probability integrals.
Multiplicative measurement error and the simulation extrapolation method
Biewen, Elena; Nolte, Sandra; Rosemann, Martin
2008-01-01
Whereas additive measurement error has received considerable treatment in the literature, less work has been done on multiplicative noise. In this paper we concentrate on multiplicative measurement error in the covariates, which, contrary to additive error, not only modifies the original value proportionally but also conserves the structural zeros. This paper compares three variants of specifying the multiplicative measurement error model in the simulation step of the Simulation-Extrapolation (SI...
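A minimal SIMEX sketch with multiplicative lognormal error in a single covariate: extra noise is added at increasing levels, the attenuated regression slope is tracked, and a quadratic fit is extrapolated back to noise level -1. The model, sample sizes and noise levels are illustrative assumptions, not the paper's variants.

```python
import numpy as np

rng = np.random.default_rng(42)
n, B = 5000, 100
sigma_u = 0.4

# True model: Y = 2*X + eps, but X is observed as W = X * U with
# multiplicative lognormal noise U scaled so that E[U] = 1.
x = rng.lognormal(0.0, 0.5, n)
y = 2.0 * x + rng.normal(0.0, 0.1, n)
u = rng.lognormal(-sigma_u**2 / 2, sigma_u, n)
w = x * u

def ols_slope(z, y):
    return np.polyfit(z, y, 1)[0]

naive = ols_slope(w, y)   # attenuated by the measurement error

# SIMEX: add extra multiplicative noise at levels lam, average over B
# replications, then extrapolate the slope back to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lams:
    reps = []
    for _ in range(B):
        extra = rng.lognormal(-lam * sigma_u**2 / 2,
                              np.sqrt(lam) * sigma_u, n)
        reps.append(ols_slope(w * extra, y))
    slopes.append(np.mean(reps))

coeffs = np.polyfit(lams, slopes, 2)
simex = np.polyval(coeffs, -1.0)
```

The quadratic extrapolant only approximates the true bias curve, so SIMEX reduces rather than eliminates the attenuation; that partial correction is visible in this toy example.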
An efficient wave extrapolation method for anisotropic media with tilt
Waheed, Umair bin
2015-03-23
Wavefield extrapolation operators for elliptically anisotropic media offer significant cost reductions compared with those for the transversely isotropic case, particularly when the axis of symmetry exhibits tilt (from the vertical). However, elliptical anisotropy does not provide accurate wavefield representation or imaging for transversely isotropic media. Therefore, we propose effective elliptically anisotropic models that correctly capture the kinematic behaviour of wavefields for transversely isotropic media. Specifically, we compute source-dependent effective velocities for the elliptic medium using a kinematic high-frequency representation of the transversely isotropic wavefield. The effective model allows us to use cheaper elliptic wave extrapolation operators. Despite the fact that the effective models are obtained by matching kinematics using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including frequency dependence and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost-versus-accuracy trade-off for wavefield computations in transversely isotropic media, particularly for media of low to moderate complexity. In addition, the wavefield solution is free of shear-wave artefacts, as opposed to the conventional finite-difference-based transversely isotropic wave extrapolation scheme. We demonstrate these assertions through numerical tests on synthetic tilted transversely isotropic models.
Tao, Lu
1995-01-01
The splitting extrapolation method is a newly developed technique for solving multidimensional mathematical problems. It overcomes the difficulties arising from Richardson's extrapolation when applied to these problems and obtains higher accuracy solutions with lower cost and a high degree of parallelism. The method is particularly suitable for solving large scale scientific and engineering problems. This book presents applications of the method to multidimensional integration, integral equations and partial differential equations. It also gives an introduction to combination methods which are
How useful are corpus-based methods for extrapolating psycholinguistic variables?
Mandera, Paweł; Keuleers, Emmanuel; Brysbaert, Marc
2015-08-01
Subjective ratings for age of acquisition, concreteness, affective valence, and many other variables are an important element of psycholinguistic research. However, even for well-studied languages, ratings usually cover just a small part of the vocabulary. A possible solution involves using corpora to build a semantic similarity space and applying machine learning techniques to extrapolate existing ratings to previously unrated words. We conduct a systematic comparison of two extrapolation techniques, k-nearest neighbours and random forest, in combination with semantic spaces built using latent semantic analysis, a topic model, a hyperspace analogue to language (HAL)-like model, and a skip-gram model. A variant of the k-nearest neighbours method used with skip-gram word vectors gives the most accurate predictions, but the random forest method has the advantage of being able to easily incorporate additional predictors. We evaluate the usefulness of the methods by exploring how much of the human performance in a lexical decision task can be explained by extrapolated ratings for age of acquisition, and how precisely words can be assigned to discrete categories based on extrapolated ratings. We find that at least some of the extrapolation methods may introduce artefacts into the data and produce results that could lead to different conclusions than would be reached based on the human ratings. From a practical point of view, the usefulness of ratings extrapolated with the described methods may be limited. PMID:25695623
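A minimal sketch of the k-nearest-neighbours extrapolation step on a synthetic "semantic space" (Euclidean distance here for simplicity; cosine similarity over real word vectors is the common choice). The smooth rating function and all dimensions are illustrative assumptions.

```python
import numpy as np

def knn_extrapolate(vec_rated, ratings, vec_new, k=10):
    """Predict ratings for unrated items as the mean rating of their
    k nearest neighbours in the vector space (Euclidean distance)."""
    dists = np.linalg.norm(vec_new[:, None, :] - vec_rated[None, :, :],
                           axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return ratings[nearest].mean(axis=1)

# Synthetic space: ratings depend smoothly on position, standing in
# for e.g. age-of-acquisition norms over word embeddings.
rng = np.random.default_rng(1)
space = rng.standard_normal((500, 10))
true_rating = lambda v: 3.0 + v[:, 0]   # hypothetical smooth relation
rated, unrated = space[:400], space[400:]
pred = knn_extrapolate(rated, true_rating(rated), unrated, k=10)
```

Averaging over neighbours also shrinks predictions toward the mean rating, one source of the artefacts the paper warns about when extrapolated norms replace human ones.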
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
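The power-law extrapolation of the unmeasured top and bottom discharge can be sketched as follows; the profile data are synthetic and the fitting details are ours, not extrap's.

```python
import numpy as np

# Hypothetical measured profile: distance above bed z (m) and unit
# discharge q (m^2/s) from the ADCP-measured part of the water column.
depth = 10.0                        # total depth (m)
z = np.linspace(1.0, 8.5, 16)       # measured bins; top/bottom unmeasured
q = 2.0 * (z / depth) ** (1 / 6)    # synthetic 1/6-power-law data

# Fit q = a * z**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(z), np.log(q), 1)
a = np.exp(log_a)

def power_integral(z_lo, z_hi):
    """Analytic integral of the fitted power curve a * z**b."""
    return a / (b + 1) * (z_hi ** (b + 1) - z_lo ** (b + 1))

bottom = power_integral(0.0, z[0])    # unmeasured bottom discharge
top = power_integral(z[-1], depth)    # unmeasured top discharge
middle = np.sum((q[1:] + q[:-1]) / 2 * np.diff(z))  # measured (trapezoid)
total = bottom + middle + top
```

Because the synthetic profile follows the power law exactly, the fitted exponent recovers 1/6 and the total matches the analytic value 2 * depth * 6/7; with field data, the choice of exponent (and whether a power fit is appropriate at all) is exactly what extrap helps the hydrographer judge.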
Energy Technology Data Exchange (ETDEWEB)
Inoue, S.; Magara, T.; Choe, G. S.; Kim, K. S. [School of Space Research, Kyung Hee University, Yongin, Gyeonggi-do 446-701 (Korea, Republic of); Pandey, V. S. [Department of Physics, National Institute of Technology, Dwarka, Sector-9, Delhi-110077 (India); Shiota, D.; Kusano, K., E-mail: inosato@khu.ac.kr [Solar-Terrestrial Environment Laboratory, Furo-Cho, Chikusa-ku Nagoya 464-8601 (Japan)
2014-01-01
We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm initially proposed by Dedner et al. to effectively clean the numerical errors associated with ∇ · B. Second, the multigrid type method is implemented in our NLFFF to perform direct analysis of the high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high resolution force-free field introduced by Low and Lou with better accuracy in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara. We found that NLFFF extrapolation may be less effective for reproducing areas higher than a half-domain, where some magnetic loops are found in a state of continuous upward expansion. However, an inverse S-shaped structure consisting of the sheared and twisted loops formed in the lower region can be captured well through our NLFFF extrapolation method. We further discuss how well these sheared and twisted fields are reconstructed by estimating the magnetic topology and twist quantitatively.
The optimized expansion based low-rank method for wavefield extrapolation
Wu, Zedong
2014-03-01
Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion- and artifact-free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations compared with the standard low-rank method within reasonable accuracy, and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of the shear wave artifacts, and the algorithm does not require that η > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.
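The essence of the low-rank separable approximation (the mixed space-wavenumber phase operator replaced by a few reference-velocity phase shifts combined with spatial weights) can be sketched in one dimension. This simplified rank-2 linear interpolation stands in for the optimized expansion, which instead chooses the reference values by optimization; all model parameters are illustrative.

```python
import numpy as np

# One time step of 1-D wavefield extrapolation, approximating the mixed
# space-wavenumber operator exp(i * v(x) * |k| * dt) by interpolating
# between phase shifts at a small set of reference velocities.
nx, dx, dt = 256, 10.0, 1e-3
x = np.arange(nx) * dx
k = 2 * np.pi * np.fft.fftfreq(nx, dx)
v = 2000.0 + 1000.0 * (x / x[-1])       # velocity varies with x

v_refs = np.array([2000.0, 3000.0])     # reference velocities (rank 2)

def step_lowrank(u):
    U = np.fft.fft(u)
    # One inverse FFT per reference velocity (= per rank).
    shifted = [np.fft.ifft(np.exp(1j * vr * np.abs(k) * dt) * U)
               for vr in v_refs]
    w = (v_refs[1] - v) / (v_refs[1] - v_refs[0])  # spatial weights
    return w * shifted[0] + (1 - w) * shifted[1]

def step_exact(u):
    # Direct mixed-domain application: one inverse FFT per x sample.
    U = np.fft.fft(u)
    return np.array([np.fft.ifft(np.exp(1j * v[i] * np.abs(k) * dt) * U)[i]
                     for i in range(nx)])

u0 = np.exp(-0.5 * ((x - x.mean()) / (20 * dx)) ** 2)
u1 = step_lowrank(u0)
```

The low-rank step costs two inverse FFTs instead of one per grid point, while remaining close to the exact mixed-domain operator for a smooth source.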
A least square extrapolation method for improving solution accuracy of PDE computations
International Nuclear Information System (INIS)
Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis, such as quadrature or time integration of ODEs. In theory, RE can also be used on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on the different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of finding the order of a method automatically, as the solution of a least square minimization problem on the residual. We introduce two-level and three-level least square extrapolation methods that work on nonmatching embedded grid solutions via spline interpolation. Our least square extrapolation method is a post-processing of data produced by existing PDE codes that is easy to implement and can be a better tool than RE for code verification. It can also be used to make a cascade of computations more numerically efficient: we can establish a consistent linear combination of coarser grid solutions to produce a better approximation of the PDE solution at a much lower cost than direct computation on a finer grid. To illustrate the performance of the method, examples including a two-dimensional turning point problem with a sharp transition layer and the Navier-Stokes flow inside a lid-driven cavity are adopted.
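A sketch of the central idea, estimating the convergence order from the data rather than assuming it. Matching grids and a single-term error model are simplifying assumptions here; the paper's method handles nonmatching grids via spline interpolation and a least-squares minimization of the residual.

```python
import numpy as np

def observed_order_extrapolation(u_h, u_h2, u_h4):
    """Three-level extrapolation in which the (possibly space-dependent)
    convergence order p is estimated pointwise from the three solutions
    instead of being assumed a priori."""
    r = (u_h - u_h2) / (u_h2 - u_h4)
    p = np.log2(np.abs(r))
    return u_h4 + (u_h4 - u_h2) / (2.0 ** p - 1.0)

# Manufactured discrete solutions: u_h = u* + C(x) * h**2, with an
# error 'constant' C(x) that varies in space.
xs = np.linspace(0.0, 1.0, 11)
u_star = np.sin(np.pi * xs)
C = 0.3 * (1.0 + xs)
u_h  = u_star + C * 0.1 ** 2
u_h2 = u_star + C * 0.05 ** 2
u_h4 = u_star + C * 0.025 ** 2

u_ex = observed_order_extrapolation(u_h, u_h2, u_h4)
```

Because the manufactured error has a single h**2 term, the estimated order is exactly 2 and the extrapolation recovers u* to machine precision; with real PDE data the estimated order varies in space, which is what motivates the least-squares formulation.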
Nonlinear Force-Free Extrapolation of the Coronal Magnetic Field Based on the MHD Relaxation Method
Inoue, S.; Pandey, V. S.; Shiota, D.; Kusano, K.; Choe, G. S.; Kim, K. S.
2013-01-01
We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm initially proposed by Dedner et al. (2002) to effectively clean the numerical errors associated with ∇ · B. Second, the multi-grid type method is implemented in our NLFFF to perform direct analysis of the high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high resolution force-free field introduced by Low and Lou (1990) with better accuracy in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara (2012). We found that NLFFF extrapolation may be less effective for reproducing areas higher than a half-domain, where some magnetic loops are found in a state of continuous upward expansion. However, an inverse ...
A MULTI-STEP RICHARDSON-ROMBERG EXTRAPOLATION METHOD FOR STOCHASTIC APPROXIMATION
Frikha, Noufel; Huang, Lorick
2014-01-01
We obtain an expansion of the implicit weak discretization error for the target of stochastic approximation algorithms introduced and studied in [Frikha 2013]. This allows us to extend and develop the Richardson-Romberg extrapolation method for Monte Carlo linear estimators (introduced in [Talay & Tubaro 1990] and deeply studied in [Pagès 2007]) to the framework of stochastic optimization by means of stochastic approximation algorithms. We notably apply the method to the es...
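The underlying Monte Carlo Richardson-Romberg idea of Talay & Tubaro can be sketched for the Euler scheme on geometric Brownian motion, where E[X_T] is known exactly. This illustrates only the linear-bias cancellation 2*fine - coarse, not the stochastic-approximation extension of the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
x0, mu, sigma, T = 1.0, 0.05, 0.2, 1.0
n_paths, N = 200_000, 4            # coarse scheme: N Euler steps of size h

h = T / N
# Brownian half-increments shared by both levels (common random numbers).
dW = rng.normal(0.0, np.sqrt(h / 2), size=(n_paths, 2 * N))

def euler_terminal(increments, step):
    x = np.full(n_paths, x0)
    for j in range(increments.shape[1]):
        x = x * (1.0 + mu * step + sigma * increments[:, j])
    return x

x_fine = euler_terminal(dW, h / 2)                 # step h/2
x_coarse = euler_terminal(dW[:, 0::2] + dW[:, 1::2], h)   # step h

single = x_coarse.mean()
rr = (2.0 * x_fine - x_coarse).mean()   # Richardson-Romberg combination
exact = x0 * np.exp(mu * T)             # E[X_T] for geometric BM
```

The Euler bias here is linear in the step size (in expectation, (1 + mu*h)^N versus e^{mu*T}), so the 2*fine - coarse combination cancels the leading term while the shared Brownian path keeps the extra variance small.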
Senjean, Bruno; Alam, Md Mehboob; Knecht, Stefan; Fromager, Emmanuel
2015-01-01
The combination of a recently proposed linear interpolation method (LIM) [Senjean et al., Phys. Rev. A 92, 012518 (2015)], which enables the calculation of weight-independent excitation energies in range-separated ensemble density-functional approximations, with the extrapolation scheme of Savin [J. Chem. Phys. 140, 18A509 (2014)] is presented in this work. It is shown that LIM excitation energies vary quadratically with the inverse of the range-separation parameter μ when the latter is large. As a result, the extrapolation scheme, which is usually applied to long-range interacting energies, can be adapted straightforwardly to LIM. This extrapolated LIM (ELIM) has been tested on a small test set consisting of He, Be, H2 and HeH+. Relatively accurate results have been obtained for the first singlet excitation energies with the typical value μ = 0.4. The improvement of LIM after extrapolation is remarkable, in particular for the doubly-excited 2¹Σg⁺ state in the stretched H2 molecule. Three-state ensemble ...
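The extrapolation step itself is elementary: if a quantity behaves as E(μ) = E∞ + c/μ² for large μ, two calculations at different finite μ eliminate the leading error term. A sketch with hypothetical numbers:

```python
# Two-point extrapolation for a quantity with asymptotic behaviour
# E(mu) = E_inf + c / mu**2: eliminate the 1/mu**2 term exactly.
def extrapolate(mu1, e1, mu2, e2):
    return (mu1**2 * e1 - mu2**2 * e2) / (mu1**2 - mu2**2)

# Hypothetical data following the asymptotic model exactly.
E_inf, c = 0.25, -0.03
E = lambda mu: E_inf + c / mu**2
e_extr = extrapolate(0.4, E(0.4), 0.5, E(0.5))
```

In practice E(μ) only approaches this form asymptotically, so the extrapolated value improves on, rather than exactly reproduces, the μ -> ∞ limit.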
Evaluation of functioning of an extrapolation chamber using Monte Carlo method
International Nuclear Information System (INIS)
The extrapolation chamber is a parallel-plate chamber of variable volume based on the Bragg-Gray theory. It determines, in absolute mode and with high accuracy, the absorbed dose by extrapolating the ionization current measured to a null distance between the electrodes. This chamber is used for the dosimetry of external beta rays in radiation protection. This paper presents a simulation for evaluating the functioning of an extrapolation chamber type 23392 of PTW, using the MCNPX Monte Carlo code. In the simulation, the fluence in the air collector cavity of the chamber was obtained. The influence of the materials that compose the chamber on its response to a beta radiation beam was also analysed, and the contributions of primary and secondary radiation were compared. The energy deposition in the air collector cavity was calculated for different depths. The component with the highest energy deposition is the polymethyl methacrylate block. The energy deposition in the air collector cavity is greatest at a chamber depth of 2500 µm, with a value of 9.708E-07 MeV. The fluence in the air collector cavity decreases with depth; its value is 1.758E-04 1/cm2 at a chamber depth of 500 µm. The values reported are for individual electron and photon histories. Graphs of the simulated parameters are presented in the paper. (Author)
Evaluation of Electrochemical Methods for Electrolyte Characterization
Heidersbach, Robert H.
2001-01-01
This report documents summer research efforts in an attempt to develop an electrochemical method of characterizing electrolytes. The ultimate objective of the characterization would be to determine the composition and corrosivity of Martian soil. Results are presented using potentiodynamic scans, Tafel extrapolations, and resistivity tests in a variety of water-based electrolytes.
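Tafel extrapolation of the kind used in these tests is straightforward to sketch: fit the linear region of potential versus log10|i| and extrapolate the line back to the corrosion potential. The kinetic parameters below are invented for the example, not taken from the report:

```python
import numpy as np

# Assumed kinetic parameters for synthetic anodic polarization data.
i_corr, E_corr, b_a = 10e-6, -0.45, 0.060   # A/cm^2, V, V/decade

eta = np.linspace(0.06, 0.15, 10)           # anodic overpotentials (V), where
i = i_corr * 10 ** (eta / b_a)              # the cathodic branch is negligible

# Tafel extrapolation: fit E vs log10(i), extrapolate the line back to E_corr.
slope, intercept = np.polyfit(np.log10(i), E_corr + eta, 1)
i_corr_est = 10 ** ((E_corr - intercept) / slope)
```

Because the synthetic branch is exactly Tafelian, the fit recovers the assumed slope and corrosion current; on real data the linear region must be chosen well away from E_corr.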
The Impacts of Atmospheric Stability on the Accuracy of Wind Speed Extrapolation Methods
Directory of Open Access Journals (Sweden)
Jennifer F. Newman
2014-01-01
Full Text Available The building of utility-scale wind farms requires knowledge of the wind speed climatology at hub height (typically 80–100 m). As most wind speed measurements are taken at 10 m above ground level, efforts are being made to relate 10-m measurements to approximate hub-height wind speeds. One common extrapolation method is the power law, which uses a shear parameter to estimate the wind shear between a reference height and hub height. The shear parameter is dependent on atmospheric stability and should ideally be determined independently for different atmospheric stability regimes. In this paper, data from the Oklahoma Mesonet are used to classify atmospheric stability and to develop stability-dependent power law fits for a nearby tall tower. Shear exponents developed from one month of data are applied to data from different seasons to determine the robustness of the power law method. In addition, similarity theory-based methods are investigated as possible alternatives to the power law. Results indicate that the power law method performs better than similarity theory methods, particularly under stable conditions, and can easily be applied to wind speed data from different seasons. In addition, the importance of using co-located near-surface and hub-height wind speed measurements to develop extrapolation fits is highlighted.
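The power-law step described above reduces to a few lines: estimate the shear exponent from two measurement heights, then extrapolate to hub height. The wind speeds below are invented for illustration; in the paper the exponent is fitted per stability class from Mesonet and tall-tower data.

```python
import math

u10, u40 = 5.0, 6.5      # assumed wind speeds (m/s) at 10 m and 40 m

# Shear exponent alpha from the two levels: u(z) = u_ref * (z / z_ref)**alpha
alpha = math.log(u40 / u10) / math.log(40.0 / 10.0)

# Extrapolate to an 80 m hub height with the fitted exponent.
u80 = u10 * (80.0 / 10.0) ** alpha
```

A stability-dependent scheme would fit a separate `alpha` for each stability regime before extrapolating.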
Sun, Shuyu
2013-06-01
This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, allowing rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted, and even reconstructed, to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-gas-phase region. Extrapolation behaviors as functions of extrapolation range were studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
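The reweighting idea can be illustrated with a toy sample: configurations drawn at one inverse temperature are reassigned Boltzmann weights for another. The Gaussian energy distribution below is a stand-in for actual Lennard-Jones simulation output, not part of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
beta0, beta1 = 1.0, 1.1                 # original and target inverse temperatures
E = rng.normal(10.0, 1.0, 50_000)       # stand-in for sampled potential energies

# Reweight: <A>_beta1 = <A w> / <w> with w = exp(-(beta1 - beta0) * E)
w = np.exp(-(beta1 - beta0) * E)
E_new = np.sum(E * w) / np.sum(w)       # mean energy extrapolated to beta1
```

For Gaussian energies the exact reweighted mean is mu - (beta1 - beta0) * sigma^2 = 9.9, so the estimate should land close to that; the usable extrapolation range shrinks as the weight distribution broadens, which is the limit the paper studies.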
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including a simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating steady and unsteady convection-diffusion systems with a flat interface and a steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that a cylinder with a larger heat capacity can release more heat energy into the fluid and its temperature cools down more slowly, while enhanced heat conduction inside the cylinder can facilitate the cooling of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems. PMID:25871245
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. PMID:23179190
First principles Tafel kinetics of methanol oxidation on Pt(111)
Fang, Ya-Hui; Liu, Zhi-Pan
2015-01-01
Electrocatalytic methanol oxidation is of fundamental importance in electrochemistry and is also a key reaction in the direct methanol fuel cell. To resolve the kinetics at the atomic level, this work investigates the potential-dependent reaction kinetics of methanol oxidation on Pt(111) using the first-principles periodic continuum solvation model based on the modified Poisson-Boltzmann equation (CM-MPB), focusing on the initial dehydrogenation elementary steps. A theoretical model to predict Tafel kinetics (current vs potential) is established by considering that the rate-determining step of methanol oxidation (to CO) is the first C-H bond breaking (CH3OH(aq) → CH2OH* + H*) according to the computed free energy profile. The first C-H bond breaking reaction needs to overcome a large entropy loss as methanol approaches the surface and replaces the adsorbed water molecules. While no apparent charge transfer is involved in this elementary step, the charge transfer coefficient of the reaction is calculated to be 0.36, an unconventional value for charge transfer reactions, and the Tafel slope is deduced to be 166 mV. The results show that the metal/adsorbate interaction and the solvation environment play important roles in determining the Tafel kinetics. The knowledge learned from the potential-dependent kinetics of methanol oxidation can be applied in general to understanding the electrocatalytic reactions of organic molecules at the solid-liquid interface.
Florez, W. F.; Portapila, M.; Hill, A. F.; Power, H.; Orsini, P.; Bustamante, C. A.
2015-03-01
The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases.
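Richardson extrapolation, the core of the proposed multi-step coupling strategy, combines results at two step sizes to cancel the leading error term. A minimal sketch on forward Euler for dy/dt = -y (a stand-in problem, not the geochemical transport system of the paper):

```python
import math

def euler(h, t_end=1.0):
    """Forward Euler for dy/dt = -y, y(0) = 1, integrated to t_end."""
    y = 1.0
    for _ in range(round(t_end / h)):
        y += h * (-y)
    return y

y_h, y_h2 = euler(0.1), euler(0.05)
y_rich = 2.0 * y_h2 - y_h          # cancels the O(h) error of the base scheme

err_base = abs(y_h2 - math.exp(-1.0))   # error of the fine-step solution alone
err_rich = abs(y_rich - math.exp(-1.0)) # error after Richardson extrapolation
```

The same h and h/2 combination applied to a sequential operator-splitting step raises its effective coupling order, which is the alternative to SNIA proposed above.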
Ketcheson, David I.
2014-04-11
In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
First-principles extrapolation method for accurate CO adsorption energies on metal surfaces
Mason, Sara E.; Grinberg, Ilya; Rappe, Andrew M.
2003-01-01
We show that a simple first-principles correction based on the difference between the singlet-triplet CO excitation energy values obtained by DFT and high-level quantum chemistry methods yields accurate CO adsorption properties on a variety of metal surfaces. We demonstrate a linear relationship between the CO adsorption energy and the CO singlet-triplet splitting, similar to the linear dependence of CO adsorption energy on the energy of the CO 2π* orbital found recently [Kresse et al., Physical Review B 68, 073401 (2003)]. Converged DFT calculations underestimate the CO singlet-triplet excitation energy ΔE_S-T, whereas coupled-cluster and CI calculations reproduce the experimental ΔE_S-T. The dependence of E_chem on ΔE_S-T is used to extrapolate E_chem for the top, bridge and hollow sites on the (100) and (111) surfaces of Pt, Rh, Pd and Cu to the values that correspond to the coupled-cluster and CI ΔE_S-T value. The c...
Comparison of precipitation nowcasting by extrapolation and statistical-advection methods.
Czech Academy of Sciences Publication Activity Database
Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Mejsnar, Jan
2013-01-01
Roč. 123, 1 April (2013), s. 17-30. ISSN 0169-8095 R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords: Precipitation forecast * Statistical models * Regression * Quantitative precipitation forecast * Extrapolation forecast Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.421, year: 2013 http://www.sciencedirect.com/science/article/pii/S0169809512003390
Comparison of extrapolation methods for creep rupture stresses of 12Cr and 18Cr10NiTi steels
International Nuclear Information System (INIS)
As part of a Soviet-Swedish research programme, the creep rupture properties of two heat-resisting steels, a 12% Cr steel and an 18% Cr 12% Ni titanium-stabilized steel, have been studied. One heat of each steel from each country was creep tested. The strength of the 12% Cr steels was similar to earlier reported strength values, the Soviet steel being somewhat stronger due to a higher tungsten content. The strength of the Swedish 18/12 Ti steel agreed with earlier results, while the properties of the Soviet steel were inferior to those reported from earlier Soviet creep testing. Three extrapolation methods were compared on creep rupture data collected in both countries. Isothermal extrapolation and an algebraic method of Soviet origin gave rather similar results in many cases, while the parameter method recommended by ISO resulted in higher rupture strength values at longer times. (author)
International Nuclear Information System (INIS)
A program to investigate the possibility of track extrapolation and interpolation for drift chambers using Principal Components Analysis and polynomials was written for SAPHIR. Results for the most significant configurations at SAPHIR are presented. It was shown that Principal Components Analysis is a good basis for a fast track reconstruction program for a drift chamber using a global track model in an inhomogeneous magnetic field. A data input/output package was written as well. (orig.)
Czech Academy of Sciences Publication Activity Database
Mejsnar, Jan; Sokol, Zbyněk; Pešice, Petr
Toulouse : Météo France, 2012. [ERAD 2012 - European Conference on Radar in Meteorology and Hydrology /7./. Toulouse (FR), 24.06.2012-29.06.2012] R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords: precipitation nowcasting * Lagrangian extrapolation * uncertainty in precipitation Subject RIV: DG - Atmosphere Sciences, Meteorology http://www.meteo.fr/cic/meetings/2012/ERAD/extended_abs/NOW_250_ext_abs.pdf
Ketcheson, David I.
2014-06-13
We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
International Nuclear Information System (INIS)
Within Activity 3 ''Materials'' of WGCS, the member states UK and FRG have carried out work on extrapolation methods for creep data. This work was done by comparing the extrapolation methods in use in their countries, applying them to creep rupture strength data on AISI 316 SS obtained in the UK and FRG. The work was issued in April 1978 and distributed by the Community to all Activity 3 members. Italy, represented by NIRA S.p.A., has received a contract from the European Community to extend the work to Italian and French data, using the extrapolation methods currently in use in Italy. The work should deal with the following points: - collection of Italian experimental data; - chemical analysis of Italian specimens; - comparison of Italian experimental data with French, FRG and UK data; - description of the extrapolation methods in use in Italy; - application of these extrapolation methods to Italian, French, British and German data; - preparation of a final report
Waheed, Umair bin
2014-08-01
The wavefield extrapolation operator for ellipsoidally anisotropic (EA) media offers significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate focusing for media of orthorhombic anisotropy. Therefore, we develop effective EA models that correctly capture the kinematic behavior of the wavefield for tilted orthorhombic (TOR) media. Specifically, we compute effective source-dependent velocities for the EA model using a kinematic high-frequency representation of the TOR wavefield. The effective model allows us to use the cheaper EA wavefield extrapolation operator to obtain approximate wavefield solutions for a TOR model. Despite the fact that the effective EA models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TOR media, particularly for media of low to moderate complexity. We demonstrate the applicability of the proposed approach on a layered TOR model.
Evaluation of external quality factor of the superconducting cavity using extrapolation method
International Nuclear Information System (INIS)
The estimation of the external quality factor is important for designing coupling devices for cavities. A new representation of the external quality factor calculation for a single-cell cavity coupled to a coaxial transmission line is derived based on analytic and numerical analysis with the help of a 3D electromagnetic code, and verified with experimental measurements at room temperature. In logarithmic scale the results for the external quality factor are quasi-linear over a limited range, so the simulated and measured data can be extrapolated to the superconducting case. For the unpolished 1.5 GHz 3rd harmonic superconducting cavity, the discrepancy between the evaluated value and the measurement result is less than 25%, within an acceptable deviation. (authors)
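The quasi-linear behaviour in logarithmic scale means the extrapolation reduces to a straight-line fit of log10(Q_ext) against coupler position. A sketch with invented room-temperature points (positions and Q values are assumptions for illustration, not measurements from the paper):

```python
import numpy as np

position = np.array([0.0, 2.0, 4.0, 6.0])     # assumed antenna positions, mm
log_qext = np.array([6.1, 6.9, 7.7, 8.5])     # assumed measured log10(Q_ext)

# Straight-line fit in log scale, then extrapolate to a deeper position
# where the superconducting-cavity operating point would sit.
slope, intercept = np.polyfit(position, log_qext, 1)
q_ext_10mm = 10 ** (slope * 10.0 + intercept)
```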
Shinagawa, Tatsuya
2015-09-08
Microkinetic analyses of aqueous electrochemistry involving gaseous H2 or O2, i.e., the hydrogen evolution reaction (HER), hydrogen oxidation reaction (HOR), oxygen reduction reaction (ORR) and oxygen evolution reaction (OER), are revisited. The Tafel slopes used to evaluate the rate-determining steps generally assume extreme coverage of the adsorbed species (θ → 0 or θ → 1), although, in practice, the slopes are coverage-dependent. We conducted detailed kinetic analyses describing the coverage-dependent Tafel slopes for the aforementioned reactions. Our careful analyses provide a general benchmark for experimentally observed Tafel slopes that can be assigned to specific rate-determining steps. The Tafel analysis is a powerful tool for discussing the rate-determining steps involved in electrocatalysis, but our study also demonstrated that overly simplified assumptions lead to an inaccurate description of surface electrocatalysis. Additionally, in many studies, Tafel analyses have been performed in conjunction with the Butler-Volmer equation, whose restriction to electron transfer kinetics alone is often overlooked. Based on the derived kinetic description of the HER/HOR as an example, the limitation of the Butler-Volmer expression in electrocatalysis is also discussed in this report.
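The textbook limiting case that the paper warns against over-applying can be sketched directly: for a single electron-transfer step, the Butler-Volmer expression gives a high-overpotential Tafel slope of 2.303RT/(αF). The transfer coefficient and exchange current below are assumed example values:

```python
import math

F, R, T = 96485.0, 8.314, 298.15   # C/mol, J/(mol K), K
alpha, i0 = 0.5, 1e-6              # assumed transfer coefficient, exchange current

def bv_current(eta):
    """Butler-Volmer current density (A/cm^2) at overpotential eta (V)."""
    f = F / (R * T)
    return i0 * (math.exp(alpha * f * eta) - math.exp(-(1.0 - alpha) * f * eta))

# Limiting Tafel slope, in mV per decade of current.
tafel_slope_mV = 2.303 * R * T / (alpha * F) * 1000.0
```

With α = 0.5 this yields the familiar ~120 mV/dec slope; coverage-dependent microkinetics, the subject of the paper, makes the observed slope deviate from such fixed values.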
Creep behavior of bone cement: a method for time extrapolation using time-temperature equivalence.
Morgan, R L; Farrar, D F; Rose, J; Forster, H; Morgan, I
2003-04-01
The clinical lifetime of poly(methyl methacrylate) (PMMA) bone cement is considerably longer than the time over which it is convenient to perform creep testing. Consequently, it is desirable to be able to predict the long term creep behavior of bone cement from the results of short term testing. A simple method is described for prediction of long term creep using the principle of time-temperature equivalence in polymers. The use of the method is illustrated using a commercial acrylic bone cement. A creep strain of approximately 0.6% is predicted after 400 days under a constant flexural stress of 2 MPa. The temperature range and stress levels over which it is appropriate to perform testing are described. Finally, the effects of physical aging on the accuracy of the method are discussed and creep data from aged cement are reported. PMID:15348456
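For thermally activated creep, the time-temperature shift amounts to a single exponential factor. A sketch assuming simple Arrhenius behaviour with an invented activation energy (the paper's cement-specific shift factors are not reproduced here):

```python
import math

R = 8.314                        # J/(mol K)
Ea = 120e3                       # J/mol, assumed activation energy
T_test, T_use = 333.15, 310.15   # accelerated test at 60 C, service at 37 C

# Shift factor: one hour of creep at T_test is equivalent to a_T hours at T_use,
# so short elevated-temperature tests stand in for long service times.
a_T = math.exp(Ea / R * (1.0 / T_use - 1.0 / T_test))
```

Here one hour of testing covers roughly a day of service time; near the glass transition a WLF-type shift would be more appropriate than Arrhenius.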
International Nuclear Information System (INIS)
Model error is one of the key factors restricting the accuracy of numerical weather prediction (NWP). Considering the continuous evolution of the atmosphere, the observed data (ignoring the measurement error) can be viewed as a series of solutions of an accurate model governing the actual atmosphere. Model error is represented as an unknown term in the accurate model, so NWP can be considered as an inverse problem to uncover the unknown error term. The inverse problem models can absorb long periods of observed data to generate model error correction procedures, thus resolving the deficiency of NWP schemes that employ only initial-time data. In this study we construct two inverse problem models to estimate and extrapolate the time-varying and space-varying model errors in both the historical and forecast periods by using recent observations and analogue phenomena of the atmosphere. Numerical experiments on Burgers' equation illustrate the substantial forecast improvement obtained with the inverse problem algorithms. The proposed inverse problem methods of suppressing NWP errors will be useful in future high accuracy applications of NWP. (geophysics, astronomy, and astrophysics)
DEFF Research Database (Denmark)
Kofoed, Peter; Nielsen, Peter V.
1990-01-01
The design of a displacement ventilation system involves determination of the flow rate in the thermal plumes. The flow rate in the plumes and the vertical temperature gradient influence each other, and they are influenced by many factors. This paper describes some of these effects. Free turbulent plumes from different heated bodies are investigated. The measurements have taken place in a full-scale test room where the vertical temperature gradient was varied. The velocity and the temperature distribution in the plume are measured. Large-scale plume axis wandering is taken into account, and the temperature excess and the velocity distribution are calculated by use of an extrapolation method. In the case of a concentrated heat source (dia 50 mm, 343 W) and nearly uniform surroundings, the model of a plume above a point heat source is verified. It represents a borderline case with the smallest entrainment factor and the smallest angle of spread. Due to the measuring method and data processing, the velocity and temperature excess profiles are observed to be narrower than those reported by previous authors. In the case of an extensive heat source (dia 400 mm, 100 W) the model of a plume above a point heat source cannot be used. This is caused either by the way the plume is generated, including a long intermediate region, or by the environmental conditions, where vertical temperature gradients are present. The flow has a larger angle of spread and the entrainment factor is greater than for a point heat source. Exact knowledge of the vertical temperature gradient is essential to predict the flow propagation due to its influence on the entrainment, e.g. in an integral method of plume calculation. Since the flow from different heated bodies is individual, full-scale measurements seem to be the only possible approach to obtain the volume flow in thermal plumes in ventilated rooms.
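The point-source plume model referred to above has a standard engineering form: volume flow grows as the 5/3 power of height and the 1/3 power of convective heat output. A sketch using the commonly quoted design constant (the constant and the height are assumptions for illustration, not values from this paper):

```python
# Point-source plume in uniform surroundings: Q_v = C * P_c**(1/3) * z**(5/3).
C = 0.005        # m^(4/3) W^(-1/3) s^(-1), commonly used design constant (assumed)
P_c = 343.0      # convective heat output, W (the concentrated source above)
z = 1.5          # height above the source, m (assumed)

Q_v = C * P_c ** (1.0 / 3.0) * z ** (5.0 / 3.0)   # plume volume flow, m^3/s
```

For the extensive source the paper finds this point-source scaling breaks down, which is precisely why full-scale measurements are recommended.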
Li-O2 Kinetic Overpotentials: Tafel Plots from Experiment and First-Principles Theory.
Viswanathan, V; Nørskov, J K; Speidel, A; Scheffler, R; Gowda, S; Luntz, A C
2013-02-21
We report the current dependence of the fundamental kinetic overpotentials for Li-O2 discharge and charge (Tafel plots) that define the optimal cycle efficiency in a Li-air battery. Comparison of the unusual experimental Tafel plots obtained in a bulk electrolysis cell with those obtained by first-principles theory is semiquantitative. The kinetic overpotentials for any practical current density are very small, considerably less than the polarization losses due to iR drops from the cell impedance in Li-O2 batteries. If only the kinetic overpotentials were present, then a discharge-charge voltaic cycle efficiency of ~85% should be possible at ~10 mA/cm2 superficial current density in a battery of ~0.1 m2 total cathode area. We therefore suggest that minimizing the cell impedance is a more important problem than minimizing the kinetic overpotentials in developing higher current Li-air batteries. PMID:26281865
Buller, N P; Poole-Wilson, P A
1988-01-01
Respiratory gas exchange was measured during maximal treadmill exercise testing in six healthy volunteers and 20 patients with chronic heart failure. A curve of equation y = ax − bx² was used to model the relation between the rate of oxygen consumption (y axis) and the rate of carbon dioxide production (x axis). The constants "a" and "b" were used to calculate the maximal value of the expression ax − bx². This value was termed the "extrapolated maximal oxygen consumption". For all subjects a clos...
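The "extrapolated maximal oxygen consumption" is just the vertex of the fitted parabola: y = ax − bx² peaks at x = a/(2b) with value a²/(4b). A sketch with illustrative constants (not fitted values from the study):

```python
a, b = 1.2, 0.1                        # illustrative fitted constants

x_peak = a / (2.0 * b)                 # CO2 production rate at the maximum
y_max = a ** 2 / (4.0 * b)             # extrapolated maximal O2 consumption
check = a * x_peak - b * x_peak ** 2   # curve evaluated at the vertex
```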
DEFF Research Database (Denmark)
Robbe, Joost Roger
2015-01-01
This article investigates the relationship between Dirc van Delft’s Tafel vanden kersten ghelove (1404) and earlier traditional medieval literature on death and dying. Two chapters in the Tafel contain a treatise on death: Somerstuc XXXVIII and XLVIII. Their sources include not only Anselm of Canterbury’s Admonitio morientis (c. 1100) and Henry Suso’s Horologium aeternae sapientiae (1331-34), but also the popular 14th-century Verses of Saint Bernard. The article demonstrates that the treatises are much more than simple compilations. Within the context of the sacrament of the sick in the first treatise, Dirc van Delft presents a veritable ars moriendi which provides practical guidance for the dying and those attending them. The treatise culminates in a vivid drama in which the soul, being subject to three temptations of the devil, can benefit from the protection of its guardian angel as well as the Verses of Saint Bernard. In the second treatise, inspired by Henry Suso, Dirc van Delft addresses the danger of a sudden and unprepared death, concluding with an original ars vivendi for a life of moral perfection.
Infrared extrapolations for atomic nuclei
Furnstahl, R J; Papenbrock, T; Wendt, K A
2014-01-01
Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon-nucleon interactions at next-to-next-to-leading order and show that the infrared component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies, well above the energy minimum, the ultraviolet corrections can be suppressed while infrared extrapolations over tens of MeVs are accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed.
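When the infrared correction is a single decaying exponential in the effective box size L, three equally spaced points determine the asymptote exactly via a Shanks transformation. A sketch with invented energies of the assumed form E(L) = E_inf + a·exp(−2kL), not coupled-cluster output:

```python
import math

E_inf, a, k = -130.0, 5.0, 0.5          # assumed asymptote and correction terms
L = [8.0, 9.0, 10.0]                    # equally spaced effective box sizes
e1, e2, e3 = (E_inf + a * math.exp(-2.0 * k * x) for x in L)

# Shanks transformation: exact when the correction is a single geometric term.
E_extrap = (e1 * e3 - e2 * e2) / (e1 + e3 - 2.0 * e2)
```

In practice more points are used and the exponent k is tied to the nucleus-dependent infrared scaling variable, with the fit quality feeding the uncertainty estimate.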
Bhattacharyya, Surjendu; Wategaonkar, Sanjay
2014-10-01
In this work we have shown that the Birge-Sponer extrapolation method can be successfully used to determine the dissociation energies (D0) of noncovalently bound complexes. The O-H···S hydrogen-bonding interaction in the cationic state of the p-fluorophenol···H2S complex was characterized using zero kinetic energy (ZEKE) photoelectron spectroscopy. This is the first ZEKE report on the O-H···S hydrogen-bonding interaction. The adiabatic ionization energy (AIE) of the complex was determined as 65,542 cm(-1). Various intermolecular and intramolecular vibrational modes of the cation were assigned. A long progression was observed in the intermolecular stretching mode (σ) of the complex with significant anharmonicity along this mode. The anharmonicity information was used to estimate the dissociation energy (D0) in the cationic state using the Birge-Sponer extrapolation method. The D0 was estimated as 9.72 ± 1.05 kcal mol(-1). The ZEKE photoelectron spectra of the analogous complex FLP···H2O were also recorded for the sake of comparison. The AIE was determined as 64,082 cm(-1). The intermolecular stretching mode in this system, however, was found to be quite harmonic, unlike that in the H2S complex. The dissociation energies of both complexes, along with those of a few benchmark systems, such as the phenol···H2O and indole···benzene complexes, were computed at various levels of theory, such as MP2 at the complete basis set limit, ωB97X-D, and CCSD(T). It was found that only the ωB97X-D values were in excellent agreement with the experimental results for the benchmark systems for the ground as well as the cationic states. The dissociation energy of the (FLP···H2S)(+) complex determined by the Birge-Sponer extrapolation was about 18% lower than that computed at the ωB97X-D level. PMID:25250474
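A Birge-Sponer estimate can be sketched from a Morse-like level structure: the spacings ΔG(v+1/2) = ωe − 2ωexe(v+1) fall linearly with v, and D0 is the area under the line, i.e., the sum of the spacings until they vanish. The constants below are invented for illustration, not the FLP···H2S values:

```python
we, wexe = 200.0, 4.0        # assumed harmonic frequency and anharmonicity, cm^-1

spacings = []
v = 0
while True:
    dg = we - 2.0 * wexe * (v + 1)   # Delta G(v + 1/2) for a Morse oscillator
    if dg <= 0.0:
        break
    spacings.append(dg)
    v += 1

D0 = sum(spacings)           # Birge-Sponer dissociation energy estimate, cm^-1
```

For a true Morse oscillator with these constants the exact D0 is 2401 cm^-1 (De = we²/(4·wexe) minus the zero-point energy), so the linear extrapolation is accurate to within one wavenumber here; real progressions deviate from linearity near dissociation.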
Builtin vs. auxiliary detection of extrapolation risk.
Energy Technology Data Exchange (ETDEWEB)
Munson, Miles Arthur; Kegelmeyer, W. Philip
2013-02-01
A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.
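A minimal version of the auxiliary-model idea: fit a simple density summary of the training inputs and flag test points that fall far outside it. The Gaussian z-score detector below is a deliberately crude stand-in for the auxiliary models evaluated in the report:

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.normal(0.0, 1.0, (1000, 2))     # stand-in training features
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)

def extrapolation_risk(x, threshold=4.0):
    """Flag inputs whose z-score exceeds the threshold in any feature."""
    z = np.abs((x - mu) / sigma)
    return bool(np.any(z > threshold))

in_dist = extrapolation_risk(np.array([0.1, -0.3]))   # near the training data
far_out = extrapolation_risk(np.array([8.0, 0.0]))    # far outside it
```

Combining such an auxiliary detector with the classifier's own built-in confidence signal is the safeguard the report recommends.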
Optimal analytical extrapolations revisited
International Nuclear Information System (INIS)
The problem of optimal analytic extrapolation of holomorphic functions from a finite set of interior data points to another interior point is completely solved in the general case of data known with unequal errors. Simple and easy-to-handle algorithms are obtained. (author)
A single-phase model for liquid-feed DMFCs with non-Tafel kinetics
Energy Technology Data Exchange (ETDEWEB)
Vera, Marcos [Area de Mecanica de Fluidos, Universidad Carlos III de Madrid, Avda. de la Universidad 30, 28911 Leganes (Spain)
2007-09-27
An isothermal single-phase 3D/1D model for liquid-feed direct methanol fuel cells (DMFC) is presented. Three-dimensional (3D) mass, momentum and species transport in the anode channels and gas diffusion layer is modeled using a commercial, finite-volume based, computational fluid dynamics (CFD) software complemented with user-supplied subroutines. The 3D model is locally coupled to a one-dimensional (1D) model accounting for the electrochemical reactions in both the anode and the cathode, which provides a physically sound boundary condition for the velocity and methanol concentration fields at the anode gas diffusion layer/catalyst interface. The 1D model - comprising the membrane-electrode assembly, cathode gas diffusion layer, and cathode channel - assumes non-Tafel kinetics to describe the complex kinetics of the multi-step methanol oxidation reaction at the anode, and accounts for the mixed potential associated with methanol crossover, induced both by diffusion and electro-osmotic drag. Polarization curves computed for various methanol feed concentrations, temperatures, and methanol feed velocities show good agreement with recent experimental results. The spatial distribution of methanol in the anode channels, together with the distributions of current density, methanol crossover and fuel utilization at the anode catalyst layer, are also presented for different operating conditions. (author)
International Nuclear Information System (INIS)
The report summarizes the calculation basis given by Walter Gloyer in his various papers and adds certain improvements acquired through long experience in thermal calculation engineering. The following points, necessary for the calculations, are examined in detail: verification of the thermal balances; calculation of the average temperature difference between the vapour and liquid, taking into account the efficiency of the exchanger; pressure loss of the two-phase stream; calculation of the various thermal resistances; calculation of the exchange surface. The basis of calculation being thus defined, a numerical application of the cooler calculation for hydrocarbon vapour + liquid mixtures with partial condensation is treated and enables the general use of this method to be considered for transfer problems in two-phase streams
One-step lowrank wave extrapolation
Sindi, G.
2014-01-01
Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity gradient term to add to the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can utilize larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.
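The Shanks transform invoked here to sharpen the extrapolation operator is a generic sequence-acceleration device. A minimal sketch on a slowly converging scalar series (not the wave-extrapolation setting itself) shows the idea:

```python
# Shanks transform sketch: accelerate a slowly converging sequence.
# S(A_n) = (A_{n+1} A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2 A_n)

def shanks(seq):
    """Apply one Shanks iteration to a list of partial results."""
    out = []
    for i in range(1, len(seq) - 1):
        denom = seq[i + 1] + seq[i - 1] - 2.0 * seq[i]
        out.append((seq[i + 1] * seq[i - 1] - seq[i] ** 2) / denom)
    return out

# Partial sums of ln(2) = 1 - 1/2 + 1/3 - 1/4 + ...
partial = []
s = 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)

accelerated = shanks(partial)
```

One pass of the transform reduces the error of the last partial sum by roughly two orders of magnitude, which is the same spirit in which Shanks-type expansions tighten the phase approximation in spectral extrapolation.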
Allodji, Rodrigue S; Schwartz, Boris; Diallo, Ibrahima; Agbovon, Césaire; Laurier, Dominique; de Vathaire, Florent
2015-08-01
Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that with the SIMEX method the ERR/Gy is increased by about 29 % for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR 10(-4) person-years at 1 Gy (the linear term) is decreased by about 8 %, while the corrected quadratic term (EAR 10(-4) person-years/Gy(2)) is increased by about 65 % for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences are probably due to the fact that with the RCAL method the dosimetric data were only partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to apply several correction techniques in order to obtain a range of corrected estimates rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and support the development of more reliable radiation protection standards. PMID:25894839
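SIMEX can be sketched in a few lines: deliberately add extra measurement error at several levels λ, refit the model at each level, and extrapolate the fitted coefficient back to λ = −1. The following is a hedged toy version for a regression slope attenuated by additive classical error, with illustrative sample size, noise levels, and a simple quadratic extrapolation; it is not the dosimetry analysis itself.

```python
# Toy SIMEX for a slope attenuated by classical measurement error.
# Sample size, noise levels, and lambda grid are illustrative assumptions.
import random

random.seed(0)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
           sum((a - mx) ** 2 for a in xs)

# True model y = beta * x, but x is observed as w = x + u (classical error).
n, beta, sigma_u = 2000, 2.0, 1.0
x = [random.gauss(0.0, 1.0) for _ in range(n)]
w = [xi + random.gauss(0.0, sigma_u) for xi in x]
y = [beta * xi for xi in x]

naive = ols_slope(w, y)   # attenuated towards zero (theory: beta/2 here)

# SIMEX step: refit with extra noise of variance lam * sigma_u**2,
# averaging over B re-simulations at each level lam.
B = 25
slopes = []
for lam in (0.0, 1.0, 2.0):
    acc = 0.0
    for _ in range(B):
        wb = [wi + random.gauss(0.0, (lam ** 0.5) * sigma_u) for wi in w]
        acc += ols_slope(wb, y)
    slopes.append(acc / B)

# Quadratic through (0, s0), (1, s1), (2, s2), evaluated at lam = -1.
s0, s1, s2 = slopes
simex = 3.0 * s0 - 3.0 * s1 + s2
```

The extrapolated slope recovers part of the attenuation bias (here moving from roughly beta/2 back towards beta), which is why the choice of extrapolant and noise grid matters in practice.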
Directory of Open Access Journals (Sweden)
Gérard Meunier
2013-03-01
Full Text Available Regarding standards, it is well established that common mode currents are the main source of far field emitted by variable frequency drive (VFD)-cable-motor associations. These currents are generated by the combination of floating potentials with stray capacitances between the floating-potential tracks and the mechanical parts connected to the earth (the heatsink or cables are usual examples). Nowadays, due to frequency and power increases, systematic compliance with EMC (ElectroMagnetic Compatibility) requirements becomes increasingly difficult and costly for industry. As a consequence, there is a well-identified need to investigate practical and low-cost solutions to reduce the radiated fields of VFD-cable-motor associations. A well-adapted solution is the shielding of wound components, well known as the major source of near magnetic field. However, this solution is not convenient: it is expensive and may not be efficient regarding far field reduction. Optimizing the component placement could be a better and cheaper solution. As a consequence, dedicated tools have to be developed to efficiently investigate phenomena that are not easily understood and, finally, to control EMC disturbances through component placement, layout geometry, and shielding design if needed. However, none of the modeling methods usually used in industry handles a large frequency range together with far field models including magnetic materials, multilayer PCBs, and shielding. The contribution of this paper is to show that alternative modeling solutions exist and can be used to obtain an in-depth analysis of such complex structures. It is shown in this paper that near field investigations can give information on far field behavior. This is illustrated by an investigation of near field interactions and shielding influence using a FE-PEEC hybrid method. The test case, combining a common mode filter with the floating-potential tracks of an inverter, is based on an industrial and commercialized VFD.
The near field interactions between the common mode inductance and the tracks with floating potentials are revealed. Then, the influence of the common mode inductance shielding is analyzed.
Load Extrapolation During Operation for Wind Turbines
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2008-01-01
In recent years, load extrapolation for wind turbines has been widely considered in the wind turbine industry. Loads on wind turbines during operation are normally dependent on the mean wind speed, the turbulence intensity, and the type and settings of the control system. All these parameters must be taken into account when characteristic load effects during operation are determined. In the wind turbine standard IEC 61400-1 a method for load extrapolation using the peak-over-threshold method...
Cosmological extrapolation of MOND
Kiselev, V V
2011-01-01
The MOND regime, used in astronomy to describe island-type gravitating systems without the need to postulate a hypothetical dark matter, is generalized to the case of a homogeneous distribution of ordinary matter by introducing a linear dependence of the critical acceleration on the size of the region under consideration. We show that this extrapolation of MOND to cosmology is consistent with both the observed dependence of brightness on redshift for type Ia supernovae and the parameters of the large-scale structure of the Universe, whose evolution is determined by the presence of a cosmological constant, the ordinary matter of baryons and electrons, and the photon and neutrino radiation, without any dark matter.
Li, Gang; Xu, Jiayun; Zhang, Jie
2014-10-22
Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose of neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. the Am-Be source), the commonly used commercial neutron dosimeter cannot always reflect the low-level dose rate, being restricted by its own sensitivity limit and measuring range. In this paper, the intensity distribution of the neutron field caused by a curie-level Am-Be neutron source was investigated by measuring the count rates obtained through a (3)He proportional counter at different locations around the source. The results indicate that the count rates outside of the source room are negligible compared with the count rates measured in the source room. In the source room, a (3)He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from the shield at different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by the extrapolation method. This principle can be used to estimate low-level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter. PMID:25464188
International Nuclear Information System (INIS)
Inhomogeneous corrosion in reinforced concrete is investigated using a beam with a flexural crack intersecting the reinforcement. An Evans diagram representation of the macrocell corrosion system is developed. The relationship between the current density and the potentials relative to the crack obtained from the Tafel polarization responses of active and passive steel in concrete compares favorably with the experimental values. When both microcell and macrocell mechanisms contribute to metal loss at the crack, the Evans diagram representation indicates that an increase in the macrocell current density results in a decreasing contribution from the local microcell at the macrocell anode.
Extrapolation Distances for Pulsed Neutron Experiments
International Nuclear Information System (INIS)
Attention has been drawn in earlier work to the effect of uncertainty in extrapolation distance on the results of pulsed neutron experiments and hence to the need for more accurate knowledge of this parameter. The extrapolated endpoints can be obtained from flux plots and the value for large systems can be deduced from diffusion coefficients. Information from both approaches is given and the dependence of extrapolated endpoint on temperature and on buckling is discussed. Decay times and time-dependent flux plots have been measured in pulsed source experiments on small, accurately-known, volumes of water and Dowtherm A (thermex) by the use of a small scintillation detector and a time analyser; a separate scintillation detector or a BF3 counter has been used as a monitor. Spatial harmonic analysis of the flux plots was performed by the method of least squares to obtain the extrapolated endpoints once appropriate corrections have been made to the recorded counts. Some consideration was given to the possibility of testing for the effect of flux distortion near the boundary by successive removal of the outer points and to the effects on extrapolated endpoint of the flux perturbation produced by the detector. The results presented are mainly for measurements at 20°C in 4-in and 7-in cubic containers lined with cadmium, but very preliminary information was obtained for water at temperatures up to 80°C and equipment is being designed to extend the range of temperatures still further. (author)
International Nuclear Information System (INIS)
Neutron radiation protection is an important research area because of the strong radiobiological effect of neutron fields. The radiation dose of neutrons is closely related to the neutron energy, and the relationship is a complex function of energy. For a low-level neutron radiation field (e.g. the Am–Be source), the commonly used commercial neutron dosimeter cannot always reflect the low-level dose rate, being restricted by its own sensitivity limit and measuring range. In this paper, the intensity distribution of the neutron field caused by a curie-level Am–Be neutron source was investigated by measuring the count rates obtained through a 3He proportional counter at different locations around the source. The results indicate that the count rates outside of the source room are negligible compared with the count rates measured in the source room. In the source room, a 3He proportional counter and a neutron dosimeter were used to measure the count rates and dose rates, respectively, at different distances from the source. The results indicate that both the count rates and dose rates decrease exponentially with increasing distance, and the dose rates measured by a commercial dosimeter are in good agreement with the results calculated by the Geant4 simulation within the inherent errors recommended by ICRP and IEC. Further studies presented in this paper indicate that the low-level neutron dose equivalent rates in the source room increase exponentially with the increasing low-energy neutron count rates when the source is lifted from the shield at different radiation intensities. Based on this relationship, as well as the count rates measured at larger distances from the source, the dose rates can be calculated approximately by the extrapolation method. This principle can be used to estimate low-level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter.
- Highlights: • The scope of the affected area for a curie-level Am–Be neutron source was measured. • The low-level neutron dose-equivalent rates around the source increase exponentially with the increasing count rates when the source is in different shielding state. • This principle can be used to estimate the low level neutron dose values in the source room which cannot be measured directly by a commercial dosimeter
Chiral extrapolation of nucleon magnetic form factors
International Nuclear Information System (INIS)
The extrapolation of nucleon magnetic form factors calculated within lattice QCD is investigated within a framework based upon heavy baryon chiral effective-field theory. All one-loop graphs are considered at arbitrary momentum transfer and all octet and decuplet baryons are included in the intermediate states. Finite range regularization is applied to improve the convergence in the quark-mass expansion. At each value of the momentum transfer (Q2), a separate extrapolation to the physical pion mass is carried out as a function of mπ alone. Because of the large values of Q2 involved, the role of the pion form factor in the standard pion-loop integrals is also investigated. The resulting values of the form factors at the physical pion mass are compared with experimental data as a function of Q2 and demonstrate the utility and accuracy of the chiral extrapolation methods presented herein
Chiral extrapolation of nucleon magnetic form factors
Energy Technology Data Exchange (ETDEWEB)
P. Wang; D. Leinweber; A. W. Thomas; R.Young
2007-04-01
The extrapolation of nucleon magnetic form factors calculated within lattice QCD is investigated within a framework based upon heavy baryon chiral effective-field theory. All one-loop graphs are considered at arbitrary momentum transfer and all octet and decuplet baryons are included in the intermediate states. Finite range regularization is applied to improve the convergence in the quark-mass expansion. At each value of the momentum transfer (Q{sup 2}), a separate extrapolation to the physical pion mass is carried out as a function of m{sub {pi}} alone. Because of the large values of Q{sup 2} involved, the role of the pion form factor in the standard pion-loop integrals is also investigated. The resulting values of the form factors at the physical pion mass are compared with experimental data as a function of Q{sup 2} and demonstrate the utility and accuracy of the chiral extrapolation methods presented herein.
Uncertainties of Euclidean Time Extrapolation in Lattice Effective Field Theory
Lähde, Timo A; Krebs, Hermann; Lee, Dean; Meißner, Ulf-G; Rupak, Gautam
2014-01-01
Extrapolations in Euclidean time form a central part of Nuclear Lattice Effective Field Theory (NLEFT) calculations using the Projection Monte Carlo method, as the sign problem in many cases prevents simulations at large Euclidean time. We review the next-to-next-to-leading order NLEFT results for the alpha nuclei up to $^{28}$Si, with emphasis on the Euclidean time extrapolations, their expected accuracy and potential pitfalls. We also discuss possible avenues for improving the reliability of Euclidean time extrapolations in NLEFT.
International Nuclear Information System (INIS)
The surrogate-reaction method is an indirect technique to extract neutron-induced cross-sections of short-lived nuclei. In the last years several experiments have been performed to investigate whether this technique can be applied to infer radiative-capture cross-sections. A major difficulty in this type of measurements is the determination of the gamma-cascade detection efficiency. The pulse-height weighting technique (PHWT) has been previously used to determine this quantity in surrogate experiments. In this work, we present a new method to determine the gamma-cascade detection efficiency in the vicinity of the neutron-separation energy that is much simpler than the PHWT. We also investigate the possibility to apply this new technique in standard experiments using neutron beams.
Wavefield extrapolation in pseudodepth domain
Ma, Xuxin
2013-02-01
Wavefields are commonly computed in the Cartesian coordinate frame. Its efficiency is inherently limited due to spatial oversampling in deep layers, where the velocity is high and wavelengths are long. To alleviate this computational waste due to uneven wavelength sampling, we convert the vertical axis of the conventional domain from depth to vertical time or pseudodepth. This creates a nonorthogonal Riemannian coordinate system. Isotropic and anisotropic wavefields can be extrapolated in the new coordinate frame with improved efficiency and good consistency with Cartesian domain extrapolation results. Prestack depth migrations are also evaluated based on the wavefield extrapolation in the pseudodepth domain. © 2013 Society of Exploration Geophysicists. All rights reserved.
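The depth-to-pseudodepth conversion amounts to remapping the vertical axis by the vertical one-way time τ(z) = ∫ dz′/v(z′), so that fast, long-wavelength deep layers occupy fewer samples. A minimal sketch with an illustrative two-layer velocity model (not the paper's) makes the compression explicit:

```python
# Depth-to-pseudodepth (vertical time) mapping: tau(z) = integral dz/v(z).
# The two-layer velocity model below is an illustrative assumption.

def depth_to_tau(dz, velocities):
    """Cumulative vertical one-way time at the bottom of each depth cell."""
    tau, out = 0.0, []
    for v in velocities:
        tau += dz / v
        out.append(tau)
    return out

dz = 10.0                              # metres per depth cell
v = [1500.0] * 100 + [4500.0] * 100    # slow shallow layer, fast deep layer (m/s)
tau = depth_to_tau(dz, v)

# Equal 1 km depth intervals map to unequal tau intervals: the fast deep
# layer spans only one third of the vertical time of the shallow layer.
shallow_span = tau[99]
deep_span = tau[199] - tau[99]
```

Sampling uniformly in τ therefore spends far fewer grid points on the high-velocity layer, which is exactly the saving the pseudodepth domain exploits.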
Fuzzy Model Comparison to Extrapolate Rainfall Data
C. Tzimopoulos; L. Mpallas; C. Evangelides
2008-01-01
This research presents two fuzzy rule-based models for extrapolating the missing rainfall data records of a station, utilizing as a reference the values from another meteorological station located in an adjacent area. The first one is constructed based on the least squares algorithm and the second one using ANFIS method. Three stations were used in this research, all located in Northern Greece. The values of Thessaloniki station were used as fuzzy premises and the values of Sindos and K...
Residual extrapolation operators for efficient wavefield construction
Alkhalifah, T.
2013-02-27
Solving the wave equation using finite-difference approximations allows for fast extrapolation of the wavefield for modelling, imaging and inversion in complex media. It, however, suffers from dispersion and stability-related limitations that might hamper its efficient or proper application to high frequencies. Spectral-based time extrapolation methods tend to mitigate these problems, but at an additional cost to the extrapolation. I investigate the prospect of using a residual formulation of the spectral approach, along with Shanks transform-based expansions that adhere to the residual requirements, to improve accuracy and reduce the cost. Utilizing the fact that spectral methods excel (time steps are allowed to be large) in homogeneous and smooth media, the residual implementation based on velocity perturbation optimizes the use of this feature. Most other implementations of the spectral approach focus on reducing cost by reducing the number of inverse Fourier transforms required in every step. The approach here instead improves the accuracy of each, potentially longer, time step.
Extrapolating phosphorus production to estimate resource reserves.
Vaccari, David A; Strigul, Nikolay
2011-08-01
Various indicators of resource scarcity and methods for extrapolating resource availability are examined for phosphorus. These include resource lifetime; trends in resource price, ore grade and discovery rates; and Hubbert curve extrapolation. Several of these indicate increasing scarcity of phosphate resources. Calculated resource lifetime is subject to a number of caveats, such as unanticipated future changes in resource discovery, mining and beneficiation technology, population growth or per-capita demand. Thus it should be used only as a rough planning index or as a relative indicator of potential scarcity. This paper examines the uncertainty in one method for estimating available resources from historical production data. The confidence intervals for the parameters and predictions of the Hubbert curves are computed as they relate to the amount of information available. These show that Hubbert-type extrapolations are not robust for predicting the ultimately recoverable reserves or year of peak production of phosphate rock. Previous successes of the Hubbert curve are for cases in which alternative resources exist, which is not the situation for phosphate. It is suggested that data other than historical production, such as population growth, identified resources and economic factors, should be included in making such forecasts. PMID:21440285
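A Hubbert curve is the time derivative of a logistic cumulative-production curve, so the ultimately recoverable reserve (URR) appears as the area under it and the peak sits at the logistic midpoint. A minimal sketch with hypothetical parameters (not phosphate data) shows how the peak year and URR relate; fitting these parameters to noisy historical production is where the paper's confidence-interval problems arise.

```python
# Hubbert (logistic-derivative) production curve with hypothetical
# parameters; URR, b, and t_peak are illustrative assumptions.
import math

def hubbert_rate(t, urr, b, t_peak):
    """Annual production rate from the logistic cumulative model."""
    e = math.exp(-b * (t - t_peak))
    return urr * b * e / (1.0 + e) ** 2

urr, b, t_peak = 1000.0, 0.08, 2030    # hypothetical reserve and shape
years = range(1900, 2200)
prod = [hubbert_rate(t, urr, b, t_peak) for t in years]

peak_year = 1900 + prod.index(max(prod))   # logistic midpoint
total = sum(prod)   # approximates URR when the window spans the curve
```

The peak rate equals URR·b/4 at t_peak, and summing production over a window that spans the whole curve recovers the URR, which is why extrapolating the curve from pre-peak data alone is so sensitive to the fitted parameters.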
Extrapolation. Recent results and challenges.
Czech Academy of Sciences Publication Activity Database
Krbec, Miroslav
Praha : Mathematical Institute of the Academy of Sciences of the Czech Republic, 2005 - (Drábek, P.; Rákosník, J.), s. 176-187 ISBN 80-85823-52-7. [Function Spaces, Differential Operators and Nonlinear Analysis. Milovy (CZ), 28.05.2004-02.06.2004] Institutional research plan: CEZ:AV0Z1019905 Keywords : extrapolation * Lorentz spaces * small Lebesgue spaces Subject RIV: BA - General Mathematics
Effective orthorhombic anisotropic models for wavefield extrapolation
Ibanez-Jacome, W.
2014-07-18
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
Uncertainties of Euclidean time extrapolation in lattice effective field theory
International Nuclear Information System (INIS)
Extrapolations in Euclidean time form a central part of nuclear lattice effective field theory (NLEFT) calculations using the projection Monte Carlo method, as the sign problem in many cases prevents simulations at large Euclidean time. We review the next-to-next-to-leading order NLEFT results for the alpha nuclei up to 28Si, with emphasis on the Euclidean time extrapolations, their expected accuracy and potential pitfalls. We also discuss possible avenues for improving the reliability of Euclidean time extrapolations in NLEFT. (paper)
UFOs: Observations, Studies and Extrapolations
Baer, T; Barnes, M J; Bartmann, W; Bracco, C; Carlier, E; Cerutti, F; Dehning, B; Ducimetière, L; Ferrari, A; Ferro-Luzzi, M; Garrel, N; Gerardin, A; Goddard, B; Holzer, E B; Jackson, S; Jimenez, J M; Kain, V; Zimmermann, F; Lechner, A; Mertens, V; Misiowiec, M; Nebot Del Busto, E; Morón Ballester, R; Norderhaug Drosdal, L; Nordt, A; Papotti, G; Redaelli, S; Uythoven, J; Velghe, B; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zerlauth, M; Fuster Martinez, N
2012-01-01
UFOs (“Unidentified Falling Objects”) could be one of the major performance limitations for nominal LHC operation. Therefore, in 2011, the diagnostics for UFO events were significantly improved, dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge is summarized and extrapolations for LHC operation in 2012 and beyond are presented. Mitigation strategies are proposed and related tests and measures for 2012 are specified.
Chiral Extrapolation of Hadronic Observables
Thomas, A W
2002-01-01
One of the great challenges of lattice QCD is to produce unambiguous predictions for the properties of physical hadrons. We review recent progress with respect to a major barrier to achieving this goal, namely the fact that computation time currently limits us to large quark mass. Using insights from the study of the lattice data itself, together with the general constraints of chiral symmetry, we demonstrate that it is possible to extrapolate accurately and in an essentially model independent manner from the mass region where calculations will be performed within the next five years to the chiral limit.
Aschwanden, Markus J; Liu, Yang
2014-01-01
We developed a {\\sl coronal non-linear force-free field (COR-NLFFF)} forward-fitting code that fits an approximate {\\sl non-linear force-free field (NLFFF)} solution to the observed geometry of automatically traced coronal loops. In contrast to photospheric NLFFF codes, which calculate a magnetic field solution from the constraints of the transverse photospheric field, this new code uses coronal constraints instead, and this way provides important information on systematic errors of each magnetic field calculation method, as well as on the non-forcefreeness in the lower chromosphere. In this study we applied the COR-NLFFF code to active region NOAA 11158, during the time interval of 2011 Feb 12 to 17, which includes an X2.2 GOES-class flare plus 35 M and C-class flares. We calcuated the free magnetic energy with a 6-minute cadence over 5 days. We find good agreement between the two types of codes for the total nonpotential $E_N$ and potential energy $E_P$, but find up to a factor of 4 discrepancy in the free ...
International Nuclear Information System (INIS)
We developed a coronal nonlinear force-free field (COR-NLFFF) forward-fitting code that fits an approximate nonlinear force-free field (NLFFF) solution to the observed geometry of automatically traced coronal loops. In contrast to photospheric NLFFF codes, which calculate a magnetic field solution from the constraints of the transverse photospheric field, this new code uses coronal constraints instead, and this way provides important information on systematic errors of each magnetic field calculation method, as well as on the non-force-freeness in the lower chromosphere. In this study we applied the COR-NLFFF code to NOAA Active Region 11158, during the time interval of 2011 February 12-17, which includes an X2.2 GOES-class flare plus 35 M- and C-class flares. We calculated the free magnetic energy with a 6 minute cadence over 5 days. We find good agreement between the two types of codes for the total nonpotential EN and potential energy EP but find up to a factor of 4 discrepancy in the free energy Efree = EN − EP and up to a factor of 10 discrepancy in the decrease of the free energy ΔEfree during flares. The coronal NLFFF code exhibits a larger time variability and yields a decrease of free energy during the flare that is sufficient to satisfy the flare energy budget, while the photospheric NLFFF code shows much less time variability and an order of magnitude less free-energy decrease during flares. The discrepancy may partly be due to the preprocessing of photospheric vector data but more likely is due to the non-force-freeness in the lower chromosphere. We conclude that the coronal field cannot be correctly calculated on the basis of photospheric data alone and requires additional information on coronal loop geometries.
Chiral extrapolation beyond the power-counting regime
International Nuclear Information System (INIS)
Chiral effective field theory can provide valuable insight into the chiral physics of hadrons when used in conjunction with nonperturbative schemes such as lattice quantum chromodynamics (QCD). In this discourse, the attention is focused on extrapolating the mass of the ρ meson to the physical pion mass in quenched QCD. With the absence of a known experimental value, this serves to demonstrate the ability of the extrapolation scheme to make predictions without prior bias. By using extended effective field theory developed previously, an extrapolation is performed using quenched lattice QCD data that extends outside the chiral power-counting regime. The method involves an analysis of the renormalization flow curves of the low-energy coefficients in a finite-range regularized effective field theory. The analysis identifies an optimal regularization scale, which is embedded in the lattice QCD data themselves. This optimal scale is the value of the regularization scale at which the renormalization of the low-energy coefficients is approximately independent of the range of quark masses considered. By using recent precision, quenched lattice results, the extrapolation is tested directly by truncating the analysis to a set of points above 380 MeV, while temporarily disregarding the simulation results closer to the chiral regime. This tests the ability of the method to make predictions of the simulation results, without phenomenologically motivated bias. The result is a successful extrapolation to the chiral regime.
Flavor extrapolation in lattice QCD
International Nuclear Information System (INIS)
Explicit calculation of the effect of virtual quark-antiquark pairs in lattice QCD has eluded researchers. To include their effect explicitly one must calculate the determinant of the fermion-fermion coupling matrix. Owing to the large number of sites in a continuum limit size lattice, direct evaluation of this term requires an unrealistic amount of computer time. The effect of the virtual pairs can be approximated by ignoring this term and adjusting lattice couplings to reproduce experimental results. This procedure is called the valence approximation since it ignores all but the minimal number of quarks needed to describe hadrons. In this work the effect of the quark-antiquark pairs has been incorporated in a theory with an effective negative number of quark flavors contributing to the closed loops. Various particle masses and decay constants have been calculated for this theory and for one with no virtual pairs. The author attempts to extrapolate results towards positive numbers of quark flavors. The results show approximate agreement with experimental measurements and demonstrate the smoothness of lattice expectations in the number of quark flavors
Chiral extrapolation beyond the power-counting regime
Hall, J M M; Leinweber, D B; Liu, K F; Mathur, N; Young, R D; Zhang, J B
2011-01-01
Extrapolations of nuclear binding energies from new linear mass relations
Hove, D; Riisager, K
2014-01-01
We present a method to extrapolate nuclear binding energies from known values for neighbouring nuclei. We select four specific mass relations constructed to eliminate the smooth variation of the binding energy as a function of nucleon numbers. The fast odd-even variations are avoided by comparing nuclei with the same parity. The mass relations are first tested and shown either to be rather accurately obeyed or to reveal signatures of quickly varying structures. Extrapolations are initially made for a nucleus by applying each of these relations. Very reliable estimates are then produced either by an average or by choosing the extrapolation where the smoothest structures enter. Corresponding mass relations for $Q_{\alpha}$ values are used to study the general structure of super-heavy elements. A minor neutron shell at $N = 152$ is seen, but no sign of other shell structures is apparent in the super-heavy region. Accuracies are typically substantially better than $0.5$ MeV.
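The flavour of such an extrapolation can be conveyed with a toy same-parity relation along an isotopic chain. The four specific mass relations of the abstract are not reproduced here; the liquid-drop formula below (textbook coefficients, pairing omitted) only stands in for a smooth binding-energy surface:

```python
# Illustrative sketch of the idea behind mass relations: predict a binding
# energy from same-parity neighbours so that the smooth variation cancels and
# odd-even staggering is avoided. Here a simple second-difference relation
# along an isotopic chain (steps of two neutrons) conveys the principle.
# The liquid-drop coefficients are textbook values, used only to generate
# smooth test data.

def binding_energy(N, Z):
    """Semi-empirical (liquid-drop) binding energy in MeV, pairing omitted."""
    A = N + Z
    return (15.75 * A - 17.8 * A ** (2 / 3)
            - 0.711 * Z * (Z - 1) / A ** (1 / 3)
            - 23.7 * (N - Z) ** 2 / A)

def extrapolate_isotope(N, Z):
    """Predict B(N, Z) linearly from the two same-parity lighter isotopes."""
    return 2 * binding_energy(N - 2, Z) - binding_energy(N - 4, Z)

N, Z = 152, 100                        # near the minor N = 152 shell region
pred = extrapolate_isotope(N, Z)
true = binding_energy(N, Z)
print(f"predicted {pred:.2f} MeV, smooth-model value {true:.2f} MeV")
```

Because the relation cancels the smooth trend, the residual error stays well below an MeV, consistent with the quoted accuracy.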
Design and building of an extrapolation ionization chamber for beta dosimetry
International Nuclear Information System (INIS)
An extrapolation chamber was designed and built to be used in beta dosimetry. The basic characteristics of an extrapolation chamber are discussed, together with fundamental principle of the dosimetric method used. Details of the chamber's design and properties of materials employed are presented. A full evaluation of extrapolation chamber under irradiation from two 90Sr + 90Y beta sources is done. The geometric parameters of the chamber, leakage current and ion collection efficiency are determined. (Author)
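The "extrapolation" in such a chamber is literal: the ionization current is measured at several electrode separations and a straight line is fitted, and the zero-gap slope yields the absorbed dose rate via the Bragg-Gray relation. A minimal numerical sketch, with invented currents and a hypothetical electrode geometry:

```python
# Sketch of the extrapolation-chamber principle: the ionization current I(d)
# is measured at several electrode separations d and fitted with a straight
# line; the slope dI/dd extrapolated to d -> 0 gives the absorbed dose rate
# in air via the Bragg-Gray relation. Currents and geometry are invented
# for illustration only.

W_OVER_E = 33.97          # J/C, mean energy per ion pair in dry air
RHO_AIR = 1.205e-3        # g/cm^3 at 20 C, 101.3 kPa
AREA = 7.07               # cm^2, hypothetical collecting-electrode area

gaps = [0.05, 0.10, 0.15, 0.20, 0.25]                          # cm
currents = [1.02e-12, 2.01e-12, 3.05e-12, 3.98e-12, 5.02e-12]  # A (made up)

n = len(gaps)
mx = sum(gaps) / n
my = sum(currents) / n
slope = sum((x - mx) * (y - my) for x, y in zip(gaps, currents)) \
        / sum((x - mx) ** 2 for x in gaps)      # A/cm, extrapolated dI/dd

# Dose rate in air (Gy/s): (W/e) * (dI/dd) / (rho * area); 1e-3 converts g->kg
dose_rate = W_OVER_E * slope / (RHO_AIR * 1e-3 * AREA)
print(f"slope {slope:.3e} A/cm -> dose rate {dose_rate:.3e} Gy/s")
```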
Outlier robustness for wind turbine extrapolated extreme loads
DEFF Research Database (Denmark)
Natarajan, Anand; Verelst, David Robert
2012-01-01
Methods for extrapolating extreme loads to a 50 year probability of exceedance, which display robustness to the presence of outliers in simulated loads data sets, are described. Case studies of isolated high extreme out-of-plane loads are discussed to emphasize their underlying physical reasons. Stochastic identification of numerical artifacts in simulated loads is demonstrated using the method of principal component analysis. The extrapolation methodology is made robust to outliers through a weighted loads approach, whereby the eigenvalues of the correlation matrix obtained using the loads with their dependencies are utilized to estimate a probability for the largest extreme load to occur at a specific mean wind speed. This inherently weights extreme loads that occur frequently within mean wind speed bins higher than isolated occurrences of extreme loads. Primarily, the results for the blade root out-of-plane loads are presented here, as those extrapolated loads have shown wide variability in the literature, but the method can be generalized to any other component load. The convergence of the 1 year extrapolated extreme blade root out-of-plane load with the number of turbulent wind samples used in the loads simulation is demonstrated and compared with published results. The further effects of varying wind inflow angles and shear exponents are brought out. Parametric fitting techniques that consider all extreme loads including ‘outliers’ are proposed, and the physical reasons that result in isolated high extreme loads are highlighted, including the effect of the wind turbine control system. Copyright © 2011 John Wiley & Sons, Ltd.
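The baseline extrapolation step that such methods build on can be sketched with a Gumbel fit to simulated extremes and a 50-year return level. The moment-based fit and all load numbers below are illustrative; the PCA-based outlier weighting of the abstract is not reproduced:

```python
import math
import random

# Baseline load-extrapolation step: fit a Gumbel distribution to simulated
# extreme loads (here drawn synthetically via the inverse Gumbel CDF) and
# read off the load with a 50-year return period. Method-of-moments fit;
# all numbers are hypothetical.

random.seed(7)
MU, BETA = 6000.0, 400.0             # hypothetical Gumbel parameters (kNm)
loads = [MU - BETA * math.log(-math.log(random.random()))
         for _ in range(5000)]       # synthetic 10-minute extreme loads

m = sum(loads) / len(loads)
s = math.sqrt(sum((x - m) ** 2 for x in loads) / (len(loads) - 1))
beta_hat = s * math.sqrt(6) / math.pi      # method-of-moments Gumbel fit
mu_hat = m - 0.5772 * beta_hat

# 50 years of 10-minute periods; exceedance probability per period:
n_periods = 50 * 365.25 * 24 * 6
p = 1.0 / n_periods
load_50yr = mu_hat - beta_hat * math.log(-math.log(1.0 - p))
print(f"50-year extrapolated load: {load_50yr:.0f} kNm")
```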
Finite-range regularisation and chiral extrapolation
International Nuclear Information System (INIS)
We study the expansion of the nucleon mass in chiral effective field theory. We describe finite-range regularisation and demonstrate its application to the chiral extrapolation problem for lattice QCD
Endangered species toxicity extrapolation using ICE models
The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...
Effective wavefield extrapolation in anisotropic media: Accounting for resolvable anisotropy
Alkhalifah, Tariq Ali
2014-04-30
Spectral methods provide artefact-free and generally dispersion-free wavefield extrapolation in anisotropic media. Their apparent weakness is in accessing the medium-inhomogeneity information in an efficient manner. This is usually handled through a velocity-weighted summation (interpolation) of representative constant-velocity extrapolated wavefields, with the number of these extrapolations controlled by the effective rank of the original mixed-domain operator or, more specifically, by the complexity of the velocity model. Conversely, with pseudo-spectral methods, because only the space derivatives are handled in the wavenumber domain, we obtain relatively efficient access to the inhomogeneity in isotropic media, but we often resort to weak approximations to handle the anisotropy efficiently. Utilizing perturbation theory, I isolate the contribution of anisotropy to the wavefield extrapolation process. This allows us to factorize as much of the inhomogeneity in the anisotropic parameters as possible out of the spectral implementation, yielding effectively a pseudo-spectral formulation. This is particularly true if the inhomogeneity of the dimensionless anisotropic parameters is mild compared with the velocity (i.e., factorized anisotropic media). I improve on the accuracy by using the Shanks transformation to incorporate a denominator in the expansion that predicts the higher-order omitted terms; thus, we deal with fewer terms for a high level of accuracy. In fact, when we use this new separation-based implementation, the anisotropy correction to the extrapolation can be applied separately as a residual operation, which provides a tool for anisotropic parameter sensitivity analysis. The accuracy of the approximation is high, as demonstrated in a complex tilted transversely isotropic model. © 2014 European Association of Geoscientists & Engineers.
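The Shanks transformation invoked here to fold omitted higher-order terms into a denominator is, at its core, the classic sequence-acceleration formula S(A_n) = (A_{n+1}A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2A_n). A minimal demonstration on the slowly convergent series for ln 2:

```python
import math

# One Shanks step applied to partial sums of ln 2 = 1 - 1/2 + 1/3 - ...
# The accelerated estimate is far closer to the limit than the raw
# partial sum of the same depth.

def shanks(a_prev, a_curr, a_next):
    """One Shanks transformation step; assumes a nonzero denominator."""
    return (a_next * a_prev - a_curr ** 2) / (a_next + a_prev - 2 * a_curr)

partials = []
s = 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partials.append(s)

plain = partials[-1]
accel = shanks(partials[-3], partials[-2], partials[-1])
print(f"partial sum: {plain:.6f}, Shanks: {accel:.6f}, ln2: {math.log(2):.6f}")
```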
Basis set and correlation dependent extrapolation of correlation energy
Huh, Soon Bum; Lee, Jae Shin
2003-02-01
A simple extrapolation formula of the form (X+γ)^(-3), which fits correlation energies with correlation consistent (aug-)cc-pVXZ and (aug-)cc-pV(X+1)Z [X=D(2), T(3), Q(4)] basis sets to estimate the basis set limit, was devised by varying the parameter γ according to basis set quality and correlation level. The explicit extrapolation formulas suitable for calculations at the second order Møller-Plesset perturbation theory and single and double excitation coupled cluster theory with perturbative triples correction level are presented, and applications are made to estimate the basis set limit binding energies of various hydrogen-bonded and van der Waals clusters. A comparison of the results by this formula with the reference basis set limit results and the results by other extrapolation methods reveals that the extrapolation formulas proposed here can yield reliable basis set limit estimates even with small basis sets and could be used effectively for investigating large weakly bound complexes.
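A two-point form of this extrapolation can be written in closed form: assuming E(X) = E_CBS + c(X+γ)^(-3), energies from two consecutive basis sets determine the limit exactly. The energies below are synthetic, constructed so the exact limit is known, and the value of γ is only illustrative:

```python
# Two-point version of the (X + gamma)^(-3) extrapolation: given correlation
# energies for consecutive cardinal numbers X and X+1, the basis-set limit
# follows in closed form. Synthetic energies with a known limit are used to
# verify the algebra; gamma would in practice be tuned to basis quality and
# correlation level.

def cbs_limit(e_x, e_x1, X, gamma):
    """Closed-form basis-set limit from E(X) and E(X+1)."""
    w_x = (X + gamma) ** 3
    w_x1 = (X + 1 + gamma) ** 3
    return (e_x1 * w_x1 - e_x * w_x) / (w_x1 - w_x)

GAMMA = 0.5                           # illustrative value of the parameter
E_INF, C = -0.300, 0.150              # hartree; invented exact limit and slope
e_T = E_INF + C * (3 + GAMMA) ** -3   # synthetic triple-zeta-like energy
e_Q = E_INF + C * (4 + GAMMA) ** -3   # synthetic quadruple-zeta-like energy

estimate = cbs_limit(e_T, e_Q, 3, GAMMA)
print(f"estimated basis-set limit: {estimate:.6f} hartree")
```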
Frequency extrapolation by nonconvex compressive sensing
International Nuclear Information System (INIS)
Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples is available. We apply the compressive sensing principle to a somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.
EXTRAPOLATING BRAIN DEVELOPMENT FROM EXPERIMENTAL SPECIES TO HUMANS
Clancy, Barbara; Finlay, Barbara L; Darlington, Richard B.; Anand, KJS
2007-01-01
To better understand the neurotoxic effects of diverse hazards on the developing human nervous system, researchers and clinicians rely on data collected from a number of model species that develop and mature at varying rates. We review the methods commonly used to extrapolate the timing of brain development from experimental mammalian species to humans, including morphological comparisons, “rules of thumb” and “event-based” analyses. Most are unavoidably limited in range or detail, many are n...
Wavefield extrapolation in pseudo-depth domain
Ma, Xuxin
2012-01-01
Extrapolating seismic waves in Cartesian coordinates is prone to uneven spatial sampling, because the seismic wavelength tends to grow with depth as velocity increases. We transform the vertical depth axis to a pseudo one using a velocity-weighted mapping, which can effectively mitigate this wavelength variation. We derive acoustic wave equations in this new domain based on the direct transformation of the Laplacian derivatives, which admits solutions that are more accurate and stable than those derived from the kinematic transformation. The anisotropic versions of these equations allow us to isolate the vertical velocity influence and reduce its impact on modeling and imaging. The major benefit of extrapolating wavefields in pseudo-depth space is its near uniform wavelength, as opposed to the normally dramatic change of wavelength with the conventional approach. Time wavefield extrapolation on a complex velocity model shows some of the features of this approach.
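The wavelength-uniformizing effect of a velocity-weighted mapping can be checked numerically: with a reference velocity v0, define pseudo-depth τ(z) as the integral of v0/v(z'), so that a local wavelength v(z)/f maps to the constant v0/f. The linear velocity profile below is invented, and the paper's wave equations in the mapped domain are not reproduced:

```python
# Velocity-weighted depth mapping: tau(z) = integral of (v0 / v(z')) dz'.
# In tau, the local wavelength v(z)/f is rescaled by d(tau)/dz = v0/v(z),
# giving the uniform value v0/f. The profile v(z) is hypothetical.

V0 = 1500.0                      # m/s, reference (near-surface) velocity
FREQ = 25.0                      # Hz

def v(z):
    """Hypothetical velocity profile increasing linearly with depth."""
    return 1500.0 + 0.6 * z

# Trapezoidal integration of d(tau) = (v0 / v(z)) dz on a fine grid.
dz = 1.0
depths = [i * dz for i in range(0, 5001)]
tau = [0.0]
for i in range(1, len(depths)):
    avg = 0.5 * (V0 / v(depths[i - 1]) + V0 / v(depths[i]))
    tau.append(tau[-1] + avg * dz)

# Wavelength measured per unit pseudo-depth is constant:
wavelength_in_tau = [(v(z) / FREQ) * (V0 / v(z)) for z in depths]
print(f"min {min(wavelength_in_tau):.2f} m, max {max(wavelength_in_tau):.2f} m")
```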
Evaluation of uncertainty in vertical extrapolation of wind speeds and its implications
Energy Technology Data Exchange (ETDEWEB)
Dimitrijevic, M.; Zaganescu, C.; Dokouzian, S. [Helimax Energy Inc., Montreal, PQ (Canada)
2008-07-01
This paper reported on a study that investigated the influence that topography, surface roughness and atmospheric stability have on wind speed vertical extrapolation and the financial impact throughout the service life of a wind power project. The wind resource should be assessed at hub height in order to calculate the energy yield of a wind turbine and the noise propagation or to determine the structural integrity of a wind tower. The accuracy of the hub height wind speed estimate depends on how well the vertical extrapolation has been done. In this study, directionally, monthly and hourly computed wind shear coefficients were used to extrapolate the measured wind speeds to hub height in order to compare extrapolated and measured wind speed values under different conditions. Boundary layer equations were used to evaluate the atmospheric stability. Wind speeds were extrapolated using the appropriate stability correction function and were verified against measured wind speeds. Different methods to define the stability classes were compared with the measured data. Several WAsP simulations were run with measured and extrapolated wind series in order to evaluate the influence of vertical extrapolation on the horizontal distribution of the wind resource. All evaluated extrapolation methods performed well, with uncertainty of up to 3 per cent for studied cases. The uncertainty was lower in less complex conditions and when more measured data was available. Future work will focus on extending validation of the extrapolation methods using other tall tower data in terrain of varying complexity, as well as further investigating the effects of stability on extrapolation. 8 refs., 2 tabs., 10 figs.
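The simplest vertical-extrapolation step evaluated in such studies is the power law: a shear exponent α is computed from measurements at two heights and used to extrapolate to hub height. All speeds and heights below are invented; the directional, monthly, and hourly binning of α and the stability corrections discussed in the abstract are omitted:

```python
import math

# Power-law vertical extrapolation: u(h) = u_ref * (h / h_ref)^alpha.
# alpha is derived from two measurement heights, then applied to reach
# hub height. Measurement values are hypothetical.

def shear_exponent(u1, h1, u2, h2):
    """Shear exponent alpha from wind speeds at two heights."""
    return math.log(u2 / u1) / math.log(h2 / h1)

def extrapolate(u_ref, h_ref, h_target, alpha):
    """Extrapolate a wind speed from h_ref to h_target."""
    return u_ref * (h_target / h_ref) ** alpha

u40, u60 = 6.2, 6.7          # m/s at 40 m and 60 m (made-up measurements)
alpha = shear_exponent(u40, 40.0, u60, 60.0)
u_hub = extrapolate(u60, 60.0, 100.0, alpha)
print(f"alpha = {alpha:.3f}, extrapolated 100 m wind speed = {u_hub:.2f} m/s")
```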
On the basis-set extrapolation
Chandra, Suresh
2015-01-01
A possible solution to the problem of memory size and computer time is the extrapolation of the basis set$^1$. This extrapolation has two exponents, $\alpha$ and $\beta$, corresponding to the HF (reference energy) and the energy of correlations (EC), respectively. For a given system, the exponents are taken as constant$^2$, and potential energy surfaces (PES) are generated. We have found that the values of $\alpha$ and $\beta$ are not constant, but vary from position to position in the system. How to deal with such a situation and obtain very accurate PES is discussed.
International Nuclear Information System (INIS)
180000 pictures taken in the 2 m CERN hydrogen bubble chamber with an incident beam of 2.77 GeV/c were examined. The high statistics obtained over the whole angular production range allowed a study of the dσ/dt differential cross section behaviour, the mass and width of the ρ meson, and the multipole parameters of this resonance. Nevertheless, the aim of this experiment was the application of the Chew-Low extrapolation method. Different types of extrapolation procedures were compared. Phase shift analysis of elastic ππ scattering between 500 and 1100 MeV, performed with conformal mappings, allowed determination of the values of the S0, S2, P1, D0, D2 waves. Forward dispersion relations were used to obtain scattering length values for the S2 and P1 phase shifts. (author)
International Nuclear Information System (INIS)
Since 1987 the Metrology Department of ININ, in its Secondary Standard Dosimetry Laboratory, has maintained a reference set of beta radiation sources and an extrapolation chamber with variable electrode separation. Its objective is to realize the unit of absorbed dose rate in air for beta radiation, using the ionometric (Bragg-Gray cavity) method with the extrapolation chamber. The services offered are: i) Calibration: beta radiation sources, isotopes 90Sr/90Y; ophthalmic applicators 90Sr/90Y; instruments for the detection of beta radiation in radiological protection (ionization chambers, Geiger-Muller counters, etc.); personal dosemeters. ii) Irradiation of materials with beta radiation for research. (Author)
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Naess, Arvid
2011-01-01
The paper explores a recently developed method for statistical response load (load effect) extrapolation for application to extreme response of wind turbines during operation. The extrapolation method is based on average conditional exceedance rates and is in the present implementation restricted to cases where the Gumbel distribution is the appropriate asymptotic extreme value distribution. However, two extra parameters are introduced by which a more general and flexible class of extreme value distributions is obtained with the Gumbel distribution as a subclass. The general method is implemented within a hierarchical model where the variables that influence the loading are divided into ergodic variables and time-invariant non-ergodic variables. The presented method for statistical response load extrapolation was compared with the existing methods based on peak extrapolation for the blade out-of-plane bending moment and the tower mudline bending moment of a pitch-controlled wind turbine. In general, the results show that the method based on average conditional exceedance rates predicts the extrapolated characteristic response loads at the individual mean wind speeds well and results in more consistent estimates than the methods based on peak extrapolation.
A simple extrapolation of thermodynamic perturbation theory to infinite order
Ghobadi, Ahmadreza F.; Elliott, J. Richard
2015-09-01
Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A3/A2, where Ai is the ith order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed Chain - Statistical Associating Fluid Theory (PC-SAFT).
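A minimal version of the infinite-order idea: if the ratio between successive perturbation orders approaches a constant r, the omitted tail can be summed as a geometric series, A_total ≈ A_1 + ... + A_n + A_n·r/(1-r). The density-dependent Gaussian form of the ratio described in the abstract is not reproduced; the coefficients below are synthetic, with an exactly geometric tail so the extrapolation can be checked against the closed-form sum:

```python
# Geometric-tail extrapolation of a truncated perturbation series. With an
# exactly geometric sequence of terms, the extrapolated total must match the
# closed-form infinite sum; the first term and ratio are invented.

def extrapolate_series(terms):
    """Sum the known terms plus a geometric tail based on the last ratio."""
    r = terms[-1] / terms[-2]
    assert abs(r) < 1, "tail ratio must contract for the extrapolation"
    return sum(terms) + terms[-1] * r / (1 - r)

A1, r = -4.0, 0.35                       # invented first term and ratio
terms = [A1 * r ** i for i in range(4)]  # A1..A4, exactly geometric
exact = A1 / (1 - r)                     # closed-form infinite sum

estimate = extrapolate_series(terms)
truncated = sum(terms)
print(f"truncated {truncated:.6f}, extrapolated {estimate:.6f}, "
      f"exact {exact:.6f}")
```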
Effective Orthorhombic Anisotropic Models for Wave field Extrapolation
Ibanez Jacome, Wilson
2013-05-01
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, I generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, I develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic one, is represented by a sixth order polynomial equation that includes the fastest solution corresponding to outgoing P-waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, which is done by explicitly solving the isotropic eikonal equation for the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
I extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
Haseman, J K
2003-01-01
In a recent Perspective article (Toxicologic Pathology 31: 260-262, 2003) Waddell asserts that he has developed a log-linear extrapolation model that can demonstrate a threshold and resolve once and for all the uncertainties associated with low-dose cancer risk extrapolation. However, his method essentially forces, rather than demonstrates, a threshold, and has many serious flaws that result in significant underestimation of low-dose risk. It would be a serious mistake for the scientific community to adopt Waddell's log-linear extrapolation model for chemical carcinogenesis risk assessment. PMID:14692613
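The central objection can be seen with two dose-response points: a straight line in log10(dose) always crosses zero response at some positive dose, so an apparent "threshold" emerges whether or not one exists, whereas linear no-threshold extrapolation gives nonzero risk at every positive dose. The doses and incidences below are invented for illustration:

```python
import math

# Two-point comparison of log-linear vs. linear no-threshold (LNT)
# low-dose extrapolation. Hypothetical bioassay points:
d1, r1 = 10.0, 0.05      # dose (mg/kg), excess tumor incidence
d2, r2 = 100.0, 0.20

# Log-linear fit: r = a + b * log10(d)
b = (r2 - r1) / (math.log10(d2) - math.log10(d1))
a = r1 - b * math.log10(d1)
forced_threshold = 10 ** (-a / b)   # dose where the fitted response hits zero

# Linear no-threshold extrapolation through the origin and the low-dose point:
def lnt_risk(d):
    """LNT risk estimate proportional to dose."""
    return r1 * d / d1

print(f"log-linear zero-response dose: {forced_threshold:.2f} mg/kg")
print(f"LNT risk at 1 mg/kg: {lnt_risk(1.0):.4f}")
```

The log-linear line necessarily predicts zero response at a strictly positive dose, which is the "forced threshold" the abstract objects to.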
Extrapolation of Fracture Toughness Data for HT9 Irradiated at Temperatures 360-390 C
International Nuclear Information System (INIS)
The objective of this task is to provide estimated HT9 cladding and duct fracture toughness values for test (or application) temperatures ranging from -10 C to 200 C, after irradiation at temperatures of 360-390 C. This is expected to be an extrapolation of the limited data presented by Huang(1, 2). This extrapolation is based on currently accepted methods (ASTM 2003 Standard E 1921-02), and other relevant fracture toughness data on irradiated HT9 or similar alloys
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, D.R.; Mayer, F.L.; Ellersieck, Mark R.; Asfaw, A.
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled "Ecological Risk Analysis" (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). 
Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are. ?? 2005 Springer Science+Business Media, Inc.
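Both extrapolation routes described above reduce to short calculations: a species sensitivity distribution (SSD) fitted as a lognormal to several acute LC50 values, from which a protective fifth percentile (HC5) is read off, and a single-species extrapolation factor such as the 0.412 reported for rainbow trout. The LC50 values below are invented:

```python
from statistics import NormalDist, mean, stdev
import math

# Route 1: lognormal SSD fitted to several acute LC50s (mg/L, made up),
# HC5 = 5th percentile. Route 2: single-species extrapolation factor
# (the 0.412 rainbow trout factor is the value quoted in the abstract).

lc50s = {"rainbow trout": 12.0, "bluegill": 30.0,
         "daphnia": 3.5, "mysid": 1.8}

logs = [math.log10(v) for v in lc50s.values()]
ssd = NormalDist(mean(logs), stdev(logs))
hc5 = 10 ** ssd.inv_cdf(0.05)                 # SSD-based protective value

trout_factor = 0.412                          # factor from the study
single_species = lc50s["rainbow trout"] * trout_factor

print(f"SSD HC5: {hc5:.2f} mg/L, "
      f"trout-factor estimate: {single_species:.2f} mg/L")
```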
International Nuclear Information System (INIS)
90Sr+90Y clinical applicators are used for brachytherapy in Brazilian clinics even though they are no longer manufactured. Such sources must be calibrated periodically, and one of the calibration methods in use is ionometry with extrapolation ionization chambers. 90Sr+90Y clinical applicators were calibrated using an extrapolation minichamber developed at the Calibration Laboratory at IPEN. The obtained results agree satisfactorily with the data provided in the calibration certificates of the sources. - Highlights: • 90Sr+90Y clinical applicators were calibrated using a mini-extrapolation chamber. • An extrapolation curve was obtained for each applicator during its calibration. • The results were compared with those provided by the calibration certificates. • All results for the dermatological applicators presented differences lower than 5%
On extrapolation blowups in the scale
Directory of Open Access Journals (Sweden)
Fiorenza Alberto
2006-01-01
Yano's extrapolation theorem, dating back to 1951, establishes boundedness properties of a subadditive operator acting continuously in L^p for p close to 1 and/or to infinity, taking L^p into L^p, with norms blowing up at speed (p-1)^{-α} and/or p^{β}, α, β > 0. Here we give answers, in terms of Zygmund, Lorentz-Zygmund and small Lebesgue spaces, to what happens as α → ∞ and/or β → ∞. The study has been motivated by current investigations of convolution maximal functions in stochastic analysis, where this problem occurs. We also touch the problem of comparison of results in various scales of spaces.
Extrapolation from experimental systems to man. A review of the problems and the possibilities
International Nuclear Information System (INIS)
Various species of experimental animals, but in particular the mouse, have proved to be good model systems for predicting qualitatively the human response to irradiation. While extrapolations of genetic risks from mice to humans have a long history and a record of considerable success, there have been few attempts to extrapolate quantitatively the findings for somatic effects. An ability to extrapolate risks from exposures to various carcinogenic agents from experimental animal systems and from in vitro systems is an urgent need, and radiation studies provide the model for the development of suitable methods of extrapolation. Accurate measurement of dose, a remarkable store of knowledge about radiobiological responses at the molecular, cellular, and whole-organism level, and the body of data on radiation effects in both man and experimental animals make radiation studies the sensible choice of a model for the development of methods of extrapolation. The principles derived from such studies will make the much more difficult task of extrapolating risks from exposures to chemical carcinogens an easier one
Validation of the modeling of a commercial extrapolation chamber using the Monte Carlo technique
International Nuclear Information System (INIS)
Full text: Extrapolation chambers allow absolute measurements of air kerma produced by beta rays and low energy X-rays, although, in precise measurements of air kerma, some effects such as electron losses and photon scattering have to be taken into consideration and corrected. In order to determine these correction factors, a commercial PTW extrapolation chamber (model 23391) was modeled using the MCNP code. The geometry of the extrapolation chamber was established in the MCNP code in accordance with the specifications of the chamber: 0.025 mm polyimide entrance window, 40.0 mm diameter collecting electrode made of aluminum, and wall of the chamber made of aluminum. The densities of these materials were inserted into the code. The spectrum of the X-ray beams (Pantak/Seifert, operating from 5 kV to 160 kV), measured at 1 m from the X-ray tube, was used to simulate the source. To validate this modeling, two tests were evaluated: determination of the extrapolation curve and the energy dependence. The results of the Monte Carlo simulations were compared with experimental values; both tests showed good agreement between simulated and measured values. The angular coefficient of the extrapolation chamber determined by the Monte Carlo technique and the angular coefficient of the experimental extrapolation curve showed a difference of only 1%. The energy dependence (calculated and measured) presented similar response results. Therefore, the modeling of the PTW extrapolation chamber can be utilized to determine the factors that will correct all effects that interfere in absolute air kerma measurements. The method described by Burns was the chosen method for the determination of electron losses and photon scattering correction factors.
The specific correction factors established for the extrapolation chamber, due to the chamber geometry and the radiation interactions inside the collecting volume, will be presented in this work. (author)
Dioxin equivalency: Challenge to dose extrapolation
Energy Technology Data Exchange (ETDEWEB)
Brown, J.F. Jr.; Silkworth, J.B. [GE Corporate Research and Development, Schenectady, NY (United States)
1995-12-31
Extensive research has shown that all biological effects of dioxin-like agents are mediated via a single biochemical target, the Ah receptor (AhR), and that the relative biologic potencies of such agents in any given system, coupled with their exposure levels, may be described in terms of toxic equivalents (TEQ). It has also shown that the TEQ sources include not only chlorinated species such as the dioxins (PCDDs), PCDFs, and coplanar PCBs, but also non-chlorinated substances such as the PAHs of wood smoke, the AhR agonists of cooked meat, and the indolocarbazol (ICZ) derived from cruciferous vegetables. Humans have probably had elevated exposures to these non-chlorinated TEQ sources ever since the discoveries of fire, cooking, and the culinary use of Brassica spp. Recent assays of CYP1A2 induction show that these "natural" or "traditional" AhR agonists are contributing 50-100 times as much to average human TEQ exposures as do the chlorinated xenobiotics. Currently, the safe doses of the xenobiotic TEQ sources are estimated from their NOAELs and large extrapolation factors, derived from arbitrary mathematical models, whereas the NOAELs themselves are regarded as the safe doses for the TEQs of traditional dietary components. Available scientific data can neither support nor refute either approach to assessing the health risk of an individual chemical substance. However, if two substances be toxicologically equivalent, then their TEQ-adjusted health risks must also be equivalent, and the same dose extrapolation procedure should be used for both.
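The TEQ bookkeeping referred to above is a weighted sum: each congener's concentration is multiplied by its toxic equivalency factor (TEF) relative to 2,3,7,8-TCDD and the products are added. The sample concentrations below are invented; the TEFs shown are the WHO 2005 values for these three congeners:

```python
# TEQ = sum over congeners of concentration * TEF. Concentrations are
# hypothetical; TEFs are the WHO 2005 values for the listed congeners.

tefs = {"2,3,7,8-TCDD": 1.0,
        "1,2,3,7,8-PeCDD": 1.0,
        "2,3,4,7,8-PeCDF": 0.3}

concentrations_pg_g = {"2,3,7,8-TCDD": 0.5,       # hypothetical sample
                       "1,2,3,7,8-PeCDD": 1.2,
                       "2,3,4,7,8-PeCDF": 4.0}

teq = sum(concentrations_pg_g[c] * tefs[c] for c in tefs)
print(f"TEQ = {teq:.2f} pg TEQ/g")
```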
Rational extrapolation for the PageRank vector
Brezinski, C.; Redivo-Zaglia, M.
2008-09-01
An important problem in web search is to determine the importance of each page. From the mathematical point of view, this problem consists in finding the nonnegative left eigenvector of a matrix corresponding to its dominant eigenvalue 1. Since this matrix is neither stochastic nor irreducible, the power method has convergence problems. So, the matrix is replaced by a convex combination, depending on a parameter c, with a rank one matrix. Its left principal eigenvector now depends on c, and it is the PageRank vector we are looking for. However, when c is close to 1, the problem is ill-conditioned, and the power method converges slowly. So, the idea developed in this paper consists in computing the PageRank vector for several values of c, and then extrapolating them, by a conveniently chosen rational function, to a point near 1. The choice of this extrapolating function is based on the mathematical expression of the PageRank vector as a function of c. Numerical experiments conclude the paper.
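The scheme above can be sketched numerically. The paper's carefully chosen rational extrapolating function is replaced here by a crude per-component polynomial fit, and the tiny link matrix is purely illustrative, so this is a sketch of the idea rather than the authors' method:

```python
import numpy as np

def pagerank(A, c, tol=1e-12, max_iter=10000):
    """Power method for the PageRank vector at damping parameter c."""
    n = A.shape[0]
    S = A / A.sum(axis=0)          # column-stochastic (assumes no dangling nodes)
    v = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        v_new = c * S @ v + (1.0 - c) / n
        if np.abs(v_new - v).sum() < tol:
            return v_new
        v = v_new
    return v

# Illustrative 4-page link matrix: A[i, j] = 1 if page j links to page i.
A = np.array([[0, 0, 1, 0],
              [1, 0, 1, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

# Compute PageRank at several well-conditioned values of c ...
cs = [0.5, 0.6, 0.7, 0.8]
vs = np.array([pagerank(A, c) for c in cs])

# ... then extrapolate each component to a point near 1, where the
# power method itself converges slowly.
target = 0.99
v_extrap = np.array([np.polyval(np.polyfit(cs, vs[:, i], 2), target)
                     for i in range(A.shape[0])])
v_extrap /= v_extrap.sum()   # renormalize to a probability vector
```

On this small example the extrapolated vector lands close to the directly computed PageRank at c = 0.99, which is the point of the approach: the expensive, ill-conditioned computation near 1 is replaced by cheap, well-conditioned ones.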
Smooth extrapolation of unknown anatomy via statistical shape models
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
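The Thin Plate Spline idea, training a smooth warp on displacements between the shape-model estimate and the known patient surface and then applying it to the estimated region, can be sketched with SciPy's RBF interpolator. The point sets below are random stand-ins, not anatomical data, and the "smooth bias" of the model estimate is an invented assumption:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 200 "known" surface vertices and 100
# vertices of the shape-model estimate lying in the missing region.
known_pts = rng.uniform(-1, 1, size=(200, 3))
# Pretend the shape-model estimate is the true surface plus a smooth bias.
ssm_at_known = known_pts + 0.05 * np.sin(known_pts)
estimate_pts = rng.uniform(-1, 1, size=(100, 3))

# Displacements from the model estimate back to the true (known) surface.
displacements = known_pts - ssm_at_known

# Train a thin-plate spline on those displacements ...
tps = RBFInterpolator(ssm_at_known, displacements, kernel='thin_plate_spline')

# ... and apply it to warp the estimated surface so that it meets the
# known surface smoothly.  Because the spline interpolates exactly,
# known vertices are reproduced (unlike feathering, which corrupts them).
warped_estimate = estimate_pts + tps(estimate_pts)
```

The exact-interpolation property is precisely why the paper's TPS variant outperforms feathering: the transition is smooth while the true surface stays untouched.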
Scintillation counting: an extrapolation into the future
International Nuclear Information System (INIS)
Progress in scintillation counting is intimately related to advances in a variety of other disciplines such as photochemistry, photophysics, and instrumentation. And while there is steady progress in the understanding of luminescent phenomena, there is a virtual explosion in the application of semiconductor technology to detectors, counting systems, and data processing. The exponential growth of this technology has had, and will continue to have, a profound effect on the art of scintillation spectroscopy. This paper will review key events in technology that have had an impact on the development of scintillation science (solid and liquid) and will attempt to extrapolate future directions based on existing and projected capability in associated fields. Along the way there have been occasional pitfalls and several false starts; these too will be discussed as a reminder that if you want the future to be different from the past, study the past.
Calculating excitation energies by extrapolation along adiabatic connections
Rebolini, Elisa; Teale, Andrew M; Helgaker, Trygve; Savin, Andreas
2015-01-01
In this paper, an alternative method to range-separated linear-response time-dependent density-functional theory and perturbation theory is proposed to improve the estimation of the energies of a physical system from the energies of a partially interacting system. Starting from the analysis of the Taylor expansion of the energies of the partially interacting system around the physical system, we use an extrapolation scheme to improve the estimation of the energies of the physical system at an intermediate point of the range-separated or linear adiabatic connection where either the electron-electron interaction is scaled or only the long-range part of the Coulomb interaction is included. The extrapolation scheme is first applied to the range-separated energies of the helium and beryllium atoms and of the hydrogen molecule at its equilibrium and stretched geometries. It improves significantly the convergence rate of the energies toward their exact limit with respect to the range-separation parameter. The range...
Extrapolating W-associated jet-production ratios at the LHC
Bern, Z.; Dixon, L. J.; Febres Cordero, F.; Höche, S.; Ita, H.; Kosower, D. A.; Maître, D.
2015-07-01
Electroweak vector-boson production, accompanied by multiple jets, is an important background to searches for physics beyond the standard model. A precise and quantitative understanding of this process is helpful in constraining deviations from known physics. We study four key ratios in W+n-jet production at the LHC. We compute the ratio of cross sections for W+n- to W+(n-1)-jet production as a function of the minimum jet transverse momentum. We also study the ratio differentially, as a function of the W-boson transverse momentum; as a function of the scalar sum of the jet transverse energy, H_T^jets; and as a function of certain jet transverse momenta. We show how to use such ratios to extrapolate differential cross sections to W+6-jet production at next-to-leading order, and we cross-check the method against a direct calculation at leading order. We predict the differential distribution in H_T^jets for W+6 jets at next-to-leading order using such an extrapolation. We use the BlackHat software library together with SHERPA to perform the computations.
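The core of the ratio-extrapolation idea is simple arithmetic: if the jet-production ratios R_n are approximately constant in n (the familiar "staircase" scaling), the last measured ratio predicts the next multiplicity. The cross sections below are made up for the sketch, not taken from the paper:

```python
# Illustrative-only cross sections (pb) for W + n-jet production;
# the numbers are invented, not the paper's results.
sigma = {1: 250.0, 2: 58.0, 3: 13.5, 4: 3.1, 5: 0.72}

# Jet-production ratios R_n = sigma(W + n jets) / sigma(W + (n-1) jets).
ratios = {n: sigma[n] / sigma[n - 1] for n in range(2, 6)}

# Under approximate staircase scaling, the last measured ratio
# extrapolates the next jet multiplicity:
R = ratios[5]
sigma6_extrapolated = R * sigma[5]
print(f"R ~ {R:.3f}, extrapolated sigma(W+6 jets) ~ {sigma6_extrapolated:.3f} pb")
```

The paper refines this by extrapolating the ratios differentially (in H_T^jets, W transverse momentum, etc.) rather than for total cross sections, but the underlying recursion is the same.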
Metallurgical instabilities and creep rupture extrapolation. Alloy 800 (Fe-Ni-Cr-Ti-Al)
International Nuclear Information System (INIS)
Extrapolation of creep rupture data by the Larson-Miller, Generalized Interacting Variables, Minimum Commitment and Damage Concept methods has been examined for alloy 800 containing 0.56% (Ti + Al). It has been shown that any one of these methods could lead to erroneous extrapolations if the metallurgical instabilities resulting from γ' (Ni3(Ti,Al)) precipitation hardening are overlooked. Furthermore, it has been shown that for the particular heat studied the substitution of higher test temperature for shortened time of test is limited to temperatures below about 600 deg C for the γ'-strengthened Alloy 800. As a result of this limitation it is concluded that the creep rupture data used for extrapolation to periods on the order of 40 years should contain data points with rupture times greater than 3 × 10^4 h.
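The Larson-Miller method, the first of the parametric methods named above, collapses temperature and rupture time into a single parameter P = T(C + log10 t_r). A minimal sketch, with the common constant C = 20 and temperatures and rupture times invented for illustration, shows the time-for-temperature substitution whose limits the abstract discusses:

```python
import math

C = 20.0  # a commonly used Larson-Miller constant for steels (an assumption here)

def lmp(T_kelvin, t_rupture_hours):
    """Larson-Miller parameter P = T (C + log10 t_r)."""
    return T_kelvin * (C + math.log10(t_rupture_hours))

def rupture_time(T_kelvin, P):
    """Invert the parameter: t_r = 10**(P/T - C)."""
    return 10 ** (P / T_kelvin - C)

# An illustrative short-time test at 650 C (923 K) rupturing at 3000 h
# maps to a parameter value ...
P = lmp(923.0, 3000.0)

# ... which, at iso-stress, predicts the long-term rupture life at a
# 550 C (823 K) service temperature.  This is exactly the substitution
# of higher temperature for shorter test time that the abstract warns
# breaks down when gamma-prime precipitation alters the microstructure
# at the accelerated-test temperature.
t_service = rupture_time(823.0, P)
```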
Nuclear Lattice Simulations using Symmetry-Sign Extrapolation
Lähde, Timo A; Lee, Dean; Meißner, Ulf-G; Epelbaum, Evgeny; Krebs, Hermann; Rupak, Gautam
2015-01-01
Projection Monte Carlo calculations of lattice Chiral Effective Field Theory suffer from sign oscillations to a varying degree dependent on the number of protons and neutrons. Hence, such studies have hitherto been concentrated on nuclei with equal numbers of protons and neutrons, and especially on the alpha nuclei where the sign oscillations are smallest. We now introduce the technique of "symmetry-sign extrapolation" which allows us to use the approximate Wigner SU(4) symmetry of the nuclear interaction to control the sign oscillations without introducing unknown systematic errors. We benchmark this method by calculating the ground-state energies of the $^{12}$C, $^6$He and $^6$Be nuclei, and discuss its potential for studies of neutron-rich halo nuclei and asymmetric nuclear matter.
Hard hadronic collisions - extrapolation of standard effects
International Nuclear Information System (INIS)
We study hard hadronic collisions for the proton-proton (pp) and the proton-antiproton (p pbar) option in the CERN LEP tunnel. Based on our current knowledge of hard collisions at the present CERN p pbar Collider, and with the help of quantum chromodynamics (QCD), we extrapolate to the next generation of hadron colliders with a centre-of-mass energy E_cm = 10-20 TeV. We estimate various signatures, trigger rates, event topologies, and associated distributions for a variety of old and new physical processes, involving prompt photons, leptons, jets, W± and Z bosons in the final state. We also calculate the maximum fermion and boson masses accessible at the LEP Hadron Collider. The standard QCD and electroweak processes studied here, being the main body of standard hard collisions, quantify the challenge of extracting new physics with hadron colliders. We hope that our estimates will provide a useful profile of the final states, and that our experimental physics colleagues will find this of use in the design of their detectors. (orig.)
Ultraviolet extrapolations in finite oscillator bases
König, S; Furnstahl, R J; More, S N; Papenbrock, T
2014-01-01
The use of finite harmonic oscillator spaces in many-body calculations introduces both infrared (IR) and ultraviolet (UV) errors. The IR effects are well approximated by imposing a hard-wall boundary condition at a properly identified radius L_eff. We show that duality of the oscillator implies that the UV effects are equally well described by imposing a sharp momentum cutoff at a momentum Lambda_eff complementary to L_eff. By considering two-body systems with separable potentials, we show that the UV energy corrections depend on details of the potential, in contrast to the IR energy corrections, which depend only on the S-matrix. An adaptation of the separable treatment to more general interactions is developed and applied to model potentials as well as to the deuteron with realistic potentials. The previous success with a simple phenomenological form for the UV error is also explained. Possibilities for controlled extrapolations for A > 2 based on scaling arguments are discussed.
Similarities and differences in forward and reverse motion extrapolation.
Smith, Kevin; Davis, Joshua; Bergen, Benjamin; Vul, Edward
2015-09-01
We often must not only extrapolate the future trajectory of objects (where will a thrown rock go?) but also extrapolate backwards in time (where did the rock flying by your head come from?). We matched forward and reverse extrapolation in two experiments to investigate similarities and differences between extrapolation directions. Forward and reverse extrapolation share biases and response time patterns across various trajectories, but reverse extrapolation is noisier. This suggests both forms of extrapolation share cognitive processes, and opens up further avenues of investigation into why reverse extrapolation is noisier. In Experiment 1, participants observed a ball moving either towards (forward) or away from (reverse) a semi-circular occluder, then a mark appeared on the outside of the semi-circle and participants indicated whether the ball would travel (forward) or came from (reverse) above or below this mark. 85% accuracy was maintained by dynamically adjusting the mark for each condition. Participants were slightly slower to respond when the occluder was larger (F(2,32)=4.8, p=0.015), but speed was unaffected by extrapolation type (F(1,16)=0.4, p=0.56). Reverse extrapolation was harder (greater offset thresholds: F(1,16)=5.4, p=0.034), though difficulty increased at the same rate over distance (no interaction: F(2,32)=0.04, p=0.96). These results suggest shared processing underlies both extrapolation directions, but do not differentiate whether reverse extrapolation is more biased or simply more variable. In Experiment 2, participants observed a ball in motion and indicated where on a line it would next cross (forward) or last came from (reverse). Each forward trial had a matched reverse trial with mirrored motion, so the line crossing would be identical. Systematic biases were nearly identical across extrapolation directions (r=0.96) but people were on average 30% more variable on reverse trials. 
This suggests that reverse extrapolation and forward extrapolation share biases, and that greater variability caused the increased difficulty in Experiment 1. Meeting abstract presented at VSS 2015. PMID:26325976
International Nuclear Information System (INIS)
A scheme based on treating uniform singlet-pair and triplet-pair interactions is suggested to extrapolate the electron correlation energy of ammonia, calculated at two basis-set levels of ab initio theory, to the infinite one-electron basis-set limit. The dual-level method is tested on the extrapolation of the full correlation energy in coupled-cluster singles and doubles calculations, in some cases including also a noniterative perturbative correction for connected triples, for the C3v and D3h structures of ammonia, with correlation-consistent basis sets of the type cc-pVXZ (X = D, T, Q, 5, 6) and aug-cc-pVXZ (X = D, T, Q, 5). For testing and comparison purposes, the energies reported by Klopper [J. Comput. Chem. 22, 1306 (2001)] have been taken. From a corresponding extrapolation of CCSD(T)/AVXZ energies for X = 4, 5, we obtain total inversion barriers of 1833.87 cm^-1/1832.33 cm^-1 for the two/three-parameter extrapolation rules, which are in good agreement with other theoretical extrapolation and empirical values in the literature. (atomic and molecular physics)
Latychevskaia, Tatiana
2015-01-01
In coherent diffractive imaging (CDI) the resolution with which the reconstructed object can be obtained is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by post-extrapolation of coherent diffraction images, such as diffraction patterns or holograms. We prove that a diffraction pattern can be unambiguously extrapolated from just a fraction of the entire pattern, and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. While in principle other methods could achieve extrapolation, we devote our discussion to phase retrieval methods and demonstrate their limits. We present two numerical studies, namely the extrapolation of diffraction patterns of non-binary objects and of phase objects, together with a discussion of the optimal extrapolation procedure.
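The extrapolation-by-phase-retrieval idea can be sketched in 1-D with plain error-reduction iterations: enforce the measured magnitudes where they are known, let the unmeasured high frequencies float, and apply support and positivity constraints in real space. The signal, support, and "measured" region below are all synthetic, and plain error reduction is the simplest variant (HIO-type updates are more robust in practice):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
# Compact support: the object is nonzero only on a small window, so the
# diffraction pattern is heavily oversampled.
support = np.zeros(n, dtype=bool)
support[100:128] = True
obj = np.zeros(n)
obj[support] = rng.uniform(0.5, 1.0, support.sum())

F_true = np.fft.fft(obj)
measured_mag = np.abs(F_true)
# Only the low-frequency region of the pattern is "measured"; the rest
# is what we try to extrapolate.
known = np.abs(np.fft.fftfreq(n)) < 0.15

# Error-reduction iterations.
g = rng.uniform(size=n)
for _ in range(500):
    G = np.fft.fft(g)
    phase = np.exp(1j * np.angle(G))
    G = np.where(known, measured_mag * phase, G)   # magnitude constraint
    g = np.fft.ifft(G).real
    g[~support] = 0.0            # support constraint
    g = np.clip(g, 0.0, None)    # non-negativity

# Mean magnitude error in the extrapolated (never-measured) region.
extrap_err = np.abs(np.abs(np.fft.fft(g)) - measured_mag)[~known].mean()
```

The degree to which `extrap_err` shrinks reflects the paper's claim: with sufficient oversampling, the unmeasured part of the pattern is determined by the measured fraction plus the real-space constraints.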
In this study, six extrapolation methods have been compared for their ability to estimate daily crop evapotranspiration (ETd) from instantaneous latent heat flux estimates derived from digital airborne multispectral remote sensing imagery. Data used in this study were collected during an experiment...
Herzberg, Frederik S
2005-01-01
A number of Bermudan option pricing methods that are applicable to options on multiple assets are studied in this thesis, one of the dominating questions being the natural scaling needed to extrapolate from Bermudan to American (both approximate and "exact") option prices.
Revealing individual differences in strategy selection through visual motion extrapolation.
Fulvio, Jacqueline M; Maloney, Laurence T; Schrater, Paul R
2015-12-01
Humans are constantly challenged to make use of internal models to fill in missing sensory information. We measured human performance in a simple motion extrapolation task where no feedback was provided in order to elucidate the models of object motion incorporated into observers' extrapolation strategies. There was no "right" model for extrapolation in this task. Observers consistently adopted one of two models, linear or quadratic, but different observers chose different models. We further demonstrate that differences in motion sensitivity impact the choice of internal models for many observers. These results demonstrate that internal models and individual differences in those models can be elicited by unconstrained, prediction-based psychophysical tasks. PMID:25654543
Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration
Alkhalifah, Tariq Ali
2014-10-08
Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.
Border extrapolation using fractal attributes in remote sensing images
Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.
2014-01-01
In management, monitoring and rational use of natural resources the knowledge of precise and updated information is essential. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring a wide coverage without exerting negative environmental influence over the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high resolution imagery (i.e., Ikonos) is very expensive for studies involving large geographic areas or requiring long term monitoring, while the use of less expensive or freely available imagery poses a limit in the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used for establishing high quality measurements with low resolution imagery. The method is based on the original theory by Richardson, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length with respect to scale is computed, which then allows for a precise extrapolation of the expected length at scales much finer than those originally available.The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
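The Richardson regression at the heart of this method is a log-log fit of measured border length against measurement scale, which both recovers the fractal dimension and extrapolates the length to finer scales. The sketch below uses noise-free synthetic lengths generated from Richardson's law L(s) = k·s^(1−D), with invented values of k and D:

```python
import numpy as np

# Synthetic "measured lengths" of a border at coarse pixel scales s,
# generated (noise-free, for the sketch) from Richardson's law
#   L(s) = k * s**(1 - D).
D_true, k = 1.25, 500.0
scales = np.array([32.0, 16.0, 8.0, 4.0])
lengths = k * scales ** (1 - D_true)

# Linear regression in log-log space recovers the fractal dimension ...
slope, intercept = np.polyfit(np.log(scales), np.log(lengths), 1)
D_est = 1 - slope

# ... and lets us extrapolate the expected length at a much finer scale
# than the imagery actually resolves.
fine_scale = 0.5
L_fine = np.exp(intercept) * fine_scale ** slope
```

With real imagery the measured lengths carry noise and the regression gives a least-squares estimate of D rather than an exact recovery, but the extrapolation step is identical.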
Extrapolation of the K → ππ decay amplitude
Suzuki, Mahiko
2001-01-01
We examine the uncertainties involved in the off-mass-shell extrapolation of the $K \rightarrow \pi\pi$ decay amplitude with emphasis on those aspects that have so far been overlooked or ignored. Among them are initial-state interactions, choice of the extrapolated kaon field, and the relation between the asymptotic behavior and the zeros of the decay amplitude. In the inelastic region the phase of the decay amplitude cannot be determined by strong interaction alone and even ...
Directory of Open Access Journals (Sweden)
Joana Aurora Braun Chagas
2010-02-01
Full Text Available The aim of this study was to evaluate a chemical-restraint protocol with S(+) ketamine and midazolam in red howler monkeys, comparing doses calculated by the conventional method (per-kg weight dose) and by allometric extrapolation. Twelve healthy red howler monkeys (Alouatta guariba clamitans), average weight 4.84±0.97 kg, male and female, were used for this study. After a 12-hour period of food restriction and 6 hours of water restriction, the animals were physically restrained and the following parameters were measured: heart rate (HR), respiratory rate (RR), capillary refill time (CRT), rectal temperature (RT), non-invasive systolic arterial pressure (NISAP) and arterial blood gases analysis. The animals were distributed into two groups: CG (Conventional Group, n=6), in which the animals received S(+) ketamine (5 mg kg-1) and midazolam (0.5 mg kg-1) by intramuscular (IM) injection, with doses calculated by the conventional method; and AG (Allometry Group, n=6), in which the animals also received S(+) ketamine and midazolam IM, but with doses calculated by allometric extrapolation. Parameters were evaluated at the following moments: M5, M10, M20 and M30 (5, 10, 20 and 30 minutes after IM injection, respectively). Muscle relaxation, pedal and caudal reflexes, interdigital pinch, recumbency time, sedation quality and duration, and recovery time and quality were also evaluated. The AG had a faster time to recumbency, a longer period and higher quality of sedation, and a significant reduction in HR and NISAP from M5 to M30 when compared to the CG. It was concluded that allometric extrapolation produced better muscle relaxation and sedation without significant cardiorespiratory depression.
Scientific Electronic Library Online (English)
Joana Aurora Braun, Chagas; Nilson, Oleskovicz; Aury Nunes de, Moraes; Fabíola Niederauer, Flôres; André Luís, Corrêa; Júlio César, Souza Júnior; André Vasconcelos, Soares; Átila, Costa.
2010-02-01
Amir, Sahar Z.
2013-05-01
We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized Canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than the classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency.
In particular, this makes it applicable to large-scale optimization of L-J model parameters for hydrocarbons and other important reservoir species. The efficiency of these thermodynamics-based techniques is expected to make Markov chain simulation an attractive alternative to compositional multiphase flow simulation.
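The reweighting step that underlies such schemes is Boltzmann reweighting: samples generated at one inverse temperature β0 are reweighted by exp(−(β1−β0)E) to estimate ensemble averages at a neighboring β1 without a fresh simulation. A minimal 1-D sketch, with a harmonic toy potential standing in for the expensive L-J database and an invented Metropolis sampler:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1 = 1.0, 1.2   # illustrative neighboring inverse temperatures

def metropolis_energies(beta, nsteps=100_000):
    """Metropolis chain for a 1-D harmonic 'fluid' E(x) = x**2 / 2."""
    x, E = 0.0, 0.0
    energies = np.empty(nsteps)
    for i in range(nsteps):
        x_new = x + rng.normal(scale=1.0)
        E_new = 0.5 * x_new ** 2
        if rng.random() < np.exp(-beta * (E_new - E)):
            x, E = x_new, E_new
        energies[i] = E
    return energies

# Expensive step: a long chain at beta0 (the "database").
E0 = metropolis_energies(beta0)

# Cheap step: reweight the beta0 samples to estimate <E> at the
# neighboring beta1 instead of running a new simulation there.
w = np.exp(-(beta1 - beta0) * E0)
E_reweighted = np.sum(w * E0) / np.sum(w)

# For this toy, equipartition gives the exact answer <E> = 1/(2*beta1).
```

Reweighting degrades as β1 moves away from β0 (the weights become dominated by a few samples), which is why the paper interpolates between several neighboring database points rather than extrapolating far from one.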
Lutz, Jesse J.; Piecuch, Piotr
2008-04-01
The recently proposed potential energy surface (PES) extrapolation scheme, which predicts smooth molecular PESs corresponding to larger basis sets from the relatively inexpensive calculations using smaller basis sets by scaling electron correlation energies [A. J. C. Varandas and P. Piecuch, Chem. Phys. Lett. 430, 448 (2006)], is applied to the PESs associated with the conrotatory and disrotatory isomerization pathways of bicyclo[1.1.0]butane to buta-1,3-diene. The relevant electronic structure calculations are performed using the completely renormalized coupled-cluster method with singly and doubly excited clusters and a noniterative treatment of connected triply excited clusters, termed CR-CC(2,3), which is known to provide a highly accurate description of chemical reaction profiles involving biradical transition states and intermediates. A comparison with the explicit CR-CC(2,3) calculations using the large correlation-consistent basis set of the cc-pVQZ quality shows that the cc-pVQZ PESs obtained by the extrapolation from the smaller basis set calculations employing the cc-pVDZ and cc-pVTZ basis sets are practically identical, to within fractions of a millihartree, to the true cc-pVQZ PESs. It is also demonstrated that one can use a similar extrapolation procedure to accurately predict the complete basis set (CBS) limits of the calculated PESs from the results of smaller basis set calculations at a fraction of the effort required by the conventional pointwise CBS extrapolations.
Delayed inhibition of an anticipatory action during motion extrapolation
Directory of Open Access Journals (Sweden)
Riek Stephan
2010-04-01
Full Text Available Abstract Background Continuous visual information is important for movement initiation in a variety of motor tasks. However, even in the absence of visual information people are able to initiate their responses by using motion extrapolation processes. Initiation of actions based on these cognitive processes, however, can demand more attentional resources than are required in situations in which visual information is uninterrupted. In the experiment reported we sought to determine whether the absence of visual information would affect the latency to inhibit an anticipatory action. Methods The participants performed an anticipatory timing task where they were instructed to move in synchrony with the arrival of a moving object at a determined contact point. On 50% of the trials, a stop sign appeared on the screen and it served as a signal for the participants to halt their movements. They performed the anticipatory task under two different viewing conditions: Full-View (uninterrupted) and Occluded-View (occlusion of the last 500 ms prior to the arrival at the contact point). Results The results indicated that the absence of visual information prolonged the latency to suppress the anticipatory movement. Conclusion We suggest that the absence of visual information requires additional cortical processing that creates competing demand for neural resources. Reduced neural resources potentially causes increased reaction time to the inhibitory input or increased time estimation variability, which in combination would account for prolonged latency.
Scholte, Rick; Lopez, Ines; Bert Roozen, N; Nijmeijer, Henk
2009-06-01
Although near-field acoustic holography (NAH) is recognized as a powerful and extremely fast acoustic imaging method based on the inverse solution of the wave-equation, its practical implementation has suffered from problems with the use of the discrete Fourier transformation (DFT) in combination with small aperture sizes and windowing. In this paper, a method is presented that extrapolates the finite spatial aperture before the DFT is applied, which is based on the impulse response information of the known aperture data. The developed method called linear predictive border-padding is an aperture extrapolation technique that greatly reduces leakage and spatial truncation errors in planar NAH (PNAH). Numerical simulations and actual measurements on a hard-disk drive and a cooling fan illustrate the low error, high speed, and utilization of border-padding. Border-padding is an aperture extrapolation technique that makes PNAH a practical and accurate inverse near-field acoustic imaging method. PMID:19507967
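The border-padding idea, extending the aperture with a linear predictor fitted to the known data before the DFT is applied, can be illustrated in 1D. The signal, predictor order and padding length below are illustrative choices, not the paper's parameters; the actual method operates on 2D hologram apertures.

```python
# 1D sketch of linear predictive extrapolation past an aperture border:
# fit predictor coefficients to the known samples by least squares, then
# run the predictor forward to extend the data (illustrative values only).
import numpy as np

order, n_pad = 4, 8
signal = np.sin(0.3 * np.arange(40))          # known aperture line

# Fit predictor: s[n] ~ sum_k a[k] * s[n-1-k], by least squares
rows = np.array([signal[n - order:n][::-1] for n in range(order, len(signal))])
a, *_ = np.linalg.lstsq(rows, signal[order:], rcond=None)

extended = list(signal)
for _ in range(n_pad):                        # predict past the border
    extended.append(np.dot(a, extended[-1:-order - 1:-1]))
extended = np.array(extended)

# For this smooth test signal the prediction error stays tiny
print(np.max(np.abs(extended[40:] - np.sin(0.3 * np.arange(40, 48)))))
```

Because the extension continues the trend of the data instead of dropping to zero at the border, the subsequent DFT sees far less truncation, which is the leakage-reduction effect the abstract describes.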
Directory of Open Access Journals (Sweden)
Peter R. Spackman
2015-05-01
Full Text Available Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
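The A + B/L^α two-point formula with a fixed global exponent can be sketched as a short calculation. The correlation energies below are hypothetical illustrative numbers, not values from the W4-11 database; α = 3 is the common global choice for correlation-energy extrapolations.

```python
# Two-point basis-set extrapolation assuming E(L) = E_CBS + B / L**alpha.
# Solving the two equations for the DZ (L=2) and TZ (L=3) energies gives
# the complete-basis-set estimate E_CBS.
def cbs_two_point(e_small, l_small, e_large, l_large, alpha=3.0):
    """Eliminate B from E(L) = E_CBS + B/L**alpha and return E_CBS."""
    w_small = l_small ** alpha
    w_large = l_large ** alpha
    return (w_large * e_large - w_small * e_small) / (w_large - w_small)

# Hypothetical CCSD correlation energies (hartree) in cc-pVDZ and cc-pVTZ
e_dz, e_tz = -0.2600, -0.2900
e_cbs = cbs_two_point(e_dz, 2, e_tz, 3, alpha=3.0)
print(round(e_cbs, 4))
```

A system-dependent scheme, as described above, would replace the fixed α = 3 by an exponent fitted per system from cheaper MP2 energies before applying the same formula.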
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol-1.
Calibration of 90Sr+90Y clinical applicators using a mini extrapolation chamber as reference system
International Nuclear Information System (INIS)
90Sr + 90Y clinical applicators are beta radiation sources used in several Brazilian radiotherapy clinics, although they are no longer manufactured. These sources are employed in brachytherapy procedures for the treatment of superficial lesions of the skin and eyes. International recommendations and previous works determine that dermatological and ophthalmic applicators shall be calibrated periodically, and one of the methods for their calibration consists of the use of an extrapolation chamber. In this work, a method of calibration of 90Sr + 90Y clinical applicators was applied using a plane-window mini-extrapolation chamber, developed at the Calibration Laboratory at IPEN, as a reference system. The results obtained were considered satisfactory when compared with the results given in the calibration certificates of the sources. (author)
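The working principle of an extrapolation chamber, measuring the ionization current at several electrode separations and extrapolating the slope to zero gap, can be sketched as a linear fit. The gaps and currents below are invented illustrative values, not calibration data.

```python
# Sketch of the extrapolation-chamber principle: the ionization current I
# is measured at several electrode separations d, and the slope dI/dd in
# the limit d -> 0 is obtained from a linear least-squares fit; the dose
# rate is proportional to this limiting slope.  Numbers are illustrative.
import numpy as np

d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])       # electrode gap (mm)
i = np.array([10.1, 20.0, 30.2, 39.9, 50.1])  # ionization current (pA)

slope, intercept = np.polyfit(d, i, 1)        # I(d) ~ slope*d + intercept
print(round(slope, 2))                        # pA/mm, the extrapolated slope
```

In a real calibration this slope, corrected for chamber and air-density factors, is converted into the absorbed dose rate in air quoted on the source certificate.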
Greinert, Jens; Mcginnis, Daniel Frank; Naudts, Lieven; Linke, Peter; De Batist, Marc
2010-01-01
Bubble transport of methane from shallow seep sites in the Black Sea west of the Crimea Peninsula between 70 and 112 m water depth has been studied by extrapolation of results gained through different hydroacoustic methods and direct sampling. Ship-based hydroacoustic echo sounders can locate bubble releasing seep sites very precisely and facilitate their correlation with geological or other features at the seafloor. Here, the backscatter strength of a multibeam system was integrated with sin...
Chiral Extrapolation of Lattice Data for Heavy Meson Hyperfine Splittings
Energy Technology Data Exchange (ETDEWEB)
X.-H. Guo; P.C. Tandy; A.W. Thomas
2006-03-01
We investigate the chiral extrapolation of the lattice data for the light-heavy meson hyperfine splittings D*-D and B*-B to the physical region for the light quark mass. The chiral loop corrections providing non-analytic behavior in m_π are consistent with chiral perturbation theory for heavy mesons. Since chiral loop corrections tend to decrease the already too low splittings obtained from linear extrapolation, we investigate two models to guide the form of the analytic background behavior: the constituent quark potential model, and the covariant model of QCD based on the ladder-rainbow truncation of the Dyson-Schwinger equations. The extrapolated hyperfine splittings remain clearly below the experimental values even allowing for the model dependence in the description of the analytic background.
Properties of infrared extrapolations in a harmonic oscillator basis
Coon, Sidney A
2014-01-01
We continue our studies of infrared (ir) and ultraviolet (uv) regulators of no-core shell model calculations. We extend our results that an extrapolation in the ir cutoff with the uv cutoff above the intrinsic uv scale of the interaction is quite successful, not only for the eigenstates of the Hamiltonian but also for expectation values of operators considered long range. The latter results are obtained with Hamiltonians transformed by the similarity renormalization group (SRG) evolution. On the other hand, a suggested extrapolation in the uv cutoff when the ir cutoff is below the intrinsic ir scale is neither robust nor reliable.
Extrapolation of ASDEX Upgrade H-mode discharges to ITER
International Nuclear Information System (INIS)
In this paper we discuss a procedure to evaluate the fusion performance of ASDEX Upgrade discharges scaled up to ITER. The kinetic profile shape is taken from the measured profiles. Multiplication factors are used to obtain a fixed Greenwald fraction and an ITER normalized thermal pressure as in the corresponding ASDEX Upgrade discharge. The toroidal field and the plasma geometry are taken from the ITER-FEAT design (scenario 2), whereas q95 is taken from the experiment. The confinement time is inferred assuming that the measured H-factor with respect to several existing scaling laws also holds for ITER. While retaining the information contained in the multi-machine databases underlying the different scaling laws, this approach adds profile effects and confinement improvement with respect to the ITER baseline, thus including recent experimental evidence such as the prediction of peaked density profiles in ITER. Under this set of assumptions, of course not unique, we estimate the ITER performance on the basis of a wide database of ASDEX Upgrade H-mode discharges, in terms of fusion power, fusion gain and triple product. According to the three scalings considered, there is a finite probability of reaching ignition, while more than half of the discharges require less auxiliary power than the one foreseen for ITER. For all the scaling laws, high values of the thermal βN up to 2.4 are accessible. A sensitivity study gives an estimate of the accuracy of the extrapolation. The impact of different levels of tungsten concentration on the fusion performance is also studied in this paper. This scaling method is used to verify some common 0D figures of merit of ITER's fusion performance.
International Nuclear Information System (INIS)
Over the last decades, elemental maps have become a powerful tool for the analysis of the spatial distribution of the elements within a specimen. In energy-filtered transmission electron microscopy (EFTEM) one commonly uses two pre-edge and one post-edge image for the calculation of elemental maps. However, this so-called three-window method can introduce serious errors into the extrapolated background for the post-edge window. Since this method uses only two pre-edge windows as data points to calculate a background model that depends on two fit parameters, the quality of the extrapolation can be estimated only statistically assuming that the background model is correct. In this paper, we will discuss a possibility to improve the accuracy and reliability of the background extrapolation by using a third pre-edge window. Since with three data points the extrapolation becomes over-determined, this change permits us to estimate not only the statistical uncertainty of the fit, but also the systematic error by using the experimental data. Furthermore we will discuss in this paper the acquisition parameters that should be used for the energy windows to reach an optimal signal-to-noise ratio (SNR) in the elemental maps. -- Highlights: • Comparison of three pre-edge windows to the regular two pre-edge windows. • Investigation of the optimal positioning of the third pre-edge window. • Description of the χ² test for extrapolation quality check.
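The background extrapolation described above can be sketched as a power-law fit I(E) = A·E^(-r) over the pre-edge windows; with three windows the fit is overdetermined, which is what enables a consistency check on the model. The window energies and intensities below are illustrative, not EFTEM data.

```python
# Power-law background fit for the three-window idea: the pre-edge
# background is modelled as I(E) = A * E**(-r) and fitted in log-log
# space; the fit is then extrapolated under a post-edge window.
# Energies (eV) and intensities (counts) below are illustrative only.
import numpy as np

e_pre = np.array([400.0, 430.0, 460.0])    # three pre-edge window energies
i_pre = np.array([1000.0, 805.1, 657.5])   # measured pre-edge intensities

# Linear least squares on log I = log A - r * log E (overdetermined: 3 points)
design = np.column_stack([np.ones(3), np.log(e_pre)])
coeffs, *_ = np.linalg.lstsq(design, np.log(i_pre), rcond=None)
log_a, neg_r = coeffs
r = -neg_r

e_post = 500.0                             # post-edge window energy
background = np.exp(log_a) * e_post ** (-r)
print(round(r, 2), round(background, 1))
```

With only two pre-edge windows this system would be exactly determined and the residual always zero; the third window is what makes the residual a meaningful check on whether the power-law model actually holds.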
Yang, F; Sun, N; Sun, Y X; Shan, Q; Zhao, H Y; Zeng, D P; Zeng, Z L
2013-04-01
In this study, an oral physiologically based pharmacokinetics (PBPK) model was developed for florfenicol in crucian carp (Carassius auratus). Subsequently, oral-to-intramuscular extrapolation was performed and the two models were used to predict florfenicol concentrations in the edible tissues of crucian carp. The oral model gave good predictions in most tissues, except for kidney and liver in which the florfenicol concentrations were underestimated at the later time points. In contrast, using the intramuscular model, the concentrations in the kidney were overestimated at the later time points. Both models had the best predictive ability in the main edible tissue, the muscle. The oral model also accurately predicted the florfenicol concentrations in the muscle after multiple doses. The present study demonstrated the feasibility of predicting florfenicol concentrations in the edible tissues of crucian carp using a route-to-route extrapolation method. PMID:22712485
Extrapolation of ZPR sodium void measurements to the power reactor
International Nuclear Information System (INIS)
Sodium-voiding measurements of ZPPR assemblies 2 and 5 are analyzed with ENDF/B Version IV data. Computations include directional diffusion coefficients to account for streaming effects resulting from the plate structure of the critical assembly. Bias factors for extrapolating critical assembly data to the CRBR design are derived from the results of this analysis
Do common systems control eye movements and motion extrapolation?
Makin, Alexis D J; Poliakoff, Ellen
2011-07-01
People are able to judge the current position of occluded moving objects. This operation is known as motion extrapolation. It has previously been suggested that motion extrapolation is independent of the oculomotor system. Here we revisited this question by measuring eye position while participants completed two types of motion extrapolation task. In one task, a moving visual target travelled rightwards, disappeared, then reappeared further along its trajectory. Participants discriminated correct reappearance times from incorrect (too early or too late) with a two-alternative forced-choice button press. In the second task, the target travelled rightwards behind a visible, rectangular occluder, and participants pressed a button at the time when they judged it should reappear. In both tasks, performance was significantly different under fixation as compared to free eye movement conditions. When eye movements were permitted, eye movements during occlusion were related to participants' judgements. Finally, even when participants were required to fixate, small changes in eye position around fixation were related to participants' judgements. These findings are relevant to understanding the mechanism underlying motion extrapolation. PMID:21480079
Localization and extrapolation in Lorentz-Orlicz spaces.
Czech Academy of Sciences Publication Activity Database
Cruz-Uribe, D.; Krbec, Miroslav
Berlin : de Gruyter, 2002 - (Kufner, A.; Persson, L.; Sparr, M.), s. 389-401 ISBN 3-11-017117-1. [Function Spaces, Interpolation Theory and Related Topics. Lund (SE), 17.08.2001-22.08.2001] Institutional research plan: CEZ:AV0Z1019905 Keywords : extrapolation * localisation * Lorentz-Orlicz spaces Subject RIV: BA - General Mathematics
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J [Henry Ford Health System, Detroit, MI (United States)
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for the dose of the day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken, were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42mm more from the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (mean ± SD) at the superior and inferior borders were 0.8 ± 0.5mm and 3.0 ± 1.5mm respectively, and increased to 2.7 ± 2.2mm and 5.9 ± 1.9mm at 4.2cm away. The mean error within CBCT borders was 1.16 ± 0.54mm. The overall errors within the 4.2cm expansion were 2.0 ± 1.2mm (sup) and 4.5 ± 1.6mm (inf). Conclusion: The overall error in the inf direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Chaouche, L Yelles; Pillet, V Martínez; Moreno-Insertis, F
2012-01-01
The 3D structure of an active region (AR) filament is studied using nonlinear force-free field (NLFFF) extrapolations based on simultaneous observations at a photospheric and a chromospheric height. To that end, we used the Si I 10827 Å line and the He I 10830 Å triplet obtained with the Tenerife Infrared Polarimeter (TIP) at the VTT (Tenerife). The two extrapolations have been carried out independently from each other and their respective spatial domains overlap in a considerable height range. This opens up new possibilities for diagnostics in addition to the usual ones obtained through a single extrapolation from, typically, a photospheric layer. Among those possibilities, this method allows the determination of an average formation height of the He I 10830 Å signal of ≈2 Mm above the surface of the Sun. It allows, as well, to cross-check the obtained 3D magnetic structures in view of verifying a possible deviation from the force-free condition especially at the photosphere. The extrapolati...
Directory of Open Access Journals (Sweden)
Bressler B
2015-06-01
Full Text Available Brian Bressler,1 Theo Dingermann2 1St Paul’s Hospital, University of British Columbia, Vancouver, BC, Canada; 2Institute of Pharmaceutical Biology, Frankfurt, Germany Abstract: Despite their enormous value for our health care system, biopharmaceuticals have become a serious threat to the system itself due to their high cost. Costs may be warranted if the medicine is new and innovative; however, it is no longer an innovation when its patent protection expires. As patents and exclusivities expire on biological drugs, biosimilar products defined as highly similar to reference biologics are being marketed. The goal of biosimilar development is to establish a high degree of biosimilarity, not to reestablish clinical efficacy and safety. Current sophisticated analytical methods allow the detection of even small changes in quality attributes and can therefore enable sensitive monitoring of the batch-to-batch consistency and variability of the manufacturing process. The European Medicines Agency (EMA), US Food and Drug Administration (FDA), and Health Canada have determined that a reduced number of nonclinical and clinical comparative studies can be sufficient for approval, with clinical data from the most sensitive indication extrapolated to other indications. Extrapolation of data is a scientifically based principle, guided by specific criteria, and if approved by the EMA, FDA, and/or Health Canada is appropriate. Enablement of extrapolation of data is a core principle of biosimilar development, based on principles of comparability and necessary to fully realize cost savings for these drugs. Keywords: biosimilars, Inflectra, infliximab, pharmacoeconomics, Canada, Europe
Steinhausen, Heinz C.; Martín, Rodrigo; den Brok, Dennis; Hullin, Matthias B.; Klein, Reinhard
2015-03-01
Numerous applications in computer graphics and beyond benefit from accurate models for the visual appearance of real-world materials. Data-driven models like photographically acquired bidirectional texture functions (BTFs) suffer from limited sample sizes enforced by the common assumption of far-field illumination. Several materials like leather, structured wallpapers or wood contain structural elements on scales not captured by typical BTF measurements. We propose a method extending recent research by Steinhausen et al. to extrapolate BTFs for large-scale material samples from a measured and compressed BTF for a small fraction of the material sample, guided by a set of constraints. We propose combining color constraints with surface descriptors similar to normal maps as part of the constraints guiding the extrapolation process. This helps narrowing down the search space for suitable ABRDFs per texel to a large extent. To acquire surface descriptors for nearly flat materials, we build upon the idea of photometrically estimating normals. Inspired by recent work by Pan and Skala, we obtain images of the sample in four different rotations with an off-the-shelf flatbed scanner and derive surface curvature information from these. Furthermore, we simplify the extrapolation process by using a pixel-based texture synthesis scheme, reaching computational efficiency similar to texture optimization.
Properties of a commercial extrapolation chamber in β radiation fields
International Nuclear Information System (INIS)
A commercial extrapolation chamber was tested in different β radiation fields and its properties investigated. Its usefulness for β radiation calibration and dosimetry was verified. Experiments were performed in order to obtain the main characteristics such as the calibration factors (and consequently the energy dependence) for all chamber collecting electrodes (between 10 and 40 mm diameter), the transmission factors in tissue and the useful source-detector distance range
Extrapolation in games of coordination and dominance solvable games
Mengel, Friederike; Sciubba, Emanuela
2010-01-01
We study extrapolation between games in a laboratory experiment. Participants in our experiment first play either the dominance solvable guessing game or a Coordination version of the guessing game for five rounds. Afterwards they play a 3x3 normal form game for ten rounds with random matching which is either a game solvable through iterated elimination of dominated strategies (IEDS), a pure Coordination game or a Coordination game with Pareto-ranked equilibria. We find strong evidence that p...
Enhancing Robustness to Extrapolate Synergies Learned from Motion Capture
Aubry, Matthieu; De Loor, Pierre; Gibet, Sylvie
2010-01-01
Reproducing the characteristics of human movements is a crucial issue in studying motion. In the context of this work, an explicit model of synergies which can be parametrized is used for reproducing the main features of reaching motions. This paper evaluates the possibility to extrapolate learned parameters from a captured motion to new targets and shows how the learning process is a key issue to ensure the robustness of parameters. another target, some parameters displayed poor capacity to ext...
Efficient anisotropic wavefield extrapolation using effective isotropic models
Alkhalifah, Tariq Ali
2013-06-10
Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented in the high-frequency asymptotic approximation by the eikonal equation, to develop effective isotropic models, which are used to efficiently and approximately extrapolate anisotropic wavefields using the isotropic, relatively cheaper, operators. These effective velocity models are source dependent and tend to embed the anisotropy in the inhomogeneity. Though this isotropically generated wavefield theoretically shares the same kinematic behavior as that of the first arrival anisotropic wavefield, it also has the ability to include all the arrivals resulting from a complex wavefield propagation. In fact, the effective models reduce to the original isotropic model in the limit of isotropy, and thus, the difference between the effective model and, for example, the vertical velocity depends on the strength of anisotropy. For reverse time migration (RTM), effective models are developed for the source and receiver fields by computing the traveltime for a plane wave source stretching along our source and receiver lines in a delayed shot migration implementation. Applications to the BP TTI model demonstrate the effectiveness of the approach.
Energy Technology Data Exchange (ETDEWEB)
Alvarez R, M. T.; Morales P, J. R. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2001-01-15
Since 1987 the Metrology Department of ININ, in its Secondary Dosimetric Calibration Laboratory, has held a reference set of beta radiation sources and an extrapolation chamber with variable electrode separation. Its objective is to realize the unit of absorbed dose rate in air for beta radiation, using the ionometric method (Bragg-Gray cavity) with the extrapolation chamber available. The services offered are: i) Calibration: beta radiation sources, isotopes 90Sr/90Y; ophthalmic applicators 90Sr/90Y; instruments for the detection of beta radiation for radiological protection (ionization chambers, Geiger-Muller counters, etc.); personal dosemeters. ii) Irradiation of materials with beta radiation for research. (Author)
Faugeras, Blaise; Blum, Jacques; Boulbe, Cedric; Moreau, Philippe; Nardon, Eric
2014-11-01
We present a method based on the use of toroidal harmonics and on a modelization of the poloidal field coils and divertor coils for the 2D interpolation and extrapolation of discrete magnetic measurements in a tokamak. The method is generic and can be used to provide the Cauchy boundary conditions needed as input by a fixed domain equilibrium reconstruction code like Equinox (Blum et al 2012 J. Comput. Phys. 231 960–80). It can also be used to extrapolate the magnetic measurements in order to compute the plasma boundary itself. The proposed method and algorithm are detailed in this paper and results from numerous numerical experiments are presented. The method is foreseen to be used in the real-time plasma control loop on the WEST tokamak (Bucalossi et al 2011 Fusion Eng. Des. 86 684–8).
Energy Technology Data Exchange (ETDEWEB)
Verwichte, E.; Foullon, C.; White, R. S. [Centre for Fusion, Space and Astrophysics, Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Van Doorsselaere, T., E-mail: Erwin.Verwichte@warwick.ac.uk [Centre for Plasma Astrophysics, Department of Mathematics, Katholieke Universiteit Leuven, Celestijnenlaan 200B, B-3001 Leuven (Belgium)
2013-04-10
Two transversely oscillating coronal loops are investigated in detail during a flare on 2011 September 6 using data from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. We compare two independent methods to determine the Alfvén speed inside these loops. Through the period of oscillation and loop length, information about the Alfvén speed inside each loop is deduced seismologically. This is compared with the Alfvén speed profiles deduced from magnetic extrapolation and spectral methods using the AIA bandpasses. We find that for both loops the two methods are consistent. Also, we find that the average Alfvén speed based on loop travel time is not necessarily a good measure to compare with the seismological result, which explains earlier reported discrepancies. Instead, the effect of density and magnetic stratification on the wave mode has to be taken into account. We discuss the implications of combining seismological, extrapolation, and spectral methods in deducing the physical properties of coronal loops.
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potential hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid update forecasts of location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis to generate rapid update forecasts in a time frame up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. On the basis of tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new successive radar image a set of ensemble precipitation fields is collected by using different parameter sets like pattern match size, different time steps, filter methods and an implementation of the history of tracking vectors and plausibility checks. This method considers the uncertainty in rain field displacement and different scales in time and space. By manually validating a set of case studies, the best verification method and skill score is defined and implemented into an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To get information about the quality and reliability of the extrapolation process, additional information on data quality (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation-quality index. Subsequently the probability and quality information of the forecast ensemble is available and flexible blending to a numerical prediction model for each subarea is possible. Simultaneously with automatic processing, the ensemble nowcasting product is visualized in a new innovative way which combines the intensity, probability and quality information for different subareas in one forecast image.
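The echo-tracking step, estimating a motion vector as the shift that maximizes the correlation between successive radar images, can be sketched with synthetic fields. The Gaussian "rain cell" and its displacement below are illustrative, not radar data.

```python
# Toy sketch of tracking radar echoes by correlation: the displacement
# between two successive precipitation fields is taken as the shift that
# maximizes their 2D circular cross-correlation (computed via FFT).
import numpy as np

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
blob = lambda cy, cx: np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 20.0)

frame1 = blob(30, 20)          # rain cell at (row 30, col 20)
frame2 = blob(33, 25)          # same cell, moved 3 px down and 5 px right

# Cross-correlation via FFT; the argmax gives the displacement vector
xcorr = np.fft.ifft2(np.fft.fft2(frame2) * np.conj(np.fft.fft2(frame1))).real
dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
# Wrap shifts larger than half the domain back to negative values
dy = dy - ny if dy > ny // 2 else dy
dx = dx - nx if dx > nx // 2 else dx
print(dy, dx)                  # estimated motion vector
```

An ensemble, as described above, would repeat this estimate with different pattern sizes and time steps to sample the uncertainty in the displacement.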
The use of extrapolation concepts to augment the Frequency Separation Technique
Alexiou, Spiros
2015-03-01
The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, that, however, have significant difficulties when the impact regime is approached. These difficulties are addressed by the Frequency Separation Technique, in which the impact limit is correctly recovered. The present work examines the possibility of combining the Frequency Separation Technique with the addition of extrapolation to improve results and minimize errors resulting from the neglect of fast-slow coupling and thus obtain the exact result with a minimum of extra effort. To this end the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.
Energy Technology Data Exchange (ETDEWEB)
Heil, Tobias, E-mail: tobiasheil@uni-muenster.de [Physikalisches Institut and Interdisziplinaeres Centrum fuer Elektronenmikroskopie und Mikroanalyse (ICEM), Universitaet Muenster, Wilhelm-Klemm-Str. 10, 48149 Muenster (Germany); Gralla, Benedikt, E-mail: lexx.matrix@uni-muenster.de [Physikalisches Institut and Interdisziplinaeres Centrum fuer Elektronenmikroskopie und Mikroanalyse (ICEM), Universitaet Muenster, Wilhelm-Klemm-Str. 10, 48149 Muenster (Germany); Epping, Michael, E-mail: michael.epping@uni-muenster.de [Physikalisches Institut and Interdisziplinaeres Centrum fuer Elektronenmikroskopie und Mikroanalyse (ICEM), Universitaet Muenster, Wilhelm-Klemm-Str. 10, 48149 Muenster (Germany); Kohl, Helmut, E-mail: kohl@uni-muenster.de [Physikalisches Institut and Interdisziplinaeres Centrum fuer Elektronenmikroskopie und Mikroanalyse (ICEM), Universitaet Muenster, Wilhelm-Klemm-Str. 10, 48149 Muenster (Germany)
2012-07-15
Over the last decades, elemental maps have become a powerful tool for the analysis of the spatial distribution of the elements within a specimen. In energy-filtered transmission electron microscopy (EFTEM) one commonly uses two pre-edge and one post-edge image for the calculation of elemental maps. However, this so-called three-window method can introduce serious errors into the extrapolated background for the post-edge window. Since this method uses only two pre-edge windows as data points to calculate a background model that depends on two fit parameters, the quality of the extrapolation can be estimated only statistically, assuming that the background model is correct. In this paper, we discuss a possibility to improve the accuracy and reliability of the background extrapolation by using a third pre-edge window. Since with three data points the extrapolation becomes over-determined, this change permits us to estimate not only the statistical uncertainty of the fit but also the systematic error, using the experimental data. Furthermore, we discuss the acquisition parameters that should be used for the energy windows to reach an optimal signal-to-noise ratio (SNR) in the elemental maps. -- Highlights: • Comparison of three pre-edge windows to the regular two pre-edge windows. • Investigation of the optimal positioning of the third pre-edge window. • Description of the χ² test for extrapolation quality check.
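The background model in question is the usual power law I(E) = A·E^(−r). With three pre-edge windows the two-parameter fit is over-determined, so the fit residual itself flags systematic model error. A minimal sketch with synthetic window intensities (energies and counts invented):

```python
import numpy as np

# Hypothetical pre-edge intensities following the usual EFTEM
# background model I(E) = A * E**(-r) (all numbers invented).
A_true, r_true = 1.0e6, 3.0
E_pre = np.array([400.0, 430.0, 460.0])   # three pre-edge energies (eV)
I_pre = A_true * E_pre**(-r_true)

# Least-squares power-law fit in log-log space. With three windows the
# two-parameter fit is over-determined, so the residual acts as a
# chi-squared-like check on the background model itself.
coeffs = np.polyfit(np.log(E_pre), np.log(I_pre), 1)
r_fit = -coeffs[0]
background = np.exp(coeffs[1]) * 500.0**(-r_fit)   # extrapolated post-edge bg
residual = float(np.sum((np.polyval(coeffs, np.log(E_pre)) - np.log(I_pre))**2))
```

With real (noisy) data the residual is non-zero, and a large value signals that the power-law model, not just counting statistics, is at fault.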
Magnetofrictional Extrapolations of Low and Lou's Force-Free Equilibria
Valori, G.; Kliem, B.; Fuhrmann, M.
2007-10-01
We present a careful investigation of the magnetofrictional relaxation and extrapolation technique applied to the reconstruction of two test fields. These fields are taken from the family of nonlinear force-free magnetic equilibria constructed by Low and Lou (Astrophys. J. 352, 343, 1990), which have emerged as standard tests for extrapolation techniques in recent years. For the practically relevant case that only the field values in the bottom plane of the considered volume (vector magnetogram) are used as input information (i.e., not including knowledge of the test field at the side and top boundaries), the test field is reconstructed to a higher accuracy than obtained previously. Detailed diagnostics of the reconstruction accuracy show that the implementation of fourth-order spatial discretization was essential to reach this accuracy for the given test fields and to achieve near machine precision in satisfying the solenoidal condition. Different variants of boundary conditions are tested, which all yield comparable accuracy. In its present implementation, the technique yields a scaling of computing time with the total number of grid points N only slightly below N^(5/3), which is too steep for applications to large (≳1024²) magnetograms, except on supercomputers. Directions for improvement are outlined.
Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media
Waheed, Umair bin
2014-05-01
The wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that for transversely isotropic (TI) media, especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal equation provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost-versus-accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.
DEFF Research Database (Denmark)
Ambühl, Simon; Sterndorff, Martin
2014-01-01
Mooring systems for floating wave energy converters (WECs) are a major cost driver. Failure of mooring systems often occurs due to extreme loads. This paper introduces an extrapolation method for extreme response which accounts for the control system of a WEC that controls the loads on the structure and the harvested power of the device, as well as for the fact that extreme loads may occur during operation and not at extreme wave states when the device is in storm-protection mode. The extrapolation method is based on short-term load time series and is applied to a case study using up-scaled surge load measurements from a lab-scale WEPTOS WEC. Different catenary anchor leg mooring (CALM) systems as well as single anchor leg mooring (SALM) systems are implemented in a dynamic simulation with different numbers of mooring lines. Extreme tension loads with a return period of 50 years are assessed for the hawser as well as for the different mooring lines. Furthermore, the extreme load impact given failure of one mooring line is assessed and compared with extreme loads given no system failure.
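The extreme-response extrapolation from short-term load records can be illustrated with a generic Gumbel fit to simulated short-term maxima; the distribution choice, sea-state duration and all numbers below are assumptions for illustration, not the paper's method:

```python
import math
import random

random.seed(1)
# Simulated short-term (3-hour sea state) maximum hawser tensions, kN
maxima = [random.gauss(500.0, 50.0) for _ in range(1000)]

# Method-of-moments Gumbel fit: scale beta, location mu
n = len(maxima)
mean = sum(maxima) / n
std = math.sqrt(sum((x - mean)**2 for x in maxima) / (n - 1))
beta = std * math.sqrt(6.0) / math.pi
mu = mean - 0.5772156649 * beta        # Euler-Mascheroni constant

# 50-year return level: the load exceeded on average once in N50
# independent short-term periods.
periods_per_year = 365.25 * 24.0 / 3.0
N50 = 50.0 * periods_per_year
t50 = mu - beta * math.log(-math.log(1.0 - 1.0 / N50))
```

In the paper's setting the short-term maxima would come from the measured (control-system-dependent) load series rather than a synthetic Gaussian sample.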
Lee, Tien-Chang; Perina, Thomas; Lee, Cin-Young
2002-08-01
A genetic algorithm is used here to guess-estimate a close-to-true set of trial values as input to a three-staged quasi-linear inverse modeling scheme for the determination of aquifer parameters. To validate the parameter determination, in addition to the conventional measures of misfit root mean squares (rms) and distribution, the aquifer thickness is treated as an unknown parameter and the model parameters are further evaluated by comparing the expected drawdown with the observed drawdown at wells which are not used for parameter determination (extrapolation fitting). The method is tested with synthetic and observed drawdown data from five partially screened monitoring wells in a water-table aquifer. Test results for synthetic data doped with random errors indicate that modeling based on two or more well data can yield satisfactory parameter values and extrapolation misfits in an ideal aquifer. For field data, the results indicate that a model misfit on par with the standard error of the data is achievable for each individual well or a combination of two wells but the extrapolation misfit distributions are generally biased and their rms are far greater—possibly due to aquifer heterogeneity. Consistent parameter values can be obtained from the geometric means for multiple runs of the genetic-inverse modeling of one-, two-, three-, and four-well data. Our test aquifer can be represented by a set of parameters with 10 to 15% consistency, including transmissivity, storativity, vertical-to-horizontal conductivity ratio, and storativity-to-specific yield ratio, as affirmed by model aquifer thicknesses that deviate less than 10% from the actual thickness.
Making the most of what we have: application of extrapolation approaches in wildlife transfer models
Energy Technology Data Exchange (ETDEWEB)
Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway); Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria); Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)
2014-07-01
Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate if phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted 137Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented.
The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of wildlife as it presents a method of addressing the lack of empirical data. Among the parameters which scale allometrically is radionuclide biological half-life. However, sufficient data across a range of species with different masses are required to establish allometric relationships for biological half-life and this is not always available. We have recently derived an alternative allometric approach to predict the biological half-life of radionuclides in homeothermic vertebrates which does not require such data. Predicted biological half-life values for four radionuclides compared well to available data for a range of species. The potential to further develop these approaches will be discussed. (authors)
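As a toy illustration of the allometric idea, biological half-life can be modelled as a power of body mass; the coefficients below are hypothetical, not those derived in the paper:

```python
# Toy allometric relationship T_half = a * M**b for biological
# half-life vs body mass; a and b are hypothetical placeholder values,
# not the coefficients derived in the paper.
def biological_half_life(mass_kg, a=20.0, b=0.25):
    """Predicted biological half-life (days) for body mass in kg."""
    return a * mass_kg**b

# An 81-fold mass increase with b = 0.25 gives a 3-fold longer half-life
ratio = biological_half_life(81.0) / biological_half_life(1.0)
```

The practical appeal is exactly this structure: once a and b are established for a radionuclide, a half-life can be predicted for any species from its mass alone.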
UFOs in the LHC: Observations, studies and extrapolations
Baer, T; Cerutti, F; Ferrari, A; Garrel, N; Goddard, B; Holzer, EB; Jackson, S; Lechner, A; Mertens, V; Misiowiec, M; Nebot del Busto, E; Nordt, A; Uythoven, J; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zimmermann, F; Fuster, N
2012-01-01
Unidentified falling objects (UFOs) are potentially a major luminosity limitation for nominal LHC operation. They are presumably micrometre-sized dust particles which lead to fast beam losses when they interact with the beam. With large-scale increases and optimizations of the beam loss monitor (BLM) thresholds, their impact on LHC availability was mitigated from mid-2011 onwards. For higher beam energy and lower magnet quench limits, however, the problem is expected to be considerably worse. In 2011/12, the diagnostics for UFO events were significantly improved: dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge, extrapolations for nominal LHC operation, and mitigation strategies are presented.
Null Point Distribution in Global Coronal Potential Field Extrapolations
Edwards, S. J.; Parnell, C. E.
2015-06-01
Magnetic null points are points in space where the magnetic field is zero. Thus, they can be important sites for magnetic reconnection, by virtue of the fact that they are weak points in the magnetic field and also because they are associated with topological structures, such as separators, which lie on the boundary between four topologically distinct flux domains and are therefore also locations where reconnection occurs. The number and distribution of nulls in a magnetic field act as a measure of the complexity of the field. In this article, the numbers and distributions of null points in global potential field extrapolations from high-resolution synoptic magnetograms are examined. Extrapolations from magnetograms obtained with the Michelson Doppler Imager (MDI) are studied in depth and compared with those from the high-resolution Synoptic Optical Long-term Investigations of the Sun (SOLIS) facility and the Helioseismic and Magnetic Imager (HMI). The fall-off in the density of null points with height is found to follow a power law with a slope that differs depending on whether the data are from solar maximum or solar minimum. The distribution of null points with latitude also varies with the cycle, as null points form predominantly over quiet-Sun regions and avoid active-region fields. The exceptions to this rule are the null points that form high in the solar atmosphere; these tend to form over large areas of strong flux in active regions. From case studies of data acquired with MDI, SOLIS, and HMI, it is found that the distribution of null points is very similar between data sets, except, of course, that there are far fewer nulls observed in the SOLIS data than in the cases from MDI and HMI due to its lower resolution.
Extrapolating W-Associated Jet-Production Ratios at the LHC
Bern, Z; Cordero, F Febres; Hoeche, S; Kosower, D A; Ita, H; Maitre, D
2014-01-01
Electroweak vector-boson production, accompanied by multiple jets, is an important background to searches for physics beyond the Standard Model. A precise and quantitative understanding of this process is helpful in constraining deviations from known physics. We study four key ratios in $W + n$-jet production at the LHC. We compute the ratio of cross sections for $W + n$- to $W + (n-1)$-jet production as a function of the minimum jet transverse momentum. We also study the ratio differentially, as a function of the $W$-boson transverse momentum; as a function of the scalar sum of the jet transverse energy, $H_T^{\\rm jets}$; and as a function of certain jet transverse momenta. We show how to use such ratios to extrapolate differential cross sections to $W+6$-jet production at next-to-leading order, and we cross-check the method against a direct calculation at leading order. We predict the differential distribution in $H_T^{\\rm jets}$ for $W+6$ jets at next-to-leading order using such an extrapolation. We use th...
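The extrapolation rests on the observed near-constancy of the ratio of successive jet-multiplicity cross sections; a schematic illustration with invented cross sections (not values from the paper):

```python
# Invented W + n-jet cross sections (pb) for n = 1..5. The ratio
# sigma_n / sigma_(n-1) is roughly constant in this regime, so sigma_6
# can be extrapolated from the mean measured ratio.
sigma = {1: 250.0, 2: 50.0, 3: 10.0, 4: 2.0, 5: 0.4}

ratios = [sigma[n] / sigma[n - 1] for n in range(2, 6)]
r_mean = sum(ratios) / len(ratios)
sigma6 = sigma[5] * r_mean          # extrapolated W + 6-jet cross section
```

The paper applies the same idea differentially, extrapolating the ratio bin by bin in observables such as the jet transverse-energy sum.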
Characterization and application of two extrapolation chambers in standard X radiation beams
International Nuclear Information System (INIS)
Extrapolation chambers are ionization chambers with variable volume, mainly utilized as beta radiation detectors. In this work two extrapolation chambers were characterized, a commercial PTW extrapolation chamber and another extrapolation chamber developed at the Calibration Laboratory of IPEN, for application as reference systems in mammography, conventional diagnostic radiology and radiotherapy beams. The results of the characterization tests of the chamber response (leakage current, short- and medium-term stability, determination of the saturation currents and ion collection efficiencies, and angular and energy dependence) show that these extrapolation chambers may be utilized for low-energy X radiation beam dosimetry. The transmission factors in tissue and the calibration factors were also determined for all cited radiation qualities. Finally, a procedure was established for the calibration of radiation detectors in standard X radiation beams using the extrapolation chambers. (author)
Application of extrapolation chambers in low-energy X-rays as reference systems
International Nuclear Information System (INIS)
Extrapolation chambers are instruments designed to measure doses of low-energy radiation, mainly beta radiation. In this work, a commercial extrapolation chamber and a homemade extrapolation chamber were applied in measurements using standard radiotherapy X-ray beams. Saturation curves and the polarity effect, as well as short- and medium-term stabilities, were obtained; these results are within the recommendations of the International Electrotechnical Commission (IEC). The response linearity and the extrapolation curves were also obtained, and they presented good behavior. The results show the usefulness of these extrapolation chambers in low-energy X-ray beams. - Highlights: • Usefulness of two extrapolation chambers was studied for low-energy X-ray beam dosimetry. • Performance of the chambers was verified at standard X-radiation qualities. • Both chambers are suited for use with radiotherapy-quality X-ray beams.
Generalized multi-hit dose response model for low-dose extrapolation
International Nuclear Information System (INIS)
Man is exposed to a variety of natural and synthetic substances that are known to be harmful to experimental animals at high dose levels and consequently are under suspicion of being harmful to humans. The large number of animals required to obtain any positive response at low dose levels makes it prohibitive to directly estimate the risk at the required dose levels. Thus, the most common method for establishing safe dose levels is to estimate a dose-response curve based on laboratory tests on a limited number of animals at exposure levels well beyond human usage levels. Then, using such a dose-response curve, one attempts to establish a safe dose based on a statistical low-dose extrapolation procedure. This thesis introduces a generalized multi-hit dose-response model. A biological interpretation of the model in terms of the occurrence of k hits to cause the toxic response, and a statistical interpretation in terms of a gamma tolerance distribution, are given. Other dose-response models in the literature are reviewed, with the one-hit or linear model seen as a special case of the proposed model. The method of maximum likelihood for estimating the parameters of the model, their large-sample properties, and their use in risk assessment through extrapolation to low doses are presented. A method of point estimation of the virtual safe dose, along with its lower 100(1 − α)% confidence limit, is treated. The resulting procedures are then applied to twelve sets of toxic-response data from the literature. Based on these applications, it is seen that the performance of the model for risk assessment is similar to that of the one-hit model under evidence of near linearity of the dose-response curve in the low-dose range. However, under evidence of concavity (convexity) in the low-dose range, the model is more (less) stringent in its risk assessment.
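For integer k, the k-hit response function with a gamma (Erlang) tolerance distribution can be written down in closed form; a small sketch (parameter values invented) showing how k = 1 recovers the one-hit model while k = 2 is concave at low dose:

```python
import math

def k_hit_response(dose, lam, k):
    """P(response) in the k-hit model: probability of at least k
    Poisson 'hits' with mean lam * dose, i.e. an Erlang (gamma with
    integer shape k) tolerance distribution."""
    m = lam * dose
    p_fewer_than_k = sum(math.exp(-m) * m**i / math.factorial(i)
                         for i in range(k))
    return 1.0 - p_fewer_than_k

# k = 1 recovers the one-hit model, linear at low dose;
# k = 2 is concave there, giving a much smaller low-dose risk,
# consistent with the "more stringent under concavity" remark above.
p1 = k_hit_response(0.01, lam=1.0, k=1)
p2 = k_hit_response(0.01, lam=1.0, k=2)
```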
International Nuclear Information System (INIS)
The Intelligent Extrapolation Criticality Device is used for automatic counting and automatic extrapolation during criticality experiments on a reactor. Tests must be performed on a zero-power reactor or another reactor before the Device is used. The paper describes the test conditions and test results of the Device on the zero-power reactor. The test results show that the Device performs automatic counting and automatic extrapolation, the deviation of the extrapolation data is small, and it can satisfy the requirements of the physical startup of a reactor. (author)
Miga, Michael I.; Dumpuri, Prashanth; Simpson, Amber L.; Weis, Jared A.; Jarnagin, William R.
2011-03-01
The problem of extrapolating cost-effective relevant information from distinctly finite or sparse data, while balancing the competing goals between workflow and engineering design, and between application and accuracy, is the 'sparse data extrapolation problem'. Within the context of open abdominal image-guided liver surgery, one realization of this problem is compensating for non-rigid organ deformations while maintaining workflow for the surgeon. More specifically, rigid organ-based surface registration between CT-rendered liver surfaces and laser-range-scanned intraoperative partial surface counterparts resulted in an average closest-point residual of 6.1 ± 4.5 mm, with maximum signed distances ranging from -13.4 to 16.2 mm. Similar to the neurosurgical environment, there is a need to correct for soft-tissue deformation to translate image-guided interventions to the abdomen (e.g. liver, kidney, pancreas, etc.). While intraoperative tomographic imaging is available, these approaches are less than optimal solutions to the sparse data extrapolation problem. In this paper, we compare and contrast three sparse data extrapolation methods with data-rich interpolation for the correction of deformation within a liver phantom containing 43 subsurface targets. The findings indicate that subtleties in the initial alignment pose following rigid registration can affect correction by up to 5-10%. The best deformation compensation achieved was approximately 54.5% (target registration error of 2.0 ± 1.6 mm), while the data-rich interpolative method achieved 77.8% (target registration error of 0.6 ± 0.5 mm).
An empirical relationship for extrapolating sparse experimental lap joint data.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph; Starr, Michael James
2010-10-01
Correctly incorporating the influence of mechanical joints in built-up mechanical systems is a critical element of model development for structural dynamics predictions. Quality experimental data are often difficult to obtain and are rarely sufficient to fully determine the parameters of relevant mathematical models. On the other hand, fine-mesh finite element (FMFE) modeling facilitates innumerable numerical experiments at modest cost. Detailed FMFE analysis of built-up structures with frictional interfaces reproduces trends among problem parameters found experimentally, but there are qualitative differences, currently ascribed to the very approximate nature of the friction model available in most finite element codes. Though numerical simulations are insufficient to produce qualitatively correct joint behavior, observations from a multitude of numerical experiments suggest interesting relationships among joint properties measured under different loading conditions. These relationships can be generalized into forms consistent with data from physical experiments. One such relationship, developed here, expresses the rate of energy dissipation per cycle within the joint under various combinations of extensional and clamping load in terms of dissipation under other load conditions. The use of this relationship, though not exact, is demonstrated for the purpose of extrapolating a representative set of experimental data to span the range of variability observed in real data.
Sato, A.; Yomogida, K.
2014-12-01
The early warning system operated by the Japan Meteorological Agency (JMA) has been available to the public since October 2007. The present system is still not effective in cases where a nearly circular wavefront expansion from a source cannot be assumed. We propose a new approach based on the extrapolation of the early observed wavefield alone, without estimating its epicenter. The idea is similar to the migration method in exploration seismology, but we use not only the wavefield at an early stage (i.e., at time T2 in the figure) but also its normal derivatives (the difference between T1 and T2); that is, we utilize the apparent velocity and direction of early-stage wave propagation to predict the wavefield later (at T3 in the figure). For the extrapolation of the wavefield, we need a reliable Green's function from the observed point to a target point at which the wave arrives later. Since the complete 3-D wave propagation is extremely complex, particularly in and around Japan with its highly heterogeneous structures, we consider a phenomenological 2-D Green's function, that is, a wavefront propagating on the surface with a certain apparent velocity and direction of the P wave. This apparent velocity and direction may vary significantly depending on, for example, event depth and the area of propagation, so we examined those of P waves propagating in Japan in various situations. For example, the velocity for shallow events in Hokkaido is 7.1 km/s while that in Nagano prefecture is about 5.5 km/s. In addition, the apparent velocity depends on event depth: 7.1 km/s for a depth of 10 km and 8.9 km/s for 100 km in Hokkaido. We also conducted f-k array analyses of adjacent five or six stations, from which we can accurately estimate the apparent velocity and direction of the P wave. For deep events with relatively simple waveforms, these are easily obtained, but we may need site corrections to enhance waveform correlations among stations for shallow ones.
In the above extrapolation scheme, we can only estimate the arrival times of the P wave at remote stations, but in practice we also need to estimate the S-wave arrival time and intensity. We therefore compare the actual S-wave arrival times with the P-wave ones for various epicentral distances, event depths and regions, so that empirical relations between them can be listed for our final goal of S-wave estimation.
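The arrival-time extrapolation amounts to advancing a plane wavefront with the measured apparent velocity and direction; a minimal sketch using the 7.1 km/s shallow-Hokkaido value quoted above (the station geometry is invented):

```python
import math

def predicted_arrival(t_obs, obs_xy, tgt_xy, v_app, azimuth_deg):
    """Extrapolate a plane P wavefront observed at obs_xy (km) to a
    target station: the delay is the distance along the propagation
    direction divided by the apparent velocity (km/s)."""
    az = math.radians(azimuth_deg)
    ux, uy = math.sin(az), math.cos(az)   # unit vector (east, north)
    dx = tgt_xy[0] - obs_xy[0]
    dy = tgt_xy[1] - obs_xy[1]
    return t_obs + (dx * ux + dy * uy) / v_app

# Wavefront moving due north at the 7.1 km/s shallow-Hokkaido value;
# a station 71 km further along the propagation direction is reached
# 10 s later.
t_arrival = predicted_arrival(10.0, (0.0, 0.0), (0.0, 71.0), 7.1, 0.0)
```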
Wilk-Zasadna, Iwona; Bernasconi, Camilla; Pelkonen, Olavi; Coecke, Sandra
2015-06-01
Early consideration of the multiplicity of factors that govern the biological fate of foreign compounds in living systems is a necessary prerequisite for the quantitative in vitro-in vivo extrapolation (QIVIVE) of toxicity data. Substantial technological advances in in vitro methodologies have facilitated the study of in vitro metabolism and the further use of such data for in vivo prediction. However, extrapolation to in vivo with a comfortable degree of confidence requires continuous progress in the field to address challenges such as the in vitro evaluation of chemical-chemical interactions and accounting for individual variability, but also analytical challenges in ensuring sufficiently sensitive measurement technologies. This paper discusses the current status of in vitro metabolism studies for QIVIVE, serving today's hazard and risk assessment needs. A short overview of the methodologies for in vitro metabolism studies is given. Furthermore, recommendations for priority research and other activities are provided to ensure further widespread uptake of in vitro metabolism methods in 21st century toxicology. The need for more streamlined and explicitly described integrated approaches to reflect the physiology and the related dynamic and kinetic processes of the human body is highlighted, i.e., using in vitro data in combination with in silico approaches. PMID:25456264
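A standard QIVIVE step of this kind is scaling microsomal intrinsic clearance up to whole-body hepatic clearance. The sketch below uses the common well-stirred liver model, with typical textbook scaling factors treated as assumptions (the paper does not prescribe these values):

```python
# Well-stirred liver model for scaling microsomal intrinsic clearance
# (uL/min/mg protein) to whole-body hepatic clearance (mL/min/kg).
# All scaling factors are typical textbook values, used as assumptions.
MPPGL = 40.0            # mg microsomal protein per g liver
LIVER_G_PER_KG = 25.7   # g liver per kg body weight
Q_H = 20.7              # hepatic blood flow, mL/min/kg
FU_B = 1.0              # unbound fraction in blood (assumed fully unbound)

def hepatic_clearance(clint_ul_min_mg):
    """Whole-body hepatic clearance via the well-stirred model."""
    clint = clint_ul_min_mg / 1000.0 * MPPGL * LIVER_G_PER_KG  # mL/min/kg
    return Q_H * FU_B * clint / (Q_H + FU_B * clint)

cl_low = hepatic_clearance(5.0)     # low-clearance compound
cl_high = hepatic_clearance(500.0)  # approaches hepatic blood flow
```

The saturation at the hepatic blood flow Q_H is the model's key qualitative feature: for high-clearance compounds, perfusion, not metabolism, limits clearance.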
Jiang, Chaowei
2013-01-01
Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated numerically from the photosphere. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to SDO/HMI data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and AR 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then input into the extrapolation code. Qualitative comparison of the results with SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. Most importa...
Chiew, F. H. S.; Vaze, J.
2015-06-01
This paper provides an overview of this IAHS symposium and PIAHS proceeding on "hydrologic nonstationarity and extrapolating models to predict the future". The paper provides a brief review of research on this topic, presents approaches used to account for nonstationarity when extrapolating models to predict the future, and summarises the papers in this session and proceeding.
Characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field
International Nuclear Information System (INIS)
The extrapolation chamber is a parallel-plate chamber of variable volume based on the Bragg-Gray theory. It determines, in absolute mode and with high accuracy, the absorbed dose by extrapolation of the measured ionization current to a null distance between the electrodes. This chamber is used for the dosimetry of external beta rays in radiation protection. This paper presents the characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field. The absorbed dose rate to tissue at a depth of 0.07 mm was calculated and is (0.13206±0.0028) μGy. The extrapolation chamber null depth was determined and its value is 60 μm. The influence of temperature, pressure and humidity on the value of the corrected current was also evaluated. Temperature is the parameter with the greatest influence on this value; the influence of pressure and humidity is not very significant. Extrapolation curves were obtained. (Author)
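The zero-gap extrapolation described above can be sketched numerically. All readings below are hypothetical, and the conversion from the current gradient to an absolute dose rate (W-value, air density, electrode area) is omitted for brevity:

```python
import numpy as np

# Hypothetical readings: electrode gap (mm) vs. ionization current (pA).
gaps_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
currents_pA = np.array([10.2, 20.1, 30.5, 40.3, 50.6])

# Fit I(d) = slope * d + intercept. The slope dI/dd is the quantity that
# survives the extrapolation d -> 0; in Bragg-Gray dosimetry the absorbed
# dose rate is proportional to dI/dd divided by the air density and the
# collecting-electrode area.
slope, intercept = np.polyfit(gaps_mm, currents_pA, 1)
print(f"dI/dd at d -> 0: {slope:.2f} pA/mm")
```

The intercept should be close to zero for a well-behaved chamber; a large intercept would indicate leakage current or a gap-zero offset.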
Fuel cycle design for ITER and its extrapolation to DEMO
Energy Technology Data Exchange (ETDEWEB)
Konishi, Satoshi [Institute of Advanced Energy, Kyoto University, Kyoto 611-0011 (Japan)], E-mail: s-konishi@iae.kyoto-u.ac.jp; Glugla, Manfred [Forschungszentrum Karlsruhe, P.O. Box 3640, D 76021 Karlsruhe (Germany); Hayashi, Takumi [Japan Atomic Energy Agency, Tokai, Ibaraki 319-0015 (Japan)]
2008-12-15
ITER is the first fusion device that continuously processes DT plasma exhaust and supplies recycled fuel in a closed loop. All the tritium and deuterium in the exhaust are recovered, purified and returned to the tokamak with minimal delay, so that extended burn can be sustained with a limited inventory. To maintain the safety of the entire facility, plant-scale detritiation systems will also run continuously to remove tritium from the effluents at maximum efficiency. In this entire tritium plant system, an extremely high decontamination factor, that is, the ratio of the tritium loss to the processing flow rate, is required for fuel economy and minimized tritium emissions, and the system design, based on state-of-the-art technology, is expected to satisfy all the requirements without significant technical challenges. A considerable part of the fusion tritium system will be verified with ITER over its decades of operating experience. Toward the DEMO plant, which will actually generate energy and operate a closed fuel cycle, the breeding blanket and the power train that carries high-temperature, high-pressure media from the fusion device to the generation system will be the major additions. For tritium confinement, safety and environmental emissions, the blanket, its coolant, and generation systems such as the heat exchanger, steam generator and turbine will be the critical systems, because tritium permeation from the breeder and the handling of large amounts of high-temperature, high-pressure coolant will be far more difficult than what is required for ITER. Detritiation of solid waste such as used blanket and divertor components will be another issue for both tritium economy and safety. Unlike ITER, which is regarded as an experimental facility, DEMO will be expected to demonstrate safety, reliability and social acceptance, even if economic performance is excluded.
Fuel and environmental issues to be tested in DEMO will determine the viability of fusion as a future energy source. Some of these subjects cannot be expected to lie within the extrapolation of ITER technology and will require long-term efforts in parallel with ITER.
Montiel, Ariadna; Sendra, Irene; Escamilla-Rivera, Celia; Salzano, Vincenzo
2014-01-01
In this work we present a nonparametric approach, which works on minimal assumptions, to reconstruct the cosmic expansion of the Universe. We propose to combine a locally weighted scatterplot smoothing method and a simulation-extrapolation method. The first (Loess) is a nonparametric approach that allows one to obtain smoothed curves without prior knowledge of the functional relationship between the variables or of the cosmological quantities. The second (Simex) takes into account the effect of measurement errors on a variable via a simulation process. For the reconstructions we use as raw data the Union2.1 Type Ia Supernovae compilation, as well as recent Hubble parameter measurements. This work aims to illustrate the approach, which turns out to be a self-sufficient technique in the sense that we do not have to choose anything by hand. We examine the details of the method, among them the amount of observational data needed to perform the locally weighted fit, which will define the robustness of our reconstruction...
International Nuclear Information System (INIS)
The Interface System for the Extrapolation Chamber (SICE) contains several devices handled by a personal computer (PC) and is able to acquire the data required to calculate the absorbed dose due to beta radiation. The main functions of the system are: a) measures the ionization current or charge stored in the extrapolation chamber; b) adjusts the distance between the plates of the extrapolation chamber automatically; c) adjusts the bias voltage of the extrapolation chamber automatically; d) acquires the temperature, atmospheric pressure and relative humidity of the environment, and the voltage applied between the plates of the extrapolation chamber; e) calculates the effective area of the plates of the extrapolation chamber and the real distance between them; f) stores all the obtained information on hard disk or diskette. A comparison between the desired distance and the distance on the dial of the extrapolation chamber shows that the resolution of the system is 20 μm. The voltage can be changed between -399.9 V and +399.9 V with an error of less than 3% and a resolution of 0.1 V. These uncertainties are within the accepted limits for use in the determination of the absolute absorbed dose due to beta radiation. (Author)
Extrapolation distance in critical and time-dependent two-region spherical systems
International Nuclear Information System (INIS)
Extrapolation distances for the neutron flux distribution in bounded media are usually defined in such a way that agreement is obtained between diffusion theory and transport theory. A typical application is the interpretation of pulsed neutron experiments. In this work we extend the conventional treatment of extrapolation distances to two-region spherical bodies. Assuming neutrons of one speed, extrapolation distances are calculated for two-region spheres of different neutron properties. Anisotropic scattering of the neutrons is also taken into account and both critical and time-dependent cases are studied. (orig.)
International Nuclear Information System (INIS)
The Technical Committee for Ionizing Radiation (TCRI) of the Asia Pacific Metrology Programme (APMP) recently organized a regional key comparison of activity measurements of the radionuclide 133Ba. This paper reports on absolute measurements made at the National Metrology Institute of South Africa (NMISA) by the coincidence extrapolation technique, with liquid scintillation counting (LSC) comprising the 4π channel. A detection efficiency analysis was undertaken to predict the maximum efficiency likely to be achieved and to confirm that the method does indeed provide the source disintegration rate for 133Ba. Various experimental and data analysis difficulties to be aware of are discussed in the paper.
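The coincidence extrapolation technique amounts to a straight-line fit of a measured rate ratio against an efficiency parameter, extrapolated to 100% efficiency. A minimal sketch with hypothetical count-rate data (not the NMISA measurements):

```python
import numpy as np

# Hypothetical coincidence-counting data: coincidence efficiency and the
# corresponding rate ratio N_beta * N_gamma / N_c (in s^-1).
eff = np.array([0.60, 0.70, 0.80, 0.90])
rate = np.array([1320.0, 1241.0, 1183.0, 1137.0])

# Efficiency extrapolation: to first order the ratio is linear in
# (1 - eff)/eff, and the intercept at (1 - eff)/eff -> 0 (i.e. at 100%
# efficiency) estimates the source disintegration rate N_0.
x = (1 - eff) / eff
slope, N0 = np.polyfit(x, rate, 1)
print(f"extrapolated disintegration rate: {N0:.0f} s^-1")
```

In practice the efficiency is varied experimentally (e.g., by discrimination thresholds or chemical quenching), and the linearity of the plot is itself a check on the method.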
Melting of “non-magic” argon clusters and extrapolation to the bulk limit
Energy Technology Data Exchange (ETDEWEB)
Senn, Florian, E-mail: f.senn@massey.ac.nz; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter, E-mail: p.a.schwerdtfeger@massey.ac.nz [Centre for Theoretical Chemistry and Physics, The New Zealand Institute for Advanced Study, Massey University Albany, Private Bag 102904, Auckland 0745 (New Zealand); Pahl, Elke, E-mail: e.pahl@massey.ac.nz [Centre for Theoretical Chemistry and Physics, Institute of Natural and Mathematical Sciences, Massey University Albany, Private Bag 102904, Auckland 0745 (New Zealand)
2014-01-28
The melting of argon clusters Ar{sub N} is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K, in good agreement with the previous value of 88.9 K obtained using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, “Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations,” Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that, for the extrapolation to the bulk, one does not have to restrict oneself to magic-number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially the systematic selection of suitable cluster sizes.
Melting of “non-magic” argon clusters and extrapolation to the bulk limit
International Nuclear Information System (INIS)
The melting of argon clusters ArN is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K, in good agreement with the previous value of 88.9 K obtained using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, “Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations,” Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that, for the extrapolation to the bulk, one does not have to restrict oneself to magic-number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially the systematic selection of suitable cluster sizes.
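The cluster-to-bulk extrapolation rests on the surface-scaling argument that the melting temperature T_m(N) is approximately linear in N^(-1/3). A minimal sketch with hypothetical melting temperatures (not the paper's data):

```python
import numpy as np

# Hypothetical cluster melting temperatures T_m(N) in K.
N = np.array([55, 100, 147, 200, 309])
Tm = np.array([46.6, 53.7, 57.6, 60.4, 63.8])

# Surface effects scale as N^(-1/3), so T_m(N) ≈ T_bulk - c * N^(-1/3);
# the bulk melting temperature is the intercept of a straight-line fit
# in the variable x = N^(-1/3).
x = N ** (-1.0 / 3.0)
neg_c, T_bulk = np.polyfit(x, Tm, 1)
print(f"extrapolated bulk melting temperature: {T_bulk:.1f} K")
```

The abstract's caveat lives precisely here: the fitted intercept is only as good as the choice of cluster sizes, since small or geometrically unfavorable clusters deviate from the linear scaling.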
Can Tauc plot extrapolation be used for direct-band-gap semiconductor nanocrystals?
International Nuclear Information System (INIS)
Although Tauc plot extrapolation has been widely adopted for extracting bandgap energies of semiconductors, there is a lack of theoretical support for applying it to nanocrystals. In this paper, direct-allowed optical transitions in semiconductor nanocrystals have been formulated based on a purely theoretical approach. This result reveals a size-dependent transition of the power factor used in the Tauc plot, increasing from one half in the 3D bulk case to one in the 0D case. This size-dependent intermediate value of the power factor allows a better extrapolation of measured absorption data. As a material characterization technique, the generalized Tauc extrapolation gives a more reasonable and accurate acquisition of the intrinsic bandgap, while the unjustified practice of extrapolating any bandgap elevated by quantum confinement is shown to be incorrect.
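For reference, the conventional bulk Tauc extrapolation for a direct-allowed transition (the 3D case with power factor one half, i.e. (αhν)² linear in hν) can be sketched with synthetic absorption data whose gap is 2.0 eV by construction:

```python
import numpy as np

# Synthetic direct-gap absorption: alpha*h*nu = A * sqrt(h*nu - Eg).
Eg_true, A = 2.0, 1.0e5
hv = np.linspace(2.05, 2.50, 10)          # photon energies (eV) above the gap
alpha_hv = A * np.sqrt(hv - Eg_true)

# Bulk Tauc plot for direct-allowed transitions: (alpha*h*nu)^2 vs h*nu
# is linear, and its x-intercept estimates the band gap.
y = alpha_hv ** 2
slope, intercept = np.polyfit(hv, y, 1)
Eg_est = -intercept / slope
print(f"extrapolated band gap: {Eg_est:.2f} eV")
```

The paper's point is that for a nanocrystal the exponent 2 in `alpha_hv ** 2` should be replaced by a size-dependent value between 1 and 2, which changes the fitted intercept.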
Design for low dose extrapolation of carcinogenicity data. Technical report No. 24
International Nuclear Information System (INIS)
Parameters for modelling dose-response relationships in carcinogenesis models were found to be very complicated, especially for distinguishing low-dose effects. The author concluded that extrapolation always bears the danger of providing misleading information.
Tay, Kim Gaik; Kek, Sie Long; Abdul-Kahar, Rosmila
2015-05-01
In this paper, we have addressed the limitations of our previous two Richardson extrapolation spreadsheet calculators for computing derivatives numerically. The new feature of this Richardson extrapolation spreadsheet calculator is full automation, up to any level, based on a stopping criterion implemented using VBA programming. The new version is more flexible because it is controlled by the program; furthermore, it reduces computational time and CPU memory usage.
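The Richardson scheme that such a calculator automates can be sketched as follows. This is the generic textbook formulation for numerical differentiation, not the authors' VBA code:

```python
import numpy as np

def richardson_derivative(f, x, h=0.1, levels=5):
    """Central differences refined by Richardson extrapolation.

    Builds the standard triangular table: D[i][0] is the central
    difference with step h/2^i, and D[i][j] cancels the O(h^(2j))
    error term of the previous column.
    """
    D = [[0.0] * levels for _ in range(levels)]
    for i in range(levels):
        hi = h / 2 ** i
        D[i][0] = (f(x + hi) - f(x - hi)) / (2 * hi)
        for j in range(1, i + 1):
            D[i][j] = D[i][j - 1] + (D[i][j - 1] - D[i - 1][j - 1]) / (4 ** j - 1)
    return D[levels - 1][levels - 1]

# d/dx sin(x) at x = 1 should approach cos(1) ≈ 0.540302
print(richardson_derivative(np.sin, 1.0))
```

An automated version would grow the table row by row and stop once successive diagonal entries agree to within a tolerance, which is presumably the role of the stopping criterion mentioned in the abstract.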
International Nuclear Information System (INIS)
An intercomparison exercise sponsored by Eurados-Cendos was conducted among four European laboratories (NRPB, UK; PTB, FRG; CEA, France and Risoe National Laboratory, Denmark) in order to compare results and to standardise procedures for determining beta ray dose rates using extrapolation chambers. All the participants used 147Pm sources supplied by the same manufacturer. These were produced from the same batch and constructed to the same specification and are contained in identical holders. Two different types of extrapolation chambers were used for the characterisation of the beta radiation fields. Measurement procedures and evaluation methods differed in certain details between the laboratories. Sources were exchanged between the participating laboratories to enable measurements on the same source to be carried out with different types of chambers. Methods used for deriving correction factors for the effects of air absorption between the source and chamber are discussed. The procedures adopted for evaluating ionisation current at zero chamber volume and that for obtaining absorbed dose rate at the surface of the phantom are described. (author)
International Nuclear Information System (INIS)
Highlights: • The maximal predictive step size is determined by the largest Lyapunov exponent. • A proper forecasting step size is applied to load demand forecasting. • The improved approach is validated by actual load demand data. • The non-linear fractal extrapolation method is compared with three forecasting models. • Performance of the models is evaluated by three different error measures. - Abstract: Precise short-term load forecasting (STLF) plays a key role in unit commitment, maintenance and economic dispatch problems. Employing a subjective and arbitrary predictive step size is one of the most important factors causing low forecasting accuracy. To solve this problem, the largest Lyapunov exponent is adopted to estimate the maximal predictive step size, so that the step size used in forecasting is no larger than this maximal one. In addition, a seldom-used forecasting model based on the non-linear fractal extrapolation (NLFE) algorithm is considered, to improve the accuracy of predictions. The suitability and superiority of the two solutions are illustrated through an application to real load forecasting using New South Wales electricity load data from the Australian National Electricity Market. Meanwhile, three forecasting models that have received high approval in STLF, namely the gray model, the seasonal autoregressive integrated moving average approach and the support vector machine method, are selected for comparison with the NLFE algorithm. Comparison results also show that the NLFE model is outstanding, effective, practical and feasible.
Extrapolation and minimization procedures for the PageRank vector
Brezinski, Claude; Redivo-Zaglia, Michela
2007-01-01
An important problem in Web search is to determine the importance of each page. This problem consists in computing, by the power method, the left principal eigenvector (the PageRank vector) of a matrix depending on a parameter $c$ which has to be chosen close to 1. However, when $c$ is close to 1, the problem is ill-conditioned, and the power method converges slowly. So, the idea developed in this paper consists in computing the PageRank vector for several values of $c$, ...
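The power method that such extrapolation and minimization procedures accelerate can be sketched on a toy graph. The damping value and the three-page web below are illustrative only:

```python
import numpy as np

def pagerank(adj, c=0.85, tol=1e-12):
    """Power method for the PageRank vector of a small link graph.

    adj[i, j] = 1 if page i links to page j (no dangling pages here).
    Convergence slows as the damping parameter c approaches 1, which
    is what motivates extrapolation schemes like the one above.
    """
    n = adj.shape[0]
    P = (adj / adj.sum(axis=1, keepdims=True)).T   # column-stochastic matrix
    r = np.full(n, 1.0 / n)
    while True:
        r_new = c * P @ r + (1 - c) / n            # damped power iteration
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# A tiny 3-page web: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0.
adj = np.array([[0, 1, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj))
```

The iteration count grows roughly like 1/(1-c), so for c very close to 1 plain power iteration becomes impractical, which is the regime the paper targets.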
EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE
Energy Technology Data Exchange (ETDEWEB)
Jiang Chaowei; Feng Xueshang, E-mail: cwjiang@spaceweather.ac.cn, E-mail: fengx@spaceweather.ac.cn [SIGMA Weather Group, State Key Laboratory for Space Weather, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190 (China)
2013-06-01
Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then input into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. The most important structures of the ARs are reproduced excellently, such as the highly sheared field lines that suspend filaments in AR 11158 and the twisted flux rope that corresponds to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but apparently not as well in the weak-field regions, because of data noise and numerical errors in the small currents.
International Nuclear Information System (INIS)
A GIS database was established for fertiliser recommendation domains in Kisii District by using FURP fertiliser trial results, KSS soils data and MDBP climatic data. These are manipulated in ESRI's (Environmental Systems Research Institute) PC ARC/INFO and ARCVIEW software. The extrapolations were only done for the long rains season (March-August), with three to four years of data. GIS technology was used to cluster fertiliser recommendation domains as geographical areas expressed in terms of variation over space, not limited to the site of the experiment where a certain agronomic or economic fertiliser recommendation was made. The extrapolation over space was found to be more representative for any recommendation, the result being digital maps describing each area in geographical space. From the results of the extrapolations, approximately 38,255 ha of the district require zero nitrogen (N) fertilisation, while 94,330 ha require 75 kg ha-1 N fertilisation during the (March-August) long rains. The extrapolation was made difficult since no direct relationships could be established between available N, % carbon (C) or any of the other soil properties and the obtained yields. Decision rules were, however, developed based on % C, which was the soil variable with values closest to the obtained yields. An organic carbon content of 3% was found to be the boundary between zero application and 75 kg N application. GIS techniques made it possible to model and extrapolate the results using the available data. The extrapolations still need to be verified with more ground data from fertiliser trials. Data gaps in the soil map left some soil mapping units with no recommendations. Elevation was observed to influence yields and should be included in future extrapolations by clustering digital elevation models with rainfall data in a spatial model at the district scale.
Developing and utilizing the wavefield kinematics for efficient wavefield extrapolation
Waheed, Umair bin
2015-08-01
Natural gas and oil from characteristically complex unconventional reservoirs, such as organic shale, tight gas and oil, and coal-bed methane, are transforming the global energy market. These unconventional reserves exist in complex geologic formations where conventional seismic techniques have been challenged to successfully image the subsurface. To acquire maximum benefit from these unconventional reserves, seismic anisotropy must be at the center of our modeling and inversion workflows. I present algorithms for fast traveltime computations in anisotropic media. Both ray-based and finite-difference solvers of the anisotropic eikonal equation are developed. The proposed algorithms present novel techniques to obtain accurate traveltime solutions for anisotropic media in a cost-efficient manner. The traveltime computation algorithms are then used to invert for anisotropy parameters. Specifically, I develop inversion techniques using diffractions and diving waves in the seismic data. The diffraction-based inversion algorithm can be combined with an isotropic full-waveform inversion (FWI) method to obtain a high-resolution model for the anellipticity anisotropy parameter. The inversion algorithm based on diving waves is useful for building initial anisotropic models for depth migration and FWI. I also develop the idea of ‘effective elliptic models’ for obtaining solutions of the anisotropic two-way wave equation. The proposed technique offers a viable alternative for wavefield computations in anisotropic media using a computationally cheaper wave propagation operator. The methods developed in the thesis lead to direct cost savings for imaging and inversion projects, in addition to a reduction in turn-around time. With an eye on the next generation of inversion methods, these techniques allow us to incorporate more accurate physics into our modeling and inversion framework.
Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations
Cartwright, Keigh
2014-10-01
To have a high degree of confidence in simulations, one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method, and how it impacts code verification and validation. A technique is developed to determine the fully converged solution, with error bounds, from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, the relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
Chiral extrapolation of lattice data for the hyperfine splittings of heavy mesons
International Nuclear Information System (INIS)
Full text: Hyperfine splittings between the heavy vector (D*, B*) and pseudoscalar (D, B) mesons have been calculated numerically in lattice QCD, where the pion mass (which is related to the light quark mass) is much larger than its physical value. Naive linear chiral extrapolations of the lattice data to the physical mass of the pion lead to hyperfine splittings which are smaller than experimental data. In order to extrapolate these lattice data to the physical mass of the pion more reasonably, we apply effective chiral perturbation theory for heavy mesons, which is invariant under chiral symmetry when the light quark masses go to zero and under heavy quark symmetry when the heavy quark masses go to infinity. This leads to a phenomenological functional form with three parameters for extrapolating the lattice data. It is found that the extrapolated hyperfine splittings are even smaller than those obtained using linear extrapolation. We conclude that the source of the discrepancy between lattice data for hyperfine splittings and experiment must lie in non-chiral physics.
Mean field extrapolations of microscopic nuclear equations of state
Rrapaj, Ermal; Holt, Jeremy W
2015-01-01
We explore the use of mean field models to approximate microscopic nuclear equations of state derived from chiral effective field theory across the densities and temperatures relevant for simulating astrophysical phenomena such as core-collapse supernovae and binary neutron star mergers. We consider both relativistic mean field theory with scalar and vector meson exchange as well as energy density functionals based on Skyrme phenomenology and compare to thermodynamic equations of state derived from chiral two- and three-nucleon forces in many-body perturbation theory. Quantum Monte Carlo simulations of symmetric nuclear matter and pure neutron matter are used to determine the density regimes in which perturbation theory with chiral nuclear forces is valid. Within the theoretical uncertainties associated with the many-body methods, we find that select mean field models describe well microscopic nuclear thermodynamics. As an additional consistency requirement, we study as well the single-particle properties of ...
Possible sharp quantization of extrapolated high temperature viscosity- theory and experiment
Nussinov, Z; Blodgett, M; Kelton, K F
2014-01-01
Quantum effects in material systems are often pronounced at low energies and become insignificant at high temperatures. We find that, perhaps counterintuitively, certain quantum effects may follow the opposite route and become progressively sharper when extrapolated to the "classical" high temperature limit. In the current work, we derive basic relations, extend standard kinetic theory by taking into account a possible fundamental quantum time scale, find new general equalities connecting semi-classical dynamics and thermodynamics to Planck's constant, and compute current correlation functions. Our analysis suggests that, on average, the extrapolated high temperature viscosity of general liquids may tend to a value set by the product of the particle number density $\mathsf{n}$ and Planck's constant $h$. We compare this theoretical result with experimental measurements of an ensemble of 23 metallic fluids where this seems to indeed be the case. The extrapolated high temperature viscosity of each of these liquids ...
Comparison between the response of two extrapolation chambers in low energy X-rays
International Nuclear Information System (INIS)
Full text: Extrapolation chambers are important metrological instruments for the detection of beta radiation and low-energy X-rays, since they are able to perform absolute measurements of weakly penetrating radiation. These chambers are very useful because they allow the determination of surface doses through variation of the air mass in their sensitive volume. In this work, two extrapolation chambers were tested in order to establish which chamber presents the best response in some standard radiotherapy-level X-ray beam qualities. For comparison, a commercial PTW extrapolation chamber, model 23391, and an extrapolation chamber designed and constructed at the Radiation Metrology Laboratory of Instituto de Pesquisas Energeticas e Nucleares were studied. The commercial chamber has a collecting electrode (40 mm diameter) and guard rings made of aluminum, and an entrance window (0.025 mm thick) made of polyamide; the developed chamber has a collecting electrode (10 mm diameter) and guard rings made of graphite, and an entrance window (0.84 mg/cm2 thick) made of aluminized polyethylene terephthalate. Both chambers were positioned at 50 cm from the X-ray system focus. The ionization currents were measured at negative and positive polarities, and the mean values were considered. A Keithley 617 electrometer was utilized. The main characteristics of the extrapolation chambers, such as ion collection efficiency, saturation curve, polarity effect, repeatability, long-term stability, stabilization time, linearity of response, extrapolation curve, energy dependence, and transmission factors, were determined. The results show that both chambers present adequate responses for the verified X-ray beam qualities, confirming previous studies performed with these detectors. In conclusion, both chambers can be used for accurate measurements in low-energy X-ray beams. (author)
[Study of the extrapolation reflex in the European beaver (Castor fiber L.)].
Krushinskaia, N L; Dmitrieva, I L; Zhurovskii, V
1980-01-01
An experimental study of elementary rational activity was carried out on European beavers (Castor fiber L.). The ability to solve an elementary logical task, consisting in extrapolation of the direction in which bait disappeared from the visual field, was chosen as a sign of this activity. It is shown that animals of this species are able to cope with this extrapolation task. The ways of solving the problem show considerable individual variability. The ability of European beavers to solve the simplest logical tasks permits the conclusion that elementary rational activity is developed in these animals. PMID:7385999
Characterization of low energy X-rays beams with an extrapolation chamber
International Nuclear Information System (INIS)
In laboratories involved in radiological protection practices, it is usual to use reference radiations for calibrating dosimeters and for studying their response in terms of energy dependence. The International Organization for Standardization (ISO) established four series of reference X-ray beams in the ISO 4037 standard: the L and H series, of low and high air-kerma rates, respectively, the N series of narrow spectrum, and the W series of wide spectrum. X-ray beams with tube potentials below 30 kV, called 'low energy beams', are in most cases critical with regard to the determination of their characterization parameters, such as the half-value layer. Extrapolation chambers are parallel-plate ionization chambers with one mobile electrode that allows the air volume in their interior to be varied. These detectors are commonly used to measure the quantity absorbed dose, mostly at the surface of the medium, based on extrapolation of the linear ionization current as a function of the distance between the electrodes. In this work, a characterization of a model 23392 PTW extrapolation chamber was performed in the low-energy X-ray beams of the ISO 4037 standard, by determining the polarization voltage range through saturation curves and the value of the true null electrode spacing. In addition, the metrological reliability of the extrapolation chamber was studied through leakage-current measurements and repeatability tests; limit values were established for the proper use of the chamber. The PTW 23392 extrapolation chamber was calibrated in terms of air kerma in some of the low-energy ISO radiation series; the traceability of the chamber to the National Standard Dosimeter was established. The energy dependence of the extrapolation chamber was studied and the uncertainties related to the calibration coefficient were assessed; it was shown that the energy dependence was reduced to 4% when the extrapolation technique was used. 
Finally, the first half-value layers were determined for the low-energy ISO N series with the extrapolation chamber, in collimated and uncollimated beams, and it was shown that this detector is suitable for such measurements. (author)
Jiang, Chaowei; Feng, Xueshang
2015-01-01
In the solar corona, magnetic flux ropes are believed to be fundamental structures that account for magnetic free-energy storage and solar eruptions. Up to the present, the extrapolation of the magnetic field from boundary data has been the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover coronal magnetic flux ropes is important for coronal field extrapolation. In this paper, our coronal field extrapolation code...
International Nuclear Information System (INIS)
A new technique for intensity modulated radiation therapy (IMRT) delivery is helical tomotherapy (HT). Like most IMRT delivery methods, HT utilizes many small fields as part of the treatment plan, which can be difficult to characterize. A novel technique for small field characterization, based on inter- and extrapolation of ion chamber readings, is presented in the context of HT. As a fan beam is characterized by its thickness and output factor, plane parallel chambers with different active volumes were used to scan the fan beam profiles. The fan beam thickness (FBT) can be determined from the thickness measured with the chamber by extrapolating to an infinitesimally small chamber size. The effective output was derived from the integral under the dose profile divided by the FBT. This was done for five FBTs and demonstrated a sharp fall off in dose when the FBT decreased below 8 mm. Similar techniques can be applied to other IMRT techniques to improve the characterization of various beam parameters.
McNiven, Andrea; Kron, Tomas
2004-08-01
A new technique for intensity modulated radiation therapy (IMRT) delivery is helical tomotherapy (HT). Like most IMRT delivery methods, HT utilizes many small fields as part of the treatment plan, which can be difficult to characterize. A novel technique for small field characterization, based on inter- and extrapolation of ion chamber readings, is presented in the context of HT. As a fan beam is characterized by its thickness and output factor, plane parallel chambers with different active volumes were used to scan the fan beam profiles. The fan beam thickness (FBT) can be determined from the thickness measured with the chamber by extrapolating to an infinitesimally small chamber size. The effective output was derived from the integral under the dose profile divided by the FBT. This was done for five FBTs and demonstrated a sharp fall off in dose when the FBT decreased below 8 mm. Similar techniques can be applied to other IMRT techniques to improve the characterization of various beam parameters.
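The chamber-size extrapolation used for the fan beam thickness can be sketched as below; the readings are synthetic (exactly linear in chamber size), chosen so the zero-size intercept is unambiguous:

```python
import numpy as np

# Hypothetical fan-beam thickness (FBT) readings with plane-parallel
# chambers of different active sizes: a larger chamber blurs the
# profile and over-reads the thickness (synthetic, exactly linear).
chamber_size_mm = np.array([1.0, 2.0, 3.5, 5.0])
measured_fbt_mm = 8.0 + 0.5 * chamber_size_mm

# Extrapolate to an infinitesimally small chamber (size -> 0): the
# intercept of the linear fit is the reported FBT.
slope, intercept = np.polyfit(chamber_size_mm, measured_fbt_mm, 1)
true_fbt_mm = intercept
```

The effective output in the record is then the integral under the measured dose profile divided by this extrapolated FBT.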
Amore, Paolo; Fernandez, Francisco M; Rösler, Boris
2015-01-01
We apply second-order finite differences to calculate the lowest eigenvalues of the Helmholtz equation for complicated non-tensor domains in the plane, using different grids which sample exactly the border of the domain. We show that applying Richardson and Padé-Richardson extrapolation to a set of finite difference eigenvalues corresponding to different grids yields extremely precise values. When possible we have assessed the precision of our extrapolations by comparing them with the highly precise results obtained using the method of particular solutions. Our empirical findings suggest an asymptotic nature of the FD series. In all the cases studied, we are able to report numerical results which are more precise than those available in the literature.
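Richardson extrapolation as used here (combining finite-difference eigenvalues computed on different grids to cancel the leading error term) can be sketched as follows; the test problem, the lowest Dirichlet eigenvalue of -u'' on [0,1], is a standard stand-in, not one of the paper's domains:

```python
import math

def richardson(vals, hs, p=2):
    """One Richardson-extrapolation pass for values f(h) = L + C*h**p + ...

    vals[k] was computed with step hs[k] (decreasing); consecutive pairs
    are combined so the leading h**p error term cancels exactly.
    """
    out = []
    for k in range(len(vals) - 1):
        r = (hs[k] / hs[k + 1]) ** p
        out.append((r * vals[k + 1] - vals[k]) / (r - 1))
    return out

# Second-order FD eigenvalue of -u'' on [0,1] (exact answer pi**2):
# the discrete lowest eigenvalue is lam(h) = (2/h**2)*(1 - cos(pi*h)).
hs = [0.1, 0.05, 0.025]
lams = [2.0 / h**2 * (1.0 - math.cos(math.pi * h)) for h in hs]
improved = richardson(lams, hs, p=2)   # error drops from O(h**2) to O(h**4)
```

Repeating the pass on the improved values gives the full Richardson table; the Padé variant in the paper replaces the polynomial error model with a rational one.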
Czech Academy of Sciences Publication Activity Database
Mejsnar, Jan; Sokol, Zbyněk; Pešice, Petr
Oberpfaffenhofen-Wessling : Institut für Physik der Atmosphäre, 2014. [ERAD 2014 - 8th European Conference on Radar in Meteorology and Hydrology. 01.09.2014-05.09.2014, Garmisch-Partenkirchen] Institutional support: RVO:68378289 Subject RIV: DG - Atmosphere Sciences, Meteorology http://www.pa.op.dlr.de/erad2014/programme/ShortAbstracts/262_short.pdf
International Nuclear Information System (INIS)
The absorbed dose to soft-tissue equivalent material imparted by ophthalmologic applicators (90 Sr/90 Y, 1850 MBq) was determined using an extrapolation chamber with variable electrode spacing. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% up to 20.4% relative to estimates obtained with a second-degree polynomial regression model; at the same time, an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be concluded that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
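The linear-versus-quadratic issue raised in this record can be reproduced with synthetic data: when the extrapolation curve has genuine curvature, a straight-line fit biases the slope at zero spacing, while a second-degree polynomial recovers it. All numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical extrapolation-curve readings (ionization charge vs.
# electrode spacing) with deliberate curvature, as can occur when the
# source is in contact with the chamber. True slope at d -> 0 is 2.0.
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])   # spacing, mm
q = 2.0 * d + 0.15 * d**2                      # charge, nC (synthetic)

slope_lin = np.polyfit(d, q, 1)[0]    # straight-line model
slope_quad = np.polyfit(d, q, 2)[1]   # linear coefficient of quadratic fit

# The quadratic model recovers the true initial slope; the straight
# line is biased by the curvature, and that bias propagates into the
# absorbed-dose estimate.
```

Whether the linear fit over- or under-estimates the dose depends on the sign of the curvature; the record reports underestimation of 17.7% to 20.4% for its geometry.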
Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow
International Nuclear Information System (INIS)
The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated
Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation.
Czech Academy of Sciences Publication Activity Database
Csonka, G. I.; Kaminský, Jakub
2011-01-01
Roč. 7, č. 4 (2011), s. 988-997. ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : MP2 * basis set extrapolation * saccharides Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.215, year: 2011
Kinetic energy of solid neon by Monte Carlo with improved Trotter- and finite-size extrapolation
Cuccoli, Alessandro; Macchi, Alessandro; Pedrolli, Gaia; Tognetti, Valerio; Vaia, Ruggero
1997-01-01
The kinetic energy of solid neon is calculated by a path-integral Monte Carlo approach with a refined Trotter- and finite-size extrapolation. These accurate data present significant quantum effects up to temperature T=20 K. They confirm previous simulations and are consistent with recent experiments.
Pre-operational characteristics of a mini-extrapolation chamber developed at IPEN-SP, Brazil
International Nuclear Information System (INIS)
A mini-extrapolation chamber was developed at IPEN for the calibration of 90 Sr + 90 Y beta radiation sources. The pre-operational characteristics (saturation curve, ion collection efficiency and polarity effects) were determined, and the results were highly satisfactory. (author)
Nowcasting of precipitation by an NWP model using assimilation of extrapolated radar reflectivity.
Czech Academy of Sciences Publication Activity Database
Sokol, Zbyněk; Zacharov, Petr, jr.
2012-01-01
Roč. 138, č. 665 (2012), s. 1072-1082. ISSN 0035-9009 Institutional support: RVO:68378289 Keywords : precipitation forecast * radar extrapolation Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 3.327, year: 2012 http://onlinelibrary.wiley.com/doi/10.1002/qj.970/abstract
International Nuclear Information System (INIS)
Graphical abstract: [Display omitted] Research highlights: • GlyD1 exhibits inhibiting properties more than GlyD2 and Gly. • Inhibition efficiency increases with inhibitor concentration. • Inhibition efficiency decreases with temperature, suggesting physical adsorption. • Validation of corrosion rates measured by the Tafel extrapolation method is confirmed. - Abstract: A newly synthesized glycine derivative (GlyD1), 2-(4-(dimethylamino)benzylamino)acetic acid hydrochloride, was used to control mild steel corrosion in 4.0 M H2SO4 solutions at different temperatures (278-338 K). Tafel extrapolation, linear polarization resistance (LPR) and impedance methods were used to test corrosion inhibitor efficiency. An independent method of chemical analysis, namely ICP-AES (inductively coupled plasma atomic emission spectrometry), was also used to test the validity of corrosion rates measured by the Tafel extrapolation method. Results obtained were compared with an available glycine derivative (GlyD2) and glycine (Gly). Tafel polarization measurements revealed that the three tested inhibitors function as mixed-type compounds. The inhibition efficiency increased with increase in inhibitor concentration and decreased with temperature, suggesting the occurrence of physical adsorption. The adsorptive behaviour of the three inhibitors followed a Temkin-type isotherm, and the standard free energy changes of adsorption (ΔG°ads) were evaluated for the three tested inhibitors as a function of temperature. The inhibition performance of GlyD1 was much better than those of GlyD2 and Gly itself. Results obtained from the different corrosion evaluation techniques were in good agreement.
Rong, Lu; Wang, Dayong; Zhou, Xun; Huang, Haochong; Li, Zeyu; Wang, Yunxin
2014-01-01
We report here on terahertz (THz) digital holography on a biological specimen. A continuous-wave (CW) THz in-line holographic setup was built based on a 2.52 THz CO2 pumped THz laser and a pyroelectric array detector. We introduced novel statistical method of obtaining true intensity values for the pyroelectric array detector's pixels. Absorption and phase-shifting images of a dragonfly's hind wing were reconstructed simultaneously from single in-line hologram. Furthermore, we applied phase retrieval routines to eliminate twin image and enhanced the resolution of the reconstructions by hologram extrapolation beyond the detector area. The finest observed features are 35 {\\mu}m width cross veins.
DEFF Research Database (Denmark)
Kissling, W. Daniel; Dalby, Lars
2014-01-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species’ evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global and clade-wide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continent-wide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%).
Internal and external validation showed that: (1) extrapolations were most reliable for primary food items; (2) several diet categories (“Animal”, “Mammal”, “Invertebrate”, “Plant”, “Seed”, “Fruit”, and “Leaf”) had high proportions of correctly predicted diet ranks; and (3) the potential of correctly extrapolating specific diet categories varied both within and among clades. Global maps of species richness and proportion showed congruence among trophic levels, but also substantial discrepancies between dietary guilds. MammalDIET provides a comprehensive, unique and freely available dataset on diet preferences for all terrestrial mammals worldwide. It enables broad-scale analyses for specific trophic levels and dietary guilds, and a first assessment of trait conservatism in mammalian diet preferences at a global scale. The digitalization, extrapolation and validation procedures could be transferable to other trait data and taxa.
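The hierarchical fill-in procedure (species, then genus, then family) can be sketched as follows; the taxa and diet labels form a toy dataset, and the real MammalDIET procedure also draws on other species within the same genus, which this sketch omits:

```python
def extrapolate_diet(species, species_diet, genus_diet, family_diet, taxonomy):
    """Fill a missing species-level diet record from coarser taxonomic
    levels, mimicking the species -> genus -> family fallback order."""
    if species in species_diet:
        return species_diet[species], "species"
    genus, family = taxonomy[species]
    if genus in genus_diet:
        return genus_diet[genus], "genus"
    if family in family_diet:
        return family_diet[family], "family"
    return None, "unresolved"

# Hypothetical miniature dataset
taxonomy = {"Panthera leo": ("Panthera", "Felidae"),
            "Panthera onca": ("Panthera", "Felidae"),
            "Felis catus": ("Felis", "Felidae")}
species_diet = {"Panthera leo": "Animal"}
genus_diet = {"Panthera": "Animal"}
family_diet = {"Felidae": "Animal"}

print(extrapolate_diet("Panthera onca", species_diet, genus_diet,
                       family_diet, taxonomy))
# -> ('Animal', 'genus')
```

Recording the level each answer came from, as above, is what makes the jack-knife and external validations in the record possible.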
Energy Technology Data Exchange (ETDEWEB)
Amin, Mohammed A., E-mail: maaismail@yahoo.co [Materials and Corrosion Lab (MCL), Department of Chemistry, Faculty of Science, Taif University, 888 Hawiya (Saudi Arabia); Department of Chemistry, Faculty of Science, Ain shams University, 11566 Abbassia, Cairo (Egypt); Ahmed, M.A. [Physics Department, Faculty of Science, Taif University, 888 Hawiya (Saudi Arabia); Arida, H.A. [Materials and Corrosion Lab (MCL), Department of Chemistry, Faculty of Science, Taif University, 888 Hawiya (Saudi Arabia); Arslan, Taner [Department of Chemistry, Eskisehir Osmangazi University, 26480 Eskisehir (Turkey); Saracoglu, Murat [Faculty of Education, Erciyes University, 38039 Kayseri (Turkey); Kandemirli, Fatma [Department of Chemistry, Nigde University, 41000 Nigde (Turkey)
2011-02-15
Research highlights: TX-305 exhibits inhibiting properties for iron corrosion more than TX-165 and TX-100. Inhibition efficiency increases with temperature, suggesting chemical adsorption. The three tested surfactants act as mixed-type inhibitors with cathodic predominance. Validation of corrosion rates measured by the Tafel extrapolation method is confirmed. - Abstract: The inhibition characteristics of non-ionic surfactants of the TRITON-X series, namely TRITON-X-100 (TX-100), TRITON-X-165 (TX-165) and TRITON-X-305 (TX-305), on the corrosion of iron were studied in 1.0 M HCl solutions as a function of inhibitor concentration (0.005-0.075 g L{sup -1}) and solution temperature (278-338 K). Measurements were conducted based on the Tafel extrapolation method. Electrochemical frequency modulation (EFM), a non-destructive corrosion measurement technique that can directly give values of corrosion current without prior knowledge of Tafel constants, is also presented. Experimental corrosion rates determined by the Tafel extrapolation method were compared with corrosion rates obtained by the EFM technique and by an independent method of chemical analysis. The chemical method of confirmation of the corrosion rates involved determination of the dissolved cation, using ICP-AES (inductively coupled plasma atomic emission spectrometry). The aim was to confirm the validity of corrosion rates measured by the Tafel extrapolation method. Results obtained showed that, in all cases, the inhibition efficiency increased with increase in temperature, suggesting that chemical adsorption occurs. The adsorptive behaviour of the three surfactants followed a Temkin-type isotherm. The standard free energies of adsorption decreased with temperature, reflecting better inhibition performance. These findings confirm chemisorption of the tested inhibitors. Thermodynamic activation functions of the dissolution process were also calculated as a function of each inhibitor concentration.
All the results obtained from the methods employed are in reasonable agreement.
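A minimal sketch of the Tafel extrapolation method that recurs throughout these corrosion records: fit the linear branches of log|i| versus overpotential well away from the corrosion potential, then read the corrosion current from their intercepts at zero overpotential. The kinetic parameters below are assumed for illustration, not taken from any of the papers:

```python
import numpy as np

# Synthetic polarization data from an assumed Butler-Volmer response:
# i = i_corr * (10**(eta/ba) - 10**(-eta/bc)), all values illustrative.
i_corr, ba, bc = 1e-5, 0.060, 0.120     # A/cm^2, V/decade, V/decade
eta = np.linspace(-0.25, 0.25, 101)     # overpotential vs. E_corr, V

i = i_corr * (10.0 ** (eta / ba) - 10.0 ** (-eta / bc))

# Tafel extrapolation: straight-line fits to log10|i| for |eta| > 100 mV;
# each intercept at eta = 0 estimates log10(i_corr).
anodic = eta > 0.10
cathodic = eta < -0.10
log_i0_an = np.polyfit(eta[anodic], np.log10(i[anodic]), 1)[1]
log_i0_ca = np.polyfit(eta[cathodic], np.log10(np.abs(i[cathodic])), 1)[1]
i_corr_est = 10.0 ** ((log_i0_an + log_i0_ca) / 2.0)
```

The EFM and ICP-AES measurements in these records serve as independent cross-checks on exactly this extrapolated corrosion current.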
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicit...
Directory of Open Access Journals (Sweden)
Dalila Khalfa
2014-01-01
Full Text Available Improving knowledge of wind shear models to strengthen their reliability is a crucial issue, notably for energy investors who need to accurately predict the average wind speed at different turbine hub heights and thus the expected wind energy output. This is particularly helpful during the feasibility study, to abate the costs of a wind power project. Extrapolation laws were found to provide the finest representation of wind speed as a function of height, thus avoiding the installation of tall towers or even more expensive devices such as LIDAR or SODAR. The proposed models are based on theories that determine the vertical wind profile from implicit relationships. However, these empirical extrapolation formulas have been developed for specific meteorological conditions and sites appropriate for wind turbines, which is why several studies have been made by various authors to determine the formula best suited to their own conditions. This study continues the research issue addressed in a previous study, where some extrapolation models were tested and compared by extrapolating the energy resources at different heights; comparable results were returned by the power law and the log law, which indeed proved preferable. In this context, this study deals with the assessment of several wind speed extrapolation laws (six laws) by comparing the analytical results obtained with real data for two different meteorological sites, with different roughness, different altitudes and different measurement periods. The first site studied is an extremely rough site with daily measurements from March 2007; wind speed measurements are available at four different heights of the Gantour/Gao site, obtained by the water, energy and environment company, Senegal.
The second site studied is a slightly rough site with monthly measurements for 2005; wind speed measurements are available at three different heights of the Kuujjuarapik site, obtained by Hydro-Quebec Energy Helimax Canada. The study aims to determine the effectiveness of, and concordance between, the extrapolation laws and the real measured data. The results show that the adjusted law is adequate for an extremely rough site, and that the modified laws, together with two other laws, are adequate for a slightly rough site. The experimental results and numerical calculations exploited for the evaluation of the Weibull parameters yield shape factors k greater than 9. An increase in altitude often causes an increase in the Weibull parameter values; however, our results show that the shape factor k can take values lower than those established at the reference altitude.
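The two extrapolation laws the previous study found preferable, the power law and the log law, can be sketched as follows; the exponent 1/7, the roughness length 0.03 m and the measured speed are illustrative assumptions, not values from the study:

```python
import math

def power_law(v_ref, z_ref, z, alpha=1.0 / 7.0):
    """Power-law vertical wind profile: v(z) = v_ref * (z/z_ref)**alpha."""
    return v_ref * (z / z_ref) ** alpha

def log_law(v_ref, z_ref, z, z0=0.03):
    """Logarithmic profile with roughness length z0, neutral stability."""
    return v_ref * math.log(z / z0) / math.log(z_ref / z0)

v10 = 6.0                              # hypothetical 10 m measurement, m/s
v80_pow = power_law(v10, 10.0, 80.0)   # hub-height estimate, power law
v80_log = log_law(v10, 10.0, 80.0)     # hub-height estimate, log law
```

Site-specific fitting of alpha and z0 against multi-height measurements, as done in the study, is what separates the adequate laws from the rest.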
International Nuclear Information System (INIS)
Graphical abstract: The pseudo-cubic cobalt oxide microparticles have been successfully synthesized by a solution combustion method using Co(NO3)2·6H2O (oxidizer) and dextrose (sugar; fuel). The as-synthesized Co3O4 microparticles are crystalline, and Rietveld refinement of calcined samples exhibited a cubic structure with space group Fd3m (No. 227). The generated Co3O4 microparticles were used to fabricate Zn–Co3O4 composite thin films for corrosion protection. Highlights: • Synthesis of pseudo-cubic Co3O4 microparticles by a solution combustion method. • As-prepared Co3O4 compounds are calcined and structurally characterized. • Prepared Co3O4 is utilized for the fabrication of Zn–Co3O4 composite thin films. - Abstract: Microcrystalline cobalt oxide (Co3O4) powder was successfully synthesized by a simple, fast, economical and eco-friendly solution-combustion method. The as-synthesized powder was calcined for an hour at temperatures ranging from 100 to 900 °C. The crystallite size, morphology, and chemical state of the synthesized powders were characterized by powder XRD, TG-DTA, XPS, SEM/EDAX, TEM and FT-IR spectral methods. The as-synthesized Co3O4 powder was single-crystalline, and Rietveld refinement of calcined samples exhibited a cubic structure with space group Fd3m (No. 227). The effect of calcination temperature on crystallite size and morphology was assessed. Scanning electron micrographs show uniform, randomly oriented pseudo-cubic particles with a porous-like morphology, and EDAX measurement showed the chemical composition. The thermal behavior of the as-synthesized compound was examined. The TEM results revealed that the particles are pseudo-cubic in nature, with a diameter of 0.2–0.6 μm and a length of 0.9–1.2 μm. The crystallite size increased with increase of calcination temperature.
The synthesized Co3O4 powder was used to fabricate Zn–Co3O4 composite thin films, and their corrosion behavior was analyzed by anodic polarization, Tafel extrapolation and electrochemical impedance spectroscopy. The results indicate that the Zn–Co3O4 composite thin films have potential applications in corrosion protection.
131I-CRTX internal dosimetry: animal model and human extrapolation
International Nuclear Information System (INIS)
Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. 125I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, 125I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice were determined, as well as in the implanted tumor, for 131I-Crtx. Dose results obtained for the animal model were extrapolated to humans, assuming a similar concentration ratio among various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I in the tissue were considered in the dose calculations. (author)
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
International Nuclear Information System (INIS)
We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q2 in the range 0.2-1.3 GeV2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μpGEp/GMp. This quantity decreases with Q2 in a way qualitatively consistent with recent experimental results.
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
Shanahan, P. E.; Horsley, R.; Nakamura, Y.; Pleiter, D.; Rakow, P. E. L.; Schierholz, G.; Stüben, H.; Thomas, A. W.; Young, R. D.; Zanotti, J. M.; Cssm; Qcdsf/Ukqcd Collaborations
2014-08-01
We apply a formalism inspired by heavy-baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q2 in the range 0.2-1.3 GeV2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small relative to the uncertainties of the calculation. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μpGEp/GMp. This quantity decreases with Q2 in a way qualitatively consistent with recent experimental results.
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
Shanahan, P E; Young, R D; Zanotti, J M; Horsley, R; Nakamura, Y; Pleiter, D; Rakow, P E L; Schierholz, G; Stüben, H
2014-01-01
We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical $2+1-$flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of $Q^2$ in the range 0.2-1.3 GeV$^2$. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio $\mu_p G^p_E/G^p_M$. This quantity decreases with $Q^2$ in a way qualitatively consistent with recent experimental results.
The immunogenicity of biosimilar infliximab: can we extrapolate the data across indications?
Ben-Horin, Shomron; Heap, Graham A; Ahmad, Tariq; Kim, HoUng; Kwon, TaekSang; Chowers, Yehuda
2015-09-01
Biopharmaceuticals or 'biologics' have revolutionized the treatment of many diseases. However, some patients generate an immune response to such drugs, potentially limiting clinical efficacy and safety. Infliximab (Remicade(®)) is a monoclonal antibody used to treat several immune-mediated inflammatory disorders. A biosimilar of infliximab, CT-P13 (Remsima(®), Inflectra(®)), has recently been approved in Europe for all indications in which infliximab is approved. Approval of CT-P13 was based in part on extrapolation of clinical trial data from two indications (rheumatoid arthritis and ankylosing spondylitis) to all other indications, including inflammatory bowel disease. This review discusses the validity of extrapolating immunogenicity data across indications - a process adopted by the EMA as part of their biosimilar approval process - with a focus on CT-P13. PMID:26395532
{sup 131}I-SPGP internal dosimetry: animal model and human extrapolation
Energy Technology Data Exchange (ETDEWEB)
Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br; Figueiredo, Suely Gomes de [Universidade Federal do Espirito Santo, (UFES), Vitoria, ES (Brazil). Dept. de Ciencias Fisiologicas. Lab. de Quimica de Proteinas
2009-07-01
Scorpaena plumieri is commonly called moreia-ati or manganga and is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP, an isolated protein from the S. plumieri fish, possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the development (design) of antitumoral drugs. In the present work, Soprani's {sup 125}I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to {sup 131}I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans, assuming a similar ratio for various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I were considered. (author)
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
Energy Technology Data Exchange (ETDEWEB)
Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M. [Adelaide Univ., SA (Australia). ARC Centre of Excellence in Particle Physics at the Terascale and CSSM; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe, Hyogo (Japan); Pleiter, D. [Forschungszentrum Juelich (Germany). JSC; Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Div.; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Stueben, H. [Hamburg Univ. (Germany). Regionales Rechenzentrum; Collaboration: CSSM and QCDSF/UKQCD Collaborations
2014-03-15
We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q{sup 2} in the range 0.2-1.3 GeV{sup 2}. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μ{sub p}G{sub E}{sup p}/G{sub M}{sup p}. This quantity decreases with Q{sup 2} in a way qualitatively consistent with recent experimental results.
{sup 131}I-CRTX internal dosimetry: animal model and human extrapolation
Energy Technology Data Exchange (ETDEWEB)
Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br
2009-07-01
Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. {sup 125}I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, {sup 125}I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice were determined, as well as in the implanted tumor, for {sup 131}I-Crtx. Dose results obtained for the animal model were extrapolated to humans, assuming a similar concentration ratio among various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I in the tissue were considered in the dose calculations. (author)
131I-SPGP internal dosimetry: animal model and human extrapolation
International Nuclear Information System (INIS)
Scorpaena plumieri is commonly called moreia-ati or manganga and is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP, an isolated protein from the S. plumieri fish, possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the development (design) of antitumoral drugs. In the present work, Soprani's 125I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to 131I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans, assuming a similar ratio for various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I were considered. (author)
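The organ-mass extrapolation step shared by these dosimetry records (mouse uptake scaled to human organs under a preserved concentration ratio) can be sketched as follows; every number and helper name below is illustrative, not from the papers:

```python
# Hedged sketch of the "similar concentration ratio" assumption:
# the mouse activity concentration (fraction of injected dose per
# gram) is converted to a concentration ratio relative to the whole
# body, then applied to human organ masses such as those of the
# Cristy/Eckerman phantom. All values are made up for illustration.
mouse_fid_per_g = {"liver": 0.020, "kidneys": 0.015, "tumor": 0.050}
human_organ_mass_g = {"liver": 1800.0, "kidneys": 310.0, "tumor": 20.0}
mouse_body_g, human_body_g = 25.0, 73000.0   # adult phantom ~73 kg

def human_organ_fraction(organ):
    """Fraction of injected activity in a human organ, assuming the
    mouse organ-to-whole-body concentration ratio is preserved."""
    conc_ratio = mouse_fid_per_g[organ] * mouse_body_g   # dimensionless
    return conc_ratio * human_organ_mass_g[organ] / human_body_g
```

The resulting cumulated activities are what the MIRD formalism multiplies by S-values (for both penetrating and non-penetrating emissions of 131I) to obtain organ doses.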
Eco-label - simple environmental choice / Andres Viia, Külliki Tafel
Viia, Andres
2003-01-01
The authors explain the nature of eco-labelling and its necessity for informing consumers about products and services that are less harmful to the environment. Examples are given of regional and national eco-labels in the EU, of better-known eco-labels outside Europe, of warning and informative environmental labels, and of pseudo eco-labels. See also: North-East Estonia - a seat of an environment-friendly batteries' recycling
Mediatransformasie dek die tafel vir ’n nuwe joernalistiek [Media transformation sets the scene for a new journalism]
Directory of Open Access Journals (Sweden)
J. D. Froneman
1997-05-01
Media transformation sets the scene for a new journalism. Since 1993 the South African media have been going through a period of fundamental transformation. This process has resulted in a phenomenon of black journalists, and whites with credentials as anti-apartheid activists, moving into senior editorial positions at the South African Broadcasting Corporation (SABC) as well as at newspapers. This article briefly describes the said transformational steps within the framework of existing media models, inter alia the developmental, social-responsibility and democratic-participatory models. Journalism covering the arts, culture and literature is thereby placed within a broader media context. It is concluded that the dominant media model(s) will determine the kind of journalism we can expect in future.
Standard electrode potential, Tafel equation, and the solvation thermodynamics
International Nuclear Information System (INIS)
Equilibrium in the electronic subsystem across the solution-metal interface is considered to connect the standard electrode potential to the statistics of localized electronic states in solution. We argue that a correct derivation of the Nernst equation for the electrode potential requires a careful separation of the relevant time scales. An equation for the standard metal potential is derived linking it to the thermodynamics of solvation. The Anderson-Newns model for electronic delocalization between the solution and the electrode is combined with a bilinear model of solute-solvent coupling introducing nonlinear solvation into the theory of heterogeneous electron transfer. We are therefore able to address the question of how nonlinear solvation affects electrochemical observables. The transfer coefficient of electrode kinetics is shown to be equal to the derivative of the free energy, or generalized force, required to shift the unoccupied electronic level in the bulk. The transfer coefficient thus directly quantifies the extent of nonlinear solvation of the redox couple. The current model allows the transfer coefficient to deviate from the value of 0.5 of the linear solvation models at zero electrode overpotential. The electrode current curves become asymmetric with respect to the change in the sign of the electrode overpotential.
Lee, Howard
2013-01-01
CT-P13, the world’s first biosimilar monoclonal antibody to infliximab, was approved for marketing in South Korea for all six indications of infliximab, an approval that Europe may follow, although the product was tested only in rheumatoid arthritis (RA), with a limited pharmacokinetic comparison in ankylosing spondylitis. However, the extrapolation of the efficacy and safety findings of CT-P13 in RA to the other indications appears scientifically challenging when assessed by the current regulatory r...
Curvature of the chiral pseudo-critical line in QCD: continuum extrapolated results
Bonati, Claudio; Mariti, Marco; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco
2015-01-01
We determine the curvature of the pseudo-critical line of strong interactions by means of numerical simulations at imaginary chemical potentials. We consider $N_f=2+1$ stout improved staggered fermions with physical quark masses and the tree level Symanzik gauge action, and explore four different sets of lattice spacings, corresponding to $N_t = 6,8,10,12$, in order to extrapolate results to the continuum limit. Our final estimate is $\kappa = 0.0135(20)$.
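A continuum extrapolation of the kind described, fitting results at several lattice spacings and taking the a → 0 limit, can be sketched as follows. The data here are synthetic, and the assumed leading O(a²) dependence is a generic illustration, not the paper's analysis.

```python
import numpy as np

# Synthetic curvature values at four lattice spacings (a ~ 1/N_t, schematic units),
# generated from an assumed kappa(a) = kappa_0 + c*a^2 plus small "noise".
n_t = np.array([6.0, 8.0, 10.0, 12.0])
a2 = (1.0 / n_t) ** 2
kappa = 0.0135 + 0.05 * a2 + np.array([1e-4, -5e-5, 2e-5, 0.0])

# Linear least-squares fit in a^2; the intercept is the continuum estimate.
slope, intercept = np.polyfit(a2, kappa, 1)
kappa_continuum = intercept
```

In practice the fit would be weighted by the statistical errors of each point, and the fit window varied to estimate the systematic uncertainty.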
A New Code for Nonlinear Force-Free Field Extrapolation of the Global Corona
Jiang, Chaowei; Feng, Xueshang; Xiang, Changqing
2012-01-01
Reliable measurements of the solar magnetic field are still restricted to the photosphere, and our present knowledge of the three-dimensional coronal magnetic field is largely based on extrapolation from photospheric magnetograms using physical models, e.g., the nonlinear force-free field (NLFFF) model as usually adopted. Most of the currently available NLFFF codes have been developed with a computational volume such as a Cartesian box or a spherical wedge, while a global full-sphere e...
J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field
Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.
1991-01-01
Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet. The simulation was complicated by wind-tunnel background noise and the propagation of low frequency sound around the circuit.
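The simplest far-field extrapolation for such measurements is a spherical-spreading distance correction. The sketch below is only a crude stand-in for the paper's multiple-sideline source-location method: it assumes a compact source and ignores atmospheric absorption and near-field effects.

```python
import math

def extrapolate_spl(spl_db, r_near_m, r_far_m):
    """Spherical-spreading correction: the level drops by
    20*log10(r_far/r_near) dB for a compact source (no atmospheric
    absorption, no source localization)."""
    return spl_db - 20.0 * math.log10(r_far_m / r_near_m)

# Moving a measurement from a 2 m sideline to a 122 m sideline lowers the
# level by about 35.7 dB.
spl_far = extrapolate_spl(130.0, 2.0, 122.0)
```

The multiple-sideline method exists precisely because jet noise violates the compact-source assumption close to the jet, which is why surveys at several sidelines are needed to locate the distributed sources before extrapolating.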
Scientific Electronic Library Online (English)
Ortiz-Rascón, E.; Bruce, N. C.; Rodríguez-Rosales, A. A.; Garduño-Mejía, J.; Ortega-Martínez, R.
2014-02-01
This paper presents results of a time-resolved transillumination imaging method using temporal extrapolation. The temporal extrapolation is performed with the cumulant-expansion solution to the transport equation. The results obtained are compared to results of the same method but using the diffusion-approximation solution. It is found that the results are consistent, but that the cumulant-expansion method gives better resolution, by a factor of approximately 3, for the imaging process, because it gives a better estimation of the photon contribution for shorter integration times.
On the problem of extrapolating the data on Sr90 behaviour in dogs to a human organism
International Nuclear Information System (INIS)
Regularities in the metabolism of radiostrontium have been comparatively studied in dogs and man. These regularities were found to be the same, which makes it possible to extrapolate the radiostrontium doses used for dogs to the human organism.
Magnetic form factors of the octet baryons from lattice QCD and chiral extrapolation
International Nuclear Information System (INIS)
We present a 2+1-flavor lattice QCD calculation of the electromagnetic Dirac and Pauli form factors of the octet baryons. The magnetic Sachs form factor is extrapolated at six fixed values of Q2 to physical pseudoscalar masses and infinite volume using a formulation based on heavy baryon chiral perturbation theory with finite-range regularization. We properly account for omitted disconnected quark contractions using a partially-quenched effective field theory formalism. The results compare well with the experimental form factors of the nucleon and the magnetic moments of the octet baryons.
Alessandria, F; Ardito, R; Arnaboldi, C; Avignone, F T; Balata, M; Bandac, I; Banks, T I; Bari, G; Beeman, J W; Bellini, F; Bersani, A; Biassoni, M; Bloxham, T; Brofferio, C; Bryant, A; Bucci, C; Cai, X Z; Canonica, L; Capelli, S; Carbone, L; Cardani, L; Carrettoni, M; Chott, N; Clemenza, M; Cosmelli, C; Cremonesi, O; Creswick, R J; Dafinei, I; Dally, A; De Biasi, A; Decowski, M P; Deninno, M M; de Waard, A; Di Domizio, S; Ejzak, L; Faccini, R; Fang, D Q; Farach, H; Ferri, E; Ferroni, F; Fiorini, E; Foggetta, L; Freedman, S; Frossati, G; Giachero, A; Gironi, L; Giuliani, A; Gorla, P; Gotti, C; Guardincerri, E; Gutierrez, T D; Haller, E E; Han, K; Heeger, K M; Huang, H Z; Ichimura, K; Kadel, R; Kazkaz, K; Keppel, G; Kogler, L; Kolomensky, Y G; Kraft, S; Lenz, D; Li, Y L; Liu, X; Longo, E; Ma, Y G; Maiano, C; Maier, G; Martinez, C; Martinez, M; Maruyama, R H; Moggi, N; Morganti, S; Newman, S; Nisi, S; Nones, C; Norman, E B; Nucciotti, A; Orio, F; Orlandi, D; Ouellet, J; Pallavicini, M; Palmieri, V; Pattavina, L; Pavan, M; Pedretti, M; Pessina, G; Pirro, S; Previtali, E; Rampazzo, V; Rimondi, F; Rosenfeld, C; Rusconi, C; Salvioni, C; Sangiorgio, S; Schaeffer, D; Scielzo, N D; Sisti, M; Smith, A R; Stivanello, F; Taffarello, L; Terenziani, G; Tian, W D; Tomei, C; Trentalange, S; Ventura, G; Vignati, M; Wang, B; Wang, H W; Whitten, C A; Wise, T; Woodcraft, A; Xu, N; Zanotti, L; Zarra, C; Zhu, B X; Zucchelli, S
2011-01-01
The CUORE Crystal Validation Runs (CCVRs) have been carried out since the end of 2008 at the Gran Sasso National Laboratories, in order to test the performance and the radiopurity of the TeO$_2$ crystals produced at SICCAS (Shanghai Institute of Ceramics, Chinese Academy of Sciences) for the CUORE experiment. In this work the results of the first 5 validation runs are presented. Results have been obtained for bulk and surface contaminations from several nuclides. An extrapolation to the CUORE background has been performed.
Challenges for In vitro to in Vivo Extrapolation of Nanomaterial Dosimetry for Human Risk Assessment
Energy Technology Data Exchange (ETDEWEB)
Smith, Jordan N.
2013-11-01
The proliferation in types and uses of nanomaterials in consumer products has led to rapid application of conventional in vitro approaches for hazard identification. Unfortunately, assumptions pertaining to experimental design and interpretation for studies with chemicals are not generally appropriate for nanomaterials. The fate of nanomaterials in cell culture media, cellular dose to nanomaterials, cellular dose to nanomaterial byproducts, and intracellular fate of nanomaterials at the target site of toxicity all must be considered in order to accurately extrapolate in vitro results to reliable predictions of human risk.
Magnetic form factors of the octet baryons from lattice QCD and chiral extrapolation
Energy Technology Data Exchange (ETDEWEB)
Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M. [Adelaide Univ. (Australia). School of Chemistry and Physics; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe, Hyogo (Japan); Pleiter, D. [Forschungszentrum Juelich GmbH (Germany). Juelich Supercomputing Centre (JSC); Regensburg Univ. (Germany). Institut fuer Theoretische Physik; Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Division; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Stueben, H. [Hamburg Univ. (Germany). Regionales Rechenzentrum; Collaboration: CSSM and QCDSF/UKQCD Collaborations
2014-01-15
We present a 2+1-flavor lattice QCD calculation of the electromagnetic Dirac and Pauli form factors of the octet baryons. The magnetic Sachs form factor is extrapolated at six fixed values of Q{sup 2} to physical pseudoscalar masses and infinite volume using a formulation based on heavy baryon chiral perturbation theory with finite-range regularization. We properly account for omitted disconnected quark contractions using a partially-quenched effective field theory formalism. The results compare well with the experimental form factors of the nucleon and the magnetic moments of the octet baryons.
Extrapolation of the Dutch 1 MW tunable free electron maser to a 5 MW ECRH source
International Nuclear Information System (INIS)
A Free Electron Maser (FEM) is now under construction at the FOM Institute (Rijnhuizen), Netherlands, with the goal of producing 1 MW long-pulse to CW microwave output in the range 130 GHz to 250 GHz with wall-plug efficiencies of 50% (Verhoeven et al., EC-9 Conference). An extrapolated version of this device is proposed which, by scaling up the beam current, would produce microwave power levels of up to 5 MW CW in order to reduce the cost per watt and increase the power per module, thus providing the fusion community with a practical ECRH source
International Nuclear Information System (INIS)
The radioactive wastes that would be produced in demonstration and commercial fusion reactors, as extrapolated from the design database that will be provided by ITER and its supporting R and D and from a design database supplemented by advanced-physics and advanced-materials R and D programs, are identified and characterized in terms of a number of possible criteria for near-surface burial. The results indicate that there is a possibility that all fusion wastes could satisfy a ''low level'' waste criterion for ''near-surface'' burial. (orig.)
Paasschens, J C J; Beenakker, C W J
2008-01-01
The linear intensity profile of multiply scattered light in a slab geometry extrapolates to zero at a certain distance beyond the boundary. The diffusion equation with this "extrapolated boundary condition" has been used in the literature to obtain analytical formulas for the transmittance of light through the slab as a function of angle of incidence and refractive index. The accuracy of these formulas is determined by comparison with a numerical solution of the Boltzmann equation for radiative transfer.
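The "extrapolated boundary condition" mentioned above places the zero of the linearly extrapolated diffuse intensity a distance z_e outside the slab. A minimal sketch of the standard diffusion-approximation expression, with an assumed angle-averaged internal reflectance R_eff to represent an index mismatch, is:

```python
def extrapolation_length(transport_mfp, r_eff=0.0):
    """Diffusion-approximation extrapolation length beyond the slab boundary:
        z_e = (2/3) * l* * (1 + R_eff) / (1 - R_eff),
    where l* is the transport mean free path and R_eff is the angle-averaged
    internal reflectance (0 for index-matched media). The exact Milne result
    for matched media, z_e = 0.7104 * l*, differs slightly from 2/3."""
    return (2.0 / 3.0) * transport_mfp * (1.0 + r_eff) / (1.0 - r_eff)
```

Comparing this diffusion value against the Boltzmann-equation solution is precisely the kind of accuracy check the paper performs.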
Ravichandran R; Binukumar J; Sivakumar S; Krishnamurthy K; Davis C
2009-01-01
The objective of the present study is to establish radiation standards for absorbed doses for clinical high-energy linear accelerator beams. In the absence of a cobalt-60 beam for arriving at N(D,water) values for thimble chambers, we investigated the efficacy of a perspex-mounted extrapolation chamber (EC) used earlier for low-energy x-rays and beta dosimetry. An extrapolation chamber with facility for achieving variable electrode separations from 10.5 mm to 0.5 mm using a micrometer screw was use...
Richmond, Orien M. W.; McEntee, Jay P.; Hijmans, Robert J; Brashares, Justin S
2010-01-01
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial “Pleistocene rewilding” proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate...
Finite-Element Extrapolation of Myocardial Structure Alterations Across the Cardiac Cycle in Rats.
David Gomez, Arnold; Bull, David A; Hsu, Edward W
2015-10-01
Myocardial microstructures are responsible for key aspects of cardiac mechanical function. Natural myocardial deformation across the cardiac cycle induces measurable structural alteration, which varies across disease states. Diffusion tensor magnetic resonance imaging (DT-MRI) has become the tool of choice for myocardial structural analysis. Yet, obtaining the comprehensive structural information of the whole organ, in 3D and time, for subject-specific examination is fundamentally limited by scan time. Therefore, subject-specific finite-element (FE) analysis of a group of rat hearts was implemented for extrapolating a set of initial DT-MRI to the rest of the cardiac cycle. The effect of material symmetry (isotropy, transverse isotropy, and orthotropy), structural input, and warping approach was observed by comparing simulated predictions against in vivo MRI displacement measurements and DT-MRI of an isolated heart preparation at relaxed, inflated, and contracture states. Overall, the results indicate that, while ventricular volume and circumferential strain are largely independent of the simulation strategy, structural alteration predictions are generally improved with the sophistication of the material model, which also enhances torsion and radial strain predictions. Moreover, whereas subject-specific transversely isotropic models produced the most accurate descriptions of fiber structural alterations, the orthotropic models best captured changes in sheet structure. These findings underscore the need for subject-specific input data, including structure, to extrapolate DT-MRI measurements across the cardiac cycle. PMID:26299478
Tadesse, T.; Inhester, B.; Pevtsov, A. (DOI: 10.1007/s11207-011-9764-z)
2011-01-01
Extrapolation codes for modelling the magnetic field in the corona in cartesian geometry do not take the curvature of the Sun's surface into account and can only be applied to relatively small areas, e.g., a single active region. We apply a method for nonlinear force-free coronal magnetic field modelling of photospheric vector magnetograms in spherical geometry which allows us to study the connectivity between multiple active regions. We use vector magnetograph data from the Synoptic Optical Long-term Investigations of the Sun survey (SOLIS)/Vector Spectromagnetograph (VSM) to model the coronal magnetic field, where we study three neighbouring magnetically connected active regions (ARs: 10987, 10988, 10989) observed on 28, 29, and 30 March 2008, respectively. We compare the magnetic field topologies and the magnetic energy densities and study the connectivities between the active regions (ARs). We have studied the time evolution of the magnetic field over the period of three days and found no major changes in...
Extrapolation of lattice QCD results beyond the power-counting regime
Leinweber, D B; Young, R D
2005-01-01
Resummation of the chiral expansion is necessary to make accurate contact with current lattice simulation results of full QCD. Resummation techniques including relativistic formulations of chiral effective field theory and finite-range regularization (FRR) techniques are reviewed, with an emphasis on using lattice simulation results to constrain the parameters of the chiral expansion. We illustrate how the chiral extrapolation problem has been solved and use FRR techniques to identify the power-counting regime (PCR) of chiral perturbation theory. To fourth-order in the expansion at the 1% tolerance level, we find $0 \le m_\pi \le 0.18$ GeV for the PCR, extending only a small distance beyond the physical pion mass.
Energy Technology Data Exchange (ETDEWEB)
King, A W
1991-12-31
A general procedure for quantifying regional carbon dynamics by spatial extrapolation of local ecosystem models is presented. The procedure uses Monte Carlo simulation to calculate the expected value of one or more local models, explicitly integrating the spatial heterogeneity of variables that influence ecosystem carbon flux and storage. These variables are described by empirically derived probability distributions that are input to the Monte Carlo process. The procedure provides large-scale regional estimates based explicitly on information and understanding acquired at smaller and more accessible scales. Results are presented from an earlier application to seasonal atmosphere-biosphere CO{sub 2} exchange for circumpolar "subarctic" latitudes (64°N-90°N). Results suggest that, under certain climatic conditions, these high northern ecosystems could collectively release 0.2 Gt of carbon per year to the atmosphere. I interpret these results with respect to questions about global biospheric sinks for atmospheric CO{sub 2}.
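The extrapolation procedure, taking the Monte Carlo expectation of a local model over empirical input distributions, can be sketched as below. The local model and both distributions are invented stand-ins for illustration, not those of the study.

```python
import random

def local_flux(temperature_c, soil_carbon):
    """Hypothetical local ecosystem model: net carbon flux (arbitrary units)."""
    return 0.02 * soil_carbon - 0.5 * max(temperature_c, 0.0)

def regional_flux(n_samples=50_000, seed=1):
    """Monte Carlo expected value of the local model, integrating over the
    spatial heterogeneity of its inputs via assumed probability distributions."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        t = rng.gauss(-5.0, 8.0)           # assumed temperature distribution
        c = rng.lognormvariate(4.0, 0.5)   # assumed soil-carbon distribution
        total += local_flux(t, c)
    return total / n_samples
```

Scaling the expected per-unit-area flux by the regional area then yields the regional total, which is the step that produces estimates like the 0.2 Gt/yr figure above.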
Energy Technology Data Exchange (ETDEWEB)
Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)
2005-04-01
Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.
Modeling of systematic retention of beryllium in rats. Extrapolation to humans
International Nuclear Information System (INIS)
In this work, we analyzed different approaches assayed in order to numerically describe the systemic behaviour of beryllium. The experimental results used in this work were previously obtained by Furchner et al. (1973), using Sprague-Dawley rats and other animal species. Furchner's work includes the model obtained for whole-body retention in rats but not for each target organ. In this work we present the results obtained by modeling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with others previously published.
Extrapolation of lattice QCD results beyond the power-counting regime
International Nuclear Information System (INIS)
Resummation of the chiral expansion is necessary to make accurate contact with current lattice simulation results of full QCD. Resummation techniques including relativistic formulations of chiral effective field theory and finite-range regularization (FRR) techniques are reviewed, with an emphasis on using lattice simulation results to constrain the parameters of the chiral expansion. We illustrate how the chiral extrapolation problem has been solved and use FRR techniques to identify the power-counting regime (PCR) of chiral perturbation theory. To fourth-order in the expansion at the 1% tolerance level, we find 0 ≤ mπ ≤ 0.18 GeV for the PCR, extending only a small distance beyond the physical pion mass
Modeling the systemic retention of beryllium in rat. Extrapolation to human
International Nuclear Information System (INIS)
In this work, we analyzed different approaches assayed in order to numerically describe the systemic behaviour of beryllium. The experimental results used in this work were previously obtained by Furchner et al. (1973), using Sprague-Dawley rats and other animal species. Furchner's work includes the model obtained for whole-body retention in rats, but not for each target organ. In this work we present the results obtained by modeling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with others previously published. (Author) 12 refs
Andriessen, J. H. T. H.; van der Horst-Bruinsma, I. E.; ter Haar Romeny, B. M.
1989-05-01
The present phase of the clinical evaluation within the Dutch PACS project mainly focuses on the development and evaluation of a PACSystem for a few departments in the Utrecht University Hospital (UUH). A report on the first clinical experiences and a detailed cost/savings analysis of the PACSystem in the UUH are presented elsewhere. However, an assessment of the wider financial and organizational implications for hospitals and for the health sector is also needed. To this end a model for (financial) cost assessment of PACSystems is being developed by BAZIS. Learning from the actual pilot implementation in the UUH, we realized that general Technology Assessment (TA) also calls for an extrapolation of the medical and organizational effects. After a short excursion into the various approaches towards TA, this paper discusses the (inter)organizational dimensions relevant to the development of the necessary extrapolation models.
Track extrapolation and distribution for the CDF-II trigger system
International Nuclear Information System (INIS)
The CDF-II experiment is a multipurpose detector designed to study a wide range of processes observed in the high energy proton-antiproton collisions produced by the Fermilab Tevatron. With event rates greater than 1 MHz, the CDF-II trigger system is crucial for selecting interesting events for subsequent analysis. This document provides an overview of the Track Extrapolation System (XTRP), a component of the CDF-II trigger system. The XTRP is a fully digital system that is utilized in the track-based selection of high momentum lepton and heavy flavor signatures. The design of the XTRP system includes five different custom boards utilizing discrete and FPGA technology residing in a single VME crate. We describe the design, construction, commissioning and operation of this system
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Grum, M.
2011-01-01
Forecasting of flows, overflow volumes, water levels, etc. in drainage systems can be applied in real time control of drainage systems in the future climate in order to fully utilize system capacity and thus save possible construction costs. An online system for forecasting flows and water levels in a small urban catchment has been developed. The forecast is based on application of radar rainfall data, which, by a correlation-based technique, is extrapolated with a lead time of up to two hours. The runoff forecast in the drainage system is based on a fully distributed MOUSE model which is auto-calibrated on flow measurements in order to produce the best possible forecast for the drainage system at all times. The system shows great potential for the implementation of real time control in drainage systems and for forecasting flows and water levels.
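The correlation-based extrapolation of radar images can be illustrated with a 1-D stand-in: estimate the displacement between two consecutive fields by maximizing correlation, then advect the latest field forward. This is a generic nowcasting sketch, not the operational scheme of the paper.

```python
import numpy as np

def best_shift(field_prev, field_now, max_shift):
    """Estimate the displacement (in pixels) between two consecutive radar
    fields by maximizing the correlation coefficient over candidate shifts."""
    best, best_corr = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        corr = np.corrcoef(np.roll(field_prev, s), field_now)[0, 1]
        if corr > best_corr:
            best, best_corr = s, corr
    return best

def nowcast(field_now, shift, steps):
    """Extrapolate the latest field by repeating the estimated displacement
    for each lead-time step (pure advection: no growth or decay of rain cells)."""
    return np.roll(field_now, shift * steps)
```

In 2-D the same idea is applied to image blocks, and the assumption that rain cells neither grow nor decay is the main reason forecast skill drops toward the two-hour lead-time limit.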
Kadoura, Ahmad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.
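The reweighting idea behind regenerating ensemble averages at neighboring conditions can be illustrated for a temperature change in the canonical ensemble. This is textbook single-histogram reweighting, a simplified sketch rather than the authors' full MCMC-reconstruction scheme.

```python
import numpy as np

def reweighted_average(energies, observable, beta_old, beta_new):
    """Canonical-ensemble reweighting of stored configurations from beta_old
    to a neighboring beta_new:
        <A>_new = sum_i A_i w_i / sum_i w_i,  w_i = exp(-(beta_new-beta_old)*U_i).
    Energies are shifted by their minimum for numerical stability."""
    u = np.asarray(energies, dtype=float)
    w = np.exp(-(beta_new - beta_old) * (u - u.min()))
    return float(np.sum(np.asarray(observable, dtype=float) * w) / np.sum(w))
```

The accuracy degrades as beta_new moves away from beta_old, because the stored chain stops sampling the configurations that dominate the new ensemble; this is why the technique is described as working for *neighboring* thermodynamic conditions.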
International Nuclear Information System (INIS)
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters but yet better predicting capability; however it is well known that molecular simulation is very CPU expensive, as compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to regenerate rapidly Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from the existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at different neighboring thermodynamic conditions to the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide
International Nuclear Information System (INIS)
Full text: The desired precision of 25 MeV for the W mass with the ATLAS detector is planned to be achieved by using the leptonic decay channel of the W: W → lν, where l = e, μ. As the longitudinal momentum of the neutrino cannot be measured, the measurement is done using the transverse momentum of the lepton and neutrino, which is calculated through a recoil method. Results from CDF and D0 have shown that an imprecise knowledge of the total lepton energy and momentum scale is the dominating source of uncertainty in the W mass measurement. The knowledge of the lepton mass scale requires a deep understanding of the material in the ATLAS Inner Detector with an uncertainty of about 1%, which is an order of magnitude better than in any comparable high energy physics experiment so far. In addition, the magnetic field map has to be known with a precision of 0.1%. This also requires tracking algorithms to process this detailed input. The methodology on how to achieve such a detailed description and its correct treatment, including energy loss and multiple scattering effects during track extrapolation, will be presented. In addition, results from the ATLAS Combined Testbeam 2004 using the new extrapolation scheme will be included in the presentation. (author)
Tadesse, Tilaye; Inhester, B; Pevtsov, A
2010-01-01
Routine measurements of the solar magnetic field are mainly carried out in the photosphere. Therefore, one has to infer the field strength in the higher layers of the solar atmosphere from the measured photospheric field, based on the assumption that the corona is force-free. However, the measured data are inconsistent with this force-free assumption, so some transformations must be applied to the data before nonlinear force-free extrapolation codes can be used. Extrapolation codes in Cartesian geometry for modelling the magnetic field in the corona do not take the curvature of the Sun's surface into account and can only be applied to relatively small areas, e.g., a single active region. Here we apply a method for nonlinear force-free coronal magnetic field modelling and preprocessing of photospheric vector magnetograms in spherical geometry using the optimization procedure. We solve the nonlinear force-free field equations by minimizing a functional in spherical coordinates over a restri...
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum and chemical engineering. Molecular simulation has many appealing features, especially its need for fewer tuned parameters while offering better predictive capability; however, it is well known that molecular simulation is very CPU-expensive compared to equation-of-state approaches. We have recently introduced an efficient, thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation-of-state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second-derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. The system's potential energy, pressure, isochoric heat capacity and isothermal compressibility were extrapolated along isochores, isotherms and paths of changing temperature and density from the original simulated points. Finally, an optimized set of Lennard-Jones parameters (σ, ε) for single-site models was proposed for methane, nitrogen and carbon monoxide.
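The reweighting idea behind this MCMC-regeneration technique can be illustrated with a minimal single-histogram sketch. This is a toy one-dimensional system, not the authors' Lennard-Jones implementation: samples drawn at inverse temperature β0 are reused to estimate an ensemble average at a neighboring β by weighting each sample with exp(-(β - β0)·U).

```python
import math, random

def reweight(samples, beta0, beta, observable):
    """Estimate <observable> at inverse temperature beta by reweighting
    canonical-ensemble samples (u = potential energy, x = configuration)
    generated at beta0: w ~ exp(-(beta - beta0) * u)."""
    dbeta = beta - beta0
    m = max(-dbeta * u for u, _ in samples)  # shift exponents for stability
    num = den = 0.0
    for u, x in samples:
        w = math.exp(-dbeta * u - m)
        num += w * observable(x)
        den += w
    return num / den

# Toy 1-D "particle": energy u(x) = x^2, sampled at beta0 = 1 by Metropolis
random.seed(1)
beta0, x, chain = 1.0, 0.0, []
for _ in range(50000):
    xn = x + random.uniform(-1.0, 1.0)
    if random.random() < math.exp(-beta0 * (xn * xn - x * x)):
        x = xn
    chain.append((x * x, x))

# Exact <x^2> at inverse temperature beta is 1/(2*beta), so reweighting
# the beta0 = 1 chain to beta = 1.5 should give a value near 1/3
est = reweight(chain, beta0, 1.5, lambda x: x * x)
```

The reweighting degrades as β moves away from β0 (the weights become dominated by a few samples), which is why the abstract restricts the method to neighboring thermodynamic conditions.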
Energy Technology Data Exchange (ETDEWEB)
Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad
2014-08-01
Günter Häfelinger; Alexander Neugebauer
2005-01-01
Abstract: Stationary points for four geometrically different states of methylene (bent and linear triplet methylene, bent and linear singlet methylene) were investigated using the highly reliable post-HF CCSD(T) method. Extrapolations to the complete basis set (CBS) limit, from Dunning triple- to quintuple-zeta correlation-consistent polarized basis sets, were performed for total energies, equilibrium CH distances re(CH), singlet-triplet separation energies, and energy barriers to linearity...
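A common two-point CBS extrapolation assumes the inverse-cubic form E(X) = E_CBS + A/X³ (the Helgaker-type formula typically applied to correlation energies; whether the authors used exactly this form is not stated in the abstract). A minimal sketch:

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point inverse-cubic extrapolation, assuming E(X) = E_CBS + A/X**3
    for cardinal numbers x < y of correlation-consistent basis sets."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic energies constructed to obey the model exactly (E_CBS = -39.1)
e = {X: -39.1 + 0.05 / X**3 for X in (3, 4, 5)}
e_cbs = cbs_two_point(e[4], 4, e[5], 5)   # recovers -39.1 exactly
```

With real CCSD(T) energies the quadruple/quintuple pair, as used in the study above, sits closest to the asymptotic regime and gives the most reliable limit.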
Lu, Yi-chun; Gasteiger, Hubert A.; Shao-Horn, Yang
2010-01-01
This study presents a new method to quantitatively determine the electrocatalytic activity of Vulcan carbon and Vulcan-supported Au nanoparticles, dispersed as catalyst thin films on glassy carbon, for oxygen reduction in an aprotic electrolyte using rotating disk electrode measurements. The ORR activity of Vulcan carbon can be described by a Tafel slope of 120 mV/dec, and Koutecky-Levich analysis of Vulcan carbon suggests that solvated LiO₂ is the initially formed O₂ re...
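A Koutecky-Levich analysis of the kind mentioned above separates kinetic and diffusion-limited contributions by extrapolating 1/i versus ω^(-1/2) to infinite rotation rate. The currents and constants below are invented purely for illustration, not taken from the paper:

```python
import numpy as np

# Invented kinetic current i_k and Levich constant B; real values depend on
# catalyst loading, O2 solubility/diffusivity and electrolyte viscosity
i_k_true, B = 2.0, 0.05
omega = np.array([400.0, 900.0, 1600.0, 2500.0])        # rotation rate (rpm)
i = 1.0 / (1.0 / i_k_true + 1.0 / (B * np.sqrt(omega))) # measured-like currents

# Koutecky-Levich relation: 1/i = 1/i_k + 1/(B*sqrt(omega)); the intercept
# of 1/i versus omega**-0.5 gives the kinetic (mass-transport-free) current
slope, intercept = np.polyfit(omega**-0.5, 1.0 / i, 1)
i_k = 1.0 / intercept   # recovers 2.0 here
```

The intercept is the current extrapolated to infinite rotation rate, where diffusion no longer limits the reaction; deviations of the slope from the theoretical Levich value indicate a change in the number of electrons transferred.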
Reinisch, Walter; Louis, Edouard; Danese, Silvio
2015-09-01
Extrapolation of clinical data from other indications is an important concept in the development of biosimilars. This process depends on strict comparability exercises to establish similarity to the reference medicinal product. However, the extrapolation paradigm has prompted a fierce scientific debate. CT-P13 (Remsima®, Inflectra®), an infliximab biosimilar, is a TNF antagonist used to treat immune-mediated inflammatory diseases. On the basis of the totality of similarity data, the EMA approved CT-P13 for all indications held by its reference medicinal product (Remicade®), including inflammatory bowel disease. This article reviews the mechanisms of action of TNF antagonists in immune-mediated inflammatory diseases and illustrates the comparable profiles of CT-P13 and the reference medicinal product on which the extrapolation of indications, including inflammatory bowel disease, is based. PMID:26395531
International Nuclear Information System (INIS)
Two secondary standard systems of beta radiation were used to calibrate a PTW extrapolation chamber Model 23391. Three 90Sr+90Y sources of different activities were used in this calibration procedure. Medium-term stability of the response of the chamber was also studied. The calibration was performed with and without field-flattening filters. The relative standard deviation of the obtained calibration factors was 8.3% for the aluminum collecting electrode and 4.1% for the graphite collecting electrode. Highlights: • 90Sr+90Y standard sources were used to calibrate a PTW extrapolation chamber. • Characterization tests of the chamber response were performed. • Chamber response showed very good short- and medium-term stabilities. • Linear extrapolation curves were obtained. • Calibration factors of the chamber were acceptable.
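The "linear extrapolation curves" in the highlights refer to fitting ionization current against air-gap depth and taking the limiting slope as the depth tends to zero, which is proportional to the dose rate. A sketch with made-up readings (not the paper's measurements):

```python
# Made-up current-depth pairs; a real calibration uses measured currents
depths  = [0.5, 1.0, 1.5, 2.0, 2.5]       # air-gap depth d (mm)
current = [0.62, 1.21, 1.83, 2.40, 3.02]  # ionization current I (pA)

# Ordinary least-squares line I = slope*d + intercept; the limiting slope
# dI/dd as d -> 0 is the quantity proportional to the beta dose rate
n = len(depths)
sx, sy = sum(depths), sum(current)
sxx = sum(d * d for d in depths)
sxy = sum(d * i for d, i in zip(depths, current))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # ~1.2 pA/mm here
intercept = (sy - slope * sx) / n                    # small residual offset
```

A near-zero intercept is itself a consistency check: a large offset would indicate leakage current or electrode polarization effects.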
International Nuclear Information System (INIS)
Sixty-nine critical configurations of up to 186 kg of uranium are reported from very early experiments (1960s) performed at the Rocky Flats Critical Mass Laboratory near Denver, Colorado. Enriched (93%) uranium metal spherical and hemispherical configurations were studied. All were thick-walled shells except for two solid hemispheres. Experiments were essentially unreflected, or they included central and/or external regions of mild steel. No liquids were involved. Critical parameters are derived from extrapolations beyond subcritical data. Extrapolations, rather than more precise interpolations between slightly supercritical and slightly subcritical configurations, were necessary because the experiments involved manually assembled configurations. Many extrapolations were quite long, but the general lack of curvature in the subcritical region lends credibility to their validity. In addition to delayed critical parameters, a procedure is offered which might permit the determination of prompt critical parameters for the same cases. This conjectured procedure is not based on any strong physical arguments.
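Approach-to-critical extrapolation from subcritical data is conventionally done by plotting inverse neutron multiplication 1/M against mass and extrapolating a fit to 1/M = 0. The data below are fabricated (chosen only to land near the 186 kg scale quoted above) to show the mechanics:

```python
import numpy as np

# Fabricated approach-to-critical data: inverse multiplication vs. fuel mass
mass  = np.array([100.0, 120.0, 140.0, 160.0])   # kg of fuel
inv_M = np.array([0.43, 0.33, 0.23, 0.13])       # measured 1/M, all subcritical

# Criticality corresponds to infinite multiplication, i.e. 1/M = 0, so the
# critical mass is where the fitted line crosses the mass axis
slope, intercept = np.polyfit(mass, inv_M, 1)
m_crit = -intercept / slope
```

The abstract's caveat applies directly: this linear extrapolation is only trustworthy when 1/M shows little curvature over the subcritical range, since manually assembled configurations cannot be brought close to critical.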
International Nuclear Information System (INIS)
A numerical analysis of practically all existing formulae, such as expansion series, Tait, logarithmic, van der Waals and virial equations, for interpolating experimental molar volumes versus high pressure was carried out. One can conclude that extrapolating dependences of molar volume on pressure and temperature can be valid. It was shown that, in contrast to the other equations, virial equations can also be used for fitting experimental data at relatively low pressures (P < 3 kbar). Directly solving the resulting cubic equation in volume, using extrapolated virial coefficients, gives good agreement between existing high-pressure experimental data and calculated values.
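Solving the truncated virial equation as a cubic in volume can be sketched as follows; the virial coefficients, temperature and pressure here are made up for illustration:

```python
import numpy as np

def molar_volume(P, T, B, C, R=83.14):
    """Molar volume from the virial equation truncated at the third coefficient:
        P = (R*T/V) * (1 + B/V + C/V**2)
    which, multiplied through by V**2, is the cubic
        P*V**3 - R*T*V**2 - R*T*B*V - R*T*C = 0.
    Units: P in bar, V in cm^3/mol, R = 83.14 cm^3 bar / (mol K)."""
    roots = np.roots([P, -R * T, -R * T * B, -R * T * C])
    # keep the (single) real root; the other two form a complex pair
    return float(roots[np.argmin(np.abs(roots.imag))].real)

# Made-up coefficients B (cm^3/mol), C (cm^6/mol^2) for a gas at 300 K, 10 bar
V = molar_volume(P=10.0, T=300.0, B=-50.0, C=2000.0)   # ~2444 cm^3/mol
```

At these mild conditions the cubic has one physical real root slightly below the ideal-gas volume R·T/P ≈ 2494 cm³/mol, consistent with the negative second virial coefficient.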
International Nuclear Information System (INIS)
We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. 
We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics steered by effective solvation forces allows huge outer time steps up to tens of picoseconds without affecting the equilibrium and conformational properties, and thus provides a 100- to 500-fold effective speedup in comparison to conventional MD with explicit solvent. With the statistical-mechanical 3D-RISM-KH account for effective solvation forces, the method provides efficient sampling of biomolecular processes with slow and/or rare solvation events such as conformational transitions of hydrated alanine dipeptide with the mean life times ranging from 30 ps up to 10 ns for “flip-flop” conformations, and is particularly beneficial for biomolecular systems with exchange and localization of solvent and ions, ligand binding, and molecular recognition.
Energy Technology Data Exchange (ETDEWEB)
Omelyan, Igor, E-mail: omelyan@ualberta.ca, E-mail: omelyan@icmp.lviv.ua [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta T6G 2M9 (Canada); Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8 (Canada); Institute for Condensed Matter Physics, National Academy of Sciences of Ukraine, 1 Svientsitskii Street, Lviv 79011 (Ukraine); Kovalenko, Andriy, E-mail: andriy.kovalenko@nrc-cnrc.gc.ca [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Alberta T6G 2M9 (Canada); Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta T6G 2G8 (Canada)
2013-12-28
International Nuclear Information System (INIS)
The concentrations and the organ distribution patterns of 228Th, 230Th and 232Th in two 9-y-old dogs of our beagle colony were determined. The dogs were exposed only to background environmental levels of Th isotopes through ingestion (food and water) and inhalation as are humans. The organ distribution patterns of the isotopes in the beagles were compared to the organ distribution patterns in humans to determine if it is appropriate to extrapolate the beagle organ burden data to humans. Among soft tissues, only the lungs, lymph nodes, kidney and liver, and skeleton contained measurable amounts of Th isotopes. The organ distribution pattern of Th isotopes in humans and dog are similar, the majority of Th being in the skeleton of both species. The average skeletal concentrations of 228Th in dogs were 30 to 40 times higher than the average skeletal concentrations of the parent 232Th, whereas the concentration of 228Th in human skeleton was only four to five times higher than 232Th. This suggests that dogs have a higher intake of 228Ra through food than humans. There is a similar trend in the accumulations of 232Th, 230Th and 228Th in the lungs of dog and humans. The percentages of 232Th, 230Th and 228Th in human lungs are 26, 9.7 and 4.8, respectively, compared to 4.2, 2.6 and 0.48, respectively, in dog lungs. The larger percentages of Th isotopes in human lungs may be due simply to the longer life span of humans. If the burdens of Th isotopes in human lungs are normalized to an exposure time of 9.2 y (mean age of dogs at the time of sacrifice), the percent burden of 232Th, 230Th and 228Th in human lungs are estimated to be 3.6, 1.3 and 0.66, respectively. These results suggest that the beagle may be an appropriate experimental animal for extrapolating the organ distribution pattern of Th in humans
Singh, N P; Zimmerman, C J; Taylor, G N; Wrenn, M E
1988-03-01
PMID:3346160
An age-classified projection matrix model has been developed to extrapolate the chronic (28-35d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...
International Nuclear Information System (INIS)
This report addresses safety analysis over the whole repository life-cycle, which may require long-term performance assessment of its components and evaluation of potential impacts of the facility on the environment. Generic considerations of procedures for the development of predictive tools are complemented by detailed characterization of selected principles and methods that were applied and presented within the co-ordinated research project (CRP). The project focused on different approaches to extrapolation, considering radionuclide migration/sorption; physical, geochemical and geotechnical characteristics of engineered barriers; irradiated rock and backfill performance; and corrosion of metallic and vitreous materials. This document contains a comprehensive discussion of the overall problem and the practical results of the individual projects performed within the CRP. Each of the papers on the individual projects has been indexed separately.
International Nuclear Information System (INIS)
The potential energy curves (PECs) of the b¹Σg⁺ state of S2 have been calculated using a multi-reference configuration interaction method with the Davidson correction and a series of Dunning's correlation-consistent basis sets: aug-cc-pVXZ and aug-cc-pV(X + d)Z (X = Q, 5 and 6). The calculated PECs are subsequently extrapolated to the complete basis set limit. These PECs are then used to deduce the analytical potential energy functions (APEFs), which show small root mean square deviations. Based on the APEFs, we have calculated the spectroscopic parameters and compared them with the experimental data available at present. By solving the Schrödinger equation numerically, we also obtain the complete set of vibrational levels, classical turning points, and rotational and centrifugal distortion constants when J = 0. The present results can serve as a useful reference for future experimental and dynamics studies. (paper)
Cui, Jie; Krems, Roman V
2015-01-01
We consider a problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values that corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C₆H₅CN with He on the unknown parameters describing the interaction of the H...
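A minimal numpy-only Gaussian Process regression sketch (RBF kernel, one-dimensional input; not the authors' trained model) shows how an observable's dependence on an unknown potential parameter is learned, together with a predictive variance that yields the interval of observable values mentioned above:

```python
import numpy as np

def gp_predict(X, y, Xs, length=0.3, sigma_n=1e-6):
    """Gaussian Process regression with an RBF kernel: posterior mean and
    variance at test inputs Xs given training pairs (X, y)."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(X, X) + sigma_n * np.eye(len(X))   # jitter for numerical stability
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Toy surrogate: an "observable" as a smooth function of one potential parameter
X = np.linspace(0.0, 1.0, 11)
y = np.sin(2 * np.pi * X)
mean, var = gp_predict(X, y, np.array([0.25]))   # true value is sin(pi/2) = 1
```

In the paper's setting the training targets would be cross sections from classical trajectory calculations at sampled parameter values, and the posterior variance widens away from the training points, quantifying the uncertainty due to the unknown -X interaction.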
The risk of extrapolation in neuroanatomy: the case of the mammalian vomeronasal system
Directory of Open Access Journals (Sweden)
Ignacio Salazar
2009-10-01
Full Text Available The sense of smell plays a crucial role in mammalian social and sexual behaviour, identification of food, and detection of predators. Nevertheless, mammals vary in their olfactory ability. One reason for this concerns the degree of development of their pars basalis rhinencephali, an anatomical feature that has been considered in classifying this group of animals as macrosmatic, microsmatic or anosmatic. In mammals, different structures are involved in detecting odours: the main olfactory system, the vomeronasal system (VNS, and two subsystems, namely the ganglion of Grüneberg and the septal organ. Here, we review and summarise some aspects of the comparative anatomy of the VNS and its putative relationship to other olfactory structures. Even in the macrosmatic group, morphological diversity is an important characteristic of the VNS, specifically of the vomeronasal organ and the accessory olfactory bulb. We conclude that it is a big mistake to extrapolate anatomical data of the VNS from species to species, even in the case of relatively close evolutionary proximity between them. We propose to study other mammalian VNS than those of rodents in depth as a way to clarify its exact role in olfaction. Our experience in this field leads us to hypothesise that the VNS, considered for all mammalian species, could be a system undergoing involution or regression, and could serve as one more integrated olfactory subsystem.
Yang, X; Zhou, Y-F; Yu, Y; Zhao, D-H; Shi, W; Fang, B-H; Liu, Y-H
2015-02-01
A multi-compartment physiologically based pharmacokinetic (PBPK) model to describe the disposition of cyadox (CYX) and its metabolite quinoxaline-2-carboxylic acid (QCA) after a single oral administration was developed in rats (200 mg/kg b.w. of CYX). Considering interspecies differences in physiology and physiochemistry, the model efficiency was validated by pharmacokinetic data set in swine. The model included six compartments that were blood, muscle, liver, kidney, adipose, and a combined compartment for the rest of tissues. The model was parameterized using rat plasma and tissue concentration data that were generated from this study. Model simulations were achieved using a commercially available software program (ACSLXL ibero version 3.0.2.1). Results supported the validity of the model with simulated tissue concentrations within the range of the observations. The correlation coefficients of the predicted and experimentally determined values for plasma, liver, kidney, adipose, and muscles in rats were 0.98, 0.98, 0.98, 0.99, and 0.95, respectively. The rat model parameters were then extrapolated to pigs to estimate QCA disposition in tissues and validated by tissue concentration of QCA in swine. The correlation coefficients between the predicted and observed values were over 0.90. This model could provide a foundation for developing more reliable pig models once more data are available. PMID:25378053
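A drastically simplified compartment sketch (first-order oral absorption and elimination only, with hypothetical rate constants; the paper's actual model has six compartments, physiological flows and a metabolite) shows the kind of ODE integration a PBPK model performs:

```python
# Hypothetical rate constants and dose; none of these values are from the paper
ka, ke = 1.0, 0.2           # first-order absorption / elimination (1/h)
gut, central = 100.0, 0.0   # drug amounts (mg); oral dose starts in the gut
dt, t, peak = 0.001, 0.0, 0.0

# Forward-Euler integration of: d(gut)/dt = -ka*gut,
#                               d(central)/dt = ka*gut - ke*central
while t < 24.0:
    flux = ka * gut
    gut += -flux * dt
    central += (flux - ke * central) * dt
    peak = max(peak, central)
    t += dt

# Analytic check: the Bateman solution peaks at t = ln(ka/ke)/(ka - ke) ~ 2 h
# with C_max = dose*ka/(ka - ke) * (exp(-ke*t) - exp(-ka*t)) ~ 66.9 mg
```

Interspecies extrapolation, as done above from rat to swine, amounts to replacing such rate constants and compartment volumes with the target species' physiological values while keeping the model structure fixed.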
Extrapolation of experimental data on late effects of low-dose radionuclides in man
International Nuclear Information System (INIS)
The situation of living of population on radionuclide contamination areas was simulated in the experimental study using white strainless rats of different ages. The significance of age for late stochastic effects of internal radionuclide contamination with low doses of 131I, 137Cs, 144Ce and 106Ru was studied. Some common regularities and differences in late effects formation depending on age were found. Results of the study showed that the number of tumors developed increased in groups of animals exposed at the youngest age. The younger animal at the moment of internal radionuclide contamination, the higher percentage of malignant tumors appeared. It was especially so for tumors of endocrine glands (pituitary, suprarenal,- and thyroid). Differences in late effects formation related to different type of radionuclide distribution within the body were estimated. On the base of extrapolation the conclusion was made that human organism being exposed at early postnatal or pubertal period could be the most radiosensitive (1.5-2.0 or sometimes even 3-5 times higher than adults). Data confirmed the opinion that children are the most critical part of population even in case of low dose radiation exposure. (author)
Krishnan, Kannan; Haddad, Sami; Béliveau, Martin; Tardif, Robert
2002-12-01
The available data on binary interactions are yet to be considered within the context of mixture risk assessment because of our inability to predict the effect of a third or a fourth chemical in the mixture on the interacting binary pairs. Physiologically based pharmacokinetic (PBPK) models represent a potentially useful framework for predicting the consequences of interactions in mixtures of increasing complexity. This article highlights the conceptual basis and validity of PBPK models for extrapolating the occurrence and magnitude of interactions from binary to more complex chemical mixtures. The methodology involves the development of PBPK models for all mixture components and interconnecting them at the level of the tissue where the interaction is occurring. Once all component models are interconnected at the binary level, the PBPK framework simulates the kinetics of all mixture components, accounting for the interactions occurring at various levels in more complex mixtures. This aspect was validated by comparing the simulations of a binary interaction-based PBPK model with experimental data on the inhalation kinetics of m-xylene, toluene, ethyl benzene, dichloromethane, and benzene in mixtures of varying composition and complexity. The ability to predict the kinetics of chemicals in complex mixtures by accounting for binary interactions alone within a PBPK model is a significant step toward the development of interaction-based risk assessment for chemical mixtures. PMID:12634130
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Directory of Open Access Journals (Sweden)
Karsten Schulz
2011-12-01
Full Text Available To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
Directory of Open Access Journals (Sweden)
Trevor G. Jones
2014-07-01
Full Text Available Information derived from high spatial resolution remotely sensed data is critical for the effective management of forested ecosystems. However, high spatial resolution data-sets are typically costly to acquire and process and usually provide limited geographic coverage. In contrast, moderate spatial resolution remotely sensed data, while not able to provide the spectral or spatial detail required for certain types of products and applications, offer inexpensive, comprehensive landscape-level coverage. This study assessed using an object-based approach to extrapolate detailed tree species heterogeneity beyond the extent of hyperspectral/LiDAR flightlines to the broader area covered by a Landsat scene. Using image segments, regression trees established ecologically decipherable relationships between tree species heterogeneity and the spectral properties of Landsat segments. The spectral properties of Landsat bands 4 (NIR: 0.76–0.90 µm), 5 (SWIR: 1.55–1.75 µm) and 7 (SWIR: 2.08–2.35 µm) were consistently selected as predictor variables, explaining approximately 50% of the variance in richness and diversity. Results have important ramifications for ongoing management initiatives in the study area and are applicable to a wide range of applications.
Extrapolating our understanding of local star formation to the early Universe
Rathborne, Jill
2015-08-01
Galaxies at redshifts z>2, when the Universe was only a few billion years old, are vigorously forming stars at rates up to several thousand solar masses per year. Theoretical efforts aimed at explaining this rapid conversion of gas into stars have thus far extrapolated theories of supersonically turbulent, isothermal media describing the density structure of low-pressure (P/k < 10^7 K cm-3) environments. In this talk I will describe new ALMA observations of a cloud immersed in the high-pressure Galactic Centre environment, which represents the best local-Universe analogue to clouds in z>2 galaxies. Our analysis shows that the mean and dispersion of its normalised column density PDF closely match the predictions of theoretical models of supersonic turbulence in such dense, highly turbulent gas. This is the first confirmation of these models in such an extreme, high-pressure environment. Moreover, our observations are consistent with the theoretically predicted, environmentally dependent threshold for star formation, which may provide a natural explanation for the low star formation rate in the Galactic Centre environment. Our results provide the first empirical evidence that the current theoretical understanding of molecular cloud structure, derived from clouds in the solar neighbourhood, also holds in extreme, high-pressure environments, allowing its application to rapidly star-forming galaxies in the early Universe.
Extrapolated renormalization group calculation of the surface tension in square-lattice Ising model
International Nuclear Information System (INIS)
By using self-dual clusters (whose sizes are characterized by the numbers b=2, 3, 4, 5) within a real-space renormalization group framework, the longitudinal surface tension of the square-lattice first-neighbour spin-1/2 ferromagnetic Ising model is calculated. The exact critical temperature T_c is recovered for any value of b; the exact asymptotic behaviour of the surface tension in the limit of low temperatures is analytically recovered; the approximate correlation length critical exponents tend monotonically towards the exact value ν=1 (which, in two dimensions, coincides with the surface tension critical exponent μ) for increasingly large cells; the same behaviour is observed for the approximate values of the surface tension amplitude in the limit T→T_c. Four different numerical procedures are developed for extrapolating the renormalization group results for the surface tension to b→∞, and quite satisfactory agreement is obtained with Onsager's exact expression (the error varying from zero to a few percent over the whole temperature domain). Furthermore, the set of RG surface tensions is compared with a set of biased surface tensions (associated with appropriate misfit seams), and only fortuitous coincidence is found among them. (Author)
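The paper's four extrapolation procedures are not reproduced here, but the simplest possible variant, assuming finite-size corrections that vanish as 1/b and taking the intercept of a least-squares line in 1/b, can be sketched as:

```python
def extrapolate_inverse_b(bs, values):
    # Fit values(b) ~ s_inf + c / b by least squares in the variable 1/b and
    # return the intercept s_inf as the b -> infinity estimate.  The 1/b
    # correction ansatz is an illustrative assumption, not the paper's.
    x = [1.0 / b for b in bs]
    n = len(x)
    mx = sum(x) / n
    my = sum(values) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (vi - my) for xi, vi in zip(x, values))
    c = sxy / sxx
    return my - c * mx, c  # (s_inf, correction amplitude)

# Synthetic check: data generated exactly as 1.0 + 0.5/b are recovered.
bs = [2, 3, 4, 5]
vals = [1.0 + 0.5 / b for b in bs]
s_inf, c = extrapolate_inverse_b(bs, vals)
```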
Extrapolation of the relative risk of radiogenic neoplasms across mouse strains and to man
International Nuclear Information System (INIS)
We have examined two interrelated questions: is the susceptibility for radiogenic cancer related to the natural incidence, and are the responses of cancer induction by radiation described better by an absolute or a relative risk model. Also, we have examined whether it is possible to extrapolate relative risk estimates across species, from mice to humans. The answers to these questions were obtained from determinations of risk estimates for nine neoplasms in female and male C3Hf/Bd and C57BL/6 Bd mice and from data obtained from previous experiments with female BALB/c Bd and RFM mice. The mice were exposed to 137Cs gamma rays at 0.4 Gy/min to doses of 0, 0.5, 1.0, or 2.0 Gy. When tumors that were considered the cause of death were examined, both the control and induced mortality rates for the various tumors varied considerably among sexes and strains. The results suggest that in general susceptibility is determined by the control incidence. The relative risk model was significantly superior in five of the tumor types: lung, breast, liver, ovary, and adrenal. Both models appeared to fit myeloid leukemia and Harderian gland tumors, and neither provided good fits for thymic lymphoma and reticulum cell sarcoma. When risk estimates of radiation-induced tumors in humans and mice were compared, it was found that the relative risk estimates for lung, breast, and leukemia were not significantly different between humans and mice. In the case of liver tumors, mice had a higher risk than humans. These results indicate that the relative risk model is the appropriate approach for risk estimation for a number of tumors. The apparent concordance of relative risk estimates between humans and mice for the small number of cancers examined encourages us to undertake further studies.
Measurement of absorbed dose with a bone-equivalent extrapolation chamber
International Nuclear Information System (INIS)
A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water™ and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with the full bone PEEC by 0.7% to ~2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water™ PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.
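The "standard two-voltage technique" named above has a closed form for continuous beams; the sketch below implements the common TRS-398-style expression, with purely hypothetical voltages and charge readings:

```python
def ion_recombination_factor(V1, V2, M1, M2):
    # Two-voltage recombination correction for continuous beams
    # (TRS-398-style form): ks = ((V1/V2)^2 - 1) / ((V1/V2)^2 - M1/M2),
    # where V1 is the normal (higher) polarizing voltage and M1, M2 are the
    # charge readings at V1 and V2.
    r = (V1 / V2) ** 2
    return (r - 1.0) / (r - M1 / M2)

# Hypothetical readings: halving the voltage lowers the collected charge by 1%.
ks = ion_recombination_factor(300.0, 150.0, 20.20, 20.00)
```

The corrected charge is then the raw reading multiplied by ks; for well-behaved chambers ks stays within a few tenths of a percent of unity.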
Semiokhina, A F; Ochinskaia, E I; Rubtsova, N B; Pleskacheva, M G; Krushinskii, L V
1985-01-01
Sharp changes in the bioelectrical activity of the dorsal cortex and dorsal ventricular ridge were recorded in freely moving marsh tortoises while they solved an extrapolation task (a test of elementary reasoning ability). These changes, of a pathological character and accompanied by neurotic states, were observed in some animals that had correctly solved the task several times in succession (2-5), beginning with the first presentation. No such changes in EEG and behaviour were found in tortoises that made errors at the first presentations of the task and only gradually learned the correct solution. Formation of adequate behaviour can thus proceed in two ways: on the basis of elementary reasoning ability, or by learning. The disturbance of adequate behaviour in the experiment, with its characteristic EEG changes, testifies to the demanding state of the animal during solving of the extrapolation task. PMID:4090728
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2013-01-01
Model-based short-term forecasting of urban storm water runoff can be applied in real-time control of drainage systems in order to optimize system capacity during rain and minimize combined sewer overflows, improve wastewater treatment, or activate alarms if local flooding is impending. A novel online system, which forecasts flows and water levels in real time with inputs from extrapolated radar rainfall data, has been developed. The fully distributed urban drainage model includes auto-calibration using online in-sewer measurements, which is seen to improve forecast skill significantly. The radar rainfall extrapolation (nowcast) limits the lead time of the system to two hours. In this paper, the model set-up is tested on a small urban catchment for a period of 1.5 years. The 50 largest events are presented.
International Nuclear Information System (INIS)
The neutron count rate of a detector at the fast-thermal boundary showed markedly different behaviour in the critical extrapolation experiment on Venus 1#, which is listed as a benchmark for accelerator-driven sub-critical systems (ADS). In order to explain this abnormal phenomenon, numerical simulations of the experiment and calculations of the neutron spectrum at the fast-thermal boundary were performed, and the abnormal neutron count rate was analysed on the basis of these calculations. The results indicate that the change of the neutron spectrum during critical extrapolation is the main contributor to the abnormal behaviour of the detector. This work supplies a theoretical basis for future neutronics studies of fast-thermal coupled sub-critical systems. (authors)
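Critical extrapolation of the kind referred to here is normally carried out with the inverse-multiplication (1/M) method: the inverse detector count rate is plotted against fuel loading and extrapolated linearly to zero. A minimal sketch, with invented loadings and count rates, is:

```python
def predict_critical_loading(loadings, count_rates):
    # Inverse-multiplication (1/M) extrapolation: fit 1/count_rate versus
    # fuel loading with a straight line and return its zero crossing, the
    # predicted critical loading.
    y = [1.0 / c for c in count_rates]
    n = len(loadings)
    mx = sum(loadings) / n
    my = sum(y) / n
    sxx = sum((x - mx) ** 2 for x in loadings)
    sxy = sum((x - mx) * (yi - my) for x, yi in zip(loadings, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return -intercept / slope  # loading at which 1/M -> 0

# Hypothetical loadings (fuel rods) and detector count rates (s^-1),
# chosen so that 1/M is exactly linear in the loading.
rods  = [100, 150, 200, 250]
rates = [1 / 0.02, 1 / 0.015, 1 / 0.01, 1 / 0.005]
critical = predict_critical_loading(rods, rates)
```

The abstract's point is precisely that a spectrum change at a fast-thermal boundary can distort the measured count rates, and hence the 1/M line, away from this idealised behaviour.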
International Nuclear Information System (INIS)
The metrological coherence among standard systems is a requirement for assuring the reliability of measurements of dosimetric quantities in ionizing radiation fields. Scientific and technological improvements occurred in beta radiation metrology with the installation of the new beta secondary standard BSS2 in Brazil and with the adoption of the internationally recommended beta reference radiations. The Dosimeter Calibration Laboratory of the Development Center for Nuclear Technology (LCD/CDTN), in Belo Horizonte, implemented the BSS2, and methodologies are investigated for characterizing the beta radiation fields by determining the field homogeneity, the accuracy and the uncertainties in the absorbed dose in air measurements. In this work, a methodology for verifying the metrological coherence among beta radiation fields in standard systems was investigated; an extrapolation chamber and radiochromic films were used, and measurements were made in terms of absorbed dose in air. The reliability of both the extrapolation chamber and the radiochromic film was confirmed, and their calibrations were done in the LCD/CDTN in 90Sr/90Y, 85Kr and 147Pm beta radiation fields. The angular coefficients of the extrapolation curves were determined with the chamber; the field mapping and homogeneity were obtained from dose profiles and isodoses with the radiochromic films. A preliminary comparison between the LCD/CDTN and the Instrument Calibration Laboratory of the Nuclear and Energy Research Institute / Sao Paulo (LCI/IPEN) was carried out. Results of the extrapolation chamber measurements, in terms of absorbed dose rates in air, showed differences between the two laboratories of up to -1% and 3% for the 90Sr/90Y, 85Kr and 147Pm beta radiation fields. Results with the EBT radiochromic films for 0.1, 0.3 and 0.15 Gy absorbed dose in air, for the same beta radiation fields, showed differences of up to 3%, -9% and -53%.
The beta radiation field mappings with radiochromic films in both BSS2 showed that some of them were not geometrically aligned. (author)
Jiang, Chaowei
2015-01-01
In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. At present, the extrapolation of the magnetic field from boundary data is the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover coronal magnetic flux ropes is important for coronal field extrapolation. In this paper, our coronal field extrapolation code (CESE-MHD-NLFFF, Jiang & Feng 2012) is examined with an analytical magnetic flux rope model proposed by Titov & Demoulin (1999), which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. By using only the vector field at the bottom boundary as input, we test our code with the model over a representative range of parameter space and find that the model field is reconstructed with high accuracy. In particular, the magnetic topological interfaces formed between the flux rop...
Energy Technology Data Exchange (ETDEWEB)
Meyer, M.; Lerjen, M.; Menth, S. [emkamatik GmbH, Wettingen (Switzerland); Luethi, M. [Swiss Federal Institute of Technology (ETHZ), Institute for Transport Planning and Systems (IVT), Zuerich (Switzerland); Tuchschmid, M. [SBB AG, BahnUmwelt-Center, 3000 Bern (Switzerland)
2009-11-15
This appendix to a final report for the Swiss Federal Office of Energy (SFOE) presents the results of measurements made on trains and presents and discusses extrapolations made on the basis of these measurements. The evaluation and selection of the trains on which the measurements were to be made is discussed. Mainly passenger trains were selected, as only a few goods locomotives have the necessary equipment and equipping them would be costly. Measurements made on a Re 460 locomotive are presented and discussed. The methods used in the energy analysis are described, and the results obtained on several itineraries that include partial single-track working are presented and discussed.
A study of the inhibition of iron corrosion in HCl solutions by some amino acids
International Nuclear Information System (INIS)
The performance of three selected amino acids, namely alanine (Ala), cysteine (Cys) and S-methyl cysteine (S-MCys), as safe corrosion inhibitors for iron in aerated stagnant 1.0 M HCl solutions was evaluated by Tafel polarization and impedance measurements. Results indicate that Ala acts mainly as a cathodic inhibitor, while Cys and S-MCys function as mixed-type inhibitors. Cys, which contains a mercapto group in its molecular structure, was the most effective among the inhibitors tested, while Ala was less effective than S-MCys. The low inhibition efficiency recorded for S-MCys compared with that of Cys was attributed to steric effects caused by the methyl substituent on the mercapto group. The electrochemical frequency modulation (EFM) technique and inductively coupled plasma atomic emission spectrometry (ICP-AES) were also applied to make accurate determinations of corrosion rates. Validation of the Tafel extrapolation method for measuring corrosion rates was tested. Corrosion rates (in µm y-1) obtained from the Tafel extrapolation method are in good agreement with those measured using the EFM and ICP methods. Some theoretical studies, including molecular dynamics (MD) and density functional theory (DFT), were also employed to establish the correlation between the structure (molecular and electronic) of the three tested inhibitors and the inhibition efficiency. Adsorption via hydrogen bonding was discussed here based on some theoretical studies. Experimental and theoretical results were in good agreement.
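As an illustration of how polarization data become a penetration rate of the kind quoted in these corrosion abstracts, the sketch below combines the Stern-Geary relation with the standard Faraday conversion for iron; all numerical inputs are hypothetical, not values from the study:

```python
def stern_geary_icorr(ba, bc, Rp):
    # Corrosion current density from Tafel slopes (V/decade) and polarization
    # resistance (ohm*cm^2): i_corr = B / Rp with
    # B = ba*bc / (2.303 * (ba + bc)).
    B = (ba * bc) / (2.303 * (ba + bc))
    return B / Rp  # A/cm^2

def penetration_rate_mm_per_year(icorr_A_cm2, eq_weight=27.92, density=7.87):
    # Faraday conversion for iron (Fe -> Fe2+, equivalent weight 55.85/2 g,
    # density 7.87 g/cm^3); K is the standard constant in
    # mm*g/(uA*cm*year), so i_corr is converted from A/cm^2 to uA/cm^2.
    K = 3.27e-3
    return K * (icorr_A_cm2 * 1e6) * eq_weight / density  # mm/year

icorr = stern_geary_icorr(0.12, 0.12, 1000.0)  # hypothetical slopes and Rp
rate = penetration_rate_mm_per_year(icorr)
```

Dividing Tafel slopes by a measured polarization resistance in this way is the "linear polarization" route; Tafel extrapolation and EFM estimate i_corr by other means and, as the abstracts report, the three should agree.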
Directory of Open Access Journals (Sweden)
Ravichandran R
2009-01-01
The objective of the present study is to establish radiation standards for absorbed doses for clinical high energy linear accelerator beams. In the non-availability of a cobalt-60 beam for arriving at N_D,water values for thimble chambers, we investigated the efficacy of a perspex-mounted extrapolation chamber (EC) used earlier for low energy x-ray and beta dosimetry. An extrapolation chamber with a facility for achieving variable electrode separations of 10.5 mm to 0.5 mm using a micrometer screw was used for the calibrations. Photon beams of 6 MV and 15 MV and electron beams of 6 MeV and 15 MeV from Varian Clinac linacs were calibrated. Absorbed dose estimates to perspex were converted into dose to solid water for comparison with FC 65 ionisation chamber measurements in water. Measurements made during the period December 2006 to June 2008 are considered for evaluation. Uncorrected ionization readings of the EC for all the radiation beams over the entire period were within 2%, showing the consistency of the measurements. Absorbed doses estimated by the EC were in good agreement with in-water calibrations, within 2% for photon and electron beams. The present results suggest that extrapolation chambers can be considered as an independent measuring system for absorbed dose in addition to Farmer-type ion chambers. In the absence of a standard beam quality (cobalt-60) as reference quality for N_D,water, the possibility of keeping the EC as a primary standard for absorbed dose calibrations in high energy radiation beams from linacs should be explored. As there are neither standards laboratories nor an SSDL available in our country, we look forward to keeping the EC as a local standard for hospital chamber calibrations. We are also participating in the IAEA mailed TLD intercomparison programme for quality audit of the existing status of radiation dosimetry in high energy linac beams.
The performance of the EC has to be confirmed with cobalt-60 beams in a separate study, as linacs are susceptible to minor variations in dose output on different days.
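An extrapolation chamber derives dose from the limiting slope of ionization versus electrode separation. The Bragg-Gray-style sketch below fits that slope by least squares and converts it to a dose rate; the stopping-power ratio, air density, W/e value and collecting area are illustrative assumptions, not values from the study:

```python
def ls_slope(x, y):
    # Least-squares slope of ionization current versus electrode separation.
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def dose_rate_from_extrapolation(seps_m, currents_A, area_m2,
                                 s_ratio=1.11, W_over_e=33.97, rho_air=1.205):
    # Bragg-Gray-style evaluation: dose rate to the wall medium is
    # (W/e) * s_med,air * (dI/dd) / (rho_air * a), with W/e in J/C, rho_air
    # in kg/m^3 and the collecting area a in m^2.  s_med,air = 1.11 is an
    # assumed illustrative value.
    dI_dd = ls_slope(seps_m, currents_A)  # A/m
    return W_over_e * s_ratio * dI_dd / (rho_air * area_m2)  # Gy/s

# Hypothetical readings at separations of 0.5 - 2.0 mm, linear in separation.
d = [0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3]
i = [0.50e-12, 1.00e-12, 1.50e-12, 2.00e-12]
rate = dose_rate_from_extrapolation(d, i, area_m2=1e-4)
```

Extrapolating to zero separation in this way is what removes the perturbation of the air cavity itself, which is why the slope, rather than any single reading, carries the dose information.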
Richmond, Orien M W; McEntee, Jay P; Hijmans, Robert J; Brashares, Justin S
2010-01-01
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. 
This has implications for modeling range shifts of organisms in response to climate change. PMID:20877563
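The "generous" and "stringent" thresholds contrasted above (minimum training presence and 10th-percentile training presence) can be sketched as follows; the suitability scores are invented, and the percentile convention used (drop the lowest 10% of presences) is one of several in use:

```python
def presence_thresholds(training_scores):
    # Two common thresholds for turning continuous SDM suitability into
    # presence/absence: the generous 'minimum training presence' (lowest
    # score at any training presence) and a stringent '10th percentile
    # training presence'.
    s = sorted(training_scores)
    generous = s[0]
    stringent = s[int(0.10 * len(s))]
    return generous, stringent

def classify(score, threshold):
    # A location is predicted suitable when its score reaches the threshold.
    return score >= threshold

# Hypothetical Maxent-style suitability scores at training presence points.
scores = [0.12, 0.35, 0.38, 0.41, 0.44, 0.52, 0.57, 0.63, 0.70, 0.81]
generous, stringent = presence_thresholds(scores)
```

A test site with suitability between the two thresholds, like the naturalized oryx populations in the abstract, is predicted present under the generous rule and absent under the stringent one.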
Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W
2014-06-16
The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values. PMID:24731971
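The study's PBTK model is multi-compartment; the sketch below uses a deliberately minimal one-compartment analogue only to show the AUC-matching logic behind route-to-route extrapolation. All doses, bioavailabilities and the clearance value are invented:

```python
def auc_one_compartment(dose_mg, bioavailability, clearance_L_per_h):
    # For a one-compartment model with first-order elimination, the area
    # under the plasma concentration curve is AUC = F * D / CL,
    # independent of the absorption rate constant.
    return bioavailability * dose_mg / clearance_L_per_h  # mg*h/L

def dermal_dose_matching_oral_auc(oral_dose_mg, F_oral, F_dermal, clearance):
    # Route-to-route extrapolation: find the dermal dose that produces the
    # same internal exposure (AUC) as the oral NOAEL dose.
    target_auc = auc_one_compartment(oral_dose_mg, F_oral, clearance)
    return target_auc * clearance / F_dermal

# Hypothetical numbers: oral NOAEL 100 mg, oral F = 0.9, dermal F = 0.1.
d_dermal = dermal_dose_matching_oral_auc(100.0, 0.9, 0.1, clearance=10.0)
```

In this simplified setting the clearance cancels and the dermal threshold is just the oral dose scaled by the bioavailability ratio; the value of a full PBTK model is that absorption kinetics, first-pass metabolism and peak concentrations no longer cancel.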
Energy Technology Data Exchange (ETDEWEB)
Sussmann, R.; Homburg, F.; Freudenthaler, V.; Jaeger, H. [Frauenhofer Inst. fuer Atmosphaerische Umweltforschung, Garmisch-Partenkirchen (Germany)
1997-12-31
The CCD image of a persistent contrail and the coincident LIDAR measurement are presented. To extrapolate the LIDAR-derived optical thickness to the video field of view, an anisotropy correction and calibration have to be performed. The observed bright halo components result from highly regularly oriented hexagonal crystals with sizes of 200 {mu}m-2 mm. This is explained by measured ambient humidities below the formation threshold of natural cirrus. The optical thickness from LIDAR shows significant discrepancies with the result from coincident NOAA-14 data. Errors result from the anisotropy correction and from parameterized relations between AVHRR channels and optical properties. (author) 28 refs.
International Nuclear Information System (INIS)
This paper describes the methodology and the results obtained at the 1304 Å wavelength from an analysis of the AFGL Polar Bear experiment. The basic measurement equipment provided data with a spatial resolution of 20 km over a large portion of the earth. The instrumentation also provided sampled outputs as the footprint scanned along the measurement track. The combination of fine scanning and large area coverage provided an opportunity for a spatial power spectral analysis, which in turn provided a means for extrapolation to finer spatial scales.
Kasprzak, W. T.; Newton, G. P.
1976-01-01
An investigation is conducted concerning the feasibility of extrapolating the Ogo 6 empirical composition model to altitudes lower than 450 km. Extrapolated Ogo 6 model densities are therefore compared with data obtained in the Neutral Atmospheric Composition Experiment (Nace) carried out from April to November 1971. The results of the investigation support the conclusions of an earlier comparison of Ogo 6 and Nace data conducted by Newton et al. (1973).
International Nuclear Information System (INIS)
Dynamic phenomena indicative of slipping reconnection and magnetic implosion were found in a time series of nonlinear force-free field (NLFFF) extrapolations for the active region 11515, which underwent significant changes in the photospheric fields and produced five C-class flares and one M-class flare over five hours on 2012 July 2. NLFFF extrapolation was performed for the uninterrupted 5 hour period from the 12 minute cadence vector magnetograms of the Helioseismic and Magnetic Imager on board the Solar Dynamics Observatory. According to the time-dependent NLFFF model, there was an elongated, highly sheared magnetic flux rope structure that aligns well with an Hα filament. This long filament splits sideways into two shorter segments, which further separate from each other over time at a speed of 1-4 km s–1, much faster than that of the footpoint motion of the magnetic field. During the separation, the magnetic arcade arching over the initial flux rope significantly decreases in height, from ~4.5 Mm to less than 0.5 Mm. We discuss the reality of this modeled magnetic restructuring by relating it to the observations of the magnetic cancellation, flares, a filament eruption, a penumbra formation, and magnetic flows around the magnetic polarity inversion line.
Shida, Satomi; Utoh, Masahiro; Murayama, Norie; Shimizu, Makiko; Uno, Yasuhiro; Yamazaki, Hiroshi
2015-10-01
1. Cynomolgus monkeys are widely used in preclinical studies as non-human primate species. Pharmacokinetics of human cytochrome P450 probes determined in cynomolgus monkeys after single oral or intravenous administrations were extrapolated to give human plasma concentrations. 2. Plasma concentrations of slowly eliminated caffeine and R-/S-warfarin and rapidly eliminated omeprazole and midazolam previously observed in cynomolgus monkeys were scaled to human oral biomonitoring equivalents using known species allometric scaling factors and in vitro metabolic clearance data with a simple physiologically based pharmacokinetic (PBPK) model. Results of the simplified human PBPK models were consistent with reported experimental PK data in humans or with values simulated by a fully constructed population-based simulator (Simcyp). 3. Oral administrations of metoprolol and dextromethorphan (human P450 2D probes) in monkeys reportedly yielded plasma concentrations similar to their quantitative detection limits. Consequently, ratios of in vitro hepatic intrinsic clearances of metoprolol and dextromethorphan determined in monkeys and humans were used with simplified PBPK models to extrapolate intravenous PK in monkeys to oral PK in humans. 4. These results suggest that cynomolgus monkeys, despite their rapid clearance of some human P450 substrates, could be a suitable model for humans, especially when used in conjunction with simple PBPK models. PMID:26075833
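The "known species allometric scaling factors" step can be illustrated with the conventional 0.75-power body-weight scaling of clearance; the monkey clearance, body weights and dosing below are hypothetical, and the exponent is the customary assumption rather than a measured value:

```python
def scale_clearance(cl_animal_L_h, bw_animal_kg, bw_human_kg, exponent=0.75):
    # Single-species allometric scaling:
    # CL_human = CL_animal * (BW_human / BW_animal)^0.75.
    return cl_animal_L_h * (bw_human_kg / bw_animal_kg) ** exponent

def css_oral(dose_rate_mg_h, bioavailability, clearance_L_h):
    # Average steady-state concentration for repeated oral dosing:
    # Css = F * (dose rate) / CL.
    return bioavailability * dose_rate_mg_h / clearance_L_h  # mg/L

# Hypothetical monkey data: CL = 1.2 L/h at 3.5 kg, scaled to a 70 kg human.
cl_human = scale_clearance(1.2, 3.5, 70.0)
css = css_oral(10.0, 0.5, cl_human)
```

Plugging the scaled clearance into even this trivial dosing equation reproduces the spirit of the abstract's "simplified PBPK" approach: animal in vivo data plus an assumed scaling law yield a first-pass human exposure estimate.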
Directory of Open Access Journals (Sweden)
T. Gerken
2012-04-01
This paper introduces a surface model with two soil layers for use in a high-resolution circulation model, modified with an extrapolated surface temperature to be used for the calculation of turbulent fluxes. A quadratic temperature profile based on the layer mean and base temperature is assumed in each layer and extended to the surface. The model is tested at two sites on the Tibetan Plateau near Nam Co Lake on four days during the 2009 monsoon season. In comparison to a two-layer model without an explicit surface temperature estimate, the delay in the diurnal flux cycles is greatly reduced and the modelled surface temperature is much closer to observations. Comparison with a SVAT model and eddy covariance measurements shows an overall reasonable model performance based on RMSD and cross-correlation comparisons between the modified and original model. A potential limitation of the model is the need for careful initialisation of the soil temperature profile, which requires field measurements. We show that the modified model is capable of reproducing fluxes of similar magnitudes and dynamics when compared to more complex methods chosen as a reference.
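A quadratic profile through a layer, constrained by its mean and base temperatures, extrapolates to a closed-form surface value. The sketch below additionally assumes a zero temperature gradient at the layer base to close the system; that lower boundary condition is our illustrative assumption, not necessarily the paper's:

```python
def surface_temperature(layer_mean_T, base_T):
    # Quadratic profile T(z) = T_base + c*(z - d)^2 over a layer of depth d,
    # with dT/dz = 0 at the base (assumed here for illustration).  The
    # layer-mean constraint gives c*d^2/3 = Tmean - Tbase, so the
    # extrapolated surface value is T(0) = 3*Tmean - 2*Tbase, independent
    # of the layer depth d.
    return 3.0 * layer_mean_T - 2.0 * base_T

# A layer whose mean (292 K) exceeds its base value (290 K) extrapolates to
# a warmer skin temperature for the turbulent flux calculation.
t_skin = surface_temperature(292.0, 290.0)
```

Using the extrapolated skin value rather than the layer mean is what removes the diurnal phase lag the abstract describes: the surface responds to heating before the bulk layer mean does.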
Low concentrations and short environmental persistence times of some herbicides make it difficult to develop analytical methods to detect herbicide residues in plants or soils. In contrast, genomics may provide tools to identify herbicide exposure to plants in field settings. Usi...
Imaging of defects in girth welds using inverse wave field extrapolation of ultrasonic data:
Pörtzgen, N.
2007-01-01
Ultrasonic non-destructive testing is a renowned method for the inspection of girth welds. However, defect sizing and characterization remain challenging with the current inspection philosophy. In addition, data display and interpretation are not straightforward and require skill and experience from the inspector. A better and more reliable inspection result would contribute to safer pipeline construction and economic benefits (like low false call rates and the possibility to use smaller wal...
Ezequiel Geremia
2013-01-01
The large size of the datasets produced by medical imaging protocols contributes to the success of supervised discriminative methods for semantic labelling of images. Our study makes use of a general and efficient emerging framework, discriminative random forests, for the detection of brain lesions in multi-modal magnetic resonance images (MRIs). The contribution is three-fold. First, we focus on segmentation of brain lesions which is an essential task to diagnosis, prognosis and therapy plan...
Full-disk nonlinear force-free field extrapolation of SDO/HMI and SOLIS/VSM magnetograms
Tadesse, Tilaye; Inhester, B; MacNeice, P; Pevtsov, A; Sun, X
2012-01-01
Extrapolation codes in Cartesian geometry for modelling the magnetic field in the corona do not take the curvature of the Sun's surface into account and can only be applied to relatively small areas, e.g., a single active region. We compare the analysis of the photospheric magnetic field and subsequent force-free modeling based on full-disk vector maps from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) and the Vector Spectromagnetograph (VSM) of the Synoptic Optical Long-term Investigations of the Sun (SOLIS). We use Helioseismic and Magnetic Imager and Vector Spectromagnetograph photospheric magnetic field measurements to model the force-free coronal field above multiple solar active regions, assuming magnetic forces to dominate. We solve the nonlinear force-free field equations by minimizing a functional in spherical coordinates over a full disk excluding the poles. After searching for the optimum modeling parameters for the particular data sets, we compare the resulting nonli...
Wang, Zhen; Leung, Kenneth M Y
2015-10-01
Unionised ammonia (NH3) is highly toxic to freshwater organisms. Yet, most of the available toxicity data on NH3 were generated from temperate regions, while toxicity data on NH3 derived from tropical species were limited. To address this issue, we first conducted standard acute toxicity tests on NH3 using ten tropical freshwater species. Subsequently, we constructed a tropical species sensitivity distribution (SSD) using these newly generated toxicity data and available tropical toxicity data of NH3, which was then compared with the corresponding temperate SSD constructed from documented temperate acute toxicity data. Our results showed that tropical species were generally more sensitive to NH3 than their temperate counterparts. Based on the ratio between temperate and tropical hazardous concentration 10% values, we recommend an extrapolation factor of four to be applied when surrogate temperate toxicity data or temperate water quality guidelines of NH3 are used for protecting tropical freshwater ecosystems. PMID:26093078
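A species sensitivity distribution of the kind used here is commonly fitted as a normal distribution in log10 concentration. The sketch below computes HC10 values and the temperate/tropical ratio from invented toxicity data; the log-normal assumption and the data are illustrative only:

```python
from statistics import NormalDist, mean, stdev

def hc_p(log10_toxicity_values, p=0.10):
    # Fit a log-normal species sensitivity distribution to log10-transformed
    # toxicity values and return the hazardous concentration for fraction p
    # of species: HCp = 10^(mu + z_p * sigma).
    mu = mean(log10_toxicity_values)
    sigma = stdev(log10_toxicity_values)
    z = NormalDist().inv_cdf(p)  # z_0.10 is approximately -1.2816
    return 10.0 ** (mu + z * sigma)

# Hypothetical log10 LC50s (mg/L) for temperate and tropical species sets;
# the tropical set is shifted 0.6 log units lower (more sensitive).
temperate = [0.8, 1.0, 1.1, 1.3, 1.5, 1.7]
tropical  = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1]
factor = hc_p(temperate) / hc_p(tropical)  # candidate extrapolation factor
```

With equal spreads, the HC10 ratio reduces to the shift between the two distributions (10^0.6, roughly 4), which mirrors the factor-of-four recommendation in the abstract.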
Energy Technology Data Exchange (ETDEWEB)
Behrens, R. [Physikalisch-Technische Bundesanstalt (PTB), 38116 Braunschweig, (Germany); Fedina, S.; Oborin, A. [D I Mendeleyev Institute for Metrology (VNIIM), 198005 St Petersburg, (Russian Federation)
2011-07-01
An intercomparison of the absorbed dose rate in tissue, Dt(0.07), at radiation protection levels for beta dosimetry was performed between two national metrology institutes, the D I Mendeleyev Institute for Metrology (VNIIM) in St Petersburg (Russia) and the Physikalisch-Technische Bundesanstalt (PTB) in Braunschweig (Germany), from 2009 to 2010. For this comparison, radiation sources of both institutes were calibrated using the primary standard measuring devices (extrapolation chambers) of both institutes, i.e. no transfer instrument was used as both primary standards were directly compared. The values of the absorbed dose rates in tissue agree within 1.2% for two different {sup 90}Sr/{sup 90}Y sources, within 1.0% for one {sup 85}Kr source and within 1.5% and 4.2% for two different {sup 147}Pm sources. All these deviations are within 1 to 2 times the corresponding standard deviations. (authors)
Energy Technology Data Exchange (ETDEWEB)
Scott, B.R.; Muggenburg, B.A.; Welsh, C.A.; Angerstein, D.A.
1994-11-01
The alpha emitter plutonium-238 ({sup 238}Pu), which is produced in uranium-fueled, light-water reactors, is used as a thermoelectric power source for space applications. Inhalation of a mixed oxide form of Pu is the most likely mode of exposure of workers and the general public. Occupational exposures to {sup 238}PuO{sub 2} have occurred in association with the fabrication of radioisotope thermoelectric generators. Organs and tissue at risk for deterministic and stochastic effects of {sup 238}Pu-alpha irradiation include the lung, liver, skeleton, and lymphatic tissue. Little has been reported about the effects of inhaled {sup 238}PuO{sub 2} on peripheral blood cell counts in humans. The purpose of this study was to investigate hematological responses after a single inhalation exposure of Beagle dogs to alpha-emitting {sup 238}PuO{sub 2} particles and to extrapolate results to humans.
Extrapolation of Urn Models via Poissonization: Accurate Measurements of the Microbial Unknown
Lladser, Manuel; Reeder, Jens; 10.1371/journal.pone.0021105
2011-01-01
The availability of high-throughput parallel methods for sequencing microbial communities is increasing our knowledge of the microbial world at an unprecedented rate. Though most attention has focused on determining lower-bounds on the alpha-diversity i.e. the total number of different species present in the environment, tight bounds on this quantity may be highly uncertain because a small fraction of the environment could be composed of a vast number of different species. To better assess what remains unknown, we propose instead to predict the fraction of the environment that belongs to unsampled classes. Modeling samples as draws with replacement of colored balls from an urn with an unknown composition, and under the sole assumption that there are still undiscovered species, we show that conditionally unbiased predictors and exact prediction intervals (of constant length in logarithmic scale) are possible for the fraction of the environment that belongs to unsampled classes. Our predictions are based on a P...
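The urn-model prediction above targets the fraction of the environment belonging to unsampled classes; a minimal classical stand-in for that quantity is the Good-Turing coverage estimate sketched below (the paper's conditionally unbiased predictors and exact prediction intervals are considerably more refined than this):

```python
# Sketch: classical Good-Turing estimate of the probability that the next
# draw from the urn belongs to a still-unsampled class, i.e. the fraction of
# the environment in unseen species.
from collections import Counter

def unseen_fraction(sample):
    """Estimate P(next draw is a new class) as (# singletons) / (sample size)."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

reads = ["sp1"] * 5 + ["sp2"] * 3 + ["sp3", "sp4"]  # toy sequencing sample
print(unseen_fraction(reads))  # 2 singletons out of 10 draws -> 0.2
```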
Extrapolating ecological risks of ionizing radiation from individuals to populations to ecosystems
International Nuclear Information System (INIS)
Approaches for protecting ecosystems from ionizing radiation are quite different from those used for protecting ecosystems from adverse effects of toxic chemicals. The methods used for chemicals are conceptually similar to those used to assess risks of chemicals to human health in that they focus on the protection of the most sensitive or most highly exposed individuals. The assumption is that if sensitive or maximally exposed species and life stages are protected, then ecosystems will be protected. Radiological protection standards, on the other hand, are explicitly premised on the assumption that organisms, populations and ecosystems all possess compensatory capabilities to allow them to survive in the face of unpredictable natural variation in their environments. These capabilities are assumed to persist in the face of at least some exposure to ionizing radiation. The prevailing approach to radiological protection was developed more than 30 years ago, at a time when the terms risk assessment and risk management were rarely used. The expert review approach used to derive radiological protection standards is widely perceived to be inconsistent with the open, participatory approach that prevails today for the regulation of toxic chemicals. The available data for environmental radionuclides vastly exceed those available for any chemical. Therefore, given an understanding of dose-response relationships for radiation effects and exposures for individual organisms, it should be possible to develop methods for quantifying effects of radiation on populations. A tiered assessment scheme as well as available population models that could be used for the ecological risk assessment of radionuclides is presented. (author)
Extrapolating traditional DNA microarray statistics to tiling and protein microarray technologies.
Royce, Thomas E; Rozowsky, Joel S; Luscombe, Nicholas M; Emanuelsson, Olof; Yu, Haiyuan; Zhu, Xiaowei; Snyder, Michael; Gerstein, Mark B
2006-01-01
A credit to microarray technology is its broad application. Two experiments--the tiling microarray experiment and the protein microarray experiment--are exemplars of the versatility of the microarrays. With the technology's expanding list of uses, the corresponding bioinformatics must evolve in step. There currently exists a rich literature developing statistical techniques for analyzing traditional gene-centric DNA microarrays, so the first challenge in analyzing the advanced technologies is to identify which of the existing statistical protocols are relevant and where and when revised methods are needed. A second challenge is making these often very technical ideas accessible to the broader microarray community. The aim of this chapter is to present some of the most widely used statistical techniques for normalizing and scoring traditional microarray data and indicate their potential utility for analyzing the newer protein and tiling microarray experiments. In so doing, we will assume little or no prior training in statistics of the reader. Areas covered include background correction, intensity normalization, spatial normalization, and the testing of statistical significance. PMID:16939796
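One of the intensity-normalization techniques such a chapter surveys can be made concrete; the sketch below implements quantile normalization on a toy two-array matrix (the choice of this particular algorithm for illustration is ours, not the chapter's):

```python
# Sketch: quantile normalization applied to a toy expression matrix
# (each inner list is one array). Every array is forced to share the same
# empirical intensity distribution.
def quantile_normalize(columns):
    n = len(columns[0])
    # indices of each array's values in ascending order
    ranked = [sorted(range(n), key=col.__getitem__) for col in columns]
    # mean of the k-th smallest value across arrays becomes the k-th quantile
    means = [sum(col[r[k]] for col, r in zip(columns, ranked)) / len(columns)
             for k in range(n)]
    out = [[0.0] * n for _ in columns]
    for col_i, r in enumerate(ranked):
        for k, row in enumerate(r):
            out[col_i][row] = means[k]
    return out

a, b = [5.0, 2.0, 3.0], [4.0, 1.0, 6.0]
print(quantile_normalize([a, b]))
```

After normalization both arrays contain the same set of values (the quantile means), each placed at the rank position its original value held.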
DEFF Research Database (Denmark)
Hui, Cang; McGeoch, Melodie A.
2009-01-01
The estimation of species abundances at regional scales requires a cost-efficient method that can be applied to existing broadscale data. We compared the performance of eight models for estimating species abundance and community structure from presence-absence maps of the southern African avifauna. Six models were based on the intraspecific occupancy-abundance relationship (OAR); the other two on the scaling pattern of species occupancy (SPO), which quantifies the decline in species range size when measured across progressively finer scales. The performance of these models was examined using five tests: the first three compared the predicted community structure against well-documented macroecological patterns; the final two compared published abundance estimates for rare species and the total regional abundance estimate against predicted abundances. Approximately two billion birds were estimated as occurring in South Africa, Lesotho, and Swaziland. SPO models outperformed the OAR models, due to OAR models assuming environmental homogeneity and yielding scale-dependent estimates. Therefore, OAR models should only be applied across small, homogenous areas. By contrast, SPO models are suitable for data at larger spatial scales because they are based on the scale dependence of species range size and incorporate environmental heterogeneity (assuming fractal habitat structure or performing a Bayesian estimate of occupancy). Therefore, SPO models are recommended for assemblage-scale regional abundance estimation based on spatially explicit presence-absence data.
Young, Sean; Dutta, Debo; Dommety, Gopal
2009-06-01
Online social network users may leave creative, subtle cues on their public profiles to communicate their motivations and interests to other network participants. This paper explores whether psychological predictions can be made about the motivations of social network users by identifying and analyzing these cues. Focusing on the domain of relationship seeking, we predicted that people using social networks for dating would reveal that they have a single relationship status as a method of eliciting contact from potential romantic others. Based on results from a pilot study (n = 20) supporting this hypothesis, we predicted that people attempting to attract users of the same religious background would report a religious affiliation along with a single relationship status. Using observational data from 150 Facebook profiles, results from a multivariate logistic regression suggest that people providing a religious affiliation were more likely to list themselves as single (a proxy for their interest in using the network to find romantic partners) than people who do not provide religious information. We discuss the implications for extracting psychological information from Facebook profiles. To our knowledge, this is the first study to suggest that information from publicly available online social networking profiles can be used to predict people's motivations for using social networks. PMID:19366321
International Nuclear Information System (INIS)
SCK-CEN is studying the disposal of high and long-lived medium level waste in the Boom Clay at Mol, Belgium. In the performance assessment for such a repository time extrapolation is an inherent problem due to the extremely long half-life of some important radionuclides. To increase the confidence in these time extrapolations SCK-CEN applies a combination of different experimental and modelling approaches including laboratory and in situ experiments, natural analogue studies, deterministic (or mechanistic) models and stochastical models. An overview is given of these approaches and some examples of applications to the different repository system components are given. (author)
Energy Technology Data Exchange (ETDEWEB)
Bastos, Fernanda Martins
2015-04-01
In laboratories involving Radiological Protection practices, it is usual to use reference radiations for calibrating dosimeters and to study their response in terms of energy dependence. The International Organization for Standardization (ISO) established four series of reference X-ray beams in the ISO 4037 standard: the L and H series, of low and high air kerma rates, respectively, the N series of narrow spectrum and the W series of wide spectrum. X-ray beams with tube potential below 30 kV, called 'low energy beams', are, in most cases, critical with regard to the determination of their characterization parameters, such as the half-value layer. Extrapolation chambers are parallel-plate ionization chambers with one mobile electrode that allows variation of the air volume in their interior. These detectors are commonly used to measure the quantity absorbed dose, mostly at the surface of the medium, based on the extrapolation of the linear ionization current as a function of the distance between the electrodes. In this work, a characterization of a model 23392 PTW extrapolation chamber was done in low energy X-ray beams of the ISO 4037 standard, by determining the polarization voltage range through the saturation curves and the value of the true null electrode spacing. In addition, the metrological reliability of the extrapolation chamber was studied with measurements of the leakage current and repeatability tests; limit values were established for the proper use of the chamber. The PTW 23392 extrapolation chamber was calibrated in terms of air kerma in some of the ISO low energy radiation series; the traceability of the chamber to the National Standard Dosimeter was established. The energy dependence of the extrapolation chamber and the uncertainties related to the calibration coefficient were also assessed; it was shown that the energy dependence was reduced to 4% when the extrapolation technique was used.
Finally, the first half-value layers were determined for the low energy ISO N series with the extrapolation chamber, in collimated and uncollimated beams and it was showed that this detector is feasible for such measurements. (author)
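The extrapolation principle behind such chambers, a linear fit of ionization current against electrode spacing whose x-intercept estimates the true null electrode spacing mentioned above, can be sketched with hypothetical readings:

```python
# Sketch: extrapolation-chamber principle - fit ionization current vs.
# electrode spacing; the slope dI/dd carries the dose-rate information and
# the x-intercept estimates the true null electrode spacing. All readings
# below are invented placeholders.
def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

spacing_mm = [0.5, 1.0, 1.5, 2.0, 2.5]         # nominal plate spacings
current_pA = [0.62, 1.10, 1.58, 2.06, 2.54]    # hypothetical saturation currents
slope, intercept = least_squares(spacing_mm, current_pA)
null_offset_mm = -intercept / slope            # where the fitted line crosses I = 0
print(f"dI/dd = {slope:.3f} pA/mm, null-spacing offset = {null_offset_mm:.2f} mm")
```

A negative offset, as with these placeholder readings, would mean the effective spacing exceeds the nominal one, which is exactly what the "true null electrode spacing" calibration corrects for.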
Oberdörster, G
1989-01-01
Dose-effect relationships of inhaled substances are complicated by the interrelationship between inhaled dose, deposited dose, and retained dose. Deposited and retained doses are most important for evaluating dose-effect relations; however, inhaled dose and exposure concentration that are not representative of the actual dose to target sites are widely used for this purpose. For extrapolating results of animal inhalation studies to humans, several factors have to be considered for calculating a human equivalent dose to the respiratory tract and for estimating a human equivalent exposure concentration. Among these factors is the separate deposition in the nasopharyngeal, tracheobronchial, and alveolar regions, in terms of both total regional deposition and deposited dose per unit surface area. Predictive particle deposition models for the respiratory tract can be used for calculating these. The retained dose is another factor that takes into account respiratory tract retention and determines the long-term dose to the respiratory tract. A rat inhalation study using Ni3S2 exposure (concentration, 970 micrograms m-3; duration, 78 wk; exposure, 6 h d-1, 5 d wk-1) resulted in bronchogenic and alveologenic tumors. Extrapolation modeling of the rat data was performed based on the dose factors discussed above, assuming different conditions of pulmonary retention for Ni3S2 with half-times of 36 d for rats and 103 d for humans. Model calculations showed that the deposited surface area dose was greater for the tracheobronchial than for the pulmonary region in both rat and man. The retained dose per gram of lung was greater in rat than in man under resting conditions. An equivalent exposure concentration would be lower in humans than in the rat if it is based on the retained dose expressed per square centimeter of alveolar surface area. However, the inhaled equivalent concentration in man can be considerably higher when the tracheobronchial surface area dose is considered.
The most sensitive region of the respiratory tract--for example, with regard to tumor induction--should be selected for estimating human equivalent exposure. PMID:2606684
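The role of the differing retention half-times (36 d in rats vs. 103 d in humans) can be illustrated with a first-order clearance model; the deposition rate below is an assumed placeholder, not a value from the study:

```python
# Sketch: lung burden under chronic inhalation with first-order clearance,
# A(t) = (R / lambda) * (1 - exp(-lambda * t)) for constant deposition rate R.
# Longer retention half-times give higher retained doses for the same intake.
import math

def burden(dep_rate_ug_per_d, half_life_d, t_d):
    lam = math.log(2) / half_life_d
    return dep_rate_ug_per_d / lam * (1 - math.exp(-lam * t_d))

R = 1.0  # hypothetical deposition rate, micrograms per day
for species, t_half in [("rat", 36.0), ("human", 103.0)]:
    print(species, round(burden(R, t_half, 365.0), 1), "ug retained after 1 y")
```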
Scientific Electronic Library Online (English)
José Francisco dos, Reis Sobrinho; Levi de Oliveira, Bueno.
2014-04-01
Full Text Available Hot tensile and creep data were obtained for 2.25Cr-1Mo steel, ASTM A387 Gr.22CL2, at the temperatures of 500-550-600-650-700 °C. Using the concept of equivalence between hot tensile data and creep data, the results were analyzed according to the methodology based on Kachanov Continuum Damage Mechanics proposed by Penny, which suggests the possibility of using short time creep data obtained in the laboratory for extrapolation to long operating times corresponding to tens of thousands of hours. The hot tensile data (converted to creep) better define the region where the damage parameter ω=0, and the creep data define the region where ω=1, according to the methodology. Extrapolation to 10,000 h and 100,000 h is performed and the results compared with results obtained by other extrapolation procedures such as the Larson-Miller and Manson-Haferd methodologies. Extrapolation from ASTM and NIMS Datasheets for 10,000 h and 100,000 h as well as data from other authors on 2.25Cr-1Mo steel are used for assessing the reliability of the results.
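The Larson-Miller procedure named among the comparison methods can be sketched as follows; the constant C = 20 and the laboratory data point are assumptions for illustration, not the paper's fitted values:

```python
# Sketch: Larson-Miller time-temperature extrapolation,
#   LMP = T * (C + log10(t_r)),   T in kelvin, t_r in hours.
# C = 20 is a common choice for low-alloy steels (an assumption here).
import math

C = 20.0

def lmp(T_K, t_r_h):
    return T_K * (C + math.log10(t_r_h))

def rupture_time(T_K, lmp_value):
    return 10 ** (lmp_value / T_K - C)

# hypothetical short-time lab point: rupture after 1 000 h at 650 C (923 K)
P = lmp(923.0, 1.0e3)
print(f"extrapolated rupture time at 550 C: {rupture_time(823.0, P):.0f} h")
```

This is exactly the leverage the abstract describes: a short laboratory test, through an iso-parameter assumption, predicts lifetimes of hundreds of thousands of hours at service temperature.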
cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals. Wilson, V.S.1, Korte, J.2, Hartig P. 1, Ankley, G.T.2, Gray, L.E., Jr 1, and Welch, J.E.1. 1U.S...
International Nuclear Information System (INIS)
Analysis of Type I ELMs from ongoing experiments shows that ELM energy losses are correlated with the density and temperature of the pedestal plasma before the ELM crash. The Type I ELM plasma energy loss normalized to the pedestal energy is found to correlate across experiments with the collisionality of the pedestal plasma (ν*ped), decreasing with increasing ν*ped. Other parameters affect the ELM size, such as the edge magnetic shear, etc, which influence the plasma volume affected by the ELMs. ELM particle losses are influenced by this ELM affected volume and are weakly dependent on other pedestal plasma parameters. In JET and DIII-D, under some conditions, ELMs can be observed ('minimum' Type I ELMs with energy losses acceptable for ITER), that do not affect the plasma temperature. The duration of the divertor ELM power pulse is correlated with the typical ion transport time from the pedestal to the divertor target (τ||Front = 2πRq95/cs,ped) and not with the duration of the ELM-associated MHD activity. Similarly, the timescale of ELM particle fluxes is also determined by τ||Front. The extrapolation of the present experimental results to ITER is summarized
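The quoted timescale τ||Front = 2πRq95/cs,ped can be evaluated for illustrative ITER-like parameters; R, q95, the pedestal temperature and the equal-temperature sound-speed form below are assumptions for this sketch, not values taken from the paper:

```python
# Sketch: ELM power-pulse timescale tau_front = 2*pi*R*q95 / c_s,ped,
# with the pedestal ion sound speed approximated assuming T_e = T_i = T_ped.
import math

def tau_front_s(R_m, q95, T_ped_eV, m_ion_kg):
    c_s = math.sqrt(2.0 * 1.602e-19 * T_ped_eV / m_ion_kg)  # m/s
    return 2.0 * math.pi * R_m * q95 / c_s

m_D = 3.344e-27                                # deuteron mass, kg
tau = tau_front_s(6.2, 3.0, 4000.0, m_D)       # assumed ITER-like numbers
print(f"tau_front ~ {tau * 1e3:.2f} ms")
```

The result lands in the sub-millisecond range, i.e. the ion transit time scale the abstract associates with the divertor ELM power pulse.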
International Nuclear Information System (INIS)
The distributions of approximately 7000 flares of importance >= 1 were plotted relative to the sector-structure boundaries of the interplanetary magnetic field (+-) and (-+) extrapolated to the Sun. The data obtained for the time period July 1955 - December 1961 were used. The distributions obtained were analysed jointly with the same distributions for 1964-1974. It is shown that a stable concentration of the flares is observed only near the (-+) boundaries for both hemispheres of the Sun during the increase of the activity and near the maxima of cycles No 19 and 20. There is no difference between ''Hale'' and ''non-Hale'' boundaries for these flares. A decrease of the flares was revealed near boundaries of the (+-) type. During the activity decrease phase, after the inversion of the Sun's general field polarity, the concentration of the flares at the boundaries is absent. The difference between Hale and non-Hale boundaries for flares is revealed only in some increase of the flare concentration near the Hale boundaries. The results obtained are likely to give additional evidence in favour of a connection between the solar magnetic field and flare activity
Maingi, R.
2014-11-01
Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. Two strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R&D. In addition, recent progress in ELM-free regimes, namely quiescent H-mode, I-mode, and enhanced pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.
Energy Technology Data Exchange (ETDEWEB)
Maingi, R [PPPL
2014-07-01
Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. The two baseline strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R & D. In addition, recent progress in ELM-free regimes, namely Quiescent H-mode, I-mode, and Enhanced Pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.
Directory of Open Access Journals (Sweden)
B. Deutsch
2010-04-01
Full Text Available Rates of denitrification in sediments were measured with the isotope pairing technique at different sites in the southern and central Baltic Sea. They varied between 0.5 µmol m^{-2} h^{-1} in sands and 28.7 µmol m^{-2} h^{-1} in muddy sediments and showed a good correlation to the organic carbon contents of the surface sediments. N-removal rates via sedimentary denitrification were estimated for the entire Baltic Sea calculating sediment specific denitrification rates and interpolating them to the whole Baltic Sea area. Another approach was carried out by using the relationship between the organic carbon content and the rate of denitrification. For the entire Baltic Sea the N-removal by denitrification in sediments varied between 426–652 kt N a^{-1}, which is around 48–73% of the external N inputs delivered via rivers, coastal point sources and atmospheric deposition. Moreover, an expansion of the anoxic bottom areas was considered under the assumption of a rising oxycline from 100 to 80 m water depth. This leads to an increase of the area with anoxic conditions and an overall decrease in sedimentary denitrification by 14%. Overall we can show here that this type of data extrapolation is a powerful tool to estimate the nitrogen losses for a whole coastal sea and may be applicable to other coastal regions and enclosed seas, too.
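The second upscaling approach described above (regress denitrification rate on organic carbon content, then integrate over area classes) can be sketched numerically; apart from the two quoted extreme rates, the C_org values, intermediate rates and area classes below are invented placeholders:

```python
# Sketch: upscale sediment denitrification by regressing rate on organic
# carbon content and integrating over hypothetical basin area classes.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

c_org_pct = [0.1, 0.5, 1.5, 3.0, 5.0]      # surface-sediment C_org, %
dn_rate   = [0.5, 3.0, 9.0, 17.0, 28.7]    # umol N m^-2 h^-1
slope, icpt = fit_line(c_org_pct, dn_rate)

# hypothetical area classes of a basin: (mean C_org %, area in m^2)
areas = [(0.3, 1.0e11), (1.2, 8.0e10), (3.5, 4.0e10)]
umol_per_h = sum((slope * c + icpt) * a for c, a in areas)
kt_N_per_yr = umol_per_h * 8760 * 14e-6 * 1e-9  # umol/h -> kt N per year
print(f"basin-wide N removal ~ {kt_N_per_yr:.0f} kt N / yr")
```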
What you see may not always be what you get : Bioavailability and extrapolation from in vitro tests
DEFF Research Database (Denmark)
Nielsen, Jesper Bo
2008-01-01
In human risk assessment, bioavailability needs to be considered when relying on in vitro toxicity results. For single chemicals, this quantitative challenge is often handled through a bioavailability factor. For mixtures, however, things are more complicated. Thus, individual constituents may not only interact toxicodynamically and toxicokinetically, but the composition of constituents reaching the target site may also differ from what was present at the site of exposure due to the differences in their bioavailabilities. A recent study concluded on the in vivo potential of Australian tea-tree oil (TTO) to act as an endocrine disruptor based on an in vitro protocol measuring the growth of MCF-7 cells following chemical exposure to TTO. TTO is primarily used topically in humans, and is not a single chemical but a mixture, with some constituents penetrating the skin while others do not. The present study evaluated in an identical in vitro model to what extent TTO and its skin penetrating constituents affected the growth of MCF-7 cells. The estrogenic potency of TTO was confirmed, but none of the bioavailable TTO constituents demonstrated estrogenicity. The present study, therefore, cautions against in vitro to in vivo extrapolations for mixtures of constituents with potentially varying bioavailabilities. Publication date: June
Tassis, Konstantinos; Pavlidou, Vasiliki
2015-07-01
Recent Planck results have shown that radiation from the cosmic microwave background passes through foregrounds in which aligned dust grains produce polarized dust emission, even in regions of the sky with the lowest level of dust emission. One of the most commonly used ways to remove the dust foreground is to extrapolate the polarized dust emission signal from frequencies where it dominates (e.g. ˜350 GHz) to frequencies commonly targeted by cosmic microwave background experiments (e.g. ˜150 GHz). In this Letter, we describe an interstellar medium effect that can lead to decorrelation of the dust emission polarization pattern between different frequencies due to multiple contributions along the line of sight. Using a simple 2-cloud model we show that there are two conditions under which this decorrelation can be large: (a) the ratio of polarized intensities between the two clouds changes between the two frequencies; (b) the magnetic fields between the two clouds contributing along a line of sight are significantly misaligned. In such cases, the 350 GHz polarized sky map is not predictive of that at 150 GHz. We propose a possible correction for this effect, using information from optopolarimetric surveys of dichroically absorbed starlight.
Mathematical methods for physical and analytical chemistry
Goodson, David Z
2011-01-01
Mathematical Methods for Physical and Analytical Chemistry presents mathematical and statistical methods to students of chemistry at the intermediate, post-calculus level. The content includes a review of general calculus; a review of numerical techniques often omitted from calculus courses, such as cubic splines and Newton's method; a detailed treatment of statistical methods for experimental data analysis; complex numbers; extrapolation; linear algebra; and differential equations. With numerous example problems and helpful anecdotes, this text gives chemistry students the mathematical
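As a concrete taste of the extrapolation material such a text covers, the sketch below applies Richardson extrapolation to a central-difference derivative; the choice of this particular example is ours, not necessarily the book's:

```python
# Sketch: Richardson extrapolation - combine central-difference derivative
# estimates with steps h and h/2 to cancel the leading O(h^2) error term.
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    d1, d2 = central_diff(f, x, h), central_diff(f, x, h / 2)
    return (4 * d2 - d1) / 3          # O(h^4)-accurate combination

x, h = 1.0, 0.1
print(abs(central_diff(math.exp, x, h) - math.e))  # crude error
print(abs(richardson(math.exp, x, h) - math.e))    # much smaller error
```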
International Nuclear Information System (INIS)
Testing was performed to determine if gravel particles will creep into and puncture the high-density polyethylene (HDPE) liner in the catch basin of a grout vault over a nominal 30-year period. Testing was performed to support a design without a protective geotextile cover after the geotextile was removed from the design. Recently, a protective geotextile cover over the liner was put back into the design. The data indicate that the geotextile has an insignificant effect on the creep of gravel into the liner. However, the geotextile may help to protect the liner during construction. Two types of tests were performed to evaluate the potential for creep-related puncture. In the first type of test, a very sensitive instrument measured the rate at which a probe crept into HDPE over a 20-minute period at temperatures of 176 degrees F to 212 degrees F (80 degrees C to 100 degrees C). The second type of test consisted of placing the liner between gravel and mortar at 194 degrees F (90 degrees C) and 45.1 psi overburden pressure for periods up to 1 year. By combining data from the two tests, the long-term behavior of the creep was extrapolated to 30 years of service. After 30 years of service, the liner will be in a nearly steady condition and further creep will be extremely small. The results indicate that the creep of gravel into the liner will not create a puncture during service at 194 degrees F (90 degrees C). The estimated creep over 30 years is expected to be less than 25 mils out of the total initial thickness of 60 mils. The test temperature of 194 degrees F (90 degrees C) corresponds to the design basis temperature of the vault. Lower temperatures are expected at the liner, which makes the test conservative. Only the potential for failure of the liner resulting from creep of gravel is addressed in this report
International Nuclear Information System (INIS)
Full text: A mathematical model of an overvoltage limiter and a multi-step extrapolation method for the solution of ordinary differential equations are offered, for the purpose of calculating wave processes in the presence of high-voltage network elements
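The abstract gives no details of the scheme, so as an assumed, representative member of the multi-step family the sketch below implements the two-step Adams-Bashforth method on a simple test equation:

```python
# Sketch: two-step Adams-Bashforth integrator, a simple multi-step method
# that extrapolates the slope from the two most recent evaluations:
#   y_{n+1} = y_n + h * (3/2 f_n - 1/2 f_{n-1})
def ab2(f, y0, t0, t1, n):
    h = (t1 - t0) / n
    t, y = t0, y0
    f_prev = f(t, y)
    y = y + h * f_prev          # bootstrap second point with one Euler step
    t += h
    for _ in range(n - 1):
        f_curr = f(t, y)
        y = y + h * (1.5 * f_curr - 0.5 * f_prev)
        f_prev, t = f_curr, t + h
    return y

decay = lambda t, y: -2.0 * y   # test equation y' = -2y, y(0) = 1
print(ab2(decay, 1.0, 0.0, 1.0, 1000))  # approximates exp(-2)
```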
Directory of Open Access Journals (Sweden)
Leonieke Vermeer
2007-01-01
Full Text Available
'If the table turns, science will stagger'. The relationship between spiritualism and science in the Netherlands around 1900
Spiritualism is the belief that living men can keep contact, usually through an intermediary called a 'medium', with spirits of the dead. The history of modern spiritualism started in 1848 in America and in the decades that followed it spread all over the world. Especially as a result of British influences, modern Anglo-Saxon spiritualism is characterized by a search for scientific proof of the so-called spiritualist phenomena. In the 1920s the Netherlands was late, in comparison with neighbouring countries, to institutionalize the scientific study of these phenomena. But this does not imply that there was no earlier discussion about it. Indeed, around 1900 there were attempts at a debate about the scientific underpinning of spiritualism, and the main stage for it was the journal Het toekomstig leven [The future life]. In the historical conceptualization of this debate it has long been common to see the spiritualists as an anti-modern counterculture and the scientists as the representatives of modernity. Recently this dichotomy has been replaced by a more nuanced view that does more justice to the historical reality. Although Het toekomstig leven often used rhetorical strategies that emphasized the confrontation with science, the journal also lavishly incorporated scientific elements and made inexhaustible attempts at a scientific debate and study of the paranormal phenomena. Unlike in neighbouring countries there were hardly any natural scientists who responded, but there were some physicians as well as pioneers of the new field of parapsychology who pleaded for scientific research of spiritualism. This research eventually became reality in 1920 under the direction of some heavyweight scientists, but just as Het toekomstig leven, the Dutch Society for Psychical Research was also marked by the difference between the critical-scientific approach and the not so critical approach of the believers.
In my contribution I have shown that this demarcation was, however, not the same as the one between science and spiritualism, because these boundaries were considerably permeable.
Scientific Electronic Library Online (English)
Pradeep, Kumar; A. Nityananda, Shetty.
2013-01-01
Full Text Available The corrosion behaviour of welded maraging steel in hydrochloric acid solutions was studied over a range of acid concentration and solution temperature by electrochemical techniques like the Tafel extrapolation method and electrochemical impedance spectroscopy. The corrosion rate of welded maraging steel increases with the increase in temperature and concentration of hydrochloric acid in the medium. The energies of activation, enthalpy of activation and entropy of activation for the corrosion process were calculated. The surface morphology of the corroded sample was evaluated by surface examination using scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS).
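The activation-energy calculation mentioned above follows the Arrhenius relation; a two-temperature sketch with invented corrosion rates, not the paper's measurements:

```python
# Sketch: Arrhenius estimate of the corrosion activation energy,
#   Ea = R * ln(r2 / r1) / (1/T1 - 1/T2),
# from corrosion rates r1, r2 at absolute temperatures T1 < T2.
import math

R_GAS = 8.314  # J / (mol K)

def activation_energy(r1, T1_K, r2, T2_K):
    return R_GAS * math.log(r2 / r1) / (1.0 / T1_K - 1.0 / T2_K)

# hypothetical corrosion rates (mm/y) at 30 C and 60 C
Ea = activation_energy(0.8, 303.15, 3.2, 333.15)
print(f"Ea ~ {Ea / 1000:.1f} kJ/mol")
```

In practice a full Arrhenius plot (ln r vs. 1/T over all studied temperatures) gives a more robust Ea than this two-point version.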
Corrosion inhibition of brass by aliphatic amines
International Nuclear Information System (INIS)
Aliphatic amines hexylamine (HCA), octylamine (OCA) and decylamine (DCA) have been used as corrosion inhibitors for (70/30) brass in 0.I M HCIO4. The inhibitor efficiency (%P) calculated using weight loss, Tafel extrapolation, linear polarization and impedance methods was found to be in the order DCA> OCA> HCA. These adsorb on brass surface following bockris-swinkels' isotherm. DCA, OCA and HCA displaced 4, 3 and 2 molecules of water from interface respectively. Displacement of water molecules brought a great reorganization of double layer at the interface. These amines during corrosion form complexes with dissolved zinc and copper ions.(Author)
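The weight-loss inhibitor efficiency (%P) behind the reported ordering is a simple ratio; the weight losses below are invented placeholders chosen only to reproduce DCA > OCA > HCA:

```python
# Sketch: inhibitor efficiency from weight-loss data,
#   %P = 100 * (W0 - W) / W0,
# where W0 is the weight loss without inhibitor and W with inhibitor.
def efficiency(w_blank, w_inhibited):
    return 100.0 * (w_blank - w_inhibited) / w_blank

w0 = 50.0  # mg lost without inhibitor (hypothetical)
losses = {"HCA": 20.0, "OCA": 12.0, "DCA": 5.0}  # hypothetical mg losses
for amine, w in losses.items():
    print(amine, f"{efficiency(w0, w):.0f} %")
```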
Simeonides, G. A.
2009-01-01
The previously demonstrated success of the reference enthalpy concept in heat transfer prediction at hypersonic flow conditions is utilized herein to propose a cost-effective methodology for extrapolation-to-flight of Stanton number measurements (or baseline computational results), and the determination of radiation-equilibrium surface temperatures that develop on actual vehicle surfaces during hypersonic / high enthalpy flight conditions. The methodology couples the (analytical) generalized reference enthalpy solution with Euler computations (providing input data along the edge of thin boundary layers) and is, therefore, significantly cheaper and more efficient than the execution of full Navier-Stokes computations that are presently incorporated, particularly so in the thermo-chemically active high enthalpy flow regime. The validity of the proposed methodology is demonstrated in a first step by means of two-dimensional test cases, whereby the extrapolated data accuracy is better than 20%.
Pellicciotti, F.; Ragettli, S.; Carenzo, M.; Ayala, A.; McPhee, J. P.; Stoffel, M.
2014-12-01
While glacier responses to climate are understood in general terms and in their main trends, model-based projections are affected by, among other factors, the type of model used and uncertainties in the meteorological input data. Recent work has attempted to improve glacio-hydrological models by including neglected processes and investigating uncertainties in their outputs. In this work, we select two knowledge gaps in current modelling practice and illustrate their importance through modelling with a fully distributed mass balance model that includes some state-of-the-art approaches for the calculation of glacier ablation, accumulation and glacier geometry changes. We use an advanced mass balance model applied to glaciers in the Andes of Chile, the Swiss Alps and the Nepalese Himalaya to investigate two issues that seem of importance for a sound assessment of glacier changes: 1) the use of physically-based models of glacier ablation (energy balance) versus more empirical models (enhanced temperature-index approaches); 2) the importance of the correct extrapolation of air temperature forcing on glaciers and the large uncertainty in model outputs associated with it. The ablation models are calibrated with a large amount of data from in-situ campaigns, and distributed observations of air temperature are used to calculate lapse rates and calibrate a thermodynamic model of temperature distribution. We show that no final assessment can be made of which type of melt model is more appropriate or accurate for the simulation of glacier ablation at the glacier scale, not even for relatively well studied glaciers. Both models perform in a similar manner at low elevations, but important differences are evident at high elevations, where lack of data prevents a final statement on which model better represents the actual ablation amounts. Accurate characterization of air temperature is important for correct simulations of glacier mass balance and volume changes. Substantial differences are obtained if we use the common approach of lapse rates (LRs) constant in time (even if properly calibrated) rather than more sophisticated approaches that account for the distinct thermal regimes on and off the glacier, associated with the glacier boundary layer in which katabatic flow is important.
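The temperature-extrapolation step discussed above can be sketched as follows. The constant lapse rate is the common approach; the damping of positive on-glacier temperatures is a purely illustrative stand-in for a boundary-layer model, and all numbers are hypothetical.

```python
def extrapolate_temperature(t_station, z_station, z_target, lapse_rate=-0.0065,
                            on_glacier=False, damping=0.6):
    """Extrapolate air temperature (deg C) from an off-glacier station.

    A constant linear lapse rate (deg C per m) is the common approach; over
    glacier ice, the katabatic boundary layer cools and damps the ambient
    temperature, mimicked here by a simple damping factor (illustrative only).
    """
    t_ambient = t_station + lapse_rate * (z_target - z_station)
    if on_glacier and t_ambient > 0.0:
        # Katabatic layer suppresses positive temperatures over the ice.
        t_ambient *= damping
    return t_ambient

# Hypothetical station at 3000 m reading 8 deg C; estimate at 3800 m.
t_off = extrapolate_temperature(8.0, 3000.0, 3800.0)                 # LR only
t_on = extrapolate_temperature(8.0, 3000.0, 3800.0, on_glacier=True)
```

Run over a whole elevation grid, the two variants can diverge substantially in the melt-relevant range just above 0 deg C, which is the effect the abstract describes.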
Jiang, Chaowei; Feng, Xueshang; Hu, Qiang
2014-01-01
Solar filaments are commonly thought to be supported in magnetic dips, in particular those of magnetic flux ropes (FRs). In this Letter, from the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in that current NLFFF extrapolations containing FRs have been limited to relatively small-scale filaments that are close to sunspots and lie along the main polarity inversion line (PIL) with strong transverse field and magnetic shear, where the existence of a FR is usually predictable. In contrast, the present filament lies along a weak-field region (photospheric field strength $\lesssim 100$ G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus it represents a far more difficult challenge to extrapolate a large-sc...
International Nuclear Information System (INIS)
In this paper, we propose a fast numerical scheme to estimate Partition Functions (PF) of symmetric Potts fields. Our strategy is first validated on 2D two-color Potts fields and then on 3D two- and three-color Potts fields. It is then applied to the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated, deactivated and inactive brain regions and to estimate region-dependent hemodynamic filters. For any brain region, a specific 3D Potts field embodies the spatial correlation over the hidden states of the voxels by modeling whether they are activated, deactivated or inactive. To make spatial regularization adaptive, the PFs of the Potts fields over all brain regions are computed prior to the brain activity estimation. Our approach first relies on a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, we propose an extrapolation method that allows us to approximate the PFs associated with the Potts fields defined over the remaining brain regions. In comparison with pre-existing methods based either on a path-sampling strategy or on mean-field approximations, our contribution strongly alleviates the computational cost and makes spatially adaptive regularization of whole-brain fMRI datasets feasible. It is also robust against grid inhomogeneities and efficient irrespective of the topological configurations of the brain regions. (authors)
The proposed paradigm for "Toxicity Testing in the 21st Century" supports the development of mechanistically-based, high-throughput in vitro assays as a potential cost-effective and scientifically-sound alternative to some whole-animal hazard testing. To accomplish this long-term...
International Nuclear Information System (INIS)
The objective of this report is to present the results obtained by measuring - with a variable-electrode extrapolation chamber (EC) - the absorbed dose rate in tissue-equivalent material delivered by the set of sources of the beta radiation secondary standard No. 86 (PSB), and to compare these results with those presented in the calibration certificates that accompany the PSB, issued by the primary laboratory Physikalisch-Technische Bundesanstalt (PTB) of the F.R.G., as well as the uncertainties associated with the measurement process. (Author)
Scientific Electronic Library Online (English)
Höfig, Pedro; Giasson, Elvio; Vendrame, Pedro Rodolfo Siqueira.
2014-12-01
Full Text Available (abstract translated from Portuguese) The objective of this work was to test methodologies for digital soil mapping (DSM) and to evaluate the possibility of map extrapolation between physiographically similar areas. The reference area for model training was located in the municipality of Sentinela do Sul, in the state of Rio Grande do Sul (RS), Brazil, and the extrapolation was done for the municipality of Cerro Grande do Sul, RS. Models were developed by DSM using environmental variables as predictors, and soil classes - obtained from a conventional soil survey at 1:50,000 scale - as dependent variables.
The combined use of two decision trees (DT), trained in two landscapes with different drainage classes, was tested. For Sentinela do Sul, the agreement between the predicted maps and the ones produced by the conventional survey was evaluated using error matrices. Since the importance of mapping errors is variable, a weighted error matrix was created to assign different importances to specific mapping errors between different mapping units. The accuracy of the Cerro Grande do Sul map was evaluated by ground truth. Map extrapolation yields satisfactory results, with accuracy higher than 75%. The use of models with two DTs divided by homogeneous landscapes generates extrapolated maps with greater accuracy, as evaluated by ground truth.
Energy Technology Data Exchange (ETDEWEB)
Bastos, Fernanda M.; Silva, Teogenes A. da, E-mail: fernanda_mbastos@yahoo.com.br, E-mail: silvata@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2014-07-01
The main objective of this work was to study the energy dependence of an extrapolation chamber in low-energy X-rays, in order to determine the value of the uncertainty associated with the variation of the incident radiation energy in the measurements in which it is used. For the energy-dependence study, comparative ionization current measurements were conducted between the extrapolation chamber and two ionization chambers: a Radcal mammography chamber, model RC6M, with an energy dependence of less than 5%, and an NE Technology radiation protection chamber, model 2575; both chambers have very thin windows, allowing their application in low-energy beams. Measurements were made at four extrapolation chamber depths, from 1.0 to 4.0 mm in 1.0 mm intervals, for each reference radiation quality. The study showed that the energy dependence of the extrapolation chamber varies with its volume. In a further analysis, it is concluded that the energy dependence of the extrapolation chamber becomes smaller when the slope of the ionization current versus depth is used for the different reference radiations; this shows that the extrapolation technique, used for the absorbed dose calculation, reduces the uncertainty associated with the variation of the response with radiation energy.
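The slope-based evaluation described above amounts to a least-squares fit of the ionization current against chamber depth. A minimal sketch (the current values are hypothetical and the conversion factor from slope to absorbed dose rate is omitted):

```python
def current_depth_slope(depths_mm, currents_pa):
    """Least-squares slope dI/dd of ionization current vs. chamber depth.

    In the extrapolation technique the absorbed dose rate is proportional
    to the limiting slope of current vs. collecting depth, which suppresses
    depth-independent contributions to the energy dependence.
    """
    n = len(depths_mm)
    mx = sum(depths_mm) / n
    my = sum(currents_pa) / n
    sxx = sum((x - mx) ** 2 for x in depths_mm)
    sxy = sum((x - mx) * (y - my) for x, y in zip(depths_mm, currents_pa))
    return sxy / sxx

# Four depths, 1.0 mm apart, as in the measurements described above
# (currents in pA are made up for illustration).
slope = current_depth_slope([1.0, 2.0, 3.0, 4.0], [2.1, 4.0, 6.1, 8.0])
```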
Directory of Open Access Journals (Sweden)
Botton R.
2006-12-01
Full Text Available (translated from French) Catalytic fluidized-bed production units appeared around 1942 in the petroleum industry and around 1960 in the chemical industry. Only the scale-up of catalytic fluidized beds for the chemical industry, which demand very high performance (> 99% conversion), is considered here. In the past, their development required the operation on industrial sites of costly pilot plants about 0.5 m in diameter and more than 10 m high. We show that these pilot plants can be avoided and that direct scale-up from the laboratory to the industrial scale is feasible. This possibility also provides a simple method for improving the catalysts of operating industrial units, and opens this technique, much appreciated in production, to low-tonnage products. The presentation is made in three parts: - The first, "Studies, Models, Learning from Pilot Plants", presented below, sets out the major problems posed by scale-up and then summarizes the studies carried out to solve them. Scale-up work on two processes, performed with pilot plants of about 0.5 m diameter, is then presented by way of example; from these results are deduced the performances that can be expected of a catalytic fluidized-bed reactor, and the guidelines to follow to obtain them. - The second part, "Scale-up Using Only Laboratory Experiments", demonstrates experimentally that the information needed for scale-up can be obtained in the laboratory and proposes a strategy for obtaining it, with experiments suggested in part by the results set out in the first article. Another result of these experimental studies is that all the physical properties of a catalytic fluidized bed depend (apart, occasionally, from the reactor diameter) on a single parameter, called the "behavioural incipient fluidization velocity". - The third part, "Theoretical Studies, Experimental Reality, Suggestions", summarizes the results of theoretical studies of the bubbles in fluidized beds, which are very often expressed as one-parameter mechanistic models in which the parameter is the bubble diameter. To confront these models with experiment, a relation is established between the bubble diameter and the behavioural incipient fluidization velocity. Suggestions are then made for improving the models, and general conclusions on fluidized beds are proposed.
International Nuclear Information System (INIS)
An attempt is made to determine the dose within 24, 48 and 72 hours of eventual exposure of healthy individuals to ionizing radiation through extrapolation of data retrieved from rats exposed to irradiation with 1, 3, 6 and 9 Gy X-rays. Seven clinical-chemical parameters are used: urea in the urine, taurine in the urine, urea in the serum, serum alkaline phosphatase, total serum lipids, sialic acid and thromboxane in the serum. A special formula is worked out and used for extrapolation of the experimental data retrieved from irradiated rats, with due consideration of the differences in the intensity of metabolic processes and in the species radiosensitivity of rats and humans. The values of the aforementioned parameters that could be obtained upon eventual exposure of persons to ionizing irradiation are determined through computer processing of the experimental data. It is believed that an accessible model for radiation dose assessment in the first three days after accidental exposure of human beings to ionizing irradiation is created. 5 refs., 4 figs. (author)
International Nuclear Information System (INIS)
The austenitic stainless steel X6 CrNi 1811 (DIN 1.4948) that is used as a structural material for the German Fast Breeder Reactor SNR 300 was creep tested in a temperature range of 550-650 deg C in both the base-material and the welded condition. The main point of this program (''Extrapolation-Program'') lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at higher temperatures, in order to extrapolate up to >= 10^5 hours at operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out over the temperature range 550-750 deg C. The present report describes the state of the running program, with test times up to 35,000 hours. Besides the creep-rupture behaviour, it is possible to make distinct quantitative statements on the creep behaviour and ductility. Extensive metallographic examinations show the fracture behaviour and changes in structure. (author)
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi1811 (DIN 1.4948) used as a structural material for the German Fast Breeder Reactor SNR 300 was creep tested in a temperature range of 550-650 deg C in both the base-material and the welded condition. The main point of this program (Extrapolation-Program) lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at higher temperatures, in order to extrapolate up to >= 10^5 hours at operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out over the temperature range 550-750 deg C. The present report describes the state of the running program with test times of 23,000 hours, and results from tests up to 55,000 hours belonging to other parallel programs are taken into account. Besides the creep-rupture behaviour, the ductility between 550 and 750 deg C is also studied. Extensive metallographic examinations have been made to study the fracture behaviour and changes in structure. (Author)
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi1811 (DIN 1.4948) used as a structural material for the German Fast Breeder Reactor SNR 300 was creep tested in a temperature range of 550-650 deg C in both the base-material and the welded condition. The main point of this program (''Extrapolation-Program'') lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at higher temperatures, in order to extrapolate up to >= 10^5 hours at operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out over the temperature range 550-750 deg C. The present report describes the state of the running program with test times of 23,000 hours, and results from tests up to 55,000 hours belonging to other parallel programs are taken into account. Besides the creep-rupture behaviour, the ductility between 550 and 750 deg C is also studied. Extensive metallographic examinations have been made to study the fracture behaviour and changes in structure. (author)
International Nuclear Information System (INIS)
Thiosemicarbazones have attracted great pharmacological interest because of their biological properties, such as cytotoxic activity against multiple strains of human tumors. Due to the excellent properties of 64Cu, the copper complex of N(4)-ortho-toluyl-2-acetylpyridine thiosemicarbazone ((64Cu)(H2Ac4oT)Cl) was developed for tumor detection by positron emission tomography. The radiopharmaceuticals were produced at the TRIGA IPR-R1 nuclear reactor of CDTN. In the present work, (64Cu)(H2Ac4oT)Cl biokinetic data (evaluated in mice bearing Ehrlich tumor) were treated by the MIRD formalism to perform internal dosimetry studies. Doses delivered by (64Cu)(H2Ac4oT)Cl to several mouse organs, as well as to the implanted tumor, were determined. The dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among the various tissues of mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 64Cu in the tissue were considered in the dose calculations. (author)
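The equal-concentration-ratio assumption mentioned above is often applied as a relative organ-mass scaling of the animal uptake before the MIRD dose calculation. A minimal sketch; the formula is the commonly used relative-mass scaling, the function name and all masses/uptakes are hypothetical, and the subsequent dose conversion is omitted:

```python
def human_percent_id_per_organ(pid_per_g_mouse, mouse_body_mass_g,
                               human_organ_mass_g, human_body_mass_g):
    """Extrapolate a mouse biodistribution value to a human organ uptake.

    Assumes an equal concentration ratio among tissues between species:
    (%ID/organ)_human = (%ID/g)_mouse * m_body_mouse
                        * m_organ_human / m_body_human
    Human organ masses would come from a phantom such as Cristy/Eckerman.
    """
    return (pid_per_g_mouse * mouse_body_mass_g
            * human_organ_mass_g / human_body_mass_g)

# Hypothetical numbers: 5 %ID/g in mouse liver, 25 g mouse,
# 1800 g human liver, 73 kg human body mass.
uptake = human_percent_id_per_organ(5.0, 25.0, 1800.0, 73000.0)
```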
Application of the Rossi-α method to the critical experiment
International Nuclear Information System (INIS)
The prompt neutron decay constant α is measured at different loadings by the Rossi-α method, and a formula relating loading and α can be given. The critical mass is then obtained from this formula using the prompt neutron decay constant αc at critical. The critical mass from the formula with αc is in better agreement with that obtained by extrapolation and interpolation.
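The loading-versus-α relation described above can be sketched as a linear fit to the measured decay constants, solved for the loading at which α reaches its critical value. The linear form and all numbers below are hypothetical; the actual relation used in the experiment may be more elaborate.

```python
def fit_alpha_vs_loading(loadings, alphas):
    """Least-squares line alpha = a + b * m through Rossi-alpha data."""
    n = len(loadings)
    mx = sum(loadings) / n
    my = sum(alphas) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(loadings, alphas))
         / sum((x - mx) ** 2 for x in loadings))
    a = my - b * mx
    return a, b

def critical_mass(a, b, alpha_c):
    """Loading at which the fitted line reaches the critical value alpha_c."""
    return (alpha_c - a) / b

# Hypothetical subcritical measurements: alpha (1/s) vs. loading (kg),
# with an assumed critical decay constant alpha_c = -100 1/s.
a, b = fit_alpha_vs_loading([40.0, 42.0, 44.0, 46.0],
                            [-300.0, -200.0, -100.0, 0.0])
m_c = critical_mass(a, b, alpha_c=-100.0)
```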
Energy Technology Data Exchange (ETDEWEB)
Fracassi, G.; Grattieri, W.; Insinga, F.; Malafarina, L.; Mazzoni, M.
1991-12-31
This paper presents a procedure developed by ENEL (Italian National Electricity Board) for the automatic prediction of territorial loads in its national distribution system. This procedure is based on an extrapolation method incorporating annual power consumption historical series, disaggregated by consuming sector, divided by voltage level, and further subdivided by zone. An analysis is made of the method used to transform the annual power consumption figures into suitable input data for distribution network planning studies. Attention is also focussed on the method for assigning determined loads to each particular network node. The paper indicates how forecasted power consumption figures are substituted by actual figures as they become available, so as to improve prediction accuracy. A discussion is made of the 'final use' technique for the planning of urban network expansions. Data representing the evolution of power consumption in major Italian cities are used to give an indication of national trends.
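The extrapolation-with-substitution idea described above can be sketched as a per-sector trend forecast aggregated by zone, where actual figures replace forecasts when available. This is a toy illustration under assumed linear trends; the sector names and series are hypothetical and ENEL's procedure is considerably more detailed.

```python
def forecast_next(series):
    """One-step-ahead linear-trend extrapolation of an annual series (GWh)."""
    n = len(series)
    xs = range(n)
    mx = (n - 1) / 2.0
    my = sum(series) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, series))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a + b * n  # value predicted for the next year

def zone_forecast(sector_series, actuals=None):
    """Sum sector forecasts for a zone, preferring actual figures when known."""
    actuals = actuals or {}
    return sum(actuals.get(name, forecast_next(hist))
               for name, hist in sector_series.items())

# Hypothetical zone with two consuming sectors (annual GWh histories).
total = zone_forecast({"residential": [10.0, 11.0, 12.0],
                       "industrial": [20.0, 22.0, 24.0]})
```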
Primary standardization of activity using the coincidence method based on analogue instrumentation
International Nuclear Information System (INIS)
Widely implemented at national metrology institutes (NMIs), the coincidence method is a technique to assay a wide variety of radionuclides which decay through two or more types of radiation. Through a survey of the literature, this paper seeks to describe the main aspects of one of the most powerful direct methods available in radionuclide metrology. The basics of coincidence counting and the efficiency extrapolation method are covered. The problem of non-linearities in the extrapolation curve is also considered. The main characteristics of variants to the conventional coincidence instrumentation are presented. (author)
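The efficiency extrapolation method mentioned above can be illustrated for 4πβ-γ coincidence counting: for each detector setting, the coincidence estimate Nβ·Nγ/Nc is plotted against the inefficiency parameter (1 − ε)/ε, with ε = Nc/Nγ, and a fitted line is read off at zero inefficiency. A minimal sketch; the count rates below are hypothetical and background, dead-time and accidental-coincidence corrections are omitted.

```python
def efficiency_extrapolation(n_beta, n_gamma, n_coinc):
    """Estimate source activity by beta-gamma coincidence efficiency
    extrapolation.

    For each measurement, the beta efficiency is eps = Nc/Ngamma and the
    coincidence estimate is Nb*Ng/Nc; a straight line fitted to these
    estimates against (1 - eps)/eps is evaluated at zero inefficiency.
    """
    xs = [(g / c) - 1.0 for g, c in zip(n_gamma, n_coinc)]    # (1 - eps)/eps
    ys = [b * g / c for b, g, c in zip(n_beta, n_gamma, n_coinc)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx  # intercept: activity at 100% beta efficiency

# Hypothetical count rates at three discrimination settings (counts/s).
n0 = efficiency_extrapolation([900.0, 800.0, 700.0],
                              [500.0, 500.0, 500.0],
                              [450.0, 400.0, 350.0])
```

When the extrapolation curve is non-linear, as the survey notes, a higher-order fit replaces the straight line.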
Escobedo, Fernando A.
2014-03-01
In this work, a variant of the Gibbs-Duhem integration (GDI) method is proposed to trace phase coexistence lines that combines some of the advantages of the original GDI methods such as robustness in handling large system sizes, with the ability of histogram-based methods (but without using histograms) to estimate free-energies and hence avoid the need of on-the-fly corrector schemes. This is done by fitting to an appropriate polynomial function not the coexistence curve itself (as in GDI schemes) but the underlying free-energy function of each phase. The availability of a free-energy model allows the post-processing of the simulated data to obtain improved estimates of the coexistence line. The proposed method is used to elucidate the phase behavior for two non-trivial hard-core mixtures: a binary blend of spheres and cubes and a system of size-polydisperse cubes. The relative size of the spheres and cubes in the first mixture is chosen such that the resulting eutectic pressure-composition phase diagram is nearly symmetric in that the maximum solubility of cubes in the sphere-rich solid (˜20%) is comparable to the maximum solubility of spheres in the cube-rich solid. In the polydisperse cube system, the solid-liquid coexistence line is mapped out for an imposed Gaussian activity distribution, which produces near-Gaussian particle-size distributions in each phase. A terminal polydispersity of 11.3% is found, beyond which the cubic solid phase would not be stable, and near which significant size fractionation between the solid and isotropic phases is predicted.
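The core idea above, fitting the underlying free-energy function of each phase and post-processing for the coexistence point, can be sketched as follows. A first-order fit and made-up chemical-potential data are used purely for illustration; the method described above uses an appropriate higher-order polynomial.

```python
def linfit(xs, ys):
    """Least-squares line y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def coexistence_pressure(p1, mu1, p2, mu2):
    """Pressure at which the fitted chemical-potential curves of two
    phases cross, i.e. the estimated coexistence point."""
    a1, b1 = linfit(p1, mu1)
    a2, b2 = linfit(p2, mu2)
    return (a2 - a1) / (b1 - b2)

# Hypothetical simulated chemical potentials of two phases vs. pressure.
p_star = coexistence_pressure([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0],
                              [0.0, 1.0, 2.0, 3.0], [1.0, 1.5, 2.0, 2.5])
```

Having an explicit free-energy model is what allows re-estimating the coexistence line from the simulated data without on-the-fly corrector steps.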
On boundary conditions in lattice Boltzmann methods
International Nuclear Information System (INIS)
A lattice Boltzmann boundary condition for simulation of fluid flow using simple extrapolation is proposed. Numerical simulations, including two-dimensional Poiseuille flow, unsteady Couette flow, lid-driven square cavity flow, and flow over a column of cylinders for a range of Reynolds numbers, are carried out, showing that this scheme is of second order accuracy in space discretization. Applications of the method to other boundary conditions, including pressure condition and flux condition are discussed. copyright 1996 American Institute of Physics
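The "simple extrapolation" idea above can be caricatured in one line: an unknown value at a wall node is filled by linear extrapolation from the two nearest interior nodes, which is second-order accurate in the lattice spacing. This is only a sketch; a real LBM implementation applies the extrapolation to the unknown discrete distribution functions per lattice direction.

```python
def extrapolate_boundary(f):
    """Fill an unknown boundary value by linear extrapolation.

    f holds values along a lattice line with f[0] at the wall node unknown
    (passed as None); the scheme sets f[0] = 2*f[1] - f[2], exact for
    linearly varying fields and second-order accurate otherwise.
    """
    f = list(f)
    f[0] = 2.0 * f[1] - f[2]
    return f

# Linear field: extrapolation reproduces the exact wall value.
filled = extrapolate_boundary([None, 0.2, 0.3])
```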
Detectors for LEP: methods and techniques
International Nuclear Information System (INIS)
This note surveys detection methods and techniques of relevance to the LEP physics programme. The basic principles of detector physics are sketched, as recent improvements in understanding point towards improvements, and also limitations, in performance. The development and present status of large detector systems are presented and permit some conservative extrapolations. State-of-the-art techniques and technologies are presented and their potential use in the LEP physics programme assessed. (Auth.)
Shinada, Hiroyuki; Midoh, Yoshihiro; Shimakura, Tomokazu; Nakamae, Koji
The magnetic field of a recording head is measured by projection electron-beam tomography with a resolution of about ten nanometers, and the magnetic field closer to the sample surface than the measurement plane is estimated by numerical calculation. The field at 20 nm calculated from the field measured at 50 nm adequately agrees with the directly measured field at 20 nm. The combination of projection electron-beam tomography and this calculation method makes it possible to determine the magnetic field close to the head (air-bearing) surface.
Gangodagamage, Chandana; Rowland, Joel C.; Hubbard, Susan S.; Brumby, Steven P.; Liljedahl, Anna K.; Wainwright, Haruko; Wilson, Cathy J.; Altmann, Garrett L.; Dafflon, Baptiste; Peterson, John; Ulrich, Craig; Tweedie, Craig E.; Wullschleger, Stan D.
2014-08-01
Landscape attributes that vary with microtopography, such as active layer thickness (ALT), are labor intensive and difficult to document effectively through in situ methods at kilometer spatial extents, thus rendering remotely sensed methods desirable. Spatially explicit estimates of ALT can provide critically needed data for parameterization, initialization, and evaluation of Arctic terrestrial models. In this work, we demonstrate a new approach using high-resolution remotely sensed data for estimating centimeter-scale ALT in a 5 km2 area of ice-wedge polygon terrain in Barrow, Alaska. We use a simple regression-based, machine learning data-fusion algorithm that uses topographic and spectral metrics derived from multisensor data (LiDAR and WorldView-2) to estimate ALT (2 m spatial resolution) across the study area. Comparison of the ALT estimates with ground-based measurements indicates the accuracy (r2 = 0.76, RMSE ±4.4 cm) of the approach. While it is generally accepted that broad climatic variability associated with increasing air temperature will govern the regional averages of ALT, consistent with prior studies, our findings using high-resolution LiDAR and WorldView-2 data show that smaller-scale variability in ALT is controlled by local eco-hydro-geomorphic factors. This work demonstrates a path forward for mapping ALT at high spatial resolution and across sufficiently large regions for improved understanding and predictions of coupled dynamics among permafrost, hydrology, and land-surface processes from readily available remote sensing data.
Groothuis, Floris A; Heringa, Minne B; Nicol, Beate; Hermens, Joop L M; Blaauboer, Bas J; Kramer, Nynke I
2015-06-01
Challenges to improve toxicological risk assessment to meet the demands of the EU chemicals legislation, REACH, and the EU 7th Amendment of the Cosmetics Directive have accelerated the development of non-animal based methods. Unfortunately, uncertainties remain surrounding the power of alternative methods such as in vitro assays to predict in vivo dose-response relationships, which impedes their use in regulatory toxicology. One issue reviewed here is the lack of a well-defined dose metric for use in concentration-effect relationships obtained from in vitro cell assays. Traditionally, the nominal concentration has been used to define in vitro concentration-effect relationships. However, chemicals may differentially and non-specifically bind to medium constituents, well plate plastic and cells. They may also evaporate, degrade or be metabolized over the exposure period at different rates. Studies have shown that these processes may reduce the bioavailable and biologically effective dose of test chemicals in in vitro assays to levels far below their nominal concentration. This subsequently hampers the interpretation of in vitro data to predict and compare the true toxic potency of test chemicals. Therefore, this review discusses a number of dose metrics and their dependency on in vitro assay setup. Recommendations are given on when to consider alternative dose metrics instead of nominal concentrations, in order to reduce effect concentration variability between in vitro assays and between in vitro and in vivo assays in toxicology. PMID:23978460
The S(E) factor of 7Li(p,γ)8Be and consequences for S(E) extrapolation in 7Be(p,γ0)8B.
Zahnow, D.; Angulo, C.; Rolfs, C.; Schmidt, S.; Schulte, W. H.; Somorjai, E.
1995-03-01
Excitation functions and forward-backward anisotropies have been measured for the 7Li(p,γ)8Be capture reaction over the proton energy range Ep = 100 to 1500 keV, using a 4π summing crystal and Ge(Li) detectors, respectively. The data show at all energies the presence of E1 and M1 capture amplitudes arising from the direct capture (DC) process and the ER = 441 and 1030 keV resonances, respectively. Due to the observed DC process, the present data increase significantly the reaction rates compared to the values given in the compilation. The data and their analyses remove the recent criticism of DC model calculations, which had implied a significant reduction in the extrapolated S(E) factor for 7Be(p,γ)8B and thus in the predicted flux of high-energy solar neutrinos.
Energy Technology Data Exchange (ETDEWEB)
Montero Prieto, M.; Vidania Munoz, R. de
1994-07-01
In this work we analyzed different approaches assayed in order to describe numerically the systemic behaviour of beryllium. The experimental results used in this work were previously obtained by Furchner et al. (1973) using Sprague-Dawley rats and other animal species. Furchner's work includes the model obtained for whole-body retention in rats, but not for each target organ. Here we present the results obtained by modeling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with others previously published. (Author) 12 refs.
International Nuclear Information System (INIS)
This report addresses questions that arose after having completed a detailed study of a simulant-material experimental investigation of flow dynamics in the Upper Core Structures during a Core Disruptive Accident of a Liquid-Metal Fast Breeder Reactor. The main findings of the experiments were about the reduction of work potential of the expanding fuel by the presence of the Upper Core Structures. This report describes how the experimental data can be extrapolated to prototypic conditions, which phenomena modelled in code predictions by SIMMER-II are different for simulant and prototypic transients, and how the experimental results compare to effects of prototypic phenomena which could not be modelled in the experiment. (orig.)
International Nuclear Information System (INIS)
Singh, N P; Wrenn, M E
1989-01-01
Concentrations and organ distribution patterns of alpha-emitting isotopes of U (238U and 234U), Th (232Th, 230Th, and 228Th), and Pu (239,240Pu) were determined for beagle dogs of our colony. The dogs were exposed to environmental levels of U and Th isotopes through ingestion (food and water) and inhalation to simulate environmental exposures of the general human population. The organ distribution patterns of these radionuclides in beagles are compared to patterns in humans to determine if it is appropriate to extrapolate organ content data from beagles to humans. The results indicated that approximately 80% of the U and Th accumulated in bone in both species. The organ content percentages of these radionuclides in soft tissues such as liver, kidney, etc. of both species were comparable. The human lung contained higher percentages of U and Th than the beagle lung, perhaps because the longer life span of humans resulted in a longer exposure time. If the U and Th content of dog lung is normalized to an exposure time of 58 y and 63 y, the median ages of the U and Th study populations, respectively, the lung content for both species is comparable. The organ content of 239,240Pu in humans and beagles differed slightly. In the beagle, the liver contained more than 60%, and the skeleton contained less than 40% of the Pu body content. In humans, the liver contained approximately 37%, and the skeleton contained approximately 58% of the body content. This difference may have been due to differences in the mode of intake of Pu in each species or to differences in the chemical form of Pu. In general, the results suggest that the beagle may be an appropriate experimental animal from which to extrapolate data to humans with reference to the percentage of U, Th, and Pu found in the organs. PMID:2606709
Determination of corrosion rate from electrode kinetic measurement
International Nuclear Information System (INIS)
Electrode kinetic measurements provide a valuable technique for determining the corrosion rates of metals. Processes controlled by activation polarization are described by an exponential type of relation involving three unknowns. The corrosion behaviour of a metal can be fully described if these three parameters, i.e. the corrosion current Ic and the two Tafel constants βa and βc, are known. Experimentally, the current density is measured as a function of the applied overpotential. The relationship is such that an accurate determination of the three parameters by an ordinary least-squares fit is generally not possible. Various attempts have been made to simplify the equation using approximations. These are reflected in well-known methods such as Tafel line extrapolation and linear polarization. In the three-point and four-point methods, selected data points are used to transform the original equation into a simpler relation, such as a quadratic equation. Recently some computer-based methods have been developed, such as BETACRUNCH, in which the method of averages is used. We have developed a new method to analyze experimental polarization data and determine the unknown parameters, formulating three independent relationships to solve for Ic, βa and βc. In addition to the original version, we have improved the method by using some additional numerical approaches. These methods are robust and give accurate determinations of the Tafel constants and the corrosion current density for a wide range of systems. Using the suggested scheme, it is also possible to avoid data corresponding to very low and very high overpotentials and yet obtain excellent results. These methods are less sensitive to experimental errors than other existing methods. (author)
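The Tafel line extrapolation mentioned above can be sketched in a few lines. In the anodic Tafel region i ≈ Icorr·10^(η/βa), so log10(i) versus η is linear: the slope gives 1/βa and the intercept at η = 0 gives log10(Icorr). The data below are synthetic; this is a minimal illustration of the simplified method the abstract contrasts with full three-parameter fitting, not the authors' new algorithm.

```python
import math

def tafel_extrapolate(etas, currents):
    """Least-squares line through (eta, log10 i); returns (Icorr, beta_a)."""
    ys = [math.log10(i) for i in currents]
    n = len(etas)
    sx, sy = sum(etas), sum(ys)
    sxx = sum(x * x for x in etas)
    sxy = sum(x * y for x, y in zip(etas, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return 10.0 ** intercept, 1.0 / slope

# Synthetic anodic-branch data: Icorr = 1e-5 A/cm^2, beta_a = 0.060 V/decade.
etas = [0.06, 0.08, 0.10, 0.12, 0.14]
data = [1e-5 * 10.0 ** (eta / 0.060) for eta in etas]
icorr, beta_a = tafel_extrapolate(etas, data)
```

In practice only points well inside the Tafel region (typically |η| > 50-100 mV) are used, which is exactly the data-selection issue the three-parameter methods aim to avoid.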
Scientific Electronic Library Online (English)
Jorge A, Calderón; Carmen P, Buitrago.
2007-12-01
Full Text Available Corrosion of lacquered tinplate cans in different solutions was assessed using electrochemical methods. Samples with and without lacquer coating were exposed to different solutions and their susceptibility to corrosion was evaluated using cyclic voltammetry, Tafel curves and electrochemical impedance spectroscopy. The possible formation of a passive layer on the container surface was evaluated according to the kind of hysteresis presented in the first cycle of the voltammetry measurements. Tafel plots showed how the behaviour of the tin layer can change from anodic to cathodic depending on the nature of the solution in contact with it, revealing the risk of localized corrosion. The effect of one additive in the solutions on the electrochemical performance of the containers was evaluated by electrochemical impedance. The impedance measurements showed a deleterious effect of the additive, and corrosion processes appeared more quickly in containers packed with solutions modified with the additive.
Energy Technology Data Exchange (ETDEWEB)
1985-01-01
Two groups of methods for studying microbiological and corrosive events have been followed in this study. They are based on the fact that metal corrosion, in particular of iron, is an electrochemical phenomenon even if biological processes can intervene. The first group of methods consisted of classical measurements of the following parameters: (1) corrosion potential; (2) polarization resistance; (3) slope of the anodic and cathodic Tafel lines; (4) velocity of the general corrosion; and (5) pitting index. The second group of methods is more original, and concerns potential measurements in ''biological electrical batteries.'' Such a battery consists of two half elements (electrodes immersed in an electrolyte) that are identical at the onset of the experiment except for the presence of sulfate-reducing bacteria in one cell, whereas the other one is sterile. The influence of temperature and bacterial population on the corrosion activity has been studied. The temperatures varied between 25 and 45 °C and the bacterial concentration between 0 and 20,000 bacteria/ml. Bacterial corrosion can already be detected a few days after the start of an experiment. This fast response indicates the potential of this method relative to classical microbiological methods. 29 figs.
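Parameters (2) and (3) above are typically combined through the Stern-Geary relation to obtain a corrosion rate. A minimal sketch, with assumed (not measured) Tafel slopes and polarization resistance:

```python
# Stern-Geary relation: icorr = B / Rp, with
# B = beta_a * beta_c / (2.303 * (beta_a + beta_c)).
def stern_geary_icorr(rp_ohm_cm2, beta_a, beta_c):
    """Corrosion current density (A/cm^2) from polarization resistance."""
    b = beta_a * beta_c / (2.303 * (beta_a + beta_c))
    return b / rp_ohm_cm2

# Example with assumed Tafel slopes of 0.12 V/decade each and Rp = 5 kOhm·cm^2:
icorr = stern_geary_icorr(rp_ohm_cm2=5000.0, beta_a=0.12, beta_c=0.12)
```

This is why the polarization resistance alone is only a relative indicator: converting it to an absolute corrosion rate requires the Tafel slopes from measurement (3).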
Directory of Open Access Journals (Sweden)
M. J. Kosch
Full Text Available Ionospheric conductivity is not very easily measured directly. Incoherent scatter radars perhaps offer the best method but can only measure at one point in the sky at any one time and are limited in their time resolution. Statistical models of average conductivity are available but these may not be applied to individual case studies such as substorms. There are many instances where a real-time estimate of ionospheric conductivity over a large field-of-view is highly desirable at a high temporal and spatial resolution. We show that it is possible to make a reasonable estimate of the nocturnal height-integrated Pedersen conductivity, or conductance, with a single all-sky TV camera operating at 557.7 nm. This is not so in the case of the Hall conductance, where at least two auroral wavelengths should be imaged in order to estimate additionally the energy of the precipitating particles.
Key words. Atmospheric composition and structure (airglow and aurora) · Magnetospheric physics (auroral phenomena; instruments and techniques)
Directory of Open Access Journals (Sweden)
Tynan Anna
2012-09-01
Full Text Available Abstract Background Male circumcision (MC has been shown to reduce the risk of HIV acquisition among heterosexual men, with WHO recommending MC as an essential component of comprehensive HIV prevention programs in high prevalence settings since 2007. While Papua New Guinea (PNG has a current prevalence of only 1%, the high rates of sexually transmissible diseases and the extensive, but unregulated, practice of penile cutting in PNG have led the National Department of Health (NDoH to consider introducing a MC program. Given public interest in circumcision even without active promotion by the NDoH, examining the potential health systems implications for MC without raising unrealistic expectations presents a number of methodological issues. In this study we examined health systems lessons learned from a national no-scalpel vasectomy (NSV program, and their implications for a future MC program in PNG. Methods Fourteen in-depth interviews were conducted with frontline health workers and key government officials involved in NSV programs in PNG over a 3-week period in February and March 2011. Documentary, organizational and policy analysis of HIV and vasectomy services was conducted and triangulated with the interviews. All interviews were digitally recorded and later transcribed. The WHO six building blocks of a health system were applied as an analytical framework, and further thematic analysis was conducted on the data with assistance from the analysis software MAXQDA. Results Obstacles in funding pathways, inconsistent support by government departments, difficulties with staff retention and erratic delivery of training programs have resulted in mixed success of the national NSV program. Conclusions In an already vulnerable health system, significant investment in training, resources and negotiation of clinical space will be required for an effective MC program.
Focused leadership and open communication between provincial and national government, NGOs and community is necessary to assist in service sustainability. Ensuring clear policy and guidance across the entire sexual and reproductive health sector will provide opportunities to strengthen key areas of the health system.
Scientific Electronic Library Online (English)
Joel, Negin; Robert G, Cumming.
2010-11-01
Full Text Available OBJECTIVE: To quantify the number of cases and prevalence of human immunodeficiency virus (HIV) infection among older adults in sub-Saharan Africa. METHODS: We reviewed data from Demographic and Health Surveys (DHS). Although in these surveys all female respondents are under 50 years of age, 18 of the surveys contained data on HIV infection among men aged > 50 years. To estimate the percentage of older adults (i.e. people > 50 years of age) who were positive for HIV (HIV+), we extrapolated from data from the Joint United Nations Programme on HIV/AIDS on the estimated number of people living with HIV and on HIV infection prevalence among adults aged 15-49 years. FINDINGS: In 2007, approximately 3 million people aged > 50 years were living with HIV in sub-Saharan Africa. The prevalence of HIV infection in this group was 4.0%, compared with 5.0% among those aged 15-49 years. Of the approximately 21 million people in sub-Saharan Africa aged > 15 years that were HIV+, 14.3% were > 50 years old. CONCLUSION: To better reflect the longer survival of people living with HIV and the ageing of the HIV+ population, indicators of the prevalence of HIV infection should be expanded to include people > 49 years of age. Little is known about comorbidity and sexual behaviour among HIV+ older adults or about the biological and cultural factors that increase the risk of transmission. HIV services need to be better targeted to respond to the growing needs of older adults living with HIV.
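The percentage in the FINDINGS can be checked directly from the rounded figures quoted in the abstract (roughly 3 million HIV+ people aged over 50, out of roughly 21 million HIV+ people aged over 15):

```python
# Share of HIV+ adults aged > 50 among all HIV+ adults (> 15 years),
# using the rounded counts given in the abstract.
older = 3.0e6
all_adults = 21.0e6
share_older = round(older / all_adults * 100, 1)   # percentage
```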
Greinert, J.; Veloso, M.; Mienert, J.; Sommer, S.; Bussmann, I.; Haren, H.
2012-04-01
Increased (5-100 nM) and sometimes strongly increased (> 100 nM) methane concentrations in the water column, at the sea surface and even in the atmosphere (8 ppm) have been reported from Arctic areas. Some increases are clearly related to localized methane seep sites, others show a strong link to river runoff or to a widely spread (diffuse) methane release from degrading organic matter possibly linked to thawing permafrost. An important question in the marine science community is whether the warming of the Arctic is already accelerating methane fluxes from the seabed into the water column and whether we are experiencing a significant flux into the atmosphere. Marine methane fluxes from localized seep sites have been studied for several decades already, and the general biogeochemical processes and transport mechanisms have been identified (e.g. AOM, carbonate precipitation, bubble release, sea-atmosphere fluxes) and are fairly well understood. But we still know very little about the temporal variability of methane release, and the link to thawing offshore permafrost remains largely unresearched. Two areas, the Eastern Siberian Shelf and W-Spitzbergen, have been targeted by repeated research cruises to gain more knowledge about this topic. Here, we present work from W-Spitzbergen carried out from 2009 to 2011. Since the discovery of methane seepage offshore Svalbard in 2008 (Westbrook et al., 2008), there has been an international effort to study this area by geophysical, oceanographic, visual and geochemical methods. Repeated hydroacoustic surveys with singlebeam and multibeam systems proved that bubble release in seep areas, at the upper gas hydrate boundary and the shelf edge, has been continuous over the three-year period. However, specific bubble-releasing vents do show intermittent activity with episodic or cyclic release.
In addition to this inconstant release, changing currents and internal waves physically influence the methane distribution in the water column, while biological processes reduce the methane concentration through aerobic methane oxidation. Data show that tide-related currents and regional currents off W-Spitzbergen influence methane concentrations and distribution patterns more significantly than previously assumed. Further, we think that internal waves complicate the distribution pattern and thus the possible transport of methane into the atmosphere. Microbial activity oxidizing methane has been detected, but rates are not very high.
Energy Technology Data Exchange (ETDEWEB)
Alvarez R, J.T.; Morales P, R
1992-06-15
The absorbed dose to soft-tissue equivalent imparted by ophthalmic applicators (90Sr/90Y, 1850 MBq) was determined using a variable-electrode extrapolation chamber. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimate obtained with a second-degree polynomial regression model; at the same time, the standard error improves by up to 50% for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be inferred that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
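The linear-versus-quadratic point can be illustrated generically: when the extrapolation curve has curvature, a straight-line fit biases the slope at zero electrode separation, while a quadratic fit recovers it. The pure-Python least-squares fit and the curved dataset below are hypothetical illustrations, not the authors' chamber data.

```python
def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via normal equations (Gaussian elimination).
    Returns coefficients [c0, c1, ...] of c0 + c1*x + ..."""
    n = degree + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coeffs[r] = (b[r] - sum(a[r][c] * coeffs[c]
                                for c in range(r + 1, n))) / a[r][r]
    return coeffs

# Hypothetical chamber readings with negative curvature: I(d) = 2.0*d - 0.3*d^2.
depths = [0.5, 1.0, 1.5, 2.0, 2.5]
currents = [2.0 * d - 0.3 * d * d for d in depths]
slope_linear = polyfit(depths, currents, 1)[1]     # underestimates slope at d -> 0
slope_quadratic = polyfit(depths, currents, 2)[1]  # recovers the true slope, 2.0
```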
Energy Technology Data Exchange (ETDEWEB)
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-07
We report the variation of the binding energy of the formic acid dimer at the CCSD(T)/complete basis set (CBS) limit and examine the validity of the BSSE correction, previously challenged by Kalescky, Kraka and Cremer [J. Chem. Phys. 140 (2014) 084315]. Our best estimate of D0 = 14.3 ± 0.1 kcal/mol is in excellent agreement with the experimental value of 14.22 ± 0.12 kcal/mol. The BSSE correction is indeed valid for this system, since it exhibits the expected behavior of decreasing with increasing basis set size and its inclusion produces the same limit (within 0.1 kcal/mol) as the one obtained from extrapolation of the uncorrected binding energy. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. A portion of this research was performed using the Molecular Science Computing Facility (MSCF) in EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at PNNL.
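A common form of the CBS extrapolation referred to here is the two-point X⁻³ formula. The sketch below uses made-up energies chosen to satisfy E(X) = E_CBS + A/X³ exactly; it illustrates the algebra only, not the paper's actual data or extrapolation scheme.

```python
# Two-point CBS extrapolation under the assumption E(X) = E_CBS + A / X**3,
# where X is the basis-set cardinal number (e.g. T=3, Q=4).
def cbs_two_point(e_x, x, e_y, y):
    """Solve E(X) = E_CBS + A/X^3 at two cardinal numbers for E_CBS."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Synthetic series with E_CBS = -10.0 and A = 0.8 (illustration values):
e_t = -10.0 + 0.8 / 3 ** 3   # "triple-zeta"
e_q = -10.0 + 0.8 / 4 ** 3   # "quadruple-zeta"
e_cbs = cbs_two_point(e_t, 3, e_q, 4)
```

The abstract's consistency check amounts to doing this extrapolation twice, with and without the BSSE correction, and confirming both converge to the same limit.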
Production and characterization of TI/PbO2 electrodes by a thermal-electrochemical method
Directory of Open Access Journals (Sweden)
Laurindo Edison A.
2000-01-01
Full Text Available Looking for electrodes with a high overpotential for the oxygen evolution reaction (OER), useful for the oxidation of organic pollutants, Ti/PbO2 electrodes were prepared by a thermal-electrochemical method and their performance was compared with that of electrodeposited electrodes. The open-circuit potentials for these electrodes in 0.5 mol L-1 H2SO4 presented quite stable, similar values. X-ray diffraction analyses showed the thermal-electrochemical oxide to be a mixture of ort-PbO, tetr-PbO and ort-PbO2. On the other hand, the electrodes obtained by electrodeposition were in the tetr-PbO2 form. Analyses by scanning electron microscopy showed that the basic morphology of the thermal-electrochemical PbO2 is determined in the thermal step, being quite distinct from that of the electrodeposited electrodes. Polarization curves in 0.5 mol L-1 H2SO4 showed that in the case of the thermal-electrochemical PbO2 electrodes the OER was shifted to more positive potentials. However, the quite high values of the Tafel slopes indicate that passivating films were possibly formed on the Ti substrates, which could eventually explain the somewhat low current values for the OER.
Spectral method and its high performance implementation
Wu, Z.
2014-01-01
We have presented a new method that is dispersion-free and unconditionally stable, so the computational cost and memory requirement are greatly reduced. Based on this feature, we have implemented the algorithm on GPU with CUDA for anisotropic reverse time migration, with almost no communication between CPU and GPU. For prestack wavefield extrapolation, all the shots can be combined for migration; however, this requires solving a larger problem with more memory than fits on one GPU card. In this situation, we implement it with a domain decomposition method and MPI for distributed-memory systems.
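The dispersion-free property of a spectral step can be sketched in one dimension: each Fourier mode of the wavefield is advanced analytically by exp(-iωΔt) with ω = c|k|, so no grid-dispersion error accumulates regardless of the step size. The naive O(n²) DFTs below keep the toy dependency-free; this is an illustration of the general idea, not the authors' RTM code.

```python
import cmath
import math

def dft(u, sign):
    """Naive discrete Fourier transform; sign=-1 forward, +1 inverse (unscaled)."""
    n = len(u)
    return [sum(u[j] * cmath.exp(sign * 2j * math.pi * k * j / n)
                for j in range(n)) for k in range(n)]

def advance(u, c, dx, dt):
    """Advance samples u of a 1-D wave by dt via exact per-mode phase shifts."""
    n = len(u)
    spec = dft(u, -1)
    for k in range(n):
        kk = k if k <= n // 2 else k - n          # signed wavenumber index
        omega = c * abs(2.0 * math.pi * kk / (n * dx))
        spec[k] *= cmath.exp(-1j * omega * dt)    # analytic time advance
    return [v.real / n for v in dft(spec, +1)]

# A single cosine mode (k = 1 on 8 points) advanced by one full period
# returns to its initial state, with no numerical dispersion.
u = [math.cos(2 * math.pi * j / 8) for j in range(8)]
u_adv = advance(u, c=1.0, dx=1.0, dt=8.0)
```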
Report on the uncertainty methods study
International Nuclear Information System (INIS)
The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time
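The sampling-based methods compared here typically justify their number of code runs with an order-statistics (Wilks) argument. A minimal sketch of that arithmetic, assuming the first-order, one-sided form:

```python
# Smallest number of code runs n such that the largest observed value bounds
# the beta-quantile of the output with confidence gamma: 1 - beta**n >= gamma.
def wilks_runs(beta=0.95, gamma=0.95):
    n = 1
    while 1.0 - beta ** n < gamma:
        n += 1
    return n

# The familiar 95%/95% criterion gives the often-quoted 59 runs.
```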
Tirone, M.
2015-09-01
The equation that relates pressure, temperature and volume and is described by parameters that are function of temperature at 1 bar (hereafter called thermal equation of state, TEOS), has practical computational advantages for petrological and geophysical applications over the equation that considers explicitly a thermal pressure. Some considerations that justify the use of the TEOS are discussed here. (1) The assumption that the parameters are function of temperature is perhaps better understood by looking at the Helmholtz energy function that is implicitly assumed in the case of an equation of state (EOS) derived from interatomic potentials. A test case shows that the Helmholtz energy related to the Vinet EOS and the Helmholtz energy from the Debye model are very similar. (2) The TEOS should be able to reproduce thermal expansion (α), isothermal bulk modulus (KT) and heat capacity (Cp and Cv) at high P, T computed from a lattice vibration model. The generalized Rydberg EOS applied to MgO is able to fit reasonably well the properties computed using Jacobs' lattice dynamics formulation (T range = 300-3000 K, P range = 1 bar-1500 kbar). (3) It is shown that in the case of MgO, the TEOS can be used quite successfully for extrapolation that goes beyond the P, T range of the measured/given data. Some physical constraints need to be applied to the derivation of the volume, bulk modulus and derivative of the bulk modulus with pressure at 1 bar. (4) The pressure dependence of the reference parameters in the TEOS that was inferred several decades ago is only apparent. A numerical computation demonstrates that the combined pressure effect in the terms defining the partial derivative of the reference V and K (and K') over temperature cancels out, making the reference parameters independent of pressure at any condition.
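The Vinet EOS mentioned in point (1) has a compact isothermal form. A minimal sketch with x = (V/V0)^(1/3); the MgO-like parameter values are rough illustration numbers, not fitted values from the paper.

```python
import math

# Vinet isothermal equation of state:
#   P = 3*K0*(1 - x)/x**2 * exp(1.5*(K0p - 1)*(1 - x)),  x = (V/V0)**(1/3)
def vinet_pressure(v, v0=11.25, k0=1600.0, k0p=4.1):
    """Pressure in kbar for volume v (same units as v0); K0 in kbar (assumed)."""
    x = (v / v0) ** (1.0 / 3.0)
    return 3.0 * k0 * (1.0 - x) / x ** 2 * math.exp(1.5 * (k0p - 1.0) * (1.0 - x))

# At the reference volume the pressure vanishes; compression gives P > 0.
```

In a TEOS the reference parameters V0, K0 and K0' would additionally be made functions of temperature at 1 bar, which is the modelling choice the abstract defends.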
Lang, M; Vain, A; Bunce, R G H; Jongman, R H G; Raet, J; Sepp, K; Kuusemets, V; Kikas, T; Liba, N
2015-03-01
Habitat surveillance and subsequent monitoring at a national level is usually carried out by recording data from in situ sample sites located according to predefined strata. This paper describes the application of remote sensing to the extension of such field data recorded in 1-km squares to adjacent squares, in order to increase sample number without further field visits. Habitats were mapped in eight central squares in northeast Estonia in 2010 using a standardized recording procedure. Around one of the squares, a special study site was established which consisted of the central square and eight surrounding squares. A Landsat-7 Enhanced Thematic Mapper Plus (ETM+) image was used for correlation with in situ data. An airborne light detection and ranging (lidar) vegetation height map was also included in the classification. A series of tests were carried out by including the lidar data and contrasting analytical techniques, which are described in detail in the paper. Training accuracy in the central square varied from 75 to 100 %. In the extrapolation procedure to the surrounding squares, accuracy varied from 53.1 to 63.1 %, which improved by 10 % with the inclusion of lidar data. The reasons for this relatively low classification accuracy were mainly inherent variability in the spectral signatures of habitats but also differences between the dates of imagery acquisition and field sampling. Improvements could therefore be made by better synchronization of the field survey and image acquisition as well as by dividing general habitat categories (GHCs) into units which are more likely to have similar spectral signatures. However, the increase in the number of sample kilometre squares compensates for the loss of accuracy in the measurements of individual squares. The methodology can be applied in other studies as the procedures used are readily available. PMID:25648761
International Nuclear Information System (INIS)
The effect of a sub-sterilizing dose of gamma radiation (125 Gy), alone or in combination with different concentrations of a leaf extract of tafla (Nerium oleander), on the histology and histochemistry of the larval male reproductive system was studied. The treatment caused histopathological changes in the testes, including necrosis of spermatocytes, retardation of sperm maturation, bursting of sperm bundles, and enlargement of the vacuolated areas resulting from depletion of spermatogonia. Histochemical studies showed that protein contents and RNA were increased while DNA content was decreased in the male gonads.
Tafel, Külliki, 1979-
2008-01-01
Within the Nordic Innovation Centre project "Nordic Model for Creative Industries Development Center", Tartu, Turku and Bergen each drew up a document for developing their city's creative industries. The completed documents are compared.
Tafel, Külliki
2006-01-01
Corporate governance in post-socialist countries: theoretical dilemmas, specific features and research opportunities. Diagrams: Internal and external relations of corporate governance; The changing context of corporate governance.
Automatic numerical integration methods for Feynman integrals through 3-loop
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.
2015-05-01
We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the PARINT package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. PARINT is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
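The iterated-integration-plus-extrapolation idea can be sketched with the classic textbook case: Romberg integration applies Richardson extrapolation to trapezoid-rule estimates, eliminating the h², h⁴, ... error terms in turn. This is a generic illustration of the acceleration principle, not the QUADPACK/PARINT machinery itself.

```python
import math

def romberg(f, a, b, levels=6):
    """Romberg integration: trapezoid estimates refined by Richardson
    extrapolation. Returns the top extrapolated value."""
    h, t = b - a, [(f(a) + f(b)) * (b - a) / 2.0]
    for k in range(1, levels):                 # halve the step, reuse old points
        h /= 2.0
        mids = sum(f(a + (2 * i - 1) * h) for i in range(1, 2 ** (k - 1) + 1))
        t.append(t[-1] / 2.0 + h * mids)
    for m in range(1, levels):                 # eliminate h^(2m) error terms
        t = [(4 ** m * t[i + 1] - t[i]) / (4 ** m - 1) for i in range(len(t) - 1)]
    return t[0]

approx = romberg(math.sin, 0.0, math.pi)       # exact integral is 2
```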
Quick Method for Determining Plant Available Water
International Nuclear Information System (INIS)
For the first few days after heavy rain or irrigation, water drains from the soil profile until its water content approaches a relatively stable value called the drained upper limit or field capacity. When plants have extracted all of the water available to them, the root zone water content approaches a lower limit of available water, or permanent wilting water content. The water held by the soil between these two limits is called plant available water. These two limits are often associated with water content values at specific soil water potentials (a measure of the pressure at which soil water is extracted). Field capacity is often taken as the water content of a soil at -33 kPa water potential. Permanent wilt is taken as the water content at -1500 kPa. The methods typically used to determine plant available water are slow and inaccurate. We present here (1) a method for measuring field capacity using a tensiometer and an extrapolation technique, and (2) a method for measuring permanent wilting water content with a dew point potentiameter and an extrapolation method, both of which are much faster and more accurate than traditional methods. (author)
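A minimal sketch of the extrapolation idea, not the authors' exact technique: if the draining water content approaches field capacity exponentially, three equally spaced readings determine the limit in closed form (Aitken-style). All numbers below are hypothetical.

```python
def extrapolate_asymptote(y0, y1, y2):
    """Estimate the limit of an exponentially decaying series from three
    equally spaced readings y0, y1, y2.

    Assumes y(t) = L + A * exp(-k t), so successive deviations from the
    limit L form a geometric sequence; L then has a closed form.
    """
    denom = y0 + y2 - 2.0 * y1
    if denom == 0:
        raise ValueError("readings do not show an exponential approach")
    return (y0 * y2 - y1 * y1) / denom

# Hypothetical volumetric water contents read 1 day apart while a wetted
# profile drains: theta(t) = 0.30 + 0.10 * exp(-0.8 t)
theta = [0.30 + 0.10 * 2.718281828 ** (-0.8 * t) for t in (0, 1, 2)]
print(extrapolate_asymptote(*theta))  # ~0.30, the drained upper limit
```

For exactly exponential data the formula recovers the asymptote without waiting for drainage to finish, which is the appeal of extrapolation over simply measuring after several days.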
Application of the Normalized Full Gradient (NFG) Method to Resistivity Data
AYDIN, ALİ
2010-01-01
This paper proposes the application of the normalized full gradient (NFG) method to resistivity studies and illustrates that the method can greatly reduce the time and work load needed in detecting buried bodies using resistivity measurement. The NFG method calculates resistivity values at desired electrode offsets by extrapolation of a function of resistivity measurements (i.e. the gradient) to other depth levels using resistivity measurements done at one electrode offset only. The performan...
Population survey sampling methods in a rural African setting: measuring mortality
Byass Peter; Fottrell Edward
2008-01-01
Abstract Background Population-based sample surveys and sentinel surveillance methods are commonly used as substitutes for more widespread health and demographic monitoring and intervention studies in resource-poor settings. Such methods have been criticised as only being worthwhile if the results can be extrapolated to the surrounding 100-fold population. With an emphasis on measuring mortality, this study explores the extent to which choice of sampling method affects the representativeness ...
Gilmanov, Tagir G.; Tieszen, Larry L.; Wylie, Bruce K.; Flanagan, Larry B.; Frank, Albert B.; Haferkamp, Marshall R.; Meyers, Tilden P.; Morgan, Jack A.
2005-01-01
Aim: Extrapolation of tower CO2 fluxes will be greatly facilitated if robust relationships between flux components and remotely sensed factors are established. Long-term measurements at five Northern Great Plains locations were used to obtain relationships between CO2 fluxes and photosynthetically active radiation (Q), other on-site factors, and the Normalized Difference Vegetation Index (NDVI) from the SPOT VEGETATION data set. Location: CO2 flux data from the following stations and years were analysed: Lethbridge, Alberta 1998–2001; Fort Peck, MT 2000, 2002; Miles City, MT 2000–01; Mandan, ND 1999–2001; and Cheyenne, WY 1997–98. Results: Analyses based on light-response functions allowed partitioning of net CO2 flux (F) into gross primary productivity (Pg) and ecosystem respiration (Re). Weekly averages of daytime respiration, Rday, estimated from light responses were closely correlated with weekly averages of measured night-time respiration, Rnight (R2 0.64 to 0.95). Daytime respiration tended to be higher than night-time respiration, and regressions of Rday on Rnight for all sites were different from 1:1 relationships. Over 13 site-years, gross primary production varied from 459 to 2491 g CO2 m−2 year−1, ecosystem respiration from 996 to 1881 g CO2 m−2 year−1, and net ecosystem exchange from −537 (source) to +610 g CO2 m−2 year−1 (sink). Maximum daily ecological light-use efficiencies, εd,max = Pg/Q, were in the range 0.014 to 0.032 mol CO2 (mol incident quanta)−1. Main conclusions: Ten-day average Pg was significantly more highly correlated with NDVI than 10-day average daytime flux, Pd (R2 = 0.46 to 0.77 for Pg-NDVI and 0.05 to 0.58 for Pd-NDVI relationships). Ten-day average Re was also positively correlated with NDVI, with R2 values from 0.57 to 0.77.
Patterns of the relationships of Pg and Re with NDVI and other factors indicate possibilities for establishing multivariate functions allowing scaling-up local fluxes to larger areas using GIS data, temporal NDVI, and other factors.
International Nuclear Information System (INIS)
Highlights: • FSW demonstrated higher corrosion resistance than GTAW of 6061 Al alloy. • FSW and GTAW both demonstrated poorer corrosion behavior than the base metal. • FSW produced ∼1–2 μm equiaxed grains in the joint region and ∼150 μm in the base metal. • GTAW resulted in a semi-cast dendritic structure. • T6 heat treatment improved the corrosion resistance of both FSW and GTAW joints. -- Abstract: Wrought aluminum sheets with a thickness of 13 mm were square butt-welded by friction stir welding (FSW) and gas tungsten arc welding (GTAW). The corrosion behavior of the welding zone was probed by Tafel polarization curves. Optical metallography (OM) and scanning electron microscopy together with energy dispersive spectroscopy (SEM-EDS) were used to determine the morphology and semi-quantitative composition of the welded zone. FSW resulted in equiaxed grains of about 1–2 μm, while GTAW caused a dendritic structure in the welded region. Resistance to corrosion was greater for the FSW grains than for the GTAW structure. In both cases, susceptibility to corrosion attack was greater in the welded region than in the base metal. T6 heat treatment shifted the corrosion potential towards more positive values; this effect was stronger in the welded regions than in the base metal.
Comparative study among calibration methods of clinical applicators of beta radiation
International Nuclear Information System (INIS)
90Sr+90Y clinical applicators are instruments used in brachytherapy procedures and they have to be periodically calibrated, according to international standards and recommendations. In this work, four calibration methods of dermatological and ophthalmic applicators were studied, comparing the results with those given by the calibration certificates of the manufacturers. The methods included the use of the standard applicator of the Calibration Laboratory (LCI), calibrated by the National Institute of Standards and Technology; an Amersham applicator (LCI) as reference; a mini-extrapolation chamber developed at LCI as an absolute standard; and thermoluminescent dosimetry. The mini-extrapolation chamber and a PTW commercial extrapolation chamber were studied in relation to their performance through quality control tests of their response, as leakage current, repeatability and reproducibility. The distribution of the depth dose in water, that presents high importance in dosimetry of clinical applicators, was determined using the mini extrapolation chamber and the thermoluminescent dosimeters. The results obtained were considered satisfactory for the both cases, and comparable to the data of the IAEA (2002) standard. Furthermore, a dosimetry postal kit was developed for the calibration of clinical applicators using the thermoluminescent technique, to be sent to clinics and hospitals, without the need of the transport of the sources to IPEN for calibration. (author)
Mechanical Properties and Corrosion Behavior of Low Carbon Steel Weldments
Directory of Open Access Journals (Sweden)
Mohamed Mahdy
2013-01-01
Full Text Available This research involves studying the mechanical properties and corrosion behavior of low carbon steel (0.077 wt% C) before and after welding using arc, MIG and TIG welding. The mechanical properties tested include microhardness and tensile strength; the results indicate that the microhardness of TIG and MIG welds is higher than that of arc welds, while the tensile strength of arc welds is higher than that of TIG and MIG welds. The corrosion behavior of the low carbon steel weldments was assessed with a potentiostat at a scan rate of 3 mV s−1 in 3.5% NaCl to determine the polarization resistance, and the corrosion rate was calculated from the linear polarization data by the Tafel extrapolation method. The results indicate that TIG welding increases the corrosion current density and the anodic Tafel slope, while decreasing the polarization resistance compared with unwelded low carbon steel. Cyclic polarization curves were measured to assess the resistance of the specimens to pitting corrosion and to determine the forward and reverse potentials. The results show that the forward, reverse and pitting potentials shift in the active direction for the weldment samples compared with the unwelded sample.
Preparation of ultrafine tungsten wire via electrochemical method in an ionic liquid
International Nuclear Information System (INIS)
Highlights: • The method of electrochemical corrosion is used to prepare ultrafine tungsten wire less than 10 μm in diameter. • An ionic liquid was used as a non-aqueous electrolyte in the electrochemical corrosion experiments. • The anode polarization behavior differed from the usual situation. • The diameter of the tungsten wire was reduced uniformly to 8.5 μm under the optimized electric potential. - Abstract: Ultrafine tungsten wire less than 10 μm in diameter is often used as a wire array load in Inertial Confinement Fusion (ICF) physics experiments. In order to obtain a higher X-ray yield, both the initial radius and the quality of the metal wire are required to be high simultaneously. This paper studies an electrochemical method to corrode tungsten wires uniformly in an ionic liquid electrolyte containing 1 wt% sodium hydroxide. A three-electrode system, composed of a tungsten anode, a stainless steel cathode and a saturated calomel reference electrode, was used in the electrochemical experiments. Linear sweep voltammetry (LSV) and Tafel experiments were used to investigate the electrochemical behavior of tungsten wires in the ionic liquid and in aqueous solution. Scanning electron microscope (SEM) observations demonstrated the morphologies of uniformly corroded tungsten wire surfaces under different applied voltages. X-ray diffraction (XRD) was employed to track the evolution of the crystal structure before and after corrosion, revealing an obvious difference in peak intensities. Ultrafine tungsten wire with a uniform diameter of 8.5 μm was obtained under the optimized electric potential (2.5 V) applied at 30 °C.
The direct current method for measuring charged membrane conductance
International Nuclear Information System (INIS)
This paper deals with a method for measuring the electrical resistance of charged membranes. The method is based on applying a step change in direct current and analyzing the potential transient that follows the current step. The membrane electrical resistance was determined by extrapolating the potential differences measured after the current step to zero time. Experimental results obtained with commercial ion-exchange membranes were in good agreement with those computed from the Fick equation. The method developed gives more accurate values, with a lower standard deviation than traditional techniques, and allows the resistance of an asymmetrical membrane to be determined in both current directions. (orig.)
Large deviations and asymptotic methods in finance
Gatheral, Jim; Gulisashvili, Archil; Jacquier, Antoine; Teichmann, Josef
2015-01-01
Topics covered in this volume (large deviations, differential geometry, asymptotic expansions, central limit theorems) give a full picture of the current advances in the application of asymptotic methods in mathematical finance, and thereby provide rigorous solutions to important mathematical and financial issues, such as implied volatility asymptotics, local volatility extrapolation, systemic risk and volatility estimation. This volume gathers together ground-breaking results in this field by some of its leading experts. Over the past decade, asymptotic methods have played an increasingly important role in the study of the behaviour of (financial) models. These methods provide a useful alternative to numerical methods in settings where the latter may lose accuracy (in extremes such as small and large strikes, and small maturities), and lead to a clearer understanding of the behaviour of models, and of the influence of parameters on this behaviour. Graduate students, researchers and practitioners will find th...
Energy Technology Data Exchange (ETDEWEB)
Yarmukhamedov, R. [Institute of Nuclear Physics, Academy of Sciences of Uzbekistan, 100214 Tashkent (Uzbekistan)
2014-05-09
The basic methods for determining the asymptotic normalization coefficient for A+a→B of astrophysical interest are briefly presented. Results are presented from applying the specific asymptotic normalization coefficients derived within these methods to the extrapolation of astrophysical S factors to experimentally inaccessible energy regions (E ≤ 25 keV) for some specific radiative capture A(a,γ)B reactions of the pp-chain and the CNO cycle.
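Extrapolating measured values down to inaccessible low energies can be illustrated with plain polynomial extrapolation through synthetic (E, S) points; this is only a toy stand-in for the asymptotic-normalization-coefficient method described above, and the data are invented.

```python
def lagrange_extrapolate(points, e):
    """Evaluate the Lagrange interpolating polynomial through the given
    (E, S) points at energy e (which may lie outside the data range)."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (e - xj) / (xi - xj)
        total += term
    return total

# Hypothetical S-factor data (keV, keV·b) following S(E) = 20 + 0.01*E:
data = [(100.0, 21.0), (200.0, 22.0), (300.0, 23.0)]
print(lagrange_extrapolate(data, 25.0))  # 20.25 keV·b for this linear trend
</n```

Polynomial extrapolation is reliable only when the underlying energy dependence is smooth and well captured by the fitted form, which is exactly why physically motivated extrapolations (such as the ANC method above) are preferred at stellar energies.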
International Nuclear Information System (INIS)
A newly synthesized glycine derivative (termed GlyD), 2-(4-(dimethylamino)benzylamino)acetic acid hydrochloride, was used to inhibit the uniform and pitting corrosion of Al in 0.50 M KSCN solutions (pH 6.8) at 25 °C. For the uniform corrosion inhibition study, Tafel extrapolation, linear polarization resistance and impedance methods were used, complemented with SEM examinations. An independent method of chemical analysis, namely ICP-AES (inductively coupled plasma atomic emission spectrometry), was also used to test the validity of the corrosion rate measured by the Tafel extrapolation method. GlyD inhibited uniform corrosion even at low concentrations, reaching an inhibition efficiency of up to 97% at a concentration of 5 × 10−3 M. Results obtained from the different corrosion evaluation techniques were in good agreement. This newly synthesized glycine derivative was also used to control pit nucleation and growth on the pitted Al surface, based on cyclic polarization, potentiostatic and galvanostatic measurements. The pitting potential (Epit) and the repassivation potential (Erp) increased with the addition of GlyD; thus GlyD suppressed pit nucleation and propagation. Pit nucleation was found to take place after an incubation time (ti). The rate of pit nucleation and growth decreased with increasing inhibitor concentration. The morphology of pitting was also studied as a function of the applied anodic potential and solution temperature. Cross-sectional views of the pitted surface revealed the formation of large distorted hemispherical pits and narrow deep pits. GlyD was much better than Gly in controlling the uniform and pitting corrosion processes of Al in these solutions.
Energy Technology Data Exchange (ETDEWEB)
Tokarczyk, M., E-mail: mateusz.tokarczyk@fuw.edu.pl; Kowalski, G.; Kępa, H.; Grodecki, K.; Drabińska, A. [University of Warsaw, Institute of Experimental Physics, Faculty of Physics (Poland); Strupiński, W. [Institute of Electronic Materials Technology (Poland)
2013-12-15
X-ray diffraction, Raman spectroscopy and optical absorption estimates of the thickness of graphene multilayer stacks (number of graphene layers) are presented for three different growth techniques. The objective of this work was the comparison and reconciliation of the two already widely used methods for thickness estimates (Raman and absorption) with the calibration of the X-ray method as far as the Scherrer constant K is concerned, and the X-ray based Wagner-Aqua extrapolation method.
Energy Technology Data Exchange (ETDEWEB)
Castillo, Jhonny Antonio Benavente
2011-07-01
The metrological coherence among standard systems is a requirement for assuring the reliability of dosimetric quantity measurements in the ionizing radiation field. Scientific and technological improvements have occurred in beta radiation metrology with the installation of the new beta secondary standard BSS2 in Brazil and with the adoption of the internationally recommended beta reference radiations. The Dosimeter Calibration Laboratory of the Development Center for Nuclear Technology (LCD/CDTN), in Belo Horizonte, implemented the BSS2, and methodologies are investigated for characterizing the beta radiation fields by determining the field homogeneity and the accuracy and uncertainties of the absorbed dose in air measurements. In this work, a methodology for verifying the metrological coherence among beta radiation fields in standard systems was investigated; an extrapolation chamber and radiochromic films were used, and measurements were made in terms of absorbed dose in air. The reliability of both the extrapolation chamber and the radiochromic film was confirmed, and their calibrations were performed at the LCD/CDTN in {sup 90}Sr/{sup 90}Y, {sup 85}Kr and {sup 147}Pm beta radiation fields. The angular coefficients of the extrapolation curves were determined with the chamber; the field mapping and homogeneity were obtained from dose profiles and isodoses with the radiochromic films. A preliminary comparison between the LCD/CDTN and the Instrument Calibration Laboratory of the Nuclear and Energy Research Institute, Sao Paulo (LCI/IPEN) was carried out. The extrapolation chamber results, in terms of absorbed dose in air rates, showed differences between the two laboratories of up to −1% and 3% for the {sup 90}Sr/{sup 90}Y, {sup 85}Kr and {sup 147}Pm beta radiation fields, respectively. Results with the EBT radiochromic films for 0.1, 0.3 and 0.15 Gy absorbed dose in air, for the same beta radiation fields, showed differences of up to 3%, −9% and −53%.
The beta radiation field mappings with radiochromic films in both BSS2 showed that some of them were not geometrically aligned. (author)
Dissolution of chromium in sulfuric acid
Directory of Open Access Journals (Sweden)
J. P. POPIĆ
2002-11-01
Full Text Available By combining electrochemical corrosion rate measurements and spectrophotometric analysis of the electrolyte, it was shown that at room temperature chromium dissolves in deaerated 0.1 M Na2SO4 + H2SO4 (pH 1) solution as Cr(II) and Cr(III) ions in the ratio Cr(II) : Cr(III) ≈ 7 : 1. This process was stable over 4 h without any detectable change. The total corrosion rate of chromium calculated from the analytical data is about 12 times higher than that determined electrochemically by extrapolation of the cathodic Tafel line to the corrosion potential. This finding was confirmed by applying the weight-loss method for determining the corrosion rate. This enormous difference between the experimentally determined corrosion rates can be explained by the rather fast 'anomalous' dissolution process proposed by Kolotyrkin and coworkers (a chemical reaction of Cr with H2O molecules occurring simultaneously with the electrochemical corrosion process).
International Nuclear Information System (INIS)
The effect of small additions of Al on the electrochemical performance was investigated by the open circuit potential and Tafel extrapolation methods. The open circuit potential results show that as-cast Ca-containing Mg alloys with minor Al contents maintained highly negative potentials in the range of -1.68 to -1.63 VSCE, compared with both pure Mg (-1.60 VSCE) and a commercial high-potential Mg anode. The corrosion rates of the as-cast samples remain higher (30-17 mpy) than those of pure Mg (3 mpy) and the commercial high-potential Mg anode (14 mpy). Increasing the small Al content reduces the corrosion rate significantly. This proves that the performance of the Ca-containing Mg alloy is strongly influenced by the Al concentration. (author)
International Nuclear Information System (INIS)
PVD-based hard coatings have achieved remarkable improvements in the tribological and surface properties of coated tools and dies. PVD-based hard coatings have a wide range of industrial applications, especially in aerospace and automobile parts, where they encounter various chemical attacks; to improve industrial performance, these coatings must provide excellent resistance against corrosion, high temperature oxidation and chemical reaction. This paper focuses on the behaviour of PVD-based hard coatings in different corrosive environments such as H2SO4, HCl, NaCl, KCl and NaOH. Corrosion rates were calculated by the linear sweep voltammetry method, with Tafel extrapolation curves used to continuously monitor the corrosion rate. The results show that these coatings have excellent resistance against chemical attack. (author)
Interpolation methods and their use in radiation protection
International Nuclear Information System (INIS)
The presentation summarizes results of using various interpolation methods for deriving spatial data from point measurements. These methods were evaluated within the State Office for Nuclear Safety (SONS) Science and Research Project No. 2/2008, 'Methods and Measures to Limit Generation and Liquidation of Consequences of Radioactive Matter Misuse by Terrorists'. Several field tests, in which short-lived radioactive material was released by explosion, were carried out and the measured data were processed. The essential goal is to find the most realistic method for assessing radiation events. Within the research project, three methods were used: Multilevel B-Spline, Triangulation and Kriging, using the freely available SAGA GIS software. The best solution for this sort of radiation event appears to be the Multilevel B-Spline method: it is quick, produces good quality output data comparable with the much slower Kriging method, and allows extrapolation, in contrast to Triangulation. (author)
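As a toy illustration of estimating a field from scattered point measurements, here is inverse-distance weighting, a deliberately simpler scheme than the Multilevel B-Spline, Triangulation and Kriging methods the project compared. The station coordinates and dose-rate readings are hypothetical.

```python
def idw(points, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from (x, y, value)
    point measurements; returns the measured value exactly at a station."""
    num = den = 0.0
    for x, y, v in points:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v  # query coincides with a measurement point
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical dose-rate readings (x, y in m; value in uSv/h):
stations = [(0, 0, 1.0), (10, 0, 3.0), (0, 10, 3.0), (10, 10, 5.0)]
print(idw(stations, (5, 5)))  # symmetric point: mean of 1, 3, 3, 5 = 3.0
```

Like Triangulation, plain IDW cannot extrapolate trends beyond the measured range, it only averages, which is one reason spline- and kriging-type methods are preferred for mapping a release plume.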
Internal Error Propagation in Explicit Runge--Kutta Methods
Ketcheson, David I.
2014-09-11
In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
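The notion of internal error propagation can be probed numerically: inject a small perturbation into every stage of a classical RK4 step and compare against the unperturbed step. This is a toy probe of a single step, not the paper's general analysis, and the test problem is arbitrary.

```python
def rk4_step(f, t, y, h, stage_noise=0.0):
    """One classical RK4 step for y' = f(t, y); `stage_noise` is added to
    every stage value to mimic roundoff or solver error inside the step."""
    k1 = f(t, y + stage_noise)
    k2 = f(t + h / 2, y + h / 2 * k1 + stage_noise)
    k3 = f(t + h / 2, y + h / 2 * k2 + stage_noise)
    k4 = f(t + h, y + h * k3 + stage_noise)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def decay(t, y):
    return -y  # simple linear test problem y' = -y

clean = rk4_step(decay, 0.0, 1.0, 0.1)
dirty = rk4_step(decay, 0.0, 1.0, 0.1, stage_noise=1e-12)
amplification = abs(dirty - clean) / 1e-12
print(amplification)  # O(h): this step does not blow internal errors up
```

For well-behaved methods and small steps the amplification stays modest, as here; the paper's point is that for some implementations and many-stage methods the analogous constants can become catastrophically large.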
Clark, Joseph Warren
2012-01-01
In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…
International Nuclear Information System (INIS)
The method is based on perturbing the reactor cell by a few up to a few tens of percent. Measurements were performed on square lattice cells of the zero-power reactors Anna, NORA and RB, with metal uranium and uranium oxide fuel elements and water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. A simple diffusion theory for partial plate cell perturbation was developed for verification of the perturbation method. The results of these calculations showed that introducing the perturbation sample into the fuel flattens the thermal neutron density, depending on the amplitude of the applied perturbation. The extrapolation applied to the perturbed distributions was found to be justified.
Production and characterization of TI/PbO2 electrodes by a thermal-electrochemical method
Scientific Electronic Library Online (English)
Laurindo, Edison A.; Bocchi, Nerilso; Rocha-Filho, Romeu C.
2000-08-01
Full Text Available Aiming to obtain electrodes with a high overpotential for the oxygen evolution reaction (OER), useful for the oxidation of organic pollutants, Ti/PbO2 electrodes were prepared by a thermal-electrochemical method and their performance was compared with that of electrodeposited ones. The open-circuit [...] potential of these electrodes in 0.5 mol L-1 H2SO4 solution showed quite stable values, close to one another, within the potential range of the PbO2 stability region of Pourbaix diagrams. X-ray diffraction analyses showed that the thermal-electrochemical oxide is a mixture of ort-PbO, tetr-PbO and ort-PbO2. The electrodes produced by electrodeposition, in turn, were most probably in the tetr-PbO2 form. Micrographs obtained by scanning electron microscopy showed that the basic morphology of the thermal-electrochemical PbO2 is determined in the thermal step, being quite distinct from that of the electrodeposited electrodes. Polarization curves in 0.5 mol L-1 H2SO4 showed that, for the thermal-electrochemical Ti/PbO2 electrodes, the OER was shifted to more positive potentials. However, the quite high Tafel coefficients indicate that passivating films possibly formed on the Ti substrates, which may eventually explain the somewhat low current values for the OER. Abstract in English Looking for electrodes with a high overpotential for the oxygen evolution reaction (OER), useful for the oxidation of organic pollutants, Ti/PbO2 electrodes were prepared by a thermal-electrochemical method and their performance was compared with that of electrodeposited electrodes. The open-circuit [...] potential for these electrodes in 0.5 mol L-1 H2SO4 presented quite stable similar values. X-ray diffraction analyses showed the thermal-electrochemical oxide to be a mixture of ort-PbO, tetr-PbO and ort-PbO2.
On the other hand, the electrodes obtained by electrodeposition were in the tetr-PbO2 form. Analyses by scanning electron microscopy showed that the basic morphology of the thermal-electrochemical PbO2 is determined in the thermal step, being quite distinct from that of the electrodeposited electrodes. Polarization curves in 0.5 mol L-1 H2SO4 showed that in the case of the thermal-electrochemical PbO2 electrodes the OER was shifted to more positive potentials. However, the values of the Tafel slopes, quite high, indicate that passivating films were possibly formed on the Ti substrates, which could eventually explain the somewhat low current values for OER.
Wavefield reconstruction methods for reverse time migration
International Nuclear Information System (INIS)
During pre-stack reverse time migration (RTM), the shot and receiver wavefields are extrapolated separately along opposite time directions, which means the shot wavefield must be saved; this is a bottleneck of RTM. The random boundary condition (RBC) method can be used to reconstruct the shot wavefield and solve this problem. The disadvantage of the RBC is that the free surface boundary condition (FSBC) must then be used, because an RBC at the surface boundary would induce severe noise throughout the imaging profile. The use of the FSBC is also harmful, because reflections from the surface generate imaging artifacts. In this paper, we use two different boundary-condition schemes, both applying an absorbing boundary condition on the upper boundary, to reconstruct the shot wavefield perfectly. The new schemes solve the free surface boundary problem and do not demand much memory. Numerical examples prove the efficiency of these methods. (paper)
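The reconstruction idea can be sketched in one dimension: a leapfrog wave-equation update is time-reversible, so the shot wavefield can be regenerated backwards from its last two snapshots instead of being stored at every step. This sketch uses simple reflecting (Dirichlet) ends rather than the paper's boundary schemes.

```python
def laplacian(u):
    """Discrete 1-D Laplacian with fixed (Dirichlet) end points."""
    inner = [u[i - 1] - 2 * u[i] + u[i + 1] for i in range(1, len(u) - 1)]
    return [0.0] + inner + [0.0]

def step(u_prev, u, r2):
    """One leapfrog step of the 1-D wave equation u_tt = c^2 u_xx,
    with r2 = (c * dt / dx)^2."""
    lap = laplacian(u)
    return [2 * u[i] - u_prev[i] + r2 * lap[i] for i in range(len(u))]

n, r2, nsteps = 101, 0.25, 200
u0 = [2.718281828 ** (-0.05 * (i - 50) ** 2) for i in range(n)]
u_prev, u = u0[:], u0[:]            # zero initial velocity
for _ in range(nsteps):             # forward: keep only the last two fields
    u_prev, u = u, step(u_prev, u, r2)

# Reverse time stepping: the same update run backwards regenerates the
# earlier wavefield without storing every snapshot.
b_prev, b = u, u_prev
for _ in range(nsteps - 1):
    b_prev, b = b, step(b_prev, b, r2)
err = max(abs(a - c) for a, c in zip(b, u0))
print(err)  # tiny: the reconstructed field matches the initial field
```

The reversal works because the leapfrog relation is symmetric in time; in RTM the complication is precisely that absorbing boundaries destroy this reversibility, which is what the random-boundary and two-scheme tricks above are designed to work around.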
Mccoy, M. J.
1980-01-01
Various finite difference techniques used to solve Laplace's equation are compared. Curvilinear coordinate systems are used on two-dimensional regions with irregular boundaries, specifically regions around circles and airfoils. Truncation errors are analyzed for three different finite difference methods. The false boundary method and two-point and three-point extrapolation schemes, used when the Neumann boundary condition is imposed, are considered, and the effects of spacing and nonorthogonality in the coordinate systems are studied.
Munjanja Stephan P; Manandhar Dharma S; Mwansambo Charles; Kazembe Peter N; Osrin David; Fottrell Edward; Vergnano Stefania; Byass Peter; Lewycka Sonia; Costello Anthony
2011-01-01
Abstract Background Verbal autopsy (VA) is a widely used method for analyzing cause of death in absence of vital registration systems. We adapted the InterVA method to extrapolate causes of death for stillbirths and neonatal deaths from verbal autopsy questionnaires, using data from Malawi, Zimbabwe, and Nepal. Methods We obtained 734 stillbirth and neonatal VAs from recent community studies in rural areas: 169 from Malawi, 385 from Nepal, and 180 from Zimbabwe. Initial refinement of the Inte...
Tresch, Simon; Fister, Wolfgang; Marzen, Miriam; Kuhn, Nikolaus J.
2015-04-01
The quality of data obtained from rainfall experiments depends mainly on the quality of the rainfall simulation itself. However, even the best rainfall simulation cannot deliver valuable data if runoff and sediment discharge from the plot are not sampled at a proper interval or if poor interpolation methods are used. The safest way to get good results would be to collect all runoff and sediment that comes off the plot at the shortest possible intervals. Unfortunately, high rainfall amounts often coincide with limited transport and analysis capacities. Therefore, it is in most cases necessary to find a good compromise between sampling frequency, interpolation method, and available analysis capacities. The aim of this study was to compare different methods of calculating total sediment yield based on aliquot sampling intervals. The methods tested were (1) simple extrapolation of one sample until the next sample was collected; (2) averaging between two successive samples; (3) extrapolation of the sediment concentration; (4) extrapolation using a regression function. The results indicate that all methods could, theoretically, be used to calculate total sediment yields, but errors of 10-25% would have to be taken into account when interpreting the data. The highest deviations were always found for the first measurement interval, which shows that capturing the initial flush of sediment from the plot is very important for calculating reliable totals.
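Methods (1) and (2) from the list above can be sketched directly; on hypothetical 5-minute samples the two interpolation choices already give noticeably different totals, which is the kind of error the abstract quantifies.

```python
def total_stepwise(times, rates):
    """Method (1): hold each sampled sediment rate constant until the
    next sample is collected, then sum rate * interval."""
    return sum(rates[i] * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def total_averaged(times, rates):
    """Method (2): average successive samples over each interval
    (equivalent to the trapezoid rule)."""
    return sum((rates[i] + rates[i + 1]) / 2 * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

# Hypothetical 30-min run sampled every 5 min; sediment discharge (g/min)
# rises sharply with the initial flush and then declines.
t = [0, 5, 10, 15, 20, 25, 30]
q = [0.0, 8.0, 6.0, 5.0, 4.5, 4.0, 3.8]
print(total_stepwise(t, q), total_averaged(t, q))  # 137.5 vs 147.0 g
```

The gap between the two totals is largest across the first interval, where the rate jumps from 0 to its peak, illustrating why missing or coarsely sampling the initial flush dominates the overall error.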
Scientific Electronic Library Online (English)
Ozores Suárez, Francisco Javier.
2013-09-01
Full Text Available Introduction: the systolic excursion of the lateral plane of the tricuspid annulus (TAPSE) is a useful parameter in the evaluation of right ventricular function in pediatric patients. Objectives: to show the normal TAPSE values in Cuban children by age group, and to describe their relationship [...] with age, left ventricular output, pulmonary acceleration time and left ventricular ejection fraction. Methods: a prospective study was carried out that included 102 normal children, in whom TAPSE was measured by adapting the program for measuring the distance between point E and the interventricular septum. Results: mean TAPSE was 19.4 mm (SD ± 6), with mean values from 9.5 mm in the first week of life up to 21.2 mm at 5 years and 24.1 mm in older children. A significant positive correlation was found between TAPSE and age (r = 0.679), described by the equation TAPSE = 13.2787 + 5.2354 log(X). TAPSE values were shown for 5 age groups. A significant correlation was also found among TAPSE, pulmonary acceleration time and left ventricular systolic output. Conclusions: there are 5 well-defined age groups; the greatest TAPSE changes occur before 5 years of age, and a logarithmic relationship was found among TAPSE, age and pulmonary acceleration time. The program used is recommended as an alternative for measuring TAPSE. Abstract in English Introduction: the tricuspid annular plane systolic excursion (TAPSE) is a useful parameter to evaluate the right ventricular function in pediatric patients. Objectives: to show the normal values of TAPSE in Cuban children by age groups, and to describe their relationship with the age, the left ventr [...] icular output, the pulmonary acceleration time and the ejection fraction of the left ventricle.
Methods: a prospective study included 102 normal children to whom TAPSE was measured by adapting the program for distance mensuration between point E and the interventricular septum. Results: average TAPSE was 19.4 mm (DS±6) with mean values equal to 9.5 mm in the first week up to 21.2 mm at 5 years and 24.1 in older children. There was significant positive correlation between TAPSE figures and age (r= 0.679) described in equation TAPSE= 13.2787 + 5.2354 log (X). The TAPSE values were presented in five age groups. It was also found that there was significant correlation among TAPSE, pulmonary acceleration time and systolic output of the left ventricle. Conclusions: there exist five well-defined age groups, the major changes occur before 5 years of age and log relation was found among TAPSE, age and pulmonary acceleration time. The used program is recommended as an alternative to measure TAPSE.
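The logarithmic age relation reported above can be reproduced with an ordinary least-squares fit on log-transformed age. A minimal sketch in Python; the data points below are synthetic values generated from the published equation, not the study's measurements:

```python
import numpy as np

def fit_log_model(age_years, tapse_mm):
    """Fit TAPSE = a + b*log10(age) by least squares; returns (a, b)."""
    X = np.column_stack([np.ones_like(age_years), np.log10(age_years)])
    coef, *_ = np.linalg.lstsq(X, tapse_mm, rcond=None)
    return coef[0], coef[1]

# Synthetic data generated from the published equation (illustrative only)
age = np.array([0.02, 0.5, 1.0, 2.0, 5.0, 10.0])   # years
tapse = 13.2787 + 5.2354 * np.log10(age)           # mm
a, b = fit_log_model(age, tapse)
```

Because the synthetic points lie exactly on the curve, the fit recovers the published coefficients.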
Eng, Alex Yong Sheng; Ambrosi, Adriano; Sofer, Zdeněk; Šimek, Petr; Pumera, Martin
2014-12-23
Beyond MoS2, the first transition metal dichalcogenide (TMD) to gain recognition as an efficient catalyst for the hydrogen evolution reaction (HER), interest in other TMD nanomaterials is steadily proliferating. This is particularly true in electrochemistry, with a myriad of emerging applications ranging from catalysis to supercapacitors and solar cells. Despite this rise, current understanding of their electrochemical characteristics is lacking. We therefore examine the inherent electroactivities of various chemically exfoliated TMDs (MoSe2, WS2, WSe2) and their implications for sensing and for catalysis of the hydrogen evolution and oxygen reduction reactions (ORR). The TMDs studied possess distinctive inherent electroactivities, which, together with their catalytic effects for the HER, depend strongly on the chemical exfoliation route and the metal-to-chalcogen composition, particularly in MoSe2. Although its inherent activity varies widely with the exfoliation procedure, MoSe2 is also the most efficient HER catalyst, with a low overpotential of -0.36 V vs RHE (at 10 mA cm(-2) current density) and a fairly low Tafel slope of ~65 mV/dec after BuLi exfoliation. In addition, it demonstrates a fast heterogeneous electron transfer rate toward ferrocyanide, with a k0obs of 9.17×10(-4) cm s(-1), better than that of conventional glassy carbon electrodes. Knowledge of TMD electrochemistry is essential for the rational development of future applications; inherent TMD activity may limit certain purposes, but intended objectives can nonetheless be achieved by careful selection of TMD compositions and exfoliation methods. PMID:25453501
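A Tafel slope such as the ~65 mV/dec quoted above is obtained by a linear fit of overpotential against the logarithm of current density over the Tafel region. A minimal sketch; the current-overpotential pairs below are synthetic, constructed to have a -65 mV/dec slope, not measured data:

```python
import numpy as np

def tafel_slope_mv_per_dec(eta_v, j_a_cm2):
    """Fit eta = a + b*log10(|j|) over the Tafel region; return slope b in mV/dec."""
    b, a = np.polyfit(np.log10(np.abs(j_a_cm2)), eta_v, 1)
    return b * 1000.0  # V/dec -> mV/dec

# Synthetic cathodic branch with a -65 mV/dec slope (illustrative numbers)
j = np.array([1e-3, 3e-3, 1e-2, 3e-2])        # current density, A/cm^2
eta = -0.36 - 0.065 * np.log10(j / 1e-2)      # overpotential, V
slope = tafel_slope_mv_per_dec(eta, j)
```

In practice one would restrict the fit to the linear (activation-controlled) portion of the polarization curve before applying it.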
TDCR Calibration method of a radioactive source activity for liquid scintillation counting
International Nuclear Information System (INIS)
A new method for the calibration of a radioactive source activity for liquid scintillation counting is described. The method uses the triple-to-double coincidence ratio (TDCR) as the counting-efficiency (yield) variation parameter; the coincidence counting-rate curve is extrapolated to TDCR = 1, and an approximate value of the sample activity is obtained directly. The paper gives the principle of the method, the yield calculation, and an estimate of the theoretical uncertainty. Measurements for tritiated water agree, within a total uncertainty of ±0.63%, with the NBS standard value.
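The extrapolation step can be sketched numerically: fit the counting-rate versus TDCR curve and evaluate the fit at TDCR = 1, where detection efficiency is taken as complete. A hedged sketch; the quenched-series data are hypothetical, generated from an invented quadratic response so the expected answer is known:

```python
import numpy as np

def extrapolate_to_tdcr_one(tdcr, count_rate, degree=2):
    """Fit the counting rate vs TDCR curve and extrapolate to TDCR = 1."""
    coeffs = np.polyfit(tdcr, count_rate, degree)
    return np.polyval(coeffs, 1.0)

# Hypothetical quenched-series data from a known quadratic response
tdcr = np.array([0.70, 0.78, 0.85, 0.92])
true_curve = [-1000.0, 2600.0, -600.0]        # invented coefficients
rate = np.polyval(true_curve, tdcr)           # counts per second
activity_cps = extrapolate_to_tdcr_one(tdcr, rate)
```

Since the synthetic points lie exactly on the quadratic, the extrapolated value equals the curve's value at TDCR = 1, here 1000 counts per second.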
International Nuclear Information System (INIS)
Historically, geophysical methods have been used extensively and successfully to explore the subsurface for petroleum, gas, mineral, and geothermal resources. Their application to site characterization and to monitoring the performance of near-surface waste sites or repositories, however, has been somewhat limited. Presented here is an overview of the geophysical methods that could contribute to defining subsurface heterogeneity and to extrapolating point measurements at the surface and in boreholes to volumetric descriptions of a fractured rock. In addition to site characterization, a significant application of geophysical methods may be in performance assessment and in monitoring the repository to determine whether the performance is as expected
International Nuclear Information System (INIS)
A method is presented for determining the challenge to an air cleaning system resulting from an accidental explosion in a process cell of a fuel cycle facility. In many safety analyses, this quantity is estimated by multiplying the volume of the process cell by the maximum concentration of airborne material that is reasonably stable against agglomeration and sedimentation. Particle sizes are inferred from the assumption of concentration stability. The suggested method is based on extrapolation of data obtained for the explosive dispersal of chemical agents. Application of the extrapolated information to fuel cycle facilities yields an estimate of the total material airborne as well as its particle size distribution. An important variable is the weight ratio of inert material to explosive (mass ratio). As the mass ratio is expected to be high in fuel cycle facilities, the method predicts that airborne material will have size distributions with relatively large mean values, after which substantial settling will occur. An illustrative calculation that takes mass ratio and settling into account suggests that the total filter challenge may be greater than previously estimated, but that the fraction of that challenge smaller than 10 micrometers may be very low. For use in safety analyses, the method requires experimental validation of the extrapolation of the reference data to the conditions existing in a fuel cycle facility
International Nuclear Information System (INIS)
The calculation method presented in this report has been developed for the Mercury-Ferranti computer of the C.E.N.S. It solves the diffusion and continuity equations for flux and current with two neutron groups and one dimension in spherical, cylindrical and slab geometry. In the cylindrical and slab configurations, the height and the extrapolated radius can be taken into account. The critical condition can be realised by varying linearly one or more parameters: k∞, the medium boundary, the height or the extrapolated radius. The method also computes the flux, the adjoint flux and various integrals. The first part explains what one needs to know before using the method: data presentation, capabilities of the method, and presentation of results, with some information about restrictions, accuracy and computation time. The complete formulation of the calculation method is given in the second part. (M.P.)
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente Jose
2011-11-01
This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.
Wavefield Extrapolation in Pseudo-depth Domain
Ma, Xuxin
2011-12-11
Wave-equation based seismic migration and inversion tools are widely used by the energy industry to explore hydrocarbon and mineral resources. By design, most of these techniques simulate wave propagation in a space domain with the vertical axis being depth measured from the surface. Vertical depth is popular because it is a straightforward mapping of the subsurface space. It is, however, not computationally cost-effective, because the wavelength changes with the local elastic wave velocity, which in general increases with depth in the Earth. As a result, the sampling per wavelength also increases with depth. To avoid spatial aliasing in deep fast media, the seismic wave is oversampled in shallow slow media, which increases the total computation cost. This issue is effectively tackled by using a vertical time axis instead of vertical depth, because in a vertical time representation the "wavelength" is essentially the time period for vertical rays. This thesis extends the vertical time axis to a pseudo-depth axis, which has units of distance while preserving the properties of the vertical time representation. To explore the potential of wave-equation based imaging in the pseudo-depth domain, a partial differential equation (PDE) is derived to describe acoustic waves in this new domain. This new PDE is inherently anisotropic because of the use of a constant vertical velocity to convert between depth and vertical time. Such anisotropy results in lower reflection coefficients than conventional space-domain modeling. This feature helps suppress the low-wavenumber artifacts in reverse-time migration images that are caused by the widely used cross-correlation imaging condition. This thesis illustrates modeling acoustic waves in both the conventional space domain and the pseudo-depth domain. The numerical tool used to model acoustic waves is built on the lowrank approximation of Fourier integral operators.
To investigate the potential of seismic imaging in the pseudo-depth domain, examples of zero-offset migration are implemented in the pseudo-depth domain and compared with conventional space-domain imaging results.
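The depth-to-pseudo-depth mapping described above can be sketched as a scaled vertical traveltime: integrate 1/v(z) along depth and multiply by a constant reference velocity so the new axis keeps units of distance. A sketch under assumed values; the linear v(z) profile and the reference velocity v0 are illustrative, not from the thesis:

```python
import numpy as np

def to_pseudo_depth(z, v_of_z, v0):
    """Map depth z to pseudo-depth: one-way vertical time tau(z) = int dz'/v(z'),
    scaled by a constant reference velocity v0 to keep distance units."""
    dz = np.diff(z, prepend=0.0)
    tau = np.cumsum(dz / v_of_z)   # crude Riemann-sum traveltime
    return v0 * tau

z = np.linspace(0.0, 3000.0, 301)     # depth axis, m
v = 1500.0 + 0.6 * z                  # hypothetical linear v(z), m/s
zp = to_pseudo_depth(z, v, v0=1500.0)
```

Because v(z) exceeds v0 at depth, the pseudo-depth axis compresses the fast deep section, which is exactly what reduces oversampling there.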
Model Mixing for Long-Term Extrapolation.
Czech Academy of Sciences Publication Activity Database
Ettler, P.; Kárný, Miroslav; Nedoma, Petr
Vienna : ARGESIM-ARGE Simulation News, 2007, s. 1-6. ISBN 978-3-901608-32-2. [EUROSIM Congress on Modelling and Simulation /6./. Ljubljana (SI), 09.09.2007-13.09.2007] R&D Projects: GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Simulation * Modelling * Estimation * Multiple models Subject RIV: BB - Applied Statistics, Operational Research http://as.utia.cz/publications/2007/EttKarNed_07.pdf
Model Mixing for Long-Term Extrapolation.
Czech Academy of Sciences Publication Activity Database
Ettler, P.; Kárný, Miroslav; Nedoma, Petr
Vienna : ARGESIM-ARGE Simulation News, 2007. s. 275-275. [EUROSIM Congress on Modelling and Simulation /6./. 09.09.2007-13.09.2007, Ljubljana] R&D Projects: GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Simulation * Modelling * Estimation * Multiple models Subject RIV: BB - Applied Statistics, Operational Research http://as.utia.cz/publications/2007/EttKarNed_07b.pdf
Alexander, V; THOMAS, H; A Cronin; Fielding, J.; Moran-Ellis, J
2008-01-01
This chapter considers mixed methods, defined as using two or more research methods within a project, and explores the reasons why a researcher may choose two or more methods to address their chosen area of study. Starting with a discussion of the aims researchers may have in using multiple methods, the chapter then briefly describes key debates about what constitutes mixed methods. It presents the advantages of using a mixed methods approach and discusses a variety of ways that researchers h...
Corrosion Behavior of Arc Weld and Friction Stir Weld in Al 6061-T6 Alloys
International Nuclear Information System (INIS)
For the evaluation of the corrosion resistance of Al 6061-T6 alloy, Tafel measurements and an immersion test were performed on friction stir welds (FSW) and gas metal arc welds (GMAW). The Tafel and immersion test results indicated that the GMA weld was severely attacked compared with the friction stir weld, which may be mainly due to a galvanic corrosion mechanism acting on the GMA weld
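Tafel measurements like these yield a corrosion current density, which converts to a penetration rate via Faraday's law. A sketch of that arithmetic using the familiar ASTM G102-style constant; the equivalent-weight and density defaults are for iron/mild steel, and the 10 µA/cm² input is an illustrative value, not data from this study:

```python
def corrosion_rate_mm_per_year(i_corr_a_cm2, eq_weight_g=27.92, density_g_cm3=7.87):
    """Faraday's-law conversion: rate(mm/y) = 3.27e-3 * i(uA/cm^2) * EW / rho.
    Here i is taken in A/cm^2, so the constant becomes 3.27e3."""
    return 3.27e3 * i_corr_a_cm2 * eq_weight_g / density_g_cm3

# Example: 10 uA/cm^2 corrosion current density on mild steel
rate = corrosion_rate_mm_per_year(10e-6)   # roughly 0.12 mm/y
```

The same conversion underlies the mm/y figures quoted throughout electrochemical corrosion studies.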
Methods for measuring arctic and alpine shrub growth : a review
DEFF Research Database (Denmark)
Myers-Smith, Isla; Hallinger, Martin
2015-01-01
Shrubs have increased in abundance and dominance in arctic and alpine regions in recent decades. This often dramatic change, likely due to climate warming, has the potential to alter both the structure and function of tundra ecosystems. The analysis of shrub growth is improving our understanding of tundra vegetation dynamics and environmental changes. However, dendrochronological methods developed for trees need to be adapted for the morphology and growth eccentricity of shrubs. Here, we review current and developing methods to measure radial and axial growth, estimate age, and assess growth dynamics in relation to environmental variables. Recent advances in sampling methods, analysis and applications have improved our ability to investigate growth and recruitment dynamics of shrubs. However, to extrapolate findings to the biome scale, future dendroecological work will require improved approaches that better address variation in growth within parts of the plant, among individuals within populations and between species.
Kumar, Sudhir; Srinivasan, P; Sharma, S D; Saxena, Sanjay Kumar; Bakshi, A K; Dash, Ashutosh; Babu, D A R; Sharma, D N
2015-09-01
Isotope Production and Application Division of Bhabha Atomic Research Center developed (32)P patch sources for the treatment of superficial tumors. The surface dose rate of a newly developed (32)P patch source of nominal diameter 25 mm was measured experimentally using a standard extrapolation ionization chamber and Gafchromic EBT film. A Monte Carlo model of the (32)P patch source along with the extrapolation chamber was also developed to estimate the surface dose rates from these sources. The surface dose rates to tissue (cGy/min) measured using the extrapolation chamber and radiochromic film are 82.03±4.18 (k=2) and 79.13±2.53 (k=2), respectively. The two values obtained by the two independent experimental methods agree with each other within a variation of 3.5%. The surface dose rate to tissue (cGy/min) estimated using the MCNP Monte Carlo code works out to be 77.78±1.16 (k=2). The maximum deviation between the surface dose rates to tissue obtained by the Monte Carlo and extrapolation chamber methods is 5.2%, whereas the difference between the surface dose rates obtained by radiochromic film measurement and Monte Carlo simulation is 1.7%. The three values of the surface dose rate of the (32)P patch source obtained by three independent methods agree with one another within the uncertainties associated with their measurement and calculation. This work has demonstrated that MCNP-based electron transport simulations are accurate enough for determining the dosimetry parameters of the indigenously developed (32)P patch sources for contact brachytherapy applications. PMID:26086681
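An extrapolation-chamber measurement reduces, at its core, to finding the slope of ionization current versus electrode gap as the gap tends to zero; that limiting slope, combined with W/e, air density and the collecting area, gives the surface dose rate. A minimal sketch of the fitting step only, with hypothetical readings:

```python
import numpy as np

def zero_gap_slope(gap_mm, current_pa):
    """Linear fit of ionization current vs electrode gap; the slope in the
    zero-gap limit is proportional to the surface dose rate."""
    slope, intercept = np.polyfit(gap_mm, current_pa, 1)
    return slope

# Hypothetical chamber readings at decreasing gaps (pA)
gap = np.array([2.5, 2.0, 1.5, 1.0, 0.5])
current = 4.0 * gap + 0.3   # synthetic linear response
s = zero_gap_slope(gap, current)
```

With real data the fit would be restricted to small gaps, where the current-gap relation is linear.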
International Nuclear Information System (INIS)
The homotopy perturbation method is used to formulate a new analytic solution of the neutron diffusion equation both for a sphere and a hemisphere of fissile material. Different boundary conditions are investigated; including zero flux on boundary, zero flux on extrapolated boundary, and radiation boundary condition. The interaction between two hemispheres with opposite flat faces is also presented. Numerical results are provided for one-speed fast neutrons in 235U. A comparison with Bessel function based solutions demonstrates that the homotopy perturbation method can exactly reproduce the results. The computational implementation of the analytic solutions was found to improve the numeric results when compared to finite element calculations.
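For the zero-flux-on-extrapolated-boundary condition mentioned above, one-group diffusion theory gives a closed-form critical radius for a bare sphere. A sketch of that textbook relation; the cross-section values below are hypothetical placeholders, not evaluated 235U data:

```python
import math

def critical_radius_sphere(nu_sigma_f, sigma_a, D, extrap_dist_cm=0.0):
    """One-group bare-sphere criticality: material buckling
    B_m^2 = (nu*Sigma_f - Sigma_a)/D; critical when the geometric
    buckling pi/(R + d) matches it, so R = pi/B_m - d."""
    b_m = math.sqrt((nu_sigma_f - sigma_a) / D)
    return math.pi / b_m - extrap_dist_cm

# Illustrative one-speed constants (hypothetical, in cm^-1 and cm)
R = critical_radius_sphere(nu_sigma_f=0.0837, sigma_a=0.0656, D=1.35,
                           extrap_dist_cm=2.0)
```

Including the extrapolation distance d shrinks the physical critical radius relative to the zero-flux-on-boundary estimate, which is one reason the choice of boundary condition matters in the paper above.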
Velikyan, Irina; Antoni, Gunnar; Sörensen, Jens; Estrada, Sergio
2013-01-01
Positron Emission Tomography (PET) and in particular gallium-68 (68Ga) applications are growing exponentially worldwide, contributing to the expansion of nuclear medicine and the personalized management of patients. The significance of 68Ga utility is reflected in the implementation of European Pharmacopoeia monographs. However, there is one crucial point in the monographs that might limit the use of the generators and consequently the expansion of 68Ga applications: the limit of 0.001% of germanium-68 (68Ge(IV)) radioactivity content in a radiopharmaceutical. We have investigated the organ distribution of 68Ge(IV) in rat and estimated human dosimetry parameters in order to provide experimental evidence for the determination and justification of the 68Ge(IV) limit. Male and female rats were injected in the tail vein with formulated [68Ge]GeCl4 in the absence or presence of [68Ga]Ga-DOTA-TOC. The tissue radioactivity distribution data were extrapolated for the estimation of human organ equivalent doses and total effective dose using the Organ Level Internal Dose Assessment Code software (OLINDA/EXM). 68Ge(IV) was evenly distributed among the rat organs and fast renal excretion prevailed. Human organ equivalent dose and total effective dose estimates indicated that the kidneys were the dose-limiting organs (185±54 μSv/MBq for female and 171±38 μSv/MBq for male) and the total effective dose was 15.5±0.1 and 10.7±1.2 μSv/MBq, respectively, for female and male. The results of this dosimetry study conclude that the 68Ge(IV) limit currently recommended by the monographs could be increased considerably (>100 times) without harming the patient, given the small absorbed doses to normal organs and the fast excretion. PMID:23526484
Directory of Open Access Journals (Sweden)
Sie S. T.
2006-11-01
Full Text Available Research and development studies in a laboratory are necessarily conducted on a scale which is orders of magnitude smaller than that of commercial practice. In the case of the development and commercialization of an unprecedented novel process technology, available laboratory results have to be translated into the envisaged technology on a commercial scale, i.e. the problem is one of scaling-up. However, in many circumstances the commercial technology is more or less defined as far as the type of reactor is concerned, and laboratory studies are concerned with generating predictive information on the behaviour of new catalysts, alternative feedstocks, etc., in such a reactor. In many cases the complexity of feed composition and reaction kinetics precludes prediction on the basis of a combination of fundamental kinetic data and computer models, so that there is no other option than to simulate the commercial reactor on a laboratory scale, i.e. the problem is one of scaling-down. From the point of view of R&D efficiency, the scale of the laboratory experiments should be as small as possible without detracting from the meaningfulness of the results. In the present paper some problems in the scaling-down of a trickle-flow reactor as applied in hydrotreating processes to kinetically equivalent laboratory reactors of different sizes are discussed. Two main aspects relating to inequalities in fluid dynamics resulting from the differences in scale are treated in more detail, viz. deviations from ideal plug flow and non-ideal wetting or irrigation of the catalyst particles. Although a laboratory reactor can never be a true small-scale replica of a commercial trickle-flow reactor in all respects, it can nevertheless be made to provide representative data as far as the catalytic conversion aspects are concerned.
By resorting to measures such as catalyst bed dilution with fine, catalytically inert material, it proves possible to carry out meaningful process research on hydrotreating processes at the scale of micro-reactors.
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
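The paper's subsampling extrapolation method is not reproduced here, but its core idea, deliberately inflating the estimation error and extrapolating the coefficient back to the zero-error limit, is shared with classical SIMEX. A SIMEX-style sketch for an attenuated regression slope; all data are synthetic and the error variance is assumed known for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def simex_slope(x_obs, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), B=200):
    """SIMEX sketch: add extra noise with variance lambda*sigma_u^2, track
    the attenuated OLS slope, fit a quadratic in lambda, extrapolate to -1."""
    lams, slopes = [0.0], [np.polyfit(x_obs, y, 1)[0]]
    for lam in lambdas:
        s = np.mean([np.polyfit(x_obs + rng.normal(0.0, np.sqrt(lam) * sigma_u,
                                                   x_obs.size), y, 1)[0]
                     for _ in range(B)])
        lams.append(lam)
        slopes.append(s)
    coef = np.polyfit(lams, slopes, 2)
    return np.polyval(coef, -1.0)   # zero-measurement-error limit

# Synthetic data: true slope 2.0, measurement error sd 0.5
n = 2000
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, 0.5, n)
y = 2.0 * x_true + rng.normal(0.0, 0.2, n)
naive = np.polyfit(x_obs, y, 1)[0]    # attenuated toward zero
corrected = simex_slope(x_obs, y, 0.5)
```

The subsampling variant in the paper replaces the simulated-noise step with re-estimation on subsamples, but the extrapolation-to-the-limit step is the same.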
Tafel, Külliki
2006-01-01
The influence of top executives on intra-organizational relations and on management activity. Diagrams: The content and overlap of the terms of corporate governance and management; The theoretical framework for the study; The degree of involvement of the board of directors in the strategic management process; Framework for treatment of the owner-CEO-employee chain of relations
Nonstationary Hydrological Frequency Analysis: Theoretical Methods and Application Challenges
Xiong, L.
2014-12-01
Because of its great implications for the design and operation of hydraulic structures under changing environments (whether climate change or anthropogenic change), nonstationary hydrological frequency analysis has become essential. Two important methodological achievements have been made. Without adhering to the consistency assumption of traditional hydrological frequency analysis, the time-varying probability distribution of any hydrological variable can be established by linking the distribution parameters to covariates such as time or physical variables, with the help of powerful tools like the Generalized Additive Model of Location, Scale and Shape (GAMLSS). With the help of copulas, multivariate nonstationary hydrological frequency analysis has also become feasible. However, applying nonstationary hydrological frequency formulae to the design and operation of hydraulic structures under changing environments still faces many challenges in practice. First, formulae with time as the covariate can only be extrapolated for a very short period beyond the latest observation time, because such formulae are not physically constrained and the extrapolated outcomes could be unrealistic. There are two physically reasonable alternatives for changing environments: one is to link the quantiles or the distribution parameters directly to measurable physical factors, and the other is to use derived probability distributions based on hydrological processes. However, both methods carry a certain degree of uncertainty. For the design and operation of hydraulic structures under changing environments, it is recommended that the design results of both stationary and nonstationary methods be presented together and compared with each other, to help us understand the potential risks of each method.
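The time-varying-parameter idea can be sketched without GAMLSS: maximum likelihood for a Gumbel model whose location parameter drifts linearly in time. A minimal stand-in; the trend, scale and series length in the synthetic annual maxima are all invented:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def fit_nonstationary_gumbel(t, x):
    """MLE for a Gumbel model with mu(t) = mu0 + mu1*t and constant scale;
    a minimal stand-in for a GAMLSS-type time-varying distribution."""
    def nll(p):
        mu0, mu1, log_sigma = p
        sigma = np.exp(log_sigma)
        z = (x - (mu0 + mu1 * t)) / sigma
        return np.sum(np.log(sigma) + z + np.exp(-z))
    mu1_0, mu0_0 = np.polyfit(t, x, 1)               # OLS starting values
    p0 = np.array([mu0_0, mu1_0, np.log(x.std())])
    res = minimize(nll, p0, method="Nelder-Mead", options={"maxiter": 4000})
    return res.x                                      # mu0, mu1, log(sigma)

# Synthetic annual maxima with an upward trend in the location parameter
t = np.arange(60.0)
x = rng.gumbel(loc=100.0 + 0.5 * t, scale=15.0)
mu0, mu1, log_sigma = fit_nonstationary_gumbel(t, x)
```

The fitted mu1 quantifies the trend in the location parameter; in a design setting one would still face the extrapolation caveats discussed above.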
Lelong, Jérôme; Zanette, Antonino
2010-01-01
Tree methods are among the most popular numerical methods to price financial derivatives. Mathematically speaking, they are easy to understand and do not require advanced implementation skills to obtain algorithms that price financial derivatives. Tree methods basically consist of approximating the diffusion process modeling the underlying asset price by a discrete random walk. In this contribution, we provide a survey of tree methods for equity options, which focus on multiplicative binomial Cox...
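A binomial tree in the Cox-Ross-Rubinstein parameterization can be written in a few lines. The sketch below prices a European call by backward induction; the parameter values are illustrative:

```python
import math

def crr_european_call(S, K, r, sigma, T, n=500):
    """Cox-Ross-Rubinstein binomial tree for a European call option."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)
    # terminal payoffs at the n+1 leaves
    values = [max(S * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction to the root
    for _ in range(n):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

price = crr_european_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

With n = 500 steps the tree price converges close to the Black-Scholes value for these parameters (about 10.45).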
Corrosion Behavior of Pulsed Gas Tungsten Arc Weldments in Power Plant Carbon Steel
Kumaresh Babu, S. P.; Natarajan, S.
2007-10-01
Welding plays an essential role in the fabrication of components such as boiler drums, pipe work, heat exchangers, etc., used in power plants. Gas tungsten arc welding (GTAW) is mainly used for welding of boiler components. Pulsed GTAW is another process widely used where high-quality, precision welds are required. In all arc-welding processes, the intense heat produced by the arc and the associated local heating and cooling lead to varied corrosion behavior and several metallurgical phase changes. Since corrosion arises from the electrochemical potential gradient developed in the region adjacent to a weld, it is proposed to study the effects of welding on the corrosion behavior of these steels. This paper describes experimental work carried out to evaluate and compare corrosion and its inhibition in SA 516 Gr.70 carbon steel welded by the pulsed GTAW process, in HCl medium at 0.1, 0.5, and 1.0 M concentrations. The parent metal, weld metal and heat-affected zone were chosen as the regions of exposure for the study, made at room temperature (R.T.) and at 100 °C. Electrochemical polarization techniques such as Tafel line extrapolation (Tafel), linear polarization resistance (LPR), and the ac impedance method have been used to measure the corrosion current. The role of hexamine and of a mixed inhibitor (thiourea + hexamine in 0.5 M HCl), each at 100 ppm concentration, is studied in these experiments. Microstructural observation, surface characterization, and morphology studies using SEM and XRD have been made on samples exposed at 100 °C in order to highlight the nature and extent of film formation.
The Cn method for approximation of the Boltzmann equation
International Nuclear Information System (INIS)
In a new method of approximation of the Boltzmann equation, one starts from a particular form of the equation which involves only the angular flux at the boundary of the considered medium and in which the space variable does not appear explicitly. By expanding the angular flux of neutrons leaking from the medium in orthogonal polynomials, and making no assumption about the angular flux within the medium, very good approximations are obtained to several classical plane-geometry problems: the albedo of and transmission through slabs, the extrapolation length of the Milne problem, and the spectrum of neutrons reflected by a semi-infinite slowing-down medium. The method can be extended to other geometries. (authors)
Quandt, D. J.; Peterson, K.; Bond, B. J.; Olson, K. V.; Spies, T.; Halpern, C.
2010-12-01
The combination of LiDAR (Light Detection and Ranging), data from field plot measurements, and GIS technology can provide new insights into spatial variation in ecological patterns and processes. We took advantage of long-term field data in a small (96 ha) watershed (“Watershed 1”) in the H.J. Andrews Experimental Forest, an LTER (Long Term Ecological Research) site situated in Oregon’s central-western Cascade Range. Originally an old-growth Douglas-fir forest, the entire basin was clearcut in the late 1960s and then planted with Douglas-fir seedlings. However, three other tree species have found their way into the watershed, and with highly variable terrain and nutrient availability, vegetation growth is also highly variable. A network of 133 vegetation plots was installed to track subsequent growth and development of vegetation and has been re-measured approximately every 5-8 years ever since. A wealth of other information is available for the site thanks to decades of intensive research. This information has been incorporated into GIS layers along with high-resolution LiDAR measurements acquired in 2008. Our goal was to use the LiDAR data to extrapolate the plot measurements to the entire basin, and then to use the derived images to evaluate landscape patterns and relationships. The slopes of the watershed are very steep in many places, and the field plots were originally laid out as fixed-radius circles with no slope correction. We obtained significantly better correlations between field plot data and LiDAR data when the dimensions of each plot were corrected according to the local slope and aspect, resulting in variable-sized ellipses. We explored various combinations of LiDAR metrics related to canopy cover and height, with various linear and non-linear fits of those combinations. We found that we could explain approximately 50% of the observed variation in vegetation properties such as biomass and productivity with LiDAR.
Relationships between vegetation productivity and other site variables are currently being explored.
N.A. Abdel Ghanyl; A. E. El-Shenawy; W.A.M. Hussien
2011-01-01
The inhibition effect of four amino acids on the corrosion of 316L stainless steel in 1.0 M H2SO4 has been studied by open-circuit potential and potentiodynamic polarization measurements. Corrosion data such as the corrosion rate, corrosion potential (ECorr.) and corrosion current (ICorr.) were determined by extrapolation of the cathodic and anodic Tafel regions. Glycine, Leucine and Valine inhibit the corrosion process, but Arginine accelerates it. Glycine has the highest inh...
Berezin, I S
2014-01-01
Computing Methods, Volume 2 is a five-chapter text that presents the numerical methods of solving sets of several mathematical equations. This volume includes computation sets of linear algebraic equations, high degree equations and transcendental equations, numerical methods of finding eigenvalues, and approximate methods of solving ordinary differential equations, partial differential equations and integral equations.The book is intended as a text-book for students in mechanical mathematical and physics-mathematical faculties specializing in computer mathematics and persons interested in the
Development and Application of Accelerated Test Methods Specific to CPV Systems
Spencer, Mark; Hirny, Marcin; Kaplan, Ariadna
2010-10-01
Accelerated test methods are essential to prove the life of CPV products within an acceptable amount of time. This paper describes two separate accelerated test programs SolFocus is conducting on its 300 Watt SF1100 CPV product. One program is a traditional temperature / humidity step test with results extrapolated to field conditions through a use-condition model of the interior of the panel. The other program is a field study with the CPV receiver components operating at elevated temperature levels while operating in grid-connected systems. The methodology of both programs is detailed along with initial data and results.
New method for exact measurement of thermal neutron distribution in elementary cell
International Nuclear Information System (INIS)
Exact measurement of thermal neutron density distribution in an elementary cell requires knowledge of the perturbations introduced in the cell by the measuring device. A new method has been developed in which special emphasis is placed on evaluating these perturbations by measuring the response to perturbations deliberately introduced in the elementary cell. The unperturbed distribution was obtained by extrapolation to zero perturbation. The final distributions for different lattice pitches were compared with a THERMOS-type calculation. Very good agreement was reached, which resolves the long-standing disagreement between THERMOS calculations and measured density distributions (author)
Fast and accurate molecular Hartree-Fock with a finite-element multigrid method
Beck, O; Kolb, D
2003-01-01
We present a multigrid scheme for the solution of finite-element Hartree-Fock equations for diatomic molecules. It is shown to be fast and accurate, the time effort depending linearly on the number of variables. Results are given for the molecules LiH, BH, N_2 and for the Be atom computed on our molecular grid, which agrees very well with accurate values from an atomic code. Highest accuracies were obtained by applying an extrapolation scheme; we compare with other numerical methods. For N_2 we get an accuracy below 1 nHartree.
Directory of Open Access Journals (Sweden)
TobiasKoch
2014-04-01
One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only a few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot be easily replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only 5 observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed.
DEFF Research Database (Denmark)
McLaughlin, W.L.; Miller, A.
2003-01-01
Chemical and physical radiation dosimetry methods, used for the measurement of absorbed dose mainly during the practical use of ionizing radiation, are discussed with respect to their characteristics and fields of application.
Flight-Test Evaluation of Flutter-Prediction Methods
Lind, RIck; Brenner, Marty
2003-01-01
The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.
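The first data-driven method listed above, extrapolation of damping trends, can be sketched in a few lines: fit the measured modal damping ratio against airspeed and extrapolate to the speed where damping crosses zero (flutter onset). The airspeeds and damping values below are illustrative assumptions, not ATW flight data, and a linear trend is the simplest possible choice of fit:

```python
import numpy as np

# Illustrative damping-trend extrapolation: modal damping ratio measured
# at several airspeeds, fitted linearly, extrapolated to zero damping.
speed = np.array([100.0, 120.0, 140.0, 160.0])    # airspeed (knots), assumed
damping = np.array([0.060, 0.046, 0.031, 0.016])  # modal damping ratio, assumed

slope, intercept = np.polyfit(speed, damping, 1)
flutter_speed = -intercept / slope                # damping = 0 crossing
print(f"predicted flutter speed ≈ {flutter_speed:.0f} knots")
```

In practice the trend near flutter is rarely this linear, which is precisely why the abstract lists several competing prediction methods.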
Method and apparatus for determining minority carrier diffusion length in semiconductors
Goldstein, Bernard (Princeton, NJ); Dresner, Joseph (Princeton, NJ); Szostak, Daniel J. (Mercerville, NJ)
1983-07-12
Method and apparatus are provided for determining the diffusion length of minority carriers in semiconductor material, particularly amorphous silicon, which has a significantly small minority carrier diffusion length, using the constant-magnitude surface-photovoltage (SPV) method. Unmodulated illumination provides the light excitation on the surface of the material to generate the SPV. A manually controlled or automatic servo system maintains a constant predetermined value of the SPV. A vibrating Kelvin-method-type probe electrode couples the SPV to a measurement system. The operating wavelength of an adjustable monochromator is selected, with compensation for the wavelength-dependent sensitivity of a photodetector, to measure the illumination intensity (photon flux) on the silicon. Measurements of the relative photon flux for a plurality of wavelengths are plotted against the reciprocal of the optical absorption coefficient of the material. A linear plot of the data points is extrapolated to zero intensity. The negative intercept value on the reciprocal-optical-absorption-coefficient axis of the extrapolated linear plot is the diffusion length of the minority carriers.
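The extrapolation step of the constant-SPV method lends itself to a short numerical sketch. For a constant surface photovoltage, the required photon flux varies linearly with the reciprocal absorption coefficient, Phi ∝ (1/alpha + L), so a linear fit crosses zero flux at 1/alpha = -L. The numbers below are assumed demo values, not measured data:

```python
import numpy as np

L_true = 0.5  # minority-carrier diffusion length (um), assumed for the demo

inv_alpha = np.array([0.2, 0.5, 1.0, 2.0, 4.0])  # 1/alpha (um)
flux = 3.0 * (inv_alpha + L_true)                # relative photon flux at constant SPV

# Linear fit, then extrapolate to zero flux; the negative intercept
# on the 1/alpha axis is the diffusion length.
slope, intercept = np.polyfit(inv_alpha, flux, 1)
x_intercept = -intercept / slope
L_est = -x_intercept
print(f"estimated diffusion length: {L_est:.2f} um")
```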
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least since the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is represented by the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, most notably the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers in this area [14,62,85,149,173].
Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that in both classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is also motivated by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2, there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies of ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy originally proposed in Ref. [201].
Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper.
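The majority vote ensemble mentioned as the simplest example above can be written in a few lines. The base "learners" here are plain threshold functions purely for illustration; any trained classifiers exposing a predict call would slot in the same way:

```python
from collections import Counter

# Minimal majority-vote ensemble: each base learner casts a class "vote"
# and the ensemble predicts the class with the most votes.
def majority_vote(classifiers, x):
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three toy classifiers thresholding a scalar feature (illustrative only)
clfs = [lambda x: int(x > 0.3), lambda x: int(x > 0.5), lambda x: int(x > 0.7)]
print(majority_vote(clfs, 0.6))  # two of three vote class 1, so prints 1
```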
Method and system for non-linear motion estimation
Lu, Ligang (Inventor)
2011-01-01
A method and system for extrapolating and interpolating a visual signal, including: determining a first motion vector between a first pixel position in a first image and a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector, using a non-linear model, from one of (a) the first pixel position in the first image and the second pixel position in the second image, and (b) the second pixel position in the second image and the third pixel position in the third image; and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
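One way to picture the idea in this abstract is a constant-acceleration extrapolation: motion vectors between three known frames determine both a velocity and its change, and the non-linear (here quadratic) model carries both forward to a fourth frame. The quadratic model is our illustrative choice, not necessarily the patented one:

```python
import numpy as np

# Hedged sketch: extrapolate a pixel position to a fourth frame using the
# two motion vectors between three known frames and a quadratic
# (constant-acceleration) model.
def extrapolate_position(p1, p2, p3):
    v1 = p2 - p1            # first motion vector
    v2 = p3 - p2            # second motion vector
    accel = v2 - v1         # change in motion (the non-linear term)
    return p3 + v2 + accel  # predicted position in the fourth frame

p1 = np.array([0.0, 0.0])
p2 = np.array([1.0, 0.5])
p3 = np.array([3.0, 1.5])
print(extrapolate_position(p1, p2, p3))  # accelerating motion continues
```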
A review of experimental methods for determining residual creep life
International Nuclear Information System (INIS)
Experimental methods available for determining how much creep life remains at a particular time in the high temperature service of a component are reviewed. After a brief consideration of the limitations of stress rupture extrapolation techniques, the application of post-exposure creep testing is considered. Ways of assessing the effect of microstructural degradation on residual life are then reviewed. It is pointed out that while this type of work will be useful for certain materials, there are other materials in which 'mechanical damage' such as cavitation will be more important. Cavitation measurement techniques are therefore reviewed. The report ends with a brief consideration of the use of crack growth measurements in assessing the residual life of cracked components. (author)
Scattering from finite size methods in lattice QCD
International Nuclear Information System (INIS)
Using two flavors of maximally twisted mass fermions, we calculate the S-wave pion-pion scattering length in the isospin I=2 channel and the P-wave pion-pion scattering phase in the isospin I=1 channel. In the former channel, the lattice calculations are performed at pion masses ranging from 270 MeV to 485 MeV. We use chiral perturbation theory at next-to-leading order to extrapolate our results. At the physical pion mass, we find m_π a_{ππ}^{I=2} = -0.04385(28)(38) for the scattering length. In the latter channel, the calculation is currently performed at a single pion mass of 391 MeV. Making use of finite size methods, we evaluate the scattering phase in both the center of mass frame and the moving frame. The effective range formula is employed to fit our results, from which the rho resonance mass and decay width are evaluated. (orig.)
New aspects for the evaluation of radioactive waste disposal methods
International Nuclear Information System (INIS)
For the performance assessment of radioactive and hazardous waste disposal sites, risk assessments are usually performed for the long term, i.e., over an interval in space and time for which one can predict movement and behavior of toxic agents in the environment. This approach is based on at least three implicit assumptions: One, that the engineering layout will take care of the immediate endangerment of potential receptors; two, that one has carefully evaluated just how far out in space and time the models can be extrapolated, and three, that one can evaluate potential health effects for very low exposures. A few of these aspects will be discussed here in the framework of the scientific method
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Energy Technology Data Exchange (ETDEWEB)
Glass, J.T. [North Carolina State Univ., Raleigh (United States)
1993-01-01
Methods discussed in this compilation of notes and diagrams are Raman spectroscopy, scanning electron microscopy, transmission electron microscopy, and other surface analysis techniques (auger electron spectroscopy, x-ray photoelectron spectroscopy, electron energy loss spectroscopy, and scanning tunnelling microscopy). A comparative evaluation of different techniques is performed. In-vacuo and in-situ analyses are described.
DEFF Research Database (Denmark)
Ernst, Erik
2005-01-01
The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation which is a procedure call in context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, where traditional invocation is optimized for as-is reuse of existing behavior. Tight coupling reduces flexibility, and traditional invocation tightly couples transfer of information and transfer of control. Method mixins decouple these two kinds of transfer, thereby opening the doors for new kinds of abstraction and reuse. Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation which uses parameters and returned results. This relieves the caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e.g., to a callee's callee. The mechanism has been implemented in the programming language gbeta. Variants of the mechanism could be added to almost any imperative programming language.
Functional renormalization group methods in quantum chromodynamics
International Nuclear Information System (INIS)
We apply functional Renormalization Group methods to Quantum Chromodynamics (QCD). First we calculate the mass shift for the pion in a finite volume in the framework of the quark-meson model. In particular, we investigate the importance of quark effects. As in lattice gauge theory, we find that the choice of quark boundary conditions has a noticeable effect on the pion mass shift in small volumes. A comparison of our results to chiral perturbation theory and lattice QCD suggests that lattice QCD has not yet reached volume sizes for which chiral perturbation theory can be applied to extrapolate lattice results for low-energy observables. Phase transitions in QCD at finite temperature and density are currently very actively researched. We study the chiral phase transition at finite temperature with two approaches. First, we compute the phase transition temperature in infinite and in finite volume with the quark-meson model. Though qualitatively correct, our results suggest that the model does not describe the dynamics of QCD near the finite-temperature phase boundary accurately. Second, we study the approach to chiral symmetry breaking in terms of quarks and gluons. We compute the running QCD coupling for all temperatures and scales. We use this result to determine quantitatively the phase boundary in the plane of temperature and number of quark flavors and find good agreement with lattice results. (orig.)
Method for accelerated leaching of solidified waste
International Nuclear Information System (INIS)
An accelerated leach test method has been developed to determine the maximum leachability of solidified waste. The approach we have taken is to use a semi-dynamic leach test; that is, the leachant is sampled and replaced periodically. Parameters such as temperature, leachant volume, and specimen size are used to obtain releases that are accelerated relative to other standard leach tests and to the leaching of full-scale waste forms. The data obtained with this test can be used to model releases from waste forms, or to extrapolate from laboratory-scale to full-scale waste forms if diffusion is the dominant leaching mechanism. Diffusion can be confirmed as the leaching mechanism by using a computerized mathematical model for diffusion from a finite cylinder. We have written a computer program containing several models including diffusion to accompany this test. The program and a Users' Guide that gives screen-by-screen instructions on the use of the program are available from the authors. 14 refs., 4 figs., 1 tab
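When diffusion dominates, the extrapolation from laboratory to full-scale waste forms described above rests on a diffusion model. A common short-time approximation for a cylinder is the semi-infinite-medium result, CFL(t) = 2 (S/V) sqrt(De t / π); the effective diffusion coefficient and geometry below are assumed demo values, not measured waste-form data:

```python
import math

# Illustrative diffusion-model release: cumulative fraction leached (CFL)
# from a waste form at short times, semi-infinite-medium approximation.
def cumulative_fraction_leached(De, s_over_v, t):
    """De: effective diffusion coefficient (cm^2/s); s_over_v: surface-to-
    volume ratio (1/cm); t: cumulative leach time (s)."""
    return 2.0 * s_over_v * math.sqrt(De * t / math.pi)

De = 1e-9        # cm^2/s, assumed
s_over_v = 1.0   # 1/cm, assumed small cylinder
for days in (1, 4, 9):
    t = days * 86400.0
    print(days, cumulative_fraction_leached(De, s_over_v, t))
```

The square-root time dependence is the diagnostic: releases at 1, 4, and 9 days in the ratio 1:2:3 are consistent with diffusion control, which is how semi-dynamic leach data can confirm the mechanism.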
Dahlquist, Germund
2003-01-01
""Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served."" - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society.Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.
Simple Experimental Methods for Determining the Apparent Focal Shift in a Microscope System
Bratton, Benjamin P.; Shaevitz, Joshua W.
2015-01-01
Three-dimensional optical microscopy is often complicated by a refractive index mismatch between the sample and objective lens. This mismatch causes focal shift, a difference between sample motion and focal-plane motion, that hinders the accuracy of 3D reconstructions. We present two methods for measuring focal shift using fluorescent beads of different sizes and ring-stained fluorescent beads. These simple methods are applicable to most situations, including total internal reflection objectives and samples very close to the interface. For distances 0–1.5 μm into an aqueous environment, our 1.49-NA objective has a relative focal shift of 0.57 ± 0.02, significantly smaller than the simple n2/n1 approximation of 0.88. We also expand on a previous sub-critical angle theory by means of a simple polynomial extrapolation. We test the validity of this extrapolation by measuring the apparent focal shift in samples where the refractive index is between 1.33 and 1.45 and with objectives with numerical apertures between 1.25 and 1.49. PMID:26270960
Discretization error estimation and exact solution generation using the method of nearby problems.
Energy Technology Data Exchange (ETDEWEB)
Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
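Richardson extrapolation, the baseline the MNP error estimates are compared against, combines numerical solutions on two systematically refined grids of a p-th order method to estimate the discretization error and a higher-order answer. A minimal sketch on a toy problem (a second-order central difference, our illustrative choice):

```python
# Richardson extrapolation sketch: combine a 2nd-order central-difference
# derivative at spacings h and h/2 to cancel the leading error term.
def deriv_central(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

f = lambda x: x**3                     # toy function, exact derivative at 1 is 3
h = 0.1
d_h = deriv_central(f, 1.0, h)         # O(h^2) accurate
d_h2 = deriv_central(f, 1.0, h / 2)    # refined grid
p = 2                                  # formal order of the base method
richardson = d_h2 + (d_h2 - d_h) / (2**p - 1)  # higher-order estimate
error_est = (d_h2 - d_h) / (2**p - 1)          # error estimate for d_h2
print(richardson, error_est)
```

The two-grid requirement visible here is exactly the cost the abstract contrasts with MNP, which needs only one additional solution on the same grid.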
A simple and accurate method for high-temperature PEM fuel cell characterization
Kulikovsky, Andrei; Wannek, Christoph; Oetjen, Hans-Friedrich
2010-01-01
Abstract: A set of basic parameters for any polymer electrolyte membrane fuel cell (PEMFC) includes the Tafel slope b and the exchange current density j* of the cathode catalyst, the oxygen diffusion coefficient D_b in the cathode gas-diffusion layer, and the cell resistivity R_cell. Based on the analytical model of a PEMFC (A.A. Kulikovsky, Electrochimica Acta 49 (2004) 617), we propose a two-step procedure allowing one to evaluate these parameters for a high-te...
Failure Analysis of Wind Turbines by Probability Density Evolution Method
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.
2013-01-01
The aim of this study is to present an efficient and accurate method for estimating the failure probability of wind turbine structures operating under turbulent wind load. The classical method is to fit one of the extreme value probability distribution functions to the extracted maxima of the wind turbine response. However, this approach may involve considerable uncertainty due to the arbitrariness of the data and the distributions chosen, so methods with less uncertainty are desirable. The most natural approach in this respect is Monte Carlo (MC) simulation, which is impractical due to its excessive computational load. The problem can alternatively be tackled if the evolution of the probability density function (PDF) of the response process can be realized; the evolutionary PDF can then be integrated on the boundaries of the problem. For this reason we propose to use the Probability Density Evolution Method (PDEM). PDEM can alternatively be used to obtain the distribution of the extreme values of the response process by simulation; this approach requires less computational effort than integrating the evolution of the PDF, but may be less accurate. In this paper we present the results of failure probability estimation using PDEM. The results are then compared to extrapolated values obtained from extreme value distribution fits to the sample response values. The results confirm the feasibility of this approach for reliability analysis of wind turbines, while also indicating potential for improving the accuracy of the method in low-probability regions.
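The classical baseline mentioned above, fitting an extreme value distribution to extracted response maxima and extrapolating a small exceedance probability, can be sketched with a Gumbel fit. The synthetic maxima, the Gumbel choice, and the method-of-moments fit are all illustrative assumptions, not turbine response data:

```python
import numpy as np

# Synthetic "response maxima" standing in for wind-turbine data
rng = np.random.default_rng(1)
maxima = rng.gumbel(loc=10.0, scale=2.0, size=5000)

# Method-of-moments Gumbel fit:
#   scale = std * sqrt(6) / pi,  loc = mean - gamma * scale
euler_gamma = 0.5772156649
scale = maxima.std() * np.sqrt(6) / np.pi
loc = maxima.mean() - euler_gamma * scale

# Extrapolated exceedance probability at a threshold in the tail
threshold = 20.0
p_exceed = 1.0 - np.exp(-np.exp(-(threshold - loc) / scale))
print(f"P(response > {threshold}) ≈ {p_exceed:.4f}")
```

The arbitrariness the abstract warns about is visible here: a different distribution family or fitting method would give a noticeably different tail probability, which motivates the PDEM alternative.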
DEFF Research Database (Denmark)
Zhuravlev, Fedor (Technical University of Denmark)
A method of conducting radiofluorination of a substrate, comprising the steps of: (a) contacting an aqueous solution of [18F] fluoride with a polymer supported phosphazene base for sufficient time for trapping of [18F] fluoride on the polymer supported phosphazene base; and (b) contacting a solution of the substrate with the polymer supported phosphazene base having [18F] fluoride trapped thereon obtained in step (a) for sufficient time for a radiofluorination reaction to take place; an apparatus for conducting radiofluorination; use of the apparatus; and an apparatus for production of a dose of a radiotracer for administration to a patient.
International Nuclear Information System (INIS)
Several stages of physical tests have to be considered when a PWR reactor starts up. To determine the main physical parameters of the reactor core, the means at our disposal are: neutron flux measuring units, the reactivity meter, the in-core instrumentation (flux and temperature), the boron meter, and the system indicating the position of the control elements. Each of these is reviewed: principle, design, position of detectors, etc. Then, the methods used to measure the main physical parameters are presented: boron concentration, efficiency of control elements and differential efficiency of boron, measurement of the isothermal temperature coefficient, flux maps, and calibration of the neutron power measurement units
International Nuclear Information System (INIS)
Purpose: To provide a method of eliminating radioactive contamination that allows easy treatment of decontamination liquid wastes and grinding materials. Method: Organic grinding materials, such as fine walnut shell pieces, cause no secondary contamination: they are softer than inorganic grinding materials, are less easily pulverized upon collision with the surface being treated, can be reused, and produce no fine airborne powder. In addition, they can be disposed of by burning. The organic grinding material and water are sprayed through a nozzle onto the surface to be treated, and the decontamination liquid wastes are separated by filtering into solid components, mainly composed of organic grinding materials, and liquid components, mainly composed of water. The separated solid components are recovered in a storage tank for reuse as grinding material and, after repeated use, subjected to burning treatment. The water, in turn, is recovered into a storage tank and, after repeated use, purified by passing through an ion-exchange-resin-packed column and decontaminated before discharge. (Horiuchi, T.)
A method for measuring element fluxes in an undisturbed soil: nitrogen and carbon from earthworms
International Nuclear Information System (INIS)
Data on chemical cycles, such as the nitrogen or carbon cycles, are extrapolated to fields or ecosystems without the possibility of checking conclusions, i.e. from scientific knowledge alone ('para-ecology'). A new method is described, based on the natural introduction of an earthworm compartment into an undisturbed soil, with earthworms labelled both by isotopes (15N, 14C) and by staining. This method allows us to measure fluxes of chemicals. The first results, gathered while refining the method under partly artificial conditions, are cross-checked against other data obtained by direct observation in the field. The measured flux (2.2 mg N per g fresh mass (empty gut) per day at 15 °C) is far larger than para-ecological estimates; animal metabolism directly plays an important role in the nitrogen and carbon cycles. (author)
The numerical solution of differential-algebraic systems by Runge-Kutta methods
Hairer, Ernst; Lubich, Christian
1989-01-01
The term differential-algebraic equation was coined to comprise differential equations with constraints (differential equations on manifolds) and singular implicit differential equations. Such problems arise in a variety of applications, e.g. constrained mechanical systems, fluid dynamics, chemical reaction kinetics, simulation of electrical networks, and control engineering. From a more theoretical viewpoint, the study of differential-algebraic problems gives insight into the behaviour of numerical methods for stiff ordinary differential equations. These lecture notes provide a self-contained and comprehensive treatment of the numerical solution of differential-algebraic systems using Runge-Kutta methods, and also extrapolation methods. Readers are expected to have a background in the numerical treatment of ordinary differential equations. The subject is treated in its various aspects ranging from the theory through the analysis to implementation and applications.
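A tiny illustration of the subject matter: an implicit Runge-Kutta step (implicit Euler, the simplest stiffly accurate RK method) applied to a semi-explicit index-1 DAE. The particular system y' = -y + z with constraint 0 = y + z - 2 is our toy choice, picked so the implicit stage equation solves in closed form:

```python
import math

# Implicit Euler on the index-1 DAE  y' = -y + z,  0 = y + z - 2.
# Eliminating z via the constraint gives the exact dynamics y' = 2 - 2y.
def implicit_euler_dae(y0, h, steps):
    y = y0
    for _ in range(steps):
        # Implicit stage: y_new = y + h*(-y_new + z_new), y_new + z_new = 2.
        # Substituting z_new = 2 - y_new gives y_new*(1 + 2h) = y + 2h.
        y = (y + 2.0 * h) / (1.0 + 2.0 * h)
    return y

y_num = implicit_euler_dae(0.0, 0.01, 100)   # integrate to t = 1
y_exact = 1.0 - math.exp(-2.0)               # exact solution y(1) of y' = 2 - 2y
print(y_num, y_exact)
```

For stiff and differential-algebraic problems, implicitness is essential: an explicit method cannot even evaluate the algebraic constraint consistently, which is one reason the lecture notes focus on implicit Runge-Kutta and extrapolation methods.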
Development of methods for evaluating operation characteristics of heat resisting materials
International Nuclear Information System (INIS)
Methods of estimating long-term strength, creep resistance and lifetime of refractory materials designed for nuclear power plants are reviewed. The present methods are capable of predicting long-term strength for 10^5 hours of service with an accuracy of 5%. Long-term plasticity estimates are based upon the similarity between curves of the plasticity logarithm versus the temperature-time parameter at different temperatures. Calculations thus performed indicate that, to eliminate the danger of brittle failure, the metal plasticity must be not less than 1-2%. Creep resistance is determined by extrapolation of creep curves. In order to determine the behaviour of metals under non-stationary conditions, equations are suggested describing fatigue curves under soft and rigid loading, with creep effects taken into account. The methods of evaluating parameters of fatigue and lifetime curves have been checked in about 1000 experiments, each 40-140 thousand hours long, involving 40 different materials, chromium-nickel and chromium-molybdenum steels in particular
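A "temperature-time parameter" of the kind used above is commonly taken as the Larson-Miller parameter, LMP = T (C + log10 t_r); that specific choice, and the constant C = 20 and temperatures below, are our illustrative assumptions rather than the reviewed methods' exact form:

```python
import math

# Larson-Miller sketch: equal LMP implies equal creep damage, so a short
# test at high temperature extrapolates to a long life at service temperature.
def larson_miller(T_kelvin, rupture_hours, C=20.0):
    return T_kelvin * (C + math.log10(rupture_hours))

def rupture_time(T_kelvin, lmp, C=20.0):
    return 10.0 ** (lmp / T_kelvin - C)

# Assumed test point: rupture after 10^4 h at 650 °C (923 K)
lmp = larson_miller(923.0, 1.0e4)
print(f"extrapolated life at 600 C: {rupture_time(873.0, lmp):.3g} h")
```

The inversion makes the time-temperature trade-off explicit: lowering the temperature by 50 K stretches the predicted life by more than an order of magnitude, which is why such parameters allow 10^5-hour predictions from much shorter tests.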
A comparison between the fission matrix method, the diffusion model and the transport model
Energy Technology Data Exchange (ETDEWEB)
Dehaye, B.; Hugot, F. X.; Diop, C. M. [Commissariat a l' Energie Atomique et aux Energies Alternatives, Direction de l' Energie Nucleaire, Departement de Modelisation des Systemes et Structures, CEA DEN/DM2S, PC 57, F-91191 Gif-sur-Yvette cedex (France)
2013-07-01
The fission matrix method may be used to solve the critical eigenvalue problem in a Monte Carlo simulation. This method gives us access to the different eigenvalues and eigenvectors of the transport or fission operator. We propose to compare the results obtained via the fission matrix method with those of the diffusion model, and an approximated transport model. To do so, we choose to analyse the mono-kinetic and continuous energy cases for a Godiva-inspired critical sphere. The first five eigenvalues are computed with TRIPOLI-4® and compared to the theoretical ones. An extension of the notion of the extrapolation distance is proposed for the modes other than the fundamental one. (authors)
Energy Technology Data Exchange (ETDEWEB)
Solano, R.; Schirra, M.; Rivas, M. de la; Barroso, S.; Seith, B.
1982-07-01
The austenitic stainless steel X6crni1811 (Din 1.4948) used as a structure material for the German Fast Breeder Reactor SNR 300 was creep tested in a temperature range of 550-650 degree centigree material condition as well as welded material condition. The main point of this program (Extrapolation-Program) lies in the knowledge of the creep-rupture-strength and creep-behaviour up to 3 x 10{sup 4} hours higher temperatures in order to extrapolated up to {>=}10{sup 5} hours for operating temperatures. In order to study the stress dependency of the minimum creep rate additional tests were carried out of 550 degree centigree - 750 degree centigree. The present report describes the state in the running program with test-times of 23.000 hours and results from tests up to 55.000 hours belonging to other parallel programs are taken into account. Besides the creep-rupture behaviour it is also made a study of ductility between 550 and 750 degree centigree. Extensive metallographic examinations have been made to study the fracture behaviour and changes in structure. (Author)
An investigation of new methods for estimating parameter sensitivities
Beltracchi, Todd J.; Gabriele, Gary A.
1988-01-01
Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. The current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates for the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFGS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.
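A minimal numerical sketch of the general idea behind differencing formulas for parameter sensitivities: re-solve the optimization problem at perturbed parameter values and take a central difference. This is not the RQP-based algorithm itself; the toy problem min_x (x - p)^2 + 0.1 x^4 and all constants are assumptions for illustration.

```python
def solve(p):
    """Solve the toy unconstrained problem min_x (x - p)**2 + 0.1*x**4
    by Newton iterations on the stationarity condition f'(x) = 0."""
    x = p
    for _ in range(50):
        grad = 2.0 * (x - p) + 0.4 * x ** 3
        hess = 2.0 + 1.2 * x ** 2
        x -= grad / hess
    return x, (x - p) ** 2 + 0.1 * x ** 4

# Central-difference estimate of the sensitivity of the optimum to p:
eps, p0 = 1e-5, 1.0
(x_minus, f_minus), (x_plus, f_plus) = solve(p0 - eps), solve(p0 + eps)
dx_dp = (x_plus - x_minus) / (2.0 * eps)   # design-variable sensitivity
df_dp = (f_plus - f_minus) / (2.0 * eps)   # objective sensitivity
```

For this problem, differentiating the stationarity condition gives dx/dp = 2/(2 + 1.2 x*^2) ≈ 0.69 at p = 1, and by the envelope theorem df/dp = -2(x* - p) ≈ 0.26; the differences reproduce both.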
Structural Reliability of Wind Turbine Blades : Design Methods and Evaluation
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimirov
2013-01-01
In the past decade the use of wind energy has expanded significantly, transforming a niche market into a practically mainstream energy generation industry. With the advance of turbine technology the search for more efficient solutions has led to increased focus on probabilistic modelling and design. Reliability-based analysis methods have the potential of being a valuable tool which can improve the state of knowledge by explaining the uncertainties, and form the probabilistic basis for calibration of deterministic design tools. The present thesis focuses on reliability-based design of wind turbine blades. The main purpose is to draw a clear picture of how reliability-based design of wind turbines can be done in practice. The objectives of the thesis are to create methodologies for efficient reliability assessment of composite materials and composite wind turbine blades, and to map the uncertainties in the processes, materials and external conditions that have an effect on the health of a composite structure. The study considers all stages in a reliability analysis, from defining models of structural components to obtaining the reliability index and calibration of partial safety factors. In a detailed demonstration of the process of estimating the reliability of a wind turbine blade and blade components, a number of probabilistic load and strength models are formulated, and the following scientific and practical questions are answered: a) What material, load and uncertainty models need to be used; b) How can different failure modes be taken into account; c) What reliability methods are most suitable for the particular task; d) Are there any factors specific to wind turbines, such as materials and operating conditions, that need to be taken into account; e) Are there ways for improvement by developing new models and standards or carrying out tests. The following aspects are covered in detail:
- The probabilistic aspects of ultimate strength of composite laminates are addressed. Laminated plates are considered as a general structural reliability system where each layer in a laminate is a separate system component. Methods for solving the system reliability are discussed in an example problem.
- Probabilistic models for fatigue life of laminates and sandwich core are developed and calibrated against measurement data. A modified, nonlinear S-N relationship is formulated where the static strength of the material is included as a parameter. A Bayesian inference model predicting the fatigue resistance of face laminates based on the static and fatigue strength of individual lamina is developed. A series of tests of the fatigue life of balsa wood core material are carried out, and a probabilistic model for the fatigue strength of balsa core subjected to transverse shear loading is calibrated to the test data.
- A review study evaluates and compares several widely used statistical extrapolation methods for their capability of modelling the short-term statistical distribution of blade loads and tip deflection. The best performing methods are selected, and several improvements are suggested, including a procedure for automatic determination of the tail threshold level, which allows for efficient automated use of peaks-over-threshold methods.
- The problem of obtaining the long-term statistical distribution of load extremes is discussed by comparing the method of integrating extrapolated short-term statistical distributions against extrapolation of data directly sampled from the long-term distribution. The comparison is based on the long-term distribution of wind speed, turbulence, and wind shear, where a model of the wind shear distribution is specifically developed for the purpose.
- Uncertainties in load and material modelling are considered. A quantitative assessment of the influence of a number of uncertainties is done based on modelled and measured data.
- Example analyses demonstrate the process of estimating the reliability against several modes of failure in two different structures. This includes reliability against blade-to
Directory of Open Access Journals (Sweden)
Cătălin LUPU
2009-06-01
Full Text Available This article presents applications of the “divide et impera” method using object-oriented programming in C#. The main advantage of using “divide et impera” is that it allows the software to reduce the complexity of the problem, since it is decomposed into simpler sub-problems and the data are divided into smaller groups (e.g. the QuickSort sub-algorithm). Object-oriented programming means programs with new types that integrate both data and the methods associated with the creation, processing and destruction of such data. Advantages are gained through programming with abstraction (the program is no longer a succession of processing steps, but a set of objects that come to life, have different properties, are capable of specific actions and interact in the program). Techniques of instantiation, derivation and polymorphism of object types are also discussed.
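As a language-neutral sketch of the divide-et-impera idea behind QuickSort (the article's own code is in C#; Python is used here for brevity):

```python
def quicksort(xs):
    """Divide et impera: split around a pivot, solve the smaller
    sub-problems recursively, then combine the partial results."""
    if len(xs) <= 1:
        return xs                      # trivial sub-problem: already sorted
    pivot, rest = xs[0], xs[1:]
    smaller = [x for x in rest if x < pivot]
    larger = [x for x in rest if x >= pivot]
    return quicksort(smaller) + [pivot] + quicksort(larger)
```

Each recursive call works on a strictly smaller group of data, which is exactly the complexity-reduction benefit the article attributes to the method.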
International Nuclear Information System (INIS)
This little volume is one of an extended series of basic textbooks on analytical chemistry produced by the Analytical Chemistry by Open Learning project in the UK. Prefatory sections explain its mission and how to use the Open Learning format. Seventeen specific sections organized into five chapters begin with a general discussion of nuclear properties, types, and laws of nuclear decay, and proceed to specific discussions of three published papers (reproduced in their entirety) giving examples of the radiochemical methods discussed in the previous chapters. Each section begins with an overview, contains one or more practical problems (called self-assessment questions or SAQs), and concludes with a summary and a list of objectives for the student. Following the main body are answers to the SAQs, and several tables of physical constants, SI prefixes, etc. A periodic table graces the inside back cover
Energy Technology Data Exchange (ETDEWEB)
Patrick Gonzalez; Antonio Lara; Jorge Gayoso; Eduardo Neira; Patricio Romero; Leonardo Sotomayor
2005-07-14
Deforestation of temperate rainforests in Chile has decreased the provision of ecosystem services, including watershed protection, biodiversity conservation, and carbon sequestration. Forest conservation can restore those ecosystem services. Greenhouse gas policies that offer financing for the carbon emissions avoided by preventing deforestation require a projection of future baseline carbon emissions for an area if no forest conservation occurs. For a proposed 570 km{sup 2} conservation area in temperate rainforest around the rural community of Curinanco, Chile, we compared three methods to project future baseline carbon emissions: extrapolation from Landsat observations, Geomod, and Forest Restoration Carbon Analysis (FRCA). Analyses of forest inventory and Landsat remote sensing data show 1986-1999 net deforestation of 1900 ha in the analysis area, proceeding at a rate of 0.0003 y{sup -1}. The gross rate of loss of closed natural forest was 0.042 y{sup -1}. In the period 1986-1999, closed natural forest decreased from 20,000 ha to 11,000 ha, with timber companies clearing natural forest to establish plantations of non-native species. Analyses of previous field measurements of species-specific forest biomass, tree allometry, and the carbon content of vegetation show that the dominant native forest type, broadleaf evergreen (bosque siempreverde), contains 370 {+-} 170 t ha{sup -1} carbon, compared to the carbon density of non-native Pinus radiata plantations of 240 {+-} 60 t ha{sup -1}. The 1986-1999 conversion of closed broadleaf evergreen forest to open broadleaf evergreen forest, Pinus radiata plantations, shrublands, grasslands, urban areas, and bare ground decreased the carbon density from 370 {+-} 170 t ha{sup -1} carbon to an average of 100 t ha{sup -1} (maximum 160 t ha{sup -1}, minimum 50 t ha{sup -1}). Consequently, the conversion released 1.1 million t carbon. 
These analyses of forest inventory and Landsat remote sensing data provided the data to evaluate the three methods to project future baseline carbon emissions. Extrapolation from Landsat change detection uses the observed rate of change to estimate change in the near future. Geomod is a software program that models the geographic distribution of change using a defined rate of change. FRCA is an integrated spatial analysis of forest inventory, biodiversity, and remote sensing that produces estimates of forest biodiversity and forest carbon density, spatial data layers of future probabilities of reforestation and deforestation, and a projection of future baseline forest carbon sequestration and emissions for an ecologically-defined area of analysis. For the period 1999-2012, extrapolation from Landsat change detection estimated a loss of 5000 ha and 520,000 t carbon from closed natural forest; Geomod modeled a loss of 2500 ha and 250,000 t; FRCA projected a loss of 4700 {+-} 100 ha and 480,000 t (maximum 760,000 t, minimum 220,000 t). Concerning labor time, extrapolation from Landsat required 90 actual days or 120 days normalized to Bachelor degree level wages; Geomod required 240 actual days or 310 normalized days; FRCA required 110 actual days or 170 normalized days. Users experienced difficulties with an MS-DOS version of Geomod before turning to the Idrisi version. For organizations with limited time and financing, extrapolation from Landsat change provides a cost-effective method. Organizations with more time and financing could use FRCA, the only method that calculates the deforestation rate as a dependent variable rather than assuming a deforestation rate as an independent variable. 
This research indicates that best practices for the projection of baseline carbon emissions include integration of forest inventory and remote sensing tasks from the beginning of the analysis, definition of an analysis area using ecological characteristics, use of standard and widely used geographic information systems (GIS) software applications, and the use of species-specific allometric equations and wood densities developed for local species.
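The Landsat-extrapolation baseline can be sketched with the figures reported above: applying the observed gross loss rate of closed natural forest (0.042 y^-1) as a constant fractional rate over 1999-2012. The exponential-decline form is an assumption for illustration; the study's own extrapolation procedure may differ in detail.

```python
import math

def project_area(a0_ha, rate_per_y, years):
    """Constant fractional loss: A(t) = A0 * exp(-r * t)."""
    return a0_ha * math.exp(-rate_per_y * years)

a_1999 = 11000.0                      # ha of closed natural forest in 1999
a_2012 = project_area(a_1999, 0.042, 13)
loss_ha = a_1999 - a_2012             # projected 1999-2012 loss
```

This simple projection gives a loss on the order of 4,600 ha, the same magnitude as the ~5,000 ha Landsat extrapolation reported above.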
International Nuclear Information System (INIS)
Two methods for determining the diffusion parameters of thermal neutrons for non-moderator and non-multiplicator media have been developed. The first, a pulsed method, is based on measuring thermal neutron relaxation coefficients in a moderator, with and without the medium of interest playing the role of reflector. For the interpretation of the experimental results using diffusion theory, a corrective factor which takes into account the neutron cooling by diffusion has been introduced. Its dependence on the empirically obtained relaxation coefficients is in good agreement with the calculations made in the P3L2 approximation. The difference between the linear extrapolation lengths of the moderator and the reflector has been taken into account by developing the scalar fluxes in Bessel function series which automatically satisfy the boundary conditions at the extrapolated surfaces of the two media. The results obtained for iron are in good agreement with those in the literature. The second method is time independent, based on the interpretation of 'flux albedo' measurements (a concept introduced by Amaldi and Fermi) in the P3 approximation of one-group transport theory. The independent sources are introduced in the Marshak boundary conditions. An angular albedo matrix has been used to deal with multiple reflections and to take into account the distortion of the current vector when entering a medium after being reflected by it. The results obtained by this method differ slightly from those given in the literature. The analysis of the possible sources of this discrepancy, particularly the radial distribution of flux in cylindrical geometry and the flux depression at the medium-black body interface, has shown that the origin of the discrepancy is the neutron heating by diffusion. 47 figs., 20 tabs., 39 refs. (author)
International Nuclear Information System (INIS)
Purpose: To efficiently dissolve radioactive iron oxide films deposited and accumulated on the inside of pipeways, equipment, etc. Method: Pipeways of a plant from which iron oxide films are to be removed are connected to a system for recycling decontaminating liquids. A raw water tank, also serving for deaeration, a liquid delivery pump and a hydrogen gas injection heater are disposed in the system. A heater, a degassing gas bubbling pipe and filters for catching platinum-depositing particles are disposed in the raw water tank. Upon decontamination with this constitution, decontaminating liquids mainly composed of a complexing agent are charged into the raw water tank and heated, while dissolved oxygen is removed by using an inert gas. The pipeways are decontaminated by mixing the decontaminating liquids with the platinum-depositing particles and further with gaseous hydrogen, and the liquids are returned to the tank. The platinum-depositing particles are recycled during decontamination without recovery, while being adjusted in the tank, and then recovered through the filters upon completion of the decontamination. In this way, the iron oxide films can be dissolved and decontaminated. (Horiuchi, T.)
Sharifalhoseini, Zahra; Entezari, Mohammad H
2015-10-01
The pure phase of ZnO nanoparticles (NPs) as anticorrosive pigments was synthesized by the sonication method. The surfaces of the sono-synthesized nanoparticles were covered with a protective silica layer. The durability of the coated and uncoated ZnO NPs in the electrolytic Ni bath was determined by flame atomic absorption spectrometry. In the present research the multicomponent Ni bath, a complex medium, was replaced by a simple one: the nickel-plating bath used was composed only of Ni salts (as the sources of the Ni(2+) ions), to better clarify the influence of the presence of the ZnO@SiO2 core-shell NPs on the stability of the medium. The effect of ZnO@SiO2 NP incorporation on the morphology of the solid electroformed Ni deposit was studied by scanning electron microscopy (SEM). Furthermore, the influence of the co-deposited particles in the Ni matrix on the corrosion resistance of the Ni coating was evaluated by electrochemical methods including linear polarization resistance (LPR) and Tafel extrapolation. PMID:26057943
International Nuclear Information System (INIS)
Effect of different concentrations, 40-200 ppm, of various polyester aliphatic amine surfactants on inhibition of the corrosion of carbon steel in the formation water (deep well water) was investigated. These surfactants exhibit different levels of inhibition particularly at high concentration (200 ppm). Inhibition efficiencies in the range 86-96% were determined by weight loss method. Comparable results were obtained from electrochemical measurements using Tafel extrapolation and polarisation resistance methods. It was shown that all the investigated surfactants act primarily as anodic inhibitors; however, they also affect the rate and mechanism of the cathodic reaction. These compounds function via adsorption on reactive sites on the corroding surface reducing the corrosion rate of the metal. It was revealed that the adsorption of these surfactants obey Langmuir adsorption isotherm. The inhibition effectiveness increases with the length of the aliphatic hydrocarbon chain, being a maximum in the presence of surfactant IV (≈96% efficiency). The corrosion inhibition feature of this compound is attributed to the presence of a long hydrocarbon chain that ensures large surface coverage as well as the presence of multiple active centers for adsorption. Scanning electron microscopy, SEM, has been applied to identify the surface morphology of carbon steel alloy in the absence and presence of the inhibitor molecules.
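Tafel extrapolation, used above alongside polarisation resistance, can be sketched on synthetic data. The Tafel slope, corrosion current and overpotential range below are assumed values, and the data are generated from the ideal anodic Tafel law, so the fit recovers the inputs exactly.

```python
import numpy as np

icorr_true = 1.0e-5                  # A/cm^2, assumed corrosion current density
b_a = 0.060                          # V/decade, assumed anodic Tafel slope
eta = np.linspace(0.06, 0.24, 40)    # overpotentials in the anodic Tafel region

i_anodic = icorr_true * 10.0 ** (eta / b_a)   # ideal Tafel behaviour

# Fit log10(i) vs eta and extrapolate back to eta = 0 (the corrosion potential):
slope, intercept = np.polyfit(eta, np.log10(i_anodic), 1)
icorr_est = 10.0 ** intercept        # recovered corrosion current density
b_a_est = 1.0 / slope                # recovered Tafel slope
```

An inhibition efficiency then follows as IE = (1 - i_corr,inhibited / i_corr,blank) x 100, the usual electrochemical definition behind efficiency figures like the 86-96% above.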
Development of MCAERO wing design panel method with interactive graphics module
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
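The last two steps of the analysis chain above (numerical differentiation of the potential to obtain velocity, then Bernoulli's equation for pressure) can be sketched as follows; the surface potential distribution is a made-up smooth function, not MCAERO output.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 201)       # surface arc length (normalized)
v_inf = 1.0                          # freestream speed
phi = v_inf * (s + 0.05 * np.sin(2.0 * np.pi * s))   # assumed surface potential

v = np.gradient(phi, s)              # tangential velocity from d(phi)/ds
cp = 1.0 - (v / v_inf) ** 2          # incompressible Bernoulli: pressure coefficient
```

The exact velocity here is 1 + 0.1*pi*cos(2*pi*s), so cp ranges between roughly -0.73 and +0.53; the finite-difference values match to second order in the grid spacing.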
Mathematical Method for Predicting Nickel Deposit Based on Data from Drilling Points
Directory of Open Access Journals (Sweden)
Edi Cahyono
2011-01-01
Full Text Available In this article we discuss several methods for predicting nickel ore content inside the soil under a given area/region. The prediction is the main objective of the exploration activity and is, from an economic point of view, very important for conducting the exploitation activity. The prediction methods are based on the data obtained from the drilling activity at several ‘points’. The data yield information on the nickel density at those points. Nickel density over the region is approximated (with an approximate function) by applying interpolation and/or extrapolation based on the data from those points. The nickel content is predicted by applying the integral of the approximate function over the given region.
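A minimal version of the interpolate-then-integrate idea can be sketched with SciPy; the drill-point coordinates and 'densities' below are invented for illustration.

```python
import numpy as np
from scipy.interpolate import griddata

# Drill points (x, y in metres) and measured nickel column density (t/m^2):
points = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50]], float)
density = np.array([1.0, 1.2, 0.8, 1.1, 1.5])

# Interpolate onto a grid covering the region, then integrate numerically:
xs = ys = np.linspace(0.0, 100.0, 51)
gx, gy = np.meshgrid(xs, ys)
grid_density = griddata(points, density, (gx, gy), method="linear")
cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
total_tonnage = np.nansum(grid_density) * cell_area   # predicted nickel, t
```

Linear interpolation over the triangulation of these five points integrates to about 11,800 t over the 1 ha region; the Riemann sum above reproduces that to within a few percent.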
Experimental verification and comparison of mode shape-based damage detection methods
Energy Technology Data Exchange (ETDEWEB)
Radzienski, M; Krawczuk, M, E-mail: Maciej.Radzienski@gmail.co [Technical University of Gdansk, Faculty of Electrical and Control Engineering, Narutowicza 11/12, 80-952 Gdansk (Poland)
2009-08-01
This paper presents experimental verification and comparison of damage detection methods based on changes in mode shapes such as: mode shape curvature (MSC), modal assurance criterion (MAC), strain energy (SE), modified Laplacian operator (MLO), generalized fractal dimension (GFD) and Wavelet Transform (WT). The object of the investigation is to determine benefits and drawbacks of the aforementioned methods and to develop data preprocessing algorithms for increasing damage assessment effectiveness by using signal processing techniques such as interpolation and extrapolation of measured points. Noise reduction algorithms based on moving average, median filter, and wavelet decomposition are also tested. The experiments were performed on an aluminium plate with riveted stiffeners. Damage was introduced in a form of damaged rivets and a saw cut in the angle bar. Measurements were made using a non-contact Scanning Laser Doppler Vibrometer (SLDV) at 101 points in two rows, distributed over the structure height and positioned along two reinforcing ribs.
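The mode shape curvature (MSC) index from the list above can be sketched in a few lines; the analytical mode shape and the small local perturbation standing in for damage are assumptions, not the plate data from the experiment.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)            # normalized measurement line
healthy = np.sin(np.pi * x)               # first bending mode (assumed)
damaged = healthy.copy()
damaged[48:53] += 0.002                   # local stiffness-loss surrogate near mid-span

def curvature(mode, dx):
    """Mode shape curvature: second derivative by central differences."""
    return np.gradient(np.gradient(mode, dx), dx)

dx = x[1] - x[0]
msc = np.abs(curvature(damaged, dx) - curvature(healthy, dx))
damage_location = x[np.argmax(msc)]       # peak of the MSC damage index
```

The curvature difference amplifies a barely visible displacement change into a sharp peak at the perturbed zone, which is why curvature-based indices outperform raw mode shape differences.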
Use of the gold markers method to predict the mechanisms of iron atmospheric corrosion
International Nuclear Information System (INIS)
Highlights: • Corrosion mechanisms investigated by the gold markers method coupled with microRaman imaging. • Experimental highlighting of an important internal development of the rust layer. • Microstructural evolution of the corrosion product layer during atmospheric treatment. • Comparison with long-term corrosion layer microstructure. - Abstract: Iron corrosion under atmospheric conditions has been investigated by using the gold markers method. The corrosion experiments were performed in a climatic chamber with an accelerated treatment. The gold markers localization, carried out by scanning electron microscopy coupled with energy dispersive X-ray spectroscopy, revealed that the rust layer growth was essentially due to an internal development. Moreover, microRaman mappings allowed prediction of the mechanism of rust layer evolution during the ageing treatment. Finally these results were compared to samples corroded for several hundred years in order to extrapolate our observations to long-term corrosion.
Vessel Segmentation and Blood Flow Simulation Using Level-Sets and Embedded Boundary Methods
Energy Technology Data Exchange (ETDEWEB)
Deschamps, T; Schwartz, P; Trebotich, D; Colella, P; Saloner, D; Malladi, R
2004-12-09
In this article we address the problem of blood flow simulation in realistic vascular objects. The anatomical surfaces are extracted by means of Level-Sets methods that accurately model the complex and varying surfaces of pathological objects such as aneurysms and stenoses. The surfaces obtained are defined at the sub-pixel level where they intersect the Cartesian grid of the image domain. It is therefore straightforward to construct embedded boundary representations of these objects on the same grid, for which recent work has enabled discretization of the Navier-Stokes equations for incompressible fluids. While most classical techniques require construction of a structured mesh that approximates the surface in order to extrapolate a 3D finite-element gridding of the whole volume, our method directly simulates the blood-flow inside the extracted surface without losing any complicated details and without building additional grids.
Directory of Open Access Journals (Sweden)
A.V. Gusarov
2012-04-01
Full Text Available This paper uses the results of river suspended sediment flux (SSF analysis to propose a new hydrological method for quantitatively estimating the river bed and drainage basin (sheet erosion, rill and gully erosion components of total erosion intensity in river basins. The suggested method is based on the establishment of the functional power connection between mean monthly water discharges (WD, Q i and suspended sediment fluxes (r i calculated for the low-water-discharge phases of a river?s hydrological regime in various (on mean annual water discharges years: r i = a×Q i (where a, ì are some empirical coefficients, and further extrapolation of this connection for other phases of the hydrological regime. Thus, the extrapolation allows us to calculate (in a long-term annual SSF the proportions of sediments originating in river beds and drainage basins. The proposed method is tested using a long-term (not less than 10 years series of observations for WD and SSF of 124 chiefly small and midsize rivers of the East-European plain, the Urals, the Eastern Carpathians, the Ciscaucasia and the Caucasus, and Central Asian mountains, containing data on the mean monthly values of WD and SSF. The paper also compares the method with other methods for estimating the components of erosion intensity and SSF..
Point dose verification for intensity modulated radiosurgery using Clarkson's method
International Nuclear Information System (INIS)
In clinical radiation physics chart checking, the dose calculation results generated by computer treatment planning software are usually verified by an independent computerized monitor unit calculation routine, or by 'hand calculation' using percent depth dose (PDD), tissue phantom ratio (TPR), scatter factors, and the machine calibration factors. For intensity-modulated radiosurgery (IMRS) or intensity-modulated radiation therapy (IMRT), 'hand calculation' is not feasible due to the sophisticated multileaf collimator (MLC) segments created for intensity-modulated dose delivery. Therefore, an independent computerized dose calculation routine is needed for fast and reliable dose verification. In this work, a point dose calculation routine for IMRS/IMRT plan verification is developed by directly applying Clarkson's method. The method includes preparing a data table by measuring TPRs for circular fields with diameters ranging from 6 to 98 mm, extrapolating the TPR for zero field size (TPR0) from measured data, and generating a scatter phantom ratio (SPR) for each individual circular field. The segmented MLC sequences created by IMRS/IMRT inverse planning are converted into irregular fields for Clarkson's calculation. This method has been tested using 29 IMRS/IMRT cases. The results indicate that it is reliable, fast, and accurate. The average time to calculate one field is about 2 s with a 300 MHz CPU
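A bare-bones version of the Clarkson sector integration can be sketched as follows. The SPR table, TPR0 value and sector radii are invented for illustration; the real routine uses the measured TPR/SPR data described above.

```python
import math

# Assumed tabulated scatter-phantom ratio for circular fields (radius mm -> SPR):
spr_table = {0: 0.0, 10: 0.02, 20: 0.035, 30: 0.045, 40: 0.052, 49: 0.057}

def spr(radius_mm):
    """Linear interpolation in the SPR table (clamped to the table range)."""
    rs = sorted(spr_table)
    radius_mm = min(max(radius_mm, rs[0]), rs[-1])
    for lo, hi in zip(rs, rs[1:]):
        if lo <= radius_mm <= hi:
            t = (radius_mm - lo) / (hi - lo)
            return spr_table[lo] + t * (spr_table[hi] - spr_table[lo])

def clarkson_scatter(radii_mm):
    """Average SPR over equally spaced sectors of an irregular field."""
    return sum(spr(r) for r in radii_mm) / len(radii_mm)

tpr0 = 0.60   # assumed zero-field TPR at the calculation depth
# 36 sector radii (10-degree steps) of an MLC-shaped irregular field, in mm:
radii = [20.0 + 5.0 * math.sin(math.radians(10.0 * k)) for k in range(36)]
tpr_irregular = tpr0 + clarkson_scatter(radii)
```

The irregular-field TPR is thus the zero-field primary component plus the sector-averaged scatter, which is the essence of Clarkson's decomposition.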
A method to obtain new cross-sections transport equivalent
International Nuclear Information System (INIS)
We present a method that allows the calculation, by means of a variational principle, of equivalent cross-sections in order to take into account transport and mesh size effects on reactivity variation calculations. The method validation has been made in two- and three-dimensional geometries. The reactivity variations calculated in three-dimensional hexagonal geometry with seven points per subassembly, using two sets of equivalent cross-sections for control rods, are in very good agreement with those of a transport calculation extrapolated to zero mesh size. The difficulty encountered in obtaining a good flux distribution has led to the use of a single set of equivalent cross-sections calculated by starting from an appropriate R-Z model that also takes into account the axial transport effects for the control rod followers. The global results in reactivity variations are still satisfactory, with a good performance for the flux distribution. The main interest of the proposed method is the possibility to simulate a full 3D transport calculation with fine mesh size using a 3D diffusion code with a larger mesh size. The results obtained should be affected by uncertainties which do not exceed ± 4% for a large LMFBR control rod worth and for very different rod configurations. This uncertainty is by far smaller than the experimental uncertainties. (author). 5 refs, 8 figs, 9 tabs
Comparison of deterministic and Monte Carlo methods in shielding design.
Oliveira, A D; Oliveira, C
2005-01-01
In shielding calculation, deterministic methods have some advantages and also some disadvantages relative to other kind of codes, such as Monte Carlo. The main advantage is the short computer time needed to find solutions while the disadvantages are related to the often-used build-up factor that is extrapolated from high to low energies or with unknown geometrical conditions, which can lead to significant errors in shielding results. The aim of this work is to investigate how good are some deterministic methods to calculating low-energy shielding, using attenuation coefficients and build-up factor corrections. Commercial software MicroShield 5.05 has been used as the deterministic code while MCNP has been used as the Monte Carlo code. Point and cylindrical sources with slab shield have been defined allowing comparison between the capability of both Monte Carlo and deterministic methods in a day-by-day shielding calculation using sensitivity analysis of significant parameters, such as energy and geometrical conditions. PMID:16381723
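The point-source-with-slab setup described above reduces, in the deterministic picture, to exponential attenuation corrected by a build-up factor. The numbers below (source strength, attenuation coefficient, build-up factor) are illustrative assumptions, not MicroShield or MCNP values.

```python
import math

def uncollided_flux(s, mu, r):
    """Uncollided point-source flux: S * exp(-mu*r) / (4*pi*r**2)."""
    return s * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

S = 1.0e9     # photons/s, assumed source strength
mu = 0.06     # 1/mm, assumed attenuation coefficient of the slab material
r = 100.0     # mm, source-to-detector distance through the slab

phi_uncollided = uncollided_flux(S, mu, r)
B = 2.5                                   # assumed build-up factor at mu*r = 6
phi_total = B * phi_uncollided            # scattered contribution via build-up
```

Ignoring B, or extrapolating it outside its tabulated energy range as criticized above, changes the deterministic answer by the full factor B, here 2.5x.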
SAR/QSAR methods in public health practice
International Nuclear Information System (INIS)
Methods of (Quantitative) Structure-Activity Relationship ((Q)SAR) modeling play an important and active role in ATSDR programs in support of the Agency mission to protect human populations from exposure to environmental contaminants. They are used for cross-chemical extrapolation to complement the traditional toxicological approach when chemical-specific information is unavailable. SAR and QSAR methods are used to investigate adverse health effects and exposure levels, bioavailability, and pharmacokinetic properties of hazardous chemical compounds. They are applied as a part of an integrated systematic approach in the development of Health Guidance Values (HGVs), such as ATSDR Minimal Risk Levels, which are used to protect populations exposed to toxic chemicals at hazardous waste sites. (Q)SAR analyses are incorporated into ATSDR documents (such as the toxicological profiles and chemical-specific health consultations) to support environmental health assessments, prioritization of environmental chemical hazards, and to improve study design, when filling the priority data needs (PDNs) as mandated by Congress, in instances when experimental information is insufficient. These cases are illustrated by several examples, which explain how ATSDR applies (Q)SAR methods in public health practice.
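The cross-chemical extrapolation idea can be illustrated with a minimal (Q)SAR sketch. The descriptors, endpoint values, and chemicals below are entirely hypothetical and are not ATSDR's actual models or data:

```python
import numpy as np

# Hypothetical training set: rows are chemicals, columns are descriptors
# [logP, MW/100]; the endpoint (e.g. a log-scaled toxicity value) is invented.
descriptors = np.array([
    [1.5, 0.9], [2.1, 1.2], [3.0, 1.8], [0.8, 0.6], [2.6, 1.5]])
activity = np.array([2.0, 2.5, 3.4, 1.6, 3.0])

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(len(descriptors)), descriptors])
coef, *_ = np.linalg.lstsq(X, activity, rcond=None)

# Cross-chemical extrapolation: predict the endpoint for an untested chemical.
new_chem = np.array([1.0, 2.0, 1.1])   # [intercept, logP, MW/100]
predicted = new_chem @ coef
```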
Simplified method for measuring sex-hormone binding globulin
International Nuclear Information System (INIS)
We describe a simple, rapid method for measurement of sex-hormone binding globulin. Serial dilutions of pregnancy serum are prepared in serum from males that has been pre-treated by heating to 60 degrees C for 1 h to destroy endogenous binding globulin; the globulin content of these dilutions is then determined by an established technique to yield a set of "standards." In the assay itself, a fixed amount of [3H]-labeled and unlabeled dihydrotestosterone is incubated with standard or unknown, and the bound fraction is precipitated with saturated ammonium sulfate. A plot of percent of steroid bound vs standard dilution yields a sigmoid curve, from which the results for unknowns can be read by simple extrapolation. Within-assay CVs for pools of serum from men, women, and women in late pregnancy were 6.56, 9.59, and 8.4%, respectively. Between-assay CVs for the same pools were 8.05, 9.5, and 11.5%, respectively. The correlation between results obtained by this method and those of the older technique was 0.95 for samples from non-pregnant subjects and 0.73 for those from pregnant women. Our procedure is simpler and faster than previous methods and accurately measures the differences in the globulin in sera from men, women, and pregnant women. Forty to 50 samples can be assayed in a working day.
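Reading unknowns off the sigmoid standard curve can be sketched as follows; the dilution and percent-bound values are illustrative, not the paper's data:

```python
import numpy as np

# Standard curve: percent of [3H]DHT bound measured for serial dilutions of
# the pregnancy-serum standard (illustrative sigmoid-shaped values).
log_dilution = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
percent_bound = np.array([8.0, 12.0, 25.0, 50.0, 75.0, 88.0, 93.0])

def read_unknown(pb):
    """Invert the standard curve: percent bound -> dilution equivalent.
    percent_bound is monotonically increasing, so np.interp applies."""
    return np.interp(pb, percent_bound, log_dilution)

shbg_equiv = read_unknown(60.0)   # unknown sample bound 60% of the tracer
```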
Rad, Jamal Amani; Parand, Kourosh; Abbasbandy, Saeid
2015-05-01
For the first time in the mathematical finance field, we propose local weak form meshless methods for option pricing; in particular, we select and analyze two such schemes: the local boundary integral equation method (LBIE) based on moving least squares approximation (MLS), and local radial point interpolation (LRPI) based on Wu's compactly supported radial basis functions (WCS-RBFs). LBIE and LRPI are truly meshless methods because a traditional non-overlapping, continuous mesh is required neither for the construction of the shape functions nor for the integration over the local sub-domains. In this work, the American option, which is a free boundary problem, is reduced to a problem with a fixed boundary using a Richardson extrapolation technique. The θ-weighted scheme is then employed for the time derivative. Stability of the methods is analyzed by the matrix method; based on the analysis carried out in the present paper, the methods are unconditionally stable for the implicit Euler (θ = 0) and Crank-Nicolson (θ = 0.5) schemes. It should be noted that the LBIE and LRPI schemes lead to banded, sparse system matrices, so a powerful iterative algorithm, the bi-conjugate gradient stabilized method (BiCGSTAB), is used to solve the resulting systems. Numerical experiments are presented showing that the LBIE and LRPI approaches are extremely accurate and fast.
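The Richardson extrapolation step can be illustrated on a generic example rather than the option-pricing code itself: combining approximations at steps h and h/2 cancels the leading O(h^p) error term via R = (2^p A(h/2) - A(h)) / (2^p - 1).

```python
import math

def richardson(a_h, a_half_h, p=1):
    """Cancel the leading O(h^p) error term of an approximation A(h)."""
    return (2**p * a_half_h - a_h) / (2**p - 1)

def fwd_diff(f, x, h):
    """First-order accurate forward difference, error O(h)."""
    return (f(x + h) - f(x)) / h

# Illustrative example: d/dx exp(x) at x = 0 (exact value 1).
h = 0.1
coarse = fwd_diff(math.exp, 0.0, h)       # O(h) error, ~5% off
fine = fwd_diff(math.exp, 0.0, h / 2)
improved = richardson(coarse, fine, p=1)  # error drops to O(h^2)
```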
Feller, David
2015-07-16
In the explicitly correlated CCSD(T)-F12b coupled cluster method only the singles and doubles component of the energy benefits from inclusion of terms involving the interelectronic distance. Consequently, only that component exhibits accelerated convergence with respect to the 1-particle basis set. The smaller perturbative triples component converges at the same rate as the corresponding piece in standard CCSD(T). With the alternative CCSD(T*)-F12b method the triples correlation energy is scaled up by the ratio of explicitly correlated to standard second-order perturbation theory correlation energies in an attempt to better approximate the basis set limit. An extensive and diverse 212 molecule collection of reference total atomization energies, developed with large basis sets (up to aug-cc-pV9Z in some cases) and standard CCSD(T), was used to calibrate the performance of CCSD(T*). Scaling of the (T) energy led to improved results relative to raw F12b values but only provided a statistical advantage over previously proposed complete basis set extrapolation techniques for the smallest basis sets. With larger sets, scaling (T) produced noticeably poorer results, sometimes by a factor of 2. In agreement with earlier studies, basis set extrapolated CCSD(T)-F12b was found to exhibit a systematic bias toward overestimating reference atomization energies with an error that increases with the magnitude of the valence correlation energy. PMID:25730633
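For comparison, a common complete-basis-set extrapolation of the kind referred to above assumes E(n) = E_CBS + A/n^3 in the cardinal number n. A two-point sketch of that form, with illustrative correlation energies rather than the paper's 212-molecule data:

```python
def cbs_two_point(e_n, e_m, n, m):
    """Two-point CBS extrapolation assuming E(n) = E_CBS + A/n^3,
    solved exactly from energies at cardinal numbers n < m."""
    return (m**3 * e_m - n**3 * e_n) / (m**3 - n**3)

# Illustrative triple-zeta (n=3) and quadruple-zeta (n=4) correlation
# energies in hartree (made-up numbers).
e_cbs = cbs_two_point(-0.27500, -0.28200, 3, 4)
```

The extrapolated value lies below both finite-basis energies, as expected for a correlation energy converging from above in magnitude.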
International Nuclear Information System (INIS)
The present work describes a new methodology for modelling the behaviour of the activity in a 4πβ-γ coincidence system. The detection efficiency for electrons in the proportional counter and for gamma radiation in the NaI(Tl) detector was calculated using the Monte Carlo program MCNP4C. Another Monte Carlo code was developed which follows the path through the disintegration scheme from the initial state of the precursor radionuclide to the ground state of the daughter nucleus. Every step of the disintegration scheme is sorted by random numbers, taking into account the probabilities of all β- branches, electron-capture branches, transition probabilities, and internal conversion coefficients. Once the final state is reached, beta and electron-capture events and gamma transitions are tallied in three spectra: beta, gamma, and coincidence. Variation of the beta efficiency was performed by simulating an energy cut-off or the use of absorbers (Collodion). The radionuclides selected for simulation were 134Cs and 72Ga, which disintegrate by β- transition, 133Ba, which disintegrates by electron capture, and 35S, which is a pure beta emitter. For the latter, the Efficiency Tracing technique was simulated. The extrapolation curves obtained by Monte Carlo were fitted to the experimental points by the least squares method, and the results were compared to the linear extrapolation method. (author)
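The branch-sampling and coincidence-counting logic described above can be sketched with a toy two-branch scheme; the branch probabilities, detector efficiencies, and conversion coefficient below are hypothetical, not the 134Cs/72Ga data:

```python
import random

random.seed(1)
# Toy scheme: two beta branches, each followed by one gamma transition.
branches = [(0.70, 0.605), (0.30, 0.796)]  # (branch probability, gamma E in MeV)
eff_beta, eff_gamma, alpha_ic = 0.95, 0.05, 0.01  # efficiencies, conversion coeff

n_beta = n_gamma = n_coinc = 0
n_events = 100000
for _ in range(n_events):
    r, acc = random.random(), 0.0
    for prob, e_gamma in branches:   # sort the beta branch by random number
        acc += prob
        if r < acc:
            break
    beta_seen = random.random() < eff_beta
    # The gamma is emitted only if the transition is not internally converted.
    gamma_seen = (random.random() > alpha_ic / (1 + alpha_ic)
                  and random.random() < eff_gamma)
    n_beta += beta_seen
    n_gamma += gamma_seen
    n_coinc += beta_seen and gamma_seen

# Classic coincidence-counting estimate: N0 ~ Nb * Ng / Nc.
activity_estimate = n_beta * n_gamma / max(n_coinc, 1)
```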
International Nuclear Information System (INIS)
Tissue-phantom ratios (TPRs) are a common dosimetric quantity used to describe the change in dose with depth in tissue. These can be challenging and time consuming to measure. The conversion of percentage depth dose (PDD) data using standard formulae is widely employed as an alternative method of generating TPRs. However, the applicability of these formulae for small fields has been questioned in the literature. Functional representation has also been proposed for small-field TPR production. This article compares measured TPR data for small 6 MV photon fields against data generated by conversion of PDD using standard formulae, to assess the efficacy of the conversion. By functionally fitting the measured TPR data for square fields greater than 4 cm in length, the TPR curves for smaller fields are generated and compared with measurements. TPRs and PDDs were measured in a water tank for a range of square field sizes. The PDDs were converted to TPRs using standard formulae. TPRs for fields of 4 × 4 cm2 and larger were used to create functional fits. The parameterization coefficients were used to construct extrapolated TPR curves for 1 × 1 cm2, 2 × 2 cm2, and 3 × 3 cm2 fields. The TPR data generated using standard formulae were in excellent agreement with direct TPR measurements. The TPR data for 1 × 1 cm2, 2 × 2 cm2, and 3 × 3 cm2 fields created by extrapolation of the larger-field functional fits gave inaccurate results; the corresponding mean differences for the three fields were 4.0%, 2.0%, and 0.9%. Generation of TPR data using a standard PDD-conversion methodology has been shown to give good agreement with our directly measured data for small fields. However, extrapolation of TPR data using the functional fit to fields of 4 × 4 cm2 or larger resulted in TPR curves that did not compare well with the measured data.
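One simple functional form that could serve such a fit (a generic sketch under an assumed exponential model, not the parameterization actually used in the paper) treats TPR beyond the depth of maximum dose as approximately exponential, TPR(d) ~ a*exp(-b*d):

```python
import numpy as np

# Illustrative TPR values for a 10 x 10 cm2 field (not measured data).
depths = np.array([5.0, 10.0, 15.0, 20.0])          # cm
tpr_10x10 = np.array([0.918, 0.779, 0.657, 0.553])

# A linear fit of ln(TPR) vs depth gives -b (slope) and ln(a) (intercept).
slope, intercept = np.polyfit(depths, np.log(tpr_10x10), 1)
a, b = np.exp(intercept), -slope

# The fitted curve can then be evaluated at unmeasured depths; repeating the
# fit per field size and extrapolating (a, b) vs field size is the idea
# behind generating small-field TPR curves from large-field data.
tpr_at_12cm = a * np.exp(-b * 12.0)
```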
International Nuclear Information System (INIS)
In the present investigation, corrosion of nickel and sulphate ion reduction on platinum were studied in a (Li,Na,K)2SO4 melt, in the presence and absence of V2O5, at 550 deg. C. The corrosion (oxidation) rate of nickel was derived by the Tafel extrapolation method, while cyclic voltammetry was employed to determine the electrochemical behaviour of the sulphate ion. According to the results obtained, the corrosion of nickel increases very rapidly after the addition of V2O5 and becomes more than two orders of magnitude higher in the presence of 3% V2O5 in the melt. Three well-defined cathodic peaks related to vanadate ion reduction were clearly observed in the cyclic voltammogram obtained on the Pt surface in the presence of V2O5 in the melt. The observation of three reduction peaks could be due to the successive reduction of the vanadate ion to its lower oxidation states. From the results, it is concluded that the strong tendency of vanadium to be reduced to its lower oxidation states, consuming the electrons released by sulphate ion oxidation, is mainly responsible for the enhanced corrosion (oxidation) of nickel in the sulphate melt.
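The Tafel extrapolation method used here can be sketched on synthetic data: generate a Butler-Volmer polarization curve with known i_corr, fit the linear branches of log10|i| vs E far from the corrosion potential, and intersect them. All constants below are illustrative, not the paper's nickel measurements:

```python
import numpy as np

# Ground-truth (synthetic) corrosion parameters.
i_corr, e_corr = 1e-5, -0.25   # A/cm^2, V
ba, bc = 0.06, 0.12            # anodic/cathodic Tafel slopes, V/decade

E = np.linspace(e_corr - 0.20, e_corr + 0.20, 401)
eta = E - e_corr
i_net = i_corr * (10**(eta / ba) - 10**(-eta / bc))  # Butler-Volmer (Tafel form)

# Fit each branch well away from E_corr, where log|i| vs E is linear.
anodic = eta > 0.10
cathodic = eta < -0.10
sa, int_a = np.polyfit(E[anodic], np.log10(np.abs(i_net[anodic])), 1)
sc, int_c = np.polyfit(E[cathodic], np.log10(np.abs(i_net[cathodic])), 1)

# Intersection of the two Tafel lines recovers E_corr and i_corr.
e_cross = (int_c - int_a) / (sa - sc)
i_corr_est = 10**(sa * e_cross + int_a)
```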
Energy Technology Data Exchange (ETDEWEB)
Cetin, D.; Doenmez, G. [Faculty of Science, Department of Biology, Ankara University, Tandogan, 06100, Ankara (Turkey); Bilgic, S. [Faculty of Science, Department of Chemistry, Ankara University, Tandogan, 06100, Ankara (Turkey); Doenmez, S. [Faculty of Engineering, Department of Food Engineering, Ankara University, Diskapi, 06110 Ankara (Turkey)
2007-11-15
In this study, the corrosion behavior of low alloy steel in the presence of the anaerobic sulfate-reducing bacterium Desulfotomaculum sp., which was isolated from an oil production well, was investigated. In order to determine corrosion rates and mechanisms, mass loss measurements and electrochemical polarization studies were performed without and with bacteria in the culture medium. Scanning electron microscopic observations and energy dispersive X-ray spectra (EDS) analyses were made on steel coupons. The effect of iron concentration on corrosion behavior was determined by the Tafel extrapolation method. In sterile culture medium, as the FeSO{sub 4} . 7H{sub 2}O concentration increased, corrosion potential (E{sub cor}) values shifted towards more anodic potentials and corrosion current density (I{sub cor}) values increased considerably. After inoculation of sulfate-reducing bacteria (SRB), E{sub cor} shifted towards cathodic values. I{sub cor} values increased with increasing incubation time for 10 and 100 mg/L concentrations of FeSO{sub 4} . 7H{sub 2}O. The results have shown that the corrosion activity changed due to several factors, such as bacterial metabolites, ferrous sulfide, hydrogen sulfide, iron phosphide, and the cathodic depolarization effect. (Abstract Copyright [2007], Wiley Periodicals, Inc.)
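Corrosion current densities such as I{sub cor} are conventionally converted to penetration rates with the Faraday's-law relation standardized in ASTM G102; the current density below is illustrative, not a value from the paper:

```python
def corrosion_rate_mm_per_year(i_corr_uA_cm2, eq_weight, density_g_cm3):
    """ASTM G102 relation: CR [mm/y] = 3.27e-3 * i_corr [uA/cm^2] * EW / rho,
    where EW is the equivalent weight and rho the density in g/cm^3."""
    return 3.27e-3 * i_corr_uA_cm2 * eq_weight / density_g_cm3

# Iron dissolving as Fe2+: EW = 55.85 / 2 = 27.93, rho = 7.87 g/cm^3.
rate = corrosion_rate_mm_per_year(10.0, 27.93, 7.87)  # for 10 uA/cm^2
```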
Energy Technology Data Exchange (ETDEWEB)
Cruz, Maria C.P.; Cavalcanti, Eliane B.; Rambo, Elisabeth S.M.; Araujo, Paulo M.M. [Instituto de Tecnologia e Pesquisa (IPT), Aracaju, SE (Brazil); Santos, Anderson O. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil)
2008-07-01
The effect of the constituent elements of the aqueous oil-production environment on the metallic coatings used in artificial lift equipment (sucker rods of mechanical pumps) was studied. To this end, electrochemical analysis techniques were used, namely the linear polarization method and Tafel line extrapolation, applied to metallic coatings [bronze-aluminum alloy; CrNi 80/20; NiCr 80/20; 95MXC (Cr: 26.5% - 31.5%, B: 3.35% - 4.15%, Mn: 1.1% - 2.2%, Si: 1.1% - 2.1%, Fe balance); and aluminum] on a steel substrate, simulating the coating of sucker rods. The polarization resistance characteristics of the metal were determined when it was exposed to the surrounding aqueous medium of the oil well. The polarization resistance of the coatings applied on the steel was evaluated in order to analyze their corrosion resistance and to verify the possibility of using them as a barrier against the degradation problems in sucker rods. From the calculated corrosion rates, it can be concluded that the aluminum (0.003 mm/year) and NiCr 80/20 (0.179 mm/year) coatings presented the greatest resistance to the corrosive medium. (author)
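The linear polarization measurements mentioned above rest on the Stern-Geary relation i_corr = B / Rp. A sketch with illustrative Tafel slopes and polarization resistance (not the paper's data):

```python
def stern_geary_icorr(rp_ohm_cm2, ba_V, bc_V):
    """Stern-Geary relation: i_corr = B / Rp,
    with B = ba * bc / (2.303 * (ba + bc))."""
    B = (ba_V * bc_V) / (2.303 * (ba_V + bc_V))
    return B / rp_ohm_cm2

# Illustrative values: Rp = 2600 ohm*cm^2, ba = 60 mV/dec, bc = 120 mV/dec.
i_corr = stern_geary_icorr(2600.0, 0.06, 0.12)  # A/cm^2
```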
International Nuclear Information System (INIS)
Hydrogen embrittlement cracking behaviour of a SUS304/Ta/Zr explosive bonded joint during underwater polishing was investigated. Hydrogen embrittlement cracks occurred in the Zr substrate adjacent to the Ta/Zr bond interface during underwater polishing. The open circuit potential of Zr during underwater polishing dropped drastically immediately after mechanical polishing (within a fraction of a second). The hydrogen yields of Zr-Ta alloys and cold-worked Zr during underwater polishing were estimated from the corrosion current determined by the Tafel extrapolation method. The hydrogen yield increased with a decrease in the Ta content of the Zr-Ta alloy, and with an increase in the degree of working (rolling reduction) of the Zr. It was deduced that mechanical grinding in water, by removing the passive oxide film on the Zr substrate, led to hydrogen absorption into the Zr substrate and the precipitation of zirconium hydrides. Accordingly, hydrogen embrittlement cracks occurred in the deformation layer of Zr around the Ta/Zr bond interface, driven by the tensile residual stress in the explosive bonded joint. (author)
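Estimating a hydrogen yield from a Tafel-derived corrosion current follows Faraday's law, with one H atom produced per electron of cathodic charge; the current density, area, and time below are illustrative, not the paper's values:

```python
F = 96485.0  # C/mol, Faraday constant

def hydrogen_mol(i_corr_A_cm2, area_cm2, t_s):
    """Moles of atomic H produced: each coulomb of cathodic charge
    reduces 1/F mol of H+ to H (Faraday's law, one electron per H)."""
    charge = i_corr_A_cm2 * area_cm2 * t_s
    return charge / F

# Illustrative case: 5 uA/cm^2 on 1 cm^2 for 1 hour.
h_mol = hydrogen_mol(5e-6, 1.0, 3600.0)
```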
International Nuclear Information System (INIS)
Four indole derivatives, namely indole (IND), benzotriazole (BTA), benzothiazole (BSA) and benzimidazole (BIA), have been investigated as possible corrosion inhibitors for pure iron in 1 M HCl. In this study, electrochemical frequency modulation (EFM) was used as an effective method for corrosion rate determination in corrosion inhibition studies. Using EFM measurements, the corrosion current density was determined without prior knowledge of the Tafel slopes. Corrosion rates obtained using EFM were compared to those obtained from other chemical and electrochemical techniques; the results from EFM, EIS, Tafel and weight loss measurements were in good agreement. Tafel polarization measurements show that the indole derivatives are cathodic-type inhibitors. Molecular simulation studies were applied to optimize the adsorption structures of the indole derivatives. The inhibitor/iron/solvent interfaces were simulated and the adsorption energies of these inhibitors were calculated. Quantum chemical calculations were performed, and several quantum chemical indices were calculated and correlated with the corresponding inhibition efficiencies.
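From any of these corrosion-rate measurements (EFM, Tafel, EIS, or weight loss), the inhibition efficiency is conventionally computed as IE% = 100 * (CR_blank - CR_inhibited) / CR_blank; the current densities below are illustrative, not the paper's:

```python
def inhibition_efficiency(i_blank, i_inhibited):
    """Percent inhibition efficiency from corrosion currents (or rates)
    measured without and with the inhibitor."""
    return 100.0 * (i_blank - i_inhibited) / i_blank

# Illustrative values: 120 uA/cm^2 uninhibited, 18 uA/cm^2 with inhibitor.
ie = inhibition_efficiency(120.0, 18.0)
```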