Extrapolation methods theory and practice
Brezinski, C
1991-01-01
This volume is a self-contained, exhaustive exposition of extrapolation methods theory, and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of using extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a substantial chapter on applications are also provided - including some never before published results and applicat
π π scattering by pole extrapolation methods
International Nuclear Information System (INIS)
A 25-inch hydrogen bubble chamber was used at the Lawrence Berkeley Laboratory Bevatron to produce 300,000 pictures of π+p interactions at an incident π+ momentum of 2.67 GeV/c. The 2-prong events were processed using the FSD and the FOG-CLOUDY-FAIR data reduction system. Events of the nature π+p→π+pπ0 and π+p→π+π+n with values of momentum transfer to the proton of -t less than or equal to 0.238 GeV² were selected. These events were used to extrapolate to the pion pole (t = m_π²) in order to investigate the π π interaction with isospins of both T=1 and T=2. Two methods were used to do the extrapolation: the original Chew-Low method developed in 1959 and the Dürr-Pilkuhn method developed in 1965, which takes into account centrifugal barrier penetration factors. At first it seemed that, while the Dürr-Pilkuhn method gave better values for the total π π cross section, the Chew-Low method gave better values for the angular distribution. Further analysis, however, showed that, if the requirement of total OPE (one-pion-exchange) was dropped, then the Dürr-Pilkuhn method gave more reasonable values for the angular distribution as well as for the total π π cross section.
Extrapolation discontinuous Galerkin method for ultraparabolic equations
Marcozzi, Michael D.
2009-02-01
Ultraparabolic equations arise from the characterization of the performance index of stochastic optimal control relative to ultradiffusion processes; they evidence multiple temporal variables and may be regarded as parabolic along characteristic directions. We consider theoretical and approximation aspects of a temporally order and step size adaptive extrapolation discontinuous Galerkin method coupled with a spatial Lagrange second-order finite element approximation for a prototype ultraparabolic problem. As an application, we value a so-called Asian option from mathematical finance.
A cut extrapolation method for data analysis
International Nuclear Information System (INIS)
A method is proposed to estimate the strength of the discontinuity across the crossed-channel Mandelstam cuts near the elastic threshold. The method is applied to the t,u symmetric ππ case; the π+π−→π+π− and π+π0→π+π0 cross sections are extrapolated to the nearby cuts in the cos θ plane. The strength is related to the Chew-Mandelstam coupling constant λ_CM through a discontinuity formula calculated from H_int = λ(φ_π²)²; the value of |λ_CM| is found to be 0.07 ± 0.03. (Auth.)
Implicit extrapolation methods for multilevel finite element computations
Energy Technology Data Exchange (ETDEWEB)
Jung, M.; Ruede, U. [Technische Universitaet Chemnitz-Zwickau (Germany)
1994-12-31
The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid {tau}-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is, that in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore neither requires uniform meshes nor global regularity assumptions. In the paper the authors will analyse the {tau}-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the {tau}-extrapolation results will be compared to higher order finite element solutions.
Acceleration of nodal diffusion code by Chebychev polynomial extrapolation method
International Nuclear Information System (INIS)
This paper presents Chebychev acceleration of the outer iterations of a nodal diffusion code of high accuracy. Extrapolation parameters, unique for all moments, are calculated using the node-integrated distribution of the fission source. Sample calculations are presented indicating the efficiency of the method. (author)
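The two-term Chebyshev extrapolation of an outer (fixed-point) iteration can be sketched on a toy problem. The Laplacian test system, the Jacobi sweep, and the known dominance ratio rho are illustrative assumptions for the sketch, not the nodal diffusion code's actual outer iteration:

```python
import numpy as np

def chebyshev_accelerate(G, f, rho, iters, x0):
    """Chebyshev semi-iteration for the fixed point x = G x + f,
    given an estimate rho of the spectral radius of G."""
    x_prev = x0
    x = G @ x0 + f                       # first sweep is a plain iteration
    omega = 2.0                          # seeds omega_2 = 1/(1 - rho^2/2)
    for _ in range(iters - 1):
        omega = 1.0 / (1.0 - 0.25 * rho**2 * omega)
        # extrapolate using the current sweep and the previous iterate
        x, x_prev = omega * (G @ x + f - x_prev) + x_prev, x
    return x

# Toy "outer iteration": Jacobi sweeps for a 1-D Laplacian system.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
G = np.eye(n) - A / 2.0                  # Jacobi iteration matrix
f = b / 2.0
rho = np.cos(np.pi / (n + 1))            # spectral radius of G (known here)

x_exact = np.linalg.solve(A, b)
x_plain = np.zeros(n)
for _ in range(60):                      # 60 unaccelerated sweeps
    x_plain = G @ x_plain + f
x_cheb = chebyshev_accelerate(G, f, rho, 60, np.zeros(n))
```

With the same 60 sweeps, the Chebyshev-extrapolated iterate is orders of magnitude closer to the exact solution than the plain iteration, which is the effect the paper exploits for the fission-source iteration.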
A new extrapolation method for weak approximation schemes with applications
Oshima, Kojiro; Veluscek, Dejan
2009-01-01
We review Fujiwara's scheme, a sixth order weak approximation scheme for the numerical approximation of SDEs, and embed it into a general method to construct weak approximation schemes of order $2m$ for $m \in \mathbf{N}$. Those schemes cannot be seen as cubature schemes, but rather as universal ways to extrapolate from a lower order weak approximation scheme, namely the Ninomiya-Victoir scheme, to higher orders.
Assessment of Load Extrapolation Methods for Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2011-01-01
In the present paper, methods for statistical load extrapolation of wind-turbine response are studied using a stationary Gaussian process model, which has approximately the same spectral properties as the response for the out-of-plane bending moment of a wind-turbine blade. For a Gaussian process, an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper, three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima, and the peak over threshold method with two different threshold values. The comparisons show that the goodness of fit for the local distribution has a significant influence on the results, but the peak over threshold method with a threshold value on the mean plus 1.4 standard deviations generally gives the best results. By considering Gaussian processes for 12 mean wind speeds, the “fitting before aggregation” and “aggregation before fitting” approaches are studied. The results show that the fitting before aggregation approach gives the best results.
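The peak-over-threshold variant with the mean-plus-1.4-standard-deviations threshold can be sketched on a surrogate response. The smoothed-noise process and the simple exponential excess model are assumptions for the sketch, not the paper's turbine response model or its fitted distributions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate "response": smoothed Gaussian noise standing in for the
# out-of-plane bending moment process (illustrative assumption).
noise = rng.standard_normal(200_000)
x = np.convolve(noise, np.ones(20) / 20.0, mode="valid")

u = x.mean() + 1.4 * x.std()             # threshold: mean + 1.4 sigma

# Peaks over threshold: one peak per excursion above u (declustering).
above = x > u
above[0] = above[-1] = False             # force clean up/down crossing pairs
edges = np.flatnonzero(np.diff(above.astype(np.int8)))
starts, ends = edges[::2] + 1, edges[1::2] + 1
peaks = np.array([x[s:e].max() for s, e in zip(starts, ends)])

# Simplest tail model: exponential fit to the excesses over u.
beta = (peaks - u).mean()

def return_level(n_excursions):
    """Level exceeded on average once per n_excursions excursion peaks."""
    return u + beta * np.log(n_excursions)

x100 = return_level(100.0)               # extrapolated 1-in-100-peaks level
```

The paper's point, that the quality of the local (excess) distribution fit drives the quality of the extrapolated extreme, corresponds here to how well the exponential model matches the actual excesses.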
Comparison of methods for extrapolating creep rupture results
International Nuclear Information System (INIS)
Among all the methods of extrapolation, the following have been selected: - parametric methods (Larson-Miller, Dorn, Manson-Haferd); - numerical and parametric method (minimum commitment); - numerical method (finite differences); - descriptive method (Givar). The Larson-Miller, Dorn and Manson-Haferd methods are commonly used for analyzing the creep rupture results of materials for which the master curves can be described simply. The other methods have been developed in order to analyze the creep rupture results of materials where structural changes over time modify the creep behaviour. In each case the assessment of the parameters is achieved by the least squares method. These methods were compared with each other on two steels, namely Z6 CND 17-12 (316) and Z4 CND 35-20 (800 alloy). The various analyses performed show that (a) the predictions made with the different methods are in good agreement with each other when there is a sufficient number of experimental values, and (b) the predictions of the rupture times in the case of the 800 alloy differ from one method to the next. This result is due to the limited sampling data and to the complex behaviour of this alloy, the properties of which change with ageing.
Type-insensitive ODE codes based on extrapolation methods
Energy Technology Data Exchange (ETDEWEB)
Shampine, L.F.
1982-06-01
For a long time extrapolation of the (explicit) midpoint rule has been a popular way to solve non-stiff initial value problems for systems of ordinary differential equations (ODEs). In the last few years Bader and Deuflhard have been studying the theory and practice of extrapolation of a semi-implicit midpoint rule for the solution of stiff problems. They have developed an effective code, METAN1, which is the object of attention in this paper.
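The extrapolated midpoint rule that such codes build on can be sketched as follows. This is the non-stiff, explicit variant (Gragg's smoothed midpoint rule with polynomial extrapolation in h²), not METAN1's semi-implicit scheme; the step-number sequence and test problem are illustrative choices:

```python
def smoothed_midpoint(f, t0, y0, H, n):
    """Gragg's modified midpoint method over [t0, t0+H] with n substeps,
    finished with the smoothing step (error expansion in even powers of h)."""
    h = H / n
    y_prev = y0
    y = y0 + h * f(t0, y0)               # one Euler step to start
    for k in range(1, n + 1):
        # midpoint rule: y_{k+1} = y_{k-1} + 2 h f(t_k, y_k)
        y_prev, y = y, y_prev + 2.0 * h * f(t0 + k * h, y)
    # now y_prev = y_n and y = y_{n+1}; recover y_{n-1} from the same rule
    y_nm1 = y - 2.0 * h * f(t0 + n * h, y_prev)
    return 0.25 * (y_nm1 + 2.0 * y_prev + y)   # Gragg smoothing

def gbs_step(f, t0, y0, H, levels=5):
    """One extrapolated (GBS-type) step: Aitken-Neville table in h^2."""
    ns = [2 * (j + 1) for j in range(levels)]      # 2, 4, 6, 8, 10
    T = [[smoothed_midpoint(f, t0, y0, H, n)] for n in ns]
    for k in range(1, levels):
        for j in range(k, levels):
            r = (ns[j] / ns[j - k]) ** 2
            T[j].append(T[j][k - 1] + (T[j][k - 1] - T[j - 1][k - 1]) / (r - 1.0))
    return T[-1][-1]
```

For a smooth non-stiff problem such as y' = y, a single extrapolated step over the whole interval already reproduces the exact solution to high accuracy, which is why this construction remained popular for non-stiff problems.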
Energy Technology Data Exchange (ETDEWEB)
Simonsen, H.H.
1990-11-01
This thesis consists of the following three papers: Continuous approximations for extrapolation methods; Views on the solution of ordinary differential equations (ODEs) on parallel computers; A parallel ODE solver based on extrapolation. The first paper presents ways of constructing interpolants for extrapolation methods for ODEs. Extrapolation based on the forward Euler and smoothed midpoint rules is examined in detail. The same schemes can be utilized both for extrapolation based on the backward Euler rule and for extrapolation based on any symmetric two-step method. The next paper discusses the use of parallel computers for solving ODEs. This paper attempts to identify some ODE problems that are good candidates for parallel execution. The third paper presents some experiments done with a parallel ODE solver based on extrapolation. These experiments were conducted on a two-processor CRAY computer. The work may have practical applications in fields like process simulation and control systems. 62 refs., 14 figs., 7 tabs.
Extrapolation Method for System Reliability Assessment : A New Scheme
DEFF Research Database (Denmark)
Qin, Jianjun; Nishijima, Kazuyoshi
2012-01-01
The present paper presents a new scheme for the solution of probability integrals in system reliability analysis, which builds on the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here building on the theory of asymptotic solutions to multinormal probability integrals. The scheme is extended so that it can be applied to cases where the asymptotic property may not be valid and/or the random variables are not normally distributed. The performance of the scheme is investigated on four principal series and parallel systems and some practical examples. The results indicate that the proposed scheme is efficient and adds generality to this class of approximations for probability integrals.
Multiplicative measurement error and the simulation extrapolation method
Biewen, Elena; Nolte, Sandra; Rosemann, Martin
2008-01-01
Whereas the literature on additive measurement error has received considerable treatment, less work has been done on multiplicative noise. In this paper we concentrate on multiplicative measurement error in the covariates, which, contrary to additive error, not only modifies the original value proportionally, but also conserves the structural zeros. This paper compares three variants to specify the multiplicative measurement error model in the simulation step of the Simulation-Extrapolation (SI...
Tao, Lu
1995-01-01
The splitting extrapolation method is a newly developed technique for solving multidimensional mathematical problems. It overcomes the difficulties arising from Richardson's extrapolation when applied to these problems and obtains higher-accuracy solutions with lower cost and a high degree of parallelism. The method is particularly suitable for solving large-scale scientific and engineering problems. This book presents applications of the method to multidimensional integration, integral equations and partial differential equations. It also gives an introduction to combination methods which are
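The classical one-dimensional Richardson extrapolation that the splitting method generalizes can be sketched with the trapezoid rule, i.e., Romberg integration. This is background illustration only, not the splitting extrapolation algorithm itself:

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n panels (error expansion in h^2)."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    return (b - a) / n * (y.sum() - 0.5 * (y[0] + y[-1]))

def romberg(f, a, b, levels=5):
    """Richardson extrapolation of trapezoid sums on halved grids."""
    R = [[trapezoid(f, a, b, 2 ** j)] for j in range(levels)]
    for k in range(1, levels):
        for j in range(k, levels):
            # each column cancels the next h^(2k) term of the expansion
            R[j].append(R[j][k - 1] + (R[j][k - 1] - R[j - 1][k - 1]) / (4 ** k - 1))
    return R[-1][-1]
```

With only 16 panels at the finest level, the extrapolated value of ∫₀^π sin x dx is accurate to many digits; the splitting extrapolation method extends this one-parameter idea to several independent mesh parameters in multidimensional problems.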
Directory of Open Access Journals (Sweden)
Ezekiel Uba Nwose
2011-07-01
Background: The first issue of this series proposed an extrapolation chart with conventional reference ranges and suggested comparison of results with other methods. Aim: This work sets out to compare interpretative results from the extrapolation method with those from a digital viscometer method. Materials and Methods: Five cases in our archived clinical pathology database that were specifically tested for whole blood viscosity by the digital method, and had results for haematocrit and serum proteins, were pooled. The values of haematocrit and serum proteins were used to derive extrapolated values. The interpretative results of the extrapolation method were compared with those of the digital viscometer-based clinical reports. Non-Newtonian fluids such as whole blood have different viscosities at different shear rates, so comparison can only be based on interpreted outcomes. Results: Two-fifths absolute concordance and one-fifth discordance are observed between the extrapolation and viscometer-based clinical reports. The discordance is a case of hyperviscosity in the presence of neither hyperproteinaemia nor polycythemia. Conclusion: The extrapolation method may underestimate whole blood viscosity in some patients when compared with the digital viscometer, which in turn may suggest hyperviscosity that cannot be explained by hyperproteinaemia or polycythemia concepts. The impact of oxidative stress is highlighted.
Extrapolation method in the Monte Carlo Shell Model and its applications
International Nuclear Information System (INIS)
We demonstrate how the energy-variance extrapolation method works using the sequence of the approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni with pf-shell as an example. The extrapolation method is shown to work well even in the case that the MCSM shows slow convergence, such as 72Ge with f5pg9-shell. The structure of 72Se is also studied including the discussion of the shape-coexistence phenomenon.
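The energy-variance extrapolation can be sketched as a low-order fit of approximate energies against their energy variances, read off at zero variance. The numbers below are synthetic stand-ins chosen so the answer is known, not MCSM output for 56Ni:

```python
import numpy as np

# Synthetic (energy variance, energy) pairs such as a sequence of
# increasingly good approximate wave functions might produce.
var = np.array([0.80, 0.55, 0.36, 0.21, 0.10])
E = -205.0 + 1.9 * var + 0.02 * var**2   # "exact" energy at var = 0: -205.0

# Fit E as a low-order polynomial in the energy variance and
# extrapolate to zero variance, where the wave function is exact.
coeff = np.polyfit(var, E, deg=2)
E_extrap = np.polyval(coeff, 0.0)
```

Because the energy approaches the exact eigenvalue linearly in the energy variance (with small higher-order corrections), the extrapolated value recovers the target energy even when the individual approximations are still converging slowly.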
How useful are corpus-based methods for extrapolating psycholinguistic variables?
Mandera, Paweł; Keuleers, Emmanuel; Brysbaert, Marc
2015-08-01
Subjective ratings for age of acquisition, concreteness, affective valence, and many other variables are an important element of psycholinguistic research. However, even for well-studied languages, ratings usually cover just a small part of the vocabulary. A possible solution involves using corpora to build a semantic similarity space and to apply machine learning techniques to extrapolate existing ratings to previously unrated words. We conduct a systematic comparison of two extrapolation techniques: k-nearest neighbours, and random forest, in combination with semantic spaces built using latent semantic analysis, a topic model, a hyperspace analogue to language (HAL)-like model, and a skip-gram model. A variant of the k-nearest neighbours method used with skip-gram word vectors gives the most accurate predictions but the random forest method has the advantage of being able to easily incorporate additional predictors. We evaluate the usefulness of the methods by exploring how much of the human performance in a lexical decision task can be explained by extrapolated ratings for age of acquisition and how precisely we can assign words to discrete categories based on extrapolated ratings. We find that at least some of the extrapolation methods may introduce artefacts to the data and produce results that could lead to different conclusions than would be reached based on the human ratings. From a practical point of view, the usefulness of ratings extrapolated with the described methods may be limited. PMID:25695623
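A minimal sketch of the k-nearest-neighbours extrapolation step, assuming word vectors and ratings for known words are already available. Toy two-dimensional vectors, cosine similarity, and an unweighted neighbour mean are illustrative simplifications of the compared techniques:

```python
import numpy as np

def knn_extrapolate(vectors, ratings, query, k=3):
    """Predict a rating for `query` as the mean rating of its k nearest
    neighbours by cosine similarity in the semantic space."""
    V = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = V @ q                        # cosine similarity to every rated word
    nearest = np.argsort(sims)[-k:]     # indices of the k most similar words
    return ratings[nearest].mean()
```

Usage: with rated words clustered around two directions in the space, a query vector close to one cluster inherits that cluster's rating, which is exactly how existing norms are extrapolated to unrated words.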
Energy Technology Data Exchange (ETDEWEB)
Inoue, S.; Magara, T.; Choe, G. S.; Kim, K. S. [School of Space Research, Kyung Hee University, Yongin, Gyeonggi-do 446-701 (Korea, Republic of); Pandey, V. S. [Department of Physics, National Institute of Technology, Dwarka, Sector-9, Delhi-110077 (India); Shiota, D.; Kusano, K., E-mail: inosato@khu.ac.kr [Solar-Terrestrial Environment Laboratory, Furo-Cho, Chikusa-ku Nagoya 464-8601 (Japan)
2014-01-01
We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm initially proposed by Dedner et al. to effectively clean the numerical errors associated with ∇ · B. Second, a multigrid-type method is implemented in our NLFFF to perform direct analysis of the high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high resolution force-free field introduced by Low and Lou with better accuracy in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara. We found that NLFFF extrapolation may be less effective for reproducing areas higher than a half-domain, where some magnetic loops are found in a state of continuous upward expansion. However, an inverse S-shaped structure consisting of the sheared and twisted loops formed in the lower region can be captured well through our NLFFF extrapolation method. We further discuss how well these sheared and twisted fields are reconstructed by estimating the magnetic topology and twist quantitatively.
A hybrid method without extrapolation step for solving variational inequality problems
Malitsky, Yu. V.; Semenov, V. V.
2015-01-01
In this paper, we introduce a new method for solving variational inequality problems with monotone and Lipschitz-continuous mappings in Hilbert space. The iterative process is based on two well-known methods: the projection method and the hybrid (or outer approximation) method. However, we do not use an extrapolation step in the projection method. The absence of one projection in our method is explained by a slightly different choice of sets in the hybrid method. We prove strong convergence of...
Nonlinear Force-Free Extrapolation of the Coronal Magnetic Field Based on the MHD Relaxation Method
Inoue, S; Pandey, V S; Shiota, D; Kusano, K; Choe, G S; Kim, K S
2013-01-01
We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm initially proposed by Dedner et al. (2002) to effectively clean the numerical errors associated with ∇ · B. Second, a multi-grid type method is implemented in our NLFFF to perform direct analysis of the high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high resolution force-free field introduced by Low and Lou (1990) with better accuracy in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara (2012). We found that NLFFF extrapolation may be less effective for reproducing areas higher than a half-domain, where some magnetic loops are found in a state of continuous upward expansion. However, an inverse ...
A comparison of preprocessing methods for solar force-free magnetic field extrapolation
Fuhrmann, M; Valori, G; Wiegelmann, T
2010-01-01
Extrapolations of solar photospheric vector magnetograms into three-dimensional magnetic fields in the chromosphere and corona are usually done under the assumption that the fields are force-free. The field calculations can be improved by preprocessing the photospheric magnetograms. We compare two preprocessing methods presently in use, namely the methods of Wiegelmann et al. (2006) and Fuhrmann et al. (2007). The two preprocessing methods were applied to a recently observed vector magnetogram. We examine the changes in the magnetogram effected by the two preprocessing algorithms. Furthermore, the original magnetogram and the two preprocessed magnetograms were each used as input data for nonlinear force-free field extrapolations by means of two different methods, and we analyze the resulting fields. Both preprocessing methods managed to significantly decrease the magnetic forces and magnetic torques that act through the magnetogram area and that can cause incompatibilities with the assumption of force-freenes...
A MULTI-STEP RICHARDSON-ROMBERG EXTRAPOLATION METHOD FOR STOCHASTIC APPROXIMATION
Frikha, Noufel; Huang, Lorick
2014-01-01
We obtain an expansion of the implicit weak discretization error for the target of stochastic approximation algorithms introduced and studied in [Frikha 2013]. This allows us to extend and develop the Richardson-Romberg extrapolation method for Monte Carlo linear estimators (introduced in [Talay & Tubaro 1990] and deeply studied in [Pagès 2007]) to the framework of stochastic optimization by means of stochastic approximation algorithms. We notably apply the method to the es...
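The two-level Richardson-Romberg idea for Monte Carlo estimators, in the original Talay-Tubaro setting rather than the stochastic approximation extension developed in the paper, can be sketched on an Euler scheme for a linear SDE. The model, step sizes, and path count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_mean(h, n_paths=200_000):
    """Monte Carlo estimate of E[X_1] for dX = -X dt + 0.5 dW, X_0 = 1,
    using the Euler scheme with step h (weak error of order h)."""
    steps = round(1.0 / h)
    x = np.ones(n_paths)
    for _ in range(steps):
        x += -x * h + 0.5 * np.sqrt(h) * rng.standard_normal(n_paths)
    return x.mean()

coarse = euler_mean(0.25)
fine = euler_mean(0.125)
# Richardson-Romberg combination: cancels the O(h) bias term.
rr = 2.0 * fine - coarse
```

For this model E[X_1] = e^(-1), and the coarse Euler estimate carries a visible O(h) bias that the linear combination largely removes, at the price of a modest variance increase.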
A least square extrapolation method for improving solution accuracy of PDE computations
International Nuclear Information System (INIS)
Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis such as quadrature or time integration of ODEs. In theory, RE can also be used on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on the different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of finding automatically the order of a method as the solution of a least square minimization problem on the residual. We introduce a two-level and three-level least square extrapolation method that works on nonmatching embedded grid solutions via spline interpolation. Our least square extrapolation method is a post-processing of data produced by existing PDE codes that is easy to implement and can be a better tool than RE for code verification. It can also be used to make a cascade of computations more numerically efficient. We can establish a consistent linear combination of coarser grid solutions to produce a better approximation of the PDE solution at a much lower cost than direct computation on a finer grid. To illustrate the performance of the method, examples including a two-dimensional turning point problem with a sharp transition layer and the Navier-Stokes flow inside a lid-driven cavity are adopted.
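The core idea of estimating the order from the data instead of assuming it can be sketched pointwise with three embedded-grid solutions. This is a simplified stand-in for the paper's least-squares formulation, applied to manufactured data:

```python
import numpy as np

def observed_order(u_coarse, u_mid, u_fine, r=2.0):
    """Observed convergence order at each sample point, from three
    grid solutions interpolated to common points (grid ratio r)."""
    num = np.abs(u_coarse - u_mid)
    den = np.abs(u_mid - u_fine)
    return np.log(num / den) / np.log(r)

def extrapolate(u_mid, u_fine, p, r=2.0):
    """Richardson-style correction using the fitted local order p."""
    return u_fine + (u_fine - u_mid) / (r**p - 1.0)
```

On manufactured data u(h) = u + C h², the observed order comes out as 2 and the corrected solution recovers u exactly; on real PDE output the order varies in space, which is what motivates solving for it by least squares rather than fixing it a priori.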
International Nuclear Information System (INIS)
In this work the single-yoke measuring technique is proposed to be optimized by extrapolation of a magnetic field profile to the sample surface for determination of the 'real' field inside the sample. It has been shown that this approach gives reasonable values of the magnetic parameters and allows one to solve the well-known problem of considerable fluctuations in the measurement results due to imperfections of the yoke-sample contact. The magnetization process with the single-yoke setup is considered on the basis of surface field measurements around the sample and their extrapolation to the sample surfaces. Advantages as well as drawbacks of the measuring procedure and of the suggested optimization method are discussed.
International Nuclear Information System (INIS)
The efficiency extrapolation method was improved by establishing ''linearity conditions'' for the discrimination on the gamma channel of the coincidence equipment. These conditions were proved to eliminate the systematic error of the method. A control procedure for the fulfilment of the linearity conditions and estimation of the residual systematic error is given. For low-energy gamma transitions an ''equivalent scheme principle'' was established, which allows for a correct application of the method. Solutions of Cs-134, Co-57, Ba-133 and Zn-65 were standardized with an ''effective standard deviation'' of 0.3-0.7 per cent. For Zn-65 ''special linearity conditions'' were applied. (author)
Evaluation of functioning of an extrapolation chamber using Monte Carlo method
International Nuclear Information System (INIS)
The extrapolation chamber is a parallel-plate chamber of variable volume based on the Bragg-Gray theory. It determines in absolute mode, with high accuracy, the absorbed dose by extrapolation of the measured ionization current to a null distance between the electrodes. This chamber is used for dosimetry of external beta rays for radiation protection. This paper presents a simulation for evaluating the functioning of an extrapolation chamber type 23392 of PTW, using the MCNPX Monte Carlo method. In the simulation, the fluence in the air collector cavity of the chamber was obtained. The influence of the materials that compose the chamber on its response to a beta radiation beam was also analysed. A comparison of the contributions of primary and secondary radiation was performed. The energy deposition in the air collector cavity for different depths was calculated. The component with the highest energy deposition is the polymethyl methacrylate block. The energy deposition in the air collector cavity is greatest for a chamber depth of 2500 µm, with a value of 9.708E-07 MeV. The fluence in the air collector cavity decreases with depth. Its value is 1.758E-04 1/cm² for a chamber depth of 500 µm. The values reported are for individual electron and photon histories. Plots of the simulated parameters are presented in the paper. (Author)
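The extrapolation-chamber principle itself, with the dose rate proportional to the limiting slope of ionization current versus electrode separation, can be sketched numerically. The current readings below are made-up illustrative numbers, not PTW 23392 measurements, and the proportionality constants (W/e, air density, collector area) are left out:

```python
import numpy as np

# Illustrative current readings I (pA) at electrode separations d (mm).
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
I = np.array([0.101, 0.204, 0.310, 0.421, 0.536])

# Fit a low-order polynomial to I(d) and take its derivative at d = 0:
# the limiting slope dI/dd as the gap closes is proportional to the
# absorbed dose rate in the Bragg-Gray picture.
coeffs = np.polyfit(d, I, deg=2)
dIdd_at_0 = np.polyval(np.polyder(coeffs), 0.0)
```

The extrapolation to zero separation is what removes the perturbation of the finite air gap, which is why the chamber can work as an absolute instrument.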
Sun, Shuyu
2013-06-01
This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted and even reconstructed in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-phase gas region. Extrapolation behaviors as functions of extrapolation ranges were studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
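Single-histogram reweighting to a new temperature, the simplest ingredient of the technique described above, can be sketched as follows. The two-state toy system used to exercise it is an assumption for the sketch, not the Lennard-Jones setup:

```python
import numpy as np

def reweight_average(obs, energy, beta_sim, beta_new):
    """Estimate <obs> at inverse temperature beta_new from canonical
    samples generated at beta_sim (single-histogram reweighting)."""
    # subtracting the mean energy only rescales the weights; it keeps
    # the exponentials in a numerically safe range
    w = np.exp(-(beta_new - beta_sim) * (energy - energy.mean()))
    return np.sum(w * obs) / np.sum(w)
```

Along an isochore, this is all that is needed: each sampled configuration is reweighted by exp(-Δβ E), and averages at the new temperature follow without running a new simulation, as long as the sampled energy histogram still overlaps the target ensemble.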
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including a simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with the respective analytical solutions. A more general system with an unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down slower, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, the confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
International Nuclear Information System (INIS)
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of down-link band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2x2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. (authors)
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. PMID:23179190
Florez, W. F.; Portapila, M.; Hill, A. F.; Power, H.; Orsini, P.; Bustamante, C. A.
2015-03-01
The aim of this paper is to present how to implement a control volume approach improved by Hermite radial basis functions (CV-RBF) for geochemical problems. A multi-step strategy based on Richardson extrapolation is proposed as an alternative to the conventional dual step sequential non-iterative approach (SNIA) for coupling the transport equations with the chemical model. Additionally, this paper illustrates how to use PHREEQC to add geochemical reaction capabilities to CV-RBF transport methods. Several problems with different degrees of complexity were solved including cases of cation exchange, dissolution, dissociation, equilibrium and kinetics at different rates for mineral species. The results show that the solution and strategies presented here are effective and in good agreement with other methods presented in the literature for the same cases.
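The abstract does not spell out the multi-step Richardson strategy; as a generic sketch of the underlying idea (illustrative only, not the CV-RBF/SNIA implementation), combining two first-order solutions computed with step sizes h and h/2 cancels the leading error term:

```python
import math

def euler(f, u0, t_end, n):
    """Explicit Euler: first-order accurate, global error ~ C*h."""
    h = t_end / n
    u, t = u0, 0.0
    for _ in range(n):
        u += h * f(t, u)
        t += h
    return u

def richardson(f, u0, t_end, n):
    """Richardson extrapolation: combine step sizes h and h/2 so the
    O(h) error terms cancel; u_ext = 2*u(h/2) - u(h) is second order."""
    return 2.0 * euler(f, u0, t_end, 2 * n) - euler(f, u0, t_end, n)

f = lambda t, u: -u                 # test problem du/dt = -u, exact exp(-t)
exact = math.exp(-1.0)
err_euler = abs(euler(f, 1.0, 1.0, 64) - exact)
err_rich = abs(richardson(f, 1.0, 1.0, 64) - exact)
print(err_rich < err_euler)  # True: extrapolation reduces the error
```

The same combination applies to any first-order-accurate step, which is what makes it attractive as an alternative to a plain sequential non-iterative coupling step.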
Ketcheson, David I.
2014-04-11
In practical computation with Runge–Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
Xie, Feng; Li, Xuesong; Dai, Yihua; Jiang, Wengang; He, Xiaobing; Yu, Gongshuo; Ni, Jianzhong
2015-03-01
Noble gas (41)Ar was measured with a 4πβ-4πγ coincidence system, in which gamma- and beta-rays were detected with a well-type NaI(Tl) detector and a plastic scintillator (PS) detector, respectively. The activity of (41)Ar was determined by an efficiency extrapolation method, in which the beta detector efficiency was varied by electronic discrimination using software developed under Visual Basic. In addition, high-resolution gamma spectroscopy with an HPGe detector was also used for activity determination of (41)Ar, and the result was in satisfactory agreement with that obtained by the efficiency extrapolation method. This work demonstrated that the activity of (41)Ar can be accurately measured by the efficiency extrapolation method. PMID:25527895
Comparison of precipitation nowcasting by extrapolation and statistical-advection methods.
Czech Academy of Sciences Publication Activity Database
Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Mejsnar, Jan
2013-01-01
Roč. 123, 1 April (2013), s. 17-30. ISSN 0169-8095 R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords: Precipitation forecast * Statistical models * Regression * Quantitative precipitation forecast * Extrapolation forecast Subject RIV: DG - Atmospheric Sciences, Meteorology Impact factor: 2.421, year: 2013 http://www.sciencedirect.com/science/article/pii/S0169809512003390
Fernandes, Ryan I.; Fairweather, Graeme
2012-01-01
An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated...
Comparison of extrapolation methods for creep rupture stresses of 12Cr and 18Cr10NiTi steels
International Nuclear Information System (INIS)
As part of a Soviet-Swedish research programme, the creep rupture properties of two heat-resisting steels, namely a 12% Cr steel and an 18% Cr 12% Ni titanium-stabilized steel, have been studied. One heat from each country of both steels was creep tested. The strength of the 12% Cr steels was similar to earlier reported strength values, the Soviet steel being somewhat stronger due to a higher tungsten content. The strength of the Swedish 18/12 Ti steel agreed with earlier results, while the properties of the Soviet steel were inferior to those reported from earlier Soviet creep testing. Three extrapolation methods were compared on creep rupture data collected in both countries. Isothermal extrapolation and an algebraic method of Soviet origin gave in many cases rather similar results, while the parameter method recommended by ISO resulted in higher rupture strength values at longer times. (author)
Linear extrapolation distance for a black cylindrical control rod with the pulsed neutron method
International Nuclear Information System (INIS)
The objective of this experiment was to measure the linear extrapolation distance for a central black cylindrical control rod in a cylindrical water moderator. The radius of both the control rod and the moderator was varied. The pulsed neutron technique was used and the decay constant was measured for both a homogeneous and a heterogeneous system. From the difference in the decay constants the extrapolation distance could be calculated. The conclusion is that within experimental error it is safe to use the approximate formula given by Pellaud or the more exact one given by Kavenoky. We can also conclude that linear anisotropic scattering is accounted for in a correct way in the approximate formulae given by Pellaud and by Prinja and Williams
International Nuclear Information System (INIS)
A program to investigate the possibility of track extrapolation and interpolation for drift chambers using Principal Components Analysis and polynomials was written for SAPHIR. The results for the most significant configurations at SAPHIR are presented. It was shown that Principal Components Analysis is a good basis for a fast track reconstruction program for a drift chamber using a global track model in an inhomogeneous magnetic field. A data input/output package was written as well. (orig.)
Czech Academy of Sciences Publication Activity Database
Mejsnar, Jan; Sokol, Zbyněk; Pešice, Petr
Toulouse: Météo France, 2012. [ERAD 2012 - European Conference on Radar in Meteorology and Hydrology /7./. Toulouse (FR), 24.06.2012-29.06.2012] R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords: precipitation nowcasting * Lagrangian extrapolation * uncertainty in precipitation Subject RIV: DG - Atmospheric Sciences, Meteorology http://www.meteo.fr/cic/meetings/2012/ERAD/extended_abs/NOW_250_ext_abs.pdf
Ketcheson, David I.
2014-06-13
We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
International Nuclear Information System (INIS)
Within Activity 3 ''Materials'' of the WGCS, the member states UK and FRG have carried out work on extrapolation methods for creep data. This work was done by comparing the extrapolation methods in use in their countries, applying them to creep rupture strength data on AISI 316 SS obtained in the UK and FRG. The work was issued in April 1978 and the Community has distributed it to all Activity 3 members. Italy, represented by NIRA S.p.A., has received from the European Community a contract to extend the work to Italian and French data, using the extrapolation methods currently in use in Italy. The work should cover the following points: - collection of Italian experimental data; - chemical analysis of Italian specimens; - comparison of Italian experimental data with French, FRG and UK data; - description of extrapolation methods in use in Italy; - application of these extrapolation methods to Italian, French, British and German data; - preparation of a final report
Louka, M. A.; N. M. Missirlis
2015-01-01
In this paper we study the impact of two types of preconditioning on the numerical solution of large sparse augmented linear systems. The first preconditioning matrix is the lower triangular part whereas the second is the product of the lower triangular part with the upper triangular part of the augmented system's coefficient matrix. For the first preconditioning matrix we form the Generalized Modified Extrapolated Successive Overrelaxation (GMESOR) method, whereas the secon...
Evaluation of external quality factor of the superconducting cavity using extrapolation method
International Nuclear Information System (INIS)
The estimation of the external quality factor is important for designing coupling devices for the cavities. A new representation of the external quality factor calculations for a single-cell cavity coupled to a coaxial transmission line is derived based on analytic analysis and numeric analysis with the help of a 3D electromagnetic code, and verified with experimental measurements at room temperature. In logarithmic scale the results for the external quality factor were quasi-linear over the limited range, and the simulated and measured data could be used and extrapolated to the superconducting case. For the unpolished 1.5 GHz 3rd harmonic superconducting cavity, the discrepancy between the evaluated value and the measured result is less than 25%, an acceptable deviation. (authors)
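The extrapolation step can be sketched as a log-linear fit evaluated outside the measured range. The numbers below are made up for illustration, not the cavity data from the paper:

```python
import numpy as np

# Hypothetical data: coupler position (mm) vs measured external Q.
# Over a limited range log10(Qext) is roughly linear in position,
# which is what permits extrapolation to the superconducting case.
pos = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
q_ext = np.array([1.2e6, 3.5e6, 1.1e7, 3.2e7, 9.8e7])

slope, intercept = np.polyfit(pos, np.log10(q_ext), 1)

def q_extrapolated(p):
    """Evaluate the log-linear fit at coupler position p."""
    return 10.0 ** (intercept + slope * p)

print(q_extrapolated(12.0))   # beyond the measured range
```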
DEFF Research Database (Denmark)
Kofoed, Peter; Nielsen, Peter V.
1990-01-01
The design of a displacement ventilation system involves determination of the flow rate in the thermal plumes. The flow rate in the plumes and the vertical temperature gradient influence each other, and they are influenced by many factors. This paper shows some descriptions of these effects. Free turbulent plumes from different heated bodies are investigated. The measurements have taken place in a full-scale test room where the vertical temperature gradient has been changed. The velocity and the temperature distribution in the plume are measured. Large-scale plume axis wandering is taken into account and the temperature excess and the velocity distribution are calculated by use of an extrapolation method. In the case with a concentrated heat source (dia 50 mm, 343 W) and nearly uniform surroundings the model of a plume above a point heat source is verified. It represents a borderline case with the smallest entrainment factor and the smallest angle of spread. Due to the measuring method and data processing the velocity and temperature excess profiles are observed more narrowly than those reported by previous authors. In the case with an extensive heat source (dia 400 mm, 100 W) the model of a plume above a point heat source cannot be used. This is caused either by the way of generating the plume, including a long intermediate region, or by the environmental conditions where vertical temperature gradients are present. The flow has a larger angle of spread and the entrainment factor is greater than for a point heat source. The exact knowledge of the vertical temperature gradient is essential to predict the flow propagation due to its influence on the entrainment, e.g. in an integral method of plume calculation. Since the flow from different heated bodies is individual, full-scale measurements seem to be the only possible approach to obtain the volume flow in thermal plumes in ventilated rooms.
Bytautas, Laimutis; Nagata, Takeshi; Gordon, Mark S.; Ruedenberg, Klaus
2007-10-01
The recently introduced method of correlation energy extrapolation by intrinsic scaling (CEEIS) is used to calculate the nonrelativistic electron correlations in the valence shell of the F2 molecule at 13 internuclear distances along the ground state potential energy curve from 1.14 Å to 8 Å, the equilibrium distance being 1.412 Å. Using Dunning's correlation-consistent double-, triple-, and quadruple-zeta basis sets, the full configuration interaction energies are determined, with an accuracy of about 0.3 mhartree, by successively generating up to octuple excitations with respect to multiconfigurational reference functions that strongly change along the reaction path. The energies of the reference functions and those of the correlation energies with respect to these reference functions are then extrapolated to their complete basis set limits. The applicability of the CEEIS method to strongly multiconfigurational reference functions is documented in detail.
Buller, N P; Poole-Wilson, P A
1988-01-01
Respiratory gas exchange was measured during maximal treadmill exercise testing in six healthy volunteers and 20 patients with chronic heart failure. A curve of equation y = ax - bx² was used to model the relation between the rate of oxygen consumption (y axis) and the rate of carbon dioxide production (x axis). The constants "a" and "b" were used to calculate the maximal value of the expression ax - bx². This value was termed the "extrapolated maximal oxygen consumption". For all subjects a clos...
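The extrapolated maximum follows directly from the fitted curve: dy/dx = a - 2bx vanishes at x = a/(2b), giving y_max = a²/(4b). A minimal sketch with hypothetical fitted constants (not values from the study):

```python
def extrapolated_max_vo2(a, b):
    """Maximum of y = a*x - b*x**2, attained at x = a/(2*b).
    Setting dy/dx = a - 2*b*x = 0 gives y_max = a**2 / (4*b)."""
    return a * a / (4.0 * b)

# Hypothetical constants from a curve fit (illustrative only):
a, b = 1.2, 0.15
print(extrapolated_max_vo2(a, b))
```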
Infrared extrapolations for atomic nuclei
International Nuclear Information System (INIS)
Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared (IR) scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon–nucleon interactions at next-to-next-to-leading order and show that the IR component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies, well above the energy minimum, the ultraviolet corrections can be suppressed while IR extrapolations over tens of MeVs are accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed. (paper)
How to hedge extrapolated yield curves
Lagerås, Andreas
2014-01-01
We present a framework on how to hedge the interest rate sensitivity of liabilities discounted by an extrapolated yield curve. The framework is based on functional analysis in that we consider the extrapolated yield curve as a functional of an observed yield curve and use its Gâteaux variation to understand the sensitivity to any possible yield curve shift. We apply the framework to analyse the Smith-Wilson method of extrapolation that is proposed by the European Insurance...
Builtin vs. auxiliary detection of extrapolation risk.
Energy Technology Data Exchange (ETDEWEB)
Munson, Miles Arthur; Kegelmeyer, W. Philip
2013-02-01
A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.
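The report's auxiliary models are not specified in this abstract; as a minimal stand-in for an auxiliary extrapolation-risk detector, one can score each query point by its distance to the historical training data (names and the k-nearest-neighbour scheme below are my own illustration):

```python
import numpy as np

def extrapolation_risk(train_X, query, k=3):
    """Score a query point by its mean distance to its k nearest training
    points. Large scores suggest a prediction would be an extrapolation.
    (Illustrative stand-in for an auxiliary risk model, not the report's.)"""
    d = np.sort(np.linalg.norm(train_X - query, axis=1))[:k]
    return d.mean()

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))       # historical data cluster
inlier = np.array([0.1, -0.2])          # similar to historical data
outlier = np.array([8.0, 8.0])          # far outside the cluster
print(extrapolation_risk(train, inlier) < extrapolation_risk(train, outlier))  # True
```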
On the Extrapolation Estimates.
Czech Academy of Sciences Publication Activity Database
Gogatishvili, Amiran; Sobukawa, T.
2003-01-01
Roč. 6, č. 1 (2003), s. 97-104. ISSN 1331-4343 R&D Projects: GA ČR GA201/01/0333 Institutional research plan: CEZ:AV0Z1019905 Keywords: extrapolation theorem * Orlicz class Subject RIV: BA - General Mathematics Impact factor: 0.316, year: 2003
Koutchouk, Jean-Pierre; Ptitsyn, V I
2001-01-01
The multipolar content of the dipoles and quadrupoles is known to limit the stability of the beam dynamics in super-conducting machines like RHIC and even more in LHC. The low-beta quadrupoles are thus equipped with correcting coils up to the dodecapole order. The correction is planned to rely on magnetic measurements. We show that a relatively simple method allows an accurate measurement of the multipolar field aberrations using the beam. The principle is to displace the beam in the non-linear fields by local closed orbit bumps and to measure the variation of sensitive beam observable. The resolution and robustness of the method are found appropriate. Experimentation at RHIC showed clearly the presence of normal and skew sextupolar field components in addition to a skew quadrupolar component in the interaction regions. Higher-order components up to decapole order appear as well.
Ecotoxicological effects extrapolation models
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II
1996-09-01
One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.
A fast marching approach to multidimensional extrapolation
McCaslin, Jeremy O.; Courtine, Émilien; Desjardins, Olivier
2014-10-01
A computationally efficient approach to extrapolating a data field with second order accuracy is presented. This is achieved through the sequential solution of non-homogeneous linear static Hamilton-Jacobi equations, which can be performed rapidly using the fast marching methodology. In particular, the method relies on a fast marching calculation of the distance from the manifold Γ that separates the subdomain Ωin, over which the quantity is known, from the subdomain Ωout, over which the quantity is to be extrapolated. A parallel algorithm is included and discussed in the appendices. Results are compared to the multidimensional partial differential equation (PDE) extrapolation approach of Aslam (Aslam (2004) [31]). It is shown that the rate of convergence of the extrapolation within a narrow band near Γ is controlled by both the number of successive extrapolations performed and the order of accuracy of the spatial discretization. For m successive extrapolating steps and a spatial discretization scheme of order N, the rate of convergence in a narrow band is shown to be min(N+1,m+1). Results show that for a wide range of error levels, the fast marching extrapolation strategy leads to dramatic improvements in computational cost when compared to the PDE approach.
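A one-dimensional sketch of PDE-based extrapolation in the spirit of Aslam's approach may help fix ideas. Only the constant-extrapolation step is shown; the full method applies the same machinery sequentially to derivatives to reach higher order, and the fast marching variant replaces the pseudo-time iteration:

```python
import numpy as np

# Grid with the interface at x = 0: u is known for x < 0 and must be
# extended into x > 0 along the (here, +x) normal direction.
n, h = 40, 0.1
x = -2.0 + h * np.arange(n)
known = x < 0.05                    # half-cell tolerance at the interface
u = np.where(known, np.cos(x), 0.0)

# Pseudo-time iteration of du/dtau + du/dx = 0 with first-order upwind
# differences; its steady state satisfies u[i] = u[i-1], i.e. constant
# extrapolation of the last known value into the unknown region.
dtau = 0.5 * h                      # CFL-stable pseudo-time step
for _ in range(400):                # iterate to steady state
    for i in range(1, n):
        if not known[i]:
            u[i] -= (dtau / h) * (u[i] - u[i - 1])

print(abs(u[-1] - u[known][-1]) < 1e-8)  # extension carries the boundary value
```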
Load Extrapolation During Operation for Wind Turbines
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2008-01-01
In the recent years load extrapolation for wind turbines has been widely considered in the wind turbine industry. Loads on wind turbines during operations are normally dependent on the mean wind speed, the turbulence intensity and the type and settings of the control system. All these parameters must be taken into account when characteristic load effects during operation are determined. In the wind turbine standard IEC 61400-1 a method for load extrapolation using the peak over threshold meth...
Edwards, A. R.
1998-01-01
As an application-oriented and multidisciplinary science, public administration needs a methodology of its own, alongside the general social-science methodology for empirical research. This methodology for public administration could start from an argumentative approach, aimed at strengthening the quality of the practical reasoning underlying administrative action. This article indicates how the praxeological method developed by Brasz...
Scholze, Martin; Silva, Elisabete; Kortenkamp, Andreas
2014-01-01
Dose addition, a commonly used concept in toxicology for the prediction of chemical mixture effects, cannot readily be applied to mixtures of partial agonists with differing maximal effects. Due to its mathematical features, effect levels that exceed the maximal effect of the least efficacious compound present in the mixture cannot be calculated. This poses problems when dealing with mixtures likely to be encountered in realistic assessment situations, where chemicals often show differing maximal effects. To overcome this limitation, we developed a pragmatic solution that extrapolates the toxic units of partial agonists to effect levels beyond their maximal efficacy. We extrapolated different additivity expectations that reflect theoretically possible extremes and validated this approach with a mixture of 21 estrogenic chemicals in the E-Screen. This assay measures the proliferation of human epithelial breast cancer cells. We found that the dose-response curves of the estrogenic agents exhibited widely varying shapes, slopes and maximal effects, which made it necessary to extrapolate mixture responses above 14% proliferation. Our toxic unit extrapolation approach predicted all mixture responses accurately. It extends the applicability of dose addition to combinations of agents with differing saturating effects and removes an important bottleneck that has severely hampered the use of dose addition in the past. PMID:24533151
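The toxic-unit extrapolation itself is not given in the abstract; the baseline dose-addition prediction that it extends can be sketched with the standard formula (the fractions and ECx values below are illustrative, not the study's):

```python
def ecx_mixture(fractions, ecx_values):
    """Dose addition: the mixture concentration producing effect x is
    ECx_mix = 1 / sum(p_i / ECx_i), where p_i is the fraction of
    component i in the mixture and ECx_i its individual ECx."""
    assert abs(sum(fractions) - 1.0) < 1e-9
    return 1.0 / sum(p / e for p, e in zip(fractions, ecx_values))

# Hypothetical three-component mixture (fractions sum to one):
print(ecx_mixture([0.5, 0.3, 0.2], [2.0, 10.0, 50.0]))
```

The limitation discussed above arises because ECx_i is undefined for a partial agonist whose maximal effect is below x, which is what the toxic-unit extrapolation works around.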
Extrapolation Distances for Pulsed Neutron Experiments
International Nuclear Information System (INIS)
Attention has been drawn in earlier work to the effect of uncertainty in extrapolation distance on the results of pulsed neutron experiments and hence to the need for more accurate knowledge of this parameter. The extrapolated endpoints can be obtained from flux plots and the value for large systems can be deduced from diffusion coefficients. Information from both approaches is given and the dependence of extrapolated endpoint on temperature and on buckling is discussed. Decay times and time-dependent flux plots have been measured in pulsed source experiments on small, accurately-known volumes of water and Dowtherm A (thermex) by the use of a small scintillation detector and a time analyser; a separate scintillation detector or a BF3 counter has been used as a monitor. Spatial harmonic analysis of the flux plots was performed by the method of least squares to obtain the extrapolated endpoints once appropriate corrections had been made to the recorded counts. Some consideration was given to the possibility of testing for the effect of flux distortion near the boundary by successive removal of the outer points, and to the effects on extrapolated endpoint of the flux perturbation produced by the detector. The results presented are mainly for measurements at 20°C in 4-in and 7-in cubic containers lined with cadmium, but very preliminary information was obtained for water at temperatures up to 80°C and equipment is being designed to extend the range of temperatures still further. (author)
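The least-squares analysis can be sketched as fitting a fundamental-mode cosine whose argument contains the extrapolated endpoint d, then choosing the d that best matches the flux plot (the flux data and one-dimensional geometry below are synthetic, not the measurements):

```python
import numpy as np

# Synthetic "measured" fundamental-mode flux in a slab of width a, with the
# flux vanishing a distance d beyond each face:
# phi(x) = A * cos(pi * x / (a + 2*d)), x measured from the centre.
a, d_true, A_true = 10.0, 0.8, 100.0
x = np.linspace(-a / 2, a / 2, 21)
phi = A_true * np.cos(np.pi * x / (a + 2 * d_true))

def residual(d):
    """The amplitude enters linearly, so fit A in closed form for each d."""
    basis = np.cos(np.pi * x / (a + 2 * d))
    A = (basis @ phi) / (basis @ basis)
    return np.sum((phi - A * basis) ** 2)

# One-dimensional search over the extrapolated endpoint:
d_grid = np.linspace(0.1, 2.0, 191)
d_fit = d_grid[np.argmin([residual(d) for d in d_grid])]
print(round(d_fit, 2))  # recovers d_true = 0.8
```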
Load Extrapolation During Operation for Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2008-01-01
In the recent years load extrapolation for wind turbines has been widely considered in the wind turbine industry. Loads on wind turbines during operations are normally dependent on the mean wind speed, the turbulence intensity and the type and settings of the control system. All these parameters must be taken into account when characteristic load effects during operation are determined. In the wind turbine standard IEC 61400-1 a method for load extrapolation using the peak over threshold method is recommended. In this paper this method is considered and some of the assumptions are examined. The statistical uncertainty related to the limited number of simulations of the response during operation is explored together with the influence of the threshold value.
Fuzzy Model Comparison to Extrapolate Rainfall Data
C. Tzimopoulos; L. Mpallas; C. Evangelides
2008-01-01
This research presents two fuzzy rule-based models for extrapolating the missing rainfall data records of a station, utilizing as a reference the values from another meteorological station located in an adjacent area. The first one is constructed based on the least squares algorithm and the second one using ANFIS method. Three stations were used in this research, all located in Northern Greece. The values of Thessaloniki station were used as fuzzy premises and the values of Sindos and K...
Novel Extrapolation for Strong Coupling Expansions
International Nuclear Information System (INIS)
We present a novel extrapolation scheme for high order series expansions. The idea is to express the series, obtained in orders of an external variable, in terms of an internal parameter of the system. Here we apply this method to the 1-triplet dispersion in an antiferromagnetic S = 1/2 Heisenberg ladder. By the use of the internal parameter the accuracy of the truncated series is enhanced tremendously. (author)
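The internal-parameter scheme itself cannot be reconstructed from this summary; as a baseline for comparison, the classic Shanks transformation (a standard extrapolation scheme for such series) accelerates a slowly convergent sequence as follows:

```python
import math

def shanks(seq):
    """One Shanks transformation:
    S_n = (A[n+1]*A[n-1] - A[n]**2) / (A[n+1] + A[n-1] - 2*A[n]),
    which removes the dominant geometric transient from the sequence."""
    return [
        (seq[n + 1] * seq[n - 1] - seq[n] ** 2)
        / (seq[n + 1] + seq[n - 1] - 2 * seq[n])
        for n in range(1, len(seq) - 1)
    ]

# Partial sums of ln 2 = 1 - 1/2 + 1/3 - ... (slowly convergent)
partial = [sum((-1) ** (k + 1) / k for k in range(1, n + 1)) for n in range(1, 10)]
accel = shanks(partial)
print(abs(accel[-1] - math.log(2)) < abs(partial[-1] - math.log(2)))  # True
```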
Residual extrapolation operators for efficient wavefield construction
Alkhalifah, T.
2013-02-27
Solving the wave equation using finite-difference approximations allows for fast extrapolation of the wavefield for modelling, imaging and inversion in complex media. It, however, suffers from dispersion and stability-related limitations that might hamper its efficient or proper application to high frequencies. Spectral-based time extrapolation methods tend to mitigate these problems, but at an additional cost to the extrapolation. I investigate the prospect of using a residual formulation of the spectral approach, along with utilizing Shanks transform-based expansions that adhere to the residual requirements, to improve accuracy and reduce the cost. Utilizing the fact that spectral methods excel (time steps are allowed to be large) in homogeneous and smooth media, the residual implementation based on velocity perturbation optimizes the use of this feature. Most other implementations based on the spectral approach focus on reducing cost by reducing the number of inverse Fourier transforms required in every step of the spectral-based implementation. The approach here addresses that by improving the accuracy of each, potentially longer, time step.
Uncertainties of Euclidean time extrapolation in lattice effective field theory
International Nuclear Information System (INIS)
Extrapolations in Euclidean time form a central part of nuclear lattice effective field theory (NLEFT) calculations using the projection Monte Carlo method, as the sign problem in many cases prevents simulations at large Euclidean time. We review the next-to-next-to-leading order NLEFT results for the alpha nuclei up to 28Si, with emphasis on the Euclidean time extrapolations, their expected accuracy and potential pitfalls. We also discuss possible avenues for improving the reliability of Euclidean time extrapolations in NLEFT. (paper)
3D Hail Size Distribution Interpolation/Extrapolation Algorithm
Lane, John
2013-01-01
Radar data can usually detect hail; however, it is difficult for present day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.
Extrapolations to critical for systems with large inherent sources
International Nuclear Information System (INIS)
An approach to delayed critical experiment was performed in 1981 at Pacific Northwest Laboratory with a cylindrical tank of plutonium-uranium nitrate solution. During this experiment, various methods to determine the critical height were used, including (1) extrapolation of the usual plot of inverse count rate vs. height, which estimates the delayed critical height (DCH); (2) the inverse count rate vs. height divided by count rate, which corrects somewhat for the change in inherent source size as the height changes; (3) ratio of spectral densities vs. height, which extrapolates to the DCH; (4) extrapolation of prompt neutron decay constant vs. height, which extrapolates to the prompt critical height (PCH); and (5) inverse kinetics rod drop (IKRD) methods, which measure Δk/kβ very accurately for a particular solution height. The problem with some of the extrapolation methods is that the measured data are not linear with height, but, for lack of anything better, linear extrapolations are made. In addition to the measurements to determine the delayed critical height, subcriticality measurements by the 252Cf-source-driven frequency analysis method were performed for a variety of subcritical heights. This paper describes how all these methods were applied to obtain the critical height of a cylindrical tank of plutonium nitrate solution and how the subcritical neutron multiplication factor was obtained
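Method (1) can be sketched directly: plot 1/C against solution height and extrapolate the linear fit to 1/C = 0, which estimates the critical height. The count rates below are synthetic illustrations, not the experiment's data:

```python
import numpy as np

# Synthetic approach-to-critical data: near critical, C ~ S / (1 - k(h))
# with k roughly linear in height h, so 1/C is roughly linear in h.
heights = np.array([20.0, 25.0, 30.0, 35.0])       # cm
counts = np.array([400.0, 600.0, 1200.0, 6000.0])  # counts per second

slope, intercept = np.polyfit(heights, 1.0 / counts, 1)
h_critical = -intercept / slope    # root of the linear fit: 1/C -> 0
print(h_critical)                  # estimated (delayed) critical height, cm
```

The nonlinearity of 1/C with height noted in the abstract is exactly why such a linear extrapolation can mislead when made far from critical.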
BESIII track extrapolation and matching
International Nuclear Information System (INIS)
A GEANT4 based, Object-Oriented package, TrkExtAlg, is developed for extrapolating the BESIII MDC track into the outer sub-detectors. The magnetic deflection and ionization loss of the particle in the BESIII detector are considered; the algorithm supplies the position and the momentum of the track in all the outer sub-detectors, as well as the error matrix at any given hit point, with the multiple scattering effect taken into account. By comparing the results of track extrapolation with the results of a full simulation and checking the results and application of track matching with hits in the outer sub-detectors, TrkExtAlg is shown to be reliable. (authors)
UFOs: Observations, Studies and Extrapolations
Baer, T; Barnes, M J; Bartmann, W; Bracco, C; Carlier, E; Cerutti, F; Dehning, B; Ducimetière, L; Ferrari, A; Ferro-Luzzi, M; Garrel, N; Gerardin, A; Goddard, B; Holzer, E B; Jackson, S; Jimenez, J M; Kain, V; Zimmermann, F; Lechner, A; Mertens, V; Misiowiec, M; Nebot Del Busto, E; Morón Ballester, R; Norderhaug Drosdal, L; Nordt, A; Papotti, G; Redaelli, S; Uythoven, J; Velghe, B; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zerlauth, M; Fuster Martinez, N
2012-01-01
UFOs (“Unidentified Falling Objects”) could be one of the major performance limitations for nominal LHC operation. Therefore, in 2011, the diagnostics for UFO events were significantly improved, and dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge is summarized and extrapolations for LHC operation in 2012 and beyond are presented. Mitigation strategies are proposed and related tests and measures for 2012 are specified.
Extrapolating future Arctic ozone losses
Knudsen, B. M.; Andersen, S. B.; Christiansen, B.; Larsen, N.; Rex, M.; Harris, N. R. P.; Naujokat, B.
2004-01-01
Future increases in the concentration of greenhouse gases and water vapour may cool the stratosphere further and increase the amount of polar stratospheric clouds (PSCs). Future Arctic PSC areas have been extrapolated from the highly significant trends over 1958-2001. Using a tight correlation between PSC area and total vortex ozone depletion, and taking the decreasing amounts of ozone-depleting substances into account, we make empirical estimates of future ozone. The result is that Arctic ozone...
Extrapolations to critical for systems with large inherent sources
International Nuclear Information System (INIS)
An experiment was performed in 1981 at Pacific Northwest Laboratory with a cylindrical tank of plutonium-uranium nitrate solution. During this experiment, various methods to determine the critical height were used, including (1) extrapolation of the usual plot of inverse count rate vs. height, which estimates the delayed critical height (DCH); (2) the inverse count rate vs. height divided by count rate, which corrects somewhat for the change in inherent source size as the height changes; (3) ratio of spectral densities vs. height, which extrapolates to the DCH; (4) extrapolation of the prompt neutron decay constant vs. height, which extrapolates to the prompt critical height (PCH); and (5) inverse kinetics rod drop (IKRD) methods, which measure Δk/k very accurately for a particular solution height.
International Nuclear Information System (INIS)
Study of the reaction π⁻p → π⁻π⁰p at 2.77 GeV/c, carried out in the CERN 2 m liquid-hydrogen bubble chamber at the proton synchrotron, shows that 70 per cent of this reaction goes through the π⁻p → ρ⁻p channel. The high statistics allow us to specify the mass and the width of the ρ⁻ resonance. On the other hand, while the ρ⁻ production parameters are independent of the ρ⁻ width, the same is not true of the decay parameters. In the second part, the Chew-Low extrapolation method allows us to determine the π⁻π⁰ elastic cross section at the pole, and the phase shifts of the P waves in the isospin-1 state and the S waves in the isospin-2 state. (author)
Application of Curve Fitting Extrapolation in Measuring Transient Surface Temperature
Directory of Open Access Journals (Sweden)
Xiaojian Hao
2013-08-01
Full Text Available The engine inner-wall surface temperature was measured by the plug blind-hole extrapolation method, with multiple thermocouples installed at different depths in the substrate. An extrapolation model for the transient high temperature of the engine wall was established according to the basic principles of heat transfer. The transient temperatures measured by the thermocouples buried at different depths in the engine wall were fitted with a curve. The transient temperature field generated by three oxy-hydrogen flame guns was used to simulate the transient high-temperature field inside the engine wall. The simulated inner-wall surface temperature curves of the engine were obtained by curve-fitting extrapolation from the temperature sensors and by an infrared thermometer, respectively; the two show good agreement in the overall trend and at the peak point, verifying the correctness of the extrapolation model and method.
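The extrapolation step can be sketched as follows, assuming (hypothetically) a quadratic temperature profile through the wall and illustrative thermocouple readings; the fitted profile is evaluated at zero depth to estimate the inner-surface temperature:

```python
import numpy as np

# Hypothetical thermocouple readings at one instant, at increasing depths
# below the inner wall surface (illustrative values only).
depths = np.array([0.5, 1.0, 2.0, 3.0])          # mm from inner surface
temps = np.array([950.0, 820.0, 640.0, 520.0])   # deg C

# Fit a quadratic temperature profile T(x) through the wall thickness and
# extrapolate to x = 0 to estimate the inner-surface temperature.
coeffs = np.polyfit(depths, temps, 2)
surface_temp = np.polyval(coeffs, 0.0)
```

In practice the profile shape would come from the heat-transfer model mentioned in the abstract, not from an assumed polynomial degree.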
International Nuclear Information System (INIS)
The corrosion inhibition characteristics of non-ionic surfactants of the TRITON-X series, known as TRITON-X-100 (TX-100), TRITON-X-165 (TX-165) and TRITON-X-305 (TX-305), on iron in 1.0 M HCl solution were studied. Measurements were conducted in 1.0 M HCl solutions without and with various concentrations of the three selected surfactants using chemical (ICP-AES method of analysis of dissolved cations) and electrochemical (Tafel polarisation and EFM) techniques at 25 deg. C. These measurements were complemented with SEM and EDX examinations of the electrode surface. Polarisation data showed that the non-ionic surfactants used in this study acted as mixed-type inhibitors with cathodic predominance. The protection efficiency increased with increase in surfactant concentration. Maximum protection efficiency of the surfactant was observed at concentrations around its CMC. From their molecular structure, these surfactants may adsorb on the metal surface through two lone pairs of electrons on the oxygen atoms of the hydrophilic head group.
Aschwanden, Markus J; Liu, Yang
2014-01-01
We developed a coronal non-linear force-free field (COR-NLFFF) forward-fitting code that fits an approximate non-linear force-free field (NLFFF) solution to the observed geometry of automatically traced coronal loops. In contrast to photospheric NLFFF codes, which calculate a magnetic field solution from the constraints of the transverse photospheric field, this new code uses coronal constraints instead, and this way provides important information on systematic errors of each magnetic field calculation method, as well as on the non-forcefreeness in the lower chromosphere. In this study we applied the COR-NLFFF code to active region NOAA 11158, during the time interval of 2011 Feb 12 to 17, which includes an X2.2 GOES-class flare plus 35 M and C-class flares. We calculated the free magnetic energy with a 6-minute cadence over 5 days. We find good agreement between the two types of codes for the total nonpotential $E_N$ and potential energy $E_P$, but find up to a factor of 4 discrepancy in the free ...
International Nuclear Information System (INIS)
We developed a coronal nonlinear force-free field (COR-NLFFF) forward-fitting code that fits an approximate nonlinear force-free field (NLFFF) solution to the observed geometry of automatically traced coronal loops. In contrast to photospheric NLFFF codes, which calculate a magnetic field solution from the constraints of the transverse photospheric field, this new code uses coronal constraints instead, and this way provides important information on systematic errors of each magnetic field calculation method, as well as on the non-force-freeness in the lower chromosphere. In this study we applied the COR-NLFFF code to NOAA Active Region 11158, during the time interval of 2011 February 12-17, which includes an X2.2 GOES-class flare plus 35 M- and C-class flares. We calculated the free magnetic energy with a 6 minute cadence over 5 days. We find good agreement between the two types of codes for the total nonpotential energy E_N and potential energy E_P, but find up to a factor of 4 discrepancy in the free energy E_free = E_N − E_P and up to a factor of 10 discrepancy in the decrease of the free energy ΔE_free during flares. The coronal NLFFF code exhibits a larger time variability and yields a decrease of free energy during the flare that is sufficient to satisfy the flare energy budget, while the photospheric NLFFF code shows much less time variability and an order of magnitude less free-energy decrease during flares. The discrepancy may partly be due to the preprocessing of photospheric vector data but more likely is due to the non-force-freeness in the lower chromosphere. We conclude that the coronal field cannot be correctly calculated on the basis of photospheric data alone and requires additional information on coronal loop geometries.
Chiral extrapolation beyond the power-counting regime
International Nuclear Information System (INIS)
Chiral effective field theory can provide valuable insight into the chiral physics of hadrons when used in conjunction with nonperturbative schemes such as lattice quantum chromodynamics (QCD). In this discourse, the attention is focused on extrapolating the mass of the ρ meson to the physical pion mass in quenched QCD. With the absence of a known experimental value, this serves to demonstrate the ability of the extrapolation scheme to make predictions without prior bias. By using extended effective field theory developed previously, an extrapolation is performed using quenched lattice QCD data that extend outside the chiral power-counting regime. The method involves an analysis of the renormalization flow curves of the low-energy coefficients in a finite-range regularized effective field theory. The analysis identifies an optimal regularization scale, which is embedded in the lattice QCD data themselves. This optimal scale is the value of the regularization scale at which the renormalization of the low-energy coefficients is approximately independent of the range of quark masses considered. By using recent precision quenched lattice results, the extrapolation is tested directly by truncating the analysis to a set of points above 380 MeV, while temporarily disregarding the simulation results closer to the chiral regime. This tests the ability of the method to make predictions of the simulation results without phenomenologically motivated bias. The result is a successful extrapolation to the chiral regime.
Extrapolations of nuclear binding energies from new linear mass relations
DEFF Research Database (Denmark)
Hove, D.; Jensen, A. S.
2013-01-01
We present a method to extrapolate nuclear binding energies from known values for neighboring nuclei. We select four specific mass relations constructed to eliminate the smooth variation of the binding energy as a function of nucleon numbers. The fast odd-even variations are avoided by comparing nuclei of the same parity. The mass relations are first tested and shown either to be rather accurately obeyed or to reveal signatures of quickly varying structures. Extrapolations are initially made for a nucleus by applying each of these relations. Very reliable estimates are then produced either by an average or by choosing the extrapolation where the smoothest structures enter. Corresponding mass relations for Qα values are used to study the general structure of superheavy elements. A minor neutron shell at N=152 is seen, but no sign of other shell structures is apparent in the superheavy region. Accuracies are typically substantially better than 0.5 MeV.
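The idea of mass relations that cancel the smooth part of the binding energy can be illustrated with a simple second-difference relation (not the four specific relations of the paper) applied to a toy liquid-drop formula:

```python
# Illustrative second-difference mass relation: B(Z, N-2) + B(Z-2, N)
# - B(Z-2, N-2) reproduces B(Z, N) exactly for any contribution linear
# in Z and N, and steps of 2 keep the proton/neutron parity fixed.

def extrapolate_binding(B, Z, N):
    """Estimate B(Z, N) from three neighbours of the same parity."""
    return B(Z, N - 2) + B(Z - 2, N) - B(Z - 2, N - 2)

def toy_B(Z, N):
    """A smooth, liquid-drop-like toy binding energy in MeV (illustrative)."""
    A = Z + N
    return 15.5 * A - 17.2 * A ** (2 / 3) - 0.71 * Z * (Z - 1) / A ** (1 / 3)

estimate = extrapolate_binding(toy_B, 50, 70)
error = abs(estimate - toy_B(50, 70))  # small: the smooth terms nearly cancel
```

For this smooth toy formula the residual error is a fraction of an MeV, consistent with the sub-0.5 MeV accuracies quoted in the abstract; real data add the quickly varying structures the paper discusses.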
Chiral extrapolation beyond the power-counting regime
Hall, J M M; Leinweber, D B; Liu, K F; Mathur, N; Young, R D; Zhang, J B
2011-01-01
Chiral effective field theory can provide valuable insight into the chiral physics of hadrons when used in conjunction with non-perturbative schemes such as lattice QCD. In this discourse, the attention is focused on extrapolating the mass of the rho meson to the physical pion mass in quenched QCD (QQCD). With the absence of a known experimental value, this serves to demonstrate the ability of the extrapolation scheme to make predictions without prior bias. By using extended effective field theory developed previously, an extrapolation is performed using quenched lattice QCD data that extends outside the chiral power-counting regime (PCR). The method involves an analysis of the renormalization flow curves of the low energy coefficients in a finite-range regularized effective field theory. The analysis identifies an optimal regulator, which is embedded in the lattice QCD data themselves. This optimal regulator is the regulator value at which the renormalization of the low energy coefficients is approximately i...
Design and building of an extrapolation ionization chamber for beta dosimetry
International Nuclear Information System (INIS)
An extrapolation chamber was designed and built to be used in beta dosimetry. The basic characteristics of an extrapolation chamber are discussed, together with the fundamental principle of the dosimetric method used. Details of the chamber's design and the properties of the materials employed are presented. A full evaluation of the extrapolation chamber under irradiation from two 90Sr + 90Y beta sources was performed. The geometric parameters of the chamber, the leakage current and the ion collection efficiency were determined. (Author)
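The principle behind such a chamber can be sketched numerically: the ionization current is measured at several electrode separations, and the limiting slope dI/dd as the gap tends to zero carries the dose information. The readings below are hypothetical:

```python
import numpy as np

# Hypothetical extrapolation-chamber readings: ionization current
# measured at several electrode separations (illustrative values only).
gaps = np.array([0.5, 1.0, 1.5, 2.0, 2.5])           # electrode gap, mm
currents = np.array([10.2, 20.1, 30.5, 40.3, 50.6])  # pA

# The absorbed dose rate is proportional to the limiting slope dI/dd as
# the gap d -> 0; a straight-line fit gives that slope directly.
slope, intercept = np.polyfit(gaps, currents, 1)  # slope in pA per mm
```

In a real calibration the slope is combined with Bragg-Gray cavity theory and correction factors to yield the absorbed dose rate.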
Extrapolation of power series by self-similar factor and root approximants
Yukalov, V. I.; Gluzman, S.
2004-01-01
The problem of extrapolating the series in powers of small variables to the region of large variables is addressed. Such a problem is typical of quantum theory and statistical physics. A method of extrapolation is developed based on self-similar factor and root approximants, suggested earlier by the authors. It is shown that these approximants and their combinations can effectively extrapolate power series to the region of large variables, even up to infinity. Several exampl...
Outlier robustness for wind turbine extrapolated extreme loads
DEFF Research Database (Denmark)
Natarajan, Anand; Verelst, David Robert
2012-01-01
Methods for extrapolating extreme loads to a 50 year probability of exceedance, which display robustness to the presence of outliers in simulated loads data set, are described. Case studies of isolated high extreme out-of-plane loads are discussed to emphasize their underlying physical reasons. Stochastic identification of numerical artifacts in simulated loads is demonstrated using the method of principal component analysis. The extrapolation methodology is made robust to outliers through a weighted loads approach, whereby the eigenvalues of the correlation matrix obtained using the loads with its dependencies is utilized to estimate a probability for the largest extreme load to occur at a specific mean wind speed. This inherently weights extreme loads that occur frequently within mean wind speed bins higher than isolated occurrences of extreme loads. Primarily, the results for the blade root out-of-plane loads are presented here as those extrapolated loads have shown wide variability in literature, but the method can be generalized to any other component load. The convergence of the 1 year extrapolated extreme blade root out-of-plane load with the number of turbulent wind samples used in the loads simulation is demonstrated and compared with published results. Further effects of varying wind inflow angles and shear exponent is brought out. Parametric fitting techniques that consider all extreme loads including ‘outliers’ are proposed, and the physical reasons that result in isolated high extreme loads are highlighted, including the effect of the wind turbine controls system. Copyright © 2011 John Wiley & Sons, Ltd.
Extrapolation procedures in Mott electron polarimetry
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors which can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
Prediction of long term stability by extrapolation
Parzen, G
2000-01-01
This paper studies the possibility of using the survival function to predict long term stability by extrapolation. The survival function is a function of the initial coordinates and is the number of turns a particle will survive for a given set of initial coordinates. To determine the difficulties in extrapolating the survival function, tracking studies were done to compute the survival function. The survival function was found to have two properties that may cause difficulties in extrapolating the survival function. One is the existence of rapid oscillations, and the second is the existence of plateaus. It was found that it appears possible to extrapolate the survival function to estimate long term stability by taking the two difficulties into account. A model is proposed which pictures the survival function as a series of plateaus with rapid oscillations superimposed on the plateaus. The tracking studies give results for the widths of these plateaus and for the separation between adjacent plateaus which ...
Endangered species toxicity extrapolation using ICE models
The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...
Reexamination of Finite-Lattice Extrapolation of Haldane Gaps
Nakano, Hiroki; Terai, Akira
2008-01-01
We propose two methods of estimating a systematic error in extrapolation to the infinite-size limit in the study of measuring the Haldane gaps of the one-dimensional Heisenberg antiferromagnet with the integer spin up to S=5. The finite-size gaps obtained by numerical diagonalizations based on Lanczos algorithm are presented for sizes that have not previously been reported. The changes of boundary conditions are also examined. We successfully demonstrate that our methods of ...
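Finite-size extrapolation of this kind can be illustrated with a Shanks-type transformation, which recovers the limit exactly whenever the finite-size estimates approach it geometrically. The numbers below are synthetic, not the paper's S = 1 data:

```python
# Shanks-type extrapolation to the infinite-size limit: exact when the
# finite-size gaps behave as gap(N) = gap_inf + a * r**N with equally
# spaced sizes N (here the residuals form a geometric sequence).

def shanks(d1, d2, d3):
    """Extrapolate three successive estimates to their limit."""
    return (d1 * d3 - d2 ** 2) / (d1 + d3 - 2.0 * d2)

gap_inf, a, r = 0.41, 0.8, 0.6                    # assumed toy parameters
gaps = [gap_inf + a * r ** n for n in (4, 6, 8)]  # equally spaced sizes
estimate = shanks(*gaps)                          # recovers gap_inf
```

Real Lanczos gap sequences are not exactly geometric, which is precisely why the paper's systematic-error estimates for the extrapolation matter.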
EXTRAPOLATING BRAIN DEVELOPMENT FROM EXPERIMENTAL SPECIES TO HUMANS
Clancy, Barbara; Finlay, Barbara L; Darlington, Richard B.; Anand, KJS
2007-01-01
To better understand the neurotoxic effects of diverse hazards on the developing human nervous system, researchers and clinicians rely on data collected from a number of model species that develop and mature at varying rates. We review the methods commonly used to extrapolate the timing of brain development from experimental mammalian species to humans, including morphological comparisons, “rules of thumb” and “event-based” analyses. Most are unavoidably limited in range or detail, many are n...
Visek, W J
1988-01-01
The Life Sciences Research Office (LSRO) of the Federation of American Societies for Experimental Biology (FASEB) is conducting this symposium under contract with the Center for Food Safety and Applied Nutrition (CFSAN) of the Food and Drug Administration (FDA). The FDA has requested information on the strengths and weaknesses of current interspecies extrapolation methods using metabolic and pharmacokinetic data, identity of data for these methods, bases for choice of extrapolation method and...
International Nuclear Information System (INIS)
180,000 pictures taken in the 2 m CERN hydrogen bubble chamber with an incident beam of 2.77 GeV/c were examined. The high statistics obtained over the whole angular production range allowed the behaviour of the dσ/dt differential cross section, the mass and width of the ρ meson, and the multipole parameters of this resonance to be studied. Nevertheless, the aim of this experiment was the application of the Chew-Low extrapolation method. Different types of extrapolation procedures were compared. Phase-shift analysis of elastic ππ scattering between 500 and 1100 MeV, performed with conformal mappings, allowed the values of the S0, S2, P1, D0 and D2 waves to be determined. Forward dispersion relations were used to obtain scattering-length values for the S2 and P1 phase shifts. (author)
International Nuclear Information System (INIS)
Since 1987 the Department of Metrology of ININ, in its Secondary Standard Dosimetry Laboratory, has had a standard set of beta radiation sources and an extrapolation chamber with variable electrode separation. Its objective is to realize the unit of absorbed dose rate in air for beta radiation, using the ionometric method (Bragg-Gray cavity) with the extrapolation chamber it possesses. The services offered are: (i) calibration of beta radiation sources (90Sr/90Y isotopes), 90Sr/90Y ophthalmic applicators, instruments for the detection of beta radiation used in radiological protection (ionization chambers, Geiger-Muller counters, etc.), and personal dosimeters; (ii) irradiation of materials with beta radiation for research. (Author)
Survival extrapolation using the poly-Weibull model.
Demiris, Nikolaos; Lunn, David; Sharples, Linda D
2015-04-01
Recent studies of (cost-) effectiveness in cardiothoracic transplantation have required estimation of mean survival over the lifetime of the recipients. In order to calculate mean survival, the complete survivor curve is required but is often not fully observed, so that survival extrapolation is necessary. After transplantation, the hazard function is bathtub-shaped, reflecting latent competing risks which operate additively in overlapping time periods. The poly-Weibull distribution is a flexible parametric model that may be used to extrapolate survival and has a natural competing risks interpretation. In addition, treatment effects and subgroups can be modelled separately for each component of risk. We describe the model and develop inference procedures using freely available software. The methods are applied to two problems from cardiothoracic transplantation. PMID:21937472
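A minimal numerical sketch of the poly-Weibull survivor function described above; the shape and scale parameters are invented for illustration, not the fitted transplant values:

```python
import numpy as np

# Poly-Weibull sketch: overall survival is the product of component
# Weibull survivor functions, S(t) = exp(-sum_i (t/theta_i)**k_i).
# One component with shape k < 1 (early, decreasing risk) plus one with
# k > 1 (late, increasing risk) gives the bathtub-shaped hazard.

def poly_weibull_survival(t, shapes, scales):
    t = np.asarray(t, dtype=float)
    return np.exp(-sum((t / theta) ** k for k, theta in zip(shapes, scales)))

shapes = (0.5, 3.0)   # illustrative early + late risk components
scales = (8.0, 12.0)  # illustrative scales, years

# Mean survival is the integral of S(t); extrapolate over the lifetime by
# integrating numerically far beyond any observed follow-up.
t = np.linspace(0.0, 60.0, 60001)
s = poly_weibull_survival(t, shapes, scales)
mean_survival = float(np.sum((s[1:] + s[:-1]) * np.diff(t)) / 2.0)  # trapezoid rule
```

The competing-risks reading is direct: each additive term in the exponent is one latent risk component, and covariate effects can be attached to each component separately, as the abstract describes.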
Cosmogony as an extrapolation of magnetospheric research
International Nuclear Information System (INIS)
A theory of the origin and evolution of the Solar System (Alfven and Arrhenius, 1975; 1976) which considered electromagnetic forces and plasma effects is revised in the light of new information supplied by space research. In situ measurements in the magnetospheres and solar wind have changed our views of basic properties of cosmic plasmas. These results can be extrapolated both outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of some cloud properties which are essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation makes it possible to approach the cosmogonic processes by extrapolating (rather) well-known magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it seems possible to reconstruct certain events 4-5 billion years ago with an accuracy of a few percent. This will cause a change in our views of the evolution of the solar system. (author)
Effective Orthorhombic Anisotropic Models for Wave field Extrapolation
Ibanez Jacome, Wilson
2013-05-01
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models, to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, I generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, I develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic one, is represented by a sixth order polynomial equation that includes the fastest solution corresponding to outgoing P-waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, which is done by explicitly solving the isotropic eikonal equation for the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
I extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
Conic state extrapolation. [computer program for space shuttle navigation and guidance requirements
Shepperd, S. W.; Robertson, W. M.
1973-01-01
The Conic State Extrapolation Routine provides the capability to conically extrapolate any spacecraft inertial state vector either backwards or forwards as a function of time or as a function of transfer angle. It is merely the coded form of two versions of the solution of the two-body differential equations of motion of the spacecraft center of mass. Because of its relatively fast computation speed and moderate accuracy, it serves as a preliminary navigation tool and as a method of obtaining quick solutions for targeting and guidance functions. More accurate (but slower) results are provided by the Precision State Extrapolation Routine.
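The two-body propagation such a routine performs can be sketched with the classical Lagrange f and g coefficients. This is a simplified sketch restricted to elliptic orbits, not the actual Shuttle routine (which used formulations valid for all conic types); the gravitational parameter and demo orbit are illustrative:

```python
import numpy as np

def conic_extrapolate(r0, v0, dt, mu=398600.4418):
    """Two-body (conic) extrapolation of an elliptic state vector by time dt,
    via Lagrange f and g coefficients. Assumes a bound orbit (a > 0)."""
    r0 = np.asarray(r0, dtype=float)
    v0 = np.asarray(v0, dtype=float)
    r0n = np.linalg.norm(r0)
    a = 1.0 / (2.0 / r0n - np.dot(v0, v0) / mu)  # semi-major axis (vis-viva)
    n = np.sqrt(mu / a**3)                        # mean motion
    sigma0 = np.dot(r0, v0) / np.sqrt(mu)
    M = n * dt
    dE = M  # initial guess for the change in eccentric anomaly
    for _ in range(50):  # Newton's method on Kepler's equation
        f_val = dE - (1 - r0n / a) * np.sin(dE) \
            + sigma0 / np.sqrt(a) * (1 - np.cos(dE)) - M
        f_der = 1 - (1 - r0n / a) * np.cos(dE) \
            + sigma0 / np.sqrt(a) * np.sin(dE)
        step = f_val / f_der
        dE -= step
        if abs(step) < 1e-14:
            break
    f = 1 - (a / r0n) * (1 - np.cos(dE))
    g = dt + np.sqrt(a**3 / mu) * (np.sin(dE) - dE)
    r = f * r0 + g * v0
    rn = np.linalg.norm(r)
    fdot = -np.sqrt(mu * a) * np.sin(dE) / (r0n * rn)
    gdot = 1 - (a / rn) * (1 - np.cos(dE))
    return r, fdot * r0 + gdot * v0

# Demo: a circular orbit advanced a quarter period rotates the state by 90 deg.
mu = 398600.4418                                 # km^3/s^2, Earth
r0 = np.array([7000.0, 0.0, 0.0])                # km
v0 = np.array([0.0, np.sqrt(mu / 7000.0), 0.0])  # circular speed, km/s
quarter_period = 0.25 * 2.0 * np.pi / np.sqrt(mu / 7000.0**3)
r1, v1 = conic_extrapolate(r0, v0, quarter_period)
```

The speed advantage the abstract mentions comes from solving Kepler's equation once per call instead of numerically integrating the equations of motion.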
Knowledge-based antenna pattern extrapolation
Robinson, Michael
2012-01-01
We describe a theoretically-motivated algorithm for extrapolation of antenna radiation patterns from a small number of measurements. This algorithm exploits constraints on the antenna's underlying design to avoid ambiguities, but is sufficiently general to address many different antenna types. A theoretical basis for the robustness of this algorithm is developed, and its performance is verified in simulation using a number of popular antenna designs.
Extrapolation of toxic indices among test objects
Tichý, Miloš; Rucki, Marián; Roth, Zdeněk; Hanzlíková, Iveta; Vlková, Alena; Tumová, Jana; Uzlová, Rút
2010-01-01
Oligochaeta Tubifex tubifex, the fathead minnow (Pimephales promelas), hepatocytes isolated from rat liver and a ciliated protozoan are absolutely different organisms, and yet their acute toxicity indices correlate. Correlation equations for special effects were developed for a large heterogeneous series of compounds (QSAR, quantitative structure-activity relationships). Knowing those correlation equations and their statistical evaluation, one can extrapolate the toxic indices. The reason is that...
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-01-01
Hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to a fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of CTA calibration curve will be of great practical application. In this paper, a novel approach b...
Extrapolation of Fracture Toughness Data for HT9 Irradiated at Temperatures 360-390 C
International Nuclear Information System (INIS)
The objective of this task is to provide estimated HT9 cladding and duct fracture toughness values for test (or application) temperatures ranging from -10 C to 200 C, after irradiation at temperatures of 360-390 C. This is expected to be an extrapolation of the limited data presented by Huang (1, 2). The extrapolation is based on currently accepted methods (ASTM 2003 Standard E 1921-02) and other relevant fracture toughness data on irradiated HT9 or similar alloys.
Chiral and Continuum Extrapolation of Partially-Quenched Hadron Masses
Allton, C R; Leinweber, D B; Thomas, A W; Young, R D
2005-01-01
Using the finite-range regularisation (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyse the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement ~1% with the experimental value of M_rho from the former approach. These results are extended to the case of the nucleon mass.
Acute toxicity value extrapolation with fish and aquatic invertebrates
Buckler, D.R.; Mayer, F.L.; Ellersieck, Mark R.; Asfaw, A.
2005-01-01
Assessment of risk posed by an environmental contaminant to an aquatic community requires estimation of both its magnitude of occurrence (exposure) and its ability to cause harm (effects). Our ability to estimate effects is often hindered by limited toxicological information. As a result, resource managers and environmental regulators are often faced with the need to extrapolate across taxonomic groups in order to protect the more sensitive members of the aquatic community. The goals of this effort were to 1) compile and organize an extensive body of acute toxicity data, 2) characterize the distribution of toxicant sensitivity across taxa and species, and 3) evaluate the utility of toxicity extrapolation methods based upon sensitivity relations among species and chemicals. Although the analysis encompassed a wide range of toxicants and species, pesticides and freshwater fish and invertebrates were emphasized as a reflection of available data. Although it is obviously desirable to have high-quality acute toxicity values for as many species as possible, the results of this effort allow for better use of available information for predicting the sensitivity of untested species to environmental contaminants. A software program entitled "Ecological Risk Analysis" (ERA) was developed that predicts toxicity values for sensitive members of the aquatic community using species sensitivity distributions. Of several methods evaluated, the ERA program used with minimum data sets comprising acute toxicity values for rainbow trout, bluegill, daphnia, and mysids provided the most satisfactory predictions with the least amount of data. However, if predictions must be made using data for a single species, the most satisfactory results were obtained with extrapolation factors developed for rainbow trout (0.412), bluegill (0.331), or scud (0.041). 
Although many specific exceptions occur, our results also support the conventional wisdom that invertebrates are generally more sensitive to contaminants than fish are. ?? 2005 Springer Science+Business Media, Inc.
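The single-species extrapolation factors quoted above can be applied directly; the function name and the LC50 value below are hypothetical, purely for illustration:

```python
# Single-species extrapolation using the factors reported in the abstract
# (rainbow trout 0.412, bluegill 0.331, scud 0.041): multiply the measured
# acute toxicity value to estimate a level protective of sensitive species.

FACTORS = {"rainbow_trout": 0.412, "bluegill": 0.331, "scud": 0.041}

def protective_estimate(species, lc50_ug_per_l):
    """Scale a measured LC50 by the species' extrapolation factor."""
    return FACTORS[species] * lc50_ug_per_l

# e.g. a hypothetical rainbow trout LC50 of 100 ug/L:
estimate = protective_estimate("rainbow_trout", 100.0)
```

As the abstract stresses, such single-species factors are a fallback; the species-sensitivity-distribution approach with a minimum data set gave more satisfactory predictions.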
International Nuclear Information System (INIS)
90Sr+90Y clinical applicators are used for brachytherapy in Brazilian clinics even though they are no longer manufactured. Such sources must be calibrated periodically, and one of the calibration methods in use is ionometry with extrapolation ionization chambers. 90Sr+90Y clinical applicators were calibrated using an extrapolation minichamber developed at the Calibration Laboratory at IPEN. The obtained results agree satisfactorily with the data provided in the calibration certificates of the sources. - Highlights: • 90Sr+90Y clinical applicators were calibrated using a mini-extrapolation chamber. • An extrapolation curve was obtained for each applicator during its calibration. • The results were compared with those provided by the calibration certificates. • All results for the dermatological applicators presented differences lower than 5%.
International Nuclear Information System (INIS)
A virial equation was used to approximate experimental molar volumes at high and low pressures over the experimental temperatures. It was shown that the virial equation, unlike the Tait, logarithmic and other equations, can be used over wide pressure and temperature intervals. The virial parameters obtained by fitting the experimental data were then extrapolated over wide temperature intervals. Direct solution of the third-order virial equation for the molar volume, using the Cardano or Newton methods, was employed to extrapolate experimental dependences from high pressure to low pressure and from low pressure to high and superhigh pressures. The good agreement between the experimental and extrapolated molar volumes versus pressure leads to the conclusion that, for a definite temperature interval, extrapolations to superhigh pressures can be made with high confidence
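The core numerical step described above is solving a cubic in the molar volume. A sketch under stated assumptions: the virial coefficients below are illustrative, not fitted values, and `numpy.roots` stands in for the Cardano/Newton solution used in the paper:

```python
# Sketch of solving a third-order virial equation for molar volume.
# B and C below are illustrative virial coefficients, not fitted values.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def molar_volume(p, t, b, c):
    """Largest real positive root of  p*V^3 - R*T*V^2 - R*T*b*V - R*T*c = 0,
    i.e. the virial series p = (R*T/V) * (1 + b/V + c/V^2)."""
    roots = np.roots([p, -R * t, -R * t * b, -R * t * c])
    real = roots[np.isreal(roots)].real
    return float(max(real[real > 0]))

# Sanity check: with small b, c the result stays close to the ideal-gas
# volume R*T/p at moderate pressure.
v = molar_volume(1.0e5, 300.0, 1.0e-5, -1.0e-9)
print(abs(v - R * 300.0 / 1.0e5) / v < 0.01)  # True
```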
Extrapolation from experimental systems to man. A review of the problems and the possibilities
International Nuclear Information System (INIS)
Various species of experimental animals, but in particular the mouse, have proved to be good model systems for predicting qualitatively the human response to irradiation. While extrapolations of genetic risks from mice to humans have a long history and a record of considerable success, there have been few attempts to extrapolate quantitatively the findings for somatic effects. An ability to extrapolate risks from exposures to various carcinogenic agents from experimental animal systems and from in vitro systems is an urgent need, and radiation studies provide the model for the development of suitable methods of extrapolation. Accurate measurement of dose, a remarkable store of knowledge about radiobiological responses at the molecular, cellular, and whole-organism level, and the body of data on radiation effects in both man and experimental animals make radiation studies the sensible choice of a model for the development of methods of extrapolation. The principles derived from such studies will make the much more difficult task of extrapolating risks from exposures to chemical carcinogens an easier one
Extrapolation of Extreme Response for Wind Turbines based on Field Measurements
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2009-01-01
The characteristic loads on wind turbines during operation depend on, among other factors, the mean wind speed, the turbulence intensity, and the type and settings of the control system. These parameters must be taken into account in the assessment of the characteristic load. The characteristic load is normally determined by statistical extrapolation of the simulated response during operation according to IEC 61400-1 (2005). However, this method assumes that the individual 10-min time series are independent and that the peaks extracted from them are independent. In the present paper, two new methods for loads extrapolation are presented. The first method is based on the same assumptions as the existing method, but the statistical extrapolation is performed only for a limited number of mean wind speeds where the extreme load is likely to occur. For the second method, the mean wind speeds are divided into storms, which are assumed independent, and the characteristic loads are determined from the extreme load in each storm.
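A minimal sketch of the kind of statistical loads extrapolation referred to above, on synthetic data: a Gumbel distribution is fitted to 10-min extremes by the method of moments and extrapolated to a 50-year characteristic load. The distribution choice, fitting method, and all numbers are illustrative assumptions, not the standard's prescription:

```python
# Toy statistical extrapolation of operational extremes: fit a Gumbel
# distribution (method of moments) to synthetic 10-min maxima and
# extrapolate to a 50-year characteristic load.
import numpy as np

rng = np.random.default_rng(0)
extremes = 5000.0 + 300.0 * rng.gumbel(size=200)  # synthetic 10-min maxima

gamma = 0.5772156649  # Euler-Mascheroni constant
beta = extremes.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = extremes.mean() - gamma * beta

# Number of 10-min periods in 50 years (assumed independent).
n_periods = 50 * 365.25 * 24 * 6
char_load = mu - beta * np.log(-np.log(1.0 - 1.0 / n_periods))
print(char_load > extremes.max())  # extrapolated load exceeds observed maxima
```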
Smooth extrapolation of unknown anatomy via statistical shape models
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge in the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), feathering between the patient surface and the surface estimate, and an estimate generated via a Thin Plate Spline trained on displacements between the surface estimate and corresponding vertices of the known patient surface. The feathering and Thin Plate Spline approaches both yielded smooth transitions; however, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
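The Thin Plate Spline step above can be sketched with SciPy's RBF interpolator. Everything here is a synthetic stand-in: random points replace mesh vertices, and the displacement field is invented; the paper's actual training and warping pipeline is more involved:

```python
# Sketch of TPS blending: train a thin plate spline on displacements
# between an SSM estimate and the known patient surface, then apply it
# to warp the estimated (unknown) region smoothly. Synthetic data only.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
known_pts = rng.uniform(0, 1, size=(50, 3))      # vertices with known truth
displacements = 0.01 * known_pts[:, :1] + 0.002  # invented SSM-to-truth offsets

tps = RBFInterpolator(known_pts, displacements, kernel="thin_plate_spline")

unknown_pts = rng.uniform(0, 1, size=(10, 3))    # vertices only the SSM predicts
corrected = unknown_pts[:, :1] + tps(unknown_pts)
print(corrected.shape)  # (10, 1)
```

Because no smoothing is applied, the spline reproduces the training displacements exactly at the known vertices, which is what keeps the true surface intact while the unknown region is warped.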
Li, C.; Nowack, R. L.; Pyrak-Nolte, L.
2003-12-01
Seismic tomographic experiments in soil and rock are strongly affected by limited and non-uniform ray coverage. We propose a new method to extrapolate data used for seismic tomography to full coverage. The proposed two-stage autoregressive extrapolation technique can be used to extend the available data and provide better tomographic images. The algorithm is based on the principle that the extrapolated data adds minimal information to the existing data. A two-stage autoregressive (AR) extrapolation scheme is then applied to the seismic tomography problem. The first stage of the extrapolation is to find the optimal prediction-error filter (PE filter). For the second stage, we use the PE filter to find the values for the missing data so that the power out of the PE filter is minimized. At the second stage, we are able to estimate missing data values with the same spectrum as the known data. This is similar to maximizing an entropy criterion. Synthetic tomographic experiments have been conducted and demonstrate that the two-stage AR extrapolation technique is a powerful tool for data extrapolation and can improve the quality of tomographic inversions of experimental and field data. Moreover, the two-stage AR extrapolation technique is tolerant to noise in the data and can still extrapolate the data to obtain overall patterns, which is very important for real data applications. In this study, we have applied AR extrapolation to a series of datasets from laboratory tomographic experiments on synthetic sediments with known structure. In these tomographic experiments, glass beads saturated with de-ionized water were used as the synthetic water-saturated background sediments. The synthetic sediments were packed in plastic cylindrical containers with a diameter of 220 mm. Tomographic experiments were then set up to measure transmitted acoustic waves through the sediment samples from multiple directions. 
We recorded data for sources and receivers with varying angular coverage and used the data to tomographically reconstruct the internal sediment structures. The new tomographic inversion strategies using AR extrapolation should enable better delineation of structures in soil and rock, which is important for characterizing the near-surface. Acknowledgments: LJPN acknowledges the Purdue University Faculty Scholar program.
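A toy version of the two-stage AR idea described above: stage one estimates a prediction-error filter from the known trace (here by least squares, an assumption; the paper's estimation method may differ), and stage two fills missing samples so the filter output power stays minimal, which for this simple case reduces to recursive AR prediction. The data is an invented stand-in for a seismic trace:

```python
# Toy two-stage AR extrapolation: fit a prediction-error (PE) filter,
# then extend the trace by recursive prediction with that filter.
import numpy as np

def fit_pe_filter(x, order):
    """Least-squares AR coefficients a so that x[n] ~ sum_k a[k]*x[n-1-k]."""
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    A, b = np.asarray(rows), x[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    return a

def extrapolate(x, order, n_extra):
    a = fit_pe_filter(x, order)
    out = list(x)
    for _ in range(n_extra):
        out.append(float(np.dot(a, out[-order:][::-1])))
    return np.array(out)

t = np.linspace(0, 4 * np.pi, 80)
known = np.sin(t)                      # stand-in for a recorded trace
full = extrapolate(known, order=4, n_extra=20)
print(full.shape)  # (100,)
```

For a clean sinusoid the fitted filter annihilates the signal exactly, so the extrapolated samples continue the oscillation rather than diverging.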
Scintillation counting: an extrapolation into the future
International Nuclear Information System (INIS)
Progress in scintillation counting is intimately related to advances in a variety of other disciplines such as photochemistry, photophysics, and instrumentation. And while there is steady progress in the understanding of luminescent phenomena, there is a virtual explosion in the application of semiconductor technology to detectors, counting systems, and data processing. The exponential growth of this technology has had, and will continue to have, a profound effect on the art of scintillation spectroscopy. This paper will review key events in technology that have had an impact on the development of scintillation science (solid and liquid) and will attempt to extrapolate future directions based on existing and projected capability in associated fields. Along the way there have been occasional pitfalls and several false starts; these too will be discussed as a reminder that if you want the future to be different than the past, study the past
Experiences and extrapolations from Hiroshima and Nagasaki
International Nuclear Information System (INIS)
This paper examines the events following the atomic bombings of Hiroshima and Nagasaki in 1945 and extrapolates from these experiences to further understand the possible consequences of detonations on a local area from weapons in the current world nuclear arsenal. The first section deals with a report of the events that occurred in Hiroshima and Nagasaki just after the 1945 bombings with respect to the physical conditions of the affected areas, the immediate effects on humans, the psychological response of the victims, and the nature of outside assistance. Because there can be no experimental data to validate the effects on cities and their populations of detonations from current weapons, the data from the actual explosions on Hiroshima and Nagasaki provide a point of departure. The second section examines possible extrapolations from and comparisons with the Hiroshima and Nagasaki experiences. The limitations of drawing upon the Hiroshima and Nagasaki experiences are discussed. A comparison is made of the scale of effects from other major disasters for urban systems, such as damages from the conventional bombings of cities during World War II, the consequences of major earthquakes, the historical effects of the Black Plague and widespread famines, and other extreme natural events. The potential effects of detonating a modern 1 MT warhead on the city of Hiroshima as it exists today are simulated. This is extended to the local effects on a targeted city from a global nuclear war, and attention is directed to problems of estimating the societal effects from such a war
Calculating excitation energies by extrapolation along adiabatic connections
Rebolini, Elisa; Teale, Andrew M; Helgaker, Trygve; Savin, Andreas
2015-01-01
In this paper, an alternative method to range-separated linear-response time-dependent density-functional theory and perturbation theory is proposed to improve the estimation of the energies of a physical system from the energies of a partially interacting system. Starting from the analysis of the Taylor expansion of the energies of the partially interacting system around the physical system, we use an extrapolation scheme to improve the estimation of the energies of the physical system at an intermediate point of the range-separated or linear adiabatic connection, where either the electron-electron interaction is scaled or only the long-range part of the Coulomb interaction is included. The extrapolation scheme is first applied to the range-separated energies of the helium and beryllium atoms and of the hydrogen molecule at its equilibrium and stretched geometries. It improves significantly the convergence rate of the energies toward their exact limit with respect to the range-separation parameter. The range...
Proportional extrapolation techniques for determining stress intensity factors
International Nuclear Information System (INIS)
Proportional extrapolation techniques are proposed to compute the stress intensity factor simply and accurately using the boundary element method (BEM). They are based on the procedure that the effects of the boundary division near the crack tip on stresses and displacements are corrected by comparison with a standard problem, the corrected results being accurate only in the limit r → 0 (r = distance from the crack tip). Comparisons for a few crack problems are made between results using the proposed techniques and those obtained by previously recommended methods. The proposed techniques are seen to require less human work than the other techniques, and accurate results are obtained even with a coarse boundary division. (orig.)
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
The hot-wire spirometer is a kind of constant-temperature anemometer (CTA). The working principle of the CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to the fluid being measured. The calibration curve of a CTA is nonlinear and cannot easily be extrapolated beyond its calibration range; a method for extrapolating the CTA calibration curve is therefore of great practical value. In this paper, a novel approach based on a conventional neural network and the self-organizing map (SOM) method is proposed to extrapolate the CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach to extrapolate the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, extrapolating the CTA calibration curve below its lower limit produces a standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation over the whole measurement range (0.7-30 m/s) is about 1.5%. PMID:23724368
Extrapolation distance in two-region spherical systems
International Nuclear Information System (INIS)
Extrapolation distances for the neutron flux distribution in bounded media are usually defined in such a way that agreement is obtained between diffusion theory and transport theory. A typical application is the interpretation of pulsed neutron experiments. In this work we extend the conventional treatment of extrapolation distances to two-region spherical bodies. Assuming neutrons of one speed, extrapolation distances are calculated for a large number of critical and time-dependent systems. Among other things, it is found that the inner region has a large influence on the extrapolation distance at the outer surface. 6 refs
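For orientation, the standard one-group result for a bare homogeneous sphere, which the two-region analysis above generalizes, reads (the 0.7104 coefficient is the Milne-problem value for a plane vacuum boundary):

```latex
% One-group flux in a bare sphere with an extrapolated boundary: the flux
% is taken to vanish not at the physical radius R but at R + d.
\phi(r) \;\propto\; \frac{\sin(Br)}{r},
\qquad B = \frac{\pi}{R + d},
\qquad d \approx 0.7104\,\lambda_{\mathrm{tr}},
% where \lambda_{\mathrm{tr}} is the transport mean free path.
```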
Scaling and extrapolation of hydrogen distribution experiments
International Nuclear Information System (INIS)
Important physical processes within a containment system, which govern the long-term behaviour under severe accident conditions have been analyzed with respect to the scaling of relevant test rigs. This analysis has been performed under contract with the Commission of the European Communities on the basis of the equations processed within a typical long term containment analysis code like the CONTAIN code. An improved set of conservation equations for the involved components (air, hydrogen, vapor, etc.), has been subject to a detailed dimensional analysis. This resulted in a set of dimensionless parameter groups which determine the similarity requirements necessary to warrant simple extrapolation of the results to full size reactor containments. The implications of the similarity requirements with the applied empirical correlations or constants in combination with a chosen nodalisation concept have been addressed. Code verification is based on comparison of selected measured with calculated parameters which leads to conclusions concerning an optimal choice of correlations and the proper nodalisation of the test rigs. Heat transfer correlations and local flow resistance determination are important empirical elements for a successful reanalysis of experiments. Facility dependent code verification can only be avoided, if the dimensional dependence of the empiricism in code application is assessed. (orig./GL)
Hard hadronic collisions: extrapolation of standard effects
International Nuclear Information System (INIS)
We study hard hadronic collisions for the proton-proton (pp) and the proton-antiproton (p anti p) option in the CERN LEP tunnel. Based on our current knowledge of hard collisions at the present CERN p anti p Collider, and with the help of quantum chromodynamics (QCD), we extrapolate to the next generation of hadron colliders with a centre-of-mass energy E_cm = 10 to 20 TeV. We estimate various signatures, trigger rates, event topologies, and associated distributions for a variety of old and new physical processes, involving prompt photons, leptons, jets, W± and Z bosons in the final state. We also calculate the maximum fermion and boson masses accessible at the LEP Hadron Collider. The standard QCD and electroweak processes studied here, being the main body of standard hard collisions, quantify the challenge of extracting new physics with hadron colliders. We hope that our estimates will provide a useful profile of the final states, and that our experimental physics colleagues will find this of use in the design of their detectors. 84 references
Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...
Nuclear Lattice Simulations using Symmetry-Sign Extrapolation
Lähde, Timo A; Lee, Dean; Meißner, Ulf-G; Epelbaum, Evgeny; Krebs, Hermann; Rupak, Gautam
2015-01-01
Projection Monte Carlo calculations of lattice Chiral Effective Field Theory suffer from sign oscillations to a varying degree dependent on the number of protons and neutrons. Hence, such studies have hitherto been concentrated on nuclei with equal numbers of protons and neutrons, and especially on the alpha nuclei where the sign oscillations are smallest. We now introduce the technique of "symmetry-sign extrapolation" which allows us to use the approximate Wigner SU(4) symmetry of the nuclear interaction to control the sign oscillations without introducing unknown systematic errors. We benchmark this method by calculating the ground-state energies of the $^{12}$C, $^6$He and $^6$Be nuclei, and discuss its potential for studies of neutron-rich halo nuclei and asymmetric nuclear matter.
A physically based methodology to extrapolate performance maps of radial turbines
International Nuclear Information System (INIS)
Highlights: • Physically based methodology to extrapolate measured radial turbine efficiency data. • An equation relating efficiency to the blade-to-speed ratio has been developed. • The developed efficiency equation takes into account the turbine mass flow parameter. • Efficiency versus blade-to-speed ratio is discussed at constant pressure ratio and also at constant speed. • The methodology has been validated against a broad range of experimental results. - Abstract: This paper details a physically based methodology for extrapolating radial turbine performance maps, both the mass flow characteristics and the efficiency curve. The method requires only a narrow range of experimental data, which is usually what is available when such turbines are part of a turbocharger. The extrapolation methodology is therefore especially useful when data from third parties are being used or when the compressor of a turbocharger is used as the turbine brake in a gas stand. The nozzle equation is used to interpolate and extrapolate the mass flow rate through the turbine. Specific information extracted from this extrapolation is then fed into a total-to-static efficiency equation to extend the efficiency curve. This equation is developed using the definition of total-to-static efficiency, velocity triangles, and fundamental thermodynamic and fluid equations. The procedure has been applied to five radial turbines of different sizes and types. Results are compared against experimental information available in the literature or provided by the turbine manufacturers, and good agreement has been found between theoretical and experimentally estimated data.
Downward extrapolation of multi-component seismic data
Haime, Gregory Carlo
An evaluation of the problems involved in elastic seismic migration is presented. Elastic wave-field extrapolation operators are presented that are applicable to a general three-dimensional elastic and anisotropic medium. Although the elastic operators derived are stand-alone elements and can be used in any migration scheme, they are developed to take part in the stepwise elastic inversion scheme proposed by Berkhout and Wapenaar. The advantages of multicomponent seismic acquisition are discussed. The necessity of multicomponent data in elastic processing is demonstrated by an example, and a global description of all modules in the stepwise elastic inversion scheme is given. Elastic P and S extrapolation operators are derived starting from the full elastic Kirchhoff-Helmholtz integral. An analysis of the contribution of the different elastic terms in the extrapolation process is presented. It is made clear that there are many different ways to generate extrapolation operators for a so-called macro model. (Such a macro model represents a global description of the subsurface in terms of velocities and densities and must be estimated before the actual extrapolation step can be performed.) A quantitative error analysis of the proposed extrapolation operators is performed. The influence of macro model errors on the amplitudes of the extrapolated P and S wave fields is examined. The use of the elastic P and S extrapolation operators in redatuming and migration schemes is considered.
Slow neutron flux extrapolation distances in R-5 and CIRUS reactors
International Nuclear Information System (INIS)
In order to calculate the core reactivity, fuel channel power outputs and neutron flux levels in the R-5 reactor at Trombay, axial flux extrapolation distances are required. For this, an analysis is carried out treating the reactor core as a two-region neutron multiplying system in the axial direction. The slow neutron diffusion equations for both regions are solved analytically by applying suitable boundary conditions. Application of this method to the estimation of top extrapolation distances in CIRUS has given results which agree well with accepted values for that reactor. (author)
In this study, six extrapolation methods have been compared for their ability to estimate daily crop evapotranspiration (ETd) from instantaneous latent heat flux estimates derived from digital airborne multispectral remote sensing imagery. Data used in this study were collected during an experiment...
Directory of Open Access Journals (Sweden)
Lee HyunYoung
2010-01-01
We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of the discontinuous Galerkin approximations in both the spatial and temporal directions.
Extrapolation and phase correction of non-uniformly broadened signals
Rodts, Stéphane; Bytchenkoff, Dimitri
2013-08-01
The initial part of FID-signals cannot always be acquired experimentally. This is particularly true for signals characterised by strong inhomogeneous broadening, such as those in porous materials, e.g. cements, soils and rocks, those measured by portable NMR-apparatus, or EPR-signals. Here we report on a numerical method we designed to extrapolate those initial missing parts, i.e. to retrieve their amplitude and phase. Should the entire signal be available from an experiment, the algorithm can still be used as an automatic phase-corrector and a low-pass filter. The method is based on the use of cardinal series, applies to any oversampled signals and requires no prior knowledge of the system under study. We show that the method can also be used to restore entire one-dimensional MRI-data sets from those in which less than half of the k-space was sampled, thus potentially not only allowing data acquisition to be sped up - when extended to two or three dimensions - but also circumventing the phase-distortions usually encountered when exploring the k-space near its origin.
Takahashi, Junichi; Sugano, Junpei; Ishii, Masahiro; Kouno, Hiroaki; Yahiro, Masanobu
2014-01-01
We evaluate quark number densities at imaginary chemical potential by lattice QCD with clover-improved two-flavor Wilson fermions. The quark number densities are extrapolated to the small real chemical potential region by assuming certain functional forms. The extrapolated quark number densities are consistent with those calculated at real chemical potential with the Taylor expansion method for the reweighting factors. In order to study the large real chemical potential region, we...
Extrapolation modification of physical start-up of reactors
International Nuclear Information System (INIS)
The result of extrapolation is very important to physical start-up, where it is used to control the speed and amount of reactivity insertion. Using the neutron multiplication formula for the active and subcritical conditions, the imbalance in neutron counts, the dilution, and the lifting of the banks are analyzed. The results show that the delayed effect of dilution, the nonlinearity of the integral worth of the control rods, and the nonlinear increase of the flux all strongly affect the accuracy of the extrapolation. We take these effects into account and offer some advice on modifying the extrapolation. (authors)
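The extrapolation in question is typically the classic inverse-multiplication (1/M) plot: the inverse count ratio is extrapolated linearly to zero to predict the critical configuration. A sketch with invented rod positions and counts (the paper's point is precisely that the real curve deviates from this idealized linearity):

```python
# Idealized 1/M extrapolation used during approach to critical: plot the
# inverse count ratio against rod position and extrapolate linearly to
# zero. Positions and counts below are invented for illustration.
import numpy as np

rod_position = np.array([0.0, 10.0, 20.0, 30.0])   # cm withdrawn
counts = np.array([400.0, 500.0, 667.0, 1000.0])   # detector counts

inv_m = counts[0] / counts                          # 1/M = C0 / C
slope, intercept = np.polyfit(rod_position, inv_m, 1)
critical_estimate = -intercept / slope              # position where 1/M -> 0
print(round(critical_estimate, 1))  # 50.0
```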
Border extrapolation using fractal attributes in remote sensing images
Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.
2014-01-01
In management, monitoring and rational use of natural resources the knowledge of precise and updated information is essential. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring a wide coverage without exerting negative environmental influence over the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high resolution imagery (i.e., Ikonos) is very expensive for studies involving large geographic areas or requiring long term monitoring, while the use of less expensive or freely available imagery poses a limit in the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used for establishing high quality measurements with low resolution imagery. The method is based on the original theory by Richardson, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length with respect to scale is computed, which then allows for a precise extrapolation of the expected length at scales much finer than the originally available. The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
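The Richardson-style regression described above can be sketched in a few lines: measure the border length at several coarse scales, regress log(length) on log(scale), and extrapolate to a finer scale. The lengths below are generated from an assumed fractal power law rather than a real segmented border:

```python
# Richardson extrapolation of border length: L(s) ~ s^(1-D) for a border
# of fractal dimension D. Synthetic lengths stand in for segmented data.
import numpy as np

scales = np.array([32.0, 16.0, 8.0, 4.0])   # pixel size (coarse -> fine)
D = 1.2                                     # assumed fractal dimension
lengths = 1000.0 * scales ** (1.0 - D)      # synthetic measured lengths

slope, intercept = np.polyfit(np.log(scales), np.log(lengths), 1)
fine_scale = 1.0                            # extrapolate to 1-pixel scale
predicted = np.exp(intercept + slope * np.log(fine_scale))
print(round(predicted, 1))  # 1000.0
```

With real imagery the measured lengths scatter around the power law, so the regression residuals give a handle on the precision of the extrapolated length.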
Extrapolation of mean-field models to superheavy nuclei
International Nuclear Information System (INIS)
The extrapolation of self-consistent nuclear mean-field models to the region of superheavy elements is discussed with emphasis on the extrapolating power of the models. The predictions of modern mean-field models are confronted with recent experimental data. It is shown that a final conclusion about the location of the expected island of spherical doubly-magic superheavy nuclei cannot be drawn on the basis of the available data. (orig.)
Extrapolation of the K → ππ decay amplitude
Suzuki, Mahiko
2001-01-01
We examine the uncertainties involved in the off-mass-shell extrapolation of the $K\\rightarrow \\pi\\pi$ decay amplitude with emphasis on those aspects that have so far been overlooked or ignored. Among them are initial-state interactions, choice of the extrapolated kaon field, and the relation between the asymptotic behavior and the zeros of the decay amplitude. In the inelastic region the phase of the decay amplitude cannot be determined by strong interaction alone and even ...
Role of animal studies in low-dose extrapolation
International Nuclear Information System (INIS)
Current data indicate that, in the case of low-LET radiation, linear extrapolation from data obtained at high doses appears to overestimate the risk at low doses to a varying degree. In the case of high-LET radiation, extrapolation from data obtained at doses as low as 40 rad (0.4 Gy) is inappropriate and likely to result in an underestimate of the risk
Bayesian estimation of medium properties in wavefield downward extrapolation problems
Pitas, I.; Venetsanopoulos, A. N.
2010-01-01
When acoustic waves are used for nondestructive imaging of the interior of objects such as the Earth or the human body, the wavefield measurements recorded on the surface of the object are extrapolated according to the wave equation to give an image of the object. The extrapolation propagates the noise present in the measurements backward, so that the quality of the final image is degraded unless statistical restoration techniques are used. Another source of degradation of the image is th...
Problems with using mechanisms to solve the problem of extrapolation
Howick, J.; Glasziou, P.; Aronson, Jk
2013-01-01
Proponents of evidence-based medicine and some philosophers of science seem to agree that knowledge of mechanisms can help solve the problem of applying results of controlled studies to target populations ('the problem of extrapolation'). We describe the problem of extrapolation, characterize mechanisms, and outline how mechanistic knowledge might be used to solve the problem. Our main thesis is that there are four often overlooked problems with using mechanistic knowledge to solve the proble...
Amir, Sahar Z.
2013-05-01
We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized Canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than the classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency.
In particular, this makes it applicable to large-scale optimization of L-J model parameters for hydrocarbons and other important reservoir species. The efficiency of these thermodynamically consistent techniques is expected to make Markov chain simulation an attractive alternative in compositional multiphase flow simulation.
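The reweighting idea behind such schemes can be sketched with a minimal single-histogram example (an illustrative reduction, not the authors' implementation; the function name and the use of reduced units are assumptions):

```python
import numpy as np

def reweight_average(energies, observable, beta_sim, beta_new):
    """Estimate <A> at inverse temperature beta_new from canonical samples
    generated at beta_sim (single-histogram reweighting). Energies are
    shifted by their mean for numerical stability; the shift cancels in
    the ratio."""
    w = np.exp(-(beta_new - beta_sim) * (energies - energies.mean()))
    return np.sum(observable * w) / np.sum(w)
```

At the simulated temperature the weights are all unity and the plain sample mean is recovered; accuracy degrades as beta_new moves away from beta_sim and the sampled energy histograms stop overlapping, which is why neighboring-condition chains are used.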
Directory of Open Access Journals (Sweden)
Joana Aurora Braun Chagas
2010-02-01
Full Text Available O objetivo deste estudo foi avaliar o protocolo de contenção química com cetamina S(+) e midazolam em bugios-ruivos, comparando o cálculo de doses pelo método convencional e o método de extrapolação alométrica. Foram utilizados 12 macacos bugios (Alouatta guariba clamitans) hígidos, com peso médio de 4,84±0,97kg, de ambos os sexos. Após jejum alimentar de 12 horas e hídrico de seis horas, realizou-se contenção física manual e aferiram-se os seguintes parâmetros: frequência cardíaca (FC), frequência respiratória (f), tempo de preenchimento capilar (TPC), temperatura retal (TR), pressão arterial sistólica não invasiva (PANI) e valores de hemogasometria arterial. Posteriormente, os animais foram alocados em dois grupos: GC (Grupo Convencional, n=06), os quais receberam cetamina S(+) (5mg kg-1) e midazolam (0,5mg kg-1), pela via intramuscular, com doses calculadas pelo método convencional; e GA (Grupo Alometria, n=06), os quais receberam o mesmo protocolo, pela mesma via, utilizando-se as doses calculadas pelo método de extrapolação alométrica. Os parâmetros descritos foram mensurados novamente nos seguintes momentos: M5, M10, M20 e M30 (cinco, 10, 20 e 30 minutos após a administração dos fármacos, respectivamente). Também foram avaliados: qualidade de miorrelaxamento, reflexo podal e caudal, pinçamento interdigital, tempo para indução de decúbito, tempo hábil de sedação, qualidade de sedação, e tempo e qualidade de recuperação. O GA apresentou menor tempo para indução ao decúbito, maior grau e tempo de sedação, bem como redução significativa da FC e PANI de M5 até M30, quando comparado ao GC. Conclui-se que o grupo no qual o cálculo de dose foi realizado por meio da alometria (GA) apresentou melhor grau de relaxamento muscular e sedação, sem produzir depressão cardiorrespiratória significativa. The aim of this study was to evaluate a protocol of chemical restraint comparing the conventional method of dose calculation (weight-based dose) and allometric extrapolation.
Twelve healthy red howler monkeys (Alouatta guariba clamitans), average weight 4.84±0.97kg, male and female, were used for this study. After a 12-hour period of food restriction and 6 hours of water restriction, the animals were physically restrained and the following parameters were measured: heart rate (HR), respiratory rate (RR), capillary refill time (CRT), rectal temperature (RT), non-invasive systolic arterial pressure (NISAP) and arterial blood gas analysis. The animals were distributed into two groups: CG (Conventional Group, n=6), in which the animals received S(+) ketamine (5mg kg-1) and midazolam (0.5mg kg-1) by intramuscular (IM) injection; and AG (Allometry Group, n=6), in which the animals also received S(+) ketamine and midazolam IM, but with doses calculated by allometric extrapolation. Parameters were evaluated at the following moments: M5, M10, M20 and M30 (5, 10, 20 and 30 minutes after IM injection, respectively). Muscle relaxation, pedal and caudal reflexes, interdigital pinch, time to recumbency, sedation quality and duration, and recovery time and quality were also evaluated. The AG had a shorter time to recumbency and a longer period and higher quality of sedation, as well as a significant reduction in HR and NISAP from M5 to M30, when compared to the CG. It was concluded that allometric extrapolation produced better muscle relaxation and sedation without significant cardiorespiratory depression.
Scientific Electronic Library Online (English)
Joana Aurora Braun, Chagas; Nilson, Oleskovicz; Aury Nunes de, Moraes; Fabíola Niederauer, Flôres; André Luís, Corrêa; Júlio César, Souza Júnior; André Vasconcelos, Soares; Átila, Costa.
2010-02-01
Full Text Available O objetivo deste estudo foi avaliar o protocolo de contenção química com cetamina S(+) e midazolam em bugios-ruivos, comparando o cálculo de doses pelo método convencional e o método de extrapolação alométrica. Foram utilizados 12 macacos bugios (Alouatta guariba clamitans) hígidos, com peso médio de 4,84±0,97kg, de ambos os sexos. Após jejum alimentar de 12 horas e hídrico de seis horas, realizou-se contenção física manual e aferiram-se os seguintes parâmetros: frequência cardíaca (FC), frequência respiratória (f), tempo de preenchimento capilar (TPC), temperatura retal (TR), pressão arterial sistólica não invasiva (PANI) e valores de hemogasometria arterial. Posteriormente, os animais foram alocados em dois grupos: GC (Grupo Convencional, n=06), os quais receberam cetamina S(+) (5mg kg-1) e midazolam (0,5mg kg-1), pela via intramuscular, com doses calculadas pelo método convencional; e GA (Grupo Alometria, n=06), os quais receberam o mesmo protocolo, pela mesma via, utilizando-se as doses calculadas pelo método de extrapolação alométrica. Os parâmetros descritos foram mensurados novamente nos seguintes momentos: M5, M10, M20 e M30 (cinco, 10, 20 e 30 minutos após a administração dos fármacos, respectivamente). Também foram avaliados: qualidade de miorrelaxamento, reflexo podal e caudal, pinçamento interdigital, tempo para indução de decúbito, tempo hábil de sedação, qualidade de sedação, e tempo e qualidade de recuperação. O GA apresentou menor tempo para indução ao decúbito, maior grau e tempo de sedação, bem como redução significativa da FC e PANI de M5 até M30, quando comparado ao GC. Conclui-se que o grupo no qual o cálculo de dose foi realizado por meio da alometria (GA) apresentou melhor grau de relaxamento muscular e sedação, sem produzir depressão cardiorrespiratória significativa.
Abstract in English. The aim of this study was to evaluate a protocol of chemical restraint comparing the conventional method of dose calculation (weight-based dose) and allometric extrapolation. Twelve healthy red howler monkeys (Alouatta guariba clamitans), average weight 4.84±0.97kg, male and female, were used for this study. After a 12-hour period of food restriction and 6 hours of water restriction, the animals were physically restrained and the following parameters were measured: heart rate (HR), respiratory rate (RR), capillary refill time (CRT), rectal temperature (RT), non-invasive systolic arterial pressure (NISAP) and arterial blood gas analysis. The animals were distributed into two groups: CG (Conventional Group, n=6), in which the animals received S(+) ketamine (5mg kg-1) and midazolam (0.5mg kg-1) by intramuscular (IM) injection; and AG (Allometry Group, n=6), in which the animals also received S(+) ketamine and midazolam IM, but with doses calculated by allometric extrapolation. Parameters were evaluated at the following moments: M5, M10, M20 and M30 (5, 10, 20 and 30 minutes after IM injection, respectively). Muscle relaxation, pedal and caudal reflexes, interdigital pinch, time to recumbency, sedation quality and duration, and recovery time and quality were also evaluated. The AG had a shorter time to recumbency and a longer period and higher quality of sedation, as well as a significant reduction in HR and NISAP from M5 to M30, when compared to the CG. It was concluded that allometric extrapolation produced better muscle relaxation and sedation without significant cardiorespiratory depression.
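The dose-scaling rule behind allometric extrapolation can be illustrated with a short sketch (a generic illustration assuming metabolic-rate scaling with body weight to the 0.75 power; the function and the example values are hypothetical, not the study's protocol):

```python
def allometric_dose(ref_dose_mg_per_kg, ref_weight_kg, target_weight_kg,
                    exponent=0.75):
    """Scale a reference total dose allometrically. Metabolic rate is
    commonly taken to scale as body weight ** 0.75, so the total dose is
    scaled by that power of the weight ratio. Returns the target animal's
    total dose in mg."""
    ref_total_mg = ref_dose_mg_per_kg * ref_weight_kg
    return ref_total_mg * (target_weight_kg / ref_weight_kg) ** exponent
```

For example, scaling a 5 mg/kg reference dose from a hypothetical 10 kg reference animal to a 20 kg animal gives a total dose of 50 × 2^0.75 mg, i.e. a lower per-kg dose for the heavier animal, which is the qualitative difference between the allometric and conventional (fixed mg/kg) calculations compared in the study.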
Lutz, Jesse J.; Piecuch, Piotr
2008-04-01
The recently proposed potential energy surface (PES) extrapolation scheme, which predicts smooth molecular PESs corresponding to larger basis sets from the relatively inexpensive calculations using smaller basis sets by scaling electron correlation energies [A. J. C. Varandas and P. Piecuch, Chem. Phys. Lett. 430, 448 (2006)], is applied to the PESs associated with the conrotatory and disrotatory isomerization pathways of bicyclo[1.1.0]butane to buta-1,3-diene. The relevant electronic structure calculations are performed using the completely renormalized coupled-cluster method with singly and doubly excited clusters and a noniterative treatment of connected triply excited clusters, termed CR-CC(2,3), which is known to provide a highly accurate description of chemical reaction profiles involving biradical transition states and intermediates. A comparison with the explicit CR-CC(2,3) calculations using the large correlation-consistent basis set of the cc-pVQZ quality shows that the cc-pVQZ PESs obtained by the extrapolation from the smaller basis set calculations employing the cc-pVDZ and cc-pVTZ basis sets are practically identical, to within fractions of a millihartree, to the true cc-pVQZ PESs. It is also demonstrated that one can use a similar extrapolation procedure to accurately predict the complete basis set (CBS) limits of the calculated PESs from the results of smaller basis set calculations at a fraction of the effort required by the conventional pointwise CBS extrapolations.
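The pointwise CBS extrapolation mentioned at the end can be illustrated by the common two-point X^-3 formula for correlation energies (a standard textbook form, not necessarily the exact scheme used by the authors):

```python
def cbs_two_point(e_x, x, e_y, y):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A / X**3 (Helgaker-style form for correlation
    energies), where X is the cardinal number of the cc-pVXZ basis.
    Solving the two-equation system for E_CBS gives:"""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)
```

With, say, cc-pVTZ (X=3) and cc-pVQZ (X=4) correlation energies, `cbs_two_point(e_tz, 3, e_qz, 4)` estimates the basis-set limit at far lower cost than explicitly enlarging the basis further.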
Determination of transmission factors in tissue using a standard extrapolation chamber
International Nuclear Information System (INIS)
A commercial ionization chamber (Böhm extrapolation chamber, PTW, model 23392), recommended for measurements in low-energy X-ray and beta radiation fields, was tested in three different 90Sr+90Y beams to verify its performance as a primary standard system for the calibration and dosimetry of beta radiation sources and detectors. Characterization tests were performed, such as determination of the chamber null depth using two methods (the results differed by only 0.9%), of the transmission factors in tissue, in comparison with those of the certificate (the maximum difference was 2.1%), and of the absorbed dose rates of the 90Sr+90Y sources, in comparison with the values provided by the calibration certificates (the maximum difference was 4.90%). The results confirmed that this extrapolation chamber behaves very well in beta radiation fields as a primary standard system. - Highlights: • Böhm extrapolation chamber was tested to be used as a primary standard system. • The chamber was exposed to three 90Sr+90Y secondary standard sources. • Transmission factors were obtained. • Absorbed dose rates were determined using the sources at certificate conditions. • The results showed the good performance of the extrapolation chamber
Extrapolation distance in the cylindrical Milne problem in one- and two-group transport theory
International Nuclear Information System (INIS)
The extrapolation distance in the cylindrical Milne problem ("black" cylinder immersed in a homogeneous, infinite, isotropically scattering and absorbing medium) is calculated in one- and two-group approximations. The method used consists of asymptotic expansions in 1/R and R (R being the radius of the cylinder) for large and small R, respectively, and of a variational method for R = O(1), R measured in mean free paths. The numerical results are given for two cases in the one-group (c = 0.90 and c = 0.95) and for two cases in the two-group approximation (both for k = 1). The results show convergence of the methods and sufficient accuracy of the applied numerical procedures. This conclusion is confirmed by the comparison of the values of the extrapolation distance calculated by variational and asymptotic expansion formulas in regions of R where both can be applied
First Result of Field Extrapolation Based on HMI Vector Magnetic Data
Sun, X.; Hoeksema, J. T.; Wiegelmann, T.; Hayashi, K.; Liu, Y.
2010-12-01
Magnetic field extrapolation based on the photospheric field has long been used to infer the coronal field. However, past studies have often been limited by the line-of-sight nature of the observations, or by the inadequate spatial/temporal resolution of the few available vector data. With the new Helioseismic and Magnetic Imager (HMI), we are now able for the first time to produce full-disk, high-cadence (12 min), high-resolution (1 arcsec) vector data continuously. In this paper, we analyze a time sequence of HMI vector data and apply several extrapolation methods (potential-field model, nonlinear force-free model, MHD simulation, etc.) to study the evolution of the overlying field structure in the lower corona. Results from different methods are cross-compared and examined against coronal observations. This study will provide insight into modeling the coronal field with greater detail and better accuracy, and eventually help the understanding of dynamic processes in the solar atmosphere.
Extrapolation of Extreme Response for Wind Turbines based on Field Measurements
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2009-01-01
The characteristic loads on wind turbines during operation are among others dependent on the mean wind speed, the turbulence intensity and the type and settings of the control system. These parameters must be taken into account in the assessment of the characteristic load. The characteristic load is normally determined by statistical extrapolation of the simulated response during operation according to IEC 61400-1 2005. However, this method assumes that the individual 10 min. time series are ...
Properties of an extrapolation chamber for beta radiation dosimetry
International Nuclear Information System (INIS)
The properties of a commercial extrapolation chamber were studied, and the possibility of its use in beta radiation dosimetry is shown. The chamber calibration factors were determined for several sources (90Sr-90Y, 204Tl and 147Pm), establishing the dependence of its response on the energy of the incident radiation. Extrapolation curves make it possible to obtain energy independence for each source. One such curve, shown for the 90Sr-90Y source at 50 cm from the detector, is obtained by varying the chamber window thickness and extrapolating to null distance (determined graphically). Other curves also show: 1) the dependence of the calibration factor on the average energy of the beta radiation; 2) the variation of the ionization current with the distance between the chamber and the sources; 3) the effect of the collecting electrode area on the value of the calibration factors for the different sources. (I.C.R.)
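The extrapolation-to-null-distance idea common to these chamber measurements can be sketched as follows (a schematic Bragg-Gray calculation with assumed constants and a hypothetical function name, not the laboratory's procedure):

```python
import numpy as np

W_OVER_E = 33.97   # J/C, mean energy expended per ion pair in dry air
RHO_AIR = 1.205    # kg/m^3, air density at ~20 C and 101.3 kPa

def dose_rate_from_extrapolation(gaps_m, currents_A, area_m2):
    """Absorbed dose rate to air from an extrapolation-chamber series.
    The ionization current I is measured at several electrode gaps l;
    fitting I(l) linearly and taking the zero-gap slope dI/dl removes
    the finite-cavity perturbation (Bragg-Gray cavity principle)."""
    slope, _intercept = np.polyfit(gaps_m, currents_A, 1)  # dI/dl in A/m
    return (W_OVER_E / (RHO_AIR * area_m2)) * slope        # Gy/s
```

In practice the measured currents deviate from linearity at large gaps, so only the small-gap region is fitted before extrapolating to zero separation.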
Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.
We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.
A study on extrapolation algorithm of percent depth dose
International Nuclear Information System (INIS)
A mathematical model for extrapolating percent depth dose (PDD) was presented according to the principles of interaction between X-rays and water. The difference between the PDD extrapolated by the model and the PDD measured by a 3D radiation field analyzer is very small. For fields of different sizes within a depth of 20 cm, the maximum absolute difference is 0.006 x 100% and the maximum relative difference is 1.1%. With the mathematical model, appropriate PDD values for different field sizes can be obtained without a 3D radiation field analyzer
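Beyond the build-up depth, a PDD curve is dominated by exponential attenuation, so a minimal version of such a model can be sketched as a log-linear fit (an illustrative simplification; the paper's actual model is built from X-ray-water interaction principles and is not reproduced here):

```python
import numpy as np

def fit_pdd_tail(depths_cm, pdd_percent):
    """Fit PDD(d) ~ P0 * exp(-mu * d) beyond the build-up region by a
    linear least-squares fit in log space; returns (P0, mu), where mu
    acts as an effective linear attenuation coefficient in 1/cm."""
    slope, ln_p0 = np.polyfit(depths_cm, np.log(pdd_percent), 1)
    return np.exp(ln_p0), -slope
```

Extrapolated values at unmeasured depths are then `P0 * np.exp(-mu * d)`; a real model would also handle the build-up region and the field-size dependence of mu.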
Extrapolation of ASDEX Upgrade H-mode discharges to ITER
Tardini, G.; Kardaun, O. J. W. F.; Peeters, A. G.; Pereverzev, G. V.; Sips, A. C. C.; Stober, J.; ASDEX Upgrade Team
2009-07-01
In this paper we discuss a procedure to evaluate the fusion performance of ASDEX Upgrade discharges scaled up to ITER. The kinetic profile shape is taken from the measured profiles. Multiplication factors are used to obtain a fixed Greenwald fraction and an ITER normalized thermal pressure as in the corresponding ASDEX Upgrade discharge. The toroidal field and the plasma geometry are taken from the ITER-FEAT design (scenario 2), whereas q95 is taken from the experiment. The confinement time is inferred assuming that the measured H-factor with respect to several existing scaling laws also holds for ITER. While retaining the information contained in the multi-machine databases underlying the different scaling laws, this approach adds profile effects and confinement improvement with respect to the ITER baseline, thus including recent experimental evidence such as the prediction of peaked density profiles in ITER. Under this set of assumptions, of course not unique, we estimate the ITER performance on the basis of a wide database of ASDEX Upgrade H-mode discharges, in terms of fusion power, fusion gain and triple product. According to the three scalings considered, there is a finite probability of reaching ignition, while more than half of the discharges require less auxiliary power than the one foreseen for ITER. For all the scaling laws, high values of the thermal βN up to 2.4 are accessible. A sensitivity study gives an estimate of the accuracy of the extrapolation. The impact of different levels of tungsten concentration on the fusion performance is also studied in this paper. This scaling method is used to verify some common 0D figures of merit of ITER's fusion performance.
International Nuclear Information System (INIS)
Over the last decades, elemental maps have become a powerful tool for the analysis of the spatial distribution of elements within a specimen. In energy-filtered transmission electron microscopy (EFTEM) one commonly uses two pre-edge and one post-edge image for the calculation of elemental maps. However, this so-called three-window method can introduce serious errors into the extrapolated background for the post-edge window. Since this method uses only two pre-edge windows as data points to calculate a background model that depends on two fit parameters, the quality of the extrapolation can be estimated only statistically, assuming that the background model is correct. In this paper, we discuss a possibility to improve the accuracy and reliability of the background extrapolation by using a third pre-edge window. Since with three data points the extrapolation becomes over-determined, this change permits us to estimate not only the statistical uncertainty of the fit, but also the systematic error, by using the experimental data. Furthermore, we discuss the acquisition parameters that should be used for the energy windows to reach an optimal signal-to-noise ratio (SNR) in the elemental maps. -- Highlights: • Comparison of three pre-edge windows to the regular two pre-edge windows. • Investigation of the optimal positioning of the third pre-edge window. • Description of the χ² test for extrapolation quality check.
Extrapolation of ZPR sodium void measurements to the power reactor
International Nuclear Information System (INIS)
Sodium-voiding measurements of ZPPR assemblies 2 and 5 are analyzed with ENDF/B Version IV data. Computations include directional diffusion coefficients to account for streaming effects resulting from the plate structure of the critical assembly. Bias factors for extrapolating critical assembly data to the CRBR design are derived from the results of this analysis
Freeze-out parameters from continuum extrapolated lattice data
Borsanyi, S; Katz, S D; Krieg, S; Ratti, C; Szabo, K K
2013-01-01
We present continuum-extrapolated lattice results for the higher-order fluctuations of conserved charges in high-temperature Quantum Chromodynamics. By matching the grand canonical ensemble on the lattice to the net charge and net baryon distributions realized in heavy-ion experiments, the temperature and the chemical potential may be estimated at the time of chemical freeze-out
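The continuum extrapolation underlying such results can be sketched in its simplest form: for a lattice action with leading O(a²) cutoff effects, one fits f(a) = f0 + c·a² over several lattice spacings and reads off f0 as the a → 0 limit (a minimal illustration, not the paper's full analysis, which also propagates statistical and systematic errors):

```python
import numpy as np

def continuum_limit(spacings, values):
    """Continuum extrapolation assuming leading O(a^2) cutoff effects:
    fit f(a) = f0 + c * a**2 by least squares in the variable a**2 and
    return f0, the extrapolated a -> 0 value."""
    slope_and_intercept = np.polyfit(np.asarray(spacings) ** 2, values, 1)
    return slope_and_intercept[1]  # the intercept is f0
```

In a real analysis one would repeat the fit on bootstrap samples of the lattice data to attach an uncertainty to f0 and vary the fit ansatz to estimate the systematic error.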
Application of the Weibull extrapolation to 137Cs geochronology in Tokyo Bay and Ise Bay, Japan
International Nuclear Information System (INIS)
Considerable doubt surrounds the nature of the processes by which 137Cs is deposited in marine sediments, leading to a situation where 137Cs geochronology cannot always be applied reliably. Based on extrapolation with a Weibull distribution, the maximum concentration of 137Cs, derived from the asymptotic value of the cumulative specific inventory, was used to re-establish the 137Cs geochronology instead of the original 137Cs profiles. The corresponding dating results for cores in Tokyo Bay and Ise Bay, Japan, obtained by this new method are in much closer agreement with those calculated from the 210Pb method than were those from the previous method
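A minimal version of the Weibull-asymptote idea can be sketched as follows (an illustrative fit using scipy; modeling the cumulative specific inventory as a scaled Weibull CDF follows the abstract, but the function names, parametrization, and starting values are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(t, i_max, lam, k):
    """Cumulative inventory modeled as a scaled Weibull CDF: the curve
    rises from 0 and saturates at the asymptote i_max."""
    return i_max * (1.0 - np.exp(-(t / lam) ** k))

def asymptotic_inventory(t, inventory):
    """Fit the scaled Weibull model to the cumulative inventory and return
    the asymptotic (maximum) inventory i_max, from which the maximum 137Cs
    concentration is derived in the dating scheme."""
    p0 = (inventory.max() * 1.2, float(np.median(t)), 1.5)  # rough start
    popt, _cov = curve_fit(weibull_cdf, t, inventory, p0=p0, maxfev=10000)
    return popt[0]
```

The asymptote extrapolates beyond the deepest measured layer, which is why it can anchor the chronology even when the raw profile is incomplete.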
SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy
Energy Technology Data Exchange (ETDEWEB)
Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J [Henry Ford Health System, Detroit, MI (United States)
2014-06-01
Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small-FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations, in that order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the averages from 10 patients. The extrapolation error increased linearly as a function of the distance away from the CBCT borders in the S/I direction (at a rate of 0.7mm per 1cm). The errors (mean ± SD) at the superior and inferior borders were 0.8 ± 0.5mm and 3.0 ± 1.5mm respectively, and increased to 2.7 ± 2.2mm and 5.9 ± 1.9mm at 4.2cm away. The mean error within the CBCT borders was 1.16 ± 0.54mm. The overall errors within the 4.2cm expansion were 2.0 ± 1.2mm (sup) and 4.5 ± 1.6mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger, less predictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.
Chaouche, L Yelles; Pillet, V Martínez; Moreno-Insertis, F
2012-01-01
The 3D structure of an active region (AR) filament is studied using nonlinear force-free field (NLFFF) extrapolations based on simultaneous observations at a photospheric and a chromospheric height. To that end, we used the Si I 10827 Å line and the He I 10830 Å triplet obtained with the Tenerife Infrared Polarimeter (TIP) at the VTT (Tenerife). The two extrapolations have been carried out independently of each other and their respective spatial domains overlap in a considerable height range. This opens up new possibilities for diagnostics in addition to the usual ones obtained through a single extrapolation from, typically, a photospheric layer. Among those possibilities, this method allows the determination of an average formation height of the He I 10830 Å signal of ≈ 2 Mm above the surface of the Sun. It allows, as well, cross-checking the obtained 3D magnetic structures in view of verifying a possible deviation from the force-free condition, especially at the photosphere. The extrapolati...
Steinhausen, Heinz C.; Martín, Rodrigo; den Brok, Dennis; Hullin, Matthias B.; Klein, Reinhard
2015-03-01
Numerous applications in computer graphics and beyond benefit from accurate models for the visual appearance of real-world materials. Data-driven models like photographically acquired bidirectional texture functions (BTFs) suffer from limited sample sizes enforced by the common assumption of far-field illumination. Several materials like leather, structured wallpapers or wood contain structural elements on scales not captured by typical BTF measurements. We propose a method extending recent research by Steinhausen et al. to extrapolate BTFs for large-scale material samples from a measured and compressed BTF for a small fraction of the material sample, guided by a set of constraints. We propose combining color constraints with surface descriptors similar to normal maps as part of the constraints guiding the extrapolation process. This helps narrow down the search space for suitable ABRDFs per texel to a large extent. To acquire surface descriptors for nearly flat materials, we build upon the idea of photometrically estimating normals. Inspired by recent work by Pan and Skala, we obtain images of the sample in four different rotations with an off-the-shelf flatbed scanner and derive surface curvature information from these. Furthermore, we simplify the extrapolation process by using a pixel-based texture synthesis scheme, reaching computational efficiency similar to texture optimization.
Enhancing Robustness to Extrapolate Synergies Learned from Motion Capture
Aubry, Matthieu; De Loor, Pierre; Gibet, Sylvie
2010-01-01
Reproducing the characteristics of human movements is a crucial issue in studying motion. In the context of this work, an explicit model of synergies, which can be parametrized, is used for reproducing the main features of reaching motions. This paper evaluates the possibility of extrapolating learned parameters from a captured motion to new targets and shows how the learning process is a key issue in ensuring the robustness of the parameters. another target, some parameters displayed poor capacity to ext...
Revisiting Chiral Extrapolation by Studying a Lattice Quark Propagator
International Nuclear Information System (INIS)
The quark propagator in the Landau gauge is studied on the lattice, including quenched and unquenched results. No obvious unquenching effects are found by comparing the quenched quark propagator with the dynamical one. For the quenched and unquenched configurations, results with different quark masses have been computed. For the quark mass function, a nonlinear chiral extrapolation behavior is found in the infrared region for both the quenched and dynamical results. (the physics of elementary particles and fields)
Properties of a commercial extrapolation chamber in β radiation fields
International Nuclear Information System (INIS)
A commercial extrapolation chamber was tested in different β radiation fields and its properties investigated. Its usefulness for β radiation calibration and dosimetry was verified. Experiments were performed to obtain the main characteristics, such as the calibration factors (and consequently the energy dependence) for all chamber collecting electrodes (between 10 and 40 mm diameter), the transmission factors in tissue and the useful source-detector distance range
Directory of Open Access Journals (Sweden)
Hyun Young Lee
2010-01-01
Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L2) error estimates of discontinuous Galerkin approximations in both the spatial and temporal directions.
Resolution enhancement in digital holography by self-extrapolation of holograms
Latychevskaia, Tatiana
2013-01-01
It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
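The pad-and-iterate procedure described here resembles Gerchberg-Papoulis band-limited extrapolation, which can be sketched as follows (a toy real-valued version with an assumed Fourier band limit; the authors' reconstruction uses wavefront propagation rather than a simple low-pass constraint):

```python
import numpy as np

def self_extrapolate(hologram, pad, n_iter=50):
    """Iteratively extrapolate a measured 2D record beyond its captured
    area: pad the record, then alternate between (a) enforcing a finite
    support (band limit) in Fourier space and (b) re-imposing the
    measured values in the captured region."""
    n = hologram.shape[0]
    field = np.zeros((n + 2 * pad, n + 2 * pad))
    sl = slice(pad, pad + n)
    field[sl, sl] = hologram
    m = field.shape[0]
    fy, fx = np.meshgrid(np.fft.fftfreq(m), np.fft.fftfreq(m), indexing="ij")
    support = (fx**2 + fy**2) < 0.15**2      # assumed band limit
    for _ in range(n_iter):
        spec = np.fft.fft2(field)
        spec[~support] = 0.0                 # (a) band-limit constraint
        field = np.fft.ifft2(spec).real
        field[sl, sl] = hologram             # (b) re-impose measured data
    return field
```

Each iteration leaves the measured pixels untouched while the padded surroundings converge toward a band-limited extension consistent with them, which is how the retrieved wavefront outside the detector area can enhance the reconstruction resolution.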
Energy Technology Data Exchange (ETDEWEB)
Alvarez R, M. T.; Morales P, J. R. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2001-01-15
Since 1987, the Metrology Department of ININ, in its Secondary Calibration Dosimetry Laboratory, has maintained a standard set of beta radiation sources and an extrapolation chamber with variable electrode separation. Its objective is to realize the unit of absorbed dose rate in air for beta radiation, using the ionometric (Bragg-Gray cavity) method with the extrapolation chamber. The services offered are: i) Calibration of: beta radiation sources, isotopes 90Sr/90Y; ophthalmic applicators 90Sr/90Y; instruments for the detection of beta radiation used in radiological protection (ionization chambers, Geiger-Muller counters, etc.); personal dosemeters. ii) Irradiation of materials with beta radiation for research purposes. (Author)
Extrapolation of vertical target motion through a brief visual occlusion.
Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco
2010-03-01
It is known that arbitrary target accelerations along the horizontal are generally extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). The probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled, could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at its destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects. PMID:19882150
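The kinematics of the task can be sketched with elementary mechanics (a hedged illustration, not the authors' threshold model): given the target's speed v0 and the remaining distance h at occlusion, an observer assuming acceleration a predicts the time-to-arrival from h = v0·t + (a/2)·t². Comparing the 1g, 0g and -1g assumptions shows how a wrong internal model mistimes the button press.

```python
import math

G = 9.81  # m/s^2, downward positive

def time_to_arrival(h, v0, a):
    """Smallest positive root of 0.5*a*t**2 + v0*t - h = 0."""
    if abs(a) < 1e-12:
        return h / v0
    disc = v0 * v0 + 2.0 * a * h
    if disc < 0:
        raise ValueError("target never reaches the arrival point")
    return (-v0 + math.sqrt(disc)) / a
```

With, say, v0 = 5 m/s and h = 0.7 m (made-up numbers), the predicted arrival is earliest under the 1g assumption and latest under -1g, so a gravity-based internal model produces systematically earlier responses than a constant-speed one.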
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorm events or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increase of handling capacity, anticipation of avoidance manoeuvres and increase of awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data form the basis for generating rapid-update forecasts on time frames of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. By tracking radar echoes by correlation, the movement vectors of successive weather radar images are calculated. For every new successive radar image a set of ensemble precipitation fields is collected by using different parameter sets, such as pattern match size, different time steps, filter methods, and an implementation of the history of tracking vectors and plausibility checks. This method considers the uncertainty in rain field displacement and different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented in an online verification scheme which calculates the optimized forecasts for different time steps and different areas by using different extrapolation ensemble members. To obtain information about the quality and reliability of the extrapolation process, additional information on data quality (e.g.
shielding in Alpine areas) is extrapolated and combined into an extrapolation quality index. Subsequently, the probability and quality information of the forecast ensemble is available, and flexible blending into a numerical prediction model is possible for each subarea. Simultaneously with the automatic processing, the ensemble nowcasting product is visualized in an innovative way which combines the intensity, probability and quality information for different subareas in one forecast image.
International Nuclear Information System (INIS)
Two transversely oscillating coronal loops are investigated in detail during a flare on 2011 September 6 using data from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. We compare two independent methods to determine the Alfvén speed inside these loops. Through the period of oscillation and loop length, information about the Alfvén speed inside each loop is deduced seismologically. This is compared with the Alfvén speed profiles deduced from magnetic extrapolation and spectral methods using AIA bandpass. We find that for both loops the two methods are consistent. Also, we find that the average Alfvén speed based on loop travel time is not necessarily a good measure to compare with the seismological result, which explains earlier reported discrepancies. Instead, the effect of density and magnetic stratification on the wave mode has to be taken into account. We discuss the implications of combining seismological, extrapolation, and spectral methods in deducing the physical properties of coronal loops.
Energy Technology Data Exchange (ETDEWEB)
Verwichte, E.; Foullon, C.; White, R. S. [Centre for Fusion, Space and Astrophysics, Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Van Doorsselaere, T., E-mail: Erwin.Verwichte@warwick.ac.uk [Centre for Plasma Astrophysics, Department of Mathematics, Katholieke Universiteit Leuven, Celestijnenlaan 200B, B-3001 Leuven (Belgium)
2013-04-10
Two transversely oscillating coronal loops are investigated in detail during a flare on 2011 September 6 using data from the Atmospheric Imaging Assembly (AIA) on board the Solar Dynamics Observatory. We compare two independent methods to determine the Alfven speed inside these loops. Through the period of oscillation and loop length, information about the Alfven speed inside each loop is deduced seismologically. This is compared with the Alfven speed profiles deduced from magnetic extrapolation and spectral methods using AIA bandpass. We find that for both loops the two methods are consistent. Also, we find that the average Alfven speed based on loop travel time is not necessarily a good measure to compare with the seismological result, which explains earlier reported discrepancies. Instead, the effect of density and magnetic stratification on the wave mode has to be taken into account. We discuss the implications of combining seismological, extrapolation, and spectral methods in deducing the physical properties of coronal loops.
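The seismological step can be reduced to a back-of-envelope formula (standard kink-mode relations, with illustrative numbers rather than the paper's loops): the fundamental standing kink mode gives a phase speed c_k = 2L/P, and for a low-beta loop the internal Alfvén speed follows from c_k = v_A · sqrt(2 / (1 + ρ_e/ρ_i)).

```python
import math

def alfven_speed(L_m, P_s, density_ratio):
    """Internal Alfven speed (m/s) from loop length L, period P,
    and external-to-internal density ratio rho_e/rho_i."""
    c_kink = 2.0 * L_m / P_s              # fundamental kink phase speed
    return c_kink / math.sqrt(2.0 / (1.0 + density_ratio))
```

For an assumed loop of length 220 Mm oscillating with a 435 s period and a density contrast of 10, this yields an internal Alfvén speed of roughly 750 km/s.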
Properties of a commercial extrapolation chamber in beta radiation fields
International Nuclear Information System (INIS)
A commercial extrapolation chamber (PTW, Germany) was tested in different beta radiation fields and its properties investigated. Its usefulness for beta radiation calibration and dosimetry was demonstrated. The Beta Secondary Standard setup of the IPEN calibration laboratory was utilized. This system, developed by the Physikalisch-Technische Bundesanstalt, Braunschweig (Germany) and manufactured by Buchler and Co., consists of a source stand, a control unit with timer, four interchangeable beta sources: 90Sr-90Y (1850 and 74 MBq) and 204Tl (18.5 MBq), and ionization current detection. The variable volume ionization chamber of cylindrical form is provided with different collecting electrodes of tissue equivalent material and Mylar entrance windows of different thicknesses
The use of extrapolation concepts to augment the Frequency Separation Technique
Alexiou, Spiros
2015-03-01
The Frequency Separation Technique (FST) is a general method formulated to improve the speed and/or accuracy of lineshape calculations, including strong overlapping collisions, as is the case for ion dynamics. It should be most useful when combined with ultrafast methods, which, however, have significant difficulties when the impact regime is approached. These difficulties are addressed by the Frequency Separation Technique, in which the impact limit is correctly recovered. The present work examines the possibility of combining the Frequency Separation Technique with extrapolation to improve results and minimize errors resulting from the neglect of fast-slow coupling, and thus obtain the exact result with a minimum of extra effort. To this end, the adequacy of one such ultrafast method, the Frequency Fluctuation Method (FFM), for treating the nonimpact part is examined. It is found that although the FFM is unable to reproduce the nonimpact profile correctly, its coupling with the FST correctly reproduces the total profile.
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has long been a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited, since radar represents a volume measurement while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy given the limitations of typical research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD, and therefore a 3D Z-R measurement, using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit, since multiple sensors in the required spatial arrangement are seldom available for this type of analysis.
The original software (developed at the University of Central Florida, 1998-2000) has also been modified to read a standardized disdrometer data format (Joss-Waldvogel format). Other modifications to the software involve accounting for vertical ambient wind motion, as well as evaporation of the raindrop during its flight time.
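The disdrometer-side quantities behind any Z-R fit can be computed directly from binned DSD data (standard moment formulas; the Atlas-style terminal-velocity power law below is an assumed stand-in, not the paper's model): Z is the 6th moment of the DSD, and R is its velocity-weighted 3rd moment.

```python
import math

def terminal_velocity(D_mm):
    """Assumed power-law fall speed (m/s) for drop diameter D in mm."""
    return 3.78 * D_mm ** 0.67

def z_and_r(diams_mm, conc_per_m3_mm, dD_mm):
    """Z in mm^6/m^3 and R in mm/h from binned DSD data
    (diameters in mm, concentrations in m^-3 mm^-1, bin width in mm)."""
    Z = sum(n * D ** 6 * dD_mm for D, n in zip(diams_mm, conc_per_m3_mm))
    flux = sum(n * D ** 3 * terminal_velocity(D) * dD_mm
               for D, n in zip(diams_mm, conc_per_m3_mm))
    # (pi/6) D^3 drop volume, 3600 s/h, 1e-6 converts mm^3/m^2 to mm depth.
    R = math.pi / 6.0 * 3600.0 * 1e-6 * flux
    return Z, R
```

Both quantities scale linearly in the concentrations, which is why doubling the DSD doubles Z and R while the empirical Z = a·R^b relation between them stays nonlinear.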
Extrapolation from animals to the human for the retention of radiothallium in the blood
International Nuclear Information System (INIS)
The extrapolation of tissue distribution data from animal to human has implications in the clinical application of new radiopharmaceuticals, in studies of biodistribution and biokinetics, and in estimation of radiation absorbed dose. The extrapolative method described in this study is based on the assumption that the mechanism of tissue distribution of a radionuclide in different mammalian species is similar. This assumption implies that the fractional distribution function, f/sub h/(t), of a radionuclide in a specific tissue of one species is related to the corresponding fractional distribution function in any other species by linear transformations in the activity and time variables. Hence, the successful application of the extrapolative technique requires determining a reference f/sub h/(t) based on a conveniently studied species and finding the relationships between the factors of the transformations and one or more measurable species-dependent parameters. To test this approach, data for retention of Tl-201 in the blood of several species were used. Detailed biokinetic data in mice, collected in our laboratory, were used to determine the reference f/sub h/(t)/sub mouse/. Data for other species were extracted from the literature and compared with f/sub h/(t)/sub mouse/ to determine the transformation factors, using the least squares fitting technique. These factors appear, on the basis of the data available, to be power functions of body weight. Retention of activity in blood was chosen as a test of the theory because data have been published for several nonhuman species, as well as verifying values for the human
Image extrapolation for photo stitching using nonlocal patch-based inpainting
Voronin, V. V.; Marchuk, V. I.; Sherstobitov, A. I.; Semenischev, E. A.; Agaian, S.; Egiazarian, K.
2014-05-01
Image alignment and mosaicing are usually performed on a set of overlapping images, using features in the area of overlap for seamless stitching. In many cases such images have different sizes and shapes, so panoramas must either be cropped or completed by image extrapolation. This paper focuses on a novel image inpainting method based on a modified exemplar-based technique. The basic idea is to find an example (patch) from the image using local binary patterns and to replace the missing (`lost') data with it. We propose to use multiple criteria for the patch similarity search, since existing exemplar-based methods often produce unsatisfactory results in practice. The criterion for finding the best match uses several terms, including the Euclidean metric for pixel brightness and the chi-squared histogram matching distance for local binary patterns. The combined use of textural-geometric characteristics together with color information gives a more informative description of the patches. In particular, we show how to apply this strategy to image extrapolation for photo stitching. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
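A toy version of the multi-criteria patch match reads as follows (the weight and the 8-neighbour LBP variant are assumptions, not the paper's exact values): candidate patches are ranked by a weighted sum of the Euclidean brightness distance and the chi-squared distance between local-binary-pattern histograms.

```python
def lbp_histogram(patch):
    """8-neighbour LBP histogram of a 2-D grayscale patch (border skipped)."""
    hist = [0] * 256
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, len(patch) - 1):
        for x in range(1, len(patch[0]) - 1):
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if patch[y + dy][x + dx] >= patch[y][x]:
                    code |= 1 << bit
            hist[code] += 1
    return hist

def chi2(h1, h2):
    """Chi-squared histogram matching distance."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

def patch_distance(p, q, w_lbp=0.5):
    """Combined brightness + texture distance between two equal-size patches."""
    eucl = sum((a - b) ** 2 for ra, rb in zip(p, q)
               for a, b in zip(ra, rb)) ** 0.5
    return eucl + w_lbp * chi2(lbp_histogram(p), lbp_histogram(q))
```

In an exemplar-based inpainter, the source patch minimizing this combined distance is copied into the missing region; the LBP term penalizes candidates whose texture differs even when their mean brightness matches.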
DEFF Research Database (Denmark)
Ambühl, Simon; Sterndorff, Martin
2014-01-01
Mooring systems for floating wave energy converters (WECs) are a major cost driver. Failure of mooring systems often occurs due to extreme loads. This paper introduces an extrapolation method for extreme response which accounts for the control system of a WEC that governs the loads on the structure and the harvested power of the device, as well as the fact that extreme loads may occur during operation and not at extreme wave states when the device is in storm protection mode. The extrapolation method is based on short-term load time series and is applied to a case study where up-scaled surge load measurements from the lab-scaled WEPTOS WEC are taken. Different catenary anchor leg mooring (CALM) systems as well as single anchor leg mooring (SALM) systems are implemented for a dynamic simulation with different numbers of mooring lines. Extreme tension loads with a return period of 50 years are assessed for the hawser as well as at the different mooring lines. Furthermore, the extreme load impact given failure of one mooring line is assessed and compared with extreme loads given no system failure.
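A generic peak-extrapolation sketch illustrates the 50-year return load idea (a Gumbel fit by the method of moments on made-up block maxima; this is a simpler stand-in for, not a reproduction of, the short-term response method of the paper).

```python
import math

EULER_GAMMA = 0.5772156649

def gumbel_fit(maxima):
    """Method-of-moments Gumbel fit; returns (location mu, scale beta)."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi
    mu = mean - EULER_GAMMA * beta
    return mu, beta

def return_level(mu, beta, T_blocks):
    """Load exceeded on average once per T_blocks blocks (e.g. years)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T_blocks))
```

With yearly maxima as blocks, `return_level(mu, beta, 50)` is the 50-year tension; for the standard Gumbel (mu=0, beta=1) the 50-block level is about 3.90.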
Application of extrapolation chambers in low-energy X-rays as reference systems.
da Silva, Eric A B; Caldas, Linda V E
2012-07-01
Extrapolation chambers are instruments designed to measure doses of low-energy radiations, mainly beta radiation. In this work, a commercial extrapolation chamber and a homemade extrapolation chamber were applied in measurements using standard radiotherapy X-ray beams. Saturation curves and polarity effect as well as short- and medium-term stabilities were obtained, and these results are within the recommendations of the International Electrotechnical Commission (IEC). The response linearity and the extrapolation curves were also obtained, and they presented good behavior. The results show the usefulness of these extrapolation chambers in low-energy X-ray beams. PMID:22520689
Making the most of what we have: application of extrapolation approaches in wildlife transfer models
International Nuclear Information System (INIS)
Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate if phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted 137Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. 
The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of wildlife as it presents a method of addressing the lack of empirical data. Among the parameters which scale allometrically is radionuclide biological half-life. However, sufficient data across a range of species with different masses are required to establish allometric relationships for biological half-life and this is not always available. We have recently derived an alternative allometric approach to predict the biological half-life of radionuclides in homeothermic vertebrates which does not require such data. Predicted biological half-life values for four radionuclides compared well to available data for a range of species. The potential to further develop these approaches will be discussed. (authors)
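The allometric step described above can be sketched numerically (coefficients fitted from made-up data, not the radioecological values of the paper): biological half-life is modelled as T_b = a·M^b, the constants are recovered by a least-squares fit in log-log space, and T_b is then extrapolated to an unsampled species mass.

```python
import math

def fit_power_law(masses_kg, halflives_d):
    """Least-squares fit of T = a * M**b in log-log space; returns (a, b)."""
    xs = [math.log(m) for m in masses_kg]
    ys = [math.log(t) for t in halflives_d]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

def predict(a, b, mass_kg):
    """Extrapolated biological half-life for a species of given mass."""
    return a * mass_kg ** b
```

The same two-parameter fit underlies most mass-dependent relationships in radioecology; the scientific content lies in which data feed it and how far the extrapolation in mass can be trusted.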
Making the most of what we have: application of extrapolation approaches in wildlife transfer models
Energy Technology Data Exchange (ETDEWEB)
Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway); Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria); Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)
2014-07-01
Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate if phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted {sup 137}Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. 
The application of allometric, or mass-dependent, relationships within radioecology has increased with the evolution of models to predict the exposure of wildlife as it presents a method of addressing the lack of empirical data. Among the parameters which scale allometrically is radionuclide biological half-life. However, sufficient data across a range of species with different masses are required to establish allometric relationships for biological half-life and this is not always available. We have recently derived an alternative allometric approach to predict the biological half-life of radionuclides in homeothermic vertebrates which does not require such data. Predicted biological half-life values for four radionuclides compared well to available data for a range of species. The potential to further develop these approaches will be discussed. (authors)
UFOs in the LHC: Observations, studies and extrapolations
Baer, T; Cerutti, F; Ferrari, A; Garrel, N; Goddard, B; Holzer, EB; Jackson, S; Lechner, A; Mertens, V; Misiowiec, M; Nebot del Busto, E; Nordt, A; Uythoven, J; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zimmermann, F; Fuster, N
2012-01-01
Unidentified falling objects (UFOs) are potentially a major luminosity limitation for nominal LHC operation. They are presumably micrometer sized dust particles which lead to fast beam losses when they interact with the beam. With large-scale increases and optimizations of the beam loss monitor (BLM) thresholds, their impact on LHC availability was mitigated from mid 2011 onwards. For higher beam energy and lower magnet quench limits, the problem is expected to be considerably worse, though. In 2011/12, the diagnostics for UFO events were significantly improved: dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge, extrapolations for nominal LHC operation and mitigation strategies are presented
Null Point Distribution in Global Coronal Potential Field Extrapolations
Edwards, S. J.; Parnell, C. E.
2015-06-01
Magnetic null points are points in space where the magnetic field is zero. Thus, they can be important sites for magnetic reconnection by virtue of the fact that they are weak points in the magnetic field and also because they are associated with topological structures, such as separators, which lie on the boundary between four topologically distinct flux domains and therefore are also locations where reconnection occurs. The number and distribution of nulls in a magnetic field act as a measure of the complexity of the field. In this article, the numbers and distributions of null points in global potential field extrapolations from high-resolution synoptic magnetograms are examined. Extrapolations from magnetograms obtained with the Michelson Doppler Imager (MDI) are studied in depth and compared with those from the high-resolution SOlar Long-time Investigations of the Sun (SOLIS) and Helioseismic and Magnetic Imager (HMI) instruments. The fall-off in the density of null points with height is found to follow a power law with a slope that differs depending on whether the data are from solar maximum or solar minimum. The distribution of null points with latitude also varies with the cycle, as null points form predominantly over quiet-Sun regions and avoid active-region fields. The exceptions to this rule are the null points that form high in the solar atmosphere; these tend to form over large areas of strong flux in active regions. From case studies of data acquired with the MDI, SOLIS, and HMI, it is found that the distribution of null points is very similar between data sets, except, of course, that there are far fewer nulls observed in the SOLIS data than in the cases from MDI and HMI due to its lower resolution.
Extrapolation from animals to the human for the retention of radiothallium in the blood
International Nuclear Information System (INIS)
The extrapolation of tissue distribution data from animal to human has implications in the clinical application of new radiopharmaceuticals, in studies of biodistribution and biokinetics, and in estimation of radiation absorbed dose. The extrapolative method described in this study is based on the assumption that the mechanism of tissue distribution of a radionuclide in different mammalian species is similar. This assumption implies that the fractional distribution function, f/sub h/(t), of a radionuclide in a specific tissue of one species is related to the corresponding fractional distribution function in any other species by linear transformations in the activity and time variables. To test this approach, data for retention of Tl-201 in the blood of several species were used. Detailed biokinetic data in mice, collected in our laboratory, were used to determine the reference f/sub h/(t)/sub mouse/. Data for other species were extracted from the literature and compared with f/sub h/(t)/sub mouse/ to determine the transformation factors, using the least squares fitting technique. These factors appear, on the basis of the data available, to be power functions of body weight. Retention of activity in blood was chosen as a test of the theory because data have been published for several nonhuman species, as well as verifying values for the human
Extrapolating W-Associated Jet-Production Ratios at the LHC
Bern, Z; Cordero, F Febres; Hoeche, S; Kosower, D A; Ita, H; Maitre, D
2014-01-01
Electroweak vector-boson production, accompanied by multiple jets, is an important background to searches for physics beyond the Standard Model. A precise and quantitative understanding of this process is helpful in constraining deviations from known physics. We study four key ratios in $W + n$-jet production at the LHC. We compute the ratio of cross sections for $W + n$- to $W + (n-1)$-jet production as a function of the minimum jet transverse momentum. We also study the ratio differentially, as a function of the $W$-boson transverse momentum; as a function of the scalar sum of the jet transverse energy, $H_T^{\\rm jets}$; and as a function of certain jet transverse momenta. We show how to use such ratios to extrapolate differential cross sections to $W+6$-jet production at next-to-leading order, and we cross-check the method against a direct calculation at leading order. We predict the differential distribution in $H_T^{\\rm jets}$ for $W+6$ jets at next-to-leading order using such an extrapolation. We use th...
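The ratio-extrapolation idea can be reduced to a minimal numerical form (toy cross sections, not the NLO numbers of the paper): if the ratio σ_n/σ_{n-1} is approximately constant in n ("staircase scaling"), the measured ratios can be averaged and used to extrapolate the cross section one jet multiplicity beyond the last computed one.

```python
def extrapolate_next(cross_sections):
    """Given [sigma_1, ..., sigma_n], predict sigma_{n+1} assuming an
    approximately constant jet-production ratio."""
    ratios = [b / a for a, b in zip(cross_sections, cross_sections[1:])]
    r = sum(ratios) / len(ratios)      # average jet-production ratio
    return cross_sections[-1] * r
```

The same trick applies bin by bin to a differential distribution such as H_T^jets, which is how a W+6-jet prediction can be assembled from lower multiplicities.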
International Nuclear Information System (INIS)
The work presented covers different parts of a repository system such as near and far field aspects. Investigations are reported for the degradation of HLW glass, for the corrosion of container materials, for changes of geochemical environment in geological repositories, and for the thermo-mechanical behaviour of granitic host rock. Extrapolation methods are developed and applied for temperature and stress development in the host rock and for the radionuclide transport through a fractured system. (author)
Precise estimates by finite-size extrapolations of the S=1 Haldane-gapped system
International Nuclear Information System (INIS)
We carry out finite-size extrapolations of numerical-diagonalization data of the S=1 Heisenberg chain having a nonzero energy gap between the unique singlet ground state and the first excited state, namely the Haldane gap. Very precise estimates of the ground-state energy per site Eg/N = -1.4014840447(39) and the staggered component of the magnetic structure factor S(π) = 3.864356(31) at T=0 are successfully obtained from the finite-size data of system sizes up to N = 24 under the twisted boundary condition by the sequence interval squeeze method, which was applied to a precise estimation of the Haldane gap by Nakano and Terai [J. Phys. Soc. Jpn. 78 (2009) 014003]. The present estimates are compared with other estimates in previous studies from various methods including the quantum Monte Carlo simulation and the density matrix renormalization group calculation.
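The sequence interval squeeze method is specialised; as a generic illustration of accelerating finite-size data, here is the classical Aitken delta-squared transformation applied to a geometrically convergent toy sequence E(N) = E_inf + c·q^N (synthetic numbers, not the Haldane-chain data).

```python
def aitken(seq):
    """One pass of Aitken's delta-squared acceleration."""
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = c - 2.0 * b + a
        out.append(c - (c - b) ** 2 / denom if denom != 0 else c)
    return out

# Toy finite-size data converging geometrically to E_inf.
E_inf, c, q = -1.4014840447, 0.05, 0.6
seq = [E_inf + c * q ** n for n in range(8)]
accel = aitken(seq)
```

For an exactly geometric tail, one Aitken pass recovers the limit to rounding error, which is why such transformations can squeeze bulk estimates out of modest system sizes.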
Entanglement entropy and negativity of disjoint intervals in CFT: Some numerical extrapolations
De Nobili, Cristiano; Tonni, Erik
2015-01-01
The entanglement entropy and the logarithmic negativity can be computed in quantum field theory through a method based on the replica limit. Performing these analytic continuations in some cases is beyond our current knowledge, even for simple models. We employ a numerical method based on rational interpolations to extrapolate the entanglement entropy of two disjoint intervals for the conformal field theories given by the free compact boson and the Ising model. The case of three disjoint intervals is studied for the Ising model and the non-compact free massless boson. For the latter model, the logarithmic negativity of two disjoint intervals has also been considered. Some of our findings have been checked against existing numerical results obtained from the corresponding lattice models.
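The paper uses rational interpolants; as a simpler, hedged stand-in, this sketch extrapolates synthetic Rényi-like data, known at integer replica indices n = 2..6, to the replica limit n → 1 with Neville's polynomial interpolation, working in the variable x = 1/n in which the toy data S = s·(1 + x)/2 are polynomial (the functional form and the value of s are illustrative assumptions).

```python
def neville(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    p = list(ys)
    n = len(p)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i] - (x - xs[i]) * p[i + 1]) \
                / (xs[i] - xs[i + level])
    return p[0]

s = 1.234                                   # toy coefficient (assumed)
xs = [1.0 / n for n in [2, 3, 4, 5, 6]]     # interpolate in 1/n
Sn = [(1.0 + x) / 2.0 * s for x in xs]      # synthetic replica data
S1 = neville(xs, Sn, 1.0)                   # replica limit n -> 1
```

Choosing a good interpolation variable (here 1/n) is exactly the kind of judgment the rational-interpolation approach automates for less cooperative data.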
Application of extrapolation chambers in low-energy X-rays as reference systems
International Nuclear Information System (INIS)
Extrapolation chambers are instruments designed to measure doses of low-energy radiations, mainly beta radiation. In this work, a commercial extrapolation chamber and a homemade extrapolation chamber were applied in measurements using standard radiotherapy X-ray beams. Saturation curves and polarity effect as well as short- and medium-term stabilities were obtained, and these results are within the recommendations of the International Electrotechnical Commission (IEC). The response linearity and the extrapolation curves were also obtained, and they presented good behavior. The results show the usefulness of these extrapolation chambers in low-energy X-ray beams. - Highlights: • Usefulness of two extrapolation chambers was studied for low-energy X-ray beam dosimetry. • Performance of the chambers was verified at standard X-radiation qualities. • Both chambers are suited for use with radiotherapy quality X-ray beams.
International Nuclear Information System (INIS)
The issues of the extrapolation to multi-GPa pressures of the experimental data obtained at moderate pressures are considered for different classes of glass-forming substances. For covalent glass-forming substances, the phase transitions and structural changes are major factors that hamper extrapolation. Organic glass-forming liquids are not ground thermodynamic states of matter; under high pressures they transform to polymeric substances and then to mixtures of simple inorganic compounds. Therefore, extrapolation of the data obtained at moderate pressures is hardly possible for this class either. Metallic melts and rare gas liquids are the only substances whose properties can be extrapolated into the megabar region. However, such an extrapolation is highly uncertain due to the low viscosity and weak pressure dependences of the properties of these liquids. New experimental studies of rare gas and metallic liquids in the pressure region of tens of GPa are urgently needed for the extrapolation to be reliable
Takahashi, Junichi; Ishii, Masahiro; Kouno, Hiroaki; Yahiro, Masanobu
2014-01-01
We evaluate quark number densities at imaginary chemical potential by lattice QCD with clover-improved two-flavor Wilson fermions. The quark number densities are extrapolated to the small real chemical potential region by assuming certain functional forms. The extrapolated quark number densities are consistent with those calculated at real chemical potential with the Taylor expansion method for the reweighting factors. In order to study the large real chemical potential region, we use the two-phase model consisting of the quantum hadrodynamics model for the hadron phase and the entanglement-PNJL model for the quark phase. The quantum hadrodynamics model is constructed to reproduce nuclear saturation properties, while the entanglement-PNJL model reproduces well the lattice QCD data for the order parameters such as the Polyakov loop, the thermodynamic quantities and the screening masses. Then, we calculate the mass-radius relation of neutron stars and explore the hadron-quark phase transition with the two-phase model.
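A hedged toy version of the general procedure (not the paper's specific ansätze or data): fit an odd polynomial to imaginary-chemical-potential "measurements", then analytically continue θ → -iμ/T, which flips the sign of every other coefficient.

```python
import numpy as np

# Toy "lattice" data: at imaginary mu = i*theta*T the measured density
# behaves like sin(theta) in this illustration (assumed, for testing).
theta = np.linspace(0.05, 1.0, 20)
n_imag = np.sin(theta)

# Fit an odd polynomial  a*theta + b*theta**3  by least squares.
A = np.column_stack([theta, theta**3])
a, b = np.linalg.lstsq(A, n_imag, rcond=None)[0]

# Analytic continuation theta -> -i*mu/T turns  a*x + b*x**3
# into  a*x - b*x**3  for real x = mu/T (i*sin(theta) becomes sinh).
x = 0.5
n_real = a * x - b * x**3
print(n_real, np.sinh(x))  # close to sinh(0.5) ~ 0.521
```

The reliability of such continuations degrades quickly with increasing μ/T, which is why the entry switches to a two-phase model for the large-μ region.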
International Nuclear Information System (INIS)
The Intelligent Extrapolation Criticality Device is used for automatic counting and automatic extrapolation during criticality experiments on a reactor. Tests must be performed on a zero-power reactor or another reactor before the Device is used. The paper describes the test situation and test results of the Device on the zero-power reactor. The test results show that the Device has the functions of automatic counting and automatic extrapolation, the deviation of the extrapolation data is small, and it can satisfy the requirements of physical startup on the reactor. (author)
An empirical relationship for extrapolating sparse experimental lap joint data.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph; Starr, Michael James
2010-10-01
Correctly incorporating the influence of mechanical joints in built-up mechanical systems is a critical element of model development for structural dynamics predictions. Quality experimental data are often difficult to obtain and are rarely sufficient to fully determine the parameters of relevant mathematical models. On the other hand, fine-mesh finite element (FMFE) modeling facilitates innumerable numerical experiments at modest cost. Detailed FMFE analysis of built-up structures with frictional interfaces reproduces trends among problem parameters found experimentally, but there are qualitative differences. Those differences are currently ascribed to the very approximate nature of the friction model available in most finite element codes. Though numerical simulations are insufficient to produce qualitatively correct behavior of joints, some relations, developed here through observations of a multitude of numerical experiments, suggest interesting relationships among joint properties measured under different loading conditions. These relationships can be generalized into forms consistent with data from physical experiments. One such relationship, developed here, expresses the rate of energy dissipation per cycle within the joint under various combinations of extensional and clamping load in terms of dissipation under other load conditions. The use of this relationship, though not exact, is demonstrated for the purpose of extrapolating a representative set of experimental data to span the range of variability observed in real data.
Sato, A.; Yomogida, K.
2014-12-01
The early warning system operated by the Japan Meteorological Agency (JMA) has been publicly available since October 2007. The present system is still not effective in cases where we cannot assume a nearly circular wavefront expansion from a source. We propose a new approach based on the extrapolation of the early observed wavefield alone, without estimating its epicenter. The idea is similar to the migration method in exploration seismology, but we use not only the information of the wavefield at an early stage (i.e., at time T2 in the figure) but also its normal derivatives (the difference between T1 and T2); that is, we utilize the apparent velocity and direction of early-stage wave propagation to predict the wavefield later (at T3 in the figure). For the extrapolation of the wavefield, we need a reliable Green's function from the observed point to a target point at which the wave arrives later. Since the complete 3-D wave propagation is extremely complex, particularly in and around Japan with its highly heterogeneous structures, we consider a phenomenological 2-D Green's function, that is, a wavefront propagating on the surface with a certain apparent velocity and direction of the P wave. This apparent velocity and direction may vary significantly depending on, for example, event depth and the area of propagation, so we examined those of P waves propagating in Japan in various situations. For example, the velocity for shallow events in Hokkaido is 7.1 km/s, while that in Nagano prefecture is about 5.5 km/s. In addition, the apparent velocity depends on event depth: 7.1 km/s for a depth of 10 km and 8.9 km/s for 100 km in Hokkaido. We also conducted f-k array analyses of adjacent five or six stations, where we can accurately estimate the apparent velocity and direction of the P wave. For deep events with relatively simple waveforms, these are easily obtained, but we may need site corrections to enhance correlations of waveforms among stations for shallow ones.
In the above extrapolation scheme, we can only estimate the arrival times of the P wave at remote stations, but in practice we need to estimate the S-wave arrival time and intensity. We therefore compare actual S-wave arrival times with P-wave ones for various epicentral distances, event depths and regions, and list empirical relations between them toward our final goal of S-wave estimation.
Jiang, Chaowei
2013-01-01
Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to SDO/HMI data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then input into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. Most importa...
International Nuclear Information System (INIS)
This document develops methods of measuring experimentally the limits of valid fracture mechanics data that can be obtained from small fracture mechanics specimens. The proposed technique generally shows that present ASTM limits are overly conservative, and the new technique would allow almost a threefold increase in the amount of crack extension allowed in the testing of a surveillance specimen. Analytic relationships are then developed to allow use of the new experimentally measured limit of J-controlled crack growth for design or failure analysis applications to pressure vessel structures. The new region of J-controlled crack growth is shown to correlate best with the omega criterion, which defines limits on both the maximum J level and the maximum crack extension allowable for a particular specimen size and material toughness combination. The final section looks at the problem of extrapolation of J-R curve data when needed for a structural fracture analysis. Several forms of extrapolation relationships are compared from the point of view of accurate and conservative extrapolation, particularly from the standpoint of tearing instability analysis of a growing, ductile crack on the material upper shelf. 35 refs., 38 figs., 12 tabs
Directory of Open Access Journals (Sweden)
Ezekiel Uba Nwose
2010-04-01
Background: There are many different methods for the assessment of whole blood viscosity, but not every pathology unit has equipment for any of these methods. However, a validated arithmetic method exists whereby whole blood viscosity can be extrapolated from haematocrit and total serum proteins. Aims: The objective of this work is to develop an algorithm in the form of a chart by which clinicians can easily extrapolate whole blood viscosity values in their consulting rooms or on the ward. Another objective is to suggest normal, subnormal and critical reference ranges applicable to this method. Materials and Methods: Whole blood viscosity at high shear stress was determined from various possible pairs of haematocrit and total proteins. A chart was formulated so that whole blood viscosity can be extrapolated. After determination of two standard deviations from the mean and ascertainment of symmetric distribution, normal and abnormal reference ranges were defined. Results: The clinicians' user-friendly chart is presented. Considering presumptive lower and upper limits, the continuum of ≤14.28, 14.29–15.00, 15.01–19.01, 19.02–19.39 and ≥19.40 (at 208 s-1) is obtained as reference ranges for critically low, subnormal low, normal, subnormal high and critically high whole blood viscosity levels, respectively. Conclusion: This article advances a validated method to provide a user-friendly chart that would enable clinicians to assess whole blood viscosity for any patient who has results for full blood count and total proteins. It would make the assessment of whole blood viscosity costless and the neglect of a known cardiovascular risk factor less excusable.
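The extrapolation-plus-classification workflow can be sketched as below. The coefficients in `wbv_high_shear` follow a commonly cited arithmetic form for high-shear whole blood viscosity; treat them as assumptions to be checked against the validated source before any clinical use, while the classification cut-offs are taken directly from the entry's reference continuum.

```python
def wbv_high_shear(haematocrit_pct, total_protein_g_per_L):
    """Extrapolated whole blood viscosity at high shear (208 s^-1).

    Assumed form: WBV = 0.12*HCT + 0.17*(TP - 2.07), with HCT in %
    and TP in g/L. Coefficients are illustrative, not authoritative.
    """
    return 0.12 * haematocrit_pct + 0.17 * (total_protein_g_per_L - 2.07)

def classify(wbv):
    """Map a WBV value onto the reference continuum quoted in the text."""
    if wbv <= 14.28:
        return "critically low"
    if wbv <= 15.00:
        return "subnormal low"
    if wbv <= 19.01:
        return "normal"
    if wbv <= 19.39:
        return "subnormal high"
    return "critically high"

v = wbv_high_shear(45, 70)  # e.g. HCT 45%, total proteins 70 g/L
print(round(v, 2), classify(v))  # ~16.95, classed "normal"
```

A chart like the one the article describes is just this function tabulated over a grid of haematocrit and total-protein values.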
Characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field
International Nuclear Information System (INIS)
The extrapolation chamber is a parallel-plate, variable-volume chamber based on the Bragg-Gray theory. It determines, in absolute mode and with high accuracy, the absorbed dose by extrapolation of the ionization current measured to a null distance between the electrodes. This chamber is used for dosimetry of external beta rays for radiation protection. This paper presents the characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field. The absorbed dose rate to tissue at a depth of 0.07 mm was calculated and is (0.13206±0.0028) µGy. The extrapolation chamber null depth was determined and its value is 60 µm. The influence of temperature, pressure and humidity on the value of the corrected current was also evaluated. Temperature is the parameter with the greatest influence on this value, while the influences of pressure and humidity are not very significant. Extrapolation curves were obtained. (Author)
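The core numerical step, extrapolating the ionization current to zero electrode spacing, can be sketched as a straight-line fit; the current readings below are made-up illustrative numbers, and the conversion of the slope into absorbed dose via the Bragg-Gray relation is omitted.

```python
import numpy as np

# Hypothetical current readings (pA) at electrode spacings d (mm).
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
I = np.array([0.41, 0.80, 1.21, 1.60, 2.01])

# Linear fit I(d) = slope*d + intercept; the extrapolation-chamber
# principle uses the limiting slope dI/dd as d -> 0 (here the fit slope),
# which then enters the Bragg-Gray absorbed-dose formula.
slope, intercept = np.polyfit(d, I, 1)
print(round(slope, 3))  # ~0.8 pA/mm
```

In practice several corrections (temperature, pressure, polarity) are applied to each current reading before the fit, as the entry describes.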
Fuel cycle design for ITER and its extrapolation to DEMO
Energy Technology Data Exchange (ETDEWEB)
Konishi, Satoshi [Institute of Advanced Energy, Kyoto University, Kyoto 611-0011 (Japan)], E-mail: s-konishi@iae.kyoto-u.ac.jp; Glugla, Manfred [Forschungszentrum Karlsruhe, P.O. Box 3640, D 76021 Karlsruhe (Germany); Hayashi, Takumi [Japan Atomic Energy Agency, Tokai, Ibaraki 319-0015 (Japan)]
2008-12-15
ITER is the first fusion device that continuously processes DT plasma exhaust and supplies recycled fuel in a closed loop. All the tritium and deuterium in the exhaust are recovered, purified and returned to the tokamak with minimal delay, so that extended burn can be sustained with limited inventory. To maintain the safety of the entire facility, plant-scale detritiation systems will also run continuously to remove tritium from the effluents at maximum efficiency. In this entire tritium plant system, an extremely high decontamination factor, that is, the ratio of the tritium loss to the processing flow rate, is required for fuel economy and minimized tritium emissions, and the system design based on state-of-the-art technology is expected to satisfy all the requirements without significant technical challenges. A considerable part of the fusion tritium system will be verified with ITER and its decades of operation experience. Toward the DEMO plant that will actually generate energy and operate its closed fuel cycle, the breeding blanket and the power train that carries high-temperature, high-pressure media from the fusion device to the generation system will be the major additions. For tritium confinement, safety and environmental emission, the blanket, its coolant, and generation systems such as the heat exchanger, steam generator and turbine will be the critical systems, because limiting tritium permeation from the breeder while handling a large amount of high-temperature, high-pressure coolant will be even more difficult than what is required for ITER. Detritiation of solid waste such as used blanket and divertor components will be another issue for both tritium economy and safety. Unlike ITER, which is regarded as an experimental facility, DEMO will be expected to demonstrate safety, reliability and social acceptance, even if economic performance is excluded.
Fuel and environmental issue to be tested in the DEMO will determine the viability of the fusion as a future energy source. Some of the subjects cannot be expected to be within the extrapolation of ITER technology and require long term efforts paralleling ITER.
Montiel, Ariadna; Sendra, Irene; Escamilla-Rivera, Celia; Salzano, Vincenzo
2014-01-01
In this work we present a nonparametric approach, which works on minimal assumptions, to reconstruct the cosmic expansion of the Universe. We propose to combine a locally weighted scatterplot smoothing method and a simulation-extrapolation method. The first one (Loess) is a nonparametric approach that allows one to obtain smoothed curves with no prior knowledge of the functional relationship between variables nor of the cosmological quantities. The second one (Simex) takes into account the effect of measurement errors on a variable via a simulation process. For the reconstructions we use as raw data the Union2.1 Type Ia Supernovae compilation, as well as recent Hubble parameter measurements. This work aims to illustrate the approach, which turns out to be a self-sufficient technique in the sense that we do not have to choose anything by hand. We examine the details of the method, among them the amount of observational data needed to perform the locally weighted fit, which will define the robustness of our reconstructio...
International Nuclear Information System (INIS)
The Interface System for the Extrapolation Chamber (SICE) contains several devices handled by a personal computer (PC); it is able to acquire the data required to calculate the absorbed dose due to beta radiation. The main functions of the system are: a) measuring the ionization current or charge stored in the extrapolation chamber; b) adjusting the distance between the plates of the extrapolation chamber automatically; c) adjusting the bias voltage of the extrapolation chamber automatically; d) acquiring the data on temperature, atmospheric pressure, relative humidity of the environment and the voltage applied between the plates of the extrapolation chamber; e) calculating the effective area of the plates of the extrapolation chamber and the real distance between them; f) storing all the obtained information on hard disk or diskette. A comparison between the desired distance and the distance on the dial of the extrapolation chamber shows that the resolution of the system is 20 µm. The voltage can be changed between -399.9 V and +399.9 V with an error of less than 3% and a resolution of 0.1 V. These uncertainties are within the accepted limits for use in the determination of the absolute absorbed dose due to beta radiation. (Author)
Quantitative in vitro-to-in vivo extrapolation in a high-throughput environment.
Wetmore, Barbara A
2015-06-01
High-throughput in vitro toxicity screening provides an efficient way to identify potential biological targets for environmental and industrial chemicals while conserving limited testing resources. However, reliance on the nominal chemical concentrations in these in vitro assays as an indicator of bioactivity may misrepresent potential in vivo effects of these chemicals due to differences in clearance, protein binding, bioavailability, and other pharmacokinetic factors. Development of high-throughput in vitro hepatic clearance and protein binding assays and refinement of quantitative in vitro-to-in vivo extrapolation (QIVIVE) methods have provided key tools to predict xenobiotic steady state pharmacokinetics. Using a process known as reverse dosimetry, knowledge of the chemical steady state behavior can be incorporated with HTS data to determine the external in vivo oral exposure needed to achieve internal blood concentrations equivalent to those eliciting bioactivity in the assays. These daily oral doses, known as oral equivalents, can be compared to chronic human exposure estimates to assess whether in vitro bioactivity would be expected at the dose-equivalent level of human exposure. This review will describe the use of QIVIVE methods in a high-throughput environment and the promise they hold in shaping chemical testing priorities and, potentially, high-throughput risk assessment strategies. PMID:24907440
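The reverse-dosimetry arithmetic described above reduces, in its simplest linear form, to one division; the chemical and the numbers below are hypothetical, and real QIVIVE uses measured clearance and protein-binding data to compute the steady-state concentration.

```python
def oral_equivalent(ac50_uM, css_uM_per_unit_dose):
    """Reverse dosimetry: the external daily oral dose (mg/kg/day) whose
    steady-state blood concentration matches the in vitro AC50.

    Assumes Css scales linearly with dose rate, the standard
    steady-state QIVIVE assumption.
    """
    return ac50_uM / css_uM_per_unit_dose

# Hypothetical chemical: assay AC50 = 3 uM; a 1 mg/kg/day exposure is
# predicted (from clearance and binding data) to give Css = 1.5 uM.
dose = oral_equivalent(3.0, 1.5)
print(dose)  # 2.0 mg/kg/day
```

The resulting oral equivalent is then compared against chronic human exposure estimates to prioritize chemicals, as the review describes.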
Melting of “non-magic” argon clusters and extrapolation to the bulk limit
Energy Technology Data Exchange (ETDEWEB)
Senn, Florian, E-mail: f.senn@massey.ac.nz; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter, E-mail: p.a.schwerdtfeger@massey.ac.nz [Centre for Theoretical Chemistry and Physics, The New Zealand Institute for Advanced Study, Massey University Albany, Private Bag 102904, Auckland 0745 (New Zealand); Pahl, Elke, E-mail: e.pahl@massey.ac.nz [Centre for Theoretical Chemistry and Physics, Institute of Natural and Mathematical Sciences, Massey University Albany, Private Bag 102904, Auckland 0745 (New Zealand)
2014-01-28
The melting of argon clusters Ar{sub N} is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, “Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations,” Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes.
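A common form of such cluster-to-bulk extrapolations is a surface-scaling fit, Tm(N) = Tm_bulk - c*N^(-1/3), sketched below on synthetic data; the scaling law is a standard assumption, and the numbers are illustrative rather than the paper's.

```python
import numpy as np

# Surface atoms scale as N^(2/3), so melting-point depression is often
# modeled as Tm(N) = Tm_bulk - c * N**(-1/3). Fitting Tm against
# x = N**(-1/3) makes the bulk limit the intercept at x = 0.
N = np.array([55, 100, 147, 200, 309], dtype=float)
Tm_bulk_true, c = 85.9, 120.0                # synthetic parameters
Tm = Tm_bulk_true - c * N ** (-1.0 / 3)      # synthetic "cluster data"

slope, intercept = np.polyfit(N ** (-1.0 / 3), Tm, 1)
print(round(intercept, 1))  # recovers 85.9 K on this synthetic data
```

The entry's caveat applies here too: with real data the extrapolated intercept is sensitive to which cluster sizes enter the fit.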
Melting of “non-magic” argon clusters and extrapolation to the bulk limit
International Nuclear Information System (INIS)
The melting of argon clusters ArN is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, “Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations,” Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes
Eco-label - simple environmental choice / Andres Viia, Külliki Tafel
Viia, Andres
2003-01-01
The authors explain the nature of eco-labelling and its necessity for informing consumers about products and services that are less harmful to the environment. Examples are given of regional and national eco-labels in the EU, of the better-known eco-labels outside Europe, of warning and informative environmental labels, and of fake eco-labels. See also: North-East Estonia - a seat of an environment-friendly batteries' recycling
Photometric accuracy - The impact of extrapolation in differing color-transformation schemes
Manfroid, Jean; Sterken, C.; Gosset, Eric
1992-01-01
We discuss photometric errors that arise from the use of differing color-transformation schemes when extrapolating outside the range of the standard values that define the system. An example based on data from the Long-Term Photometry of Variables project (LTPV) at ESO is analyzed. Practically, the extrapolation errors are most evident as systematic brightness jumps of some program stars, seen when monitored over several observing runs. Conclusions are drawn about the choice of standard stars...
Extrapolation chamber for absolute energy dose rate measurement of beta and soft x radiation
International Nuclear Information System (INIS)
A new extrapolation chamber is described, which is used as absolute standard for the determination of the absorbed beta radiation dose. The construction simulates a semi-infinite phantom and the absorbed dose to tissue was determined in tissue equivalent material. With a new system of concentric collecting electrodes it is possible to obtain the absorbed dose at the centre of the electrodes, which is extrapolated from five values of the surface of the collecting electrodes. (Author)
EXTRAPOLATION OF THE SOLAR CORONAL MAGNETIC FIELD FROM SDO/HMI MAGNETOGRAM BY A CESE-MHD-NLFFF CODE
Energy Technology Data Exchange (ETDEWEB)
Jiang Chaowei; Feng Xueshang, E-mail: cwjiang@spaceweather.ac.cn, E-mail: fengx@spaceweather.ac.cn [SIGMA Weather Group, State Key Laboratory for Space Weather, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190 (China)
2013-06-01
Due to the absence of direct measurement, the magnetic field in the solar corona is usually extrapolated from the photosphere in a numerical way. At the moment, the nonlinear force-free field (NLFFF) model dominates the physical models for field extrapolation in the low corona. Recently, we have developed a new NLFFF model with MHD relaxation to reconstruct the coronal magnetic field. This method is based on the CESE-MHD model with the conservation-element/solution-element (CESE) spacetime scheme. In this paper, we report the application of the CESE-MHD-NLFFF code to Solar Dynamics Observatory/Helioseismic and Magnetic Imager (SDO/HMI) data with magnetograms sampled for two active regions (ARs), NOAA AR 11158 and 11283, both of which were very non-potential, producing X-class flares and eruptions. The raw magnetograms are preprocessed to remove the force and then input into the extrapolation code. Qualitative comparison of the results with the SDO/AIA images shows that our code can reconstruct magnetic field lines resembling the EUV-observed coronal loops. The most important structures of the ARs are reproduced excellently, such as the highly sheared field lines that suspend filaments in AR 11158 and the twisted flux rope that corresponds to a sigmoid in AR 11283. Quantitative assessment of the results shows that the force-free constraint is fulfilled very well in the strong-field regions but apparently not that well in the weak-field regions because of data noise and numerical errors in the small currents.
International Nuclear Information System (INIS)
Highlights: • The maximal predictive step size is determined by the largest Lyapunov exponent. • A proper forecasting step size is applied to load demand forecasting. • The improved approach is validated by actual load demand data. • The non-linear fractal extrapolation method is compared with three forecasting models. • Performance of the models is evaluated by three different error measures. - Abstract: Precise short-term load forecasting (STLF) plays a key role in unit commitment, maintenance and economic dispatch problems. Employing a subjective and arbitrary predictive step size is one of the most important factors causing low forecasting accuracy. To solve this problem, the largest Lyapunov exponent is adopted to estimate the maximal predictive step size, so that the step size in the forecasting is no more than this maximal one. In addition, this paper considers a seldom-used forecasting model based on the non-linear fractal extrapolation (NLFE) algorithm to improve the accuracy of predictions. The suitability and superiority of the two solutions are illustrated through an application to real load forecasting using New South Wales electricity load data from the Australian National Electricity Market. Meanwhile, three forecasting models that have received high approval in STLF (the gray model, the seasonal autoregressive integrated moving average approach and the support vector machine method) are selected for comparison with the NLFE algorithm. Comparison results show that the NLFE model is outstanding, effective, practical and feasible.
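The step-size bound implied by the largest Lyapunov exponent can be made concrete: an initial uncertainty Δ0 grows roughly as Δ0·exp(λt), so the horizon at which it reaches a tolerated error Δtol is t = (1/λ)·ln(Δtol/Δ0). The numbers below are hypothetical, not taken from the NSW load data.

```python
import math

def max_predictive_horizon(lyap_max, delta0, delta_tol):
    """Largest usable forecasting step for a chaotic series.

    An initial error delta0 grows as delta0 * exp(lyap_max * t);
    solving delta0 * exp(lyap_max * t) = delta_tol for t gives the bound.
    """
    return math.log(delta_tol / delta0) / lyap_max

# Hypothetical load series: lambda_max = 0.05 per step, 1% initial error,
# 10% tolerated error -> a horizon of roughly 46 steps.
print(round(max_predictive_horizon(0.05, 0.01, 0.10)))  # ~46 steps
```

Forecasting beyond this horizon with any model, fractal extrapolation included, cannot be expected to beat the error tolerance, which is the paper's motivation for bounding the step size.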
EXTRAPOLATING PHOTOLYSIS RATES FROM THE LABORATORY TO THE ENVIRONMENT
The importance of environmental photolysis of pesticides and other xenobiotics has been realized in the last decade and methods for assessing these processes are continually being improved. The general goal has been to develop quantitative laboratory procedures that can be used t...
International Nuclear Information System (INIS)
A GIS database was established for fertiliser recommendation domains in Kisii District by using FURP fertiliser trial results, KSS soils data and MDBP climatic data. These are manipulated in ESRI's (Environmental Systems Research Institute) PC ARC/INFO and ARCVIEW software. The extrapolations were only done for the long-rains season (March-August) with three to four years of data. GIS technology was used to cluster fertiliser recommendation domains as geographical areas expressed in terms of variation over space, not limited to the site of experiment where a certain agronomic or economic fertiliser recommendation was made. The extrapolation over space was found to be more representative for any recommendation, the result being digital maps describing each area in geographical space. From the results of the extrapolations, approximately 38,255 ha of the district require zero nitrogen (N) fertilisation while 94,330 ha require 75 kg ha-1 nitrogen fertilisation during the (March-August) long rains. The extrapolation was made difficult since no direct relationships could be established between the available N, % carbon (C) or any of the other soil properties and the obtained yields. Decision rules were however developed based on % C, which was the soil variable with values closest to the obtained yields; 3% organic carbon was found to be the boundary between zero application and 75 kg-N application. GIS techniques made it possible to model and extrapolate the results using the available data. The extrapolations still need to be verified with more ground data from fertiliser trials. Data gaps in the soil map left some soil mapping units with no recommendations. Elevation was observed to influence yields and it should be included in future extrapolation by clustering digital elevation models with rainfall data in a spatial model at the district scale
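The decision rule stated in the entry (3% organic carbon as the boundary between the two recommendation domains) is simple enough to state as code; the function name is our own.

```python
def n_recommendation_kg_per_ha(percent_carbon):
    """Decision rule from the text: 3% organic carbon separates the
    zero-N domain from the 75 kg ha^-1 N domain (long-rains season)."""
    return 0 if percent_carbon >= 3.0 else 75

print(n_recommendation_kg_per_ha(3.4), n_recommendation_kg_per_ha(1.8))  # 0 75
```

In the GIS, this rule is evaluated per mapping unit on the interpolated % C surface to produce the recommendation-domain maps.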
Fang, Lei; Yang, Jian
2014-12-01
The Landsat-derived differenced Normalized Burn Ratio (dNBR) is widely used for burn severity assessments. Studies of regional wildfire trends in response to climate change require consistency in dNBR mapping across multiple image dates, which may vary in atmospheric condition. Conversion of continuous dNBR images into categorical burn severity maps often requires extrapolation of dNBR thresholds from present fires, for which field severity measurements such as Composite Burn Index (CBI) data are available, to historical fires, for which CBI data are typically unavailable. Although differential atmospheric effects between image collection dates could lead to biased estimates of historical burn severity patterns, little is known concerning the influence of atmospheric effects on dNBR performance and threshold extrapolation. In this study, we compared the performance of dNBR calculated from six atmospheric correction methods using an optimality approach. The six correction methods included one partial method (top-of-atmosphere reflectance, TOA), two absolute methods, and three relative methods. We assessed how the correction methods affected the CBI-dNBR correlation and burn severity mapping in a Chinese boreal forest fire that occurred in 2010. The dNBR thresholds of the 2010 fire for each of the correction methods were then extrapolated to classify a historical fire from 2000. Classification accuracies of the threshold extrapolations were assessed based on Cohen's Kappa analysis with 73 field-based validation plots. Our study found that most correction methods improved the mean dNBR optimality of the two fires. The relative correction methods generated 32% higher optimality than both the TOA and absolute correction methods. All the correction methods yielded high CBI-dNBR correlations (mean R2 = 0.847) but distinctly different dNBR thresholds for severity classification of the 2010 fire.
Absolute correction methods could substantially increase the optimality score but were insufficient to provide a consistent scale of radiometric condition between multi-temporal Landsat images, which resulted in lower severity classification accuracies (Kappa = 0.53) for the 2000 fire than the relative correction methods (Kappa = 0.72). Consistent radiometric response in remote sensing datasets proved essential for accuracy in monitoring regional burn severity trends. Extrapolation of empirical dNBR thresholds to historical conditions without relative normalization will likely lead to biased burn severity classifications.
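The dNBR computation itself is straightforward; the sketch below shows it with made-up reflectance values, and the severity thresholds in `classify` are illustrative placeholders, not the thresholds extrapolated in the study.

```python
import numpy as np

def nbr(nir, swir):
    """Normalized Burn Ratio from NIR and SWIR reflectance."""
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

def classify(d, thresholds=(0.1, 0.27, 0.44, 0.66)):
    """Map a dNBR value to a severity class; the thresholds here are
    illustrative placeholders, not the study's extrapolated values."""
    labels = ["unburned", "low", "moderate-low", "moderate-high", "high"]
    return labels[int(np.searchsorted(thresholds, d))]

d = dnbr(0.45, 0.20, 0.25, 0.40)  # reflectances are made-up
print(round(d, 3), classify(d))   # 0.615 "moderate-high"
```

The study's point is that these thresholds shift with the atmospheric correction applied, so extrapolating them across image dates requires a consistent radiometric scale.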
Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations
Cartwright, Keigh
2014-10-01
To have a high degree of confidence in simulations, one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the particle-in-cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution with error bounds from the stochastic output of a particle-in-cell code with multiple convergence parameters (e.g. Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
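The single-parameter backbone of such Richardson-based error estimation can be sketched as follows; the multi-parameter stochastic regression of the talk generalizes this deterministic three-grid estimate.

```python
import numpy as np

def richardson_estimate(f1, f2, f3, r):
    """Observed convergence order and extrapolated value from solutions
    on three grids with constant refinement ratio r (spacings h, h/r, h/r^2).

    Assumes the standard single-term error model f(h) = f0 + C*h**p;
    then (f1 - f2)/(f2 - f3) = r**p, which yields p and the limit f0.
    """
    p = np.log((f1 - f2) / (f2 - f3)) / np.log(r)
    f0 = f3 + (f3 - f2) / (r**p - 1)
    return p, f0

# Synthetic second-order data: f(h) = 1 + 0.5*h**2 at h = 0.4, 0.2, 0.1
f = lambda h: 1 + 0.5 * h**2
p, f0 = richardson_estimate(f(0.4), f(0.2), f(0.1), r=2.0)
print(round(p, 3), round(f0, 6))  # 2.0 and 1.0
```

With stochastic PIC output, each f value carries statistical noise, which is why the talk bootstraps the regression rather than using three grids deterministically.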
Nonlinear Force-Free Extrapolation of Vector Magnetograms into the Corona
Thalmann, J. K.; Wiegelmann, T.; Sun, X.; Hoeksema, J. T.; Liu, Y.; Tadesse, T.
2011-12-01
To investigate the structure and evolution of the coronal magnetic field, we extrapolate measurements of the photospheric magnetic field vector into the corona based on the force-free assumption. A complication of this approach is that the measured photospheric magnetic field is not force-free, so a preprocessing routine has to be applied in order to achieve boundary conditions suitable for force-free modelling. Furthermore, the nonlinear force-free extrapolation code takes into account errors in the photospheric field data, which occur due to noise, incomplete inversions or ambiguity-removal techniques. Within this work we compare extrapolations from SDO/HMI and SOLIS vector magnetograms and explain how to find optimum parameters for handling the data of a particular instrument. The resulting coronal magnetic field lines are quantitatively compared with coronal EUV images from SDO/AIA.
Possible sharp quantization of extrapolated high temperature viscosity- theory and experiment
Nussinov, Z; Blodgett, M; Kelton, K F
2014-01-01
Quantum effects in material systems are often pronounced at low energies and become insignificant at high temperatures. We find that, perhaps counterintuitively, certain quantum effects may follow the opposite route and become progressively sharper when extrapolated to the "classical" high temperature limit. In the current work, we derive basic relations, extend standard kinetic theory by taking into account a possible fundamental quantum time scale, find new general equalities connecting semi-classical dynamics and thermodynamics to Planck's constant, and compute current correlation functions. Our analysis suggests that, on average, the extrapolated high temperature viscosity of general liquids may tend to a value set by the product of the particle number density $n$ and Planck's constant $h$. We compare this theoretical result with experimental measurements of an ensemble of 23 metallic fluids, where this indeed seems to be the case. The extrapolated high temperature viscosity of each of these liquids ...
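The quoted scale $n\,h$ can be checked by direct arithmetic. The number density below is an assumed, generic order-of-magnitude value for a metallic liquid, not a figure from the paper:

```python
h_planck = 6.62607015e-34   # Planck's constant, J*s (exact SI value)
n = 5.0e28                  # assumed particle number density, m^-3

# Extrapolated high-temperature viscosity scale eta ~ n * h, in Pa*s
eta_high_T = n * h_planck
# of order 3e-5 Pa*s, i.e. a few hundredths of a millipascal-second
```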
Comparison between the response of two extrapolation chambers in low energy X-rays
International Nuclear Information System (INIS)
Full text: Extrapolation chambers are important metrological instruments for the detection of beta radiation and low energy X-rays, since they are able to perform absolute measurements of weakly penetrating radiation. These chambers are very useful because they allow the determination of surface doses through the variation of the air mass in their sensitive volume. In this work, two extrapolation chambers were tested in order to establish which chamber presents the best response in some standard X-ray beam qualities at the radiotherapy level. For comparison, a commercial PTW extrapolation chamber, model 23391, and an extrapolation chamber designed and constructed at the Radiation Metrology Laboratory of Instituto de Pesquisas Energeticas e Nucleares were studied. The commercial chamber has a collecting electrode (40 mm diameter) and guard rings made of aluminum, and an entrance window (0.025 mm thick) made of polyamide; the chamber developed in-house has a collecting electrode (10 mm diameter) and guard rings made of graphite, and an entrance window (0.84 mg/cm2 thick) made of aluminized polyethylene terephthalate. Both chambers were positioned at 50 cm from the X-ray system focus. The ionization currents were measured at negative and positive polarities, and the mean values were considered; a Keithley 617 electrometer was utilized. The main characteristics of the extrapolation chambers, such as ion collection efficiency, saturation curve, polarity effect, repeatability, long-term stability, stabilization time, linearity of response, extrapolation curve, energy dependence, and transmission factors, were determined. The results show that both chambers present adequate responses for the verified X-ray beam qualities, confirming previous studies performed with these detectors. In conclusion, both chambers can be used for accurate measurements in low energy X-ray beams. (author)
International Nuclear Information System (INIS)
A technique for simulating all detection processes in a 4π(β,e,X)-γ coincidence system by means of the Monte Carlo method is described. This procedure yields a more realistic behaviour of the extrapolation curve than the usual polynomial fit. The present paper describes its application to the standardisation of a typical pure beta emitter, namely 35S, by the efficiency tracing technique, and of an EC-gamma radionuclide, namely 133Ba. The calculated extrapolations were compared to experimental values obtained at the IPEN
Characterization of low energy X-rays beams with an extrapolation chamber
International Nuclear Information System (INIS)
In laboratories involved in radiological protection practices, it is usual to use reference radiations for calibrating dosimeters and for studying their response in terms of energy dependence. The International Organization for Standardization (ISO) established four series of reference X-ray beams in the ISO 4037 standard: the L and H series, of low and high air kerma rates, respectively, the N series of narrow spectrum and the W series of wide spectrum. X-ray beams with tube potentials below 30 kV, called 'low energy beams', are in most cases critical with respect to the determination of characterization parameters such as the half-value layer. Extrapolation chambers are parallel-plate ionization chambers with one mobile electrode that allows variation of the air volume in their interior. These detectors are commonly used to measure the quantity absorbed dose, mostly at the surface of the medium, based on the extrapolation of the linear ionization current as a function of the distance between the electrodes. In this work, a model 23392 PTW extrapolation chamber was characterized in the low energy X-ray beams of the ISO 4037 standard, by determining the polarization voltage range through the saturation curves and the value of the true null electrode spacing. In addition, the metrological reliability of the extrapolation chamber was studied with measurements of the leakage current and repeatability tests; limit values were established for the proper use of the chamber. The PTW 23392 extrapolation chamber was calibrated in terms of air kerma in some of the low energy ISO radiation series; the traceability of the chamber to the National Standard Dosimeter was established. The energy dependence of the extrapolation chamber and the uncertainties related to the calibration coefficient were also assessed; it was shown that the energy dependence was reduced to 4% when the extrapolation technique was used.
Finally, the first half-value layers were determined for the low energy ISO N series with the extrapolation chamber, in collimated and uncollimated beams, and it was shown that this detector is suitable for such measurements. (author)
McNiven, Andrea; Kron, Tomas
2004-08-01
A new technique for intensity modulated radiation therapy (IMRT) delivery is helical tomotherapy (HT). Like most IMRT delivery methods, HT utilizes many small fields as part of the treatment plan, which can be difficult to characterize. A novel technique for small field characterization, based on inter- and extrapolation of ion chamber readings, is presented in the context of HT. As a fan beam is characterized by its thickness and output factor, plane parallel chambers with different active volumes were used to scan the fan beam profiles. The fan beam thickness (FBT) can be determined from the thickness measured with the chamber by extrapolating to an infinitesimally small chamber size. The effective output was derived from the integral under the dose profile divided by the FBT. This was done for five FBTs and demonstrated a sharp fall off in dose when the FBT decreased below 8 mm. Similar techniques can be applied to other IMRT techniques to improve the characterization of various beam parameters.
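The extrapolation step can be sketched numerically: measure the apparent beam thickness with chambers of different sizes and extrapolate linearly to zero chamber size. The chamber sizes and readings below are invented illustrative numbers, and the linear model is an assumption:

```python
import numpy as np

chamber_size_mm = np.array([1.0, 2.0, 5.0])    # effective chamber cavity sizes
measured_fbt_mm = np.array([8.5, 9.0, 10.5])   # apparent fan beam thickness

# Model the volume-averaging broadening as FBT_meas(s) = FBT + c*s and
# read off the intercept, i.e. the thickness seen by an infinitesimal chamber.
slope, intercept = np.polyfit(chamber_size_mm, measured_fbt_mm, 1)
fbt = intercept   # extrapolated fan beam thickness, mm
```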
International Nuclear Information System (INIS)
In a brief survey, the risks of corrosion of stainless steels are determined by four parameters: pH, Cl content, oxidative power and temperature. Electrochemical corrosion tests, screening tests and weight-loss immersion tests are examined. An example of corrosion resistance to seawater is given
Czech Academy of Sciences Publication Activity Database
Mejsnar, Jan; Sokol, Zbyněk; Pešice, Petr
Oberpfaffenhofen-Wessling : Institut für Physik der Atmosphäre, 2014. [ERAD 2014 - 8th European Conference on Radar in Meteorology and Hydrology. 01.09.2014-05.09.2014, Garmisch-Partenkirchen] Institutional support: RVO:68378289 Subject RIV: DG - Athmosphere Sciences, Meteorology http://www.pa.op.dlr.de/erad2014/programme/ShortAbstracts/262_short.pdf
Cho, M.A.; Skidmore, A.K.
2006-01-01
There is increasing interest in using hyperspectral data for quantitative characterization of vegetation in spatial and temporal scopes. Many spectral indices are being developed to improve vegetation sensitivity by minimizing the background influence. The chlorophyll absorption continuum index (CACI) is one such measure, which calculates the area of the absorption troughs spanned by the spectral continuum on which the analyses are based. However, different values of CACI were obtaine...
Latychevskaia, Tatiana; Zontone, Federico; Fink, Hans-Werner
2015-01-01
We demonstrate enhancement in resolution of a noncrystalline object reconstructed from an experimental X-ray diffraction pattern by extrapolating the measured diffraction intensities beyond the detector area. The experimental record contains about 10% missing information, including the pixels in the center of the diffraction pattern. The extrapolation is done by applying an iterative routine. The optimal parameters for implementing the iterative routine, including initial padding distribution and an object support, are studied. Extrapolation results in resolution enhancement and better matching between the recovered and experimental amplitudes in the Fourier domain. The limits of the extrapolation procedure are discussed.
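A minimal sketch of such an iterative extrapolation routine (error-reduction style: a support constraint in object space, with measured amplitudes enforced only where the detector recorded them, so the missing region is free to be extrapolated) might look as follows. The object, support and missing-pixel mask are synthetic, and the details differ from the authors' actual routine:

```python
import numpy as np

N = 64
x = np.zeros((N, N))
x[28:36, 28:36] = 1.0                       # synthetic non-crystalline object
support = x > 0                             # assumed known object support
measured = np.abs(np.fft.fft2(x))           # diffraction amplitudes
detector = np.ones((N, N), bool)
detector[:, N // 2 - 4 : N // 2 + 4] = False  # "unmeasured" pixels to extrapolate

g = np.zeros((N, N))                        # initial object estimate
for _ in range(200):
    G = np.fft.fft2(g)
    # Keep measured amplitudes (with current phases) where recorded;
    # keep the current estimate elsewhere -- this is the extrapolation.
    G = np.where(detector, measured * np.exp(1j * np.angle(G)), G)
    g = np.real(np.fft.ifft2(G))
    g[~support] = 0.0                       # enforce object support
    g[g < 0] = 0.0                          # enforce non-negativity
```

With most amplitudes known and a tight support, the loop both recovers the object and fills in plausible amplitudes for the unmeasured pixels, which is the mechanism behind the reported resolution enhancement.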
Using composite flow laws to extrapolate lab data on ice to nature
de Bresser, Hans; Diebold, Sabrina; Durham, William
2013-04-01
The progressive evolution of the grain size distribution of deforming and recrystallizing Earth materials directly affects their rheological behaviour in terms of composite grain-size-sensitive (GSS, diffusion/grain boundary sliding) and grain-size-insensitive (GSI, dislocation) creep. Over time, such microstructural evolution might result in strain progressing at a steady-state balance of GSS and GSI creep mechanisms. In order to arrive at a meaningful rheological description of materials deforming by combined GSS and GSI mechanisms, composite flow laws are required that bring together individual, laboratory-derived GSS and GSI flow laws, and that include full grain size distributions rather than single mean values representing the grain size. A composite flow law approach including grain size distributions has proven to be very useful in solving discrepancies between microstructural observations in natural calcite mylonites and extrapolations of relatively simple laboratory flow laws (Herwegh et al., 2005, J. Struct. Geol., 27, 503-521). In the current study, we used previous and new laboratory data on the creep behavior of water ice to investigate whether a composite flow law approach also results in better extrapolation of lab data to nature for ice. The new lab data resulted from static grain-growth experiments and from deformation experiments performed on samples with either a fine starting grain size ("fine grained ice") or a starting grain size of 180-250 microns ("coarse grained ice"). The deformation experiments were performed in a special cryogenic Heard-type deformation apparatus at temperatures of 180-240 K, confining pressures of 30-100 MPa, and strain rates between 1E-08/s and 1E-04/s. After the experiments, all samples were studied using cryogenic SEM and image analysis techniques. We also investigated natural microstructures in EPICA ice core samples from Dronning Maud Land in Antarctica. The temperature of the core ranges from 228 K at the surface to 272 K close to the bedrock.
Grain size distributions (in 2D) were determined for all 41 samples studied. Combining the experimental grain-growth results with the results of the fine-grained and coarse-grained samples allows us to describe the experimental deformation of ice in terms of composite flow and to speculate about the evolution towards a balance between GSS and GSI mechanisms. Flow stresses for the natural DML samples were calculated at realistic strain rates between 1E-10/s and 1E-12/s using i) pure GSS-creep, ii) pure GSI-creep, and iii) composite GSI+GSS creep taking the full grain size distribution into account. At a constant strain rate, the contribution of GSS mechanisms to the overall strain rate remains roughly the same along the ice core. Apparently, the change in temperature with depth goes hand in hand with a change in grain size such that there is an overall balance between GSI- and GSS-creep mechanisms. The results show that GSS-mechanisms might well be operative in ice at a range of conditions, but that GSI mechanisms will remain important except at very slow strain rates. In the presentation, new insights emerging from the composite flow law approach to ice as well as pitfalls of the method will be discussed.
EXTRAPOLATION IN HUMAN HEALTH AND ECOLOGICAL RISK ASSESSMENTS: PROCEEDINGS OF A SYMPOSIUM
A symposium was conducted in April 1998 by the U.S. Environmental Protection Agency's National Health and Environmental Effects Research Laboratory (NHEERL) to explore issues of extrapolation in human health and ecological risk assessments. Over the course of three and one half d...
Kinetic energy of solid neon by Monte Carlo with improved Trotter- and finite-size extrapolation
Cuccoli, Alessandro; Macchi, Alessandro; Pedrolli, Gaia; Tognetti, Valerio; Vaia, Ruggero
1997-01-01
The kinetic energy of solid neon is calculated by a path-integral Monte Carlo approach with a refined Trotter- and finite-size extrapolation. These accurate data present significant quantum effects up to temperature T=20 K. They confirm previous simulations and are consistent with recent experiments.
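The Trotter part of such an extrapolation can be sketched as a linear fit in $1/P^2$, since primitive-action path-integral estimators carry an $O(1/P^2)$ systematic error in the Trotter number $P$. The data points below are synthetic, not the paper's neon results:

```python
import numpy as np

P = np.array([8.0, 16.0, 32.0, 64.0])   # Trotter numbers used in the runs
K_P = 41.0 + 120.0 / P**2               # synthetic kinetic-energy estimates (K)

# K(P) = K_inf + a/P^2, so the Trotter-extrapolated value is the
# intercept of a straight-line fit against 1/P^2.
a, K_inf = np.polyfit(1.0 / P**2, K_P, 1)
```

A finite-size extrapolation works the same way with a fit against the inverse particle number instead of $1/P^2$.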
International Nuclear Information System (INIS)
The absorbed dose to soft-tissue-equivalent material imparted by ophthalmic applicators (90Sr/90Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode spacing. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimate obtained with a second-degree polynomial regression model; at the same time, an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be concluded that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
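The effect reported above can be sketched with synthetic data: when the extrapolation curve (current vs electrode spacing) has curvature, a straight-line fit misestimates the limiting slope at zero spacing, while a quadratic fit recovers it. The currents and spacings below are invented for illustration:

```python
import numpy as np

d = np.array([0.5, 1.0, 1.5, 2.0, 2.5])   # electrode spacing, mm
I = 2.0 * d + 0.3 * d**2                  # ionization current (a.u.), true slope 2.0

slope_lin = np.polyfit(d, I, 1)[0]        # slope of the linear model
slope_quad = np.polyfit(d, I, 2)[1]       # linear term of the quadratic model

# The dose is proportional to dI/dd as d -> 0; with curvature present,
# the linear fit is biased while the quadratic fit recovers the true slope.
```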
International Nuclear Information System (INIS)
Steam generator tubes are subjected to two categories of corrosion: metal/sodium reactions and metal/water-steam interactions. The parameters relevant to these environmental conditions are discussed, and their influence on sodium corrosion and water/steam reactions is evaluated. Extrapolations of corrosion values to steam generator design conditions are performed and discussed in detail. (author)
Photon neutrino-production in a chiral EFT for nuclei and extrapolation to $E_{\
Zhang, Xilin
2013-01-01
We carry out a series of studies on pion and photon productions in neutrino/electron/photon--nucleus scatterings. The low energy region is investigated by using a chiral effective field theory for nuclei. The results for the neutral current induced photon production ($\\gamma$-NCP) are then extrapolated to neutrino energy $E_{\
Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian
2014-07-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species' evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals ("MammalDIET"). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill-in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). 
Internal and external validation showed that: (1) extrapolations were most reliable for primary food items; (2) several diet categories ("Animal", "Mammal", "Invertebrate", "Plant", "Seed", "Fruit", and "Leaf") had high proportions of correctly predicted diet ranks; and (3) the potential of correctly extrapolating specific diet categories varied both within and among clades. Global maps of species richness and proportion showed congruence among trophic levels, but also substantial discrepancies between dietary guilds. MammalDIET provides a comprehensive, unique and freely available dataset on diet preferences for all terrestrial mammals worldwide. It enables broad-scale analyses for specific trophic levels and dietary guilds, and a first assessment of trait conservatism in mammalian diet preferences at a global scale. The digitalization, extrapolation and validation procedures could be transferable to other trait data and taxa. PMID:25165528
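The hierarchical fill-in procedure can be sketched as a simple taxonomic fallback: use species-level diet where known, else fall back to genus, then family. The taxa and diet labels below are invented for illustration, and the real dataset distinguishes many more diet categories and ranks:

```python
# Toy lookup tables; in MammalDIET these come from the digitized sources.
species_diet = {"Panthera leo": "Animal"}
genus_diet = {"Panthera": "Animal", "Mus": "Omnivore"}
family_diet = {"Felidae": "Animal", "Muridae": "Omnivore"}

def fill_diet(species, genus, family):
    """Return (diet, taxonomic level used), preferring the finest level."""
    if species in species_diet:
        return species_diet[species], "species"
    if genus in genus_diet:
        return genus_diet[genus], "genus"
    if family in family_diet:
        return family_diet[family], "family"
    return None, "missing"

known = fill_diet("Panthera leo", "Panthera", "Felidae")   # species-level record
filled = fill_diet("Mus musculus", "Mus", "Muridae")       # extrapolated from genus
```

The jack-knife validation in the paper amounts to hiding each species-level record in turn and checking whether this fallback reproduces it.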
Rong, Lu; Wang, Dayong; Zhou, Xun; Huang, Haochong; Li, Zeyu; Wang, Yunxin
2014-01-01
We report on terahertz (THz) digital holography on a biological specimen. A continuous-wave (CW) THz in-line holographic setup was built based on a 2.52 THz CO2-pumped THz laser and a pyroelectric array detector. We introduce a novel statistical method of obtaining true intensity values for the pyroelectric array detector's pixels. Absorption and phase-shifting images of a dragonfly's hind wing were reconstructed simultaneously from a single in-line hologram. Furthermore, we applied phase retrieval routines to eliminate the twin image and enhanced the resolution of the reconstructions by extrapolating the hologram beyond the detector area. The finest observed features are cross veins of 35 μm width.
International Nuclear Information System (INIS)
Research highlights: • GlyD1 exhibits stronger inhibiting properties than GlyD2 and Gly. • Inhibition efficiency increases with inhibitor concentration. • Inhibition efficiency decreases with temperature, suggesting physical adsorption. • Validation of corrosion rates measured by the Tafel extrapolation method is confirmed. - Abstract: A newly synthesized glycine derivative (GlyD1), 2-(4-(dimethylamino)benzylamino)acetic acid hydrochloride, was used to control mild steel corrosion in 4.0 M H2SO4 solutions at different temperatures (278-338 K). Tafel extrapolation, linear polarization resistance (LPR) and impedance methods were used to test corrosion inhibitor efficiency. An independent method of chemical analysis, namely ICP-AES (inductively coupled plasma atomic emission spectrometry), was also used to test the validity of the corrosion rates measured by the Tafel extrapolation method. The results obtained were compared with an available glycine derivative (GlyD2) and glycine (Gly). Tafel polarization measurements revealed that the three tested inhibitors function as mixed-type compounds. The inhibition efficiency increased with increasing inhibitor concentration and decreased with temperature, suggesting the occurrence of physical adsorption. The adsorptive behaviour of the three inhibitors followed a Temkin-type isotherm, and the standard free energy changes of adsorption (ΔG°ads) were evaluated for the three tested inhibitors as a function of temperature. The inhibition performance of GlyD1 was much better than those of GlyD2 and Gly itself. Results obtained from the different corrosion evaluation techniques were in good agreement.
Energy Technology Data Exchange (ETDEWEB)
Amin, Mohammed A., E-mail: maaismail@yahoo.co [Materials and Corrosion Lab (MCL), Department of Chemistry, Faculty of Science, Taif University, 888 Hawiya (Saudi Arabia); Department of Chemistry, Faculty of Science, Ain shams University, 11566 Abbassia, Cairo (Egypt); Ahmed, M.A. [Physics Department, Faculty of Science, Taif University, 888 Hawiya (Saudi Arabia); Arida, H.A. [Materials and Corrosion Lab (MCL), Department of Chemistry, Faculty of Science, Taif University, 888 Hawiya (Saudi Arabia); Arslan, Taner [Department of Chemistry, Eskisehir Osmangazi University, 26480 Eskisehir (Turkey); Saracoglu, Murat [Faculty of Education, Erciyes University, 38039 Kayseri (Turkey); Kandemirli, Fatma [Department of Chemistry, Nigde University, 41000 Nigde (Turkey)
2011-02-15
Research highlights: • TX-305 exhibits stronger inhibiting properties for iron corrosion than TX-165 and TX-100. • Inhibition efficiency increases with temperature, suggesting chemical adsorption. • The three tested surfactants act as mixed-type inhibitors with cathodic predominance. • Validation of corrosion rates measured by the Tafel extrapolation method is confirmed. - Abstract: The inhibition characteristics of non-ionic surfactants of the TRITON-X series, namely TRITON-X-100 (TX-100), TRITON-X-165 (TX-165) and TRITON-X-305 (TX-305), on the corrosion of iron were studied in 1.0 M HCl solutions as a function of inhibitor concentration (0.005-0.075 g L-1) and solution temperature (278-338 K). Measurements were conducted based on the Tafel extrapolation method. Electrochemical frequency modulation (EFM), a non-destructive corrosion measurement technique that can directly give values of the corrosion current without prior knowledge of Tafel constants, is also presented. Experimental corrosion rates determined by the Tafel extrapolation method were compared with corrosion rates obtained by the EFM technique and by an independent method of chemical analysis. The chemical method of confirming the corrosion rates involved determination of the dissolved cation using ICP-AES (inductively coupled plasma atomic emission spectrometry). The aim was to confirm the validity of the corrosion rates measured by the Tafel extrapolation method. The results obtained showed that, in all cases, the inhibition efficiency increased with increase in temperature, suggesting that chemical adsorption occurs. The adsorptive behaviour of the three surfactants followed a Temkin-type isotherm. The standard free energies of adsorption decreased with temperature, reflecting better inhibition performance. These findings confirm chemisorption of the tested inhibitors. Thermodynamic activation functions of the dissolution process were also calculated as a function of each inhibitor concentration.
All the results obtained from the methods employed are in reasonable agreement.
Directory of Open Access Journals (Sweden)
Dalila Khalfa
2014-01-01
Full Text Available Improving knowledge of wind shear models to strengthen their reliability appears as a crucial issue, markedly for energy investors, in order to accurately predict the average wind speed at different turbine hub heights and thus the expected wind energy output. This is particularly helpful during the feasibility study, to reduce the costs of a wind power project. Extrapolation laws were found to provide the finest representation of the wind speed as a function of height, thus avoiding the installation of tall towers, or even more expensive devices such as LIDAR or SODAR. The proposed models are based on theories that determine the vertical wind profile from implicit relationships. However, these empirical extrapolation formulas were developed for specific meteorological conditions and sites appropriate for wind turbines, which is why several authors have carried out studies to determine the formula best suited to their own conditions. This study continues a research issue addressed in a previous study, where some extrapolation models were tested and compared by extrapolating the energy resource to different heights; comparable results were returned by the power law and the log law, which indeed proved preferable. In this context, this study deals with the assessment of six wind speed extrapolation laws, comparing the analytical results with real data for two different meteorological sites, with different roughness, different altitudes and different measurement periods. The first site studied is an extremely rough site with daily measurements from March 2007; wind speed measurements are available at four different heights of the Gantour/Gao site, obtained by the water, energy and environment company of Senegal.
The second site studied is a slightly rough site with monthly measurements for 2005; wind speed measurements are available at three different heights of the Kuujjuarapik site, obtained by Hydro-Quebec Energy Helimax Canada. The study aims to determine the effectiveness of, and the concordance between, the extrapolation laws and the real measured data. The results show that the adjusted law is adequate for an extremely rough site, while the modified laws, together with two other laws, are adequate for a slightly rough site. The experimental results and numerical calculations used for the evaluation of the Weibull parameters yield shape factors k greater than 9. An increase in altitude often causes an increase in the Weibull parameter values; however, our results show that the shape factor k can take values lower than those established at the reference altitude.
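The two most common vertical extrapolation laws mentioned above, the power law and the log law, can be sketched as follows. The shear exponent, roughness length and measured speed are assumed illustrative parameters, not values from either site:

```python
import math

def power_law(v_ref, z_ref, z, alpha=1.0 / 7.0):
    """Power law: v(z) = v_ref * (z / z_ref)**alpha, with shear exponent alpha."""
    return v_ref * (z / z_ref) ** alpha

def log_law(v_ref, z_ref, z, z0=0.03):
    """Log law: v(z) = v_ref * ln(z/z0) / ln(z_ref/z0), with roughness length z0 (m)."""
    return v_ref * math.log(z / z0) / math.log(z_ref / z0)

# Extrapolate an assumed 10 m measurement of 6 m/s to an 80 m hub height
v80_power = power_law(6.0, 10.0, 80.0)   # about 8.1 m/s
v80_log = log_law(6.0, 10.0, 80.0)       # about 8.1 m/s
```

For this smooth-terrain parameter choice the two laws nearly agree, which mirrors the earlier finding that the power law and the log law return comparable results; the site-specific laws assessed in the study adjust these forms for roughness and stability.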
Infrared length scale and extrapolations for the no-core shell model
Wendt, K A; Papenbrock, T; Sääf, D
2015-01-01
We precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the $A$-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of $A$ nucleons in the NCSM space to that of $A$ nucleons in a $3(A-1)$-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for $^{6}$Li. We apply our result and perform accurate IR extrapolations for bound states of $^{4}$He, $^{6}$He, $^{6}$Li, $^{7}$Li. We also attempt to extrapolate NCSM results for $^{10}$B and $^{16}$O with bare interactions from chiral effective field theory over tens of MeV.
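The IR extrapolation described above rests on exponential convergence of the form E(L) = E_inf + a*exp(-2*k_inf*L). A minimal closed-form sketch using three equally spaced synthetic energies (not NCSM output, and much simpler than the paper's fits) recovers E_inf and k:

```python
import math

def ir_extrapolate(E0, E1, E2, d):
    """Closed-form three-point IR extrapolation assuming
    E(L) = E_inf + a*exp(-2*k*L) at equally spaced IR lengths L, L+d, L+2d.
    A toy sketch; real extrapolations fit many NCSM energies."""
    D1, D2 = E1 - E0, E2 - E1
    t = D2 / D1                       # equals exp(-2*k*d) for exact data
    k = -math.log(t) / (2.0 * d)
    E_inf = E2 - D2 * t / (t - 1.0)   # subtract the remaining exponential tail
    return E_inf, k

# synthetic ground-state energies mimicking exponential IR convergence
Ls, a_true, k_true, E_true = [8.0, 10.0, 12.0], 4.0, 0.3, -31.0
Es = [E_true + a_true * math.exp(-2.0 * k_true * L) for L in Ls]
E_inf, k = ir_extrapolate(Es[0], Es[1], Es[2], d=2.0)
```

On exact synthetic data the three-point formula reproduces the asymptotic energy and decay constant; with real NCSM energies one would instead do a least-squares fit over many L values.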
131I-CRTX internal dosimetry: animal model and human extrapolation
International Nuclear Information System (INIS)
Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. 125I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, 125I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice were determined, as well as in the implanted tumor, for 131I-Crtx. Dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among the various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I in the tissue were considered in the dose calculations. (author)
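The mass-based animal-to-human extrapolation described above can be sketched with the common "relative organ mass" scaling, which assumes equal tissue concentration ratios between species. Both the formula as written and all numbers below are illustrative assumptions, not values or methods taken from the study:

```python
def extrapolate_uptake(pid_per_g_mouse, m_body_mouse_g, m_organ_human_g, m_body_human_g):
    """Scale a mouse organ uptake (%ID/g) to a human organ uptake (%ID/organ)
    under the assumption of equal tissue concentration ratios between species.
    A simplified textbook-style sketch, not the paper's exact procedure."""
    return pid_per_g_mouse * (m_body_mouse_g / m_body_human_g) * m_organ_human_g

# illustrative numbers only: 5 %ID/g in a 25 g mouse; human liver 1800 g, body 73 kg
human_liver_uptake = extrapolate_uptake(5.0, 25.0, 1800.0, 73000.0)
```

The scaled human uptake would then feed into MIRD dose calculations (cumulated activity times S-values for penetrating and non-penetrating emissions).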
131I-SPGP internal dosimetry: animal model and human extrapolation
International Nuclear Information System (INIS)
Scorpaena plumieri is commonly called moreia-ati or manganga and is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP - an isolated protein from the S. plumieri fish - possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the development (design) of antitumoral drugs. In the present work, Soprani's 125I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to 131I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans assuming a similar ratio for the various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I were considered. (author)
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
Energy Technology Data Exchange (ETDEWEB)
Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M. [Adelaide Univ., SA (Australia). ARC Centre of Excellence in Particle Physics at the Terascale and CSSM; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe, Hyogo (Japan); Pleiter, D. [Forschungszentrum Juelich (Germany). JSC; Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Div.; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Stueben, H. [Hamburg Univ. (Germany). Regionales Rechenzentrum; Collaboration: CSSM and QCDSF/UKQCD Collaborations
2014-03-15
We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q{sup 2} in the range 0.2-1.3 GeV{sup 2}. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio mu{sub p}G{sub E}{sup p}/G{sub M}{sup p}. This quantity decreases with Q{sup 2} in a way qualitatively consistent with recent experimental results.
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
International Nuclear Information System (INIS)
We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q2 in the range 0.2-1.3 GeV2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio mu_p GEp/GMp. This quantity decreases with Q2 in a way qualitatively consistent with recent experimental results.
{sup 131}I-SPGP internal dosimetry: animal model and human extrapolation
Energy Technology Data Exchange (ETDEWEB)
Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br; Figueiredo, Suely Gomes de [Universidade Federal do Espirito Santo, (UFES), Vitoria, ES (Brazil). Dept. de Ciencias Fisiologicas. Lab. de Quimica de Proteinas
2009-07-01
Scorpaena plumieri is commonly called moreia-ati or manganga and is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP - an isolated protein from the S. plumieri fish - possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the development (design) of antitumoral drugs. In the present work, Soprani's {sup 125}I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to {sup 131}I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans assuming a similar ratio for the various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I were considered. (author)
{sup 131}I-CRTX internal dosimetry: animal model and human extrapolation
Energy Technology Data Exchange (ETDEWEB)
Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br
2009-07-01
Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. {sup 125}I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, {sup 125}I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice were determined, as well as in the implanted tumor, for {sup 131}I-Crtx. Dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among the various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I in the tissue were considered in the dose calculations. (author)
Klein, Agnes V.; Jian Wang; Patrick Bedford
2014-01-01
The principles Health Canada uses when extrapolating the indications and uses of a biosimilar product following a single clinical trial, or a limited number and scope of clinical trials, during product development are discussed. The principles underlying the regulatory framework for Subsequent Entry Biologics (SEBs, or biosimilars) in Canada explain the position taken by the regulator with respect to the substitutability and/or interchangeability of SEBs.
Directory of Open Access Journals (Sweden)
Agnes V Klein
2014-11-01
The principles Health Canada uses when extrapolating the indications and uses of a biosimilar product following a single clinical trial, or a limited number and scope of clinical trials, during product development are discussed. The principles underlying the regulatory framework for Subsequent Entry Biologics (SEBs, or biosimilars) in Canada explain the position taken by the regulator with respect to the substitutability and/or interchangeability of SEBs.
Dalila Khalfa; Abdelouaheb Benretem; Lazher Herous; Issam Meghlaoui
2014-01-01
Increasing knowledge on wind shear models to strengthen their reliability appears as a crucial issue, markedly for energy investors to accurately predict the average wind speed at different turbine hub heights and thus the expected wind energy output. This is particularly helpful during the feasibility study to abate the costs of a wind power project. The extrapolation laws were found to provide the finest representation of the wind speed according to heights, thus avoiding installation of ta...
Curvature of the chiral pseudo-critical line in QCD: continuum extrapolated results
Bonati, Claudio; Mariti, Marco; Mesiti, Michele; Negro, Francesco; Sanfilippo, Francesco
2015-01-01
We determine the curvature of the pseudo-critical line of strong interactions by means of numerical simulations at imaginary chemical potentials. We consider $N_f=2+1$ stout improved staggered fermions with physical quark masses and the tree level Symanzik gauge action, and explore four different sets of lattice spacings, corresponding to $N_t = 6,8,10,12$, in order to extrapolate results to the continuum limit. Our final estimate is $\\kappa = 0.0135(20)$.
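The analytic-continuation fit behind such a curvature estimate can be illustrated with a toy least-squares sketch: at imaginary chemical potential mu = i*mu_I the pseudo-critical temperature rises as Tc(mu_I)/Tc(0) = 1 + kappa*(mu_I/Tc)^2, and the fitted kappa carries over to real mu_B with the opposite sign. All data below are fabricated for illustration, not lattice results:

```python
def fit_curvature(mu_over_T, Tc_ratio):
    """Least-squares slope of Tc(mu_I)/Tc(0) - 1 against (mu_I/Tc)^2 for
    imaginary-chemical-potential data (a fit through the origin). Analytic
    continuation then gives Tc(mu_B)/Tc(0) = 1 - kappa*(mu_B/Tc)^2."""
    xs = [m * m for m in mu_over_T]
    ys = [t - 1.0 for t in Tc_ratio]
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# fake imaginary-mu data generated with kappa = 0.0135 (the paper's central value)
kappa_true = 0.0135
mus = [0.5, 1.0, 1.5, 2.0]
ratios = [1.0 + kappa_true * m * m for m in mus]
kappa = fit_curvature(mus, ratios)
```

In practice each lattice spacing gives its own kappa, and those values are then extrapolated to the continuum limit.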
Mirus, B. B.; Halford, K. J.; Sweetkind, D. S.; Fenelon, J.
2014-12-01
The utility of geologic frameworks for extrapolating hydraulic conductivities to length scales that are commensurate with hydraulic data has been assessed at the Nevada National Security Site in highly faulted volcanic rocks. Observed drawdowns from eight large-scale aquifer tests on Pahute Mesa provided the necessary constraints to test assumed relations between hydraulic conductivity and interpretations of the geology. The investigated volume of rock encompassed about 40 cubic miles where drawdowns were detected more than 2 mi from pumping wells and traversed major fault structures. Five sets of hydraulic conductivities at about 500 pilot points were estimated by simultaneously interpreting all aquifer tests with a different geologic framework for each set. Each geologic framework was incorporated as prior information that assumed homogeneous hydraulic conductivities within each geologic unit. Complexity of the geologic frameworks ranged from an undifferentiated mass of rock with a single unit to 14 unique geologic units. Analysis of the model calibrations showed that a maximum of four geologic units could be differentiated where each was hydraulically unique as defined by the mean and standard deviation of log-hydraulic conductivity. Consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation were evaluated qualitatively with maps of transmissivity. Distributions of transmissivity were similar within the investigated extents regardless of geologic framework, except for a transmissive streak along a fault in the Fault-Structure framework. Extrapolation was affected by the underlying geologic frameworks, where the variability of transmissivity increased as the number of units increased.
Motion-based prediction explains the role of tracking in motion extrapolation.
Khoei, Mina A; Masson, Guillaume S; Perrinet, Laurent U
2013-11-01
During normal viewing, the continuous stream of visual input is regularly interrupted, for instance by blinks of the eye. Despite these frequent blanks (that is, the transient absence of a raw sensory source), the visual system is most often able to maintain a continuous representation of motion. For instance, it maintains the movement of the eye so as to stabilize the image of an object. This ability suggests the existence of a generic neural mechanism of motion extrapolation to deal with fragmented inputs. In this paper, we have modeled how the visual system may extrapolate the trajectory of an object during a blank using motion-based prediction. This implies that, using a prior on the coherency of motion, the system may integrate previous motion information even in the absence of a stimulus. In order to compare with experimental results, we simulated tracking velocity responses. We found that the response of the motion integration process to a blanked trajectory pauses at the onset of the blank, but that it quickly recovers the information on the trajectory after reappearance. This is compatible with behavioral and neural observations on motion extrapolation. To understand these mechanisms, we recorded the response of the model to a noisy stimulus. Crucially, we found that motion-based prediction acted at the global level as a gain control mechanism and that we could switch from a smooth regime to a binary tracking behavior where the dot is either tracked or lost. Our results imply that a local prior implementing motion-based prediction is sufficient to explain a large range of neural and behavioral results at a more global level. We show that the tracking behavior deteriorates for sensory noise levels higher than a certain value, above which motion coherency and predictability no longer hold. In particular, we found that motion-based prediction leads to the emergence of a tracking behavior only when enough information from the trajectory has been accumulated.
Then, during tracking, trajectory estimation is robust to blanks even in the presence of relatively high levels of noise. Moreover, we found that tracking is necessary for motion extrapolation; this calls for further experimental work exploring the role of noise in motion extrapolation. PMID:24036184
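A drastically simplified stand-in for motion-based prediction is a constant-velocity prior that keeps advancing the position estimate while the stimulus is blanked. The update gains below are arbitrary illustrative choices, not parameters of the paper's probabilistic model:

```python
def track_with_blanks(observations, dt=1.0):
    """Track a 1-D position through blanks (None) with a constant-velocity
    motion prior: predict every step, correct only when a measurement arrives.
    A toy sketch standing in for the paper's motion-based prediction model."""
    est, vel, out = None, 0.0, []
    for z in observations:
        if est is not None:
            est += vel * dt                              # predict from the prior
        if z is not None:
            if est is None:
                est = z                                  # initialise on first sample
            else:
                vel = 0.7 * vel + 0.3 * (z - est) / dt   # correct velocity (gain 0.3)
                est = est + 0.5 * (z - est)              # correct position (gain 0.5)
        out.append(est)
    return out

# dot moving at +1 per step, blanked for three steps, then reappearing
traj = track_with_blanks([0.0, 1.0, 2.0, None, None, None, 6.0])
```

During the blank the estimate keeps advancing along the learned velocity, and on reappearance it pulls back toward the measurement, qualitatively echoing the pause-and-recover behaviour described above.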
International Nuclear Information System (INIS)
Graphical abstract: The pseudo-cubic cobalt oxide microparticles have been successfully synthesized by a solution combustion method using Co(NO3)2·6H2O (oxidizer) and dextrose (sugar; fuel). The as-synthesized Co3O4 microparticles are crystalline, and Rietveld refinement of calcined samples exhibited cubic structure with space group Fd3m (No. 227). The generated Co3O4 microparticles were used to fabricate Zn–Co3O4 composite thin films for corrosion protection. Highlights: (i) Synthesis of pseudo-cubic Co3O4 microparticles by the solution combustion method. (ii) As-prepared Co3O4 compounds are calcined and structurally characterized. (iii) Prepared Co3O4 is utilized for the fabrication of Zn–Co3O4 composite thin films. - Abstract: Microcrystalline cobalt oxide (Co3O4) powder was successfully synthesized by a simple, fast, economical and eco-friendly solution-combustion method. The as-synthesized powder was calcined for an hour at temperatures ranging from 100 to 900 °C. The crystallite size, morphology, and chemical state of the synthesized powders were characterized by powder XRD, TG-DTA, XPS, SEM/EDAX, TEM and FT-IR spectral methods. The as-synthesized Co3O4 powder was single-crystalline, and Rietveld refinement of calcined samples exhibited cubic structure with space group Fd3m (No. 227). The effect of calcination temperature on crystallite size and morphology was assessed. Scanning electron micrographs show uniform, randomly oriented pseudo-cubic particles with porous-like morphology, and EDAX measurements showed the chemical composition. The thermal behavior of the as-synthesized compound was examined. The TEM results revealed that the particles are pseudo-cubic in nature with a diameter of 0.2–0.6 μm and a length of 0.9–1.2 μm. The crystallite size increased with increasing calcination temperature.
The synthesized Co3O4 powder was used to fabricate Zn–Co3O4 composite thin films, and their corrosion behavior was analyzed by anodic polarization, Tafel extrapolation and electrochemical impedance spectroscopy. The results indicate that the Zn–Co3O4 composite thin films have potential applications in corrosion protection.
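Tafel extrapolation, mentioned in the corrosion analysis above, estimates the corrosion current by extrapolating the linear branches of log10|i| versus overpotential back to zero overpotential. This is a generic textbook sketch with synthetic data, not the paper's measurement procedure:

```python
import math

def tafel_extrapolate(eta_a, log_i_a, eta_c, log_i_c):
    """Fit straight Tafel lines log10|i| = log10(i_corr) + eta/beta to the
    anodic and cathodic branches (overpotential eta relative to E_corr) and
    average their intercepts at eta = 0 to get the corrosion current density."""
    def intercept(etas, logs):
        n = len(etas)
        me, ml = sum(etas) / n, sum(logs) / n
        slope = (sum((e - me) * (l - ml) for e, l in zip(etas, logs))
                 / sum((e - me) ** 2 for e in etas))
        return ml - slope * me
    return 10 ** ((intercept(eta_a, log_i_a) + intercept(eta_c, log_i_c)) / 2)

# synthetic branches with i_corr = 1e-6 A/cm^2, Tafel slopes 60 and -120 mV/decade
eta_a = [0.03, 0.06, 0.09]
log_a = [-6 + e / 0.060 for e in eta_a]
eta_c = [-0.03, -0.06, -0.09]
log_c = [-6 + e / -0.120 for e in eta_c]
i_corr = tafel_extrapolate(eta_a, log_a, eta_c, log_c)
```

Real polarization curves require choosing the genuinely linear (activation-controlled) region before fitting, which is where most of the practical difficulty lies.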
Wiegelmann, T.; Thalmann, J. K.; Inhester, B.; Tadesse, T.; Sun, X.; Hoeksema, J. T.
2012-11-01
The Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) provides photospheric vector magnetograms with a high spatial and temporal resolution. Our intention is to model the coronal magnetic field above active regions with the help of a nonlinear force-free extrapolation code. Our code is based on an optimization principle and has been tested extensively with semianalytic and numeric equilibria and applied to vector magnetograms from Hinode and ground-based observations. Recently we implemented a new version which takes into account measurement errors in photospheric vector magnetograms. Photospheric field measurements are often affected by measurement errors and finite nonmagnetic forces, making them inconsistent for use as a boundary for a force-free field in the corona. To deal with these uncertainties, we developed two improvements: i) preprocessing of the surface measurements to make them compatible with a force-free field, and ii) a new code which keeps a balance between the force-free constraint and deviation from the photospheric field measurements. Both methods contain free parameters, which must be optimized for use with data from SDO/HMI. In this work we describe the corresponding analysis method and evaluate the force-free equilibria by how well the force-free and solenoidal conditions are fulfilled, by the angle between magnetic field and electric current, and by comparing projections of magnetic field lines with coronal images from the Atmospheric Imaging Assembly (SDO/AIA). We also compute the available free magnetic energy and discuss the potential influence of control parameters.
Extrapolation of the Dutch 1 MW tunable free electron maser to a 5 MW ECRH source
International Nuclear Information System (INIS)
A Free Electron Maser (FEM) is now under construction at the FOM Institute (Rijnhuizen), Netherlands, with the goal of producing 1 MW long-pulse to CW microwave output in the range 130 GHz to 250 GHz with wall-plug efficiencies of 50% (Verhoeven et al., EC-9 Conference). An extrapolated version of this device is proposed which, by scaling up the beam current, would produce microwave power levels of up to 5 MW CW in order to reduce the cost per watt and increase the power per module, thus providing the fusion community with a practical ECRH source
Challenges for In vitro to in Vivo Extrapolation of Nanomaterial Dosimetry for Human Risk Assessment
Energy Technology Data Exchange (ETDEWEB)
Smith, Jordan N.
2013-11-01
The proliferation in types and uses of nanomaterials in consumer products has led to rapid application of conventional in vitro approaches for hazard identification. Unfortunately, assumptions pertaining to experimental design and interpretation for studies with chemicals are not generally appropriate for nanomaterials. The fate of nanomaterials in cell culture media, cellular dose to nanomaterials, cellular dose to nanomaterial byproducts, and intracellular fate of nanomaterials at the target site of toxicity all must be considered in order to accurately extrapolate in vitro results to reliable predictions of human risk.
Magnetic form factors of the octet baryons from lattice QCD and chiral extrapolation
International Nuclear Information System (INIS)
We present a 2+1-flavor lattice QCD calculation of the electromagnetic Dirac and Pauli form factors of the octet baryons. The magnetic Sachs form factor is extrapolated at six fixed values of Q2 to physical pseudoscalar masses and infinite volume using a formulation based on heavy baryon chiral perturbation theory with finite-range regularization. We properly account for omitted disconnected quark contractions using a partially-quenched effective field theory formalism. The results compare well with the experimental form factors of the nucleon and the magnetic moments of the octet baryons.
Energy extrapolation schemes for adaptive multi-scale molecular dynamics simulations.
Fleurat-Lessard, Paul; Michel, Carine; Bulo, Rosa E
2012-08-21
This paper evaluates simple schemes to extrapolate potential energy values using the set of energies and forces extracted from a molecular dynamics trajectory. In general, such a scheme affords the maximum amount of information about a molecular system at minimal computational cost. More specifically, schemes like this are very important in the field of adaptive multi-scale molecular dynamics simulations. In this field, often the computation of potential energy values at certain trajectory points is not required for the simulation itself, but solely for the a posteriori analysis of the simulation data. Extrapolating the values at these points from the available data can save considerable computational time. A set of extrapolation schemes is employed based on Taylor series and central finite difference approximations. The schemes are first tested on the trajectories of molecular systems of varying sizes, obtained at the MM and QM levels using velocity-Verlet integration with standard simulation time steps. Remarkably good accuracy was obtained with some of the approximations, while the failure of others can be explained in terms of the distinct features of a molecular dynamics trajectory. We have found that, for a Taylor expansion of the potential energy, both a first and a second order truncation exhibit errors that grow with system size. In contrast, the second order central finite difference approximation displays an accuracy that is independent of the size of the system, while giving a very good estimate of the energy, and costing as little as a first order truncation of the Taylor series. A fourth order central finite difference approximation requires more input data, which is not always available in adaptive multi-scale simulations. Furthermore, this approximation gives errors of similar magnitude or larger than its second order counterpart, at standard simulation time steps.
This leads to the conclusion that a second order central finite difference approximation is the optimal choice for energy extrapolation from molecular dynamics trajectories. This finding is confirmed in a final application to the analysis of an adaptive multi-scale simulation. PMID:22920107
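The preferred second-order central-difference idea can be sketched as follows, using the fact that along a trajectory the time derivative dE/dt = -F·v is available from stored forces and velocities. This is a sketch in the spirit of the schemes compared above (not the paper's exact formulation), tested on an analytic signal rather than an MD trajectory:

```python
import math

def extrapolate_energy(E_prev, E_next, Edot_prev, Edot_next, dt):
    """Estimate the potential energy at time t from its neighbours at t-dt and
    t+dt plus the time derivatives Edot = dE/dt = -F.v known at those points.
    Averaging cancels the odd Taylor terms; the central-difference curvature
    correction removes the leading even term, leaving an O(dt^4) error."""
    avg = 0.5 * (E_prev + E_next)                   # kills odd Taylor terms
    curvature = (Edot_next - Edot_prev) / (2 * dt)  # central estimate of d2E/dt2
    return avg - 0.5 * dt * dt * curvature

# check on an analytic test signal E(t) = sin(t), so Edot(t) = cos(t)
dt, t = 0.1, 1.3
est = extrapolate_energy(math.sin(t - dt), math.sin(t + dt),
                         math.cos(t - dt), math.cos(t + dt), dt)
```

The appeal for adaptive multi-scale analysis is that only quantities already stored along the trajectory (energies, forces, velocities) are needed, with no extra electronic-structure evaluations.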
Scientific Electronic Library Online (English)
E., Ortiz-Rascón; N. C., Bruce; A. A., Rodríguez-Rosales; J., Garduño-Mejía; R., Ortega-Martínez.
2014-02-01
This paper presents results of a time-resolved transillumination imaging method using temporal extrapolation. The temporal extrapolation is performed with the cumulant-expansion solution to the transport equation. The results obtained are compared with results of the same method using the diffusion-approximation solution. The results are found to be consistent, but the cumulant-expansion method gives better resolution for the imaging process, by a factor of approximately 3, because it gives a better estimation of the photon contribution for shorter integration times.
Directory of Open Access Journals (Sweden)
L. Gong
2012-06-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. These models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, and model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. These small sub-basins contain sufficient information, not only on climate and land surface but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2–4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. These multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied to both un-gauged basins and un-gauged periods with uncertainty estimation.
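The core arithmetic of such a data-based scale extrapolation is scaling the pooled specific discharge (discharge per unit area) of representative sub-basins up to the basin area. The sub-basin selection step, which is the heart of the method, is omitted here, and all numbers are invented:

```python
def scale_extrapolate(sub_discharge_m3s, sub_area_km2, basin_area_km2):
    """Estimate large-basin discharge from gauged sub-basins by assuming the
    pooled specific discharge of the sub-basins holds for the whole basin.
    Minimal sketch of the idea, not the paper's selection procedure."""
    specific = sum(sub_discharge_m3s) / sum(sub_area_km2)   # m3/s per km2
    return specific * basin_area_km2

# two small sub-basins standing in for a few percent of a large gauged area
q = scale_extrapolate([12.0, 18.0], [1500.0, 2500.0], 120000.0)
```

Repeating the estimate with several alternative sub-basin sets then brackets the uncertainty, as described above.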
Paasschens, J C J; Beenakker, C W J
2008-01-01
The linear intensity profile of multiply scattered light in a slab geometry extrapolates to zero at a certain distance beyond the boundary. The diffusion equation with this "extrapolated boundary condition" has been used in the literature to obtain analytical formulas for the transmittance of light through the slab as a function of angle of incidence and refractive index. The accuracy of these formulas is determined by comparison with a numerical solution of the Boltzmann equation for radiative transfer.
Richmond, Orien M. W.; McEntee, Jay P.; Hijmans, Robert J; Brashares, Justin S
2010-01-01
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial “Pleistocene rewilding” proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate...
Extrapolation of the J-R curve for predicting reactor vessel integrity
International Nuclear Information System (INIS)
The work in this report was conducted in support of the issues studied by the US Nuclear Regulatory Commission (NRC) JD/JM Workers Group during the period 1987-1989. The major issues studied were J-R curve extrapolation techniques for using small-specimen test results to predict ductile instability in larger structures where the extent of crack extension in the small-specimen test was not sufficient. This included the choice of parameter for characterizing the J-R curve: deformation J, JD, or modified J, JM. These issues were studied both by comparing small- and large-specimen J-R curves and by using J-R curves from smaller specimens to predict the behavior of larger specimens and pressure vessel models. An additional issue was raised during the course of this work by the testing of a low-upper-shelf A 302 steel. The results from these tests were not typical of ductile fracture in many steels and suggested that small-specimen J-R curves may not predict the behavior of large structures in some cases. The causes of this behavior were studied, as well as the consequences of using the J-R curve results from small specimens of this kind of material. Finally, a discussion and recommendations are given relating to the use of extrapolated J-R curves
Zhang, Lu-Lu; Gao, Shou-Bao; Meng, Qing-Tian; Song, Yu-Zhi
2015-01-01
The potential energy curves (PECs) of the first electronically excited state of S2 (a1Δg) are calculated employing a multi-reference configuration interaction method with the Davidson correction in combination with a series of correlation-consistent basis sets from Dunning: aug-cc-pVXZ (X = T, Q, 5, 6). In order to obtain PECs with high accuracy, PECs calculated with aug-cc-pV(Q, 5)Z basis sets are extrapolated to the complete basis set limit. The resulting PECs are then fitted to the analytical potential energy function (APEF) using the extended Hartree-Fock approximate correlation energy method. By utilizing the fitted APEF, accurate and reliable spectroscopic parameters are obtained, which are consistent with both experimental and theoretical results. By solving the Schrödinger equation numerically with the APEFs obtained at the AV6Z and the extrapolated AV(Q, 5)Z level of theory, we calculate the complete set of vibrational levels, classical turning points, inertial rotation and centrifugal distortion constants. Project supported by the National Natural Science Foundation of China (Grant Nos. 11304185 and 11074151).
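Two-point complete-basis-set extrapolations of this kind are often written as E_X = E_CBS + A/X^3 for cardinal numbers X. The helper below applies the standard X^-3 closed form; this is a generic textbook scheme (the paper uses a related uniform extrapolation within the extended Hartree-Fock framework), and the energies shown are illustrative, not the paper's values:

```python
def cbs_extrapolate(E_X, E_Y, X, Y):
    """Two-point complete-basis-set extrapolation assuming the energy
    converges as E_X = E_CBS + A / X**3 for cardinal numbers X < Y
    (e.g. Q=4 and 5). Solving the two equations eliminates A."""
    w = Y**3 / (Y**3 - X**3)
    return w * E_Y - (w - 1.0) * E_X

# illustrative aug-cc-pVQZ (X=4) and aug-cc-pV5Z (X=5) total energies, hartree
E_cbs = cbs_extrapolate(-398.105, -398.112, 4, 5)
```

Because the extrapolated value lies below the largest-basis energy, the scheme recovers part of the basis-set incompleteness error without computing in a still larger basis.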
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Grum, M.
2011-01-01
Forecasting of flows, overflow volumes, water levels, etc. in drainage systems can be applied in real-time control of drainage systems in the future climate in order to fully utilize system capacity and thus save possible construction costs. An online system for forecasting flows and water levels in a small urban catchment has been developed. The forecast is based on the application of radar rainfall data, which, by a correlation-based technique, is extrapolated with a lead time of up to two hours. The runoff forecast in the drainage system is based on a fully distributed MOUSE model which is auto-calibrated on flow measurements in order to produce the best possible forecast for the drainage system at all times. The system shows great potential for the implementation of real-time control in drainage systems and for forecasting flows and water levels.
Modeling of systematic retention of beryllium in rats. Extrapolation to humans
International Nuclear Information System (INIS)
In this work, we analyzed different approaches assayed in order to numerically describe the systemic behaviour of beryllium. The experimental results used in this work were previously obtained by Furchner et al. (1973) using Sprague-Dawley rats and other animal species. Furchner's work includes the model obtained for whole-body retention in rats, but not for each target organ. In this work we present the results obtained by modeling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with other previously published results
Energy Technology Data Exchange (ETDEWEB)
King, A W
1991-12-31
A general procedure for quantifying regional carbon dynamics by spatial extrapolation of local ecosystem models is presented. The procedure uses Monte Carlo simulation to calculate the expected value of one or more local models, explicitly integrating the spatial heterogeneity of variables that influence ecosystem carbon flux and storage. These variables are described by empirically derived probability distributions that are input to the Monte Carlo process. The procedure provides large-scale regional estimates based explicitly on information and understanding acquired at smaller and more accessible scales. Results are presented from an earlier application to seasonal atmosphere-biosphere CO2 exchange for circumpolar "subarctic" latitudes (64°N-90°N). Results suggest that, under certain climatic conditions, these high northern ecosystems could collectively release 0.2 Gt of carbon per year to the atmosphere. I interpret these results with respect to questions about global biospheric sinks for atmospheric CO2.
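The core of the procedure, taking the expected value of a local model over empirical distributions of its heterogeneous inputs, can be sketched as follows. Both the local flux model and the input distributions here are hypothetical stand-ins, not those of the study.

```python
import random

def local_flux(temperature_c, biomass_kg_m2):
    """Hypothetical local ecosystem model: carbon flux (kg C m^-2 yr^-1).

    A stand-in for the site-level models referred to in the abstract."""
    return 0.002 * biomass_kg_m2 * (1.0 + 0.05 * (temperature_c - 5.0))

def regional_expectation(n_samples=10000, seed=0):
    """Monte Carlo estimate of the expected local-model output over
    empirically derived input distributions (assumed Gaussian here)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        t = rng.gauss(-2.0, 4.0)           # assumed subarctic temperature spread
        b = max(0.0, rng.gauss(8.0, 3.0))  # assumed biomass distribution
        total += local_flux(t, b)
    return total / n_samples
```

Scaling the resulting per-area expectation by the regional area would give the kind of aggregate flux estimate the abstract reports.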
International Nuclear Information System (INIS)
Hemibody irradiation (HBI) has emerged as an efficient and effective treatment for the palliation of pain in patients with symptomatic widespread cancer. The rationale for such an unconventional radiation delivery is to achieve effective pain palliation over large affected body areas promptly and with minimal inconvenience to terminal cancer patients. In this report a prospective study is undertaken of the correlation between Extrapolated Radiation Dose (ERD) and response rate in a series of patients with disseminated painful osseous metastases given megavoltage HBI treatments. Correlation of ERD with response rate was excellent. There was a sharp increase in response rate from an ERD of 13 Gy onwards (p < 0.05). From our results it is suggested that, in HBI treatment of disseminated painful osseous metastases, an ERD of 13 Gy is the minimum for a favourable response rate under the treatment schedule that has been used. (author). 13 refs., 5 tabs
Modeling the systemic retention of beryllium in rat. Extrapolation to human
International Nuclear Information System (INIS)
In this work we analyzed different approaches assayed in order to numerically describe the systemic behaviour of beryllium. The experimental results used in this work were previously obtained by Furchner et al. (1973) using Sprague-Dawley rats and other animal species. Furchner's work includes the model obtained for whole-body retention in rats, but not for each target organ. Here we present the results obtained by modeling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with other previously published results. (Author) 12 refs
Viollier, M.; Kandel, R.; Raberanto, P.
The study is mainly focused on the combination of CERES with Meteosat-5. Examples of radiance and flux comparisons between March 2000 and July 2002 are shown. They constrain the errors due to possible calibration shift and narrowband-to-broadband conversion uncertainties. The Meteosat flux estimates at each half hour, combined with the CERES fluxes, are then used to compute monthly means. Compared to the ERBE-like extrapolation scheme, the changes are small in the LW domain but significant in the SW (regional means: ~20 W m-2; 20S-20N means: ~4 W m-2 over the Meteosat-5 area). Improvements in this research field are expected from the analysis of the first Geostationary Earth Radiation Budget instrument, GERB, on Meteosat-8.
Track extrapolation and distribution for the CDF-II trigger system
International Nuclear Information System (INIS)
The CDF-II experiment is a multipurpose detector designed to study a wide range of processes observed in the high energy proton-antiproton collisions produced by the Fermilab Tevatron. With event rates greater than 1 MHz, the CDF-II trigger system is crucial for selecting interesting events for subsequent analysis. This document provides an overview of the Track Extrapolation System (XTRP), a component of the CDF-II trigger system. The XTRP is a fully digital system that is utilized in the track-based selection of high momentum lepton and heavy flavor signatures. The design of the XTRP system includes five different custom boards utilizing discrete and FPGA technology residing in a single VME crate. We describe the design, construction, commissioning and operation of this system
Verner, Marc-André; Gaspar, Fraser W; Chevrier, Jonathan; Gunier, Robert B; Sjödin, Andreas; Bradman, Asa; Eskenazi, Brenda
2015-03-17
Study sample size in prospective birth cohorts of prenatal exposure to persistent organic pollutants (POPs) is limited by costs and logistics of follow-up. Increasing sample size at the time of health assessment would be beneficial if predictive tools could reliably back-extrapolate prenatal levels in newly enrolled children. We evaluated the performance of three approaches to back-extrapolate prenatal levels of p,p'-dichlorodiphenyltrichloroethane (DDT), p,p'-dichlorodiphenyldichloroethylene (DDE) and four polybrominated diphenyl ether (PBDE) congeners from maternal and/or child levels 9 years after delivery: a pharmacokinetic model and predictive models using deletion/substitution/addition or Super Learner algorithms. Model performance was assessed using the root mean squared error (RMSE), R2, and slope and intercept of the back-extrapolated versus measured levels. Super Learner outperformed the other approaches with RMSEs of 0.10 to 0.31, R2s of 0.58 to 0.97, slopes of 0.42 to 0.93 and intercepts of 0.08 to 0.60. Typically, models performed better for p,p'-DDT/E than PBDE congeners. The pharmacokinetic model performed well when back-extrapolating prenatal levels from maternal levels for compounds with longer half-lives like p,p'-DDE and BDE-153. Results demonstrate the ability to reliably back-extrapolate prenatal POP levels from levels 9 years after delivery, with Super Learner performing best based on our fit criteria. PMID:25698216
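The pharmacokinetic back-extrapolation idea can be illustrated with a minimal one-compartment sketch. This assumes pure first-order elimination with no ongoing exposure or growth dilution, far simpler than the models compared in the study; the half-life and levels below are illustrative, not measured values.

```python
import math

def back_extrapolate(level_at_followup, years_elapsed, half_life_years):
    """One-compartment back-extrapolation of a POP body-burden level.

    Simplified sketch: assumes pure exponential (first-order) elimination,
    so the earlier level is the later level inflated by exp(k * t)."""
    k = math.log(2) / half_life_years  # elimination rate constant
    return level_at_followup * math.exp(k * years_elapsed)

# e.g. p,p'-DDE with an assumed ~7-year half-life, measured 9 years after birth
prenatal = back_extrapolate(level_at_followup=50.0, years_elapsed=9.0,
                            half_life_years=7.0)
```

This is why the abstract notes the pharmacokinetic approach works best for long-half-life compounds such as p,p'-DDE and BDE-153: the shorter the half-life, the larger (and more error-prone) the inflation factor.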
(Solid + liquid) solubility of organic compounds in organic solvents – Correlation and extrapolation
International Nuclear Information System (INIS)
Highlights: • A novel, robust semi-empirical model for regression of solubility is presented. • The model fulfils thermodynamic boundary conditions at the melting point. • The activity coefficient is modelled with a scaled three-parameter Weibull function. • A three-parameter regression equation is derived from the semi-empirical model. • This equation provides good accuracy and robustness compared to standard models. - Abstract: A semi-empirical model is developed for the regression of (solid + liquid) solubility data with temperature. The model fulfils the required boundary conditions, allowing for robust extrapolation to higher and lower temperatures. The model combines a representation of the solid-state activity which accommodates a temperature-dependent heat capacity difference contribution with a scaled three-parameter Weibull function representing the temperature dependence of the solution activity coefficient at equilibrium. Evaluation of the model is based on previously published experimental calorimetric and solubility data of four organic compounds, fenoxycarb, fenofibrate, risperidone and butyl paraben, in five common organic solvents, methanol, ethyl acetate, acetone, acetonitrile, and toluene. The temperature dependence of the van’t Hoff enthalpy of solution and its components is analysed and discussed. Among the four compounds the influence of temperature on the enthalpy of fusion varies from moderate to substantial. Based on the semi-empirical model, a new equation containing three adjustable parameters is proposed for regression and extrapolation of solubility data for cases when only melting data and solubility data is available. The equation is shown to provide good accuracy and robustness when evaluated against the full semi-empirical model as well as against commonly used, more simple empirical equations. 
It is shown how such a model can be used to obtain an estimate of the heat capacity difference for cases where accurate solubility data is available in multiple solvents
International Nuclear Information System (INIS)
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide
International Nuclear Information System (INIS)
Full text: The desired precision of 25 MeV for the W mass with the ATLAS detector is planned to be achieved by using the leptonic decay channel of the W: W → lν, where l = e, μ. As the longitudinal momentum of the neutrino cannot be measured, the measurement uses the transverse momenta of the lepton and the neutrino, the latter calculated through a recoil method. Results from CDF and D0 have shown that imprecise knowledge of the total lepton energy and momentum scale is the dominant source of uncertainty in the W mass measurement. Knowledge of the lepton mass scale requires a deep understanding of the material in the ATLAS Inner Detector with an uncertainty of about 1%, which is an order of magnitude better than in any comparable high energy physics experiment so far. In addition, the magnetic field map has to be known with a precision of 0.1%. This also requires tracking algorithms that can process this detailed input. The methodology for achieving such a detailed description and its correct treatment, including energy loss and multiple scattering effects during track extrapolation, will be presented. In addition, results from the ATLAS Combined Testbeam 2004 using the new extrapolation scheme will be included in the presentation. (author)
Kadoura, Ahmad; Sun, Shuyu; Salama, Amgad
2014-08-01
Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single-site models was proposed for methane, nitrogen and carbon monoxide.
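The reweighting idea behind regenerating chains at neighboring conditions can be sketched for a canonical ensemble: samples drawn at inverse temperature β0 are reused at β1 with Boltzmann weight ratios. This is a generic exponential-reweighting sketch, not the authors' implementation, and the sample energies are synthetic.

```python
import math
import random

def reweight_average(energies, observable, beta0, beta1):
    """Canonical-ensemble reweighting of pre-computed MCMC samples from
    inverse temperature beta0 to a neighboring beta1:

        <A>_beta1 = sum_i A_i w_i / sum_i w_i,
        w_i = exp(-(beta1 - beta0) * U_i).
    """
    # subtract the largest exponent for numerical stability
    shift = max(-(beta1 - beta0) * u for u in energies)
    weights = [math.exp(-(beta1 - beta0) * u - shift) for u in energies]
    num = sum(a * w for a, w in zip(observable, weights))
    return num / sum(weights)

# toy pre-computed data: potential energies, with the observable A = U itself
rng = random.Random(1)
u_samples = [rng.gauss(-500.0, 10.0) for _ in range(5000)]
u_hotter = reweight_average(u_samples, u_samples, beta0=1.0, beta1=0.99)
```

Lowering β (raising temperature) upweights high-energy configurations, so the reweighted mean energy rises, which is the qualitative behavior the extrapolation must reproduce; the technique is only reliable for neighboring conditions, where the two ensembles overlap.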
Barth, Eric; Schlick, Tamar
1998-08-01
We present an efficient new method termed LN for propagating biomolecular dynamics according to the Langevin equation that arose fortuitously upon analysis of the range of harmonic validity of our normal-mode scheme LIN. LN combines force linearization with force splitting techniques and disposes of LIN's computationally intensive minimization (anharmonic correction) component. Unlike the competitive multiple-timestepping (MTS) schemes today—formulated to be symplectic and time-reversible—LN merges the slow and fast forces via extrapolation rather than "impulses;" the Langevin heat bath prevents systematic energy drifts. This combination succeeds in achieving more significant speedups than these MTS methods which are limited by resonance artifacts to an outer timestep less than some integer multiple of half the period of the fastest motion (around 4-5 fs for biomolecules). We show that LN achieves very good agreement with small-timestep solutions of the Langevin equation in terms of thermodynamics (energy means and variances), geometry, and dynamics (spectral densities) for two proteins in vacuum and a large water system. Significantly, the frequency of updating the slow forces extends to 48 fs or more, resulting in speedup factors exceeding 10. The implementation of LN in any program that employs force-splitting computations is straightforward, with only partial second-derivative information required, as well as sparse Hessian/vector multiplication routines. The linearization part of LN could even be replaced by direct evaluation of the fast components. The application of LN to biomolecular dynamics is well suited for configurational sampling, thermodynamic, and structural questions.
Günter Häfelinger; Alexander Neugebauer
2005-01-01
Abstract: Stationary points for four geometrically different states of methylene (bent and linear triplet methylene, bent and linear singlet methylene) were investigated using the highly reliable post-HF CCSD(T) method. Extrapolations of CCSD(T) energies to the complete basis set (CBS) limit from Dunning triple- to quintuple-zeta correlation-consistent polarized basis sets were performed for total energies, for the equilibrium CH distances re(CH), for singlet-triplet separation energies, and for energy barriers to linearity...
International Nuclear Information System (INIS)
Sixty-nine critical configurations of up to 186 kg of uranium are reported from very early experiments (1960s) performed at the Rocky Flats Critical Mass Laboratory near Denver, Colorado. Enriched (93%) uranium metal spherical and hemispherical configurations were studied. All were thick-walled shells except for two solid hemispheres. Experiments were essentially unreflected, or they included central and/or external regions of mild steel. No liquids were involved. Critical parameters are derived from extrapolations beyond subcritical data. Extrapolations, rather than more precise interpolations between slightly supercritical and slightly subcritical configurations, were necessary because experiments involved manually assembled configurations. Many extrapolations were quite long, but the general lack of curvature in the subcritical region lends credibility to their validity. In addition to delayed critical parameters, a procedure is offered which might permit the determination of prompt critical parameters as well for the same cases. This conjectured procedure is not based on any strong physical arguments
International Nuclear Information System (INIS)
A numerical analysis of practically all existing formulae, such as expansion series, Tait, logarithmic, van der Waals and virial equations, for interpolation of experimental molar volumes versus high pressure was carried out. It is concluded that extrapolated dependences of molar volume on pressure and temperature can be valid. It was shown that, in contrast to the other equations, virial equations can also be used for fitting experimental data at relatively low pressures, P < 3 kbar. Direct solution of the resulting cubic equation in volume, using extrapolated virial coefficients, gives good agreement between existing high-pressure experimental data and calculated values
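Solving the truncated virial equation directly as a cubic in volume can be sketched as follows. The virial coefficients, units and the Newton root-finding here are illustrative assumptions, not values or methods taken from the paper.

```python
def molar_volume(p, t, b, c, r=83.14):
    """Molar volume from a truncated virial equation of state,

        P = (R T / V) (1 + B/V + C/V^2),

    rewritten as the cubic  P V^3 - R T V^2 - R T B V - R T C = 0
    and solved for the gas-like root by Newton iteration.
    Illustrative units: P in bar, V in cm^3/mol, R = 83.14 cm^3 bar/(mol K).
    """
    f = lambda v: p * v**3 - r * t * v**2 - r * t * b * v - r * t * c
    df = lambda v: 3 * p * v**2 - 2 * r * t * v - r * t * b
    v = r * t / p  # ideal-gas starting guess
    for _ in range(50):
        step = f(v) / df(v)
        v -= step
        if abs(step) < 1e-12 * v:
            break
    return v

# a gas at 300 K and 10 bar with assumed B = -100 cm^3/mol, C = 5000 cm^6/mol^2
v = molar_volume(p=10.0, t=300.0, b=-100.0, c=5000.0)
```

Starting from the ideal-gas volume makes the iteration land on the physically relevant (largest) root rather than the spurious small roots of the cubic.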
Neves, Lucio P; Silva, Eric A B; Perini, Ana P; Maidana, Nora L; Caldas, Linda V E
2012-07-01
The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response up to 11.0%. PMID:22182629
International Nuclear Information System (INIS)
The concentrations and the organ distribution patterns of 228Th, 230Th and 232Th in two 9-y-old dogs of our beagle colony were determined. The dogs were exposed only to background environmental levels of Th isotopes through ingestion (food and water) and inhalation, as are humans. The organ distribution patterns of the isotopes in the beagles were compared to the organ distribution patterns in humans to determine if it is appropriate to extrapolate the beagle organ burden data to humans. Among soft tissues, only the lungs, lymph nodes, kidney and liver, and the skeleton contained measurable amounts of Th isotopes. The organ distribution patterns of Th isotopes in humans and dog are similar, the majority of Th being in the skeleton of both species. The average skeletal concentrations of 228Th in dogs were 30 to 40 times higher than the average skeletal concentrations of the parent 232Th, whereas the concentration of 228Th in human skeleton was only four to five times higher than 232Th. This suggests that dogs have a higher intake of 228Ra through food than humans. There is a similar trend in the accumulations of 232Th, 230Th and 228Th in the lungs of dog and humans. The percentages of 232Th, 230Th and 228Th in human lungs are 26, 9.7 and 4.8, respectively, compared to 4.2, 2.6 and 0.48, respectively, in dog lungs. The larger percentages of Th isotopes in human lungs may be due simply to the longer life span of humans. If the burdens of Th isotopes in human lungs are normalized to an exposure time of 9.2 y (the mean age of the dogs at the time of sacrifice), the percent burdens of 232Th, 230Th and 228Th in human lungs are estimated to be 3.6, 1.3 and 0.66, respectively. These results suggest that the beagle may be an appropriate experimental animal for extrapolating the organ distribution pattern of Th to humans
An age-classified projection matrix model has been developed to extrapolate the chronic (28-35 d) demographic responses of Americamysis bahia (formerly Mysidopsis bahia) to population-level response. This study was conducted to evaluate the efficacy of this model for predicting t...
Directory of Open Access Journals (Sweden)
Trevor G. Jones
2014-07-01
Full Text Available Information derived from high spatial resolution remotely sensed data is critical for the effective management of forested ecosystems. However, high spatial resolution data-sets are typically costly to acquire and process and usually provide limited geographic coverage. In contrast, moderate spatial resolution remotely sensed data, while not able to provide the spectral or spatial detail required for certain types of products and applications, offer inexpensive, comprehensive landscape-level coverage. This study assessed using an object-based approach to extrapolate detailed tree species heterogeneity beyond the extent of hyperspectral/LiDAR flightlines to the broader area covered by a Landsat scene. Using image segments, regression trees established ecologically decipherable relationships between tree species heterogeneity and the spectral properties of Landsat segments. The spectral properties of Landsat bands 4 (NIR: 0.76–0.90 µm), 5 (SWIR: 1.55–1.75 µm) and 7 (SWIR: 2.08–2.35 µm) were consistently selected as predictor variables, explaining approximately 50% of the variance in richness and diversity. The results have important ramifications for ongoing management initiatives in the study area and are applicable to a wide range of applications.
Extrapolation of stress rupture data on 9 to 12% Cr steels
International Nuclear Information System (INIS)
In this document stress rupture strengths at times of up to 300,000 h have been evaluated. Data from four steels, namely 9Cr1Mo, 9Cr2MoNbV, 9Cr1MoVNb and 12CrMoV, were examined, and in each case four different parameters (Larson-Miller, Orr-Sherby-Dorn (original), Orr-Sherby-Dorn (ORNL) and Manson-Haferd) were employed to extrapolate the data out to 300,000 h. At temperatures relevant to steam generators (c. 500°C) there was found to be little difference in predicted long-term strength values among the four approaches. However, the lower 95% confidence limits were also evaluated, and for some of the steels these differed from the often-assumed minimum set at 80% of the average. The rupture ductility values have been statistically evaluated at specific temperatures to establish the trend in ductility with increasing rupture time
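A time-temperature parameter extrapolation of the kind listed above can be illustrated with the Larson-Miller parameter. The constant C = 20 and the test data below are conventional illustrative assumptions, not the paper's fitted values.

```python
import math

def larson_miller(temp_k, time_h, c=20.0):
    """Larson-Miller parameter, P = T (C + log10 t_r), with the conventional
    constant C = 20 (an assumption; fitted values vary by alloy)."""
    return temp_k * (c + math.log10(time_h))

def rupture_time(temp_k, lmp, c=20.0):
    """Invert the parameter to estimate rupture life at another temperature."""
    return 10.0 ** (lmp / temp_k - c)

# At a fixed stress, a short-term test at 600 °C failing at 1,000 h maps to an
# extrapolated rupture life at a 500 °C service temperature:
p = larson_miller(873.15, 1.0e3)
life_500c = rupture_time(773.15, p)  # much longer life at the lower temperature
```

The same master-curve logic, with different functional forms, underlies the Orr-Sherby-Dorn and Manson-Haferd parameters compared in the abstract.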
Extrapolated renormalization group calculation of the surface tension in square-lattice Ising model
International Nuclear Information System (INIS)
By using self-dual clusters (whose sizes are characterized by the numbers b=2, 3, 4, 5) within a real-space renormalization group framework, the longitudinal surface tension of the square-lattice first-neighbour spin-1/2 ferromagnetic Ising model is calculated. The exact critical temperature Tc is recovered for any value of b; the exact asymptotic behaviour of the surface tension in the limit of low temperatures is analytically recovered; the approximate correlation length critical exponents monotonically tend towards the exact value ν=1 (which, in two dimensions, coincides with the surface tension critical exponent μ) for increasingly large cells; the same behaviour is observed for the approximate values of the surface tension amplitude in the limit T→Tc. Four different numerical procedures are developed for extrapolating the renormalization group results for the surface tension to b→∞, and quite satisfactory agreement is obtained with Onsager's exact expression (error varying from zero to a few percent over the whole temperature domain). Furthermore, the set of RG surface tensions is compared with a set of biased surface tensions (associated with appropriate misfit seams), and only fortuitous coincidence is found among them. (Author)
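A generic version of such b→∞ extrapolations is a least-squares fit of the finite-cell results against 1/b, reading off the intercept. The inverse-b form and the synthetic data below are illustrative assumptions; the abstract does not specify the four procedures actually used.

```python
def extrapolate_inverse_b(bs, ys):
    """Least-squares fit of y_b = y_inf + a / b, returning the intercept y_inf.

    A generic stand-in for extrapolating finite-cell renormalization-group
    results (here, surface tensions at cell sizes b) to b -> infinity."""
    xs = [1.0 / b for b in bs]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope in 1/b
    return (sy - a * sx) / n                        # intercept = y_inf

# exact recovery for synthetic data y_b = 2.0 + 0.5 / b at b = 2..5
y_inf = extrapolate_inverse_b([2, 3, 4, 5], [2.0 + 0.5 / b for b in (2, 3, 4, 5)])
```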
Tsiftsoglou, Asterios S; Trouvin, Jean Hugues; Calvo, Gonzalo; Ruiz, Sol
2014-12-01
The regulatory framework for biosimilars was established across Europe in 2005 based on the concept of biosimilarity. This legislation secures the manufacturing, evaluation, and market authorization (MA) of high-quality, safe and efficacious biopharmaceuticals that are highly similar to their reference medicinal product (biosimilars). Demonstration of biosimilarity is documented by full-scale comparability exercises between the biosimilar and the reference product at the quality, preclinical, and clinical levels. However, the complexity, diversity, and heterogeneity of biosimilars, both in structure and manufacturing, combined with the scientific knowledge accumulated in the biotechnological analysis of recombinant therapeutic proteins, require continuous improvement of the regulatory framework based on the evolution of and experience gained in this field. This current opinion article presents the concept of biosimilarity, discusses the extrapolation of indications that the CHMP/EMA accepts on a case-by-case basis, and uncovers other challenges lying ahead in the development of biosimilars. Biosimilars are still quite 'young' products that require worldwide attention. PMID:25391420
The risk of extrapolation in neuroanatomy: the case of the mammalian vomeronasal system
Directory of Open Access Journals (Sweden)
Ignacio Salazar
2009-10-01
Full Text Available The sense of smell plays a crucial role in mammalian social and sexual behaviour, identification of food, and detection of predators. Nevertheless, mammals vary in their olfactory ability. One reason for this concerns the degree of development of their pars basalis rhinencephali, an anatomical feature that has been considered in classifying this group of animals as macrosmatic, microsmatic or anosmatic. In mammals, different structures are involved in detecting odours: the main olfactory system, the vomeronasal system (VNS, and two subsystems, namely the ganglion of Grüneberg and the septal organ. Here, we review and summarise some aspects of the comparative anatomy of the VNS and its putative relationship to other olfactory structures. Even in the macrosmatic group, morphological diversity is an important characteristic of the VNS, specifically of the vomeronasal organ and the accessory olfactory bulb. We conclude that it is a big mistake to extrapolate anatomical data of the VNS from species to species, even in the case of relatively close evolutionary proximity between them. We propose in-depth study of mammalian VNSs other than those of rodents as a way to clarify its exact role in olfaction. Our experience in this field leads us to hypothesise that the VNS, considered across all mammalian species, could be a system undergoing involution or regression, and could serve as one more integrated olfactory subsystem.
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Directory of Open Access Journals (Sweden)
Karsten Schulz
2011-12-01
Full Text Available To capture the spatial and temporal variability of gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
International Nuclear Information System (INIS)
This report addresses safety analysis of the whole repository life-cycle that may require long term performance assessment of its components and evaluation of potential impacts of the facility on the environment. Generic consideration of procedures for the development of predictive tools is complemented by detailed characterization of selected principles and methods that were applied and presented within the co-ordinated research project (CRP). The project focused on different approaches to extrapolation, considering radionuclide migration/sorption, physical, geochemical and geotechnical characteristics of engineered barriers, irradiated rock and backfill performance, and on corrosion of metallic and vitreous materials. This document contains a comprehensive discussion of the overall problem and the practical results of the individual projects performed within the CRP. Each of the papers on the individual projects has been indexed separately
International Nuclear Information System (INIS)
The potential energy curves (PECs) of the b1Σg+ state of S2 have been calculated using a multi-reference configuration interaction method with the Davidson correction and a series of Dunning’s correlation-consistent basis sets: aug-cc-pVXZ and aug-cc-pV(X+d)Z (X = Q, 5 and 6). The calculated PECs are subsequently extrapolated to the complete basis set limit. These PECs are then used to deduce the analytical potential energy functions (APEFs), which show small root mean square deviations. Based on the APEFs, we have calculated the spectroscopic parameters and compared them with the experimental data available at present. By solving the Schrödinger equation numerically, we also obtain the complete set of vibrational levels, classical turning points, and rotational and centrifugal distortion constants for J=0. The present results can serve as a useful reference for future experimental and dynamics studies. (paper)
Zhang, Lu-Lu; Zhang, Jing; Meng, Qing-Tian; Song, Yu-Zhi
2015-03-01
The potential energy curves (PECs) of the b¹Σg⁺ state of S₂ have been calculated using a multi-reference configuration interaction method with the Davidson correction and a series of Dunning's correlation-consistent basis sets: aug-cc-pVXZ and aug-cc-pV(X+d)Z (X = Q, 5 and 6). The calculated PECs are subsequently extrapolated to the complete basis set limit. These PECs are then used to deduce the analytical potential energy functions (APEFs), which show small root mean square deviations. Based on the APEFs, we have calculated the spectroscopic parameters and compared them with the experimental data available at present. By solving the Schrödinger equation numerically, we also obtain the complete set of vibrational levels, classical turning points, and rotational and centrifugal distortion constants for J = 0. The present results can serve as a useful reference for future experimental and dynamics studies.
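A common way to carry out the complete-basis-set extrapolation step mentioned above is a two-point formula that assumes the correlation energy converges as E_X = E_CBS + A·X⁻³ in the cardinal number X (a Helgaker-type scheme; the abstract does not state which scheme was used, and the energies below are invented illustrations, not the S₂ data):

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point X**-3 extrapolation of correlation energies computed at
    cardinal numbers x < y; eliminates the A*X**-3 term analytically."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Illustrative quadruple- and quintuple-zeta correlation energies (hartree)
e_cbs = cbs_two_point(-0.512, -0.527, 4, 5)
```

By construction the extrapolated value lies below the larger-basis energy, since finite-basis correlation energies converge to the CBS limit from above.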
Accelerated aging embrittlement of cast duplex stainless steel: Activation energy for extrapolation
International Nuclear Information System (INIS)
Cast duplex stainless steels, used extensively in LWR systems for primary pressure boundary components such as primary coolant pipes, valves, and pumps, are susceptible to thermal aging embrittlement at reactor operating or higher temperatures. Since a realistic aging embrittlement for end-of-life or life-extension conditions (i.e., 32--50 yr of aging at 280--320 degree C) cannot be produced, it is customary to simulate the metallurgical structure by accelerated aging at ≈400 degree C. Over the past several years, extensive data on accelerated aging have been reported from a number of laboratories. The most important information from these studies is the activation energy, namely, the temperature dependence of the aging kinetics between 280 and 400 degree C, which is used to extrapolate the aging characteristics to reactor operating conditions. The activation energies (in the range of 18--50 kcal/mole) are, in general, sensitive to material grade, chemical composition, and fabrication process, and a few empirical correlations, obtained as a function of bulk chemical composition, have been reported. In this paper, a mechanistic understanding of the activation energy is described on the basis of the results of microstructural characterization of various heats of CF-3, -8, and -8M grades that were used in aging studies at different laboratories. The primary mechanism of aging embrittlement at temperatures between 280 and 400 degree C is the spinodal decomposition of the ferrite phase, and M23C6 carbide precipitation on the ferrite/austenite boundaries is the secondary mechanism for high-carbon CF-8 grade. 20 refs., 10 figs., 3 tabs
Extrapolation of the relative risk of radiogenic neoplasms across mouse strains and to man
International Nuclear Information System (INIS)
We have examined two interrelated questions: is the susceptibility for radiogenic cancer related to the natural incidence, and are the responses of cancer induction by radiation described better by an absolute or a relative risk model. Also, we have examined whether it is possible to extrapolate relative risk estimates across species, from mice to humans. The answers to these questions were obtained from determinations of risk estimates for nine neoplasms in female and male C3Hf/Bd and C57BL/6 Bd mice and from data obtained from previous experiments with female BALB/c Bd and RFM mice. The mice were exposed to 137Cs gamma rays at 0.4 Gy/min to doses of 0, 0.5, 1.0, or 2.0 Gy. When tumors that were considered the cause of death were examined, both the control and induced mortality rates for the various tumors varied considerably among sexes and strains. The results suggest that in general susceptibility is determined by the control incidence. The relative risk model was significantly superior in five of the tumor types: lung, breast, liver, ovary, and adrenal. Both models appeared to fit myeloid leukemia and Harderian gland tumors, and neither provided good fits for thymic lymphoma and reticulum cell sarcoma. When risk estimates of radiation-induced tumors in humans and mice were compared, it was found that the relative risk estimates for lung, breast, and leukemia were not significantly different between humans and mice. In the case of liver tumors, mice had a higher risk than humans. These results indicate that the relative risk model is the appropriate approach for risk estimation for a number of tumors. The apparent concordance of relative risk estimates between humans and mice for the small number of cancers examined encourages us to undertake further studies
Li, Yong-Qing; Song, Yu-Zhi; Joaquim de Campos Varandas, António
2015-01-01
An accurate single-sheeted double many-body expansion potential energy surface is reported for the title system. It is obtained by using the aug-cc-pVTZ and aug-cc-pVQZ basis sets with extrapolation of the electron correlation energy to the complete basis set limit, plus extrapolation to the complete basis set limit of the complete-active-space self-consistent field energy. The collinear and bending barrier heights of the new global potential energy surface are 2.301 and 1.768 kcal mol-1, in very good agreement with the values of 2.222 and 1.770 kcal mol-1 from the current best potential energy surface. In particular, the new potential energy surface describes well the important van der Waals interactions, which is very useful for investigating the dynamics of the title system. Thus, the new potential energy surface can be recommended both for dynamics studies of the F + H2 reaction and as a building block for constructing the potential energy surfaces of larger fluorine/hydrogen containing systems. Based on the new potential energy surface, a preliminary theoretical study of the reaction F(²P) + H₂(X¹Σg⁺) → FH(X¹Σ⁺) + H(²S) has been carried out with quasi-classical trajectory and quantum mechanical methods. The results show that the new PES is suitable for any kind of dynamics studies. Supplementary material in the form of one pdf file available from the Journal web page at http://dx.doi.org/10.1140/epjd/e2014-50445-3
Semiokhina, A F; Ochinskaia, E I; Rubtsova, N B; Pleskacheva, M G; Krushinskii, L V
1985-01-01
Sharp EEG changes are recorded in the bioelectrical activity of the dorsal cortex and dorsal ventricular ridge of marsh tortoises in conditions of free movement while solving an extrapolation task (a test of elementary reasoning ability). These changes of a pathological character, accompanied by neurotic states, were observed in some animals that had correctly solved the task several times in succession (2-5), beginning with the first presentation. Such changes of EEG and behaviour were not found in tortoises that committed errors at the first presentations of the task and only gradually learned the correct solution. Formation of the adequate behaviour can proceed by two means: on the basis of elementary reasoning ability or of learning. Disturbance of adequate behaviour in the experiment, with characteristic changes of EEG, testifies to a difficult state of the animal while solving the extrapolation task. PMID:4090728
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2013-01-01
Model-based short-term forecasting of urban storm water runoff can be applied in real-time control of drainage systems in order to optimize system capacity during rain and minimize combined sewer overflows, improve wastewater treatment or activate alarms if local flooding is impending. A novel online system, which forecasts flows and water levels in real time with inputs from extrapolated radar rainfall data, has been developed. The fully distributed urban drainage model includes auto-calibration using online in-sewer measurements, which is seen to improve forecast skill significantly. The radar rainfall extrapolation (nowcast) limits the lead time of the system to two hours. In this paper, the model set-up is tested on a small urban catchment for a period of 1.5 years. The 50 largest events are presented.
International Nuclear Information System (INIS)
A borehole investigating device takes measurements of a subsurface earth formation and provides signals forming sonic, formation density or similar logs of the borehole. Additionally, the investigating device measures the dip of seismic signal reflectors traversed by the borehole and provides corresponding dip signals. A seismic section which may or may not include the borehole is selected, and the log and dip signals are combined with signals defining the location of the seismic section with respect to the borehole, to thereby provide synthetic logs for each of a number of virtual boreholes which coincide with selected virtual and/or actual shotpoints of the seismic section. The synthetic log signals are then combined to form a truly two-dimensional synthetic seismogram for the selected seismic section. The synthetically derived signals may be corrected in accordance with a selected geological model of the formation
Full-disk nonlinear force-free field extrapolation of SDO/HMI and SOLIS/VSM magnetograms
Tadesse, Tilaye; Wiegelmann, T.; Inhester, B.; MacNeice, P.; Pevtsov, A.; Sun, X.
2012-01-01
Extrapolation codes in Cartesian geometry for modelling the magnetic field in the corona do not take the curvature of the Sun's surface into account and can only be applied to relatively small areas, e.g., a single active region. We compare the analysis of the photospheric magnetic field and subsequent force-free modeling based on full-disk vector maps from the Helioseismic and Magnetic Imager (HMI) on board the Solar Dynamics Observatory (SDO) and the Vector Spectromagnetograph (VSM) o...
Wiegelmann, T.; Thalmann, J. K.; Inhester, B.; Tadesse, T.; Sun, X.; Hoeksema, J. T.
2012-01-01
The SDO/HMI instruments provide photospheric vector magnetograms with a high spatial and temporal resolution. Our intention is to model the coronal magnetic field above active regions with the help of a nonlinear force-free extrapolation code. Our code is based on an optimization principle and has been tested extensively with semi-analytic and numeric equilibria and been applied before to vector magnetograms from Hinode and ground based observations. Recently we implemented ...
International Nuclear Information System (INIS)
The metrological coherence among standard systems is a requirement for assuring the reliability of measurements of dosimetric quantities in the ionizing radiation field. Scientific and technological improvements in beta radiation metrology came with the installation of the new beta secondary standard BSS2 in Brazil and with the adoption of the internationally recommended beta reference radiations. The Dosimeter Calibration Laboratory of the Development Center for Nuclear Technology (LCD/CDTN), in Belo Horizonte, implemented the BSS2, and methodologies are investigated for characterizing the beta radiation fields by determining the field homogeneity, the accuracy and the uncertainties in the absorbed dose in air measurements. In this work, a methodology for verifying the metrological coherence among beta radiation fields in standard systems was investigated; an extrapolation chamber and radiochromic films were used, and measurements were made in terms of absorbed dose in air. The reliability of both the extrapolation chamber and the radiochromic film was confirmed, and their calibrations were done in the LCD/CDTN in 90Sr/90Y, 85Kr and 147Pm beta radiation fields. The angular coefficients of the extrapolation curves were determined with the chamber; the field mapping and homogeneity were obtained from dose profiles and isodoses with the radiochromic films. A preliminary comparison between the LCD/CDTN and the Instrument Calibration Laboratory of the Nuclear and Energy Research Institute / Sao Paulo (LCI/IPEN) was carried out. Extrapolation chamber measurements, in terms of absorbed dose in air rates, showed differences between the two laboratories of up to -1% and 3% for the 90Sr/90Y, 85Kr and 147Pm beta radiation fields. Results with the EBT radiochromic films for 0.1, 0.3 and 0.15 Gy absorbed dose in air, for the same beta radiation fields, showed differences up to 3%, -9% and -53%.
The beta radiation field mappings with radiochromic films in both BSS2 showed that some of them were not geometrically aligned. (author)
Scott, Bradley J; Klein, Agnes V; Wang, Jian
2015-03-01
Monoclonal antibodies have become mainstays of treatment for many diseases. After more than a decade on the Canadian market, a number of authorized monoclonal antibody products are facing patent expiry. Given their success, most notably in the areas of oncology and autoimmune disease, pharmaceutical and biotechnology companies are eager to produce their own biosimilar versions and have begun manufacturing and testing for a variety of monoclonal antibody products. In October of 2013, the first biosimilar monoclonal antibody products were approved by the European Medicines Agency (Remsima™ and Inflectra™). These products were authorized by Health Canada shortly after; however, while the EMA allowed for extrapolation to all of the indications held by the reference product, Health Canada limited extrapolation to a subset of the indications held by the reference product, Remicade®. The purpose of this review is to discuss the Canadian regulatory framework for the authorization of biosimilar mAbs with specific discussion around the clinical requirements for establishing (bio)-similarity and to present the principles that are used in the clinical assessment of New Drug Submissions for intended biosimilar monoclonal antibodies. Health Canada's current views regarding indication extrapolation, product interchangeability, and post-market surveillance are discussed as well. PMID:24965228
Jiang, Chaowei
2015-01-01
In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. Up to the present, extrapolation of the magnetic field from boundary data is the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover coronal magnetic flux ropes is important for coronal field extrapolation. In this paper, our coronal field extrapolation code (CESE-MHD-NLFFF, Jiang & Feng 2012) is examined with an analytical magnetic flux rope model proposed by Titov & Demoulin (1999), which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. By using only the vector field in the bottom boundary as input, we test our code with the model in a representative range of parameter space and find that the model field is reconstructed with high accuracy. Especially, the magnetic topological interfaces formed between the flux rop...
Gaddah, Wajdi A.
2015-05-01
In this paper, we present highly accurate numerical results for the lowest four energy eigenvalues of the quartic, sextic and octic anharmonic oscillators over a wide range of the anharmonicity parameter λ. Also, we provide illustrative graphs describing the dependence of the eigenvalues on λ. Our computation is carried out by using higher-order finite-difference approximation, involving the nine- and ten-point differentiation formulas. In addition, we apply Richardson's extrapolation method in our calculation for the purpose of achieving maximum numerical precision. The main advantage of utilizing the finite-difference approach lies in its simplicity and capability to transform the time-independent Schrödinger equation into an eigenvalue matrix equation. This allows the use of numerical matrix algebra for obtaining several eigenvalues and eigenvectors simultaneously without consuming much computer time. The method is illustrated in a simple pedagogical way, through which the close relation between differential and algebraic eigenvalue problems is clearly seen. The findings of our computations via MATLAB are tested against a number of accurate results derived by different methods.
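The combination of finite differences and Richardson extrapolation described above can be sketched with a second-order three-point scheme (the paper itself uses nine- and ten-point formulas). The Hamiltonian H = -d²/dx² + x² + λx⁴, the box size and the step sizes below are assumptions of this sketch, chosen so that the harmonic limit λ = 0 has the known ground-state eigenvalue 1.

```python
import numpy as np

def lowest_eigenvalue(h, lam, L=8.0):
    """Lowest eigenvalue of H = -d^2/dx^2 + x^2 + lam*x^4 on [-L, L],
    second-order central finite differences with Dirichlet walls."""
    x = np.arange(-L + h, L, h)                 # interior grid points
    n = x.size
    main = 2.0 / h**2 + x**2 + lam * x**4       # tridiagonal Hamiltonian
    off = -np.ones(n - 1) / h**2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[0]

def richardson(h, lam):
    """One Richardson step: the discretization error is O(h^2), so the
    combination (4*E(h/2) - E(h)) / 3 cancels the leading term."""
    return (4.0 * lowest_eigenvalue(h / 2, lam) - lowest_eigenvalue(h, lam)) / 3.0

e0 = richardson(0.05, 0.0)   # harmonic limit: exact value is 1.0
```

The extrapolated value is several orders of magnitude more accurate than either raw finite-difference eigenvalue, which is exactly the gain the abstract attributes to Richardson's method.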
International Nuclear Information System (INIS)
In this paper, we present a first numerical scheme to estimate partition functions (PF) of 3D Ising fields. Our strategy is applied in the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, the proposed extrapolation method allows us to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole-brain fMRI datasets feasible. (authors)
Reeves, J. A.; Knight, R. J.; Zebker, H. A.; Kitanidis, P. K.; Schreuder, W. A.
2013-12-01
A 2004 court decision established that hydraulic head levels within the confined aquifer system of the San Luis Valley (SLV), Colorado be maintained within the range experienced in the years between 1978 and 2000. The current groundwater flow model for this area is not able to predict hydraulic head accurately in the confined aquifer system due to a dearth of calibration points, i.e., hydraulic head measurements, during the time period of interest. The work presented here investigates the extent to which spatially and temporally dense measurements of deformation from Interferometric Synthetic Aperture Radar (InSAR) data could be used to interpolate and extrapolate temporal and spatial gaps in the hydraulic head dataset by performing a calibration at the well locations. We first predicted the magnitude of the seasonal deformation at the confined aquifer well locations by using aquifer thickness/lithology information from well logs and estimates of the aquifer compressibility from the literature. At 11 well locations the seasonal magnitude of the deformation was sufficiently large so as to be reliably measured with InSAR, given the accepted level of uncertainty of the measurement (~ 5 mm). Previous studies in arid or urban areas have shown that high quality InSAR deformation measurements are often collocated with hydraulic head measurements at monitoring wells, making such a calibration approach relatively straightforward. In contrast, the SLV is an agricultural area where many factors, e.g. crop growth, can seriously degrade the quality of the InSAR data. We used InSAR data from the ERS-1 and ERS-2 satellites, which have a temporal sampling of 35 days and a spatial sampling on the order of tens of meters, and found that the InSAR data were not of sufficiently high quality at any of the 11 selected well locations.
Hence, we used geostatistical techniques to analyze the high quality InSAR deformation data elsewhere in the scene and to estimate the deformation at the selected well locations. At the 11 locations we estimated the compressibility parameter that relates the deformation and the hydraulic head. We found that this calibration was effective at 3 of the well locations where the magnitude of the seasonal deformation was > 3 cm, well above the uncertainty of the InSAR measurement. We then estimated the hydraulic head prior to and within the temporal sampling window of the hydraulic head measurements at the 3 well locations. We found that 59% of the InSAR-predicted hydraulic head values agree with the measured hydraulic head values, within the uncertainty of the data. Given our success in extending the hydraulic head data temporally, the next step in our research is to use InSAR data to interpolate spatially between hydraulic head measurements at field sites where the magnitude of the deformation is large enough to be accurately measured by InSAR.
Energy Technology Data Exchange (ETDEWEB)
Meyer, M.; Lerjen, M.; Menth, S. [emkamatik GmbH, Wettingen (Switzerland); Luethi, M. [Swiss Federal Institute of Technology (ETHZ), Institute for Transport Planning and Systems (IVT), Zuerich (Switzerland); Tuchschmid, M. [SBB AG, BahnUmwelt-Center, 3000 Bern (Switzerland)
2009-11-15
This appendix to a final report for the Swiss Federal Office of Energy (SFOE) presents the results of measurements made on trains and presents and discusses extrapolations made on the basis of these measurements. The evaluation and selection of the trains on which the measurements were to be made is discussed. Mainly passenger trains were selected, as only a few goods engines have the necessary equipment and equipping them would be costly. Measurements made on a Re 460 locomotive are presented and discussed. The methods used in the energy analysis are described, and the results obtained on several itineraries that include partial single-track working are presented and discussed.
Directory of Open Access Journals (Sweden)
Günter Häfelinger
2005-01-01
Full Text Available Abstract: Stationary points for four geometrically different states of methylene, bent and linear triplet methylene and bent and linear singlet methylene, were investigated using the highly reliable post-HF CCSD(T) method. Extrapolations to the CCSD(T) complete basis set (CBS) limit, from Dunning triple- to quintuple-zeta correlation-consistent polarized basis sets, were performed for total energies, for the equilibrium CH distances re(CH), for singlet-triplet separation energies, for energy barriers to linearity and for correlation energies. Post-HF calculations with Dunning basis sets from the literature are presented for comparison.
Rogers, Richard
2004-01-01
Objective: The overriding objective is a critical examination of Munchausen syndrome by proxy (MSBP) and its closely-related alternative, factitious disorder by proxy (FDBP). Beyond issues of diagnostic validity, assessment methods and potential detection strategies are explored. Methods: A painstaking analysis was conducted of the MSBP and FDBP…
Gajewska, M; Worth, A; Urani, C; Briesen, H; Schramm, K-W
2014-06-16
The application of physiologically based toxicokinetic (PBTK) modelling in route-to-route (RtR) extrapolation of three cosmetic ingredients: coumarin, hydroquinone and caffeine is shown in this study. In particular, the oral no-observed-adverse-effect-level (NOAEL) doses of these chemicals are extrapolated to their corresponding dermal values by comparing the internal concentrations resulting from oral and dermal exposure scenarios. The PBTK model structure has been constructed to give a good simulation performance of biochemical processes within the human body. The model parameters are calibrated based on oral and dermal experimental data for the Caucasian population available in the literature. Particular attention is given to modelling the absorption stage (skin and gastrointestinal tract) in the form of several sub-compartments. This gives better model prediction results when compared to those of a PBTK model with a simpler structure of the absorption barrier. In addition, the role of quantitative structure-property relationships (QSPRs) in predicting skin penetration is evaluated for the three substances with a view to incorporating QSPR-predicted penetration parameters in the PBTK model when experimental values are lacking. Finally, PBTK modelling is used, first to extrapolate oral NOAEL doses derived from rat studies to humans, and then to simulate internal systemic/liver concentrations - Area Under Curve (AUC) and peak concentration - resulting from specified dermal and oral exposure conditions. Based on these simulations, AUC-based dermal thresholds for the three case study compounds are derived and compared with the experimentally obtained oral threshold (NOAEL) values. PMID:24731971
Directory of Open Access Journals (Sweden)
Ravichandran R
2009-01-01
Full Text Available The objective of the present study is to establish radiation standards for absorbed doses for clinical high-energy linear accelerator beams. In the absence of a cobalt-60 beam for arriving at Nd,water values for thimble chambers, we investigated the efficacy of a perspex-mounted extrapolation chamber (EC) used earlier for low-energy x-ray and beta dosimetry. An extrapolation chamber with a facility for achieving variable electrode separations from 10.5 mm to 0.5 mm using a micrometer screw was used for calibrations. Photon beams of 6 MV and 15 MV and electron beams of 6 MeV and 15 MeV from Varian Clinac linacs were calibrated. Absorbed dose estimates to perspex were converted into dose to solid water for comparison with FC 65 ionisation chamber measurements in water. Measurements made during the period December 2006 to June 2008 are considered for evaluation. Uncorrected ionization readings of the EC for all the radiation beams over the entire period were within 2%, showing the consistency of the measurements. Absorbed doses estimated by the EC were in good agreement with in-water calibrations, within 2% for photon and electron beams. The present results suggest that extrapolation chambers can be considered as an independent measuring system for absorbed dose in addition to Farmer-type ion chambers. In the absence of the standard beam quality (Co-60) as reference quality for Nd,water, the possibility of keeping the EC as a primary standard for absorbed dose calibrations in high-energy radiation beams from linacs should be explored. As there is neither a standards laboratory nor an SSDL available in our country, we look forward to keeping the EC as a local standard for hospital chamber calibrations. We are also participating in the IAEA mailed TLD intercomparison programme for quality audit of the existing status of radiation dosimetry in high-energy linac beams.
The performance of EC has to be confirmed with cobalt-60 beams by a separate study, as linacs are susceptible for minor variations in dose output on different days.
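The extrapolation-chamber principle used above — record the ionization charge at a series of electrode separations, fit a line, and convert the slope to absorbed dose to air — can be sketched as follows. All readings, the electrode area and the beam conditions are invented for illustration and do not reproduce the study's calibration data.

```python
import numpy as np

# Assumed readings: charge grows linearly with electrode separation
sep_mm = np.array([0.5, 1.5, 3.0, 5.0, 7.5, 10.5])
charge_nc = np.array([0.101, 0.302, 0.605, 1.008, 1.512, 2.115])

# The "angular coefficient" of the extrapolation curve is the fitted slope
slope_nc_per_mm, intercept = np.polyfit(sep_mm, charge_nc, 1)

W_OVER_E = 33.97        # J/C, mean energy per ion pair in dry air
RHO_AIR = 1.205e-6      # kg/cm^3 at 20 degC, 101.3 kPa
AREA_CM2 = 7.07         # collecting-electrode area (assumed)

# Absorbed dose to air: D = (W/e) * (dQ/dl) / (rho_air * A)
dq_dl = slope_nc_per_mm * 1e-9 * 10.0   # convert nC/mm to C/cm
dose_gy = W_OVER_E * dq_dl / (RHO_AIR * AREA_CM2)
```

Fitting the slope rather than using a single reading is what removes the separation-independent offsets (e.g. extracameral signal), which is the point of the extrapolation technique.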
Richmond, Orien M W; McEntee, Jay P; Hijmans, Robert J; Brashares, Justin S
2010-01-01
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. 
This has implications for modeling range shifts of organisms in response to climate change. PMID:20877563
Energy Technology Data Exchange (ETDEWEB)
Sussmann, R.; Homburg, F.; Freudenthaler, V.; Jaeger, H. [Frauenhofer Inst. fuer Atmosphaerische Umweltforschung, Garmisch-Partenkirchen (Germany)
1997-12-31
The CCD image of a persistent contrail and the coincident LIDAR measurement are presented. To extrapolate the LIDAR-derived optical thickness to the video field of view, an anisotropy correction and a calibration have to be performed. Observed bright halo components result from highly regularly oriented hexagonal crystals with sizes of 200 μm-2 mm. This is explained by measured ambient humidities below the formation threshold of natural cirrus. Optical thickness from LIDAR shows significant discrepancies with the result from coincident NOAA-14 data. Errors result from the anisotropy correction and parameterized relations between AVHRR channels and optical properties. (author) 28 refs.
The efficiency variation method for 4πβ-γ coincidence counting by ink-jet printing
International Nuclear Information System (INIS)
In order to vary the counting efficiencies in the 4πβ-γ coincidence extrapolation technique, a radioactive source was coated directly with varying amounts of an electrically conducting pigment using an ink-jet printer. This method can be used to efficiently prepare the multiple sources needed to generate efficiency extrapolation curves, and was successfully applied to the standardization of a 54Mn source
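The extrapolation step behind such coincidence measurements is commonly done by plotting the ratio of singles to coincidence rates against an inefficiency parameter and extrapolating linearly to perfect efficiency. The sketch below uses synthetic count-rate data generated from an assumed true activity and an assumed linear efficiency dependence; it illustrates only the curve-fitting step, not the measurement itself.

```python
import numpy as np

# Synthetic data: vary the beta efficiency (here by assumption; in the
# paper, by ink-jet printing different pigment loads onto the source)
n0_true = 5000.0                                  # assumed true activity, s^-1
eff = np.array([0.95, 0.90, 0.85, 0.80, 0.75])    # beta efficiencies, N_c/N_gamma
slope_true = 120.0                                # assumed linear coefficient

# Efficiency-extrapolation curve: N_beta*N_gamma/N_c versus (1 - eff)/eff
x = (1.0 - eff) / eff
ratio = n0_true + slope_true * x

coef = np.polyfit(x, ratio, 1)   # linear fit to the extrapolation curve
n0_est = coef[1]                 # intercept at (1-eff)/eff -> 0, i.e. eff = 1
```

Because the synthetic data are exactly linear, the fitted intercept recovers the assumed activity; with real data the residual curvature of the efficiency dependence sets the extrapolation uncertainty.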
Ford, William P.; Van Orden, J. W.
2013-01-01
In this work, an off-shell extrapolation is proposed for the Regge-model $NN$ amplitudes of \cite{FVO_Reggemodel}. A prescription for extrapolating these amplitudes for one nucleon off-shell in the initial state is given. Applications of these amplitudes to calculations of deuteron electrodisintegration are presented and compared to the limited available precision data in the kinematical region covered by the Regge model.
Transport equation solving methods
International Nuclear Information System (INIS)
This work is mainly devoted to the C_N and F_N methods. C_N method: starting from a lemma stated by Placzek, an equivalence is established between two problems: the first is defined in a finite medium bounded by a surface S, the second in the whole space. In the first problem the angular flux on the surface S is shown to be the solution of an integral equation. This equation is solved by Galerkin's method. The C_N method is applied here to one-velocity problems: in plane geometry, slab albedo and transmission with Rayleigh scattering, and calculation of the extrapolation length; in cylindrical geometry, albedo and extrapolation length calculation with linear scattering. F_N method: the basic integral transport equation of the C_N method is integrated on Case's elementary distributions; another integral transport equation is obtained, and this equation is solved by a collocation method. The plane problems solved by the C_N method are also solved by the F_N method. The F_N method is extended to any polynomial scattering law. Some simple spherical problems are also studied. Chandrasekhar's method, the collision probability method and Case's method are presented for comparison with the C_N and F_N methods. This comparison shows the respective advantages of the two methods: a) fast convergence and possible extension to various geometries for the C_N method; b) easy calculations and easy extension to polynomial scattering for the F_N method
Directory of Open Access Journals (Sweden)
Leonieke Vermeer
2007-01-01
Full Text Available
'If the table turns, science will stagger'. The relationship between spiritualism and science in the Netherlands around 1900
Spiritualism is the belief that the living can keep contact, usually through an intermediary called a 'medium', with the spirits of the dead. The history of modern spiritualism started in 1848 in America, and in the decades that followed it spread all over the world. Especially as a result of British influences, modern Anglo-Saxon spiritualism is characterized by a search for scientific proof of the so-called spiritualist phenomena. The Netherlands was late, compared with neighbouring countries, in institutionalizing the scientific study of these phenomena in the 1920s. But this does not imply that there was no earlier discussion about it. Indeed, around 1900 there were attempts at a debate about the scientific underpinning of spiritualism, and the main stage for it was the journal Het toekomstig leven [The future life]. In the historical conceptualization of this debate it has long been common to see the spiritualists as an anti-modern counterculture and the scientists as the representatives of modernity. Recently this dichotomous picture has been replaced by a more nuanced view that does more justice to the historical reality. Although Het toekomstig leven often used rhetorical strategies that emphasized the confrontation with science, the journal also lavishly incorporated scientific elements and made tireless attempts at a scientific debate on, and study of, the paranormal phenomena. Unlike in neighbouring countries, hardly any natural scientists responded, but some physicians, as well as pioneers of the new field of parapsychology, pleaded for scientific research into spiritualism. This research eventually became reality in 1920 under the direction of some heavyweight scientists, but just like Het toekomstig leven, the Dutch Society for Psychical Research was also marked by the difference between the critical-scientific approach and the not-so-critical approach of the believers.
In my contribution I have shown that this demarcation was, however, not the same as the one between science and spiritualism, because these boundaries were considerably permeable.
Directory of Open Access Journals (Sweden)
T. Gerken
2012-04-01
Full Text Available This paper introduces a surface model with two soil layers for use in a high-resolution circulation model, modified with an extrapolated surface temperature to be used for the calculation of turbulent fluxes. A quadratic temperature profile based on the layer mean and base temperatures is assumed in each layer and extended to the surface. The model is tested at two sites on the Tibetan Plateau near Nam Co Lake on four days during the 2009 monsoon season. In comparison to a two-layer model without an explicit surface temperature estimate, the delay in the diurnal flux cycles is greatly reduced and the modelled surface temperature is much closer to observations. Comparison with a SVAT model and eddy covariance measurements shows overall reasonable model performance, based on RMSD and cross-correlation comparisons between the modified and original models. A potential limitation of the model is the need for careful initialisation of the soil temperature profile, which requires field measurements. We show that the modified model is capable of reproducing fluxes of similar magnitude and dynamics when compared to more complex methods chosen as a reference.
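The quadratic-profile extrapolation described above can be sketched numerically. The closure used below (zero heat flux at the bottom of the lower layer, plus continuity of temperature and gradient at the layer interface) is our own assumption for illustration, not necessarily the paper's; all function and variable names are hypothetical.

```python
import numpy as np

def fit_layer(d, t_mean, t_base, g_base):
    """Fit T(z) = a + b*z + c*z**2 on [0, d] (z positive downward, z = 0 at
    the layer top) from the layer-mean temperature, the temperature at the
    layer base, and the temperature gradient at the base."""
    A = np.array([
        [1.0, d / 2.0, d**2 / 3.0],   # layer mean: (1/d) * integral of T
        [1.0, d,       d**2],         # T(d) = t_base
        [0.0, 1.0,     2.0 * d],      # T'(d) = g_base
    ])
    return np.linalg.solve(A, np.array([t_mean, t_base, g_base]))

def surface_temperature(d1, mean1, d2, mean2, t_bottom):
    """Extrapolate the skin temperature T(0) from the two layer means and a
    known bottom temperature, assuming zero heat flux at the bottom of the
    lower layer and continuity of T and T' at the layer interface."""
    a2, b2, _ = fit_layer(d2, mean2, t_bottom, 0.0)  # lower layer first
    a1, _, _ = fit_layer(d1, mean1, a2, b2)          # tie top layer to it
    return a1                                        # T at z = 0 (surface)
```

Each layer's quadratic has three coefficients, so three linear conditions per layer determine the profile; the surface value then follows for free from the top-layer fit.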
Taylor, Nicholas W.; Boyle, Michael; Reisswig, Christian; Scheel, Mark A.; Chu, Tony; Kidder, Lawrence E.; Szilagyi, Bela
2013-01-01
We extract gravitational waveforms from numerical simulations of black hole binaries computed using the Spectral Einstein Code. We compare two extraction methods: direct construction of the Newman-Penrose (NP) scalar $\\Psi_4$ at a finite distance from the source and Cauchy-characteristic extraction (CCE). The direct NP approach is simpler than CCE, but NP waveforms can be contaminated by near-zone effects---unless the waves are extracted at several distances from the source ...
Imaging of defects in girth welds using inverse wave field extrapolation of ultrasonic data:
Pörtzgen, N.
2007-01-01
Ultrasonic non-destructive testing is a renowned method for the inspection of girth welds. However, defect sizing and characterization remains challenging with the current inspection philosophy. In addition, data display and interpretation is not straightforward and requires skill and experience from the inspector. A better and more reliable inspection result would contribute to safer pipeline construction and economic benefits (like low false call rates and the possibility to use smaller wal...
Gaspar, Leticia; López-Vicente, Manuel; Palazón, Leticia; Quijano, Laura; Navas, Ana
2015-04-01
Fallout radionuclides, particularly 137Cs, have been successfully used in soil erosion investigations over a range of different landscapes. This technique provides mean annual values of spatially distributed soil erosion and deposition rates for the last 40-50 years. However, upscaling the data provided by fallout radionuclides to catchment level is required to understand soil redistribution processes, to support catchment management strategies, and to assess the main soil erosion factors such as vegetation cover or topography. In recent years, extrapolating field-scale soil erosion rates estimated from 137Cs data to catchment scale has been addressed using geostatistical interpolation and Geographical Information Systems (GIS). This study aims to assess soil redistribution in an agroforestry catchment characterized by abrupt topography and an intricate mosaic of land uses, using 137Cs data and GIS. A new methodological approach using GIS is presented as an alternative to interpolation tools for extrapolating soil redistribution rates in complex landscapes. This approach divides the catchment into Homogeneous Physiographic Units (HPUs) based on unique combinations of land use, hydrological network and slope value. A total of 54 HPUs, each with a specific combination of land use, Strahler order and slope, were identified within the study area (2.5 km2) located in the north of Spain. Using 58 soil erosion and deposition rates estimated from 137Cs data, we were able to characterize the predominant redistribution processes in 16 HPUs, which represent 78% of the study area surface. Erosion processes predominated in 6 HPUs (23%), corresponding to cultivated units in which slope and Strahler order are moderate or high, and to scrubland units with high slope. Deposition was predominant in 3 HPUs (6%), mainly in riparian areas, and to a lesser extent in forest and scrubland units with low slope and low to moderate Strahler order.
Redistribution processes, both erosion and deposition, were recorded in 7 HPUs (49%). Forest units with high slope but low Strahler order showed low redistribution rates because the soil surface was well protected by vegetation, while cultivated units with moderate slope and low Strahler order showed high erosion and deposition rates due to tillage practices. This new approach provides a basis for extrapolating field-scale soil redistribution rates to catchment scale in complex landscapes. Additional 137Cs data at strategic locations would improve the results through a better characterization of some of the HPUs.
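The HPU construction described above amounts to grouping sampling locations by unique (land use, Strahler order, slope class) combinations; a minimal sketch, with an illustrative tuple schema that is not the paper's actual GIS data model:

```python
from collections import defaultdict

def classify_hpus(points):
    """Group sampling points into Homogeneous Physiographic Units (HPUs),
    i.e. unique (land use, Strahler order, slope class) combinations, and
    return the count and mean 137Cs-derived redistribution rate per unit."""
    units = defaultdict(list)
    for land_use, strahler_order, slope_class, rate in points:
        units[(land_use, strahler_order, slope_class)].append(rate)
    return {key: (len(v), sum(v) / len(v)) for key, v in units.items()}
```

In a real GIS workflow the grouping keys would come from overlaying land-use, stream-order and slope rasters; here they are supplied directly for clarity.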
International Nuclear Information System (INIS)
The amount of deformation that a component can successfully endure without incurring failure-inducing damage can be a useful limit in elevated-temperature design. Treatment of experimental creep ductility data by a parametric analysis technique allows the prediction of such limits. Application of such a method to data for various ductility indices for four elevated-temperature structural materials is presented, including a discussion of the results. Materials studied include types 304 and 316 stainless steel, 2 1/4 Cr-1 Mo steel, and Inconel Alloy 718. (10 tables, 16 figures) (U.S.)
Wiegelmann, T; Inhester, B; Tadesse, T; Sun, X; Hoeksema, J T
2012-01-01
The SDO/HMI instruments provide photospheric vector magnetograms with a high spatial and temporal resolution. Our intention is to model the coronal magnetic field above active regions with the help of a nonlinear force-free extrapolation code. Our code is based on an optimization principle, has been tested extensively with semi-analytic and numeric equilibria, and has been applied before to vector magnetograms from Hinode and ground-based observations. Recently we implemented a new version which takes measurement errors in photospheric vector magnetograms into account. Photospheric field measurements are often inconsistent as a boundary for a force-free field in the corona, due to measurement errors and finite nonmagnetic forces. In order to deal with these uncertainties, we developed two improvements: 1.) preprocessing of the surface measurements in order to make them compatible with a force-free field; 2.) a new code that keeps a balance between the force-free constraint and deviation from the photospheric field m...
Energy Technology Data Exchange (ETDEWEB)
Scott, B.R.; Muggenburg, B.A.; Welsh, C.A.; Angerstein, D.A.
1994-11-01
The alpha emitter plutonium-238 ({sup 238}Pu), which is produced in uranium-fueled, light-water reactors, is used as a thermoelectric power source for space applications. Inhalation of a mixed oxide form of Pu is the most likely mode of exposure of workers and the general public. Occupational exposures to {sup 238}PuO{sub 2} have occurred in association with the fabrication of radioisotope thermoelectric generators. Organs and tissue at risk for deterministic and stochastic effects of {sup 238}Pu-alpha irradiation include the lung, liver, skeleton, and lymphatic tissue. Little has been reported about the effects of inhaled {sup 238}PuO{sub 2} on peripheral blood cell counts in humans. The purpose of this study was to investigate hematological responses after a single inhalation exposure of Beagle dogs to alpha-emitting {sup 238}PuO{sub 2} particles and to extrapolate results to humans.
Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming
2015-03-01
The primary objective of this study is to improve deterministic high-resolution forecasts of rainfall caused by severe storms by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model, the Advanced Regional Prediction System (ARPS), for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times over 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected ARPS forecasts was similar to that of the uncorrected forecasts. Moreover, optimally merging the two forecasts with the hyperbolic tangent weight scheme further improved forecast accuracy and stability.
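A hyperbolic tangent weight scheme of the kind described above can be sketched as follows; the crossover lead time t0 and transition width tau are illustrative tuning constants, not values from the study.

```python
import numpy as np

def blend_weight(lead_min, t0=30.0, tau=10.0):
    """Weight given to the NWP forecast as a function of lead time (minutes):
    w -> 0 at short lead times (trust the radar extrapolation) and w -> 1 at
    long lead times (trust the NWP model)."""
    return 0.5 * (1.0 + np.tanh((lead_min - t0) / tau))

def merge(radar_fcst, nwp_fcst, lead_min):
    """Hyperbolic-tangent-weighted blend of the two rainfall forecasts."""
    w = blend_weight(lead_min)
    return (1.0 - w) * radar_fcst + w * nwp_fcst
```

The smooth transition avoids the discontinuity a hard switch between the two forecast sources would introduce at the crossover lead time.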
Extrapolation of Urn Models via Poissonization: Accurate Measurements of the Microbial Unknown
Lladser, Manuel; Reeder, Jens; 10.1371/journal.pone.0021105
2011-01-01
The availability of high-throughput parallel methods for sequencing microbial communities is increasing our knowledge of the microbial world at an unprecedented rate. Though most attention has focused on determining lower bounds on the alpha-diversity, i.e. the total number of different species present in the environment, tight bounds on this quantity may be highly uncertain because a small fraction of the environment could be composed of a vast number of different species. To better assess what remains unknown, we propose instead to predict the fraction of the environment that belongs to unsampled classes. Modeling samples as draws with replacement of colored balls from an urn with an unknown composition, and under the sole assumption that there are still undiscovered species, we show that conditionally unbiased predictors and exact prediction intervals (of constant length in logarithmic scale) are possible for the fraction of the environment that belongs to unsampled classes. Our predictions are based on a P...
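A simpler classical relative of the predictors discussed above is Turing's estimator of the unseen probability mass: the fraction of singletons in the sample. This is a sketch of the general idea, not the paper's conditionally unbiased predictor.

```python
from collections import Counter

def unseen_fraction(sample):
    """Turing's estimator of the probability mass of species not yet
    observed: the number of singleton species divided by the sample size."""
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)
```

Intuitively, species seen exactly once are evidence that more draws would keep turning up new species; a sample with no singletons suggests the community has been mostly covered.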
A Magnetostatic Grad-Rubin Code for Coronal Magnetic Field Extrapolations
Gilchrist, S. A.; Wheatland, M. S.
2013-01-01
The coronal magnetic field cannot be directly observed, but, in principle, it can be reconstructed from the comparatively well observed photospheric magnetic field. A popular approach uses a nonlinear force-free model. Non-magnetic forces at the photosphere are significant, meaning the photospheric data are inconsistent with the force-free model, and this causes problems with the modeling (De Rosa et al., Astrophys. J. 696, 1780, 2009). In this paper we present a numerical implementation of the Grad-Rubin method for reconstructing the coronal magnetic field using a magnetostatic model. This model includes a pressure force and a non-zero magnetic Lorentz force. We demonstrate our implementation on a simple analytic test case and obtain the speed and numerical error scaling as a function of the grid size.
Extrapolating ecological risks of ionizing radiation from individuals to populations to ecosystems
International Nuclear Information System (INIS)
Approaches for protecting ecosystems from ionizing radiation are quite different from those used for protecting ecosystems from adverse effects of toxic chemicals. The methods used for chemicals are conceptually similar to those used to assess risks of chemicals to human health, in that they focus on the protection of the most sensitive or most highly exposed individuals. The assumption is that if sensitive or maximally exposed species and life stages are protected, then ecosystems will be protected. Radiological protection standards, on the other hand, are explicitly premised on the assumption that organisms, populations and ecosystems all possess compensatory capabilities that allow them to survive in the face of unpredictable natural variation in their environments. These capabilities are assumed to persist in the face of at least some exposure to ionizing radiation. The prevailing approach to radiological protection was developed more than 30 years ago, at a time when the terms risk assessment and risk management were rarely used. The expert review approach used to derive radiological protection standards is widely perceived to be inconsistent with the open, participatory approach that prevails today for the regulation of toxic chemicals. The available data for environmental radionuclides vastly exceed those available for any chemical. Therefore, given an understanding of dose-response relationships for radiation effects and exposures for individual organisms, it should be possible to develop methods for quantifying effects of radiation on populations. A tiered assessment scheme, as well as available population models that could be used for the ecological risk assessment of radionuclides, is presented. (author)
Evaluation of extrapolation for creep-fatigue life by hysteresis energy
International Nuclear Information System (INIS)
The creep-fatigue life has been evaluated via the hysteresis energy in 316FR stainless steel with low carbon and medium nitrogen, a candidate structural material for a Fast Breeder Reactor (FBR) plant with a design life of 60 years. Creep-fatigue is a main damage mode to be prevented. The hysteresis energy rate is considered as the parameter to predict the lifetime. It is clear that the relationship between this parameter and the time to failure can be approximately expressed by a power-law function. The function depends on the ratio of plastic strain to total strain. The total fracture energy for creep-fatigue loading tends to be independent of the ratio of plastic strain to total strain in long-term test conditions. The value is related to grain-boundary strength under creep-fatigue loading, because the fracture mode in long-term test conditions is intergranular fracture. The life could be predicted by the function in the case of no significant change of fracture energy. Coarse precipitation, for example of sigma phase, might be considered as a factor that changes the fracture energy. It is important to predict the precipitate formation. The result of life prediction by the hysteresis energy rate is compared with that of the time fraction rule based on the 'Demonstration Reactor Design Standard (Draft)'. The lives predicted by both methods for the long-term region are comparable and independent of the ratio of plastic strain to total strain. (author)
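A power-law relation between hysteresis energy rate and time to failure, of the kind reported above, can be fitted in log-log space; a minimal sketch on synthetic data, with hypothetical variable names.

```python
import numpy as np

def fit_power_law(energy_rate, time_to_failure):
    """Fit t_f = A * rate**(-m) by least squares in log-log space and
    return (A, m): log t_f = log A - m * log rate is linear."""
    slope, intercept = np.polyfit(np.log(energy_rate),
                                  np.log(time_to_failure), 1)
    return float(np.exp(intercept)), float(-slope)

def predict_life(A, m, rate):
    """Predicted time to failure for a given hysteresis energy rate."""
    return A * rate ** (-m)
```

As the abstract notes, in practice the fitted coefficients would depend on the ratio of plastic strain to total strain, so one such fit would be made per strain-ratio regime.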
DEFF Research Database (Denmark)
Hui, Cang; McGeoch, Melodie A.
2009-01-01
The estimation of species abundances at regional scales requires a cost-efficient method that can be applied to existing broadscale data. We compared the performance of eight models for estimating species abundance and community structure from presence-absence maps of the southern African avifauna. Six models were based on the intraspecific occupancy-abundance relationship (OAR); the other two on the scaling pattern of species occupancy (SPO), which quantifies the decline in species range size when measured across progressively finer scales. The performance of these models was examined using five tests: the first three compared the predicted community structure against well-documented macroecological patterns; the final two compared published abundance estimates for rare species and the total regional abundance estimate against predicted abundances. Approximately two billion birds were estimated to occur in South Africa, Lesotho, and Swaziland. SPO models outperformed the OAR models, because OAR models assume environmental homogeneity and yield scale-dependent estimates. Therefore, OAR models should only be applied across small, homogeneous areas. By contrast, SPO models are suitable for data at larger spatial scales because they are based on the scale dependence of species range size and incorporate environmental heterogeneity (assuming fractal habitat structure or performing a Bayesian estimate of occupancy). Therefore, SPO models are recommended for assemblage-scale regional abundance estimation based on spatially explicit presence-absence data.
International Nuclear Information System (INIS)
SCK-CEN is studying the disposal of high-level and long-lived medium-level waste in the Boom Clay at Mol, Belgium. In the performance assessment for such a repository, time extrapolation is an inherent problem due to the extremely long half-life of some important radionuclides. To increase confidence in these time extrapolations, SCK-CEN applies a combination of different experimental and modelling approaches, including laboratory and in situ experiments, natural analogue studies, deterministic (or mechanistic) models and stochastic models. An overview of these approaches is given, and some examples of applications to the different repository system components are presented. (author)
Energy Technology Data Exchange (ETDEWEB)
Bastos, Fernanda Martins
2015-04-01
In laboratories involved in Radiological Protection practices, it is usual to use reference radiations for calibrating dosimeters and studying their response in terms of energy dependence. The International Organization for Standardization (ISO) established four series of reference X-ray beams in the ISO 4037 standard: the L and H series, with low and high air-kerma rates, respectively, the N series of narrow spectra and the W series of wide spectra. X-ray beams with tube potentials below 30 kV, called 'low-energy beams', are in most cases critical as regards the determination of their characterization parameters, such as the half-value layer. Extrapolation chambers are parallel-plate ionization chambers with one mobile electrode that allows variation of the air volume in their interior. These detectors are commonly used to measure the quantity absorbed dose, mostly at the surface of a medium, based on the extrapolation of the linear ionization current as a function of the distance between the electrodes. In this work, a characterization of a model 23392 PTW extrapolation chamber was performed in low-energy X-ray beams of the ISO 4037 standard, by determining the polarization voltage range through the saturation curves and the value of the true null electrode spacing. In addition, the metrological reliability of the extrapolation chamber was studied with measurements of the leakage current and repeatability tests; limit values were established for the proper use of the chamber. The PTW 23392 extrapolation chamber was calibrated in terms of air kerma in some of the low-energy ISO radiation series; the traceability of the chamber to the National Standard Dosimeter was established. The energy dependence of the extrapolation chamber and the uncertainties related to the calibration coefficient were also assessed; it was shown that the energy dependence was reduced to 4% when the extrapolation technique was used.
Finally, the first half-value layers were determined for the low-energy ISO N series with the extrapolation chamber, in collimated and uncollimated beams, and it was shown that this detector is suitable for such measurements. (author)
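The extrapolation-chamber measurement rests on a linear fit of ionization current against electrode spacing, with the nominal spacing corrected by the true null electrode spacing; a sketch of that fit, with illustrative units and names.

```python
import numpy as np

def chamber_slope(nominal_spacing_mm, current_pA, null_offset_mm=0.0):
    """Fit I(d) = I0 + k*d of ionization current against effective electrode
    spacing (nominal spacing minus the true null electrode spacing).
    The slope k is the quantity the surface-dose estimate is built from."""
    d = np.asarray(nominal_spacing_mm) - null_offset_mm
    k, i0 = np.polyfit(d, np.asarray(current_pA), 1)
    return k, i0
```

Determining the null offset matters because a systematic spacing error shifts the intercept and, for a nonlinear response, biases the extrapolated slope.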
International Nuclear Information System (INIS)
A preliminary result of the calculation of the Extrapolated-to-Zero-Mesh-Size Solution (EZMSS) for the second AER kinetic benchmark is presented. The calculation has been made with the code MAG. The standard MCFD (mesh-centered finite difference) approximation in ?-Z geometry has been used for the space approximation. The mesh refinement technique with h^2-extrapolation has been used to calculate the EZMSS and to evaluate its accuracy. The preliminary result shows a significant difference from all solutions known to the author that were generated earlier by nodal codes, in particular from the DYN3D reference solution. The only exception is the BIPR8 solution, which seems to be rather close to the preliminary estimate of the EZMSS. (Authors)
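The h^2-extrapolation used to estimate the zero-mesh-size solution is classical Richardson extrapolation for a second-order scheme; a minimal sketch.

```python
def richardson_h2(f_h, f_half_h):
    """h^2 (Richardson) extrapolation to zero mesh size for a second-order
    scheme: given solutions on meshes h and h/2 with error ~ C*h**2, the
    leading error term cancels in f0 ~= (4*f(h/2) - f(h)) / 3."""
    return (4.0 * f_half_h - f_h) / 3.0
```

The difference between the extrapolated value and the fine-mesh solution also gives an a posteriori estimate of the fine-mesh discretization error.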
Grafting of HEMA onto dopamine-coated stainless steel by the 60Co-γ irradiation method
Jin, Wanqin; Yang, Liming; Yang, Wei; Chen, Bin; Chen, Jie
2014-12-01
A novel method for grafting 2-hydroxyethyl methacrylate (HEMA) onto the surface of stainless steel (SS) was explored using 60Co-γ irradiation. The surface of the SS was modified by a coating of dopamine before radiation grafting. The grafting reaction was performed under simultaneous irradiation conditions. Changes in the chemical structure of the surface before and after grafting were demonstrated by Fourier transform infrared (FTIR) spectrometry. The hydrophilicity of the samples was determined by water contact angle measurements, comparing pristine, dopamine-coated and HEMA-grafted stainless steel. The surface morphology of the samples was characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM). The corrosion resistance properties of the samples were evaluated by Tafel polarization curves. The hemocompatibility of the samples was tested by a platelet adhesion assay.
cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals. Wilson, V.S.1,, Korte, J.2, Hartig P. 1, Ankley, G.T.2, Gray, L.E., Jr 1, , and Welch, J.E.1. 1U.S...
Scientific Electronic Library Online (English)
José Francisco dos Reis Sobrinho; Levi de Oliveira Bueno.
2014-04-01
Full Text Available Hot tensile and creep data were obtained for 2.25Cr-1Mo steel, ASTM A387 Gr.22CL2, at temperatures of 500-550-600-650-700 °C. Using the concept of equivalence between hot tensile data and creep data, the results were analyzed according to the methodology based on Kachanov Continuum Damage Mechanics proposed by Penny, which suggests the possibility of using short-time creep data obtained in the laboratory for extrapolation to long operating times corresponding to tens of thousands of hours. The hot tensile data (converted to creep) better define the region where ω=0 and the creep data define the region where ω=1, according to the methodology. Extrapolation to 10,000 h and 100,000 h is performed and the results are compared with results obtained by other extrapolation procedures such as the Larson-Miller and Manson-Haferd methodologies. Extrapolations from ASTM and NIMS Datasheets for 10,000 h and 100,000 h, as well as data from other authors on 2.25Cr-1Mo steel, are used to assess the reliability of the results.
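For comparison, the Larson-Miller methodology mentioned above parameterizes creep-rupture data as LMP = T(C + log10 t); a sketch using the conventional default C = 20, which is not a constant fitted in this paper.

```python
import math

def larson_miller(T_kelvin, t_hours, C=20.0):
    """Larson-Miller parameter LMP = T * (C + log10 t)."""
    return T_kelvin * (C + math.log10(t_hours))

def rupture_time(T_kelvin, lmp, C=20.0):
    """Invert the parameter: rupture time (hours) at temperature T for a
    given LMP, e.g. one obtained from a short-time test at a higher
    temperature."""
    return 10.0 ** (lmp / T_kelvin - C)
```

The extrapolation step is exactly this inversion: a short, hot test fixes the LMP for a given stress, and the rupture time at the (cooler) service temperature follows.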
Directory of Open Access Journals (Sweden)
B. Deutsch
2010-04-01
Full Text Available Rates of denitrification in sediments were measured with the isotope pairing technique at different sites in the southern and central Baltic Sea. They varied between 0.5 µmol m^{-2} h^{-1} in sands and 28.7 µmol m^{-2} h^{-1} in muddy sediments and showed a good correlation to the organic carbon contents of the surface sediments. N-removal rates via sedimentary denitrification were estimated for the entire Baltic Sea by calculating sediment-specific denitrification rates and interpolating them to the whole Baltic Sea area. Another approach used the relationship between the organic carbon content and the rate of denitrification. For the entire Baltic Sea, the N-removal by denitrification in sediments varied between 426–652 kt N a^{-1}, which is around 48–73% of the external N inputs delivered via rivers, coastal point sources and atmospheric deposition. Moreover, an expansion of the anoxic bottom areas was considered under the assumption of a rising oxycline from 100 to 80 m water depth. This leads to an increase of the area with anoxic conditions and an overall decrease in sedimentary denitrification by 14%. Overall we show here that this type of data extrapolation is a powerful tool to estimate the nitrogen losses for a whole coastal sea and may be applicable to other coastal regions and enclosed seas, too.
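The area-weighted upscaling behind a basin-wide estimate of this kind is plain unit arithmetic; a sketch (the example rate and area in the test are round illustrative numbers, not the paper's site data).

```python
import numpy as np

def total_n_removal(rates_umol_m2_h, areas_m2):
    """Upscale site-specific denitrification rates to a sea-wide N-removal
    budget by area weighting, converting umol N m^-2 h^-1 to kt N a^-1."""
    N_G_PER_MOL = 14.0067          # molar mass of N in g/mol
    HOURS_PER_YEAR = 8760.0
    mol_per_year = np.sum(np.asarray(rates_umol_m2_h) * 1e-6
                          * np.asarray(areas_m2)) * HOURS_PER_YEAR
    return mol_per_year * N_G_PER_MOL / 1e9   # grams -> kilotonnes
```

A single mid-range rate of about 10 µmol m^-2 h^-1 applied to the whole Baltic area (roughly 4.15e11 m^2) lands near 500 kt N a^-1, inside the 426-652 kt N a^-1 range quoted above, which is a useful sanity check on the units.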
Energy Technology Data Exchange (ETDEWEB)
Maingi, R [PPPL
2014-07-01
Large edge-localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma-facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. Two strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R&D. In addition, recent progress in ELM-free regimes, namely Quiescent H-mode, I-mode, and Enhanced Pedestal H-mode, is reviewed, and open questions of extrapolability are discussed. Finally, progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small-ELM regimes.
Tassis, Konstantinos; Pavlidou, Vasiliki
2015-07-01
Recent Planck results have shown that radiation from the cosmic microwave background passes through foregrounds in which aligned dust grains produce polarized dust emission, even in regions of the sky with the lowest level of dust emission. One of the most commonly used ways to remove the dust foreground is to extrapolate the polarized dust emission signal from frequencies where it dominates (e.g. ˜350 GHz) to frequencies commonly targeted by cosmic microwave background experiments (e.g. ˜150 GHz). In this Letter, we describe an interstellar medium effect that can lead to decorrelation of the dust emission polarization pattern between different frequencies due to multiple contributions along the line of sight. Using a simple 2-cloud model, we show that there are two conditions under which this decorrelation can be large: (a) the ratio of polarized intensities between the two clouds changes between the two frequencies; (b) the magnetic fields of the two clouds contributing along a line of sight are significantly misaligned. In such cases, the 350 GHz polarized sky map is not predictive of that at 150 GHz. We propose a possible correction for this effect, using information from optopolarimetric surveys of dichroically absorbed starlight.
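The two-cloud decorrelation mechanism can be illustrated by adding the clouds' polarized emission as Stokes parameters; a minimal sketch with hypothetical inputs.

```python
import numpy as np

def stokes_sum(p1, psi1, p2, psi2):
    """Add the polarized emission of two clouds along one sightline as
    Stokes parameters (Q = p*cos 2psi, U = p*sin 2psi); returns the net
    polarized intensity and the net polarization angle (radians)."""
    q = p1 * np.cos(2.0 * psi1) + p2 * np.cos(2.0 * psi2)
    u = p1 * np.sin(2.0 * psi1) + p2 * np.sin(2.0 * psi2)
    return np.hypot(q, u), 0.5 * np.arctan2(u, q)
```

If the intensity ratio p1/p2 differs between 350 and 150 GHz and psi1 differs from psi2, the net angle returned here differs between the two frequencies, which is precisely the decorrelation the Letter describes.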
The 16O(p,γ)17F direct capture cross section with an extrapolation to astrophysical energies
International Nuclear Information System (INIS)
The cross section for the direct radiative capture of protons by 16O has been measured relative to the proton elastic scattering cross section for energies from 800 to 2400 keV (CM). The elastic scattering cross section was normalized to the Rutherford scattering cross section at 385.5 keV. The capture cross section for the reaction 16O(p,γ)17F, which plays a role in hydrogen-burning stars, has been extrapolated to stellar energies using a theoretical model which gives a good fit to the measured cross sections. The model involves calculation of electromagnetic matrix elements between initial and final state wave functions evaluated for Saxon-Woods potentials with parameters adjusted to fit both elastic scattering data and binding energies for the ground and first excited states of 17F. Cross sections for capture to the 5/2+ ground and 1/2+ first excited states of 17F, in terms of astrophysical S factors valid for energies less than or equal to 100 keV, have been found. (author)
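The astrophysical S factor used in such extrapolations removes the rapidly varying Coulomb-barrier penetration from the cross section, S(E) = σ(E) E exp(2πη); a sketch with the non-relativistic Sommerfeld parameter (constants rounded, function names our own).

```python
import math

ALPHA = 1.0 / 137.035999   # fine-structure constant
AMU_MEV = 931.494          # atomic mass unit in MeV/c^2

def sommerfeld_2pi_eta(z1, z2, a1, a2, e_cm_mev):
    """2*pi*eta for non-relativistic charged particles, with
    eta = Z1*Z2*alpha*sqrt(mu*c^2 / (2E)) and mu the reduced mass."""
    mu = a1 * a2 / (a1 + a2) * AMU_MEV
    return 2.0 * math.pi * z1 * z2 * ALPHA * math.sqrt(mu / (2.0 * e_cm_mev))

def s_factor(sigma_barn, z1, z2, a1, a2, e_cm_mev):
    """Astrophysical S factor S(E) = sigma(E) * E * exp(2*pi*eta),
    here in MeV*barn when sigma is in barn and E in MeV (CM)."""
    return sigma_barn * e_cm_mev * math.exp(
        sommerfeld_2pi_eta(z1, z2, a1, a2, e_cm_mev))
```

Because S(E) varies slowly compared with σ(E), extrapolating S down to stellar energies (≤ 100 keV here) is far better conditioned than extrapolating the cross section itself.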
Campbell, Jerry L; Yoon, Miyoung; Clewell, Harvey J
2015-06-01
Parabens have been reported as potential endocrine disrupters and are widely used in consumer products including cosmetics, foods and pharmaceuticals. We report on the development of a PBPK model for methyl-, propyl-, and butylparaben. The model was parameterized through a combination of QSAR for tissue solubility and quantitative in vitro to in vivo extrapolation (IVIVE) for hydrolysis in portals of entry including intestine and skin as well as in the primary site of metabolism, the liver. Overall, the model provided very good agreement with published time-course data in blood and urine from controlled dosing studies in rat and human, and demonstrates the potential value of quantitative IVIVE in expanding the use of human biomonitoring data in safety assessment. An in vitro based cumulative margin of safety (MOS) was calculated by comparing the effective concentrations from an in vitro assay of estrogenicity to the free paraben concentrations predicted by the model to be associated with the 95th percentile urine concentrations reported in NHANES (2009-2010 collection period). The calculated MOS for adult females was 108, whereas the MOS for males was 444. PMID:25839974
International Nuclear Information System (INIS)
The distributions of approximately 7000 flares of importance >= 1 were plotted relative to the sector-structure boundaries of the interplanetary magnetic field, (+-) and (-+), extrapolated to the Sun. Data obtained for the time period July 1955 - December 1961 were used. The distributions obtained were analyzed jointly with the same distributions for 1964-1974. It is shown that a stable concentration of the flares is observed only near the (-+) boundaries, for both hemispheres of the Sun, during the rise of activity and near the maxima of cycles Nos. 19 and 20. There is no difference between ''Hale'' and ''non-Hale'' boundaries for these flares. A decrease of the flares was revealed even near boundaries of the (+-) type. In the activity decrease phase, after the inversion of the Sun's general field polarity, the concentration of the flares toward the boundaries is absent. A difference between Hale and non-Hale boundaries is revealed only in some increase of the flare concentration near the Hale boundaries. The results obtained are likely to give additional evidence in favour of a connection between the solar magnetic field and flare activity
International Nuclear Information System (INIS)
Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. Two strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R and D. In addition, recent progress in ELM-free regimes, namely quiescent H-mode, I-mode, and enhanced pedestal H-mode, is reviewed, and open questions for extrapolability are discussed. Finally, progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes. (paper)
Explanation, Extrapolation, and Existence
Yablo, Stephen
2012-01-01
Mark Colyvan (2010) raises two problems for ‘easy road’ nominalism about mathematical objects. The first is that a theory’s mathematical commitments may run too deep to permit the extraction of nominalistic content. Taking the math out is, or could be, like taking the hobbits out of Lord of the Rings. I agree with the ‘could be’, but not (or not yet) the ‘is’. A notion of logical subtraction is developed that supports the possibility, questioned by Colyvan, of bracketing a theor...
Mathematical methods for physical and analytical chemistry
Goodson, David Z
2011-01-01
Mathematical Methods for Physical and Analytical Chemistry presents mathematical and statistical methods to students of chemistry at the intermediate, post-calculus level. The content includes a review of general calculus; a review of numerical techniques often omitted from calculus courses, such as cubic splines and Newton's method; a detailed treatment of statistical methods for experimental data analysis; complex numbers; extrapolation; linear algebra; and differential equations. With numerous example problems and helpful anecdotes, this text gives chemistry students the mathematical
International Nuclear Information System (INIS)
Testing was performed to determine if gravel particles will creep into and puncture the high-density polyethylene (HDPE) liner in the catch basin of a grout vault over a nominal 30-year period. Testing was performed to support a design without a protective geotextile cover after the geotextile was removed from the design. Recently, a protective geotextile cover over the liner was put back into the design. The data indicate that the geotextile has an insignificant effect on the creep of gravel into the liner. However, the geotextile may help to protect the liner during construction. Two types of tests were performed to evaluate the potential for creep-related puncture. In the first type of test, a very sensitive instrument measured the rate at which a probe crept into HDPE over a 20-minute period at temperatures of 176 degrees F to 212 degrees F (80 degrees C to 100 degrees C). The second type of test consisted of placing the liner between gravel and mortar at 194 degrees F (90 degrees C) and 45.1 psi overburden pressure for periods up to 1 year. By combining data from the two tests, the long-term behavior of the creep was extrapolated to 30 years of service. After 30 years of service, the liner will be in a nearly steady condition and further creep will be extremely small. The results indicate that the creep of gravel into the liner will not create a puncture during service at 194 degrees F (90 degrees C). The estimated creep over 30 years is expected to be less than 25 mils out of the total initial thickness of 60 mils. The test temperature of 194 degrees F (90 degrees C) corresponds to the design basis temperature of the vault. Lower temperatures are expected at the liner, which makes the test conservative. Only the potential for failure of the liner resulting from creep of gravel is addressed in this report
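The 30-year projection above combines short-term creep measurements into a long-term estimate. A minimal sketch of one common approach is a power-law fit in log-log space; the data points, function names, and fitted form below are hypothetical illustrations, not the report's actual measurements or method.

```python
import math

def fit_power_law(times_h, depths_mil):
    # Least-squares fit of depth = a * t**n, done as a straight line
    # in log-log space: ln(depth) = ln(a) + n * ln(t).
    xs = [math.log(t) for t in times_h]
    ys = [math.log(d) for d in depths_mil]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    n = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - n * xbar)
    return a, n

# Hypothetical 1-year test series: elapsed hours vs. creep depth in mils.
times = [100.0, 1000.0, 8760.0]
depths = [4.0, 6.9, 11.2]
a, n = fit_power_law(times, depths)
depth_30yr = a * (30 * 8760.0) ** n  # extrapolated depth after 30 years
```

A fitted exponent n well below 1 corresponds to decelerating creep, consistent with the report's conclusion that the liner approaches a nearly steady condition after 30 years.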
Teeguarden, Justin G; Barton, Hugh A
2004-06-01
One measure of the potency of compounds that lead to effects through ligand-dependent gene transcription is their relative affinity for the critical receptor. Endocrine active compounds that are presumed to act principally through binding to the estrogen receptor (e.g., estradiol, genistein, bisphenol A, and octylphenol) comprise one class of such compounds. For making simple comparisons, receptor-binding affinity has been equated with in vivo potency, which consequently defines the dose-response characteristics for the compound. Direct extrapolation of in vitro estimated affinities to the corresponding in vivo system and to specific species or life stages (e.g., neonatal, pregnancy) can be misleading. Accurate comparison of the potency of endocrine active compounds requires characterization of the biochemical and pharmacokinetic factors that affect their free concentration. Quantitative in vitro and in vivo models were developed for integrating pharmacokinetic factors (e.g., serum protein and receptor-binding affinities, clearance) that affect potency. Data for parameterizing these models for several estrogenic compounds were evaluated and the models exercised. While simulations of adult human or rat sera were generally successful, difficulties in describing early life stages were identified. Exogenous compounds were predicted to be largely ineffective at competing estradiol off serum-binding proteins, suggesting this was unlikely to be physiologically significant. Discrepancies were identified between relative potencies based upon modeling in vitro receptor-binding activity versus in vivo activity in the presence of clearance and serum-binding proteins. The examples illustrate the utility of this approach for integrating available experimental data from in vitro and in vivo studies to estimate the relative potency of these compounds. PMID:15209943
Cohen, Martin; Witteborn, Fred C.; Carbon, Duane F.; Davies, John K.; Wooden, Diane H.; Bregman, Jesse D.
1996-01-01
We present five new absolutely calibrated continuous stellar spectra constructed as far as possible from spectral fragments observed from the ground, the Kuiper Airborne Observatory (KAO), and the IRAS Low Resolution Spectrometer. These stars (alpha Boo, gamma Dra, alpha Cet, gamma Cru, and mu UMa) augment our six published, absolutely calibrated spectra of K and early-M giants. All spectra have a common calibration pedigree. A revised composite for alpha Boo has been constructed from higher quality spectral fragments than our previously published one. The spectrum of gamma Dra was created in direct response to the needs of instruments aboard the Infrared Space Observatory (ISO); this star's location near the north ecliptic pole renders it highly visible throughout the mission. We compare all our low-resolution composite spectra with Kurucz model atmospheres and find good agreement in shape, with the obvious exception of the SiO fundamental, still lacking in current grids of model atmospheres. The CO fundamental seems slightly too deep in these models, but this could reflect our use of generic models with solar metal abundances rather than models specific to the metallicities of the individual stars. Angular diameters derived from these spectra and models are in excellent agreement with the best observed diameters. The ratio of our adopted Sirius and Vega models is vindicated by spectral observations. We compare IRAS fluxes predicted from our cool stellar spectra with those observed and conclude that, at 12 and 25 microns, flux densities measured by IRAS should be revised downwards by about 4.1% and 5.7%, respectively, for consistency with our absolute calibration. We have provided extrapolated continuum versions of these spectra to 300 microns, in direct support of ISO (PHT and LWS instruments). These spectra are consistent with IRAS flux densities at 60 and 100 microns.
Energy Technology Data Exchange (ETDEWEB)
Schabel, C.; Bongers, M.; Grosse, U.; Mangold, S.; Claussen, C.D.; Thomas, C. [University Hospital of Tuebingen (Germany). Dept. of Diagnostic and Interventional Radiology; Sedlmair, M. [Siemens AG, Forchheim (Germany). Healthcare; Korn, A. [University Hospital of Tuebingen (Germany). Dept. of Diagnostic and Interventional Neuroradiology
2014-06-15
Purpose: To evaluate a novel monoenergetic post-processing algorithm (MEI+) in patients with poor intrahepatic contrast enhancement. Materials and Methods: 25 patients were retrospectively included in this study. Late-phase imaging of the upper abdomen, which was acquired in dual-energy mode (100/140 kV), was used as a model for poor intrahepatic contrast enhancement. Traditional monoenergetic images (MEI), linearly weighted mixed images with different mixing ratios (MI), sole 100 and 140 kV and MEI+ images were calculated. MEI+ is a novel technique which applies frequency-based mixing of the low keV images and an image of optimal keV from a noise perspective to combine the benefits of both image stacks. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the intrahepatic vasculature (IHV) and liver parenchyma (LP) were objectively measured and depiction of IHV was subjectively rated and correlated with portal venous imaging by two readers in consensus. Results: MEI+ was able to increase the SNR of the IHV (5.7 ± 0.4 at 40 keV) and LP (4.9 ± 1.0 at 90 keV) and CNR (2.1 ± 0.6 at 40 keV) greatly compared to MEI (5.1 ± 1.1 at 80 keV, 4.7 ± 1.0 at 80 keV, 1.0 ± 0.4 at 70 keV), MI (5.2 ± 1.1 M5:5, 4.8 ± 1.0 M5:5, 1.0 ± 3.5 M9:1), sole 100 kV images (4.4 ± 1.0, 3.7 ± 0.8, 1.0 ± 0.3) and 140 kV images (2.8 ± 0.5, 3.1 ± 0.6, 0.1 ± 0.2). Subjective assessment rated MEI+ of virtual 40 keV superior to all other images. Conclusion: MEI+ is a very promising algorithm for monoenergetic extrapolation which is able to overcome noise limitations associated with traditional monoenergetic techniques at low virtual keV levels and consequently does not suffer from a decline of SNR and CNR at low keV values. This algorithm allows an improvement of IHV depiction in the presence of poor contrast. (orig.)
Kujur, Alice; Bajaj, Deepak; Upadhyaya, Hari D.; Das, Shouvik; Ranjan, Rajeev; Shree, Tanima; Saxena, Maneesha S.; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C. L. L.; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K.; Parida, Swarup K.
2015-01-01
The genome-wide discovery and high-throughput genotyping of SNPs in chickpea natural germplasm lines is indispensable to extrapolate their natural allelic diversity, domestication, and linkage disequilibrium (LD) patterns leading to the genetic enhancement of this vital legume crop. We discovered 44,844 high-quality SNPs by sequencing of 93 diverse cultivated desi, kabuli, and wild chickpea accessions using reference genome- and de novo-based GBS (genotyping-by-sequencing) assays that were physically mapped across eight chromosomes of desi and kabuli. Of these, 22,542 SNPs were structurally annotated in different coding and non-coding sequence components of genes. Genes with 3296 non-synonymous and 269 regulatory SNPs could functionally differentiate accessions based on their contrasting agronomic traits. A high experimental validation success rate (92%) and reproducibility (100%) along with strong sensitivity (93–96%) and specificity (99%) of GBS-based SNPs was observed. This infers the robustness of GBS as a high-throughput assay for rapid large-scale mining and genotyping of genome-wide SNPs in chickpea with sub-optimal use of resources. With 23,798 genome-wide SNPs, a relatively high intra-specific polymorphic potential (49.5%) and broader molecular diversity (13–89%)/functional allelic diversity (18–77%) was apparent among 93 chickpea accessions, suggesting their tremendous applicability in rapid selection of desirable diverse accessions/inter-specific hybrids in chickpea crossbred varietal improvement program. The genome-wide SNPs revealed complex admixed domestication pattern, extensive LD estimates (0.54–0.68) and extended LD decay (400–500 kb) in a structured population inclusive of 93 accessions. 
These findings reflect the utility of our identified SNPs for subsequent genome-wide association study (GWAS) and selective sweep-based domestication trait dissection analysis to identify potential genomic loci (gene-associated targets) specifically regulating important complex quantitative agronomic traits in chickpea. The numerous informative genome-wide SNPs, the natural allelic diversity-led domestication pattern, and the LD-based information generated in our study have multidimensional applicability for chickpea genomics-assisted breeding. PMID:25873920
Kujur, Alice; Bajaj, Deepak; Upadhyaya, Hari D; Das, Shouvik; Ranjan, Rajeev; Shree, Tanima; Saxena, Maneesha S; Badoni, Saurabh; Kumar, Vinod; Tripathi, Shailesh; Gowda, C L L; Sharma, Shivali; Singh, Sube; Tyagi, Akhilesh K; Parida, Swarup K
2015-01-01
The genome-wide discovery and high-throughput genotyping of SNPs in chickpea natural germplasm lines is indispensable to extrapolate their natural allelic diversity, domestication, and linkage disequilibrium (LD) patterns leading to the genetic enhancement of this vital legume crop. We discovered 44,844 high-quality SNPs by sequencing of 93 diverse cultivated desi, kabuli, and wild chickpea accessions using reference genome- and de novo-based GBS (genotyping-by-sequencing) assays that were physically mapped across eight chromosomes of desi and kabuli. Of these, 22,542 SNPs were structurally annotated in different coding and non-coding sequence components of genes. Genes with 3296 non-synonymous and 269 regulatory SNPs could functionally differentiate accessions based on their contrasting agronomic traits. A high experimental validation success rate (92%) and reproducibility (100%) along with strong sensitivity (93-96%) and specificity (99%) of GBS-based SNPs was observed. This infers the robustness of GBS as a high-throughput assay for rapid large-scale mining and genotyping of genome-wide SNPs in chickpea with sub-optimal use of resources. With 23,798 genome-wide SNPs, a relatively high intra-specific polymorphic potential (49.5%) and broader molecular diversity (13-89%)/functional allelic diversity (18-77%) was apparent among 93 chickpea accessions, suggesting their tremendous applicability in rapid selection of desirable diverse accessions/inter-specific hybrids in chickpea crossbred varietal improvement program. The genome-wide SNPs revealed complex admixed domestication pattern, extensive LD estimates (0.54-0.68) and extended LD decay (400-500 kb) in a structured population inclusive of 93 accessions. 
These findings reflect the utility of our identified SNPs for subsequent genome-wide association study (GWAS) and selective sweep-based domestication trait dissection analysis to identify potential genomic loci (gene-associated targets) specifically regulating important complex quantitative agronomic traits in chickpea. The numerous informative genome-wide SNPs, the natural allelic diversity-led domestication pattern, and the LD-based information generated in our study have multidimensional applicability for chickpea genomics-assisted breeding. PMID:25873920
Jiang, Chaowei; Feng, Xueshang; Hu, Qiang
2014-01-01
Solar filaments are commonly thought to be supported in magnetic dips, in particular of magnetic flux ropes (FRs). In this Letter, from the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is the first of its kind, in that current NLFFF extrapolations with the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and lie along the main polarity inversion line (PIL) with strong transverse field and magnetic shear, where the existence of a FR is usually predictable. In contrast, the present filament lies along a weak-field region (photospheric field strength $\\lesssim 100$ G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus it represents a far more difficult challenge to extrapolate a large-sc...
Scientific Electronic Library Online (English)
Pradeep, Kumar; A. Nityananda, Shetty.
2013-01-01
Full Text Available The corrosion behaviour of welded maraging steel in hydrochloric acid solutions was studied over a range of acid concentrations and solution temperatures by electrochemical techniques such as the Tafel extrapolation method and electrochemical impedance spectroscopy. The corrosion rate of welded maraging steel increases with the increase in temperature and concentration of hydrochloric acid in the medium. The energies of activation, enthalpy of activation and entropy of activation for the corrosion process were calculated. The surface morphology of the corroded sample was evaluated by surface examination using scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS).
Corrosion inhibition of brass by aliphatic amines
International Nuclear Information System (INIS)
Aliphatic amines hexylamine (HCA), octylamine (OCA) and decylamine (DCA) have been used as corrosion inhibitors for (70/30) brass in 0.1 M HClO4. The inhibitor efficiency (%P), calculated using weight loss, Tafel extrapolation, linear polarization and impedance methods, was found to be in the order DCA > OCA > HCA. These amines adsorb on the brass surface following the Bockris-Swinkels isotherm. DCA, OCA and HCA displaced 4, 3 and 2 molecules of water from the interface, respectively. Displacement of water molecules brought about a great reorganization of the double layer at the interface. During corrosion, these amines form complexes with dissolved zinc and copper ions. (Author)
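Both corrosion records above rely on the Tafel extrapolation method, in which the linear (Tafel) regions of the anodic and cathodic polarization curves, plotted as log10(current density) versus potential, are extrapolated to their intersection to estimate the corrosion potential Ecorr and corrosion current density icorr. A hedged, self-contained sketch with synthetic data (the numbers are invented, not measurements from either study):

```python
def linfit(xs, ys):
    # Ordinary least-squares line y = m*x + b.
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
    return m, ybar - m * xbar

def tafel_extrapolate(e_anodic, logi_anodic, e_cathodic, logi_cathodic):
    # Fit each Tafel branch as log10(i) = m*E + b, then intersect the
    # two lines; the intersection gives (Ecorr, icorr).
    m1, b1 = linfit(e_anodic, logi_anodic)
    m2, b2 = linfit(e_cathodic, logi_cathodic)
    e_corr = (b2 - b1) / (m1 - m2)
    return e_corr, 10.0 ** (m1 * e_corr + b1)

# Synthetic branches for Ecorr = -0.25 V, icorr = 1e-6 A/cm^2,
# anodic slope 60 mV/decade, cathodic slope 120 mV/decade.
e_an = [-0.19, -0.16, -0.13]
logi_an = [-6.0 + (e + 0.25) / 0.06 for e in e_an]
e_cat = [-0.31, -0.34, -0.37]
logi_cat = [-6.0 - (e + 0.25) / 0.12 for e in e_cat]
e_corr, i_corr = tafel_extrapolate(e_an, logi_an, e_cat, logi_cat)
```

Given icorr with and without inhibitor, the inhibitor efficiency quoted in such studies is then %P = 100 * (icorr_blank - icorr_inhibited) / icorr_blank.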
International Nuclear Information System (INIS)
In this paper, we propose a fast numerical scheme to estimate Partition Functions (PF) of symmetric Potts fields. Our strategy is first validated on 2D two-color Potts fields and then on 3D two- and three-color Potts fields. It is then applied to the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated, deactivated and inactivated brain regions and to estimate region dependent hemodynamic filters. For any brain region, a specific 3D Potts field indeed embodies the spatial correlation over the hidden states of the voxels by modeling whether they are activated, deactivated or inactive. To make spatial regularization adaptive, the PFs of the Potts fields over all brain regions are computed prior to the brain activity estimation. Our approach is first based upon a classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, we propose an extrapolation method that allows us to approximate the PFs associated to the Potts fields defined over the remaining brain regions. In comparison with preexisting methods either based on a path sampling strategy or mean-field approximations, our contribution strongly alleviates the computational cost and makes spatially adaptive regularization of whole brain fMRI datasets feasible. It is also robust against grid inhomogeneities and efficient irrespective of the topological configurations of the brain regions. (authors)
The proposed paradigm for 'Toxicity Testing in the 21st Century' supports the development of mechanistically based, high-throughput in vitro assays as a potential cost-effective and scientifically sound alternative to some whole-animal hazard testing. To accomplish this long-term...
Scholze, Martin; Silva, Elisabete; Kortenkamp, Andreas
2014-01-01
Dose addition, a commonly used concept in toxicology for the prediction of chemical mixture effects, cannot readily be applied to mixtures of partial agonists with differing maximal effects. Due to its mathematical features, effect levels that exceed the maximal effect of the least efficacious compound present in the mixture, cannot be calculated. This poses problems when dealing with mixtures likely to be encountered in realistic assessment situations where chemicals often show differing max...
Scientific Electronic Library Online (English)
Pedro, Höfig; Elvio, Giasson; Pedro Rodolfo Siqueira, Vendrame.
2014-12-01
Full Text Available O objetivo deste trabalho foi testar metodologias de mapeamento digital de solos (MDS) e avaliar a possibilidade de extrapolação de mapas entre áreas fisiograficamente semelhantes. A área de referência para o treinamento do modelo localizou-se no Município de Sentinela do Sul, RS, e a extrapolação foi feita para o Município Cerro Grande do Sul, RS. Desenvolveram-se pelo MDS modelos com o uso de variáveis ambientais, como preditoras, e as classes de solos - obtidas de um levantamento convencional na escala 1:50.000 - como variáveis dependentes. Testou-se o uso combinado de dois modelos de árvore de decisão (AD), treinados em duas paisagens com diferentes classes de drenagem. Para Sentinela do Sul, a concordância dos mapas preditos com os produzidos pelo levantamento convencional foi avaliada por matrizes de erro. Como a importância dos erros de mapeamento é variável, criou-se uma matriz ponderada, para atribuir diferentes importâncias aos erros específicos de mapeamento entre as distintas unidades de mapeamento. A acurácia do mapa de Cerro Grande do Sul foi avaliada pela verdade de campo. A extrapolação dos mapas gera resultados satisfatórios, com acurácia maior do que 75%. O uso de modelos com duas AD separadas por paisagens homogêneas gera mapas extrapolados com maior acurácia, avaliada pela verdade de campo. Abstract in english The objective of this work was to test methodologies for digital soil mapping (DSM) and to evaluate the possibility of map extrapolation between physiographically similar areas. The reference area for model training was located at the municipality of Sentinela do Sul, in the state of Rio Grande do Sul (RS), Brazil, and the extrapolation was done for the municipality of Cerro Grande do Sul, RS. Models were developed by DSM using environmental variables as predictors, and soil classes - obtained from a conventional soil survey at 1:50,000 scale - as dependent variables. 
The combined use of two decision trees (DT), trained in two landscapes with different drainage classes, was tested. For Sentinela do Sul, the agreement between the predicted maps with the ones produced by conventional survey was evaluated using error matrices. Since the importance of mapping errors is variable, a weighted error matrix was created to assign different importances to specific mapping errors between different mapping units. Map accuracy of Cerro Grande do Sul was evaluated by ground truth. Map extrapolation yields satisfactory results, with accuracy higher than 75%. The use of models with two DTs divided by homogeneous landscapes generates extrapolated maps with a greater accuracy, evaluated by ground truth.
International Nuclear Information System (INIS)
The recent demonstration that transformation of cultured cells can be induced by exposure to DNA fragments prepared from normal mouse tissues provides experimental support to the gene transfer-misrepair hypothesis of radiation carcinogenesis. It is predicted that the proposed mechanism implies a non-linear extrapolation model for the calculation of cancer risks caused by very low doses of ionizing radiation of low LET. It also follows from this hypothesis that X- and gamma-radiation delivered at an extremely low dose rate will be less carcinogenic than at high dose rate, in particular where low total doses are concerned. (author)
Gangodagamage, Chandana; Rowland, Joel C; Hubbard, Susan S; Brumby, Steven P.; Liljedahl, Anna K; Wainwright, Haruko; Wilson, Cathy J; Altmann, Garrett L; Dafflon, Baptiste; Peterson, John; Ulrich, Craig; Tweedie, Craig E; Stan D. Wullschleger
2014-01-01
Landscape attributes that vary with microtopography, such as active layer thickness (ALT), are labor intensive and difficult to document effectively through in situ methods at kilometer spatial extents, thus rendering remotely sensed methods desirable. Spatially explicit estimates of ALT can provide critically needed data for parameterization, initialization, and evaluation of Arctic terrestrial models. In this work, we demonstrate a new approach using high-resolution remotely sensed data for...
Miyaguchi, Takamori; Suemizu, Hiroshi; Shimizu, Makiko; Shida, Satomi; Nishiyama, Sayako; Takano, Ryohji; Murayama, Norie; Yamazaki, Hiroshi
2015-06-01
The aim of this study was to extrapolate to humans the pharmacokinetics of the estrogen analog bisphenol A determined in chimeric mice transplanted with human hepatocytes. Higher plasma concentrations and urinary excretions of bisphenol A glucuronide (a primary metabolite of bisphenol A) were observed in chimeric mice than in control mice after oral administrations, presumably because of enterohepatic circulation of bisphenol A glucuronide in control mice. Bisphenol A glucuronidation was faster in mouse liver microsomes than in human liver microsomes. These findings suggest a predominantly urinary excretion route of bisphenol A glucuronide in chimeric mice with humanized liver. Reported human plasma and urine data for bisphenol A glucuronide after single oral administration of 0.1 mg/kg bisphenol A were reasonably estimated using the current semi-physiological pharmacokinetic model extrapolated from humanized mice data using allometric scaling. The reported geometric mean urinary bisphenol A concentration in the U.S. population of 2.64 µg/L underwent reverse dosimetry modeling with the current human semi-physiological pharmacokinetic model. This yielded an estimated exposure of 0.024 µg/kg/day, which was less than the daily tolerable intake of bisphenol A (50 µg/kg/day), implying little risk to humans. Semi-physiological pharmacokinetic modeling will likely prove useful for determining the species-dependent toxicological risk of bisphenol A. PMID:25805149
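Reverse dosimetry of the kind used above maps a measured biomarker concentration back to an external intake. The study used a semi-physiological pharmacokinetic model; the sketch below is only the crude steady-state mass-balance version (daily intake = urinary concentration x daily urine volume / body weight, assuming essentially complete urinary excretion), with assumed default urine volume and body weight. It therefore gives the same order of magnitude as, but not exactly, the paper's model-based figure.

```python
def steady_state_intake(urine_conc_ug_per_l, urine_vol_l_per_day=1.6,
                        body_weight_kg=70.0, f_urinary=1.0):
    # Crude steady-state reverse dosimetry: at steady state, the daily
    # amount excreted in urine equals the daily intake multiplied by
    # the fraction of the dose excreted in urine (f_urinary).
    daily_excretion_ug = urine_conc_ug_per_l * urine_vol_l_per_day
    return daily_excretion_ug / (f_urinary * body_weight_kg)

# Geometric-mean urinary concentration from the abstract (2.64 ug/L)
# with assumed defaults for urine output and body weight.
intake_ug_per_kg_day = steady_state_intake(2.64)  # ~0.06 ug/kg/day
```

Even this rough estimate sits orders of magnitude below the 50 µg/kg/day tolerable intake cited in the abstract, consistent with the paper's conclusion.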
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi1811 (DIN 1.4948) used as a structural material for the German Fast Breeder Reactor SNR 300 was creep-tested in a temperature range of 550-650 deg C under base material condition as well as welded material condition. The main point of this program (''Extrapolation-Program'') lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at higher temperatures in order to extrapolate up to >= 10^5 hours for operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out from 550 to 750 deg C. The present report describes the state of the running program, with test times of 23,000 hours; results from tests up to 55,000 hours belonging to other parallel programs are also taken into account. Besides the creep-rupture behaviour, a study of ductility between 550 and 750 deg C is also made. Extensive metallographic examinations have been made to study the fracture behaviour and changes in structure. (Author)
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi1811 (DIN 1.4948) that is used as a structural material for the German Fast Breeder Reactor SNR 300 was creep-tested in a temperature range of 550-650 deg C under base material condition as well as welded material condition. The main point of this program (''Extrapolation-Program'') lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at higher temperatures in order to extrapolate up to >= 10^5 hours for operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out over the temperature range of 550-750 deg C. The present report describes the state of the total running program, with test times up to 55,000 hours. Besides the creep-rupture behaviour, it is possible to make a distinct quantitative statement on the creep behaviour and ductility. Extensive metallographic and electron-microscopic examinations show the fracture behaviour and changes in structure. (orig.)
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi1811 (DIN 1.4948) used as a structural material for the German Fast Breeder Reactor SNR 300 was creep-tested in a temperature range of 550-650 deg C under base material condition as well as welded material condition. The main point of this program (''Extrapolation-Program'') lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at higher temperatures in order to extrapolate up to >= 10^5 hours for operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out from 550 to 750 deg C. The present report describes the state of the running program, with test times of 23,000 hours; results from tests up to 55,000 hours belonging to other parallel programs are also taken into account. Besides the creep-rupture behaviour, a study of ductility between 550 and 750 deg C is also made. Extensive metallographic examinations have been made to study the fracture behaviour and changes in structure. (author)
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi1811 (DIN 1.4948), used as a structural material for the German fast breeder reactor SNR 300, was creep-tested in the temperature range 550-650 deg C in both the base-material and welded conditions. The main aim of this program ('Extrapolation-Program') is to establish the creep-rupture strength and creep behaviour up to 3 x 10^4 hours at elevated temperatures in order to extrapolate to >= 10^5 hours at operating temperatures. To study the stress dependence of the minimum creep rate, additional tests were carried out over the temperature range 550-750 deg C. The present report describes the state of the running program, with test times up to 35,000 hours. Besides the creep-rupture behaviour, it is possible to make distinct quantitative statements on creep behaviour and ductility. Extensive metallographic examinations show the fracture behaviour and structural changes. (orig.)
Directory of Open Access Journals (Sweden)
Botton R.
2006-12-01
Full Text Available Catalytic fluidized-bed production units appeared around 1942 in the petroleum industry and around 1960 in the chemical industry. This article is restricted to the scale-up of catalytic fluidized beds for the chemical industry, which demand very high performance (> 99% conversion). In the past, their development required the operation, on industrial sites, of costly pilot plants about 0.5 m in diameter and more than 10 m high. We show that these pilots can be avoided and that direct scale-up from the laboratory to the industrial scale is feasible. This possibility also offers a simple method for improving the catalysts of industrial units, and it opens this technique, much appreciated in production, to low-tonnage products. The article is presented in three parts: - The first, 'Studies, Models, Learning from Pilot Plants', presented below, sets out the major problems posed by scale-up and then summarizes the studies carried out. Scale-up work on two processes, performed with pilot plants of about 0.5 m diameter, is then presented as examples. From this work are deduced the performances that can be expected from a catalytic fluidized-bed reactor, as well as the guiding rules to follow to achieve them. - The second part, 'Scale-up Using Only Laboratory Experiments', proposes an experimental strategy for obtaining in the laboratory the information needed to go directly to the industrial scale, with experiments suggested in part by the results set out in the first article. The experimental relations established in these studies show that the properties of a fluidized bed depend (apart, occasionally, from the reactor diameter) on only one parameter, called the 'behavioural minimum fluidization velocity'. - The third part is entitled 'Theoretical Studies, Experimental Reality, Suggestions'.
The bubbles of fluidized beds have been the subject of very many studies, whose results are very often expressed as one-parameter mechanistic models, the parameter being the bubble diameter. To confront these models with experiment, a relation is established between the bubble diameter and the behavioural minimum fluidization velocity. Suggestions are then made for improving the models, and general conclusions on fluidized beds are proposed.
International Nuclear Information System (INIS)
The effect of a sub-sterilizing dose of gamma radiation (125 Gy), alone or combined with different concentrations of tafla (Nerium oleander) leaf extract, on the histology and histochemistry of the larval male reproductive system was studied. The treatment caused histopathological changes in the testes, including necrosis of spermatocytes, retardation of sperm maturation, bursting of sperm bundles, and enlargement of the vacuolated areas resulting from depletion of spermatogonia. Histochemical studies showed that protein and RNA contents were increased while DNA content was decreased in the male gonads.
Tafel, Külliki, 1979-
2008-01-01
Within the Nordic Innovation Centre project "Nordic Model for Creative Industries Development Center", Tartu, Turku and Bergen each drew up a document on developing their city's creative industries. The completed documents are compared.
Tafel, Külliki
2006-01-01
Corporate governance in post-socialist countries - theoretical dilemmas, specific features, and research possibilities. Diagrams: Internal and external relations of corporate governance; The changing context of corporate governance
Energy Technology Data Exchange (ETDEWEB)
Fracassi, G.; Grattieri, W.; Insinga, F.; Malafarina, L.; Mazzoni, M.
1991-12-31
This paper presents a procedure developed by ENEL (Italian National Electricity Board) for the automatic prediction of territorial loads in its national distribution system. The procedure is based on an extrapolation method incorporating annual power consumption historical series, disaggregated by consuming sector, divided by voltage level, and further subdivided by zone. The method used to transform the annual power consumption figures into data suitable as input for distribution network planning studies is analysed. Attention is also focused on the method for assigning the determined loads to each particular network node. The paper indicates how forecast power consumption figures are replaced by actual figures as they become available so as to improve prediction accuracy. The 'final use' technique for planning urban network expansions is discussed. Data representing the evolution of power consumption in major Italian cities are used to give an indication of national trends.
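The abstract does not give ENEL's actual regression form; as a hedged, minimal sketch of extrapolating an annual consumption series, a constant-growth (log-linear) fit might look like the following (sector and figures hypothetical):

```python
import math

def fit_growth(years, consumption):
    """Least-squares fit of log(consumption) = a + b*year (constant growth rate)."""
    n = len(years)
    logs = [math.log(c) for c in consumption]
    my, ml = sum(years) / n, sum(logs) / n
    b = sum((y - my) * (l - ml) for y, l in zip(years, logs)) / \
        sum((y - my) ** 2 for y in years)
    return ml - b * my, b

def extrapolate(a, b, year):
    """Project the fitted series to a future year."""
    return math.exp(a + b * year)

# Hypothetical sectoral series (GWh) growing at a steady 3% per year.
years = [1985, 1986, 1987, 1988, 1989, 1990]
cons = [100 * 1.03 ** (y - 1985) for y in years]
a, b = fit_growth(years, cons)
forecast = extrapolate(a, b, 1995)
```

In practice one such fit would be made per sector, voltage level, and zone, and the forecasts corrected as actual figures arrive.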
Primary standardization of activity using the coincidence method based on analogue instrumentation
International Nuclear Information System (INIS)
Widely implemented at national metrology institutes (NMIs), the coincidence method is a technique to assay a wide variety of radionuclides which decay through two or more types of radiation. Through a survey of the literature, this paper seeks to describe the main aspects of one of the most powerful direct methods available in radionuclide metrology. The basics of coincidence counting and the efficiency extrapolation method are covered. The problem of non-linearities in the extrapolation curve is also considered. The main characteristics of variants to the conventional coincidence instrumentation are presented. (author)
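As background to the efficiency-extrapolation technique mentioned above, a minimal sketch of 4πβ-γ coincidence counting follows, assuming the idealized relations Nβ = N0·εβ, Nγ = N0·εγ, Nc = N0·εβ·εγ and a linear extrapolation function (synthetic rates, not the paper's data):

```python
def activity_estimate(n_beta, n_gamma, n_coinc):
    """Ideal coincidence relation: N0 = Nβ·Nγ/Nc."""
    return n_beta * n_gamma / n_coinc

def efficiency_extrapolation(points):
    """points: (Nβ, Nγ, Nc) triples measured at different β efficiencies.
    Fit Nβ·Nγ/Nc = N0·(1 + k·(1-εβ)/εβ), with εβ = Nc/Nγ, and return the
    intercept N0, i.e. the extrapolation to εβ -> 1."""
    xs = [(ng / nc) - 1.0 for nb, ng, nc in points]      # (1-εβ)/εβ
    ys = [activity_estimate(nb, ng, nc) for nb, ng, nc in points]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return my - slope * mx
```

A non-linear extrapolation curve, as discussed in the paper, would require replacing the straight-line fit with a higher-order model.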
Gangodagamage, Chandana; Rowland, Joel C.; Hubbard, Susan S.; Brumby, Steven P.; Liljedahl, Anna K.; Wainwright, Haruko; Wilson, Cathy J.; Altmann, Garrett L.; Dafflon, Baptiste; Peterson, John; Ulrich, Craig; Tweedie, Craig E.; Wullschleger, Stan D.
2014-08-01
Landscape attributes that vary with microtopography, such as active layer thickness (ALT), are labor intensive and difficult to document effectively through in situ methods at kilometer spatial extents, thus rendering remotely sensed methods desirable. Spatially explicit estimates of ALT can provide critically needed data for parameterization, initialization, and evaluation of Arctic terrestrial models. In this work, we demonstrate a new approach using high-resolution remotely sensed data for estimating centimeter-scale ALT in a 5 km2 area of ice-wedge polygon terrain in Barrow, Alaska. We use a simple regression-based, machine learning data-fusion algorithm that uses topographic and spectral metrics derived from multisensor data (LiDAR and WorldView-2) to estimate ALT (2 m spatial resolution) across the study area. Comparison of the ALT estimates with ground-based measurements indicates the accuracy (r2 = 0.76, RMSE ±4.4 cm) of the approach. While it is generally accepted that broad climatic variability associated with increasing air temperature will govern the regional averages of ALT, our findings using high-resolution LiDAR and WorldView-2 data, consistent with prior studies, show that smaller-scale variability in ALT is controlled by local eco-hydro-geomorphic factors. This work demonstrates a path forward for mapping ALT at high spatial resolution and across sufficiently large regions for improved understanding and predictions of coupled dynamics among permafrost, hydrology, and land-surface processes from readily available remote sensing data.
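The paper's data-fusion algorithm is described only as "simple regression-based"; a bare ordinary-least-squares sketch mapping hypothetical topographic/spectral metrics to ALT might look like this (predictor names and values are illustrative assumptions):

```python
def fit_linear(X, y):
    """Ordinary least squares via normal equations (an intercept column is added).
    Solved by Gauss-Jordan elimination; valid for a positive-definite system."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(p)]
    for i in range(p):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        b[i] /= piv
        for k in range(p):
            if k != i:
                f = A[k][i]
                A[k] = [u - f * v for u, v in zip(A[k], A[i])]
                b[k] -= f * b[i]
    return b                     # [intercept, coefficient per predictor]

def predict(w, x):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

# Hypothetical training pixels: (elevation m, spectral index) -> ALT (cm).
X_demo = [(4.2, 0.31), (4.8, 0.40), (5.1, 0.28), (5.6, 0.35), (6.0, 0.45)]
alt_demo = [38.0, 41.5, 43.0, 45.5, 48.0]
w_demo = fit_linear(X_demo, alt_demo)
```

The actual study uses many more metrics derived from LiDAR and WorldView-2 and validates against probe measurements; this sketch only shows the regression core.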
Groothuis, Floris A; Heringa, Minne B; Nicol, Beate; Hermens, Joop L M; Blaauboer, Bas J; Kramer, Nynke I
2015-06-01
Challenges to improve toxicological risk assessment to meet the demands of the EU chemicals legislation (REACH) and the EU 7th Amendment of the Cosmetics Directive have accelerated the development of non-animal based methods. Unfortunately, uncertainties remain surrounding the power of alternative methods such as in vitro assays to predict in vivo dose-response relationships, which impedes their use in regulatory toxicology. One issue reviewed here is the lack of a well-defined dose metric for use in concentration-effect relationships obtained from in vitro cell assays. Traditionally, the nominal concentration has been used to define in vitro concentration-effect relationships. However, chemicals may differentially and non-specifically bind to medium constituents, well plate plastic and cells. They may also evaporate, degrade or be metabolized over the exposure period at different rates. Studies have shown that these processes may reduce the bioavailable and biologically effective dose of test chemicals in in vitro assays to levels far below their nominal concentration. This subsequently hampers the interpretation of in vitro data to predict and compare the true toxic potency of test chemicals. Therefore, this review discusses a number of dose metrics and their dependency on in vitro assay setup. Recommendations are given on when to consider alternative dose metrics instead of nominal concentrations, in order to reduce effect concentration variability between in vitro assays and between in vitro and in vivo assays in toxicology. PMID:23978460
International Nuclear Information System (INIS)
Estimates and techniques valid for calculating the linear extrapolation distance for an infinitely long circular cylindrical absorbing region are reviewed. Two estimates in particular are considered: the most probable value, and the value resulting from an approximate technique based on matching the integral transport equation inside the absorber with the diffusion approximation in the surrounding infinite scattering medium. From these, the effective diffusion parameters and the blackness of the cylinder are derived and subjected to comparative studies. A computer code is set up to calculate and compare the different parameters, which is useful in reactor analysis and serves to establish reliable estimates amenable to direct application in reactor design codes
International Nuclear Information System (INIS)
We report the mass measurement of the short-lived 12Be nuclide (T1/2=21.5 ms) performed using the Penning trap mass spectrometer TITAN at TRIUMF. Our mass excess value of 25 078.0(2.1) keV is in agreement with previous measurements, but is a factor of 7 more precise than the Atomic Mass Evaluation of 2003. To address an unresolved discussion on the spin assignment of isospin T=2 states in 12C and 12O, we reevaluate the isobaric mass multiplet equation for the lowest lying T=2 multiplet in the A=12 system and use the extracted parameters to extrapolate from the known excited 2+ and 0+ states in 12Be. Though this analysis favors the second known T=2 state in 12C to be 2+, 0+ cannot be excluded.
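The isobaric-multiplet-mass-equation step can be illustrated with a minimal sketch: given the mass excesses of three members of a T=2 multiplet, fit M(Tz) = a + b·Tz + c·Tz² exactly and extrapolate to a fourth member (the numbers in the usage note below are synthetic, not the A=12 values):

```python
def imme_coefficients(members):
    """Fit the isobaric multiplet mass equation M(Tz) = a + b*Tz + c*Tz**2
    exactly through three (Tz, mass-excess) pairs, via divided differences."""
    (t1, m1), (t2, m2), (t3, m3) = members
    c = ((m3 - m1) / (t3 - t1) - (m2 - m1) / (t2 - t1)) / (t3 - t2)
    b = (m2 - m1) / (t2 - t1) - c * (t1 + t2)
    a = m1 - b * t1 - c * t1 * t1
    return a, b, c

def imme_mass(coeffs, tz):
    """Evaluate the fitted IMME at isospin projection tz."""
    a, b, c = coeffs
    return a + b * tz + c * tz * tz
```

For instance, `imme_mass(imme_coefficients([(-2, m1), (0, m2), (2, m3)]), 1)` predicts the Tz = 1 member; a significant cubic (d·Tz³) residual would signal a breakdown of the quadratic form.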
Energy Technology Data Exchange (ETDEWEB)
Montero Prieto, M.; Vidania Munoz, R. de
1994-07-01
In this work we analysed different approaches assayed in order to describe numerically the systemic behaviour of beryllium. The experimental results used here were previously obtained by Furchner et al. (1973) using Sprague-Dawley rats and other animal species. Furchner's work includes a model for whole-body retention in rats, but not for each target organ. In this work we present the results obtained by modelling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with others previously published. (Author) 12 refs.
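As a sketch of the kind of retention model involved, a whole-body retention function can be written as a sum of exponential compartments; the fractions and half-lives below are hypothetical, not Furchner's fitted values:

```python
import math

def retention(t, fractions, half_lives):
    """Whole-body retention R(t) as a sum of exponential compartments:
    R(t) = sum_i f_i * exp(-ln(2) * t / T_i), with sum_i f_i = 1 at t = 0."""
    return sum(f * math.exp(-math.log(2.0) * t / hl)
               for f, hl in zip(fractions, half_lives))

# Hypothetical two-compartment parameterization (half-lives in days).
frac, hl = [0.7, 0.3], [2.0, 180.0]
r30 = retention(30.0, frac, hl)   # retained fraction 30 days after intake
```

Extrapolation to humans would then amount to rescaling the compartment fractions and half-lives, which is where the correlations among kinetic constants come in.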
International Nuclear Information System (INIS)
This report addresses questions that arose after having completed a detailed study of a simulant-material experimental investigation of flow dynamics in the Upper Core Structures during a Core Disruptive Accident of a Liquid-Metal Fast Breeder Reactor. The main findings of the experiments were about the reduction of work potential of the expanding fuel by the presence of the Upper Core Structures. This report describes how the experimental data can be extrapolated to prototypic conditions, which phenomena modelled in code predictions by SIMMER-II are different for simulant and prototypic transients, and how the experimental results compare to effects of prototypic phenomena which could not be modelled in the experiment. (orig.)
Detectors for LEP: methods and techniques
International Nuclear Information System (INIS)
This note surveys detection methods and techniques of relevance to the LEP physics programme. The basic principles of detector physics are sketched, as recent improvements in understanding point towards both gains and limitations in performance. The development and present status of large detector systems are presented, permitting some conservative extrapolations. State-of-the-art techniques and technologies are presented and their potential use in the LEP physics programme is assessed. (Auth.)
International Nuclear Information System (INIS)
Concentrations and organ distribution patterns of alpha-emitting isotopes of U (238U and 234U), Th (232Th, 230Th, and 228Th), and Pu (239,240Pu) were determined for beagle dogs of our colony. The dogs were exposed to environmental levels of U and Th isotopes through ingestion (food and water) and inhalation to simulate environmental exposures of the general human population. The organ distribution patterns of these radionuclides in beagles are compared to patterns in humans to determine if it is appropriate to extrapolate organ content data from beagles to humans. The results indicated that approximately 80% of the U and Th accumulated in bone in both species. The organ content percentages of these radionuclides in soft tissues such as liver, kidney, etc. of both species were comparable. The human lung contained higher percentages of U and Th than the beagle lung, perhaps because the longer life span of humans resulted in a longer exposure time. If the U and Th content of dog lung is normalized to an exposure time of 58 y and 63 y, median ages of the U and Th study populations, respectively, the lung content for both species is comparable. The organ content of 239,240Pu in humans and beagles differed slightly. In the beagle, the liver contained more than 60%, and the skeleton contained less than 40% of the Pu body content. In humans, the liver contained approximately 37%, and the skeleton contained approximately 58% of the body content. This difference may have been due to differences in the mode of intake of Pu in each species or to differences in the chemical form of Pu. In general, the results suggest that the beagle may be an appropriate experimental animal from which to extrapolate data to humans with reference to the percentage of U, Th, and Pu found in the organs
Peterson, J. B., Jr.; Mann, M. J.; Sorrells, R. B., III; Sawyer, W. C.; Fuller, D. E.
1980-01-01
The results of calculations necessary to extrapolate performance data on an XB-70-1 wind tunnel model to full scale at Mach numbers from 0.76 to 2.53 are presented. The extrapolation was part of a joint program to evaluate performance prediction techniques for large flexible supersonic airplanes similar to a supersonic transport. The extrapolation procedure included: interpolation of the wind tunnel data at the specific conditions of the flight test points; determination of the drag increments to be applied to the wind tunnel data, such as spillage drag, boundary layer trip drag, and skin friction increments; and estimates of the drag items not represented on the wind tunnel model, such as bypass doors, roughness, protuberances, and leakage drag. In addition, estimates of the effects of flexibility of the airplane were determined.
Directory of Open Access Journals (Sweden)
Tynan Anna
2012-09-01
Full Text Available Abstract Background Male circumcision (MC) has been shown to reduce the risk of HIV acquisition among heterosexual men, and WHO has recommended MC as an essential component of comprehensive HIV prevention programs in high-prevalence settings since 2007. While Papua New Guinea (PNG) has a current prevalence of only 1%, the high rates of sexually transmissible diseases and the extensive, but unregulated, practice of penile cutting in PNG have led the National Department of Health (NDoH) to consider introducing a MC program. Given public interest in circumcision even without active promotion by the NDoH, examining the potential health systems implications of MC without raising unrealistic expectations presents a number of methodological issues. In this study we examined health systems lessons learned from a national no-scalpel vasectomy (NSV) program, and their implications for a future MC program in PNG. Methods Fourteen in-depth interviews were conducted with frontline health workers and key government officials involved in NSV programs in PNG over a 3-week period in February and March 2011. Documentary, organizational and policy analysis of HIV and vasectomy services was conducted and triangulated with the interviews. All interviews were digitally recorded and later transcribed. The WHO framework of six health-system building blocks was applied, and further thematic analysis was conducted on the data with assistance from the analysis software MAXQDA. Results Obstacles in funding pathways, inconsistent support by government departments, difficulties with staff retention and erratic delivery of training programs have resulted in mixed success of the national NSV program. Conclusions In an already vulnerable health system, significant investment in training, resources and negotiation of clinical space will be required for an effective MC program.
Focused leadership and open communication between provincial and national government, NGOs and community are necessary to assist in service sustainability. Ensuring clear policy and guidance across the entire sexual and reproductive health sector will provide opportunities to strengthen key areas of the health system.
Scientific Electronic Library Online (English)
Joel, Negin; Robert G, Cumming.
2010-11-01
Full Text Available OBJETIVO: Cuantificar el número de casos y la prevalencia de la infección por el virus de la inmunodeficiencia humana (VIH) entre los adultos de mayor edad en el África subsahariana. MÉTODOS: Se han analizado los datos procedentes de las Encuestas demográficas y de salud (EDS). Aunque en estos estud [...] ios todas las mujeres entrevistadas son menores de 50 años, 18 de estas encuestas contenían datos sobre la infección por VIH en hombres con una edad igual o superior a los 50 años. Para calcular el porcentaje de adultos de mayor edad (es decir, personas de 50 o más años de edad) con positividad al VIH (VIH+), se extrapolaron los datos procedentes del Programa Conjunto de las Naciones Unidas sobre el VIH/SIDA sobre la cantidad estimada de personas con el VIH y sobre la prevalencia de la infección por este virus entre los adultos con edades comprendidas entre 15 y 49 años. RESULTADOS: En 2007, en el África subsahariana había unos 3 millones de personas de 50 años o mayores con el VIH. La prevalencia de la infección por el VIH en este grupo fue del 4,0%, en comparación con el 5,0% correspondiente al grupo con edades comprendidas entre 15 y 49 años. De la cantidad aproximada de 21 millones de personas > 15 años con VIH en el África subsahariana, el 14,3% tenía 50 años de edad o más. CONCLUSIÓN: Para poder reflejar mejor la mayor supervivencia de las personas con VIH y el envejecimiento de la población VIH+, se deben ampliar los indicadores de la prevalencia de la infección por el VIH, de manera que incluyan a las personas mayores de 49 años. Se sabe poco sobre la morbilidad asociada y el comportamiento sexual de los adultos VIH+ de mayor edad o acerca de los factores biológicos y culturales que aumentan el riesgo de transmisión. Los servicios relacionados con el VIH deben orientarse mejor para responder a las necesidades crecientes de los adultos de edad más avanzada que se ven afectados por esta enfermedad. 
Abstract in english OBJECTIVE: To quantify the number of cases and prevalence of human immunodeficiency virus (HIV) infection among older adults in sub-Saharan Africa. METHODS: We reviewed data from Demographic and Health Surveys (DHS). Although in these surveys all female respondents are under 50 years of age, 18 of the surveys contained data on HIV infection among men aged > 50 years. To estimate the percentage of older adults (i.e. people > 50 years of age) who were positive for HIV (HIV+), we extrapolated from data from the Joint United Nations Programme on HIV/AIDS on the estimated number of people living with HIV and on HIV infection prevalence among adults aged 15-49 years. FINDINGS: In 2007, approximately 3 million people aged > 50 years were living with HIV in sub-Saharan Africa. The prevalence of HIV infection in this group was 4.0%, compared with 5.0% among those aged 15-49 years. Of the approximately 21 million people in sub-Saharan Africa aged > 15 years that were HIV+, 14.3% were > 50 years old. CONCLUSION: To better reflect the longer survival of people living with HIV and the ageing of the HIV+ population, indicators of the prevalence of HIV infection should be expanded to include people > 49 years of age. Little is known about comorbidity and sexual behaviour among HIV+ older adults or about the biological and cultural factors that increase the risk of transmission. HIV services need to be better targeted to respond to the growing needs of older adults living with HIV.
Moore, R.; Shook, M.; Thornhill, K. L.; Winstead, E.; Anderson, B. E.
2013-12-01
Aircraft engine emissions constitute a tiny fraction of the global black carbon mass, but can have a disproportionate climatic impact because they are emitted high in the troposphere and in remote regions with otherwise low aerosol concentrations. Consequently, these particles are likely to strongly influence cirrus and contrail formation by acting as ice nuclei (IN). However, the ice nucleating properties of aircraft exhaust at relevant atmospheric conditions are not well known, and thus the overall impact of aviation on cloud formation remains very uncertain. While a number of aircraft engine emissions studies have previously been conducted at sea level temperature and pressure (e.g., APEX, AAFEX-1 and 2), it is unclear to what extent exhaust emissions on the ground translate to emissions at cruise conditions with much lower inlet gas temperatures and pressures. To address this need, the NASA Alternative Fuel Effects on Contrails and Cruise Emissions (ACCESS) campaign was conducted from February to April 2013 to examine the aerosol and gas emissions from the NASA DC-8 under a variety of fuel types, engine powers, and altitude/meteorological conditions. Two different fuel types were studied: a traditional JP-8 fuel and a 50:50 blend of JP-8 and a camelina-based hydro-treated renewable jet (HRJ) fuel. Emissions were sampled using a comprehensive suite of gas- and aerosol-phase instrumentation integrated on an HU-25 Falcon jet that was positioned in the DC-8 exhaust plume at approximately 100-500 m distance behind the engines. In addition, a four-hour ground test was carried out with sample probes positioned 30 m behind each of the inboard engines. Measurements of aerosol concentration, size distribution, soot mass, and hygroscopicity were carried out along with trace gas measurements of CO2, NO, NO2, O3, and water vapor.
NOx emissions were reconciled by employing the well-established Boeing method for normalizing engine fuel flow rates to STP; however, comparison of aerosol emissions between ground and altitude is less straightforward. The implications of these factors for developing new aviation emission factors and inventories for aerosol species will be discussed.
Scientific Electronic Library Online (English)
Jorge A, Calderón; Carmen P, Buitrago.
2007-12-01
Full Text Available La corrosión de recipientes fabricados en hojalata expuestos a diferentes soluciones fue evaluada usando técnicas electroquímicas. Los recipientes con y sin la aplicación de barniz fueron expuestos a diferentes soluciones. La susceptibilidad a sufrir corrosión se evaluó utilizando voltametría cíclica, curvas de polarización y espectroscopia de impedancia electroquímica. La posibilidad de formación de películas pasivas en la superficie de los recipientes se evaluó según la histéresis presente en el primer ciclo de las medidas de voltametría. Las curvas de polarización revelaron que el comportamiento del recubrimiento de estaño puede cambiar de anódico a catódico según la naturaleza de la solución en contacto con el recipiente, alertando sobre el riesgo de corrosión localizada. Mediante impedancia electroquímica se evaluó el efecto del uso de un aditivo en las soluciones o productos empacados en dos recipientes. Las medidas de impedancia mostraron un efecto perjudicial del aditivo utilizado y una rápida aparición de procesos corrosivos cuando se usó la solución modificada con el aditivo. Abstract in english Corrosion of lacquered tinplate cans in different solutions was assessed using electrochemical methods. Samples with and without lacquer coating were exposed to different solutions and their susceptibility to corrosion was evaluated using cyclic voltammetry, Tafel curves and electrochemical impedance spectroscopy. The possible formation of a passive layer on the container surface was evaluated according to the kind of hysteresis presented in the first cycle of voltammetry measurements. Tafel plots showed how the behaviour of the tin layer can change from anodic to cathodic depending on the nature of the solution in contact with it, revealing the risk of localized corrosion. The effect of an additive in the solutions on the electrochemical performance of the containers was evaluated by electrochemical impedance.
The impedance showed a deleterious effect of the additive, and corrosion processes appeared more quickly in containers packed with solutions modified with additive.
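As an illustration of the Tafel-extrapolation step referred to above (not the authors' actual analysis, and with synthetic numbers), the corrosion potential and current can be estimated from the intersection of straight-line fits to the two Tafel branches:

```python
import math

def tafel_line(potentials, currents):
    """Least-squares fit of E = a + b*log10(i) to one Tafel branch."""
    xs = [math.log10(i) for i in currents]
    n = len(xs)
    mx, me = sum(xs) / n, sum(potentials) / n
    b = sum((x - mx) * (e - me) for x, e in zip(xs, potentials)) / \
        sum((x - mx) ** 2 for x in xs)
    return me - b * mx, b                     # intercept a, Tafel slope b

def corrosion_point(anodic, cathodic):
    """Intersect the anodic and cathodic Tafel lines -> (E_corr, i_corr)."""
    (a1, b1), (a2, b2) = anodic, cathodic
    x = (a2 - a1) / (b1 - b2)                 # log10(i_corr)
    return a1 + b1 * x, 10.0 ** x
```

In practice each branch is fitted only over its linear (activation-controlled) region of the polarization curve.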
Some problems of absolute measurement of pure β-nuclides by the label method
International Nuclear Information System (INIS)
The dependence of the counting efficiency of the nuclide to be measured on the efficiency of the label, i.e. the extrapolation dependence of the label, has been calculated on the basis of theoretical modelling of the self-absorption process. These calculations proved useful for estimating the magnitude of the change in label efficiency and for choosing the method of its measurement. It has been shown that the linear coefficient of the extrapolation curve does not depend on the course of self-absorption, while the quadratic coefficient is closely connected with the self-absorption dispersion and hence with the method of sample preparation
Scientific Electronic Library Online (English)
Rodrigo Abensur, Athanazio; Samia Zahi, Rached; Ciro, Rohde; Regina Carvalho, Pinto; Frederico Leon Arrabal, Fernandes; Rafael, Stelmach.
2010-08-01
Full Text Available OBJETIVO: Conhecer o perfil de pacientes adultos com bronquiectasias, comparando portadores de fibrose cística (FC) com aqueles com bronquiectasias de outra etiologia, a fim de determinar se é racional extrapolar terapêuticas instituídas em fibrocísticos para aqueles com bronquiectasias de outras et [...] iologias. MÉTODOS: Análise retrospectiva dos prontuários de 87 pacientes adultos com diagnóstico de bronquiectasia em acompanhamento em nosso serviço. Pacientes com doença secundária a infecção por tuberculose corrente ou no passado foram excluídos. Foram avaliados dados clínicos, funcionais e terapêuticos dos pacientes. RESULTADOS: Dos 87 pacientes com bronquiectasias, 38 (43,7%) tinham diagnóstico confirmado de FC através de dosagem de sódio e cloro no suor ou análise genética, enquanto 49 (56,3%) apresentavam a doença por outra etiologia, 34 (39,0%) desses com bronquiectasia idiopática. Os pacientes com FC apresentavam média de idade ao diagnóstico mais baixa (14,2 vs. 24,2 anos; p Abstract in english OBJECTIVE: To profile the characteristics of adult patients with bronchiectasis, drawing comparisons between cystic fibrosis (CF) patients and those with bronchiectasis from other causes in order to determine whether it is rational to extrapolate the bronchiectasis treatment given to CF patients to [...] those with bronchiectasis from other causes. METHODS: A retrospective analysis of the medical charts of 87 patients diagnosed with bronchiectasis and under follow-up treatment at our outpatient clinic. Patients who had tuberculosis (current or previous) were excluded. We evaluated the clinical, functional, and treatment data of the patients. 
RESULTS: Of the 87 patients with bronchiectasis, 38 (43.7%) had been diagnosed with CF, through determination of sweat sodium and chloride concentrations or through genetic analysis, whereas the disease was due to another etiology in 49 (56.3%), of whom 34 (39.0%) had been diagnosed with idiopathic bronchiectasis. The mean age at diagnosis was lower in the patients with CF than in those without (14.2 vs. 24.2 years; p
International Nuclear Information System (INIS)
On the basis of a general descriptive framework which takes into account the intensity factor and the time distribution of radiation, a detailed justification for which is to be found in earlier publications, the three fundamental problems mentioned in the title of this paper can be approached in a new way. If the biological effect e for a given dose D delivered at different radiation intensities phi is studied, we find that the curve e=f(phi) can exhibit non-monotonic shapes. This type of phenomenon is known in pharmacology and toxicology and may well exist also for low- or medium-intensity radiation effects. Extrapolation of the effects of a given dose between high and low radiation intensities phi is usually carried out by means of an empirical linear or linear-quadratic formulation. This procedure is insufficiently justified from a theoretical point of view. It is shown here that the effects can be written in the form e=k(phi)D and that the factor of proportionality k(phi) is a generally very complicated function of phi. Hence, the usual extrapolation procedures cannot deal with certain ranges of values of phi within which the effects observed at a given dose may be greater than when the dose is delivered at higher intensity. The problem of thresholds is actually far more difficult than the current literature on the subject would suggest. It is shown here, on the basis of considerations of qualitative dynamics, that several types of threshold must be defined, starting with a threshold for the radiation intensity phi. All these thresholds are interrelated hierarchically in fairly complex ways which must be studied case by case. These results show that it is illusory to attempt to define a universal notion of threshold in terms of dose.
The conceptual framework used in the proposed approach proves also to be very illuminating for other studies in progress, particularly in the investigation of phenomena associated with ageing and carcinogenesis. (author)
Energy Technology Data Exchange (ETDEWEB)
Alvarez R, J.T.; Morales P, R
1992-06-15
The absorbed dose to soft-tissue-equivalent material imparted by ophthalmic applicators ({sup 90}Sr/{sup 90}Y, 1850 MBq) is determined using a variable-electrode extrapolation chamber. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7% to 20.4% relative to the estimate obtained with a second-degree polynomial regression model; at the same time, the standard error improves by up to 50% for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be inferred that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, in order to obtain more exact and accurate measurements of the absorbed dose. (Author)
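The abstract's recommendation, quadratic rather than linear regression for the extrapolation-curve slope, can be illustrated with synthetic data (every constant below is made up for the demonstration; this is not the paper's data or fitting code):

```python
# Synthetic extrapolation curve: collected charge q vs. electrode gap d.
# The dose rate is proportional to dq/dd at d -> 0; with the source in
# contact the curve bends, modelled here (illustratively) as q = a*d + b*d^2.
a, b = 2.0, -0.3
gaps = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]   # electrode gaps (mm), made up
q = [a * d + b * d * d for d in gaps]

# Slope from a simple linear regression (closed-form least squares)
n = len(gaps)
db = sum(gaps) / n
qb = sum(q) / n
lin_slope = (sum((d - db) * (y - qb) for d, y in zip(gaps, q))
             / sum((d - db) ** 2 for d in gaps))

# Slope at d = 0 from a quadratic model, via Newton divided differences on
# three gaps: exact for data that is exactly quadratic.
d1, d2, d3 = gaps[0], gaps[2], gaps[5]
f12 = (q[2] - q[0]) / (d2 - d1)
f23 = (q[5] - q[2]) / (d3 - d2)
f123 = (f23 - f12) / (d3 - d1)
quad_slope = f12 - f123 * (d1 + d2)

# The linear model folds the curvature into its slope and is badly biased;
# the quadratic model recovers the true d -> 0 slope a = 2.0.
print(lin_slope, quad_slope)
```

The bias of the linear fit grows with the curvature term b, which is exactly the regime the abstract describes for a contact source.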
Energy Technology Data Exchange (ETDEWEB)
Miliordos, Evangelos; Xantheas, Sotiris S.
2015-03-07
We report the variation of the binding energy of the formic acid dimer at the CCSD(T)/complete basis set (CBS) limit and examine the validity of the BSSE correction, previously challenged by Kalescky, Kraka and Cremer [J. Chem. Phys. 140 (2014) 084315]. Our best estimate of D0 = 14.3±0.1 kcal/mol is in excellent agreement with the experimental value of 14.22±0.12 kcal/mol. The BSSE correction is indeed valid for this system since it exhibits the expected behavior of decreasing with increasing basis set size, and its inclusion produces the same limit (within 0.1 kcal/mol) as the one obtained from extrapolation of the uncorrected binding energy. This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences. Pacific Northwest National Laboratory (PNNL) is a multiprogram national laboratory operated for DOE by Battelle. A portion of this research was performed using the Molecular Science Computing Facility (MSCF) in EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at PNNL.
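For reference, one common two-point inverse-cubic CBS extrapolation has a closed form; a sketch with invented numbers (the paper's actual extrapolation scheme may differ):

```python
def cbs_two_point(e_x, e_y, x, y):
    """Extrapolate energies at cardinal numbers x < y to the complete-basis-set
    limit, assuming the model E(X) = E_CBS + A / X**3."""
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Illustrative self-check: data generated exactly from the model is recovered.
E_CBS, A = -14.3, 2.7            # invented values, kcal/mol
e3 = E_CBS + A / 3**3            # "triple-zeta" energy
e4 = E_CBS + A / 4**3            # "quadruple-zeta" energy
print(cbs_two_point(e3, e4, 3, 4))
```

Because the model has only two unknowns, two consecutive cardinal numbers determine the limit exactly when the data obey the inverse-cubic form.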
Homburg, Frank; Freudenthaler, Volker; Jaeger, Horst
1997-10-01
A ground-based lidar system with an accurate scanning capability has been built for detailed investigations of jet airplane condensation trails. Scanning contrails, including their small-scale structures, requires complex and fast system control; in particular, the positioning of the scanning mount is time critical. A CCD camera is therefore integrated with the lidar system for reliable capture of the target. The whole system is controlled by a software package developed for the PC-based operating system Microsoft Windows. By this means a complete contrail scan can be performed within 20 to 30 s, which is fast enough to investigate contrails even in the vortex regime. Combining lidar measurements of a contrail cross section with data acquired simultaneously by the CCD camera allows the spatial spread of contrails to be determined and the optical depth determined by lidar to be extrapolated across the 40-degree viewing angle of the CCD camera. This technique will be used to validate, and possibly improve, existing algorithms that detect contrails in AVHRR satellite images.
Escher, Beate I; Cowan-Ellsberry, Christina E; Dyer, Scott; Embry, Michelle R; Erhardt, Susan; Halder, Marlies; Kwon, Jung-Hwan; Johanning, Karla; Oosterwijk, Mattheus T T; Rutishauser, Sibylle; Segner, Helmut; Nichols, John
2011-07-18
Binding of hydrophobic chemicals to colloids such as proteins or lipids is difficult to measure using classical microdialysis methods due to low aqueous concentrations, adsorption to dialysis membranes and test vessels, and slow kinetics of equilibration. Here, we employed a three-phase partitioning system where silicone (polydimethylsiloxane, PDMS) serves as a third phase to determine partitioning between water and colloids and acts at the same time as a dosing device for hydrophobic chemicals. The applicability of this method was demonstrated with bovine serum albumin (BSA). Measured binding constants (K(BSAw)) for chlorpyrifos, methoxychlor, nonylphenol, and pyrene were in good agreement with an established quantitative structure-activity relationship (QSAR). A fifth compound, fluoxypyr-methyl-heptyl ester, was excluded from the analysis because of apparent abiotic degradation. The PDMS depletion method was then used to determine partition coefficients for test chemicals in rainbow trout (Oncorhynchus mykiss) liver S9 fractions (K(S9w)) and blood plasma (K(bloodw)). Measured K(S9w) and K(bloodw) values were consistent with predictions obtained using a mass-balance model that employs the octanol-water partition coefficient (K(ow)) as a surrogate for lipid partitioning and K(BSAw) to represent protein binding. For each compound, K(bloodw) was substantially greater than K(S9w), primarily because blood contains more lipid than liver S9 fractions (1.84% of wet weight vs 0.051%). Measured liver S9 and blood plasma binding parameters were subsequently implemented in an in vitro to in vivo extrapolation model to link the in vitro liver S9 metabolic degradation assay to in vivo metabolism in fish. Apparent volumes of distribution (V(d)) calculated from the experimental data were similar to literature estimates. 
However, the calculated binding ratios (f(u)) used to relate in vitro metabolic clearance to clearance by the intact liver were 10 to 100 times lower than values used in previous modeling efforts. Bioconcentration factors (BCF) predicted using the experimental binding data were substantially higher than the predicted values obtained in earlier studies and correlated poorly with measured BCF values in fish. One possible explanation for this finding is that chemicals bound to proteins can desorb rapidly and thus contribute to metabolic turnover of the chemicals. This hypothesis remains to be investigated in future studies, ideally with chemicals of higher hydrophobicity. PMID:21604782
Kumar, Sanjeev; Samuel, Koppara; Subramanian, Ramaswamy; Braun, Matthew P; Stearns, Ralph A; Chiu, Shuet-Hing Lee; Evans, David C; Baillie, Thomas A
2002-12-01
Diclofenac is eliminated predominantly (approximately 50%) as its 4'-hydroxylated metabolite in humans, whereas the acyl glucuronide (AG) pathway appears more important in rats (approximately 50%) and dogs (>80-90%). However, previous studies of diclofenac oxidative metabolism in human liver microsomes (HLMs) have yielded pronounced underprediction of human in vivo clearance. We determined the relative quantitative importance of 4'-hydroxy and AG pathways of diclofenac metabolism in rat, dog, and human liver microsomes. Microsomal intrinsic clearance values (CL(int) = V(max)/K(m)) were determined and used to extrapolate the in vivo blood clearance of diclofenac in these species. Clearance of diclofenac was accurately predicted from microsomal data only when both the AG and the 4'-hydroxy pathways were considered. However, the fact that the AG pathway in HLMs accounted for ~75% of the estimated hepatic CL(int) of diclofenac is apparently inconsistent with the 4'-hydroxy diclofenac excretion data in humans. Interestingly, upon incubation with HLMs, significant oxidative metabolism of diclofenac AG, directly to 4'-hydroxy diclofenac AG, was observed. The estimated hepatic CL(int) of this pathway suggested that a significant fraction of the intrahepatically formed diclofenac AG may be converted to its 4'-hydroxy derivative in vivo. Further experiments indicated that this novel oxidative reaction was catalyzed by CYP2C8, as opposed to CYP2C9-catalyzed 4'-hydroxylation of diclofenac. These findings may have general implications in the use of total (free + conjugated) oxidative metabolite excretion for determining primary routes of drug clearance and may question the utility of diclofenac as a probe for phenotyping human CYP2C9 activity in vivo via measurement of its pharmacokinetics and total 4'-hydroxy diclofenac urinary excretion. PMID:12438516
Production and characterization of Ti/PbO2 electrodes by a thermal-electrochemical method
Directory of Open Access Journals (Sweden)
Laurindo Edison A.
2000-01-01
Full Text Available Looking for electrodes with a high overpotential for the oxygen evolution reaction (OER), useful for the oxidation of organic pollutants, Ti/PbO2 electrodes were prepared by a thermal-electrochemical method and their performance was compared with that of electrodeposited electrodes. The open-circuit potentials of these electrodes in 0.5 mol L-1 H2SO4 were similar and quite stable. X-ray diffraction analyses showed the thermal-electrochemical oxide to be a mixture of ort-PbO, tetr-PbO and ort-PbO2. On the other hand, the electrodes obtained by electrodeposition were in the tetr-PbO2 form. Analyses by scanning electron microscopy showed that the basic morphology of the thermal-electrochemical PbO2 is determined in the thermal step, being quite distinct from that of the electrodeposited electrodes. Polarization curves in 0.5 mol L-1 H2SO4 showed that for the thermal-electrochemical PbO2 electrodes the OER was shifted to more positive potentials. However, the quite high values of the Tafel slopes indicate that passivating films were possibly formed on the Ti substrates, which could explain the somewhat low current values for the OER.
Report on the uncertainty methods study
International Nuclear Information System (INIS)
The Uncertainty Methods Study (UMS) Group, following a mandate from CSNI, has compared five methods for calculating the uncertainty in the predictions of advanced 'best estimate' thermal-hydraulic codes: the Pisa method (based on extrapolation from integral experiments) and four methods identifying and combining input uncertainties. Three of these, the GRS, IPSN and ENUSA methods, use subjective probability distributions, and one, the AEAT method, performs a bounding analysis. Each method has been used to calculate the uncertainty in specified parameters for the LSTF SB-CL-18 5% cold leg small break LOCA experiment in the ROSA-IV Large Scale Test Facility (LSTF). The uncertainty analysis was conducted essentially blind and the participants did not use experimental measurements from the test as input apart from initial and boundary conditions. Participants calculated uncertainty ranges for experimental parameters including pressurizer pressure, primary circuit inventory and clad temperature (at a specified position) as functions of time
Directory of Open Access Journals (Sweden)
Rodrigo Abensur Athanazio
2010-08-01
Full Text Available OBJECTIVE: To profile the characteristics of adult patients with bronchiectasis, drawing comparisons between cystic fibrosis (CF) patients and those with bronchiectasis from other causes, in order to determine whether it is rational to extrapolate the bronchiectasis treatment given to CF patients to those with bronchiectasis from other causes. METHODS: A retrospective analysis of the medical charts of 87 patients diagnosed with bronchiectasis and under follow-up treatment at our outpatient clinic. Patients who had tuberculosis (current or previous) were excluded. We evaluated the clinical, functional, and treatment data of the patients. RESULTS: Of the 87 patients with bronchiectasis, 38 (43.7%) had been diagnosed with CF, through determination of sweat sodium and chloride concentrations or through genetic analysis, whereas the disease was due to another etiology in 49 (56.3%), of whom 34 (39.0%) had been diagnosed with idiopathic bronchiectasis.
The mean age at diagnosis was lower in the patients with CF than in those without (14.2 vs. 24.2 years; p < 0.05). The prevalence of symptoms (cough, expectoration, hemoptysis, and wheezing) was similar between the groups. Colonization by Pseudomonas aeruginosa or Staphylococcus aureus was more common in the CF patients (82.4 vs. 29.7% and 64.7 vs. 5.4%, respectively). CONCLUSIONS: The causes and clinical manifestations of bronchiectasis are heterogeneous, and it is crucial that these differences be recognized so that new strategies for the management of patients with bronchiectasis can be developed.
Automatic numerical integration methods for Feynman integrals through 3-loop
de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.
2015-05-01
We give numerical integration results for Feynman loop diagrams through 3-loop, such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infrared) or UV (ultraviolet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
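The "integration plus extrapolation" idea can be illustrated in a far simpler setting than Feynman integrals: one Richardson step on the composite trapezoid rule (a generic sketch, unrelated to the QUADPACK or ParInt internals):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

# Trapezoid error is O(h^2), so one Richardson step combining mesh widths
# h and h/2 cancels the leading error term: R = (4*T(h/2) - T(h)) / 3.
t1 = trapezoid(math.sin, 0.0, math.pi, 8)
t2 = trapezoid(math.sin, 0.0, math.pi, 16)
richardson = (4 * t2 - t1) / 3
exact = 2.0
print(abs(t2 - exact), abs(richardson - exact))  # extrapolation is far closer
```

Adaptive codes such as DQAGS apply the same principle in a more sophisticated form (the epsilon algorithm on a sequence of partial sums) to accelerate convergence near singularities.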
An improved Compton scattering method for determination of concentration of solutions.
Priyada, P; Ramar, R; Shivaramu
2012-10-01
An improved Compton scattering method for determination of the concentration of low-Z solutions is presented. Monte Carlo (MC) numerical simulation of the scattering phenomena is performed using the MCNP code. A unique non-linear extrapolation method is used to correct the scattered intensity for self-absorption and multiple scattering. The density ratios obtained using the non-linearly extrapolated scattered intensity values are free from self-absorption and multiple scattering and agree well with the standard ones within experimental errors. A sensitivity study of the transmission and scattering methods for determining the concentration of solutions with similar attenuation parameters at 661.6 keV is carried out to predict the range of effectiveness and suitability of these methods. The slopes (sensitivity per unit concentration) of the curves obtained from the scattering method are higher by a factor of 1.26 compared to those of the transmission method in the measured range of concentrations. PMID:22871448
Splitting methods for Levitron Problems
Geiser, Juergen
2012-01-01
In this paper we describe splitting methods for solving the Levitron problem, which is motivated by the simulation of magnetostatic traps for neutral atoms or ions. The idea is to levitate a magnetic spinning top in the air, repelled by a base magnet. The main problem is the stability of the reduced Hamiltonian, which is not defined at the relative equilibrium. Here it is important to derive stable numerical schemes with high accuracy. For the numerical studies, we propose novel splitting schemes and analyze their behavior. We deal with a Verlet integrator and improve its accuracy with iterative and extrapolation ideas. Such a Hamiltonian splitting method can be seen as a geometric integrator and saves computational time by decoupling the full equation system. Experiments based on the Levitron model are discussed.
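A minimal example of the Verlet/Hamiltonian-splitting machinery the abstract refers to, applied to a plain harmonic oscillator rather than the Levitron model (all constants here are illustrative):

```python
def verlet_step(q, p, force, dt):
    """One Strang-splitting step for H = p^2/2 + V(q): kick-drift-kick."""
    p = p + 0.5 * dt * force(q)   # half kick (flow of V)
    q = q + dt * p                # drift (flow of T, unit mass)
    p = p + 0.5 * dt * force(q)   # half kick
    return q, p

force = lambda q: -q              # harmonic potential V(q) = q^2 / 2
q, p, dt = 1.0, 0.0, 0.05
e0 = 0.5 * (p * p + q * q)
for _ in range(2000):
    q, p = verlet_step(q, p, force, dt)
energy_drift = abs(0.5 * (p * p + q * q) - e0)
print(energy_drift)               # bounded and small: the scheme is symplectic
```

The bounded energy error over long integrations is the "geometric integrator" property the abstract mentions; a non-symplectic scheme of the same order would typically show secular drift.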
Alternating proximal gradient method for nonnegative matrix factorization
Xu, Yangyang
2011-01-01
Nonnegative matrix factorization has been widely applied in face recognition, text mining, as well as spectral analysis. This paper proposes an alternating proximal gradient method for solving this problem. With a uniformly positive lower bound assumption on the iterates, any limit point can be proved to satisfy the first-order optimality conditions. A Nesterov-type extrapolation technique is then applied to accelerate the algorithm. Though this technique is at first used fo...
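A sketch of an alternating proximal (projected) gradient iteration for NMF with a Nesterov-type extrapolation step, in the spirit of the abstract; the extrapolation weight, dimensions, and iteration count below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.random((20, 15))              # data matrix to factor, illustrative
r = 5                                 # target rank
W, H = rng.random((20, r)), rng.random((r, 15))
Wp, Hp = W.copy(), H.copy()           # previous iterates, for extrapolation

def obj(W, H):
    return 0.5 * np.linalg.norm(M - W @ H) ** 2

f0 = obj(W, H)
for k in range(1, 201):
    w = (k - 1.0) / (k + 2.0)         # Nesterov-type extrapolation weight
    # --- update W with H fixed ---
    We = W + w * (W - Wp)             # extrapolated point
    L = np.linalg.norm(H @ H.T, 2)    # Lipschitz constant of the gradient
    G = (We @ H - M) @ H.T            # gradient at the extrapolated point
    Wp, W = W, np.maximum(We - G / L, 0.0)   # step + projection onto W >= 0
    # --- update H with W fixed ---
    He = H + w * (H - Hp)
    L = np.linalg.norm(W.T @ W, 2)
    G = W.T @ (W @ He - M)
    Hp, H = H, np.maximum(He - G / L, 0.0)
f1 = obj(W, H)
print(f0, f1)                         # the objective drops substantially
```

Each block subproblem is convex, so a 1/L step on the extrapolated point is the standard accelerated proximal gradient update; published variants add a monotonicity safeguard that this sketch omits.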
Application of the Normalized Full Gradient (NFG) Method to Resistivity Data
AYDIN, ALİ
2010-01-01
This paper proposes the application of the normalized full gradient (NFG) method to resistivity studies and illustrates that the method can greatly reduce the time and work load needed in detecting buried bodies using resistivity measurement. The NFG method calculates resistivity values at desired electrode offsets by extrapolation of a function of resistivity measurements (i.e. the gradient) to other depth levels using resistivity measurements done at one electrode offset only. The performan...
Modified method for obtaining the critical cooling rate for vitrification of polymers
Scientific Electronic Library Online (English)
Claudia, Canova; Benjamim de Melo, Carvalho.
2007-12-01
Full Text Available Due to the relevance of the critical cooling rate, Rc, for glasses, Barandiarán and Colmenero (BC) developed a method for calculating Rc as a function of the crystallization temperature on cooling obtained from thermal analysis. The critical cooling rate is obtained by extrapolation to conditions of infinite undercooling. However, for polymers there is a strong reason to modify the original BC method: the extrapolation must be extended only to the undercooling associated with the glass transition temperature, Tg, because no crystallization can occur below this temperature. Following this modified method (MBC), proposed by the present authors, the critical cooling rates for PP, PEEK, P10MS and PET were determined. The results showed that the new values are much lower than those obtained by the original BC method.
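The BC/MBC distinction can be sketched numerically: fit ln R against 1/ΔT² and extrapolate either to 1/ΔT² → 0 (original BC, infinite undercooling) or only to ΔT = Tm − Tg (the modified method). The functional form and every constant below are illustrative assumptions, not the paper's data:

```python
import math

# Synthetic BC-style data: ln(R) = ln(Rc) - B / dT**2, with dT the
# undercooling Tm - Tc at cooling rate R. All numbers invented.
lnRc_true, B = 6.0, 5.0e4          # ln of critical rate, fit constant
Tm, Tg = 500.0, 350.0              # melting and glass temperatures (K)
dT = [80.0, 100.0, 120.0, 140.0]   # observed undercoolings
lnR = [lnRc_true - B / d**2 for d in dT]

# Ordinary least squares of ln(R) against x = 1/dT**2
x = [1.0 / d**2 for d in dT]
n = len(x)
xb, yb = sum(x) / n, sum(lnR) / n
slope = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, lnR))
         / sum((xi - xb) ** 2 for xi in x))
intercept = yb - slope * xb

Rc_bc = math.exp(intercept)                          # BC: x -> 0
Rc_mbc = math.exp(intercept + slope / (Tm - Tg)**2)  # MBC: stop at dT = Tm - Tg
print(Rc_bc, Rc_mbc)   # the MBC value is the lower one, as the abstract reports
```

Because the MBC extrapolation stops at a finite undercooling, its critical rate is necessarily below the BC value whenever the fitted slope is negative.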
An ESDIRK Method with Sensitivity Analysis Capabilities
DEFF Research Database (Denmark)
Kristensen, Morten Rode; Jørgensen, John Bagterp
2004-01-01
A new algorithm for numerical sensitivity analysis of ordinary differential equations (ODEs) is presented. The underlying ODE solver belongs to the Runge-Kutta family. The algorithm calculates sensitivities with respect to problem parameters and initial conditions, exploiting the special structure of the sensitivity equations. A key feature is the reuse of information already computed for the state integration, hereby minimizing the extra effort required for sensitivity integration. Through case studies the new algorithm is compared to an extrapolation method and to the more established BDF based approaches. Several advantages of the new approach are demonstrated, especially when frequent discontinuities are present, which renders the new algorithm particularly suitable for dynamic optimization purposes.
International Nuclear Information System (INIS)
Highlights: • FSW demonstrated higher corrosion resistance than GTAW of 6061 Al alloy. • FSW and GTAW both demonstrated poorer corrosion behavior than the base metal. • FSW produced ~1–2 μm equiaxed grains in the joint region vs. ~150 μm in the base metal. • GTAW resulted in a semi-cast dendritic structure. • T6 heat treatment improved corrosion resistance of both FSW and GTAW joints. -- Abstract: Wrought aluminum sheets with a thickness of 13 mm were square butt-welded by friction stir welding (FSW) and gas tungsten arc welding (GTAW). Corrosion behavior of the welding zone was probed by Tafel polarization curves. Optical metallography (OM) and scanning electron microscopy together with energy dispersive spectroscopy (SEM-EDS) were used to determine the morphology and semi-quantitative composition of the welded zone. FSW resulted in equiaxed grains of about 1–2 μm, while GTAW produced a dendritic structure in the welded region. Resistance to corrosion was greater for the FSW grains than for the GTAW structure. In both cases, susceptibility to corrosion attack was greater in the welded region than in the base metal. T6 heat treatment shifted the corrosion potential towards more positive values; this effect was stronger in the welded regions than in the base metal.
[Scenario analysis--a method for long-term planning].
Stavem, K
2000-01-10
Scenarios are known from the film industry as detailed descriptions of films. This gave its name to scenario analysis, a method for long-term planning that uses descriptions of composite pictures of the future. This article is an introduction to the scenario method. Scenarios describe plausible, not necessarily probable, developments. They focus on problems and questions that decision makers must be aware of and prepare to deal with, and on the consequences of alternative decisions. Scenarios are used in corporate and governmental planning, and they can be a useful complement to traditional planning and to extrapolation of past experience. The method is particularly useful in a rapidly changing world with shifting external conditions. PMID:10815501
The direct current method for measuring charged membrane conductance
International Nuclear Information System (INIS)
This paper deals with a method for measuring electrical resistance in charged membranes. The method is based on the application of a step change in direct current and on the analysis of the potential transient subsequent to the application of the current step. Membrane electrical resistance was determined by extrapolating to zero time the potential differences measured after the current step. Experimental results obtained with commercial ion-exchange membranes were in good agreement with those computed from the Fick equation. The method developed gives more accurate values, with a lower standard deviation than traditional techniques, and allows the resistance of an asymmetrical membrane to be determined in both current directions. (orig.)
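A toy version of the zero-time extrapolation, assuming, purely for illustration, a √t polarization transient superimposed on the ohmic jump (the paper's transient model may differ):

```python
import math

# Invented transient after a current step: the ohmic jump I*Rm appears
# instantly, while polarization grows with time (modelled here as c*sqrt(t),
# an assumption made only for this sketch).
I, Rm, c = 0.01, 250.0, 0.4          # A, ohm, V s^-1/2 (made up)
t = [0.1, 0.2, 0.3, 0.4, 0.5]        # sampling times (s)
V = [I * Rm + c * math.sqrt(ti) for ti in t]

# Least-squares line in sqrt(t); the intercept at t = 0 is the ohmic drop I*Rm.
s = [math.sqrt(ti) for ti in t]
n = len(s)
sb, vb = sum(s) / n, sum(V) / n
slope = (sum((si - sb) * (vi - vb) for si, vi in zip(s, V))
         / sum((si - sb) ** 2 for si in s))
V0 = vb - slope * sb
print(V0 / I)                        # recovered membrane resistance (ohm)
```

Extrapolating to t = 0 strips off the slowly growing polarization term and leaves only the instantaneous ohmic component, which is the quantity of interest.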
Summary of existing uncertainty methods
International Nuclear Information System (INIS)
A summary of the existing and most used uncertainty methods is presented, and their main features are compared. One of these methods is the order statistics method based on Wilks' formula. It is applied in safety research as well as in licensing. This method was first proposed by GRS for use in deterministic safety analysis, and is now used by many organisations world-wide. Its advantage is that the number of potential uncertain input and output parameters is not limited to a small number. Such a limitation was necessary for the first demonstration of the Code Scaling, Applicability and Uncertainty (CSAU) method by the United States Nuclear Regulatory Commission (USNRC). They did not apply Wilks' formula in their statistical method, which propagates input uncertainties to obtain the uncertainty of a single output variable, such as the peak cladding temperature. A Phenomena Identification and Ranking Table (PIRT) was set up in order to limit the number of uncertain input parameters and, consequently, the number of calculations to be performed. Another purpose of such a PIRT process is to identify the most important physical phenomena that a computer code should be able to calculate. The validation of the code should be focused on the identified phenomena. In some applications, response surfaces replace the computer code for performing a high number of calculations. The second well-known uncertainty method is the Uncertainty Methodology Based on Accuracy Extrapolation (UMAE) and the follow-up method 'Code with the Capability of Internal Assessment of Uncertainty' (CIAU) developed by the University of Pisa. Unlike the statistical approaches, the CIAU does compare experimental data with calculation results; it does not consider uncertain input parameters. The CIAU is therefore highly dependent on the experimental database.
The accuracy gained from the comparison between experimental data and calculated results is extrapolated to obtain the uncertainty of the system code predictions for a nuclear power plant. A high effort is needed to provide the database of deviations between experiment and calculation in CIAU. This time- and resource-consuming process has so far been performed only by the University of Pisa, for the codes CATHARE and RELAP5, and the database is available only there. That is the reason why this method is used only by the University of Pisa. (author)
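The sample-size side of the order-statistics approach is compact enough to state in code. A minimal sketch of Wilks' first-order, one-sided formula (a generic illustration; the GRS methodology involves much more than this):

```python
import math

def wilks_runs(beta=0.95, gamma=0.95):
    """Smallest number of code runs N such that the maximum of N runs bounds
    the beta-quantile of the output with confidence gamma:
    1 - beta**N >= gamma, i.e. the smallest N with beta**N <= 1 - gamma."""
    return math.ceil(math.log(1.0 - gamma) / math.log(beta))

print(wilks_runs())           # 59 runs for the common 95%/95% criterion
print(wilks_runs(0.95, 0.99)) # 90 runs for a 95%/99% criterion
```

The key property the abstract highlights is visible here: N depends only on beta and gamma, not on how many uncertain input or output parameters the analysis carries.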
El-Sayed, Abdel-Rahman; Ibrahim, Eslam M. M.; Mohran, Hossnia S.; Ismael, Mohamed; Shilkamy, Hoda Abdel-Shafy
2015-05-01
Effect of indium alloying in various concentrations with lead on both the microhardness and the crystallite structure of lead-indium alloys was investigated. The corrosion behavior of lead-indium alloys in 4 M H2SO4 acid solution was investigated by Tafel plot and electrochemical impedance spectroscopy (EIS) methods. The results of both Tafel plot extrapolation and EIS measurements exhibited the same trend. Generally, the corrosion resistance of the alloy is significantly greater than that observed for pure lead. This study shows that the addition of 0.5 pct In to Pb decreases the corrosion. However, with a further increase of alloying In, the corrosion rate of the alloy increases up to 5 pct In compared with that of the Pb-0.5 pct In alloy. The corrosion rate then decreases gradually as the percentage of In increases up to 15 pct. The values of the activation energy (Ea) obtained for Pb and Pb-In alloys support this trend in the corrosion rate. X-ray diffraction data exhibited broadening of peaks, which is due to lattice distortion or grain refinement. The peaks clearly shift to higher angles for the Pb-15 pct In alloy, which can be attributed to changes in the lattice structure of Pb. Scanning electron microscope images confirmed that the microstructure changes with indium alloying content; the solute content tends to refine the microstructure.
Mechanical Properties and Corrosion Behavior of Low Carbon Steel Weldments
Directory of Open Access Journals (Sweden)
Mohamed Mahdy
2013-01-01
Full Text Available This research involves studying the mechanical properties and corrosion behavior of low carbon steel (0.077 wt% C) before and after welding using arc, MIG and TIG welding. The mechanical properties tested include microhardness and tensile strength; the results indicate that the microhardness of TIG and MIG welds is higher than that of arc welds, while the tensile strength in arc welding is higher than in TIG and MIG. The corrosion behavior of the low carbon steel weldments was examined by potentiostat at a scan rate of 3 mV/s in 3.5% NaCl to obtain the polarization resistance and to calculate the corrosion rate from linear polarization data by the Tafel extrapolation method. The results indicate that TIG welding increases the corrosion current density and the anodic Tafel slope, while decreasing the polarization resistance compared with unwelded low carbon steel. Cyclic polarization curves were measured to show the resistance of the specimens to pitting corrosion and to determine the forward and reverse potentials. The results show a shift of the forward, reverse and pitting potentials in the active direction for the weldment samples compared with the unwelded sample.
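A schematic Tafel extrapolation on synthetic Butler-Volmer data (all constants are invented; this demonstrates the generic technique, not the paper's measurements):

```python
import math

i_corr, ba, bc = 1e-6, 0.12, 0.10   # A/cm^2 and Tafel slopes (V/decade), invented

def current(eta):
    """Butler-Volmer current in Tafel-slope form, eta = overpotential (V)."""
    return i_corr * (10 ** (eta / ba) - 10 ** (-eta / bc))

def fit_line(x, y):
    """Closed-form least-squares line; returns (intercept, slope)."""
    n = len(x)
    xb, yb = sum(x) / n, sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    return yb - b * xb, b

# Sample well inside the anodic Tafel region, where eta is linear in log10(i),
# then extrapolate the fitted line back to eta = 0 to read off i_corr.
eta_a = [0.15, 0.20, 0.25]
log_i = [math.log10(current(e)) for e in eta_a]
a0, b0 = fit_line(log_i, eta_a)     # eta = a0 + b0 * log10(i)
i_corr_est = 10 ** (-a0 / b0)
print(i_corr_est)                   # close to the true 1e-6 A/cm^2
```

The fitted slope b0 approximates the anodic Tafel slope; in practice both branches are fitted and their intersection at the corrosion potential gives i_corr.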
Large deviations and asymptotic methods in finance
Gatheral, Jim; Gulisashvili, Archil; Jacquier, Antoine; Teichmann, Josef
2015-01-01
Topics covered in this volume (large deviations, differential geometry, asymptotic expansions, central limit theorems) give a full picture of the current advances in the application of asymptotic methods in mathematical finance, and thereby provide rigorous solutions to important mathematical and financial issues, such as implied volatility asymptotics, local volatility extrapolation, systemic risk and volatility estimation. This volume gathers together ground-breaking results in this field by some of its leading experts. Over the past decade, asymptotic methods have played an increasingly important role in the study of the behaviour of (financial) models. These methods provide a useful alternative to numerical methods in settings where the latter may lose accuracy (in extremes such as small and large strikes, and small maturities), and lead to a clearer understanding of the behaviour of models, and of the influence of parameters on this behaviour. Graduate students, researchers and practitioners will find th...
Energy Technology Data Exchange (ETDEWEB)
Castillo, Jhonny Antonio Benavente
2011-07-01
The metrological coherence among standard systems is a requirement for assuring the reliability of measurements of dosimetric quantities in ionizing radiation fields. Scientific and technological improvements occurred in beta radiation metrology with the installation of the new beta secondary standard BSS2 in Brazil and with the adoption of the internationally recommended beta reference radiations. The Dosimeter Calibration Laboratory of the Development Center for Nuclear Technology (LCD/CDTN), in Belo Horizonte, implemented the BSS2, and methodologies were investigated for characterizing the beta radiation fields by determining the field homogeneity, the accuracy and the uncertainties in the measurements of absorbed dose in air. In this work, a methodology for verifying the metrological coherence among beta radiation fields in standard systems was investigated; an extrapolation chamber and radiochromic films were used, and measurements were made in terms of absorbed dose in air. The reliability of both the extrapolation chamber and the radiochromic film was confirmed, and their calibrations were done in the LCD/CDTN in {sup 90}Sr/{sup 90}Y, {sup 85}Kr and {sup 147}Pm beta radiation fields. The angular coefficients of the extrapolation curves were determined with the chamber; the field mapping and homogeneity were obtained from dose profiles and isodose curves with the radiochromic films. A preliminary comparison between the LCD/CDTN and the Instrument Calibration Laboratory of the Nuclear and Energy Research Institute, Sao Paulo (LCI/IPEN) was carried out. The extrapolation chamber measurements, in terms of absorbed dose in air rates, showed differences between the two laboratories of up to -1% and 3% for the {sup 90}Sr/{sup 90}Y, {sup 85}Kr and {sup 147}Pm beta radiation fields, respectively. Results with the EBT radiochromic films for 0.1, 0.3 and 0.15 Gy absorbed dose in air, for the same beta radiation fields, showed differences of up to 3%, -9% and -53%.
The beta radiation field mappings with radiochromic films in both BSS2 showed that some of them were not geometrically aligned. (author)
Energy Technology Data Exchange (ETDEWEB)
Yarmukhamedov, R. [Institute of Nuclear Physics, Academy of Sciences of Uzbekistan, 100214 Tashkent (Uzbekistan)
2014-05-09
The basic methods for the determination of the asymptotic normalization coefficient for A+a→B of astrophysical interest are briefly presented. The results of applying the specific asymptotic normalization coefficients derived within these methods to the extrapolation of the astrophysical S factors to experimentally inaccessible energy regions (E ≤ 25 keV) for some specific radiative capture A(a,γ)B reactions of the pp chain and the CNO cycle are presented.
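Low-energy S-factor extrapolation of the kind described can be sketched with a low-order polynomial fit; the energies and S values below are invented for illustration, not the paper's results:

```python
import numpy as np

# Hypothetical S-factor "measurements" (keV, keV·b) at experimentally
# accessible energies; a quadratic in E is a common low-order model.
E = np.array([300.0, 500.0, 700.0, 900.0])   # energies (keV)
S = np.array([5.2, 5.9, 6.8, 7.9])           # S(E) (keV·b), illustrative

coeffs = np.polyfit(E, S, 2)                 # S(E) ≈ a·E² + b·E + c
S_25keV = np.polyval(coeffs, 25.0)           # extrapolate to E = 25 keV
print(S_25keV)
```

In practice the extrapolation is constrained by the asymptotic normalization coefficient rather than by a free polynomial; the fit above only illustrates the numerical step.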
International Nuclear Information System (INIS)
X-ray diffraction, Raman spectroscopy and optical absorption estimates of the thickness of multilayer graphene stacks (number of graphene layers) are presented for three different growth techniques. The objective of this work was to compare and reconcile the two already widely used methods for thickness estimation (Raman and absorption) with the calibration of the X-ray method, as far as the Scherrer constant K is concerned, and the X-ray based Wagner-Aqua extrapolation method.
Wear and Corrosion Behavior of CoNiCrAlY Bond Coats
Rathod, W. S.; Khanna, A. S.; Rathod, R. C.; Sapate, S. G.
2014-07-01
The present study focuses on the wear and microstructural properties of CoNiCrAlY coatings fabricated on an AISI 316L stainless steel substrate by the high-velocity oxy-fuel (HVOF) and cold gas dynamic spray (CGDS) methods. A tribological test was performed on the samples in order to understand the wear behaviour of the thermally sprayed coatings. The microstructures of the as-sprayed and worn coatings were investigated by scanning electron microscopy. Coating hardness measurements were performed by nanoindentation. The HVOF coating revealed a lower hardness value in comparison with CGDS. The studies showed better wear resistance for the CGDS coating sprayed with He than for CGDS with N2 and for HVOF processing. Potentiodynamic polarization curves and Tafel extrapolation experiments were carried out at pH 7.5 using 3.5 % NaCl as the electrolyte. The electrochemical studies showed better corrosion resistance for the He-processed coating than for N2 and HVOF processing.
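Tafel extrapolation, as used in this and several of the following records, estimates the corrosion current by fitting the linear log|i| versus overpotential region and extending it back to the corrosion potential. A sketch on synthetic data that obeys an assumed Tafel law (slope and corrosion current invented):

```python
import numpy as np

# Synthetic anodic polarization data obeying the Tafel law
# log10(i) = log10(i_corr) + eta / beta, with i_corr = 1e-6 A/cm²
# and Tafel slope beta = 0.12 V/decade (illustrative values only).
beta = 0.12
i_corr_true = 1e-6
eta = np.linspace(0.05, 0.25, 20)          # overpotentials in the Tafel region
log_i = np.log10(i_corr_true) + eta / beta

# Fit the linear Tafel region and extrapolate back to eta = 0 (E_corr)
slope, intercept = np.polyfit(eta, log_i, 1)
i_corr_est = 10 ** intercept               # estimated corrosion current
print(i_corr_est)
```

With measured curves, the main practical difficulty is choosing a fitting window that is genuinely in the Tafel regime, away from both the corrosion potential and mass-transport limitation.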
Dissolution of chromium in sulfuric acid
Directory of Open Access Journals (Sweden)
J. P. POPIC
2002-11-01
Full Text Available By combining electrochemical corrosion rate measurements and spectrophotometric analysis of the electrolyte, it was shown that at room temperature chromium dissolves in deaerated 0.1 M Na2SO4 + H2SO4 (pH 1) solution as Cr(II) and Cr(III) ions in the ratio Cr(II):Cr(III) ≈ 7:1. This process was stable over 4 h without any detectable change. The total corrosion rate of chromium calculated from the analytical data is about 12 times higher than that determined electrochemically by extrapolation of the cathodic Tafel line to the corrosion potential. This finding was confirmed by applying the weight-loss method for the determination of the corrosion rate. This enormous difference between the experimentally determined corrosion rates can be explained by the rather fast, "anomalous" dissolution process proposed by Kolotyrkin and coworkers (a chemical reaction of Cr with H2O molecules occurring simultaneously with the electrochemical corrosion process).
International Nuclear Information System (INIS)
The effect of small additions of Al on the electrochemical performance was investigated by open circuit potential and the Tafel extrapolation method. The results show that, in terms of open circuit potential, as-cast Mg-Ca alloys with a minor content of Al maintained a highly negative potential, within the range of -1.68 to -1.63 V(SCE), in comparison to both pure Mg (-1.60 V(SCE)) and a commercial high-potential Mg anode. The corrosion rate of the as-cast samples remains higher (30-17 mpy) than that of pure Mg (3 mpy) and the commercial high-potential Mg anode (14 mpy). Increasing the small Al content significantly reduces the corrosion rate. Therefore, it proves that the performance of the Mg-Ca alloy is strongly influenced by the concentration of Al. (author)
Energy Technology Data Exchange (ETDEWEB)
Rodrigues, Josianne L.; Silva, Paulo R.O.; Santos, Raquel G.; Ferreira, Andrea V., E-mail: jlr@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)
2011-07-01
Thiosemicarbazones have attracted great pharmacological interest because of their biological properties, such as cytotoxic activity against multiple strains of human tumors. Due to the excellent properties of {sup 64}Cu, the copper complex N(4)-ortho-toluyl-2-acetylpyridine thiosemicarbazone (({sup 64}Cu)(H2Ac4oT)Cl) was developed for tumor detection by positron emission tomography. The radiopharmaceutical was produced in the TRIGA-IPR-R1 nuclear reactor at CDTN. In the present work, ({sup 64}Cu)(H2Ac4oT)Cl biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of the mice, as well as in the implanted tumor, were determined for ({sup 64}Cu)(H2Ac4oT)Cl. The dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among the various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 64}Cu in the tissue were considered in the dose calculations. (author)
Full scale assessment of pansharpening methods and data products
Aiazzi, B.; Alparone, L.; Baronti, S.; Carlà, R.; Garzelli, A.; Santurri, L.
2014-10-01
Quality assessment of pansharpened images is traditionally carried out either at degraded spatial scale, by checking the synthesis property of Wald's protocol, or at the full spatial scale, by separately checking the spectral and spatial consistencies. The spatial distortion of the QNR protocol and the spectral distortion of Khan's protocol may be combined into a unique quality index, referred to as hybrid QNR (HQNR), that is calculated at full scale. Alternatively, multiscale measurements of indices requiring a reference, like SAM, ERGAS and Q4, may be extrapolated to yield a quality measurement at the full scale of the fusion product, where a reference does not exist. Experiments on simulated Pléiades data, for which reference originals at full scale are available, highlight that quadratic polynomials having three-point support, i.e. fitting three measurements at as many progressively doubled scales, are adequate. Q4 is more suitable for extrapolation than ERGAS and SAM. The Q4 value predicted from multiscale measurements and the Q4 value measured at full scale, thanks to the reference original, differ by only a few percent for the six different state-of-the-art methods that were compared. HQNR is substantially comparable to the extrapolated Q4.
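The three-point quadratic extrapolation described above can be sketched as follows, with invented Q4 values at three progressively doubled degradation scales (not the paper's measurements):

```python
import numpy as np

# Hypothetical Q4 measurements at three degraded scales; use log2 of the
# degradation factor as the abscissa so the scales are equally spaced.
scales = np.array([4.0, 2.0, 1.0])   # degradation factors of the measurements
q4 = np.array([0.90, 0.93, 0.95])    # Q4 at each scale (illustrative)

# Three points determine the quadratic exactly; extrapolate one further
# halving (factor 0.5), standing in for the full-scale fusion product.
coeffs = np.polyfit(np.log2(scales), q4, 2)
q4_full = np.polyval(coeffs, np.log2(0.5))
print(q4_full)
```

Because three points fix a quadratic exactly, the fit carries no smoothing; the reliability of the extrapolated value rests entirely on the regularity of the index across scales, which is the paper's empirical claim.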
Cao, Le; Wei, Bing
2014-08-25
A finite-difference time-domain (FDTD) algorithm with a new method of plane wave excitation is used to investigate the radar cross section (RCS) characteristics of targets over a layered half space. Compared with the traditional plane wave excitation method, the memory and computation time requirements are greatly decreased. The FDTD calculation is performed with a plane wave incidence, and the far-field RCS is obtained by extrapolating the currently calculated data on the output boundary. However, available extrapolation methods have to evaluate the half-space Green function. In this paper, a new method which avoids using the complex and time-consuming half-space Green function is proposed. Numerical results show that this method is in good agreement with the classic algorithm and that it can be used for fast calculation of the scattering and radiation of targets over a layered half space. PMID:25321273
Interpolation method for forcing SCF convergence
International Nuclear Information System (INIS)
A continuous path connecting two SCF trial functions can be defined in a variety of ways. We introduce one which is, in a sense, the minimal path between these states. Corresponding occupied and virtual orbital pairs are defined. The variation of the SCF energy along the minimal path is fitted by a cubic polynomial using information supplied by the normal SCF calculation. The cubic fit is usually found to be an excellent approximation to the true energy function. Interpolation, or extrapolation, along the path effectively overcomes problems due to the use of a poor initial trial function, and always converges. The final approach to convergence can be slow. Various acceleration methods applicable to the end-game problem have been proposed in the literature. Our interpolation method ensures convergence when combined with almost any of these alternative methods for the closed-shell problem
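The cubic fit along such a path can be sketched as a Hermite fit: match a cubic E(t) on t in [0, 1] to the energies and path derivatives at the two endpoint trial functions, then locate its interior minimum. The endpoint values below are invented for illustration:

```python
import numpy as np

# Energy and path-derivative at the two endpoint trial functions
# (illustrative numbers, not from any actual SCF calculation).
E0, dE0 = -74.90, -0.80    # at t = 0
E1, dE1 = -75.30, 0.50     # at t = 1

# Cubic E(t) = a t^3 + b t^2 + c t + d matched to the four conditions
d = E0
c = dE0
a = 2 * (E0 - E1) + dE0 + dE1
b = 3 * (E1 - E0) - 2 * dE0 - dE1

# Interior stationary points: E'(t) = 3 a t^2 + 2 b t + c = 0;
# keep the root where the second derivative 6 a t + 2 b is positive.
roots = np.roots([3 * a, 2 * b, c])
t_min = min(r.real for r in roots
            if abs(r.imag) < 1e-12 and 6 * a * r.real + 2 * b > 0)
print(t_min)
```

Interpolating to t_min (or extrapolating if the minimum falls outside [0, 1]) gives the next trial function along the path.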
Method of self-similar factor approximants
International Nuclear Information System (INIS)
The method of self-similar factor approximants is completed by defining the approximants of odd orders, constructed from the power series with the largest term of an odd power. It is shown that the method provides good approximations for transcendental functions. In some cases, just a few terms in a power series make it possible to reconstruct a transcendental function exactly. Numerical convergence of the factor approximants is checked for several examples. A special attention is paid to the possibility of extrapolating the behavior of functions, with arguments tending to infinity, from the related asymptotic series at small arguments. Applications of the method are thoroughly illustrated by the examples of several functions, nonlinear differential equations, and anharmonic models
Clark, Joseph Warren
2012-01-01
In turbulent business environments, change is rapid, continuous, and unpredictable. Turbulence undermines those adaptive problem solving methods that generate solutions by extrapolating from what worked (or did not work) in the past. To cope with this challenge, organizations utilize trial-based problem solving (TBPS) approaches in which they…
Internal Error Propagation in Explicit Runge--Kutta Methods
Ketcheson, David I.
2014-09-11
In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
Interpolation methods and their use in radiation protection
International Nuclear Information System (INIS)
The presentation summarizes the results of using various interpolation methods for obtaining spatial data from point measurements. These methods were evaluated within State Office for Nuclear Safety (SONS) Science and Research Project No. 2/2008, 'Methods and Measures to Limit Generation and Liquidation of Consequences of Radioactive Matter Misuse by Terrorists'. Several field tests, in which short-lived radioactive material was released by explosion, were carried out and the measured data were processed. The essential goal is to find the most realistic method for the assessment of radiation events. Within the research project, three methods were used: Multilevel B-Spline, Triangulation and Kriging, using the freely available SAGA GIS software. The best solution for this sort of radiation event appears to be the Multilevel B-Spline method. It is quick, produces good-quality output data comparable with the much slower Kriging method, and allows extrapolation, in contrast to Triangulation. (author)
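The practical difference noted above (triangulation-based interpolation cannot extrapolate) can be demonstrated with SciPy: `griddata` with the triangulation-based linear method returns NaN outside the convex hull of the measurements, while a radial-basis-function interpolator still returns a value there. The "measurement" field below is synthetic:

```python
import numpy as np
from scipy.interpolate import griddata, RBFInterpolator

# Synthetic point measurements of a dose-rate-like field on scattered points
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(50, 2))
vals = np.exp(-((pts[:, 0] - 5) ** 2 + (pts[:, 1] - 5) ** 2) / 8)

# A query point outside the convex hull of the measurements
outside = np.array([[12.0, 12.0]])

tri = griddata(pts, vals, outside, method='linear')   # triangulation-based
rbf = RBFInterpolator(pts, vals)(outside)             # allows extrapolation

print(np.isnan(tri[0]), rbf[0])
```

RBF extrapolation stands in here for the spline/Kriging behaviour described in the record; whether the extrapolated values are physically trustworthy is exactly the question the field tests were meant to settle.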
Tafel, Külliki
2006-01-01
The influence of top executives on intra-organizational relations and on management activity. Diagrams: The content and overlap of the terms of corporate governance and management; The theoretical framework for the study; The degree of involvement of the board of directors in the strategic management process; Framework for treatment of the owner-CEO-employee chain of relations
International Nuclear Information System (INIS)
The method is based on perturbation of the reactor cell from a few up to a few tens of percent. Measurements were performed for square lattice cells of the zero power reactors Anna, NORA and RB, with metal uranium and uranium oxide fuel elements, and water, heavy water and graphite moderators. The character and functional dependence of the perturbations were obtained from the experimental results. Zero perturbation was determined by extrapolation, thus obtaining the real physical neutron flux distribution in the reactor cell. A simple diffusion theory for partial plate cell perturbation was developed for verification of the perturbation method. The results of these calculations showed that introducing the perturbation sample into the fuel flattens the thermal neutron density, depending on the amplitude of the applied perturbation. The extrapolation applied to the perturbed distributions was found to be justified
Bambynek, M
2002-01-01
The prototype of a primary standard has been developed, built and tested, which enables the realization of the unit of the absorbed dose to water for beta brachytherapy sources. In the course of the development of the prototype, the recommendations of the American Association of Physicists in Medicine (AAPM) Task Group 60 (TG60) and the Deutsche Gesellschaft fuer Medizinische Physik (DGMP) Arbeitskreis 18 (AK18) were taken into account. The prototype is based on a new multi-electrode extrapolation chamber (MEC) which meets, in particular, the requirements on high spatial resolution and small uncertainty. The central part of the MEC is a segmented collecting electrode which was manufactured in the clean room center of PTB by means of electron beam lithography on a wafer. A precise displacement device consisting of three piezoelectric macrotranslators has been incorporated to move the wafer collecting electrode against the entrance window. For adjustment of the wafer collecting electrode parallel to the entranc...
Standardization of Y-90 by tracing method
Energy Technology Data Exchange (ETDEWEB)
Nascimento, Tatiane S.; Koskinas, Marina F.; Matos, Izabela T.; Yamazaki, Ione M.; Dias, Mauro S., E-mail: tatianenas@usp.br, E-mail: koskinas@ipen.br, E-mail: izabelateles@gmail.br, E-mail: iomay1621@yahoo.com.br, E-mail: msdias@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Rajput, Mohamed U., E-mail: usman_rajput60@hotmail.com [Pakistan Institute of Nuclear Science and Technology, Nilore, Islamabad (Pakistan)
2013-07-01
This paper describes the procedure followed by the Nuclear Metrology Laboratory (LMN) at IPEN-CNEN/SP, Sao Paulo, Brazil, for the standardization of {sup 90}Y, undertaken by the tracing technique using the beta-gamma emitter {sup 24}Na as the tracer. The measurements were carried out by means of a 4πβ-γ coincidence system. The TAC method was used for registering the observed events. A Monte Carlo simulation for generating the extrapolation curve was applied to obtain the efficiency correction for determining the activity of the solution. The efficiency correction factor was also calculated by means of a semi-empirical formula. The {sup 90}Y activity results obtained by the two methods were compared. (author)
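The extrapolation-curve idea in 4πβ-γ coincidence counting: the observed quantity Nb·Ng/Nc is fitted against an inefficiency parameter and extrapolated to perfect beta efficiency, where it equals the activity. A linear sketch with invented numbers (the paper generates the curve by Monte Carlo instead):

```python
import numpy as np

# Synthetic coincidence-counting data: fit Nb*Ng/Nc against the
# inefficiency parameter x = (1 - eff)/eff and read the activity N0
# off at x = 0 (eff -> 1). All values are illustrative.
N0_true = 5000.0                          # disintegration rate (1/s)
eff = np.array([0.70, 0.75, 0.80, 0.85, 0.90])
x = (1 - eff) / eff
k = 0.08                                  # assumed linear response coefficient
y = N0_true * (1 + k * x)                 # modeled Nb*Ng/Nc

slope, intercept = np.polyfit(x, y, 1)
print(intercept)                          # extrapolated activity at x = 0
```

In practice the efficiency is varied experimentally (e.g. by discrimination level or absorbers), and the shape of the response curve, linear here by assumption, is what the Monte Carlo simulation supplies.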
Wavefield reconstruction methods for reverse time migration
International Nuclear Information System (INIS)
During pre-stack reverse time migration (RTM), the shot and receiver wavefields are extrapolated separately along opposite time directions, which means the shot wavefield must be saved; this is a bottleneck of RTM. The random boundary condition (RBC) method can be used to reconstruct the shot wavefield and so avoid the storage. The disadvantage of the RBC is that a free surface boundary condition (FSBC) then has to be used, because an RBC at the surface boundary induces severe noise throughout the imaging profile. The use of the FSBC is also harmful, because reflections from the surface generate imaging artifacts. In this paper, we use two different boundary conditions, both applying an absorbing boundary condition on the upper boundary, to reconstruct the shot wavefield exactly. The new schemes solve the free surface boundary problem and do not demand much memory. Numerical examples prove the efficiency of these methods. (paper)
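The memory saving that these boundary-condition schemes aim for can be illustrated in one dimension: with a reversible interior stencil and deterministic (here fixed, reflecting) boundaries, the whole shot-wavefield history can be rebuilt backwards from the final two snapshots alone. This is a toy model, not the paper's absorbing-boundary scheme:

```python
import numpy as np

# 1-D second-order acoustic stencil, fixed (zero) ends, CFL = 0.5
nx, nt, c, dx, dt = 201, 400, 1.0, 1.0, 0.5
r2 = (c * dt / dx) ** 2

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                        # initial impulse source
history = [u_prev.copy(), u.copy()]     # stored only to verify the result
for _ in range(nt):
    u_next = np.zeros(nx)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
    history.append(u.copy())

# Backward reconstruction from the last two snapshots only: the stencil is
# time-symmetric, so solving it for u_prev runs the propagation in reverse.
b, b_next = history[-2].copy(), history[-1].copy()
for _ in range(nt):
    b_prev = np.zeros(nx)
    b_prev[1:-1] = (2 * b[1:-1] - b_next[1:-1]
                    + r2 * (b[2:] - 2 * b[1:-1] + b[:-2]))
    b_next, b = b, b_prev

max_err = np.abs(b - history[0]).max()  # mismatch with the true first snapshot
print(max_err)
```

With absorbing boundaries the interior stencil alone is no longer reversible, which is why the schemes above must treat the boundaries specially.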
A new method to study the biodegradation kinetics of organic trace pollutants by activated sludge
Temmink, H.; Klapwijk, A.
2003-01-01
A reliable prediction of the behaviour of organic trace compounds in activated sludge plants requires an accurate input of the biodegradation kinetics. Often these kinetics are extrapolated from the results of standardised biodegradation tests. However, these tests generally are not designed to yield kinetic information and do not reflect the conditions in activated sludge plants. To overcome these problems a new test method was developed which is referred to as a ‘by-pass’ test. The test...
Production and characterization of TI/PbO2 electrodes by a thermal-electrochemical method
Scientific Electronic Library Online (English)
Edison A., Laurindo; Nerilso, Bocchi; Romeu C., Rocha-Filho.
2000-08-01
Full Text Available Looking for electrodes with a high overpotential for the oxygen evolution reaction (OER), useful for the oxidation of organic pollutants, Ti/PbO2 electrodes were prepared by a thermal-electrochemical method and their performance was compared with that of electrodeposited electrodes. The open-circuit potential for these electrodes in 0.5 mol L-1 H2SO4 presented quite stable, similar values, within the potential range of the stability region of PbO2 in Pourbaix diagrams. X-ray diffraction analyses showed the thermal-electrochemical oxide to be a mixture of ort-PbO, tetr-PbO and ort-PbO2. 
On the other hand, the electrodes obtained by electrodeposition were in the tetr-PbO2 form. Analyses by scanning electron microscopy showed that the basic morphology of the thermal-electrochemical PbO2 is determined in the thermal step, being quite distinct from that of the electrodeposited electrodes. Polarization curves in 0.5 mol L-1 H2SO4 showed that in the case of the thermal-electrochemical PbO2 electrodes the OER was shifted to more positive potentials. However, the values of the Tafel slopes, quite high, indicate that passivating films were possibly formed on the Ti substrates, which could eventually explain the somewhat low current values for OER.
Error estimation in the histogram Monte Carlo method
Newman, M E J
1999-01-01
We examine the sources of error in the histogram reweighting method for Monte Carlo data analysis. We demonstrate that, in addition to the standard statistical error which has been studied elsewhere, there are two other sources of error, one arising through correlations in the reweighted samples, and one arising from the finite range of energies sampled by a simulation of finite length. We demonstrate that while the former correction is usually negligible by comparison with statistical fluctuations, the latter may not be, and give criteria for judging the range of validity of histogram extrapolations based on the size of this latter correction.
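The reweighting step under discussion can be sketched on synthetic data: samples drawn at inverse temperature b0 are reweighted by exp(-(b1 - b0)E) to estimate an average at a nearby b1. A Gaussian energy model is assumed purely for illustration:

```python
import numpy as np

# Samples of energy E drawn at inverse temperature b0 (Gaussian model,
# mean 10, std 2); reweight to estimate <E> at a nearby b1. For a Gaussian
# the exact reweighted mean is mu - (b1 - b0) * sigma^2 = 9.8.
rng = np.random.default_rng(1)
b0, b1 = 1.0, 1.05
E = rng.normal(10.0, 2.0, size=200_000)   # samples at b0

w = np.exp(-(b1 - b0) * E)                # reweighting factors
E_b1 = np.sum(w * E) / np.sum(w)          # weighted estimate of <E> at b1
print(E_b1)
```

The paper's point is visible in this setup: if b1 moves so far from b0 that the important energies were barely sampled, the weights are dominated by the tail of the histogram and the estimate degrades, independently of the ordinary statistical error.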
Scientific Electronic Library Online (English)
Francisco Javier, Ozores Suárez.
2013-09-01
Full Text Available Introduction: the tricuspid annular plane systolic excursion (TAPSE) is a useful parameter to evaluate right ventricular function in pediatric patients. Objectives: to show the normal values of TAPSE in Cuban children by age groups, and to describe their relationship with age, left ventricular output, pulmonary acceleration time and the ejection fraction of the left ventricle. 
Methods: a prospective study included 102 normal children to whom TAPSE was measured by adapting the program for distance mensuration between point E and the interventricular septum. Results: average TAPSE was 19.4 mm (DS±6) with mean values equal to 9.5 mm in the first week up to 21.2 mm at 5 years and 24.1 in older children. There was significant positive correlation between TAPSE figures and age (r= 0.679) described in equation TAPSE= 13.2787 + 5.2354 log (X). The TAPSE values were presented in five age groups. It was also found that there was significant correlation among TAPSE, pulmonary acceleration time and systolic output of the left ventricle. Conclusions: there exist five well-defined age groups, the major changes occur before 5 years of age and log relation was found among TAPSE, age and pulmonary acceleration time. The used program is recommended as an alternative to measure TAPSE.
Method of continued fractions for on- and off-shell t matrix of local and nonlocal potentials
International Nuclear Information System (INIS)
The method of continued fractions recently proposed by the authors is generalized to an off-shell t-matrix calculation for any nonlocal nonsymmetric interaction. The efficiency of the method is demonstrated for some examples in nuclear physics. The method is not only very efficient, but yields very accurate results when it is combined with the Romberg extrapolation method. A new separable approximation of a potential and the off-shell t matrix is proposed in connection with the method of continued fractions
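The Romberg extrapolation that the abstract combines with the continued-fraction scheme is ordinary Richardson extrapolation of trapezoid estimates at successively halved step sizes. A generic sketch on a plain integral (not the t-matrix calculation itself):

```python
import numpy as np

def trap(f, a, b, n):
    """Composite trapezoid rule with n panels."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Column 0: trapezoid estimates of the integral of sin on [0, pi] (= 2)
levels = 5
R = [[trap(np.sin, 0.0, np.pi, 2 ** k) for k in range(levels)]]

# Richardson extrapolation: each column cancels the next error order h^(2j)
for j in range(1, levels):
    R.append([(4 ** j * R[j - 1][i + 1] - R[j - 1][i]) / (4 ** j - 1)
              for i in range(levels - j)])

print(abs(R[levels - 1][0] - 2.0))   # fully extrapolated error
```

The fully extrapolated entry is many orders of magnitude more accurate than the finest plain trapezoid estimate, which is the accuracy gain the abstract reports for the t-matrix results.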
Higher order methods for burnup calculations with Bateman solutions
International Nuclear Information System (INIS)
Highlights: • Average microscopic reaction rates need to be estimated at each step. • Traditional predictor-corrector methods use zeroth and first order predictions. • Increasing the predictor order greatly improves results. • Increasing the corrector order does not improve results. - Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth and first order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
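The predictor-order effect described above can be seen in a toy one-nuclide depletion model dN/dt = -r(t)N: each step applies a Bateman-style exact solution with a constant representative rate, and the question is whether that rate is frozen at the step start (zeroth order) or linearly extrapolated to the step midpoint from the previous step. All values are illustrative, not Serpent data:

```python
import numpy as np

def r(t):                       # illustrative, slowly varying reaction rate
    return 1.0 + 0.5 * t

h = 0.5                         # step length
N_const = N_lin = 1.0
r_prev = r(-h)                  # rate seen on a fictitious previous step
for i in range(4):
    t = i * h
    # zeroth order: freeze r at the step start
    N_const *= np.exp(-r(t) * h)
    # first order: linearly extrapolate r to the step midpoint
    r_mid = r(t) + (r(t) - r_prev) / h * (h / 2)
    N_lin *= np.exp(-r_mid * h)
    r_prev = r(t)

# Exact solution: N = exp(-integral of r over [0, 2]) = exp(-3)
N_exact = np.exp(-(1.0 * 2.0 + 0.5 * 2.0 ** 2 / 2))
print(N_const, N_lin, N_exact)
```

For this linearly varying rate the midpoint extrapolation reproduces the exact answer, while freezing the rate at the step start leaves a first-order error, mirroring the paper's finding that raising the predictor order is what pays off.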
International Nuclear Information System (INIS)
Historically, geophysical methods have been used extensively to successfully explore the subsurface for petroleum, gas, mineral, and geothermal resources. Their application, however, for site characterization, and monitoring the performance of near surface waste sites or repositories has been somewhat limited. Presented here is an overview of the geophysical methods that could contribute to defining the subsurface heterogeneity and extrapolating point measurements at the surface and in boreholes to volumetric descriptions in a fractured rock. In addition to site characterization a significant application of geophysical methods may be in performance assessment and in monitoring the repository to determine if the performance is as expected
On the equivalence of LIST and DIIS methods for convergence acceleration.
Garza, Alejandro J; Scuseria, Gustavo E
2015-04-28
Self-consistent field extrapolation methods play a pivotal role in quantum chemistry and electronic structure theory. We, here, demonstrate the mathematical equivalence between the recently proposed family of LIST methods [Wang et al., J. Chem. Phys. 134, 241103 (2011); Y. K. Chen and Y. A. Wang, J. Chem. Theory Comput. 7, 3045 (2011)] and the general form of Pulay's DIIS [Chem. Phys. Lett. 73, 393 (1980); J. Comput. Chem. 3, 556 (1982)] with specific error vectors. Our results also explain the differences in performance among the various LIST methods. PMID:25933749
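A minimal sketch of Pulay's DIIS in the general form referred to above, applied to an illustrative 2-D affine fixed-point map rather than an actual SCF problem (the map g and all numbers are invented for the demonstration):

```python
import numpy as np

A = np.array([[0.7, 0.2], [0.1, 0.5]])
bvec = np.array([1.0, 2.0])

def g(x):                        # contractive affine map standing in for SCF
    return A @ x + bvec

xs, rs = [], []
x = np.zeros(2)
for _ in range(10):
    res = g(x) - x               # error (residual) vector
    if np.linalg.norm(res) < 1e-10:
        break
    xs.append(x.copy())
    rs.append(res.copy())
    m = len(rs)
    # DIIS equations: minimize |sum c_i r_i|^2 subject to sum c_i = 1,
    # solved via the bordered (Lagrange-multiplier) linear system.
    B = np.empty((m + 1, m + 1))
    B[:m, :m] = [[ri @ rj for rj in rs] for ri in rs]
    B[:m, m] = B[m, :m] = -1.0
    B[m, m] = 0.0
    rhs = np.zeros(m + 1)
    rhs[m] = -1.0
    c = np.linalg.solve(B, rhs)[:m]
    x = sum(ci * (xi + ri) for ci, xi, ri in zip(c, xs, rs))

x_star = np.linalg.solve(np.eye(2) - A, bvec)   # true fixed point
print(np.allclose(x, x_star))
```

For this affine map the residual is affine in x, so three stored vectors suffice for DIIS to land on the fixed point exactly; the LIST variants discussed in the paper correspond, per its result, to the same construction with different error vectors.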
International Nuclear Information System (INIS)
A method is presented for determining the challenge to an air cleaning system resulting from an accidental explosion in a process cell of a fuel cycle facility. In many safety analyses, this quantity is estimated by multiplying the volume of the process cell by the maximum concentration of airborne material that is reasonably stable to agglomeration and sedimentation. Particle sizes are inferred from the assumption of concentration stability. The suggested method is based on extrapolation of data obtained for the explosive dispersal of chemical agents. Application of the extrapolated information to fuel cycle facilities results in an estimate of the total material airborne as well as the particle size distribution. An important variable is the weight ratio of inert material to that of explosive (mass ratio). As the mass ratio is expected to be high in fuel cycle facilities, the method predicts that airborne material will have size distributions with relatively large mean values, following which substantial settling will occur. An illustrative calculation that takes mass ratio and settling into account suggests that the total filter challenge may be greater than previously estimated, but that the fraction of that challenge smaller than 10 micrometers may be very low. For use in safety analyses, the method requires experimental validation of the extrapolation of the reference data to the conditions existing in a fuel cycle facility
Eng, Alex Yong Sheng; Ambrosi, Adriano; Sofer, Zdeněk; Šimek, Petr; Pumera, Martin
2014-12-23
Beyond MoS2, the first transition metal dichalcogenide (TMD) to have gained recognition as an efficient catalyst for the hydrogen evolution reaction (HER), interest in other TMD nanomaterials is steadily proliferating. This is particularly true in the field of electrochemistry, with a myriad of emerging applications ranging from catalysis to supercapacitors and solar cells. Despite this rise, current understanding of their electrochemical characteristics is especially lacking. We therefore examine the inherent electroactivities of various chemically exfoliated TMDs (MoSe2, WS2, WSe2) and their implications for sensing and for catalysis of the hydrogen evolution and oxygen reduction reactions (ORR). The TMDs studied are found to possess distinctive inherent electroactivities, which, together with their catalytic effects for the HER, are revealed to depend strongly on the chemical exfoliation route and the metal-to-chalcogen composition, particularly in MoSe2. Although its inherent activity exhibits large variations depending on the exfoliation procedure, MoSe2 is also the most efficient HER catalyst, with a low overpotential of -0.36 V vs RHE (at 10 mA cm(-2) current density) and a fairly low Tafel slope of approximately 65 mV/dec after BuLi exfoliation. In addition, it demonstrates a fast heterogeneous electron transfer rate, with an observed k0 of 9.17×10(-4) cm s(-1) toward ferrocyanide, better than that seen for conventional glassy carbon electrodes. Knowledge of TMD electrochemistry is essential for the rational development of future applications; inherent TMD activity may potentially limit certain purposes, but intended objectives can nonetheless be achieved by careful selection of TMD compositions and exfoliation methods. PMID:25453501
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente Jose
2011-11-01
This report explores some important considerations in devising a practical and consistent framework and methodology for utilizing experiments and experimental data to support modeling and prediction. A pragmatic and versatile 'Real Space' approach is outlined for confronting experimental and modeling bias and uncertainty to mitigate risk in modeling and prediction. The elements of experiment design and data analysis, data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. Rationale is given for the various choices underlying the Real Space end-to-end approach. The approach adopts and refines some elements and constructs from the literature and adds pivotal new elements and constructs. Crucially, the approach reflects a pragmatism and versatility derived from working many industrial-scale problems involving complex physics and constitutive models, steady-state and time-varying nonlinear behavior and boundary conditions, and various types of uncertainty in experiments and models. The framework benefits from a broad exposure to integrated experimental and modeling activities in the areas of heat transfer, solid and structural mechanics, irradiated electronics, and combustion in fluids and solids.
International Nuclear Information System (INIS)
The austenitic stainless steel X6CrNi 1811 (DIN 1.4948), which is used as a structural material for the German Fast Breeder Reactor SNR 300, was creep-tested in a temperature range of 550-750 °C in the base-material condition and as welded material. The results on creep rupture strength and creep behaviour up to ≥ 30,000 hrs experimentally support the extrapolation up to operating times ≥ 10^5 hours. The microstructure in the base material of the broken test specimens was studied by light and transmission electron microscopy and partly evaluated in quantitative form. All precipitates found at the grain boundaries and inside the grains belong to the type M23C6. Their nucleation is bound to defects of the crystal lattice and hence depends on the configuration and movement of the dislocations. Therefore a mutual correlation exists between the creep process and the nucleation of precipitates. This explains the divergence of the plot log ε_min vs. log σ_0 from a straight line and influences the ductility values. The precipitates at the grain boundaries, which nucleate in competition with the matrix precipitates, favour intercrystalline rupture. (orig.)
Model Mixing for Long-Term Extrapolation.
Czech Academy of Sciences Publication Activity Database
Ettler, P.; Kárný, Miroslav; Nedoma, Petr
Vienna : ARGESIM-ARGE Simulation News, 2007. s. 275-275. [EUROSIM Congress on Modelling and Simulation /6./. 09.09.2007-13.09.2007, Ljubljana] R&D Projects: GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Simulation * Modelling * Estimation * Multiple models Subject RIV: BB - Applied Statistics, Operational Research http://as.utia.cz/publications/2007/EttKarNed_07b.pdf
Wavefield Extrapolation in Pseudo-depth Domain
Ma, Xuxin
2011-12-11
Wave-equation based seismic migration and inversion tools are widely used by the energy industry to explore for hydrocarbon and mineral resources. By design, most of these techniques simulate wave propagation in a space domain whose vertical axis is depth measured from the surface. Vertical depth is popular because it is a straightforward mapping of the subsurface space. It is, however, not computationally cost-effective, because the wavelength changes with the local elastic wave velocity, which in general increases with depth in the Earth; as a result, the number of samples per wavelength also increases with depth. To avoid spatial aliasing in deep fast media, the seismic wave is oversampled in shallow slow media, which increases the total computation cost. This issue is effectively tackled by using a vertical time axis instead of vertical depth, because in a vertical time representation the "wavelength" is essentially the time period for vertical rays. This thesis extends the vertical time axis to a pseudo-depth axis, which has units of distance while preserving the properties of the vertical time representation. To explore the potential of wave-equation based imaging in the pseudo-depth domain, a partial differential equation (PDE) is derived to describe acoustic waves in this new domain. This new PDE is inherently anisotropic because of the use of a constant vertical velocity to convert between depth and vertical time. Such anisotropy results in lower reflection coefficients compared with conventional space-domain modeling results. This feature helps to suppress the low-wavenumber artifacts in reverse-time migration images, which are caused by the widely used cross-correlation imaging condition. This thesis illustrates modeling acoustic waves in both the conventional space domain and the pseudo-depth domain. The numerical tool used to model acoustic waves is built on the lowrank approximation of Fourier integral operators.
To investigate the potential of seismic imaging in the pseudo-depth domain, examples of zero-offset migration are implemented in pseudo-depth domain and compared with conventional space domain imaging results.
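The sampling argument above can be made concrete with a toy depth-to-vertical-time conversion. The sketch below is illustrative only (the linear velocity model and all numbers are assumptions, not taken from the thesis): it maps a depth axis onto a uniform vertical-time grid and shows that the corresponding depth increments grow with depth, so the fast deep zone is no longer oversampled.

```python
import numpy as np

# Hypothetical 1-D velocity model: velocity increasing linearly with depth (m/s).
z = np.linspace(0.0, 3000.0, 3001)   # depth axis, 1 m steps
v = 1500.0 + 0.8 * z                 # assumed linear velocity gradient

# Vertical one-way traveltime tau(z) = integral of dz' / v(z').
tau = np.concatenate(([0.0], np.cumsum(np.diff(z) / v[:-1])))

# Resample onto a uniform tau grid: equal sampling per "period" for vertical rays.
tau_uniform = np.linspace(0.0, tau[-1], 1001)
z_of_tau = np.interp(tau_uniform, tau, z)

# The depth increment per tau sample grows where velocity is high (deep),
# so uniform-tau sampling avoids oversampling the slow shallow zone.
dz_shallow = z_of_tau[1] - z_of_tau[0]
dz_deep = z_of_tau[-1] - z_of_tau[-2]
print(dz_shallow, dz_deep)
```

A uniform grid in vertical time (or pseudo-depth) therefore spends far fewer samples per metre in the deep, fast part of the model than a uniform depth grid does.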
Model Mixing for Long-Term Extrapolation.
Czech Academy of Sciences Publication Activity Database
Ettler, P.; Kárný, Miroslav; Nedoma, Petr
Vienna : ARGESIM-ARGE Simulation News, 2007, s. 1-6. ISBN 978-3-901608-32-2. [EUROSIM Congress on Modelling and Simulation /6./. Ljubljana (SI), 09.09.2007-13.09.2007] R&D Projects: GA AV ČR 1ET100750401; GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : Simulation * Modelling * Estimation * Multiple models Subject RIV: BB - Applied Statistics, Operational Research http://as.utia.cz/publications/2007/EttKarNed_07.pdf
Masdi Muhammad; Mohd Amin A. Majid; Asad Ali
2012-01-01
The objective of this study is to use wall-thickness data for degradation analysis of a feed gas filter vessel and to compare the results with the remaining-life evaluation method provided in API 510. An exponential degradation model fitted the degradation (wall thickness) data best. Extrapolation of the model gave the failure time for each thickness measurement location, thus providing a failure data set to be analyzed for the reliability function. The results obtained show that the deg...
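As a hedged illustration of the workflow described (not the study's data or code), the sketch below fits an exponential thinning model to assumed wall-thickness readings at one location and extrapolates to the time at which an assumed retirement thickness is reached:

```python
import numpy as np

# Assumed inspection data: wall thickness (mm) measured over several years.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
w = np.array([12.0, 11.1, 10.3, 9.5, 8.8])

# Exponential degradation model w(t) = w0 * exp(-b * t); linearize with log.
slope, intercept = np.polyfit(t, np.log(w), 1)
b, w0 = -slope, np.exp(intercept)

# Extrapolate to the time at which the thickness reaches an assumed
# retirement limit, giving one "failure time" for this location.
w_min = 6.0
t_fail = np.log(w0 / w_min) / b
print(t_fail)
```

Repeating this fit at every thickness measurement location yields the set of failure times that feeds the reliability analysis.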
International Nuclear Information System (INIS)
TiO2–NiO nanocomposite thin films were deposited on 316L stainless steel by a sol–gel dip-coating technique. Techniques such as differential thermal analysis, thermogravimetric analysis, X-ray diffraction, Fourier transform infrared spectrometry, scanning electron microscopy and scanning probe microscopy were used to characterize the structure of the coatings. The corrosion resistance of the coatings was evaluated using Tafel polarization and electrochemical impedance spectroscopy tests on uncoated and coated specimens in a 3.5% NaCl solution at room temperature. It was found that, to obtain the desirable structure, the coatings should be calcined at 600 °C for one and a half hours. NiTiO3, anatase and rutile were the phases obtained under different calcination conditions in an air atmosphere. The corrosion tests indicated that on increasing the number of dippings from 2 to 4 the corrosion current density first decreases, but on increasing it to 6 it increases. Also, the corrosion current density decreased from 186.7 nA·cm^-2 (uncoated steel) to 34.21 nA·cm^-2 (80%TiO2–20%NiO), and the corrosion potential increased from −150.2 mV (uncoated steel) to −107.3 mV (67%TiO2–33%NiO). - Highlights: • TiO2–NiO thin films were deposited on 316L stainless steel using a sol–gel method. • Different compositions, annealing times and temperatures resulted in various phases. • Films having different compositions showed various surface morphologies. • Films having a composition of 80%TiO2–20%NiO showed good corrosion protection.
Alternating proximal gradient method for nonnegative matrix factorization
Xu, Yangyang
2011-01-01
Nonnegative matrix factorization has been widely applied in face recognition, text mining, and spectral analysis. This paper proposes an alternating proximal gradient method for solving this problem. Under a uniformly positive lower-bound assumption on the iterates, any limit point can be proved to satisfy the first-order optimality conditions. A Nesterov-type extrapolation technique is then applied to accelerate the algorithm. Though this technique was originally developed for convex programs, it turns out to work very well for the non-convex nonnegative matrix factorization problem. Extensive numerical experiments illustrate the efficiency of the alternating proximal gradient method and the acceleration technique. Especially on real data sets, the accelerated method is markedly faster than state-of-the-art algorithms while achieving comparable solution quality.
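A minimal sketch of the scheme the abstract describes (alternating proximal gradient steps with Nesterov-type extrapolation) is given below. It is written from the description only, not from the paper's code; the problem sizes, step rule and iteration count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic nonnegative data M = W_true @ H_true (assumed shapes, rank 4).
W_true, H_true = rng.random((30, 4)), rng.random((4, 20))
M = W_true @ H_true

def prox_grad_step(X, grad, L):
    """Gradient step with step size 1/L, then projection onto X >= 0."""
    return np.maximum(X - grad / L, 0.0)

W, H = rng.random((30, 4)), rng.random((4, 20))
Wm, Hm = W.copy(), H.copy()   # extrapolated points
t_prev = 1.0
for _ in range(200):
    # Update W with H fixed; the gradient's Lipschitz constant is ||H H^T||_2.
    L_w = np.linalg.norm(H @ H.T, 2)
    W_new = prox_grad_step(Wm, (Wm @ H - M) @ H.T, L_w)
    # Update H with the new W fixed.
    L_h = np.linalg.norm(W_new.T @ W_new, 2)
    H_new = prox_grad_step(Hm, W_new.T @ (W_new @ Hm - M), L_h)
    # Nesterov-type extrapolation of both blocks.
    t = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t_prev ** 2))
    beta = (t_prev - 1.0) / t
    Wm = W_new + beta * (W_new - W)
    Hm = H_new + beta * (H_new - H)
    W, H, t_prev = W_new, H_new, t

rel_err = np.linalg.norm(W @ H - M) / np.linalg.norm(M)
print(rel_err)
```

On this exactly factorizable toy problem the extrapolated iteration drives the relative fit error down quickly; the paper's algorithm includes further safeguards for general data.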
Methods for measuring arctic and alpine shrub growth : a review
DEFF Research Database (Denmark)
Myers-Smith, Isla; Hallinger, Martin
2015-01-01
Shrubs have increased in abundance and dominance in arctic and alpine regions in recent decades. This often dramatic change, likely due to climate warming, has the potential to alter both the structure and function of tundra ecosystems. The analysis of shrub growth is improving our understanding of tundra vegetation dynamics and environmental changes. However, dendrochronological methods developed for trees need to be adapted to the morphology and growth eccentricity of shrubs. Here, we review current and developing methods to measure radial and axial growth, estimate age, and assess growth dynamics in relation to environmental variables. Recent advances in sampling methods, analysis and applications have improved our ability to investigate growth and recruitment dynamics of shrubs. However, to extrapolate findings to the biome scale, future dendroecological work will require improved approaches that better address variation in growth within parts of the plant, among individuals within populations and between species.
Directory of Open Access Journals (Sweden)
Sie S. T.
2006-11-01
Full Text Available Research and development studies in a laboratory are necessarily conducted on a scale which is orders of magnitude smaller than that of commercial practice. In the case of the development and commercialization of an unprecedented novel process technology, available laboratory results have to be translated into the envisaged technology on a commercial scale, i.e. the problem is that of scaling up. In many circumstances, however, the commercial technology is more or less defined as far as the type of reactor is concerned, and laboratory studies are concerned with generating predictive information on the behaviour of new catalysts, alternative feedstocks, etc., in such a reactor. In many cases the complexity of the feed composition and reaction kinetics precludes predictions based on a combination of fundamental kinetic data and computer models, so that there is no option other than to simulate the commercial reactor on a laboratory scale, i.e. the problem is that of scaling down. From the point of view of R&D efficiency, the scale of the laboratory experiments should be as small as possible without detracting from the meaningfulness of the results. In the present paper some problems in the scaling-down of a trickle-flow reactor, as applied in hydrotreating processes, to kinetically equivalent laboratory reactors of different sizes are discussed. Two main aspects relating to inequalities in fluid dynamics resulting from the differences in scale are treated in more detail, viz. deviations from ideal plug flow and non-ideal wetting or irrigation of the catalyst particles. Although a laboratory reactor can never be a true small-scale replica of a commercial trickle-flow reactor in all respects, it can nevertheless be made to provide representative data as far as the catalytic conversion aspects are concerned.
By resorting to measures such as catalyst bed dilution with fine, catalytically inert material it proves possible to carry out meaningful process research on hydrotreating processes at the scale of micro-reactors.
Virtual Power Extraction Method of Designing Starting Control Law of Turbofan Engine
Directory of Open Access Journals (Sweden)
Yuchun Chen
2009-07-01
Full Text Available A virtual power extraction method (VPEM) for designing the starting control law of a turbofan engine is presented, and a computer program was developed. The VPEM for designing the starting control law is based on the principle of the VPEM for designing the acceleration control law of a turbofan engine, combined with extrapolation of the component maps. The starting control law of a turbofan engine at a single flight state and over the whole starting envelope was designed using the program, and the computed results were analyzed. The results show that the VPEM is accurate and effective for designing the starting control law of a turbofan engine.
International Nuclear Information System (INIS)
The homotopy perturbation method is used to formulate a new analytic solution of the neutron diffusion equation both for a sphere and a hemisphere of fissile material. Different boundary conditions are investigated; including zero flux on boundary, zero flux on extrapolated boundary, and radiation boundary condition. The interaction between two hemispheres with opposite flat faces is also presented. Numerical results are provided for one-speed fast neutrons in 235U. A comparison with Bessel function based solutions demonstrates that the homotopy perturbation method can exactly reproduce the results. The computational implementation of the analytic solutions was found to improve the numeric results when compared to finite element calculations.
Corrosion Behavior of Arc Weld and Friction Stir Weld in Al 6061-T6 Alloys
International Nuclear Information System (INIS)
For the evaluation of the corrosion resistance of Al 6061-T6 alloy, Tafel measurements and immersion tests were performed on friction stir welds (FSW) and gas metal arc welds (GMAW). The Tafel and immersion test results indicated that the GMA weld was severely attacked compared with the friction stir weld. This may be mainly due to a galvanic corrosion mechanism acting on the GMA weld
Guan, Yongtao; Li, Yehua; Sinha, Rajita
2011-01-01
In a cocaine dependence treatment study, we use linear and nonlinear regression models to model posttreatment cocaine craving scores and first cocaine relapse time. A subset of the covariates are summary statistics derived from baseline daily cocaine use trajectories, such as baseline cocaine use frequency and average daily use amount. These summary statistics are subject to estimation error and can therefore cause biased estimators for the regression coefficients. Unlike classical measurement error problems, the error we encounter here is heteroscedastic with an unknown distribution, and there are no replicates for the error-prone variables or instrumental variables. We propose two robust methods to correct for the bias: a computationally efficient method-of-moments-based method for linear regression models and a subsampling extrapolation method that is generally applicable to both linear and nonlinear regression models. Simulations and an application to the cocaine dependence treatment data are used to illustrate the efficacy of the proposed methods. Asymptotic theory and variance estimation for the proposed subsampling extrapolation method and some additional simulation results are described in the online supplementary material. PMID:21984854
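The subsampling-extrapolation idea can be illustrated with a toy simulation. This is a hedged sketch of the general principle only, not the authors' estimator: using fewer daily records in the covariate summary inflates its estimation error and attenuates the regression slope, and extrapolating the slope back toward zero error corrects the bias.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 500, 30                       # subjects, daily records per subject
x_true = rng.normal(0.0, 1.0, n)     # true (latent) baseline usage level
daily = x_true[:, None] + rng.normal(0.0, 2.0, (n, m))   # noisy daily data
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.5, n)         # outcome, true slope 2

def slope_using_k_days(k, reps=50):
    """Average regression slope when the covariate is a k-day sample mean."""
    slopes = []
    for _ in range(reps):
        cols = rng.choice(m, size=k, replace=False)
        xbar = daily[:, cols].mean(axis=1)    # error-prone summary statistic
        slopes.append(np.polyfit(xbar, y, 1)[0])
    return float(np.mean(slopes))

# Smaller subsamples -> larger measurement error -> stronger attenuation.
ks = np.array([5, 10, 15, 20, 30])
betas = np.array([slope_using_k_days(k) for k in ks])

# Extrapolate the slope linearly in 1/k (proportional to the error variance)
# back to 1/k -> 0, i.e. toward the error-free covariate.
coef = np.polyfit(1.0 / ks, betas, 1)
beta_corrected = np.polyval(coef, 0.0)
print(betas[-1], beta_corrected)
```

The extrapolated slope moves from the attenuated full-data estimate toward the true value of 2; the paper supplies the asymptotic theory and variance estimation that make such a correction rigorous.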
Reconstructing 3D x-ray CT images of polymer gel dosimeters using the zero-scan method
International Nuclear Information System (INIS)
In this study x-ray CT has been used to produce a 3D image of an irradiated PAGAT gel sample, with noise-reduction achieved using the 'zero-scan' method. The gel was repeatedly CT scanned and a linear fit to the varying Hounsfield unit of each pixel in the 3D volume was evaluated across the repeated scans, allowing a zero-scan extrapolation of the image to be obtained. To minimise heating of the CT scanner's x-ray tube, this study used a large slice thickness (1 cm), to provide image slices across the irradiated region of the gel, and a relatively small number of CT scans (63), to extrapolate the zero-scan image. The resulting set of transverse images shows reduced noise compared to images from the initial CT scan of the gel, without being degraded by the additional radiation dose delivered to the gel during the repeated scanning. The full, 3D image of the gel has a low spatial resolution in the longitudinal direction, due to the selected scan parameters. Nonetheless, important features of the dose distribution are apparent in the 3D x-ray CT scan of the gel. The results of this study demonstrate that the zero-scan extrapolation method can be applied to the reconstruction of multiple x-ray CT slices, to provide useful 2D and 3D images of irradiated dosimetry gels.
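The per-pixel zero-scan extrapolation is straightforward to sketch. The toy below uses assumed numbers (the per-scan drift stands in for dose-induced changes during repeated scanning and is not taken from the paper): each pixel's Hounsfield value is fitted linearly against scan number, and the intercept is kept as the noise-reduced, zero-scan image.

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed toy volume: true Hounsfield units for a 4x4 slice, 63 repeat scans.
true_hu = rng.uniform(0.0, 40.0, (4, 4))
n_scans = 63
scan_idx = np.arange(1, n_scans + 1)

# Each repeat scan adds noise plus a small per-scan drift to every pixel.
drift = 0.05
scans = (true_hu[None, :, :] + drift * scan_idx[:, None, None]
         + rng.normal(0.0, 3.0, (n_scans, 4, 4)))

# Zero-scan method: per-pixel linear fit of HU vs scan number, then
# extrapolate to scan 0 to estimate a noise-reduced, drift-free image.
flat = scans.reshape(n_scans, -1)
coeffs = np.polyfit(scan_idx, flat, 1)    # shape (2, n_pixels): slope, intercept
zero_scan = coeffs[1].reshape(4, 4)       # intercept = extrapolated image

single_scan_err = np.abs(scans[0] - true_hu - drift).mean()
zero_scan_err = np.abs(zero_scan - true_hu).mean()
print(single_scan_err, zero_scan_err)
```

The intercept image is both less noisy than any single scan and free of the accumulated per-scan change, which is the point of extrapolating to "scan zero".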
International Nuclear Information System (INIS)
In high-accuracy residual stress measurement by the neutron diffraction method, it is important to obtain d0(hkl), the strain-free lattice spacing of the measured material, exactly. We attempted a new, simple method to measure d0(hkl) by neutron diffraction while the sample rotates at random. In this study, the d0(hkl) of extra super duralumin with texture was measured by our method. The technique calculates d0(hkl) from the a0 obtained by the extrapolation method. To examine the reliability of the new method, the a0 obtained by the new technique was compared with the a0 obtained by conventional powder neutron diffraction. The differences were 8 × 10^-5 nm or less. This result confirms that a0 and d0(hkl) are obtained by the new technique with sufficient accuracy for neutron residual stress analysis. (author)
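The record does not specify which extrapolation is used to obtain a0. A classic example of such a scheme is the Nelson-Riley construction, in which apparent lattice parameters from individual reflections are plotted against the Nelson-Riley function and extrapolated to where it vanishes (theta = 90 deg), removing systematic errors. The sketch below uses invented data:

```python
import numpy as np

# Invented apparent lattice parameters a(theta) with a systematic error
# proportional to the Nelson-Riley function (a common aberration model).
theta = np.deg2rad(np.array([20.0, 30.0, 40.0, 55.0, 70.0]))
a_true = 0.40496                    # nm, assumed true lattice parameter
nr = np.cos(theta) ** 2 / np.sin(theta) + np.cos(theta) ** 2 / theta
a_apparent = a_true * (1.0 + 0.002 * nr)

# Linear fit against the Nelson-Riley function; the intercept at nr = 0 is a0.
slope, a0 = np.polyfit(nr, a_apparent, 1)
print(a0)
```

Because the systematic error vanishes with the Nelson-Riley function, the intercept recovers the true lattice parameter.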
The Cn method for approximation of the Boltzmann equation
International Nuclear Information System (INIS)
In a new method of approximation of the Boltzmann equation, one starts from a particular form of the equation which involves only the angular flux at the boundary of the considered medium and in which the space variable does not appear explicitly. By expanding the angular flux of neutrons leaking from the medium in orthogonal polynomials, and making no assumption about the angular flux within the medium, one obtains very good approximations to several classical plane-geometry problems: the albedo of slabs and the transmission by slabs, the extrapolation length of the Milne problem, and the spectrum of neutrons reflected by a semi-infinite slowing-down medium. The method can be extended to other geometries. (authors)
A New Method for Polar Field Interpolation
Sun, X.; Liu, Y.; Hoeksema, J. T.; Hayashi, K.; Zhao, X.
2011-05-01
The photospheric magnetic field in the Sun's polar region is not well observed compared to the low-latitude regions. Data are periodically missing due to the Sun's tilt angle, and the noise level is high due to the projection effect on the line-of-sight (LOS) measurement. However, the large-scale characteristics of the polar magnetic field data are known to be important for global modeling. This report describes a new method for interpolating the photospheric field in polar regions that has been tested on MDI synoptic maps (1996 - 2009). This technique, based on a two-dimensional spatial/temporal interpolation and a simple version of the flux transport model, uses a multi-year series of well-observed, smoothed north (south) pole observations from each September (March) to interpolate for missing pixels at any time of interest. It is refined by using a spatial smoothing scheme to seamlessly incorporate this filled-in data into the original observation starting from lower latitudes. For recent observations, an extrapolated polar field correction is required. Scaling the average flux density from the prior observations of slightly lower latitudes is found to be a good proxy of the future polar field. This new method has several advantages over some existing methods. It is demonstrated to improve the results of global models such as the Wang-Sheeley-Arge (WSA) model and MHD simulation, especially during the sunspot minimum phase.
Corrosion Behavior of Pulsed Gas Tungsten Arc Weldments in Power Plant Carbon Steel
Kumaresh Babu, S. P.; Natarajan, S.
2007-10-01
Welding plays an essential role in the fabrication of components such as boiler drums, pipe work, and heat exchangers used in power plants. Gas tungsten arc welding (GTAW) is mainly used for welding boiler components. Pulsed GTAW is another process widely used where high-quality, precision welds are required. In all arc-welding processes, the intense heat produced by the arc and the associated local heating and cooling lead to varied corrosion behavior and several metallurgical phase changes. Since corrosion is driven by the electrochemical potential gradient developed adjacent to the weld metal, the effects of welding on the corrosion behavior of these steels were studied. This paper describes experimental work carried out to evaluate and compare corrosion and its inhibition in SA 516 Gr.70 carbon steel welded by the pulsed GTAW process, in HCl medium at 0.1, 0.5, and 1.0 M concentrations. The parent metal, weld metal and heat-affected zone were chosen as the regions of exposure for the study, made at room temperature (R.T.) and at 100 °C. Electrochemical polarization techniques such as Tafel line extrapolation (Tafel), linear polarization resistance (LPR), and the ac impedance method were used to measure the corrosion current. The role of hexamine and of a mixed inhibitor (thiourea + hexamine in 0.5 M HCl), each at 100 ppm concentration, is studied in these experiments. Microstructural observation, surface characterization, and morphology studies using SEM and XRD were made on samples exposed at 100 °C in order to highlight the nature and extent of film formation.
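Tafel-line extrapolation itself is easy to sketch. The toy below generates synthetic Butler-Volmer polarization data (all constants are assumed, not measurements from this study), fits the linear Tafel region of log|i| versus overpotential, and extrapolates back to zero overpotential to recover the corrosion current density:

```python
import numpy as np

# Assumed kinetics for a synthetic polarization curve.
i_corr = 2.0e-5                 # corrosion current density, A/cm^2
beta_a, beta_c = 0.060, 0.120   # anodic/cathodic Tafel slopes, V/decade
eta = np.linspace(-0.25, 0.25, 501)          # overpotential, V
i_net = i_corr * (10 ** (eta / beta_a) - 10 ** (-eta / beta_c))

# Fit the anodic Tafel region, well away from eta = 0 (here eta > 0.1 V),
# where log|i| is linear in the overpotential.
mask = eta > 0.10
slope, intercept = np.polyfit(eta[mask], np.log10(np.abs(i_net[mask])), 1)

# Extrapolate the Tafel line back to eta = 0 to read off i_corr.
i_corr_est = 10 ** intercept
print(i_corr_est)
```

The intercept at zero overpotential recovers the assumed corrosion current density, which is exactly how i_corr is read off an experimental Tafel plot.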
International Nuclear Information System (INIS)
Vibronic levels in the strong-coupling limit are not easy to obtain since the vibrational and electronic components are extremely mixed, and the usual methods based on extrapolations from the adiabatic limit fail. In the present paper we compare the Lanczos method and the Glauber-states method applied to a case where the strong-coupling limit is approached. The application considers parameters suited to the system ZnSe:Fe2+ for a more realistic interpretation. Stability of the solutions is found and discussed. Advantages and disadvantages of both methods are also discussed. (Author)
Energy Technology Data Exchange (ETDEWEB)
Rivera-Iratchet, J.; Orue, M.A. de [Departamento de Fisica, Universidad de Concepcion, Concepcion, Chile (Chile); Vogel, E.E. [Departamento de Fisica, Universidad de la Frontera, Temuco, Chile (Chile); Bevilacqua, G.; Martinelli, L. [Istituto Nazionale di Fisica della Materia, Dipartimento di Fisica dell'Universita di Pisa, Piazza Torricelli 2, 56126 Pisa, Italy (Italy)
1998-12-01
Vibronic levels in the strong-coupling limit are not easy to obtain since the vibrational and electronic components are extremely mixed, and the usual methods based on extrapolations from the adiabatic limit fail. In the present paper we compare the Lanczos method and the Glauber-states method applied to a case where the strong-coupling limit is approached. The application considers parameters suited to the system ZnSe:Fe{sup 2+} for a more realistic interpretation. Stability of the solutions is found and discussed. Advantages and disadvantages of both methods are also discussed. (Author)
Li, Qingfeng; Gang, Xiao; Hjuler, Hans Aage; Berg, Rolf W.; Bjerrum, Niels
2009-01-01
The reduction of gaseous oxygen on carbon-supported platinum electrodes has been studied at 150 degrees C with polarization and potential decay measurements. The electrolyte was either 100 weight percent phosphoric acid or that acid with a fluorinated additive, potassium perfluorohexanesulfonate (C6F13SO3K). The pseudo-Tafel curves of the overpotential vs. log[i·i(L)/(i(L) - i)] show a two-slope behavior, probably due to different adsorption mechanisms. The potential relaxations as functions ...
International Nuclear Information System (INIS)
The objective of this paper was to give an overview of possible heating methods for the annealing process. The methods surveyed were: (1) electric resistance radiant heating, (2) direct-fired combustion heating, and (3) indirect combustion radiant heating. Advantages and disadvantages of each method were discussed, as was the data acquisition system, and the position of the paper was that indirect combustion radiant heating was the technology of choice
International Nuclear Information System (INIS)
The maintenance method applied at La Hague is summarized. The method was developed in order to solve problems relating to the different specialist fields, the need for homogeneity in the maintenance work, the diversity of the equipment, and the increase in the materials used at La Hague's new facilities. The aim of the method is to create a know-how formalism, to facilitate maintenance, to ensure the running of the operations, and to improve the estimation of maintenance costs. One of the method's difficulties is demonstrating the profitability of the maintenance operations
Electrical Conductivity of High-Pressure Liquid Hydrogen by Quantum Monte Carlo Methods
Lin, Fei; Morales, Miguel A.; Delaney, Kris T.; Pierleoni, Carlo; Martin, Richard M.; Ceperley, D. M.
2009-12-01
We compute the electrical conductivity for liquid hydrogen at high pressure using Monte Carlo techniques. The method uses coupled electron-ion Monte Carlo simulations to generate configurations of liquid hydrogen. For each configuration, correlated sampling of electrons is performed in order to calculate a set of lowest many-body eigenstates and current-current correlation functions of the system, which are summed over in the many-body Kubo formula to give ac electrical conductivity. The extrapolated dc conductivity at 3000 K for several densities shows a liquid semiconductor to liquid-metal transition at high pressure. Our results are in good agreement with shock-wave data.
Electrical conductivity of high-pressure liquid hydrogen by quantum Monte Carlo methods.
Lin, Fei; Morales, Miguel A; Delaney, Kris T; Pierleoni, Carlo; Martin, Richard M; Ceperley, D M
2009-12-18
We compute the electrical conductivity for liquid hydrogen at high pressure using Monte Carlo techniques. The method uses coupled electron-ion Monte Carlo simulations to generate configurations of liquid hydrogen. For each configuration, correlated sampling of electrons is performed in order to calculate a set of lowest many-body eigenstates and current-current correlation functions of the system, which are summed over in the many-body Kubo formula to give ac electrical conductivity. The extrapolated dc conductivity at 3000 K for several densities shows a liquid semiconductor to liquid-metal transition at high pressure. Our results are in good agreement with shock-wave data. PMID:20366267
B-physics from the ratio method with Wilson twisted mass fermions
Carrasco, N; Frezzotti, R; Gimenez, V; Herdoiza, G; Lubicz, V; Martinelli, G; Michael, C; Palao, D; Rossi, G C; Sanfilippo, F; Shindler, A; Simula, S; Tarantino, C
2012-01-01
We present a precise lattice QCD determination of the b-quark mass, of the B and Bs decay constants and first preliminary results for the B-mesons bag parameter. Simulations are performed with Nf = 2 Wilson twisted mass fermions at four values of the lattice spacing and the results are extrapolated to the continuum limit. Our calculation benefits from the use of improved interpolating operators for the B-mesons and employs the so-called ratio method. The latter allows a controlled interpolation at the b-quark mass between the relativistic data around and above the charm quark mass and the exactly known static limit.
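The continuum extrapolation mentioned above can be sketched generically (with invented numbers, not the paper's data): an O(a)-improved lattice observable is computed at several lattice spacings and fitted linearly in a^2, and the continuum value is read off at a = 0.

```python
import numpy as np

# Invented lattice data: observable approaches its continuum value
# linearly in a^2 (leading discretization effect for an improved action).
a = np.array([0.098, 0.085, 0.067, 0.054])   # lattice spacings, fm
f_true, c = 0.190, 0.8                       # assumed continuum value, slope
rng = np.random.default_rng(3)
f_lat = f_true + c * a ** 2 + rng.normal(0.0, 5e-5, a.size)

# Linear fit in a^2; the intercept is the continuum-limit estimate.
slope, f_cont = np.polyfit(a ** 2, f_lat, 1)
print(f_cont)
```

With four spacings, as in the paper's setup, the intercept of the a^2 fit is the continuum result; the fit slope quantifies the residual discretization effect.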
Measurement of dynamic parameter of sub-critical facility with Rossi-α method
International Nuclear Information System (INIS)
This paper presents the Rossi-α method for micro-noise analysis in reactor physics experiments, applied to the measurement of the kinetic parameter, the α eigenvalue, of Tsinghua University's sub-critical facility. The measurement is performed under extremely low flux conditions, and α_c, the critical α eigenvalue, is obtained by extrapolation. Meanwhile, a theoretical calculation has been carried out based on the MCNP and TMCC codes. Results from measurement and calculation agree well with each other. (authors)
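In practice a Rossi-α measurement reduces to fitting an exponential decay above a flat accidental background in the time-correlation histogram. The sketch below uses synthetic data with assumed constants (not values from the Tsinghua facility):

```python
import numpy as np

rng = np.random.default_rng(4)
# Rossi-alpha histogram model: p(t) = A * exp(alpha * t) + B, alpha < 0.
alpha_true, A, B = -250.0, 400.0, 50.0    # assumed: 1/s, correlated amp, background
t = np.linspace(0.0, 0.04, 201)           # time-lag axis, s
counts = A * np.exp(alpha_true * t) + B + rng.normal(0.0, 2.0, t.size)

# Estimate the accidental background from the flat tail, where the
# correlated term has decayed away, then fit log-linear above it.
B_est = counts[t > 0.03].mean()
mask = (counts - B_est) > 10.0            # keep well-resolved bins only
slope, _ = np.polyfit(t[mask], np.log(counts[mask] - B_est), 1)
print(slope)
```

The fitted slope is the α eigenvalue; repeating the measurement at several subcritical configurations allows the extrapolation to the critical value α_c described in the abstract.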
Application of the incremental harmonic balance method to cubic non-linearity systems
Cheung, Y. K.; Chen, S. H.; Lau, S. L.
1990-07-01
In this paper, the formulation of the incremental harmonic balance (IHB) method is derived for a general system of differential equations with cubic non-linearity, which governs a wide range of engineering problems such as large-amplitude vibration of beams or plates. An incremental arc-length method combined with a cubic extrapolation technique is adopted to trace the response curve automatically. The stability of its periodic solutions can also be evaluated from the IHB formulation by multi-variable Floquet theory. Hsu's method is adopted for computing the transition matrix at the end of one period. Two numerical examples are presented which demonstrate the effectiveness of the IHB method and Hsu's method in treating the non-linear vibration of a strongly non-linear multiple degree of freedom system.
Mendelson, A.
1977-01-01
Two advances in the numerical techniques of utilizing the BIE method are presented. The boundary unknowns are represented by parabolas over each interval which are integrated in closed form. These integrals are listed for easy use. For problems involving crack tip singularities, these singularities are included in the boundary integrals so that the stress intensity factor becomes just one more unknown in the set of boundary unknowns thus avoiding the uncertainties of plotting and extrapolating techniques. The method is applied to the problems of a notched beam in tension and bending, with excellent results.
Directory of Open Access Journals (Sweden)
Tobias Koch
2014-04-01
Full Text Available One of the key interests in the social sciences is the investigation of change and stability of a given attribute. Although numerous models have been proposed in the past for analyzing longitudinal data, including multilevel and/or latent variable modeling approaches, only a few modeling approaches have been developed for studying construct validity in longitudinal multitrait-multimethod (MTMM) measurement designs. The aim of the present study was to extend the spectrum of current longitudinal modeling approaches for MTMM analysis. Specifically, a new longitudinal multilevel CFA-MTMM model for measurement designs with structurally different and interchangeable methods (called the Latent-State-Combination-Of-Methods model, LS-COM) is presented. Interchangeable methods are methods that are randomly sampled from a set of equivalent methods (e.g., multiple student ratings of teaching quality), whereas structurally different methods are methods that cannot easily be replaced by one another (e.g., teacher ratings, self-ratings, principal ratings). Results of a simulation study indicate that the parameters and standard errors in the LS-COM model are well recovered even in conditions with only 5 observations per estimated model parameter. The advantages and limitations of the LS-COM model relative to other longitudinal MTMM modeling approaches are discussed.
An ancient modeling method for amylose
Structures of polysaccharides can be predicted based on crystal structures of disaccharides that contain the relevant monosaccharide residues and linkages. Simple geometric extrapolation of the atomic coordinates of the individual monomer residues, the linkage torsion angles and the glycosidic bond ...
DEFF Research Database (Denmark)
McLaughlin, W.L.; Miller, A.
2003-01-01
Chemical and physical radiation dosimetry methods, used for the measurement of absorbed dose mainly during the practical use of ionizing radiation, are discussed with respect to their characteristics and fields of application.
Lin, YuPo J.; Hestekin, Jamie; Arora, Michelle; St. Martin, Edward J.
2004-09-28
An electrodeionization method for continuously producing and/or separating and/or concentrating ionizable organics present in dilute concentrations in an ionic solution while controlling the pH to within one to one-half pH unit.
A review of experimental methods for determining residual creep life
International Nuclear Information System (INIS)
Experimental methods available for determining how much creep life remains at a particular time in the high temperature service of a component are reviewed. After a brief consideration of the limitations of stress rupture extrapolation techniques, the application of post-exposure creep testing is considered. Ways of assessing the effect of microstructural degradation on residual life are then reviewed. It is pointed out that while this type of work will be useful for certain materials, there are other materials in which 'mechanical damage' such as cavitation will be more important. Cavitation measurement techniques are therefore reviewed. The report ends with a brief consideration of the use of crack growth measurements in assessing the residual life of cracked components. (author)
Scattering from finite size methods in lattice QCD
International Nuclear Information System (INIS)
Using two flavors of maximally twisted mass fermions, we calculate the S-wave pion-pion scattering length in the isospin I=2 channel and the P-wave pion-pion scattering phase in the isospin I=1 channel. In the former channel, the lattice calculations are performed at pion masses ranging from 270 MeV to 485 MeV. We use chiral perturbation theory at next-to-leading order to extrapolate our results. At the physical pion mass, we find m_π a_{ππ}^{I=2} = -0.04385(28)(38) for the scattering length. In the latter channel, the calculation is currently performed at a single pion mass of 391 MeV. Making use of finite size methods, we evaluate the scattering phase in both the center of mass frame and the moving frame. The effective range formula is employed to fit our results, from which the rho resonance mass and decay width are evaluated. (orig.)
Standardization of Ca-45 radioactive solution by tracing method
Scientific Electronic Library Online (English)
Cláudia Regina Ponte, Ponge-Ferreira; Marina Fallone, Koskinas; Mauro da Silva, Dias.
2004-09-01
The procedure followed by the Laboratório de Metrologia Nuclear (LMN) at IPEN, in São Paulo, for the standardization of 45Ca is described. The activity measurement was carried out in a 4πβ-γ coincidence system, by the tracing method. The radionuclide chosen as the beta-gamma emitting tracer was 60Co, because its end-point beta-ray energy is close to that of 45Ca. Six sources were prepared using a 1:1 ratio (beta-pure to beta-gamma) dropped directly on the Collodion film, and two other solutions of 45Ca + 60Co were mixed beforehand in 1:1 and 1:2 ratios before the radioactive sources were made. The activity of the solution was determined by the extrapolation technique. The events were registered using a Time to Amplitude Converter (TAC) associated with a Multi-channel Analyzer.
International Nuclear Information System (INIS)
The Chemelex method is one of the indirect electrical heating methods, developed by the Raychem Corporation. This heating method is applied to process pipings, valves, pumps, tanks and measuring instruments to prevent freezing or to keep the temperature of fluids. The heater element consists of lead wires, tape-shaped electric resistance material, polyurethane and polyolefin coatings, and an outer flexible metallic braid. There are two special features in the Chemelex method: 1) the heaters are of parallel circuit type, and 2) the heaters have a self-controlling characteristic: the thermal output decreases and the resistance increases as the temperature rises. Accordingly, the heaters do not overheat, and a thermostat for preventing overheating is not needed. Also, owing to the self-controlling characteristic, there is no formation of hot spots and no burning out. The list of products is presented. The specifications are as follows: voltage 100 V and 200 V, power output 10 - 50 W/m, breaker capacity 0.08 - 0.54 A/m at start-up, permissible maximum temperature 65 °C and 107 °C for types ATV and PTV, respectively, at continuous operation, and permissible γ-ray irradiation fluence 2 × 10^8 rad and 1 × 10^7 rad for types ATV and PTV, respectively. The output characteristics in relation to temperature rise and the results of an aging test are explained. The method of calculating the output at the design stage is shown with an example calculation. The application of the Chemelex heaters, the electrical source apparatus and the temperature control method are explained in detail. A recent example of applying the Chemelex method to LNG tanks is introduced. (Nakai, Y.)
Re, Matteo; Valentini, Giorgio
2012-03-01
Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least from the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, first of all the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173].
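The majority vote combiner described above can be sketched in a few lines; the three base classifiers and their predictions below are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier prediction lists: for each sample, return
    the class predicted by most base learners (ties broken arbitrarily)."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(preds[i] for preds in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

# Three hypothetical base classifiers voting on five samples
clf_a = [0, 1, 1, 0, 1]
clf_b = [0, 1, 0, 0, 0]
clf_c = [1, 1, 1, 0, 1]
ensemble = majority_vote([clf_a, clf_b, clf_c])   # -> [0, 1, 1, 0, 1]
```

Each sample's ensemble label is simply the most frequent label among the base learners, which is the scheme cited as [158] in the chapter.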
Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that both in classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is motivated also by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2 there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 depicts the main taxonomies on ensemble methods proposed in the literature. In Section 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy, originally proposed in Ref. [201]. 
Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper.
Szulc, Stefan
1965-01-01
Statistical Methods provides a discussion of the principles of the organization and technique of research, with emphasis on its application to the problems in social statistics. This book discusses branch statistics, which aims to develop practical ways of collecting and processing numerical data and to adapt general statistical methods to the objectives in a given field.Organized into five parts encompassing 22 chapters, this book begins with an overview of how to organize the collection of such information on individual units, primarily as accomplished by government agencies. This text then
Halberstam, Heine
2013-01-01
Derived from the techniques of analytic number theory, sieve theory employs methods from mathematical analysis to solve number-theoretical problems. This text by a noted pair of experts is regarded as the definitive work on the subject. It formulates the general sieve problem, explores the theoretical background, and illustrates significant applications. "For years to come, Sieve Methods will be vital to those seeking to work in the subject, and also to those seeking to make applications," noted prominent mathematician Hugh Montgomery in his review of this volume for the Bulletin of the American Mathematical Society.
Olivia Worland (Purdue University; Biological Sciences)
2008-07-09
The scientific method starts with asking a question you would like to answer. Next, background research helps form the basis of your hypothesis, or what you think the outcome of your experiment will be given your background research. You would then design an experiment and test your hypothesis in your experiment. Lastly, you would analyze the outcome of your experiment and revise your hypothesis and/or experiment to test again later. The discoveries of DNA and the human genome are in part due to good use of the scientific method and a sharing of information between scientists.
Reid, M M; Campbell, Amity C; Elliott, B C
2012-02-01
Tennis stroke mechanics have attracted considerable biomechanical analysis, yet current filtering practice may lead to erroneous reporting of data near the impact of racket and ball. This research had three aims: (1) to identify the best method of estimating the displacement and velocity of the racket at impact during the tennis serve, (2) to demonstrate the effect of different methods on upper limb kinematics and kinetics and (3) to report the effect of increased noise on the most appropriate treatment method. The tennis serves of one tennis player, fit with upper limb and racket retro-reflective markers, were captured with a Vicon motion analysis system recording at 500 Hz. The raw racket tip marker displacement and velocity were used as criterion data to compare three different endpoint treatments and two different filters. The 2nd-order polynomial proved to be the least erroneous extrapolation technique and the quintic spline filter was the most appropriate filter. The previously performed "smoothing through impact" method, using a quintic spline filter, underestimated the racket velocity (9.1%) at the time of impact. The polynomial extrapolation method remained effective when noise was added to the marker trajectories. PMID:21975124
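The polynomial endpoint-extrapolation idea can be illustrated with a minimal sketch, assuming a synthetic racket-tip trajectory sampled at 500 Hz (the function name, fitting window, and motion model below are invented for illustration, not the study's code):

```python
import numpy as np

def extrapolate_to_impact(t, x, t_impact, n_fit=10, order=2):
    """Fit a polynomial of the given order to the last n_fit pre-impact
    frames and evaluate displacement and velocity at the impact time."""
    coeffs = np.polyfit(t[-n_fit:], x[-n_fit:], order)
    pos = np.polyval(coeffs, t_impact)
    vel = np.polyval(np.polyder(coeffs), t_impact)
    return pos, vel

# Synthetic racket-tip displacement sampled at 500 Hz (values are invented)
t = np.arange(50) / 500.0                  # 50 frames before impact at 0.1 s
x = 0.5 * t + 4.0 * t ** 2                 # accelerating motion, metres
pos, vel = extrapolate_to_impact(t, x, t_impact=0.1)
```

Fitting only pre-impact samples and extrapolating forward avoids smoothing the trajectory "through" the impact discontinuity, which is what caused the 9.1% velocity underestimate reported above.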
DEFF Research Database (Denmark)
Ernst, Erik
2005-01-01
The world of programming has been conquered by the procedure call mechanism, including object-oriented method invocation, which is a procedure call in the context of an object. This paper presents an alternative, method mixin invocations, that is optimized for flexible creation of composite behavior, whereas traditional invocation is optimized for as-is reuse of existing behavior. Tight coupling reduces flexibility, and traditional invocation tightly couples transfer of information and transfer of control. Method mixins decouple these two kinds of transfer, thereby opening the doors for new kinds of abstraction and reuse. Method mixins use shared name spaces to transfer information between caller and callee, as opposed to traditional invocation, which uses parameters and returned results. This relieves the caller from dependencies on the callee, and it allows direct transfer of information further down the call stack, e.g., to a callee's callee. The mechanism has been implemented in the programming language gbeta. Variants of the mechanism could be added to almost any imperative programming language.
Functional renormalization group methods in quantum chromodynamics
International Nuclear Information System (INIS)
We apply functional Renormalization Group methods to Quantum Chromodynamics (QCD). First we calculate the mass shift for the pion in a finite volume in the framework of the quark-meson model. In particular, we investigate the importance of quark effects. As in lattice gauge theory, we find that the choice of quark boundary conditions has a noticeable effect on the pion mass shift in small volumes. A comparison of our results to chiral perturbation theory and lattice QCD suggests that lattice QCD has not yet reached volume sizes for which chiral perturbation theory can be applied to extrapolate lattice results for low-energy observables. Phase transitions in QCD at finite temperature and density are currently very actively researched. We study the chiral phase transition at finite temperature with two approaches. First, we compute the phase transition temperature in infinite and in finite volume with the quark-meson model. Though qualitatively correct, our results suggest that the model does not describe the dynamics of QCD near the finite-temperature phase boundary accurately. Second, we study the approach to chiral symmetry breaking in terms of quarks and gluons. We compute the running QCD coupling for all temperatures and scales. We use this result to determine quantitatively the phase boundary in the plane of temperature and number of quark flavors and find good agreement with lattice results. (orig.)
Some useful statistical methods for model validation.
Marcus, A H; Elias, R W
1998-12-01
Although formal hypothesis tests provide a convenient framework for displaying the statistical results of empirical comparisons, standard tests should not be used without consideration of underlying measurement error structure. As part of the validation process, predictions of individual blood lead concentrations from models with site-specific input parameters are often compared with blood lead concentrations measured in field studies that also report lead concentrations in environmental media (soil, dust, water, paint) as surrogates for exposure. Measurements of these environmental media are subject to several sources of variability, including temporal and spatial sampling, sample preparation and chemical analysis, and data entry or recording. Adjustments for measurement error must be made before statistical tests can be used to empirically compare environmental data with model predictions. This report illustrates the effect of measurement error correction using a real dataset of child blood lead concentrations for an undisclosed midwestern community. We illustrate both the apparent failure of some standard regression tests and the success of adjustment of such tests for measurement error using the SIMEX (simulation-extrapolation) procedure. This procedure adds simulated measurement error to model predictions and then subtracts the total measurement error, analogous to the method of standard additions used by analytical chemists. PMID:9860913
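The SIMEX idea can be illustrated with a hedged sketch on synthetic data (all parameters below are invented for illustration): add extra simulated measurement error at several inflation levels lambda, track how the naive regression slope degrades, and extrapolate the trend back to lambda = -1, i.e. zero measurement error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": y depends on x_true, but we only observe x_obs = x_true + U
n, beta_true, sigma_u = 2000, 2.0, 0.5
x_true = rng.normal(0.0, 1.0, n)
x_obs = x_true + rng.normal(0.0, sigma_u, n)   # error-prone surrogate measurement
y = beta_true * x_true + rng.normal(0.0, 0.2, n)

def ols_slope(x, y):
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

naive = ols_slope(x_obs, y)                    # attenuated toward zero

# SIMEX: inflate the measurement error variance by (1 + lam), then
# extrapolate the slope trend back to lam = -1 (no measurement error)
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_slopes = []
for lam in lambdas:
    reps = [ols_slope(x_obs + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
            for _ in range(50)]
    mean_slopes.append(np.mean(reps))

coef = np.polyfit(lambdas, mean_slopes, 2)     # quadratic extrapolant
beta_simex = np.polyval(coef, -1.0)            # much less attenuated than naive
```

The naive slope is biased toward zero by the measurement error in x_obs; the SIMEX extrapolation recovers an estimate close to the true slope, mirroring the "add error, then subtract it" analogy with standard additions drawn above.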
Finite-size scaling method for the Berezinskii–Kosterlitz–Thouless transition
International Nuclear Information System (INIS)
We test an improved finite-size scaling method for reliably extracting the critical temperature T_BKT of a Berezinskii–Kosterlitz–Thouless (BKT) transition. Using known single-parameter logarithmic corrections to the spin stiffness ρ_s at T_BKT, in combination with the Kosterlitz–Nelson relation between the transition temperature and the stiffness, ρ_s(T_BKT) = 2T_BKT/π, we define a size-dependent transition temperature T_BKT(L1,L2) based on a pair of system sizes L1,L2, e.g., L2 = 2L1. We use Monte Carlo data for the standard two-dimensional classical XY model to demonstrate that this quantity is well behaved and can be reliably extrapolated to the thermodynamic limit using the next expected logarithmic correction beyond the ones included in defining T_BKT(L1,L2). For the Monte Carlo calculations we use GPU (graphical processing unit) computing to obtain high-precision data for L up to 512. We find that the sub-leading logarithmic corrections have significant effects on the extrapolation. Our result T_BKT = 0.8935(1) is several error bars above the previously best estimates of the transition temperature, T_BKT ≈ 0.8929. If only the leading log-correction is used, the result is, however, consistent with the lower value, suggesting that previous works have underestimated T_BKT because of the neglect of sub-leading logarithms. Our method is easy to implement in practice and should be applicable to generic BKT transitions. (paper)
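The pairwise definition of T_BKT(L1, L2) can be sketched as follows, using a synthetic stiffness function constructed to satisfy the log-corrected Kosterlitz-Nelson form exactly (all numerical values, including the correction constant and the temperature slope, are invented stand-ins for Monte Carlo data):

```python
import numpy as np

T_TRUE, C0, SLOPE = 0.8935, 1.0, 2.0   # assumed parameters of the toy data

def rho_s(T, L):
    """Synthetic spin stiffness, built so that at T_TRUE it satisfies
    rho_s = (2 T / pi) * (1 + 1 / (2 (ln L + c))) with c = C0 exactly."""
    crit = (2 * T_TRUE / np.pi) * (1 + 1 / (2 * (np.log(L) + C0)))
    return crit - SLOPE * (T - T_TRUE)

def T_of_c(L, c):
    """For a fixed log-correction constant c, the condition
    rho_s(T, L) = (2 T / pi) * (1 + 1/(2 (ln L + c))) is linear in T."""
    g = 1 / (2 * (np.log(L) + c))
    return (rho_s(T_TRUE, L) + SLOPE * T_TRUE) / (SLOPE + (2 / np.pi) * (1 + g))

def t_bkt_pair(L1, L2):
    """Size-dependent T_BKT(L1, L2): bisect on c until both sizes
    yield the same transition temperature."""
    lo, hi = 0.1, 5.0
    diff = lambda c: T_of_c(L1, c) - T_of_c(L2, c)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if diff(lo) * diff(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return T_of_c(L1, 0.5 * (lo + hi))
```

On this toy data the pair estimate recovers T_TRUE for any pair of sizes; with real Monte Carlo stiffness data, T_BKT(L1, L2) would still drift with L and need the further logarithmic extrapolation described in the abstract.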
Energy Technology Data Exchange (ETDEWEB)
Solano, R.; Schirra, M.; Rivas, M. de la; Barroso, S.; Seith, B.
1982-07-01
The austenitic stainless steel X6CrNi1811 (DIN 1.4948), used as a structural material for the German fast breeder reactor SNR 300, was creep tested in the temperature range 550-650 °C, in both the base material and the welded material conditions. The main point of this program (Extrapolation Program) lies in the knowledge of the creep-rupture strength and creep behaviour up to 3 × 10^4 hours at higher temperatures, in order to extrapolate to ≥10^5 hours at operating temperatures. In order to study the stress dependency of the minimum creep rate, additional tests were carried out at 550 °C - 750 °C. The present report describes the state of the running program, with test times of 23,000 hours; results from tests up to 55,000 hours belonging to other parallel programs are also taken into account. Besides the creep-rupture behaviour, the ductility between 550 and 750 °C is also studied. Extensive metallographic examinations have been made to study the fracture behaviour and changes in structure. (Author)
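Creep-rupture extrapolation of this kind is often done with a time-temperature parameter; the sketch below uses the Larson-Miller parameter with an assumed constant C = 20 and invented rupture data, not the report's measurements:

```python
import numpy as np

C = 20.0  # Larson-Miller constant; 20 is a commonly assumed value

def lmp(T_kelvin, t_rupture_h):
    """Larson-Miller parameter P = T * (C + log10(t_r))."""
    return T_kelvin * (C + np.log10(t_rupture_h))

# Invented short-term rupture data: (temperature K, rupture time h, stress MPa)
data = [(823.0, 1.0e3, 220.0), (823.0, 3.0e4, 180.0),
        (873.0, 1.0e3, 170.0), (923.0, 1.0e3, 130.0)]
P = np.array([lmp(T, t) for T, t, s in data])
S = np.array([s for T, t, s in data])

# Master curve: log(stress) linear in P, fitted to the short-term points ...
coef = np.polyfit(P, np.log(S), 1)
# ... then extrapolated to 1e5 h at 823 K (550 C)
stress_1e5h = np.exp(np.polyval(coef, lmp(823.0, 1.0e5)))
```

Because the parameter collapses temperature and time onto one axis, tests accelerated at higher temperature stand in for long times at service temperature, which is the essence of extrapolating 3 × 10^4 hour data to ≥10^5 hours.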
Dahlquist, Germund
2012-01-01
""Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served."" - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society.Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.
International Nuclear Information System (INIS)
The subject is discussed under the headings: interaction of nuclear radiation with matter; detection of nuclear radiation; methods of detection, and measurement of radionuclides (alpha, beta and gamma spectroscopy); nuclear statistics and treatment of data; measurement of U series disequilibria (U concentration and 234U/238U, 230Th/234U, and 230Th/232Th activity ratios; 231Pa/235U activity ratio; Ra and Rn; Pb, Bi and Po). (U.K.)
Tümmler, Burkhard
2014-01-01
Genotyping allows for the identification of bacterial isolates to the strain level and provides basic information about the evolutionary biology, population biology, taxonomy, ecology, and genetics of bacteria. Depending on the underlying question and available resources, Pseudomonas aeruginosa strains may be typed by anonymous fingerprinting techniques or electronically portable sequence-based typing methods such as multiple locus variable number tandem repeat (VNTR) analysis (MLVA), multilocus sequence typing, or oligonucleotide microarray. Macrorestriction fragment pattern analysis is a genotyping method that is globally applicable to all bacteria and hence has been and still is the reference method for strain typing in bacteriology. Agarose-embedded chromosomal DNA is cleaved with a rare-cutting restriction endonuclease and the generated 20-70 fragments are then separated by pulsed-field gel electrophoresis. The chapter provides a detailed step-by-step manual for SpeI genome fingerprinting of Pseudomonas chromosomes that has been optimized for SpeI fragment pattern analysis of P. aeruginosa. PMID:24818895
Failure Analysis of Wind Turbines by Probability Density Evolution Method
DEFF Research Database (Denmark)
Sichani, Mahdi Teimouri; Nielsen, Søren R.K.
2013-01-01
The aim of this study is to present an efficient and accurate method for estimating the failure probability of wind turbine structures that operate under turbulent wind load. The classical approach is to fit one of the extreme value probability distribution functions to the extracted maxima of the wind turbine response. However, this approach may carry a high amount of uncertainty due to the arbitrariness of the data and of the distributions chosen; therefore, less uncertain methods are desirable. The most natural approach in this respect is Monte Carlo (MC) simulation, which is impractical due to its excessive computational load. The problem can alternatively be tackled if the evolution of the probability density function (PDF) of the response process can be realized; the evolutionary PDF can then be integrated on the boundaries of the problem. For this reason we propose to use the Probability Density Evolution Method (PDEM). PDEM can alternatively be used to obtain the distribution of the extreme values of the response process by simulation; this approach requires less computational effort than integrating the evolution of the PDF, but may be less accurate. In this paper we present the results of failure probability estimation using PDEM. The results are then compared to extrapolated values obtained from extreme value distribution fits to the sample response values. The results confirm the feasibility of this approach for reliability analysis of wind turbines, while also indicating the potential for improving the accuracy of the method in low-probability regions.
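The classical extreme-value alternative mentioned above can be sketched as follows: fit a Gumbel distribution to sampled response maxima (synthetic here) by the method of moments and read off an exceedance probability (the threshold and all distribution parameters are invented):

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_fit_moments(maxima):
    """Method-of-moments fit of a Gumbel distribution to sampled maxima."""
    beta = np.std(maxima) * np.sqrt(6.0) / np.pi
    mu = np.mean(maxima) - EULER_GAMMA * beta
    return mu, beta

def exceedance_prob(x, mu, beta):
    """P(max response > x) under the fitted Gumbel distribution."""
    return 1.0 - np.exp(-np.exp(-(x - mu) / beta))

rng = np.random.default_rng(1)
# Stand-in for response maxima extracted from simulated wind-load episodes
maxima = rng.gumbel(loc=50.0, scale=5.0, size=2000)
mu, beta = gumbel_fit_moments(maxima)
p_fail = exceedance_prob(80.0, mu, beta)   # threshold 80 is an arbitrary choice
```

The abstract's caveat is visible here: the fit is driven by the bulk of the sampled maxima, so the far tail (where the failure probability lives) is pure extrapolation, which is exactly the uncertainty PDEM tries to avoid.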
International Nuclear Information System (INIS)
Purpose: To compare the surface dose (SD) measured using a PTW 30-360 extrapolation chamber with different commonly used dosimeters (Ds): parallel plate ion chambers (ICs): RMI-449 (Attix), Capintec PS-033, PTW 30-329 (Markus) and Memorial; TLD chips (cTLD), TLD powder (pTLD), optically stimulated dosimeters (OSLs), radiochromic (EXR2) and radiographic (EDR2) films, and to provide an intercomparison correction to Ds for each of them. Methods: Investigations were performed for a 6 MV x-ray beam (Varian Clinac 2300, 10×10 cm² open field, SSD = 100 cm). The Ds were placed at the surface of the solid water phantom and at the reference depth d_ref = 1.7 cm. The measurements for cTLD, OSLs, EDR2 and EXR2 were corrected to SD using an extrapolation method (EM) indexed to the baseline PTW 30-360 measurements. A consistent use of the EM involved: 1) irradiation of three Ds stacked on top of each other on the surface of the phantom; 2) measurement of the relative dose value for each layer; and 3) extrapolation of these values to zero thickness. An additional measurement was performed with externally exposed OSLs (eOSLs), which were rotated out of their protective housing. Results: All single-D measurements overestimated the SD compared with the extrapolation chamber, except for the Attix IC. The closest match to the true SD was measured with the Attix IC (≈0.1%), followed by pTLD (0.5%), Capintec (4.5%), Memorial (7.3%), Markus (10%), cTLD (11.8%), eOSL (12.8%), EXR2 (14%), EDR2 (…%) and OSL (26%). The EM correction for SD worked well for all Ds, except the unexposed OSLs. Conclusion: This EM cross-calibration of solid state detectors with an extrapolation or Attix chamber can provide thickness corrections for cTLD, eOSLs, EXR2, and EDR2. Standard packaged OSLs were not found to be simply corrected
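The extrapolation method (EM) described in steps 1-3 amounts to fitting the stacked-layer readings against depth and evaluating the fit at zero thickness; the chip thickness and readings below are invented for illustration:

```python
import numpy as np

def extrapolate_to_zero_thickness(depths_mm, readings, order=1):
    """EM sketch: fit the stacked-detector readings against each layer's
    effective depth and evaluate the fit at zero thickness."""
    coeffs = np.polyfit(depths_mm, readings, order)
    return np.polyval(coeffs, 0.0)

# Invented relative readings for three stacked 0.89 mm TLD chips
depths = [0.445, 1.335, 2.225]       # centre depth of each chip, mm
readings = [28.0, 46.0, 63.0]        # % of dose at d_ref (illustrative)
surface_dose = extrapolate_to_zero_thickness(depths, readings)
```

Each layer in the stack reads the dose averaged over its own depth, so the zero-thickness intercept, rather than any single layer, estimates the true surface dose.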
Tracer Methods for Calibrating Iron-55
International Nuclear Information System (INIS)
A coincidence device, consisting of a 2π proportional counter operating under argon-methane pressure and a sodium iodide crystal, is used to study two tracer methods, one of which employs 54Mn and the other 59Fe. (a) 54Mn. A known mass of a solution of 54Mn, previously calibrated, is mixed with a known mass of the 55Fe solution to be measured. The mixture is counted by coincidences between the X-rays of 55Fe + 54Mn and the γ-rays of 54Mn. With the constant P_K ω_K of 54Mn, the self-absorption Y_M due to the 54Mn is determined by calculating the C/B ratio (coincidence rate to γ-ray rate). By using different sources, the carrier content in the manganese was varied and the function N/m_F = F(Y_M) plotted on a graph; N/m_F is the count rate per unit mass of 55Fe solution. The straight line obtained is extrapolated to Y_M = 0. (b) 59Fe. The method is based on the measurement of sources without self-absorption (made by electrolytic deposition). Electrolytic sources and weighed sources are prepared from a mixture of 59Fe and 55Fe. 59Fe is used to determine the equivalent mass of the electrolytic sources. The measurements are made simultaneously in a gamma detector of high stability and in a proportional counter for X-rays. Corrections are made for the β,γ-effect of 59Fe in the counter. The count rate of the 55Fe solution is thus obtained per unit mass. The two methods are cross-checked against each other with an uncertainty of the order of 1.5%. (author)
A simple and accurate method for high-temperature PEM fuel cell characterization
Kulikovsky, Andrei; Wannek, Christoph; Oetjen, Hans-Friedrich
2010-01-01
Abstract: A set of basic parameters for any polymer electrolyte membrane fuel cell (PEMFC) includes the Tafel slope b and the exchange current density j* of the cathode catalyst, the oxygen diffusion coefficient D_b in the cathode gas-diffusion layer, and the cell resistivity R_cell. Based on the analytical model of a PEMFC (A. A. Kulikovsky, Electrochimica Acta 49 (2004) 617), we propose a two-step procedure allowing these parameters to be evaluated for a high-temperature PEMFC.
Corrosion rate measurements of welded parts of stainless steel using electrochemical method
International Nuclear Information System (INIS)
Corrosion rates of three types of welded 304 stainless steel specimens immersed in 1 N H2SO4 solution at room temperature were obtained from potentiodynamic polarization curves, Tafel plots and polarization resistance curves. Corrosion rate measurements using the AC impedance technique were also carried out, in order to investigate whether these values agreed with the values obtained from the other electrochemical techniques. Corrosion rates of welded specimens increased with increasing frequency of GTAW and decreasing current density of GTAW. It was concluded that an AC impedance technique can be successfully utilized for determining a local corrosion rate of welded parts. (Author)
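One common way to turn a measured polarization resistance into a corrosion current density is the Stern-Geary relation; the sketch below uses assumed Tafel slopes and resistance values, not the paper's data:

```python
def corrosion_current_density(ba, bc, rp):
    """Stern-Geary relation: i_corr = B / R_p,
    with B = ba * bc / (2.303 * (ba + bc))."""
    B = (ba * bc) / (2.303 * (ba + bc))
    return B / rp   # A/cm^2

# Assumed values for an active steel in acid (not taken from the paper)
ba, bc = 0.060, 0.120      # anodic / cathodic Tafel slopes, V/decade
rp = 500.0                 # polarization resistance, ohm*cm^2
i_corr = corrosion_current_density(ba, bc, rp)   # about 3.5e-5 A/cm^2
```

The same relation is what links the Tafel-plot and polarization-resistance routes to corrosion rate: R_p can come from the slope of a polarization resistance curve or from the low-frequency limit of an AC impedance spectrum.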
A method for measuring element fluxes in an undisturbed soil: nitrogen and carbon from earthworms
International Nuclear Information System (INIS)
Data on chemical cycles, such as the nitrogen or carbon cycles, are extrapolated to fields or ecosystems without the possibility of checking the conclusions, i.e. from scientific knowledge (para-ecology). A new method is described, in which an earthworm compartment is introduced naturally into an undisturbed soil, with earthworms labelled both by isotopes (15N, 14C) and by staining. This method allows fluxes of chemicals to be measured. The first results, gathered while the method was being refined under partly artificial conditions, are cross-checked against other data obtained by direct observation in the field. The measured flux (2.2 mg N per g fresh mass (empty gut) per day at 15 °C) is far larger than para-ecological estimations; animal metabolism plays a direct and important role in the nitrogen and carbon cycles. (author)
A comparison between the fission matrix method, the diffusion model and the transport model
Energy Technology Data Exchange (ETDEWEB)
Dehaye, B.; Hugot, F. X.; Diop, C. M. [Commissariat a l' Energie Atomique et aux Energies Alternatives, Direction de l' Energie Nucleaire, Departement de Modelisation des Systemes et Structures, CEA DEN/DM2S, PC 57, F-91191 Gif-sur-Yvette cedex (France)
2013-07-01
The fission matrix method may be used to solve the critical eigenvalue problem in a Monte Carlo simulation. This method gives us access to the different eigenvalues and eigenvectors of the transport or fission operator. We propose to compare the results obtained via the fission matrix method with those of the diffusion model and of an approximate transport model. To do so, we choose to analyse the mono-kinetic and continuous-energy cases for a Godiva-inspired critical sphere. The first five eigenvalues are computed with TRIPOLI-4® and compared to the theoretical ones. An extension of the notion of the extrapolation distance is proposed for the modes other than the fundamental one. (authors)
Methods for determining atypical gate valve thrust requirements
International Nuclear Information System (INIS)
Evaluating the performance of rising-stem, wedge-type gate valves used in nuclear power plants is not a problem when the valves can be design-basis tested and their operability margins determined diagnostically. The problem occurs when they cannot be tested because of plant system limitations, or when they can be tested only at some less-than-design-basis condition. Evaluating the performance of these valves requires various analytical and/or extrapolation methods by which the design-basis stem thrust requirement can be determined. This has typically been accomplished with valve stem thrust models used to calculate the requirements, or by extrapolating the results from a less-than-design-basis test. The stem thrust models used by the nuclear industry to determine the opening or closing stem thrust requirements for these gate valves have generally assumed that the highest load the valve experiences during closure (but before seating) is at flow isolation, and during unwedging or before flow initiation in the opening direction. However, during full-scale valve testing conducted for the USNRC, several of the valves produced stem thrust histories showing peak closing stem forces occurring before flow isolation in the closing direction and after flow initiation in the opening direction. All of the valves that exhibited this behavior in the closing direction also showed signs of internal damage. Initially, we dismissed the early peak in the closing stem thrust requirement as damage-induced and labeled it nonpredictable behavior. Opening responses were not a priority in our early research, so that phenomenon was set aside for later evaluation
International Nuclear Information System (INIS)
Several stages of physical tests have to be considered when a PWR starts up. To determine the main physical parameters of the reactor core, the means at our disposal are: neutron flux measuring units, the reactivity meter, the in-core instrumentation (flux and temperature), the boron meter, and the system indicating the position of the control elements. Each of these means is reviewed: principle, design, position of the detectors, etc. The methods used to measure the main physical parameters are then presented: boron concentration, efficiency of the control elements and differential efficiency of boron, measurement of the isothermal temperature coefficient, flux maps, and calibration of the neutron power measurement units
Turner, Nicholas J
2010-04-01
New methods continue to be developed for the dynamic kinetic resolution (DKR) and deracemisation of racemic chiral compounds, in particular alcohols, amines and amino acids. Many of the DKR processes involve the combination of an enantioselective enzyme, often a lipase or protease, with a metal racemisation catalyst. A greater range of ruthenium-based racemisation catalysts is now available, with some showing good activity for the racemisation of amines, which are more difficult to racemise than the corresponding alcohols. In terms of deracemisation processes, further improvements have been achieved with the deracemisation of alcohols, using combinations of stereocomplementary ketoreductases, and additionally transaminases have been applied to the deracemisation of racemic amines. PMID:20044298
International Nuclear Information System (INIS)
A method for decontamination of steel components contaminated with radioactive material comprises melting a mass of material, including a proportion of contaminated steel and slag forming material, to form a volume of molten steel and a volume of slag. The radioactive material originally present in the steel migrates to the slag which is then separated from the steel. The amount of slag forming material utilised may be selected such that the radioactivity of the resulting slag is sufficiently low to permit unrestricted handling and disposal of the slag. (author)
DEFF Research Database (Denmark)
Zhuravlev, Fedor (Technical University of Denmark)
A method of conducting radiofluorination of a substrate, comprising the steps of: (a) contacting an aqueous solution of [18F] fluoride with a polymer supported phosphazene base for sufficient time for trapping of [18F] fluoride on the polymer supported phosphazene base; and (b) contacting a solution of the substrate with the polymer supported phosphazene base having [18F] fluoride trapped thereon obtained in step (a) for sufficient time for a radiofluorination reaction to take place; an apparatus for conducting radiofluorination; use of the apparatus; and an apparatus for production of a dose of a radiotracer for administration to a patient.
Directory of Open Access Journals (Sweden)
K. F. Khaled
2008-04-01
A new safe corrosion inhibitor, namely N-(5,6-diphenyl-4,5-dihydro-[1,2,4]triazin-3-yl)-guanidine (NTG), has been synthesized, and its inhibitive performance towards the corrosion of mild steel in 1 M hydrochloric acid and 0.5 M sulphuric acid has been investigated. Corrosion inhibition was studied by a chemical method (weight loss) and by electrochemical techniques including the Tafel extrapolation method and electrochemical impedance spectroscopy (EIS). These studies have shown that NTG is a very good inhibitor in acid media, with inhibition efficiencies of up to 99% and 96% in 1 M HCl and 0.5 M H2SO4, respectively. Polarization measurements reveal that the investigated inhibitor is cathodic in 1 M HCl and mixed-type in 0.5 M H2SO4. Activation energies of the corrosion process in the absence and presence of NTG were obtained by measuring the temperature dependence of the corrosion current density. Data obtained from EIS were analyzed to model the corrosion inhibition process through an equivalent circuit. Comparable results were obtained by the different chemical and electrochemical methods used. The adsorption of the inhibitor on the metal surface in the acid solution was found to obey Langmuir's adsorption isotherm.
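The Tafel extrapolation technique used in this and several of the following studies amounts to fitting straight lines to the linear (Tafel) regions of potential versus log current density and intersecting them. A minimal sketch, with invented slopes and synthetic branch data rather than the paper's measurements:

```python
import math

def fit_tafel_branch(E_vals, i_vals):
    """Least-squares line E = a + b*log10(i) through one polarization
    branch, restricted beforehand to its linear (Tafel) region."""
    xs = [math.log10(i) for i in i_vals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(E_vals) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, E_vals))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # intercept a (V), slope b (V/decade)

def tafel_intersection(a_an, b_an, a_cat, b_cat):
    """Intersect the anodic line E = a_an + b_an*log10(i), b_an > 0,
    with the cathodic line E = a_cat + b_cat*log10(i), b_cat < 0;
    the crossing gives E_corr and the corrosion current density i_corr."""
    log_i = (a_cat - a_an) / (b_an - b_cat)
    return a_an + b_an * log_i, 10.0 ** log_i

# Synthetic Tafel-region data (illustrative slopes, not from the study);
# both lines cross at E_corr = -0.45 V, i_corr = 1e-5 A/cm^2.
i_vals = [1e-4, 3e-4, 1e-3, 3e-3]
E_an  = [-0.15 + 0.06 * math.log10(i) for i in i_vals]
E_cat = [-1.05 - 0.12 * math.log10(i) for i in i_vals]

a_an, b_an = fit_tafel_branch(E_an, i_vals)
a_cat, b_cat = fit_tafel_branch(E_cat, i_vals)
E_corr, i_corr = tafel_intersection(a_an, b_an, a_cat, b_cat)
```

In practice the delicate step is choosing which portion of each branch is truly linear before fitting; the intersection itself is elementary.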
Structural Reliability of Wind Turbine Blades : Design Methods and Evaluation
DEFF Research Database (Denmark)
Dimitrov, Nikolay Krasimirov
2013-01-01
In the past decade the use of wind energy has expanded significantly, transforming a niche market into a practically mainstream energy generation industry. With the advance of turbine technology, the search for more efficient solutions has led to an increased focus on probabilistic modelling and design. Reliability-based analysis methods have the potential of being a valuable tool which can improve the state of knowledge by explaining the uncertainties, and form the probabilistic basis for calibration of deterministic design tools. The present thesis focuses on reliability-based design of wind turbine blades. The main purpose is to draw a clear picture of how reliability-based design of wind turbines can be done in practice. The objectives of the thesis are to create methodologies for efficient reliability assessment of composite materials and composite wind turbine blades, and to map the uncertainties in the processes, materials and external conditions that affect the health of a composite structure. The study considers all stages in a reliability analysis, from defining models of structural components to obtaining the reliability index and calibrating partial safety factors. In a detailed demonstration of the process of estimating the reliability of a wind turbine blade and blade components, a number of probabilistic load and strength models are formulated, and the following scientific and practical questions are answered: a) What material, load and uncertainty models need to be used? b) How can different failure modes be taken into account? c) What reliability methods are most suitable for the particular task? d) Are there any factors specific to wind turbines, such as materials and operating conditions, that need to be taken into account? e) Are there ways for improvement, by developing new models and standards or by carrying out tests? The following aspects are covered in detail:
- The probabilistic aspects of the ultimate strength of composite laminates are addressed. Laminated plates are considered as a general structural reliability system where each layer in a laminate is a separate system component. Methods for solving the system reliability are discussed in an example problem.
- Probabilistic models for the fatigue life of laminates and sandwich core are developed and calibrated against measurement data. A modified, nonlinear S-N relationship is formulated in which the static strength of the material is included as a parameter. A Bayesian inference model predicting the fatigue resistance of face laminates from the static and fatigue strength of the individual laminae is developed. A series of tests of the fatigue life of balsa wood core material is carried out, and a probabilistic model for the fatigue strength of balsa core subjected to transverse shear loading is calibrated to the test data.
- A review study evaluates and compares several widely used statistical extrapolation methods for their capability of modelling the short-term statistical distribution of blade loads and tip deflection. The best-performing methods are selected, and several improvements are suggested, including a procedure for automatic determination of the tail threshold level, which allows for efficient automated use of peaks-over-threshold methods.
- The problem of obtaining the long-term statistical distribution of load extremes is discussed by comparing the method of integrating extrapolated short-term statistical distributions against extrapolation of data sampled directly from the long-term distribution. The comparison is based on the long-term distribution of wind speed, turbulence, and wind shear, for which a model of the wind shear distribution is developed specifically for the purpose.
- Uncertainties in load and material modelling are considered. A quantitative assessment of the influence of a number of uncertainties is made based on modelled and measured data.
- Example analyses demonstrate the process of estimating the reliability against several modes of failure in two different structures. This includes reliability against blade-to
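The peaks-over-threshold extrapolation evaluated in the thesis can be sketched as follows. This toy version assumes an exponential (shape-zero generalized Pareto) tail fitted by its mean excess, with synthetic load peaks; a real analysis would fit the full generalized Pareto distribution and select the threshold carefully, as the thesis discusses:

```python
import math
import random

def pot_return_level(peaks, threshold, n_hours_observed, return_hours):
    """Peaks-over-threshold extrapolation with an exponential tail
    (shape-zero generalized Pareto). Returns the load level expected
    to be exceeded once per `return_hours`."""
    excesses = [p - threshold for p in peaks if p > threshold]
    sigma = sum(excesses) / len(excesses)     # MLE scale of the exp. tail
    rate = len(excesses) / n_hours_observed   # exceedances per hour
    return threshold + sigma * math.log(rate * return_hours)

# Synthetic 10-minute load maxima (illustrative numbers, not turbine data):
random.seed(1)
peaks = [5.0 + random.expovariate(1.0) for _ in range(600)]  # ~100 h of data

# Extrapolate from 100 observed hours to a 50-year return period:
long_term = pot_return_level(peaks, threshold=5.0, n_hours_observed=100.0,
                             return_hours=50 * 365.25 * 24)
```

The logarithmic growth of the return level with the return period is what makes the tail model choice, and the threshold choice, so influential in practice.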
Energy Technology Data Exchange (ETDEWEB)
Patrick Gonzalez; Antonio Lara; Jorge Gayoso; Eduardo Neira; Patricio Romero; Leonardo Sotomayor
2005-07-14
Deforestation of temperate rainforests in Chile has decreased the provision of ecosystem services, including watershed protection, biodiversity conservation, and carbon sequestration. Forest conservation can restore those ecosystem services. Greenhouse gas policies that offer financing for the carbon emissions avoided by preventing deforestation require a projection of future baseline carbon emissions for an area if no forest conservation occurs. For a proposed 570 km² conservation area in temperate rainforest around the rural community of Curinanco, Chile, we compared three methods to project future baseline carbon emissions: extrapolation from Landsat observations, Geomod, and Forest Restoration Carbon Analysis (FRCA). Analyses of forest inventory and Landsat remote sensing data show 1986-1999 net deforestation of 1900 ha in the analysis area, proceeding at a rate of 0.0003 y⁻¹. The gross rate of loss of closed natural forest was 0.042 y⁻¹. In the period 1986-1999, closed natural forest decreased from 20,000 ha to 11,000 ha, with timber companies clearing natural forest to establish plantations of non-native species. Analyses of previous field measurements of species-specific forest biomass, tree allometry, and the carbon content of vegetation show that the dominant native forest type, broadleaf evergreen (bosque siempreverde), contains 370 ± 170 t ha⁻¹ carbon, compared to the carbon density of non-native Pinus radiata plantations of 240 ± 60 t ha⁻¹. The 1986-1999 conversion of closed broadleaf evergreen forest to open broadleaf evergreen forest, Pinus radiata plantations, shrublands, grasslands, urban areas, and bare ground decreased the carbon density from 370 ± 170 t ha⁻¹ to an average of 100 t ha⁻¹ (maximum 160 t ha⁻¹, minimum 50 t ha⁻¹). Consequently, the conversion released 1.1 million t carbon.
These analyses of forest inventory and Landsat remote sensing data provided the data to evaluate the three methods to project future baseline carbon emissions. Extrapolation from Landsat change detection uses the observed rate of change to estimate change in the near future. Geomod is a software program that models the geographic distribution of change using a defined rate of change. FRCA is an integrated spatial analysis of forest inventory, biodiversity, and remote sensing that produces estimates of forest biodiversity and forest carbon density, spatial data layers of future probabilities of reforestation and deforestation, and a projection of future baseline forest carbon sequestration and emissions for an ecologically defined area of analysis. For the period 1999-2012, extrapolation from Landsat change detection estimated a loss of 5000 ha and 520,000 t carbon from closed natural forest; Geomod modeled a loss of 2500 ha and 250,000 t; FRCA projected a loss of 4700 ± 100 ha and 480,000 t (maximum 760,000 t, minimum 220,000 t). Concerning labor time, extrapolation from Landsat required 90 actual days, or 120 days normalized to Bachelor-degree-level wages; Geomod required 240 actual days or 310 normalized days; FRCA required 110 actual days or 170 normalized days. Users experienced difficulties with an MS-DOS version of Geomod before turning to the Idrisi version. For organizations with limited time and financing, extrapolation from Landsat change provides a cost-effective method. Organizations with more time and financing could use FRCA, the only method that calculates the deforestation rate as a dependent variable rather than assuming a deforestation rate as an independent variable.
This research indicates that best practices for the projection of baseline carbon emissions include integration of forest inventory and remote sensing tasks from the beginning of the analysis, definition of an analysis area using ecological characteristics, use of standard and widely used geographic information systems (GIS) software applications, and the use of species-specific allometric equations and wood densities developed for local species.
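The Landsat-extrapolation baseline method reduces to compounding a constant fractional loss rate. Using the abstract's own figures (11,000 ha of closed natural forest in 1999, gross loss rate 0.042 per year, projected over 13 years) and an assumed round per-hectare net emission factor, a sketch reproduces the reported order of magnitude:

```python
def extrapolate_deforestation(area_ha, rate_per_year, years):
    """Project remaining closed-forest area under a constant fractional
    loss rate r, A(t) = A0 * (1 - r)**t; returns (remaining, lost)."""
    remaining = area_ha * (1.0 - rate_per_year) ** years
    return remaining, area_ha - remaining

# Figures taken from the abstract: 11,000 ha in 1999, rate 0.042 1/y,
# projection window 1999-2012 (13 years).
remaining, lost = extrapolate_deforestation(11000.0, 0.042, 13)

# Per-hectare net emission factor: 102 t C/ha is an assumed round value,
# chosen only to be consistent with the density drop the abstract reports.
emissions_tC = lost * 102.0
```

The projected area loss lands near the 4700 ha that FRCA reports, which illustrates why the choice between assuming and calculating the deforestation rate dominates the comparison of methods.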
Directory of Open Access Journals (Sweden)
Changxin Zhang
2013-01-01
This essay first defines the research approach for analyzing the town-village spatial structure at the county level, and then puts forward a GIS-based method for analyzing and optimizing that structure. It describes the GIS-based research process: establishing a GIS database and a DEM (digital elevation model); selecting indicators including TI (terrain index), DI (distribution index), CV (area coefficient of variation of the Voronoi diagram), NNI (nearest neighbor index) and other basic indicators; using the GIS spatial analysis functions to analyze the relationships between the settlements and the topography, the roads and the water system; and finally optimizing the town-village spatial structure.
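Of the indicators listed, the nearest neighbor index (NNI) is simple to compute directly: the mean observed nearest-neighbour distance divided by the expectation for a random (Poisson) point pattern. A minimal sketch with an invented settlement pattern, not data from the study:

```python
import math

def nearest_neighbor_index(points, area):
    """NNI = mean observed nearest-neighbour distance / 0.5*sqrt(area/n),
    the expected distance for a random (Poisson) pattern.
    NNI < 1 indicates clustered settlements, NNI > 1 a dispersed pattern."""
    n = len(points)
    d_obs = sum(
        min(math.dist(points[i], points[j]) for j in range(n) if j != i)
        for i in range(n)
    ) / n
    d_exp = 0.5 * math.sqrt(area / n)
    return d_obs / d_exp

# A regular 4x4 grid of settlements in a 9 km x 9 km square (illustrative):
grid = [(1.0 + 2.0 * i, 1.0 + 2.0 * j) for i in range(4) for j in range(4)]
nni = nearest_neighbor_index(grid, area=81.0)  # regular grid: dispersed
```

GIS packages apply edge corrections that this bare formula omits, so values near the study-area boundary are treated more carefully in practice.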
International Nuclear Information System (INIS)
Two methods for determining the diffusion parameters of thermal neutrons for non-moderating and non-multiplying media have been developed. The first, a pulsed method, is based on measuring thermal-neutron relaxation coefficients in a moderator, with and without the medium of interest playing the role of a reflector. For the interpretation of the experimental results using diffusion theory, a corrective factor which takes into account neutron cooling by diffusion has been introduced. Its dependence on the empirically obtained relaxation coefficients is in good agreement with calculations made in the P3L2 approximation. The difference between the linear extrapolation lengths of the moderator and the reflector has been taken into account by developing the scalar fluxes in series of Bessel functions which automatically satisfy the boundary conditions at the extrapolated surfaces of the two media. The results obtained for iron are in good agreement with those in the literature. The second method is time-independent, based on interpreting 'flux albedo' measurements (a concept introduced by Amaldi and Fermi) in the P3 approximation of one-group transport theory. The independent sources are introduced in the Marshak boundary conditions. An angular albedo matrix has been used to deal with multiple reflections and to take into account the distortion of the current vector when entering a medium after being reflected by it. The results obtained by this method differ slightly from those given in the literature. Analysis of the possible sources of this discrepancy, particularly the radial distribution of the flux in cylindrical geometry and the flux depression at the medium-black body interface, has shown that the origin of the discrepancy is neutron heating by diffusion. 47 figs., 20 tabs., 39 refs. (author)
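The relaxation coefficient in the pulsed method is obtained by fitting an exponential die-away to the measured thermal-neutron counts. A minimal log-linear fit sketch with synthetic data (the decay constant of 4000 s⁻¹ is an arbitrary illustrative value, not one from the study):

```python
import math

def decay_constant(times, counts):
    """Log-linear least-squares fit of N(t) = N0 * exp(-lam * t);
    returns the relaxation (decay) constant lam in 1/s."""
    ys = [math.log(c) for c in counts]
    n = len(times)
    mt, my = sum(times) / n, sum(ys) / n
    slope = (sum((t - mt) * (y - my) for t, y in zip(times, ys))
             / sum((t - mt) ** 2 for t in times))
    return -slope

# Synthetic die-away curve with lam = 4000 1/s (illustrative):
ts = [i * 1e-4 for i in range(1, 21)]            # 0.1 ms .. 2 ms after pulse
counts = [1.0e6 * math.exp(-4000.0 * t) for t in ts]
lam = decay_constant(ts, counts)
```

Repeating such fits as a function of reflector geometry (buckling) is what ultimately yields the diffusion parameters the abstract describes.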
Sharifalhoseini, Zahra; Entezari, Mohammad H
2015-10-01
The pure phase of ZnO nanoparticles (NPs) for use as anticorrosive pigments was synthesized by a sonication method. The surfaces of the sono-synthesized nanoparticles were covered with a protective silica layer. The durability of the coated and uncoated ZnO NPs in the electrolytic Ni bath was determined by flame atomic absorption spectrometry. In the present research the multicomponent Ni bath was replaced by a simple one: the nickel-plating bath was composed only of Ni salts (as the sources of Ni²⁺ ions), to better clarify the influence of the ZnO@SiO2 core-shell NPs on the stability of the medium. The effect of ZnO@SiO2 NP incorporation on the morphology of the solid electroformed Ni deposit was studied by scanning electron microscopy (SEM). Furthermore, the influence of the co-deposited particles in the Ni matrix on the corrosion resistance of the Ni coating was evaluated by electrochemical methods including linear polarization resistance (LPR) and Tafel extrapolation. PMID:26057943
Electrochemical characteristics of calcium-phosphatized AZ31 magnesium alloy in 0.9 % NaCl solution.
Hadzima, Branislav; Mhaede, Mansour; Pastorek, Filip
2014-05-01
Magnesium alloys suffer from their high reactivity in common environments. Protective layers are widely created on the surface of magnesium alloys to improve their corrosion resistance. This article evaluates the influence of a calcium-phosphate layer on the electrochemical characteristics of AZ31 magnesium alloy in 0.9 % NaCl solution. The calcium phosphate (CaP) layer was electrochemically deposited in a solution containing 0.1 M Ca(NO3)2, 0.06 M NH4H2PO4 and 10 ml l⁻¹ of H2O2. The formed surface layer was composed mainly of brushite [dicalcium phosphate dihydrate (DCPD)], as proved by energy-dispersive X-ray analysis. The surface morphology was observed by scanning electron microscopy. An immersion test was performed in order to observe the degradation of the calcium-phosphatized surfaces. The influence of the phosphate layer on the electrochemical characteristics of AZ31 in 0.9 % NaCl solution was evaluated by potentiodynamic measurements and electrochemical impedance spectroscopy. The obtained results were analysed by the Tafel-extrapolation method and the equivalent circuits method. The results showed that the polarization resistance of the DCPD-coated surface is about 25 times higher than that of the non-coated surface. The CaP electro-deposition process increased the activation energy of the corrosion process. PMID:24477876
Mhaede, Mansour; Pastorek, Filip; Hadzima, Branislav
2014-06-01
Magnesium alloys are promising materials for biomedical applications because of many outstanding properties, such as biodegradability and bioactivity, and because their specific density and Young's modulus are closer to those of bone than those of commonly used metallic implant materials. Unfortunately, their fatigue properties and low corrosion resistance negatively influence their application possibilities in the field of biomedicine. These problems can be diminished through appropriate surface treatments. This study evaluates the influence of surface pre-treatment by shot peening, and by shot peening plus coating, on the corrosion properties of magnesium alloy AZ31. The dicalcium phosphate dihydrate (DCPD) coating was electrochemically deposited in a solution containing 0.1 M Ca(NO3)2, 0.06 M NH4H2PO4 and 10 mL/L of H2O2. The effect of shot peening on the surface properties of the magnesium alloy was evaluated by microhardness and surface roughness measurements. The influence of the shot peening and of the dicalcium phosphate dihydrate layer on the electrochemical characteristics of AZ31 magnesium alloy was evaluated by potentiodynamic measurements and electrochemical impedance spectroscopy in 0.9% NaCl solution at a temperature of 22 ± 1 °C. The obtained results were analyzed by the Tafel-extrapolation method and the equivalent circuit method. The results showed that the application of the shot peening process followed by DCPD coating improves the properties of the AZ31 surface from both a corrosion and a mechanical point of view. PMID:24863232