WorldWideScience

Sample records for extrapolation techniques applied

  1. Radiographic film: surface dose extrapolation techniques

    International Nuclear Information System (INIS)

    Cheung, T.; Yu, P.K.N.; Butson, M.J.; Cancer Services, Wollongong, NSW; Currie, M.

    2004-01-01

    Full text: Assessment of surface dose delivered from radiotherapy x-ray beams for optimal results should be performed both inside and outside the prescribed treatment fields. An extrapolation technique can be used with radiographic film to perform surface dose assessment for open field high energy x-ray beams, producing an accurate two-dimensional map of surface dose if required. Results have shown that surface percentage dose can be estimated within ±3% of parallel plate ionisation chamber results with radiographic film, using a series of film layers to produce an extrapolated result. Extrapolated percentage dose for 10 cm, 20 cm and 30 cm square fields was estimated to be 15% ± 2%, 29% ± 3% and 38% ± 3% at the central axis and relatively uniform across the treatment field. Corresponding parallel plate ionisation chamber measurements are 16%, 27% and 37%, respectively. Surface doses were also measured outside the treatment field; these are mainly due to scattered electron contamination. To achieve this result, film calibration curves must be irradiated with x-ray field sizes similar to those of the experimental film, to minimize quantitative variations in film optical density caused by the varying x-ray spectrum with field size. Copyright (2004) Australasian College of Physical Scientists and Engineers in Medicine

  2. Technique of Critical Current Density Measurement of Bulk Superconductor with Linear Extrapolation Method

    International Nuclear Information System (INIS)

    Adi, Wisnu Ari; Sukirman, Engkir; Winatapura, Didin S.

    2000-01-01

    Measurement of the critical current density (Jc) of HTc bulk ceramic superconductors has been performed using linear extrapolation with the four-point probe method. Measuring the critical current density of an HTc bulk ceramic superconductor usually causes damage at the contact resistance. In order to reduce this damage factor, we introduce an extrapolation method. The extrapolated data show that the critical current densities Jc for YBCO (123) and BSCCO (2212) at 77 K are 10.85(6) A cm⁻² and 14.46(6) A cm⁻², respectively. This technique is easier and simpler, and the applied current is low, so it will not damage the contact resistance of the sample. We expect that the method can give a better solution for bulk superconductor applications. Keywords: superconductor, critical temperature, critical current density
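The abstract does not give the fitting details, but the core idea of a linear V-I extrapolation to a voltage criterion can be sketched as follows (the function name, the voltage criterion, and the numbers are illustrative assumptions, not values from the paper):

```python
import numpy as np

def estimate_jc(currents_a, voltages_v, cross_section_cm2, v_criterion=1e-6):
    """Estimate critical current density by linear extrapolation.

    Fit a line to measured four-point-probe V-I points taken at low
    currents, then extrapolate to the voltage criterion instead of driving
    the full critical current through the contacts.
    """
    slope, intercept = np.polyfit(currents_a, voltages_v, 1)
    i_c = (v_criterion - intercept) / slope      # current reaching the criterion
    return i_c / cross_section_cm2               # Jc in A/cm^2

# Synthetic linear V-I data: V = 2e-7 * I - 1e-7 (illustrative numbers only).
I = np.array([0.1, 0.2, 0.3, 0.4])
V = 2e-7 * I - 1e-7
jc = estimate_jc(I, V, cross_section_cm2=0.5)
```

The extrapolated current here is 5.5 A, giving Jc = 11 A/cm² for the assumed cross-section; real measurements would of course fit only the linear portion of the V-I curve.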

  3. Multiparameter extrapolation and deflation methods for solving equation systems

    Directory of Open Access Journals (Sweden)

    A. J. Hughes Hallett

    1984-01-01

    Full Text Available Most models in economics and the applied sciences are solved by first order iterative techniques, usually those based on the Gauss-Seidel algorithm. This paper examines the convergence of multiparameter extrapolations (accelerations) of first order iterations, as an improved approximation to the Newton method for solving arbitrary nonlinear equation systems. It generalises my earlier results on single parameter extrapolations. Richardson's generalised method and the deflation method for detecting successive solutions in nonlinear equation systems are also presented as multiparameter extrapolations of first order iterations. New convergence results are obtained for these methods.

  4. SU-F-T-579: Extrapolation Techniques for Small Field Dosimetry Using Gafchromic EBT3 Film

    Energy Technology Data Exchange (ETDEWEB)

    Morales, J [Chris OBrien Lifehouse, Camperdown, NSW (Australia)]

    2016-06-15

    Purpose: The purpose of this project is to test an experimental approach using an extrapolation technique with Gafchromic EBT3 film for small field x-ray dosimetry. Methods: Small 6 MV fields from a Novalis Tx linear accelerator with HD Multileaf Collimators were used. The field sizes ranged from 5 × 5 to 50 × 50 mm² MLC fields, plus a range of circular cones of 4 to 30 mm diameter. All measurements were performed in water at an SSD of 100 cm and at a depth of 10 cm. The relative output factors (ROFs) were determined from an extrapolation technique developed to eliminate the effects of partial volume averaging in film scans, by scanning films at high resolution (1200 DPI). The size of the region of interest (ROI) was varied to produce a plot of ROF versus ROI, which was then extrapolated to zero ROI to determine the relative output factor. The results were compared with other solid state detectors with proper corrections, namely the IBA SFD diode and the PTW 60008 and PTW 60012 diodes. Results: For the 4 mm cone, the extrapolated ROF had a value of 0.658 ± 0.014, as compared to 0.642 and 0.636 for 0.5 mm and 1 mm ROI analysis, respectively. This showed a change in output factor of 2.4% and 3.3% at these comparative ROI sizes. In comparison, the 25 mm cone had a difference in measured output factor of 0.3% and 0.5% between the 0.5 and 1.0 mm ROIs, respectively, compared to zero volume. For the fields defined by MLCs, a difference of up to 2% was observed for the 5 × 5 mm² field. Conclusion: A measurable difference can be seen in ROF based on the ROI when radiochromic film is used. Using an extrapolation technique with high-resolution scanning, good agreement can be achieved.

  5. Aitken extrapolation and epsilon algorithm for an accelerated solution of weakly singular nonlinear Volterra integral equations

    International Nuclear Information System (INIS)

    Mesgarani, H; Parmour, P; Aghazadeh, N

    2010-01-01

    In this paper, we apply Aitken extrapolation and the epsilon algorithm as acceleration techniques for the solution of a weakly singular nonlinear Volterra integral equation of the second kind. Following Tao and Yong (2006 J. Math. Anal. Appl. 324 225-37), the integral equation is solved by Navot's quadrature formula. Tao and Yong (2006) were also the first to apply Richardson extrapolation to accelerate convergence for weakly singular nonlinear Volterra integral equations of the second kind. To our knowledge, this paper may be the first attempt to apply Aitken extrapolation and the epsilon algorithm to weakly singular nonlinear Volterra integral equations of the second kind.
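As background (this is generic, not code from the paper), Aitken's Δ² transformation accelerates a linearly convergent scalar sequence; a minimal sketch:

```python
def aitken(seq):
    """Aitken delta-squared acceleration of a scalar sequence.

    Each output term combines three consecutive iterates:
        y_n = x_{n+2} - (x_{n+2} - x_{n+1})**2 / (x_{n+2} - 2*x_{n+1} + x_n)
    """
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        d2 = x2 - 2 * x1 + x0
        out.append(x2 if d2 == 0 else x2 - (x2 - x1) ** 2 / d2)
    return out

# Linearly convergent sequence x_n = 1 + 0.5**n: Aitken removes the
# geometric error term exactly, so every accelerated term equals the limit 1.
xs = [1 + 0.5 ** n for n in range(8)]
ys = aitken(xs)
```

For a purely geometric error the transformation is exact; for the quadrature sequences in the paper it only accelerates convergence.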

  6. Super Resolution and Interference Suppression Technique applied to SHARAD Radar Data

    Science.gov (United States)

    Raguso, M. C.; Mastrogiuseppe, M.; Seu, R.; Piazzo, L.

    2017-12-01

    We will present a super resolution and interference suppression technique applied to the data acquired by the SHAllow RADar (SHARAD) on board NASA's 2005 Mars Reconnaissance Orbiter (MRO) mission, currently operating around Mars [1]. The algorithms improve the range resolution roughly by a factor of 3 and the Signal to Noise Ratio (SNR) by several decibels. Range compression algorithms usually adopt conventional Fourier transform techniques, whose resolution is limited by the transmitted signal bandwidth, analogous to the Rayleigh criterion in optics. In this work, we investigate a super resolution method based on autoregressive models and linear prediction techniques [2]. Starting from the estimation of the linear prediction coefficients from the spectral data, the algorithm performs radar bandwidth extrapolation (BWE), thereby improving the range resolution of the pulse-compressed coherent radar data. Moreover, EMIs (ElectroMagnetic Interferences) are detected and the spectrum is interpolated in order to reconstruct an interference-free spectrum, thereby improving the SNR. The algorithm can be applied to the single complex look image after synthetic aperture (SAR) processing. We apply the proposed algorithm to simulated as well as real radar data. We will demonstrate the effective enhancement in vertical resolution with respect to the classical spectral estimator. We will show that the imaging of the subsurface layered structures observed in radargrams is improved, allowing additional insights for the scientific community in the interpretation of the SHARAD radar data, which will help to further our understanding of the formation and evolution of known geological features on Mars. References: [1] Seu et al. 2007, Science, 317, 1715-1718 [2] K.M. Cuomo, "A Bandwidth Extrapolation Technique for Improved Range Resolution of Coherent Radar Data", Project Report CJP-60, Revision 1, MIT Lincoln Laboratory (4 Dec. 1992).
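The linear-prediction idea behind bandwidth extrapolation can be sketched with a least-squares autoregressive predictor; this is a generic illustration, not the SHARAD processing chain (the model order and the test signal are arbitrary assumptions):

```python
import numpy as np

def lp_extrapolate(x, order, n_extra):
    """Fit a forward linear predictor by least squares, then extend x.

    x[n] is modelled as a linear combination of the previous `order`
    samples; the fitted recursion is run forward to append n_extra samples,
    which is the essence of bandwidth extrapolation on spectral data.
    """
    x = np.asarray(x, dtype=float)
    A = np.array([x[n - order:n][::-1] for n in range(order, len(x))])
    a, *_ = np.linalg.lstsq(A, x[order:], rcond=None)  # predictor coefficients
    y = list(x)
    for _ in range(n_extra):
        y.append(float(np.dot(a, y[-1:-order - 1:-1])))
    return np.array(y)

# A single sinusoid satisfies an order-2 recursion exactly, so the
# extrapolated samples continue it.
n = np.arange(32)
y = lp_extrapolate(np.cos(0.3 * n), order=2, n_extra=8)
```

In real BWE the same machinery runs on complex spectral samples, and interference-corrupted bins are replaced by predicted ones before inverse transforming.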

  7. Extrapolation techniques evaluating 24 hours of average electromagnetic field emitted by radio base station installations: spectrum analyzer measurements of LTE and UMTS signals

    International Nuclear Information System (INIS)

    Mossetti, Stefano; Bartolo, Daniela de; Nava, Elisa; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina

    2017-01-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure to high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistical significance and the logistics of control activities, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to have a starting point for defining operating procedures. (authors)

  8. EXTRAPOLATION TECHNIQUES EVALUATING 24 HOURS OF AVERAGE ELECTROMAGNETIC FIELD EMITTED BY RADIO BASE STATION INSTALLATIONS: SPECTRUM ANALYZER MEASUREMENTS OF LTE AND UMTS SIGNALS.

    Science.gov (United States)

    Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa

    2017-04-01

    International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure to high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistical significance and the logistics of control activities, as the signal trend in situ is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and to have a starting point for defining operating procedures. © The Author 2016. Published by Oxford University Press. All rights reserved.

  9. Microscale and nanoscale strain mapping techniques applied to creep of rocks

    Science.gov (United States)

    Quintanilla-Terminel, Alejandra; Zimmerman, Mark E.; Evans, Brian; Kohlstedt, David L.

    2017-07-01

    Usually several deformation mechanisms interact to accommodate plastic deformation. Quantifying the contribution of each to the total strain is necessary to bridge the gaps from observations of microstructures, to geomechanical descriptions, to extrapolating from laboratory data to field observations. Here, we describe the experimental and computational techniques involved in microscale strain mapping (MSSM), which allows strain produced during high-pressure, high-temperature deformation experiments to be tracked with high resolution. MSSM relies on the analysis of the relative displacement of initially regularly spaced markers after deformation. We present two lithography techniques used to pattern rock substrates at different scales: photolithography and electron-beam lithography. Further, we discuss the challenges of applying the MSSM technique to samples used in high-temperature and high-pressure experiments. We applied the MSSM technique to a study of strain partitioning during creep of Carrara marble and grain boundary sliding in San Carlos olivine, synthetic forsterite, and Solnhofen limestone at a confining pressure, Pc, of 300 MPa and homologous temperatures, T/Tm, of 0.3 to 0.6. The MSSM technique works very well up to temperatures of 700 °C. The experimental developments described here show promising results for higher-temperature applications.

  10. An experimental extrapolation technique using the Gafchromic EBT3 film for relative output factor measurements in small x-ray fields

    Energy Technology Data Exchange (ETDEWEB)

    Morales, Johnny E., E-mail: johnny.morales@lh.org.au [Department of Radiation Oncology, Chris O’Brien Lifehouse, 119-143 Missenden Road, Camperdown, NSW 2050, Australia and School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Level 4 O Block, Garden’s Point, QLD 4001 (Australia); Butson, Martin; Hill, Robin [Department of Radiation Oncology, Chris O’Brien Lifehouse, 119-143 Missenden Road, Camperdown, NSW 2050, Australia and Institute of Medical Physics, University of Sydney, NSW 2006 (Australia); Crowe, Scott B. [School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Level 4 O Block, Garden’s Point, QLD 4001, Australia and Cancer Care Services, Royal Brisbane and Women’s Hospital, Butterfield Street, Herston, QLD 4029 (Australia); Trapp, J. V. [School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Level 4 O Block, Garden’s Point, QLD 4001 (Australia)

    2016-08-15

    Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements at a high resolution of 1200 dpi. From the high resolution scans, the size of the circular region of interest was varied to produce a plot of relative output factors versus area of analysis. The plot was then extrapolated to zero to determine the relative output factor corresponding to zero volume. Results: Results have shown that for a 4 mm field size, the extrapolated relative output factor was measured as 0.651 ± 0.018, as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameter ROIs, respectively. This showed a change in the relative output factors of 1.8% and 2.8% at these comparative ROI sizes. In comparison, the 25 mm cone had negligible differences in the measured output factor between the zero extrapolation and the 0.5 and 1.0 mm diameter ROIs. Conclusions: This work shows that for very small fields such as 4.0 mm cone sizes, a measurable difference can be seen in the relative output factor depending on the size of the circular ROI used for analysis in radiochromic film dosimetry. The authors recommend scanning the Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and utilizing an extrapolation technique for output factor measurements in very small field dosimetry.
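The zero-area extrapolation described here amounts to fitting ROF against ROI area and reading off the intercept; a schematic version (with made-up numbers, not the paper's data):

```python
import numpy as np

def rof_at_zero_area(roi_areas_mm2, rof_values, deg=1):
    """Fit ROF versus ROI area and return the value extrapolated to zero area.

    A degree-1 fit reproduces the simple linear extrapolation described in
    the abstract; a higher degree could be used if the plot is curved.
    """
    coeffs = np.polyfit(roi_areas_mm2, rof_values, deg)
    return float(np.polyval(coeffs, 0.0))

# Synthetic example: ROF falls linearly with ROI area from a true value of 0.651.
areas = np.array([0.2, 0.4, 0.6, 0.8])        # mm^2, illustrative
rofs = 0.651 - 0.02 * areas
rof0 = rof_at_zero_area(areas, rofs)
```

The intercept recovers the volume-averaging-free output factor because the averaging bias vanishes as the analysis area shrinks to a point.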

  11. Community assessment techniques and the implications for rarefaction and extrapolation with Hill numbers.

    Science.gov (United States)

    Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E

    2017-12-01

    Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
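Since the comparisons above rest on Hill numbers, it may help to recall their definition; a minimal generic implementation (not the study's analysis pipeline):

```python
import numpy as np

def hill_number(counts, q):
    """Hill number (effective number of species) of order q.

    q = 0 gives species richness, q -> 1 the exponential of Shannon
    entropy, and q = 2 the inverse Simpson concentration.
    """
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    if q == 1:  # limit q -> 1 handled separately to avoid division by zero
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

# A perfectly even community of 4 species: all orders agree on 4 effective species.
even = [10, 10, 10, 10]
d0, d1, d2 = (hill_number(even, q) for q in (0, 1, 2))
```

For uneven communities the three orders diverge, which is why comparing them across methods is informative.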

  12. Dose rates from a C-14 source using extrapolation chamber and MC calculations

    International Nuclear Information System (INIS)

    Borg, J.

    1996-05-01

    The extrapolation chamber technique and the Monte Carlo (MC) calculation technique based on the EGS4 system have been studied for application to the determination of dose rates in a low-energy β radiation field, e.g., that from a 14C source. The extrapolation chamber measurement method is the basic method for determination of dose rates in β radiation fields. Applying a number of correction factors and the tissue-to-air stopping power ratio, the measured dose rate in an air volume surrounded by tissue equivalent material is converted into dose to tissue. Various details of the extrapolation chamber measurement method and evaluation procedure have been studied and further developed, and a complete procedure for the experimental determination of dose rates from a 14C source is presented. A number of correction factors and other parameters used in the evaluation procedure for the measured data have been obtained by MC calculations. The whole extrapolation chamber measurement procedure was simulated using the MC method. The measured dose rates showed an increasing deviation from the MC calculated dose rates as the absorber thickness increased. This indicates that the EGS4 code may have some limitations for the transport of very low-energy electrons, i.e., electrons with estimated energies less than 10-20 keV. MC calculations of dose to tissue were performed using two models: a cylindrical tissue phantom and a computer model of the extrapolation chamber. The dose to tissue in the extrapolation chamber model showed an additional buildup dose compared to the dose in the tissue model. (au) 10 tabs., 11 ills., 18 refs
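The measurement principle can be sketched with the textbook extrapolation-chamber relation, in which the dose rate follows from the slope of ionisation current versus air-gap width as the gap tends to zero. This is a simplified illustration: the stopping power ratio value and the data are assumptions, and the correction factors discussed in the abstract are deliberately omitted.

```python
import numpy as np

def surface_dose_rate(gaps_m, currents_a, area_m2,
                      w_over_e=33.97, s_tissue_air=1.13, rho_air=1.205):
    """Tissue dose rate (Gy/s) from extrapolation-chamber readings.

    Uses D = (W/e) * s_tissue,air * (dI/dd) / (rho_air * A), where dI/dd is
    the fitted slope of ionisation current versus air-gap width.  W/e and
    rho_air are standard air values; s_tissue_air = 1.13 is an assumed
    stopping power ratio, and all correction factors are omitted here.
    """
    slope = float(np.polyfit(gaps_m, currents_a, 1)[0])  # dI/dd in A/m
    return w_over_e * s_tissue_air * slope / (rho_air * area_m2)

# Synthetic linear current-vs-gap data with slope 1e-9 A/m over a 1 cm^2 area.
gaps = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])   # m
currents = 1.0e-9 * gaps + 2.0e-13                   # A, illustrative
d_rate = surface_dose_rate(gaps, currents, area_m2=1.0e-4)
```

Fitting the slope rather than using a single reading is what makes the method an extrapolation to zero collecting volume.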

  13. Rare event techniques applied in the Rasmussen study

    International Nuclear Information System (INIS)

    Vesely, W.E.

    1977-01-01

    The Rasmussen Study estimated public risks from commercial nuclear power plant accidents, and therefore the statistics of rare events had to be treated. Two types of rare events were specifically handled: those which were probabilistically rare and those which were statistically rare. Four techniques were used to estimate probabilities of rare events: aggregating data samples, discretizing "continuous" events, extrapolating from minor to catastrophic severities, and decomposing events using event trees and fault trees. In aggregating or combining data, the goal was to enlarge the data sample so that the rare event was no longer rare, i.e., so that the enlarged data sample contained one or more occurrences of the event of interest. This aggregation gave rise to random variable treatments of failure rates, occurrence frequencies, and other characteristics estimated from data. This random variable treatment can be interpreted as comparable to an empirical Bayes technique or a Bayesian technique. In the event discretizing technique, events of a detailed nature were grouped together into a grosser event for purposes of analysis as well as data collection. The treatment of data characteristics as random variables helped to account for the uncertainties arising from this discretizing. In the severity extrapolation technique, a severity variable was associated with each event occurrence for the purpose of predicting probabilities of catastrophic occurrences. Tail behaviors of distributions therefore needed to be considered. Finally, event trees and fault trees were used to express accident occurrences and system failures in terms of more basic events for which data existed. Common mode failures and general dependencies therefore needed to be treated. 2 figures

  14. Combining extrapolation with ghost interaction correction in range-separated ensemble density functional theory for excited states

    Science.gov (United States)

    Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel

    2017-11-01

    The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ⁻² in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ⁻³) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H2, HeH+, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ+ excitation energies in HeH+ and H2, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.
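The μ⁻ⁿ extrapolation itself reduces to simple algebra: given energies at two μ values and an assumed decay exponent n, the unknown prefactor can be eliminated. A generic two-point version (not the authors' implementation):

```python
def extrapolate_mu(mu1, e1, mu2, e2, n=3):
    """Two-point extrapolation assuming E(mu) = E_inf + c / mu**n.

    Multiplying each energy by mu**n and subtracting eliminates c:
        E_inf = (mu2**n * e2 - mu1**n * e1) / (mu2**n - mu1**n)
    n = 2 matches standard ground-state extrapolations; n = 3 corresponds
    to the faster GIC ensemble convergence discussed in the abstract.
    """
    return (mu2 ** n * e2 - mu1 ** n * e1) / (mu2 ** n - mu1 ** n)

# Model data generated exactly as E_inf + c/mu**3 is recovered exactly
# (E_inf = -2.9 hartree and c = 0.5 are arbitrary illustrative numbers).
E_inf, c = -2.9, 0.5
approx = extrapolate_mu(1.0, E_inf + c, 2.0, E_inf + c / 8, n=3)
```

With real data the leading-order assumption only holds approximately, so the extrapolated value improves with larger μ rather than being exact.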

  15. The ATLAS Track Extrapolation Package

    CERN Document Server

    Salzburger, A

    2007-01-01

    The extrapolation of track parameters and their associated covariances to destination surfaces of different types is a very frequent process in the event reconstruction of high energy physics experiments. This is, amongst other reasons, due to the fact that most track and vertex fitting techniques are based on the first and second moments of the underlying probability density distribution. The correct stochastic or deterministic treatment of interactions with the traversed detector material is hereby crucial for high quality track reconstruction throughout the entire momentum range of final state particles that are produced in high energy physics collision experiments. This document presents the main concepts, the algorithms and the implementation of the newly developed, powerful ATLAS track extrapolation engine. It also emphasises validation procedures, timing measurements and the integration into the ATLAS offline reconstruction software.

  16. General extrapolation model for an important chemical dose-rate effect

    International Nuclear Information System (INIS)

    Gillen, K.T.; Clough, R.L.

    1984-12-01

    In order to extrapolate material accelerated aging data, methodologies must be developed based on sufficient understanding of the processes leading to material degradation. One of the most important mechanisms leading to chemical dose-rate effects in polymers involves the breakdown of intermediate hydroperoxide species. A general model for this mechanism is derived based on the underlying chemical steps. The results lead to a general formalism for understanding dose rate and sequential aging effects when hydroperoxide breakdown is important. We apply the model to combined radiation/temperature aging data for a PVC material and show that this data is consistent with the model and that model extrapolations are in excellent agreement with 12-year real-time aging results from an actual nuclear plant. This model and other techniques discussed in this report can aid in the selection of appropriate accelerated aging methods and can also be used to compare and select materials for use in safety-related components. This will result in increased assurance that equipment qualification procedures are adequate

  17. Extrapolation Method for System Reliability Assessment

    DEFF Research Database (Denmark)

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro

    2012-01-01

    The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals. It is shown that the proposed scheme is efficient and adds to generality for this class of approximations for probability integrals.

  18. Higher Order Aitken Extrapolation with Application to Converging and Diverging Gauss-Seidel Iterations

    OpenAIRE

    Tiruneh, Ababu Teklemariam

    2013-01-01

    Aitken extrapolation, normally applied to convergent fixed point iteration, is extended to extrapolate the solution of a divergent iteration. In addition, higher order Aitken extrapolation is introduced that enables successive decomposition of the large eigenvalues of the iteration matrix to enable convergence. While extrapolation of a convergent fixed point iteration using a geometric series sum is a known form of Aitken acceleration, it is shown in this paper that the same formula can be used to ...
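The key observation, that the Aitken formula also recovers the fixed point of a divergent iteration, can be illustrated directly on a scalar linear map (the map g below is an arbitrary example, not taken from the paper):

```python
def aitken_step(x0, x1, x2):
    """One Aitken delta-squared extrapolation from three consecutive iterates."""
    d2 = x2 - 2 * x1 + x0
    return x2 - (x2 - x1) ** 2 / d2

# x = g(x) has fixed point x* = 1, but |g'(x)| = 2 > 1, so the plain
# iteration diverges; Aitken extrapolation still lands on the fixed point.
g = lambda x: 3.0 - 2.0 * x
x0 = 0.0
x1, x2 = g(x0), g(g(x0))            # 3.0, -3.0: iterates move away from 1
fixed_point = aitken_step(x0, x1, x2)
```

For a linear map the extrapolation is exact from any three iterates; the paper's higher order variants target iteration matrices with several large eigenvalues.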

  19. Applying contemporary statistical techniques

    CERN Document Server

    Wilcox, Rand R

    2003-01-01

    Applying Contemporary Statistical Techniques explains why traditional statistical methods are often inadequate or outdated when applied to modern problems. Wilcox demonstrates how new and more powerful techniques address these problems far more effectively, making these modern robust methods understandable, practical, and easily accessible. * Assumes no previous training in statistics * Explains how and why modern statistical methods provide more accurate results than conventional methods * Covers the latest developments on multiple comparisons * Includes recent advances

  20. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.

  1. Principles of animal extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, E.J.

    1991-01-01

    Animal Extrapolation presents a comprehensive examination of the scientific issues involved in extrapolating the results of animal experiments to human response. This text attempts to present a comprehensive synthesis and analysis of the host of biomedical and toxicological studies of interspecies extrapolation. Calabrese's work presents not only the conceptual basis of interspecies extrapolation, but also illustrates how these principles may be better used in the selection of animal experimentation models and in the interpretation of animal experimental results. The book's theme centers on four types of extrapolation: (1) from the average animal model to the average human; (2) from small animals to large ones; (3) from the high-risk animal to the high-risk human; and (4) from high doses of exposure to lower, more realistic, doses. Calabrese attacks the issues of interspecies extrapolation by dealing individually with the factors which contribute to interspecies variability: differences in absorption, intestinal flora, tissue distribution, metabolism, repair mechanisms, and excretion. From this foundation, Calabrese then discusses the heterogeneity of these same factors in the human population in an attempt to evaluate the representativeness of various animal models in light of interindividual variations. In addition to discussing the question of suitable animal models for specific high-risk groups and specific toxicological endpoints, the author also examines extrapolation questions related to the use of short-term tests to predict long-term human carcinogenicity and birth defects. The book is comprehensive in scope and specific in detail; for those environmental health professionals seeking to understand the toxicological models which underlie health risk assessments, Animal Extrapolation is a valuable information source.

  2. Functional differential equations with unbounded delay in extrapolation spaces

    Directory of Open Access Journals (Sweden)

    Mostafa Adimy

    2014-08-01

    We study the existence, regularity and stability of solutions for nonlinear partial neutral functional differential equations with unbounded delay and a Hille-Yosida operator on a Banach space X. We consider two nonlinear perturbations: the first is a function taking its values in X and the second is a function belonging to a space larger than X, an extrapolated space. We use extrapolation techniques to prove the existence and regularity of solutions and establish a linearization principle for the stability of the equilibria of our equation.

  3. A regularization method for extrapolation of solar potential magnetic fields

    Science.gov (United States)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
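
    A minimal sketch of the regularization idea, under assumed parameters: continuing boundary data upward via the Fourier solution of Laplace's equation amplifies wavenumber k by exp(|k|z), so without smoothing the noise in the data blows up; a Gaussian filter applied to the initial data keeps the continuation bounded. The grid, noise level and smoothing scale below are invented for illustration:

```python
import numpy as np

# Invented 1-D boundary data: one harmonic plus noise on a periodic grid.
nx, L, z = 256, 1.0, 0.1                  # grid size, domain length, height
x = np.linspace(0.0, L, nx, endpoint=False)
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=L / nx)
rng = np.random.default_rng(0)
b0 = np.sin(2.0 * np.pi * x / L) + 0.001 * rng.standard_normal(nx)

# Gaussian smoothing of the initial data (the regularization), followed by
# the ill-posed growing continuation exp(|k| z) of the Cauchy problem.
sigma = 0.05                              # smoothing scale (assumed)
bz_hat = np.fft.fft(b0) * np.exp(-0.5 * (kx * sigma) ** 2) * np.exp(np.abs(kx) * z)
bz = np.real(np.fft.ifft(bz_hat))

# Without the filter the highest grid modes would be amplified by ~exp(80);
# with it, the result stays near the smoothed continuation of the harmonic.
expected_peak = np.exp(2.0 * np.pi * z - 0.5 * (2.0 * np.pi * sigma) ** 2)
```

The filter width plays the role of the measurement-sensitivity-dependent regularization parameter in the abstract: larger sigma suppresses more noise at the cost of smoothing real structure.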

  4. Standardization of I-125 solution by extrapolation of an efficiency curve obtained by the coincidence X-(X-γ) counting method

    International Nuclear Information System (INIS)

    Iwahara, A.

    1989-01-01

    The activity concentration of 125I was determined by the X-(X-γ) coincidence counting method and an efficiency extrapolation curve. The measurement system consists of two thin NaI(Tl) scintillation detectors which are horizontally movable on a track. The efficiency curve is obtained by symmetrically changing the distance between the source and the detectors, and the activity is determined by applying a linear efficiency extrapolation curve. All sum-coincidence events are included in a 10 to 100 keV counting window, and the main source of uncertainty comes from poor counting statistics near zero efficiency. The consistency of the results with other methods shows that this technique can be applied to photon cascade emitters without discrimination by the detectors. The 35.5 keV gamma-ray emission probability of 125I was also determined using a Gamma-X type high-purity germanium detector. (author) [pt
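
    The extrapolation step itself is simple: the measured quantity is fit linearly against the detection efficiency (varied here through the source-detector distance) and the activity is read off at the extrapolated intercept. The sketch below is a generic linear efficiency extrapolation with invented numbers, not the X-(X-γ) sum-coincidence formalism:

```python
import numpy as np

# Invented calibration: observed rate-derived quantity at several
# efficiencies, linear in efficiency, extrapolated to zero efficiency.
rng = np.random.default_rng(1)
true_activity = 5.0e4                       # Bq, assumed
eff = np.linspace(0.05, 0.40, 8)            # efficiencies at several distances
observed = true_activity * (1.0 + 0.3 * eff) + rng.normal(0.0, 50.0, eff.size)

slope, intercept = np.polyfit(eff, observed, 1)   # linear extrapolation curve
activity_estimate = intercept                     # value extrapolated to zero efficiency
```

The abstract's point about uncertainty follows directly: the intercept is the quantity farthest from the measured points, so counting statistics near zero efficiency dominate the error budget.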

  5. Proposition of Improved Methodology in Creep Life Extrapolation

    International Nuclear Information System (INIS)

    Kim, Woo Gon; Park, Jae Young; Jang, Jin Sung

    2016-01-01

    To design SFRs for a 60-year operation, it is desirable to have experimental creep-rupture data for Gr. 91 steel close to 20 y, or at least rupture lives significantly higher than 10^5 h. This requirement arises from the fact that, for the creep design, a factor of 3 for extrapolation is considered appropriate. However, obtaining experimental data close to 20 y would be expensive and would also take considerable time. Therefore, reliable creep life extrapolation techniques become necessary for a safe design life of 60 y. In addition, it is appropriate to obtain experimental long-term creep-rupture data in the range 10^5 ∼ 2x10^5 h to improve the reliability of extrapolation. In the present investigation, a new function of hyperbolic sine ('sinh') form for the master curve in time-temperature parameter (TTP) methods was proposed to accurately extrapolate the long-term creep rupture stress of Gr. 91 steel. The constant values used for each parametric equation were optimized on the basis of the creep rupture data. Average stress values predicted for up to 60 y were evaluated and compared with those of the French nuclear design code RCC-MRx. The results showed that the master curve of the 'sinh' function had wider acceptance, with good flexibility in the low stress ranges beyond the experimental data. It was clarified that the 'sinh' function is more reasonable for creep life extrapolation than the polynomial forms that have been used conventionally until now.
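
    To make the construction concrete, the sketch below combines a Larson-Miller time-temperature parameter with a 'sinh'-form master curve, P(σ) = a + b·log10 sinh(σ/s0), and inverts it for rupture life. All constants are invented for illustration; they are not the paper's optimized values for Gr. 91:

```python
import numpy as np

# Assumed Larson-Miller constant, stress scale and master-curve constants.
C, s0 = 20.0, 100.0
a, b = 22000.0, -2000.0

def rupture_time(stress_mpa, temp_k):
    """Invert P = T*(C + log10 t_r) for rupture life t_r in hours."""
    P = a + b * np.log10(np.sinh(stress_mpa / s0))
    return 10.0 ** (P / temp_k - C)

# Lower stress must extrapolate to a longer rupture life at fixed T.
t1 = rupture_time(150.0, 823.0)
t2 = rupture_time(100.0, 823.0)
```

The practical advantage claimed for the sinh form shows up in the low-stress tail: log10 sinh(σ/s0) flattens smoothly as σ decreases, whereas a fitted polynomial can curl unphysically outside the data range.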

  6. Proposition of Improved Methodology in Creep Life Extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woo Gon; Park, Jae Young; Jang, Jin Sung [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    To design SFRs for a 60-year operation, it is desirable to have experimental creep-rupture data for Gr. 91 steel close to 20 y, or at least rupture lives significantly higher than 10^5 h. This requirement arises from the fact that, for the creep design, a factor of 3 for extrapolation is considered appropriate. However, obtaining experimental data close to 20 y would be expensive and would also take considerable time. Therefore, reliable creep life extrapolation techniques become necessary for a safe design life of 60 y. In addition, it is appropriate to obtain experimental long-term creep-rupture data in the range 10^5 ∼ 2x10^5 h to improve the reliability of extrapolation. In the present investigation, a new function of hyperbolic sine ('sinh') form for the master curve in time-temperature parameter (TTP) methods was proposed to accurately extrapolate the long-term creep rupture stress of Gr. 91 steel. The constant values used for each parametric equation were optimized on the basis of the creep rupture data. Average stress values predicted for up to 60 y were evaluated and compared with those of the French nuclear design code RCC-MRx. The results showed that the master curve of the 'sinh' function had wider acceptance, with good flexibility in the low stress ranges beyond the experimental data. It was clarified that the 'sinh' function is more reasonable for creep life extrapolation than the polynomial forms that have been used conventionally until now.

  7. Attenuated Total Reflectance Fourier transform infrared spectroscopy for determination of long chain free fatty acid concentration in oily wastewater using the double wavenumber extrapolation technique

    Science.gov (United States)

    Long Chain Free Fatty Acids (LCFFAs) from the hydrolysis of fat, oil and grease (FOG) are major components in the formation of insoluble saponified solids known as FOG deposits that accumulate in sewer pipes and lead to sanitary sewer overflows (SSOs). A Double Wavenumber Extrapolative Technique (DW...

  8. Applied ALARA techniques

    International Nuclear Information System (INIS)

    Waggoner, L.O.

    1998-01-01

    The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different from the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes cleanup of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady, and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

  9. Applied ALARA techniques

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, L.O.

    1998-02-05

    The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different from the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes cleanup of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread, and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady, and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

  10. Surface dose extrapolation measurements with radiographic film

    International Nuclear Information System (INIS)

    Butson, Martin J; Cheung Tsang; Yu, Peter K N; Currie, Michael

    2004-01-01

    Assessment of surface dose delivered from radiotherapy x-ray beams for optimal results should be performed both inside and outside the prescribed treatment fields. An extrapolation technique can be used with radiographic film to perform surface dose assessment for open field high energy x-ray beams. This can produce an accurate two-dimensional map of surface dose if required. Results have shown that the surface percentage dose can be estimated within ±3% of parallel plate ionization chamber results with radiographic film using a series of film layers to produce an extrapolated result. Extrapolated percentage dose assessment for 10 cm, 20 cm and 30 cm square fields was estimated to be 15% ± 2%, 29% ± 3% and 38% ± 3% at the central axis and relatively uniform across the treatment field. The corresponding parallel plate ionization chamber measurements are 16%, 27% and 37%, respectively. Surface doses are also measured outside the treatment field which are mainly due to scattered electron contamination. To achieve this result, film calibration curves must be irradiated to similar x-ray field sizes as the experimental film to minimize quantitative variations in film optical density caused by varying x-ray spectrum with field size. (note)
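
    The layer-extrapolation arithmetic amounts to a straight-line fit of measured dose versus effective film depth, read off at zero depth. The depths and readings below are invented for illustration only, not the paper's measurements:

```python
import numpy as np

# Invented readings: percentage dose measured with 1, 2, 3 and 4 stacked
# film layers, i.e. at increasing effective depth in the film stack.
layer_depth = np.array([0.17, 0.34, 0.51, 0.68])   # mm, assumed effective depths
measured_pct = np.array([21.0, 27.5, 33.0, 39.5])  # % of max dose, invented

slope, intercept = np.polyfit(layer_depth, measured_pct, 1)
surface_dose_pct = intercept   # linear extrapolation back to zero depth
```

Because dose builds up rapidly with depth in the buildup region, each added film layer reads higher than the last; extrapolating the line back to zero thickness removes the film's own buildup contribution from the surface estimate.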

  11. CT image construction of a totally deflated lung using deformable model extrapolation

    International Nuclear Information System (INIS)

    Sadeghi Naini, Ali; Pierce, Greg; Lee, Ting-Yim

    2011-01-01

    Purpose: A novel technique is proposed to construct a CT image of a totally deflated lung from a free-breathing 4D-CT image sequence acquired preoperatively. Such a constructed CT image is very useful in performing tumor ablative procedures such as lung brachytherapy. Tumor ablative procedures are frequently performed while the lung is totally deflated. Deflating the lung during such procedures renders preoperative images ineffective for targeting the tumor. Furthermore, the problem cannot be solved using intraoperative ultrasound (US) images, because US images are very sensitive to the small residual amount of air remaining in the deflated lung. One possible solution to address these issues is to register high quality preoperative CT images of the deflated lung with their corresponding low quality intraoperative US images. However, given that such preoperative images correspond to an inflated lung, the CT images need to be processed to construct CT images pertaining to the lung's deflated state. Methods: To obtain CT images of the deflated lung, we present a novel image construction technique using extrapolated deformable registration to predict the deformation the lung undergoes during full deflation. The proposed construction technique involves estimating the lung's air volume in each preoperative image automatically in order to track the respiration phase of each 4D-CT image throughout a respiratory cycle; i.e., the technique does not need any external marker to form a respiratory signal in the process of curve fitting and extrapolation. The extrapolated deformation field is then applied to a preoperative reference image in order to construct the totally deflated lung's CT image. The technique was evaluated experimentally using ex vivo porcine lung. Results: The ex vivo lung experiments led to very encouraging results. In comparison with the CT image of the deflated lung acquired for the purpose of validation, the constructed CT image was very similar.

  12. An Efficient Method of Reweighting and Reconstructing Monte Carlo Molecular Simulation Data for Extrapolation to Different Temperature and Density Conditions

    KAUST Repository

    Sun, Shuyu

    2013-06-01

    This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-gas-phase region. Extrapolation behaviors as functions of extrapolation range were studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
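
    The reweighting half of the technique is standard Boltzmann reweighting: samples generated at inverse temperature β₀ are reused at a nearby β₁ with weights exp(-(β₁-β₀)E). The toy below uses an exponential "energy" distribution, for which the reweighted mean is known exactly; it illustrates the principle, not the authors' Lennard-Jones implementation:

```python
import numpy as np

# Samples drawn from p0(E) ~ beta0*exp(-beta0*E), standing in for energies
# visited by a Markov chain at the original simulation temperature.
rng = np.random.default_rng(2)
beta0, beta1 = 1.0, 1.1
E = rng.exponential(1.0 / beta0, size=200_000)

# Reweight each sample to the target temperature and form the weighted mean.
w = np.exp(-(beta1 - beta0) * E)
E_reweighted = np.sum(w * E) / np.sum(w)

# For this toy density the reweighted distribution is exponential at beta1,
# so the exact answer is 1/beta1.
E_exact = 1.0 / beta1
```

The limits the abstract mentions also show up here: as β₁ moves far from β₀ the weights become dominated by a few samples and the estimate degrades, which is why the extrapolation range is bounded.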

  13. An Efficient Method of Reweighting and Reconstructing Monte Carlo Molecular Simulation Data for Extrapolation to Different Temperature and Density Conditions

    KAUST Repository

    Sun, Shuyu; Kadoura, Ahmad Salim; Salama, Amgad

    2013-01-01

    This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-gas-phase region. Extrapolation behaviors as functions of extrapolation range were studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.

  14. SU-D-204-02: BED Consistent Extrapolation of Mean Dose Tolerances

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Z; Bortfeld, T; Hong, T; Wolfgang, J; Unkelbach, J [Massachusetts General Hospital, Boston, MA (United States)

    2016-06-15

    Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot
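
    For reference, the baseline relation that the abstract's shape-factor correction modifies is the standard BED equation, BED = n·d·(1 + d/(α/β)); equating BEDs converts a tolerance in one schedule into an isoeffective dose in another. The α/β value and schedules below are illustrative, not the study's clinical constraints:

```python
import math

def bed(n, d, alpha_beta):
    """Biologically Effective Dose for n fractions of d Gy each."""
    return n * d * (1.0 + d / alpha_beta)

# Assumed liver-like alpha/beta = 3 Gy; reference schedule 35 x 2 Gy.
ab = 3.0
bed_ref = bed(35, 2.0, ab)

# Isoeffective dose per fraction for a 5-fraction schedule: solve
# n*d*(1 + d/ab) = bed_ref, a quadratic d^2 + ab*d - ab*bed_ref/n = 0.
n = 5
d5 = (-ab + math.sqrt(ab * ab + 4.0 * ab * bed_ref / n)) / 2.0
```

This is the uncorrected extrapolation; the paper's point is that applying it to the *mean* dose of a heterogeneous distribution, without the dose shape factor, can overestimate the tolerable mean dose by 10-15% in hypofractionated schedules.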

  15. The extrapolation of creep rupture data by PD6605 - An independent case study

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65 Fisher Avenue, Rugby, Warks CV22 5HW (United Kingdom)

    2011-04-15

    The worked example presented in BSI document PD6605-1:1998, to illustrate the selection, validation and extrapolation of a creep rupture model using statistical analysis, was independently examined. Alternative rupture models were formulated and analysed by the same statistical methods, and were shown to represent the test data more accurately than the original model. Median rupture lives extrapolated from the original and alternative models were found to diverge widely under some conditions of practical interest. The tests prescribed in PD6605 and employed to validate the original model were applied to the better of the alternative models. But the tests were unable to discriminate between the two, demonstrating that these tests fail to ensure reliability in extrapolation. The difficulties of determining when a model is sufficiently reliable for use in extrapolation are discussed and some proposals are made.

  16. Statistical modeling and extrapolation of carcinogenesis data

    International Nuclear Information System (INIS)

    Krewski, D.; Murdoch, D.; Dewanji, A.

    1986-01-01

    Mathematical models of carcinogenesis are reviewed, including pharmacokinetic models for metabolic activation of carcinogenic substances. Maximum likelihood procedures for fitting these models to epidemiological data are discussed, including situations where the time to tumor occurrence is unobservable. The plausibility of different possible shapes of the dose response curve at low doses is examined, and a robust method for linear extrapolation to low doses is proposed and applied to epidemiological data on radiation carcinogenesis
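
    The linear low-dose extrapolation mentioned can be illustrated as a least-squares line through the origin, fit in the observed high-dose range and evaluated at a low dose. The dose-response numbers below are invented:

```python
import numpy as np

# Invented excess-risk data in the observed (high-dose) range.
dose = np.array([0.5, 1.0, 1.5, 2.0])              # Gy
excess_risk = np.array([0.026, 0.049, 0.076, 0.101])

# Least-squares slope of a line through the origin (no-threshold model).
slope = np.sum(dose * excess_risk) / np.sum(dose ** 2)

# Linear extrapolation far below the observed range.
risk_at_10mGy = slope * 0.01
```

The choice of a linear form at low doses is exactly the modeling assumption the abstract examines; the fit itself is trivial, the plausibility of the shape is not.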

  17. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

    The phase-field method coupling with thermodynamic data has become a trend for predicting the microstructure formation in technical alloys. Nevertheless, the frequent access to thermodynamic database and calculation of local equilibrium conditions can be time intensive. The extrapolation methods, which are derived based on Taylor expansion, can provide approximation results with a high computational efficiency, and have been proven successful in applications. This paper presents a high precision second order extrapolation method for calculating the driving force in phase transformation. To obtain the phase compositions, different methods in solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its best accuracy. The developed second order extrapolation method along with the M-slope approach and the first order extrapolation method are applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, which demonstrate the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, the graphic processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section has demonstrated the ability of the developed GPU-accelerated second order extrapolation approach for multiphase-field model.
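
    The accuracy gain from carrying a Taylor-expansion extrapolation to second order can be seen in a one-line comparison; here exp stands in for the thermodynamic driving force, and the expansion point and step are arbitrary:

```python
import math

# Extrapolate f(x0 + dx) from derivatives at x0, first vs second order.
f = math.exp            # f = f' = f'' for exp, which keeps the sketch short
x0, dx = 0.0, 0.3

first = f(x0) + f(x0) * dx                   # first-order extrapolation
second = first + 0.5 * f(x0) * dx * dx       # add the second-order term

err1 = abs(first - f(x0 + dx))
err2 = abs(second - f(x0 + dx))
```

The trade-off the paper exploits is the same one visible here: each extra order costs a few more stored derivatives but sharply reduces how often the expensive exact (thermodynamic-database) evaluation must be refreshed.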

  18. Efficient Wavefield Extrapolation In Anisotropic Media

    KAUST Repository

    Alkhalifah, Tariq; Ma, Xuxin; Waheed, Umair bin; Zuberi, Mohammad Akbar Hosain

    2014-01-01

    Various examples are provided for wavefield extrapolation in anisotropic media. In one example, among others, a method includes determining an effective isotropic velocity model and extrapolating an equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. The effective isotropic velocity model can be based upon a kinematic geometrical representation of an anisotropic, poroelastic or viscoelastic wavefield. Extrapolating the equivalent propagation can use isotropic, acoustic or elastic operators based upon the determined effective isotropic velocity model. In another example, a non-transitory computer-readable medium stores an application that, when executed by processing circuitry, causes the processing circuitry to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. In another example, a system includes processing circuitry and an application configured to cause the system to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield.

  19. Efficient Wavefield Extrapolation In Anisotropic Media

    KAUST Repository

    Alkhalifah, Tariq

    2014-07-03

    Various examples are provided for wavefield extrapolation in anisotropic media. In one example, among others, a method includes determining an effective isotropic velocity model and extrapolating an equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. The effective isotropic velocity model can be based upon a kinematic geometrical representation of an anisotropic, poroelastic or viscoelastic wavefield. Extrapolating the equivalent propagation can use isotropic, acoustic or elastic operators based upon the determined effective isotropic velocity model. In another example, a non-transitory computer-readable medium stores an application that, when executed by processing circuitry, causes the processing circuitry to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. In another example, a system includes processing circuitry and an application configured to cause the system to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield.

  20. Melting of “non-magic” argon clusters and extrapolation to the bulk limit

    International Nuclear Information System (INIS)

    Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke

    2014-01-01

    The melting of argon clusters Ar N is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, “Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations,” Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes
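
    The bulk extrapolation rests on the standard finite-size scaling form T_m(N) = T_bulk - c·N^(-1/3), fit by linear least squares in N^(-1/3). The cluster data below are synthetic, generated from the scaling law itself so the fit recovers the inputs exactly; they are not the paper's Monte Carlo results:

```python
import numpy as np

# Synthetic cluster melting temperatures obeying the scaling law exactly.
N = np.array([55.0, 100.0, 147.0, 200.0, 309.0])
T_bulk_true, c_true = 85.9, 120.0          # K; c_true is an invented amplitude
Tm = T_bulk_true - c_true * N ** (-1.0 / 3.0)

# Linear least squares in the variable N^(-1/3); the intercept at
# N -> infinity is the bulk melting temperature.
A = np.column_stack([np.ones_like(N), -N ** (-1.0 / 3.0)])
sol, *_ = np.linalg.lstsq(A, Tm, rcond=None)
T_bulk_fit, c_fit = sol
```

With real simulation data the points scatter about the line, and the paper's observation is that non-magic cluster sizes still fall close enough to it for a reliable intercept.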

  1. One-step lowrank wave extrapolation

    KAUST Repository

    Sindi, Ghada Atif

    2014-01-01

    Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using the Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity gradient term to add to the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can utilize larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.

  2. Study of the energy dependence of an extrapolation chamber in low-energy X-ray beams

    International Nuclear Information System (INIS)

    Bastos, Fernanda M.; Silva, Teogenes A. da

    2014-01-01

    The main objective of this work was to study the energy dependence of an extrapolation chamber in low-energy X-rays, in order to determine the uncertainty associated with the variation of the incident radiation energy in measurements in which the chamber is used. To study the energy dependence, comparative ionization current measurements were conducted between the extrapolation chamber and two ionization chambers: a Radcal model RC6M mammography chamber with energy dependence less than 5%, and an NE Technology model 2575 radiation protection chamber; both chambers have very thin windows, allowing their application in low-energy beams. Measurements were made at four extrapolation chamber depths, from 1.0 to 4.0 mm in 1.0 mm intervals, for each reference radiation. The study showed that the energy dependence varies with the volume of the extrapolation chamber. A further analysis concluded that the energy dependence of the extrapolation chamber becomes smaller when the slope of the ionization current versus depth is used for the different reference radiations; this shows that the extrapolation technique, used for the absorbed dose calculation, reduces the uncertainty associated with the variation of the response with radiation energy.

  3. Smooth extrapolation of unknown anatomy via statistical shape models

    Science.gov (United States)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
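
    A minimal 2-D thin plate spline, the tool the third approach trains on displacements between the surface estimate and the known patient surface, can be written directly from its radial basis φ(r) = r² log r plus an affine part. The control points and displacements below are invented, and this is a scalar-valued sketch rather than the full 3-D surface warp:

```python
import numpy as np

def _phi(d):
    """Thin plate spline radial basis, with phi(0) = 0 handled explicitly."""
    out = np.zeros_like(d)
    m = d > 0
    out[m] = d[m] ** 2 * np.log(d[m])
    return out

def tps_fit(pts, vals):
    """Solve the standard TPS linear system for n kernel + 3 affine coefficients."""
    n = len(pts)
    K = _phi(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    return np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))

def tps_eval(coef, pts, q):
    n = len(pts)
    U = _phi(np.linalg.norm(q[None, :] - pts, axis=-1))
    return float(U @ coef[:n] + coef[n] + q @ coef[n + 1:])

# Invented control points and displacements: zero at the boundary, a bump
# at the center, smoothly interpolated everywhere else.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
disp = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
coef = tps_fit(pts, disp)
```

The property that matters for the paper's result is visible in the construction: the spline reproduces the known displacements exactly at the control points while its extrapolation beyond them stays smooth, which is why it avoids both the seam of copy-paste and the corruption of known vertices caused by feathering.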

  4. Wavefield extrapolation in pseudodepth domain

    KAUST Repository

    Ma, Xuxin

    2013-02-01

    Wavefields are commonly computed in the Cartesian coordinate frame. Its efficiency is inherently limited due to spatial oversampling in deep layers, where the velocity is high and wavelengths are long. To alleviate this computational waste due to uneven wavelength sampling, we convert the vertical axis of the conventional domain from depth to vertical time, or pseudodepth. This creates a nonorthogonal Riemannian coordinate system. Isotropic and anisotropic wavefields can be extrapolated in the new coordinate frame with improved efficiency and good consistency with Cartesian domain extrapolation results. Prestack depth migrations are also evaluated based on wavefield extrapolation in the pseudodepth domain. © 2013 Society of Exploration Geophysicists. All rights reserved.

  5. A simple extrapolation of thermodynamic perturbation theory to infinite order

    International Nuclear Information System (INIS)

    Ghobadi, Ahmadreza F.; Elliott, J. Richard

    2015-01-01

    Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A_3/A_2, where A_i is the ith-order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed Chain - Statistical Associating Fluid Theory (PC-SAFT).
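A minimal sketch of the idea, under the simplifying assumption that the ratio between successive orders stays constant beyond the highest computed order (the paper's actual recursion models the ratio as a Gaussian in density): the omitted tail then sums as a geometric series.

```python
def extrapolate_to_infinite_order(contributions):
    """Sum known perturbation orders plus a geometric tail.

    contributions: [A_1, A_2, ..., A_n]. Assumes the ratio
    A_{i+1}/A_i is frozen at r = A_n / A_{n-1} for all omitted
    orders, so the tail is A_n * r / (1 - r) for |r| < 1.
    """
    *_, a_prev, a_last = contributions
    r = a_last / a_prev
    if abs(r) >= 1.0:
        raise ValueError("need |r| < 1 for the tail to converge")
    tail = a_last * r / (1.0 - r)
    return sum(contributions) + tail

# Sanity check against an exact geometric series: A_i = 0.5**i sums to 1.
approx = extrapolate_to_infinite_order([0.5, 0.25, 0.125])
print(approx)  # 1.0, since here the ratio really is constant
```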

  6. Builtin vs. auxiliary detection of extrapolation risk.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Kegelmeyer, W. Philip

    2013-02-01

    A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.
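A hedged sketch contrasting the two detector styles on a one-dimensional toy problem; the specific choices (class-probability margin as the builtin signal, nearest-training-point distance as the auxiliary model) are illustrative stand-ins for the methods compared in the report.

```python
def builtin_risk(prob_positive):
    """Builtin signal: low confidence (probability near 0.5) flags risk."""
    return 1.0 - 2.0 * abs(prob_positive - 0.5)

def auxiliary_risk(x, training_xs):
    """Auxiliary model: distance from x to the nearest training point."""
    return min(abs(x - t) for t in training_xs)

train = [0.0, 1.0, 2.0, 3.0]

# A point far outside the training range can still receive a confident
# prediction (builtin risk low) while the auxiliary detector flags it,
# echoing the report's finding that auxiliary models are more reliable.
confident_prob = 0.99
print(builtin_risk(confident_prob))  # ~0.02: looks safe
print(auxiliary_risk(10.0, train))   # 7.0: clearly an extrapolation
```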

  7. Motion extrapolation in the central fovea.

    Directory of Open Access Journals (Sweden)

    Zhuanghua Shi

    Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency were not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination). A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted, the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with the highest spatial acuity, the fovea.

  8. Effective wavefield extrapolation in anisotropic media: Accounting for resolvable anisotropy

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-04-30

    Spectral methods provide artefact-free and generally dispersion-free wavefield extrapolation in anisotropic media. Their apparent weakness is in accessing the medium-inhomogeneity information in an efficient manner. This is usually handled through a velocity-weighted summation (interpolation) of representative constant-velocity extrapolated wavefields, with the number of these extrapolations controlled by the effective rank of the original mixed-domain operator or, more specifically, by the complexity of the velocity model. Conversely, with pseudo-spectral methods, because only the space derivatives are handled in the wavenumber domain, we obtain relatively efficient access to the inhomogeneity in isotropic media, but we often resort to weak approximations to handle the anisotropy efficiently. Utilizing perturbation theory, I isolate the contribution of anisotropy to the wavefield extrapolation process. This allows us to factorize as much of the inhomogeneity in the anisotropic parameters as possible out of the spectral implementation, yielding effectively a pseudo-spectral formulation. This is particularly true if the inhomogeneity of the dimensionless anisotropic parameters are mild compared with the velocity (i.e., factorized anisotropic media). I improve on the accuracy by using the Shanks transformation to incorporate a denominator in the expansion that predicts the higher-order omitted terms; thus, we deal with fewer terms for a high level of accuracy. In fact, when we use this new separation-based implementation, the anisotropy correction to the extrapolation can be applied separately as a residual operation, which provides a tool for anisotropic parameter sensitivity analysis. The accuracy of the approximation is high, as demonstrated in a complex tilted transversely isotropic model. © 2014 European Association of Geoscientists & Engineers.

  10. Characterization of low energy X-rays beams with an extrapolation chamber

    International Nuclear Information System (INIS)

    Bastos, Fernanda Martins

    2015-01-01

    In laboratories involving Radiological Protection practices, it is usual to use reference radiations for calibrating dosimeters and to study their response in terms of energy dependence. The International Organization for Standardization (ISO) established four series of reference X-ray beams in the ISO 4037 standard: the L and H series, with low and high air kerma rates, respectively; the N series of narrow spectra; and the W series of wide spectra. X-ray beams with tube potential below 30 kV, called 'low energy beams', are in most cases critical as far as the determination of their characterization parameters, such as the half-value layer, is concerned. Extrapolation chambers are parallel-plate ionization chambers with one mobile electrode that allows variation of the air volume in their interior. These detectors are commonly used to measure the quantity absorbed dose, mostly at the surface of a medium, based on the extrapolation of the linear ionization current as a function of the distance between the electrodes. In this work, a characterization of a PTW model 23392 extrapolation chamber was done in the low energy X-ray beams of the ISO 4037 standard, by determining the polarization voltage range through saturation curves and the value of the true null electrode spacing. In addition, the metrological reliability of the extrapolation chamber was studied with measurements of the leakage current and repeatability tests; limit values were established for the proper use of the chamber. The PTW 23392 extrapolation chamber was calibrated in terms of air kerma in some of the ISO low energy radiation series; the traceability of the chamber to the National Standard Dosimeter was established. The study of the energy dependence of the extrapolation chamber and the assessment of the uncertainties related to the calibration coefficient were also done; it was shown that the energy dependence was reduced to 4% when the extrapolation technique was used. Finally, the first
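The extrapolation-chamber principle described in this record can be sketched with an invented data set: the ionization current is close to linear in the electrode spacing, so a least-squares line fitted to readings at several spacings yields the slope used in dose determination and, via the intercept, the offset that defines the true null electrode spacing. All numbers below are illustrative only.

```python
def linear_fit(xs, ys):
    """Closed-form least-squares fit y = a*x + b."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Invented readings: electrode spacing (mm) vs ionization current (pA).
spacings = [0.5, 1.0, 1.5, 2.0]
currents = [1.1, 2.1, 3.1, 4.1]   # ~2 pA/mm with a 0.1 pA offset
slope, intercept = linear_fit(spacings, currents)
null_spacing = -intercept / slope  # spacing at which current extrapolates to zero
print(slope, intercept, null_spacing)
```

A non-zero `null_spacing` indicates that the dial reading and the true plate separation differ by a constant offset, which is exactly the correction the chamber characterization establishes.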

  11. Processing radioactive effluents with ion-exchanging resins: study of result extrapolation; Traitement des effluents radioactifs par resines echangeuses d'ions: etude de l'extrapolation des resultats

    Energy Technology Data Exchange (ETDEWEB)

    Wormser, G.

    1960-05-03

    As a previous study showed that ion-exchange resins could be used at Saclay for the treatment of radioactive effluents, the author reports a study aimed at investigating to what extent results obtained on small laboratory columns of resin could be extrapolated to taller industrial columns. The experiments were designed to determine which extrapolation rules could be used for columns of organic resin employed for radioactive effluent decontamination, and notably whether the Hiester and Vermeulen extrapolation law, which gives good results for the fixation of radioactive ions on soils in the presence of a macro-component ion, could be applied. The limited number of experiments, performed at constant and at varying percolation flow rates, showed that the Hiester and Vermeulen extrapolation law applies to the effluent considered when the percolation flow rates are very low; when they are higher, the volumes of liquid percolated, at equal fixation, are proportional to the

  12. Design and construction of an interface system for the extrapolation chamber from the beta secondary standard

    International Nuclear Information System (INIS)

    Jimenez C, L.F.

    1995-01-01

    The Interface System for the Extrapolation Chamber (SICE) contains several devices handled by a personal computer (PC) and is able to acquire the data required to calculate the absorbed dose due to beta radiation. The main functions of the system are: a) measuring the ionization current or charge stored in the extrapolation chamber; b) adjusting the distance between the plates of the extrapolation chamber automatically; c) adjusting the bias voltage of the extrapolation chamber automatically; d) acquiring the temperature, atmospheric pressure and relative humidity of the environment, and the voltage applied between the plates of the extrapolation chamber; e) calculating the effective area of the plates of the extrapolation chamber and the real distance between them; f) storing all the obtained information on hard disk or diskette. A comparison between the desired distance and the distance on the dial of the extrapolation chamber shows that the resolution of the system is 20 μm. The voltage can be varied between -399.9 V and +399.9 V with an error of less than 3% and a resolution of 0.1 V. These uncertainties are within the limits accepted for use in the determination of the absolute absorbed dose due to beta radiation. (Author)

  13. Accelerating Monte Carlo Molecular Simulations Using Novel Extrapolation Schemes Combined with Fast Database Generation on Massively Parallel Machines

    KAUST Repository

    Amir, Sahar Z.

    2013-05-01

    We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains is implemented and compared with the previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional equation-of-state approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L

  14. Ecotoxicological effects extrapolation models

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II

    1996-09-01

    One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.

  15. Electric form factors of the octet baryons from lattice QCD and chiral extrapolation

    International Nuclear Information System (INIS)

    Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M.; Pleiter, D.; Stueben, H.

    2014-03-01

    We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q^2 in the range 0.2-1.3 GeV^2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μ_p G_E^p/G_M^p. This quantity decreases with Q^2 in a way qualitatively consistent with recent experimental results.

  16. Electric form factors of the octet baryons from lattice QCD and chiral extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M. [Adelaide Univ., SA (Australia). ARC Centre of Excellence in Particle Physics at the Terascale and CSSM; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe, Hyogo (Japan); Pleiter, D. [Forschungszentrum Juelich (Germany). JSC; Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Div.; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Stueben, H. [Hamburg Univ. (Germany). Regionales Rechenzentrum; Collaboration: CSSM and QCDSF/UKQCD Collaborations

    2014-03-15

    We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q^2 in the range 0.2-1.3 GeV^2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μ_p G_E^p/G_M^p. This quantity decreases with Q^2 in a way qualitatively consistent with recent experimental results.

  17. Extrapolation methods theory and practice

    CERN Document Server

    Brezinski, C

    1991-01-01

    This volume is a self-contained, exhaustive exposition of the extrapolation methods theory, and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of the use of extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a consequent chapter on applications are also provided - including some never before published results and applicat
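One of the classical scalar-sequence accelerators covered in this literature is Aitken's delta-squared process. The sketch below applies it to the partial sums of the alternating harmonic series (limit ln 2), purely as an illustration of how extrapolation methods accelerate convergence.

```python
import math

def aitken(seq):
    """Aitken delta-squared acceleration of a scalar sequence."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2.0 * s1 + s0
        out.append(s2 - (s2 - s1) ** 2 / denom)
    return out

# Partial sums of 1 - 1/2 + 1/3 - ..., which converge slowly to ln 2.
partial = []
total = 0.0
for k in range(1, 12):
    total += (-1) ** (k + 1) / k
    partial.append(total)

accelerated = aitken(partial)
print(abs(partial[-1] - math.log(2)))      # slow: error ≈ 0.04
print(abs(accelerated[-1] - math.log(2)))  # accelerated: far smaller error
```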

  18. A NEW CODE FOR NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF THE GLOBAL CORONA

    International Nuclear Information System (INIS)

    Jiang Chaowei; Feng Xueshang; Xiang Changqing

    2012-01-01

    Reliable measurements of the solar magnetic field are still restricted to the photosphere, and our present knowledge of the three-dimensional coronal magnetic field is largely based on extrapolations from photospheric magnetograms using physical models, e.g., the nonlinear force-free field (NLFFF) model that is usually adopted. Most of the currently available NLFFF codes have been developed for computational volumes such as a Cartesian box or a spherical wedge, while a global full-sphere extrapolation is still under development. A high-performance global extrapolation code is particularly urgently needed considering that the Solar Dynamics Observatory can provide a full-disk magnetogram with resolution up to 4096 × 4096. In this work, we present a new parallelized code for global NLFFF extrapolation with the photospheric magnetogram as input. The method is based on the magnetohydrodynamics relaxation approach, the CESE-MHD numerical scheme, and a Yin-Yang spherical grid that is used to overcome the polar problems of the standard spherical grid. The code is validated against two full-sphere force-free solutions from Low and Lou's semi-analytic force-free field model. The code shows high accuracy and fast convergence, and can be ready for future practical application if combined with an adaptive mesh refinement technique.

  19. Extrapolation in the development of paediatric medicines: examples from approvals for biological treatments for paediatric chronic immune-mediated inflammatory diseases.

    Science.gov (United States)

    Stefanska, Anna M; Distlerová, Dorota; Musaus, Joachim; Olski, Thorsten M; Dunder, Kristina; Salmonson, Tomas; Mentzer, Dirk; Müller-Berghaus, Jan; Hemmings, Robert; Veselý, Richard

    2017-10-01

    The European Union (EU) Paediatric Regulation requires that all new medicinal products applying for a marketing authorisation (MA) in the EU provide a paediatric investigation plan (PIP) covering a clinical and non-clinical trial programme relating to the use in the paediatric population, unless a waiver applies. Conducting trials in children is challenging on many levels, including ethical and practical issues, which may affect the availability of the clinical evidence. In scientifically justified cases, extrapolation of data from other populations can be an option to gather evidence supporting the benefit-risk assessment of the medicinal product for paediatric use. The European Medicines Agency (EMA) is working on providing a framework for extrapolation that is scientifically valid, reliable and adequate to support MA of medicines for children. It is expected that the extrapolation framework together with therapeutic area guidelines and individual case studies will support future PIPs. Extrapolation has already been employed in several paediatric development programmes including biological treatment for immune-mediated diseases. This article reviews extrapolation strategies from MA applications for products for the treatment of juvenile idiopathic arthritis, paediatric psoriasis and paediatric inflammatory bowel disease. It also provides a summary of extrapolation advice expressed in relevant EMA guidelines and initiatives supporting the use of alternative approaches in paediatric medicine development. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Efficient anisotropic wavefield extrapolation using effective isotropic models

    KAUST Repository

    Alkhalifah, Tariq Ali; Ma, X.; Waheed, Umair bin; Zuberi, Mohammad

    2013-01-01

    Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented

  1. Wavefield extrapolation in pseudo-depth domain

    KAUST Repository

    Ma, Xuxin; Alkhalifah, Tariq Ali

    2012-01-01

    Extrapolating seismic waves in Cartesian coordinates is prone to uneven spatial sampling, because the seismic wavelength tends to grow with depth as velocity increases. We transform the vertical depth axis to a pseudo one using a velocity-weighted mapping, which can effectively mitigate this wavelength variation. We derive acoustic wave equations in this new domain based on the direct transformation of the Laplacian derivatives, which admit solutions that are more accurate and stable than those derived from the kinematic transformation. The anisotropic versions of these equations allow us to isolate the vertical velocity influence and reduce its impact on modeling and imaging. The major benefit of extrapolating wavefields in pseudo-depth space is its near-uniform wavelength, as opposed to the normally dramatic change of wavelength with the conventional approach. Time wavefield extrapolation on a complex velocity model shows some of the features of this approach.

  2. Cosmogony as an extrapolation of magnetospheric research

    International Nuclear Information System (INIS)

    Alfven, H.

    1984-03-01

    A theory of the origin and evolution of the Solar System (Alfven and Arrhenius, 1975; 1976) which considered electromagnetic forces and plasma effects is revised in the light of new information supplied by space research. In situ measurements in the magnetospheres and solar wind have changed our views of basic properties of cosmic plasmas. These results can be extrapolated both outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of some cloud properties which are essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation makes it possible to approach the cosmogonic processes by extrapolation of (rather) well-known magnetospheric phenomena. Pioneer and Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it seems possible to reconstruct certain events 4-5 billion years ago with an accuracy of a few percent. This will cause a change in our views of the evolution of the solar system. (author)

  3. Extrapolation of Nitrogen Fertiliser Recommendation Zones for Maize in Kisii District Using Geographical Information Systems

    International Nuclear Information System (INIS)

    Okoth, P.F.; Wamae, D.K.

    1999-01-01

    A GIS database was established for fertiliser recommendation domains in Kisii District by using FURP fertiliser trial results, KSS soils data and MDBP climatic data. These were manipulated in ESRI's (Environmental Systems Research Institute) PC ARC/INFO and ARCVIEW software. The extrapolations were only done for the long-rains season (March-August), with three to four years of data. GIS technology was used to cluster fertiliser recommendation domains as geographical areas expressed in terms of variation over space, not limited to the site of experiment where a certain agronomic or economic fertiliser recommendation was made. The extrapolation over space was found to be more representative for any recommendation, the result being digital maps describing each area in geographical space. From the results of the extrapolations, approximately 38,255 ha of the district require zero nitrogen (N) fertilisation while 94,330 ha require 75 kg ha^-1 nitrogen fertilisation during the (March-August) long rains. The extrapolation was made difficult since no direct relationship could be established between the available N, % carbon (C) or any of the other soil properties and the obtained yields. Decision rules were however developed based on % C, which was the soil variable with values closest to the obtained yields; 3% organic carbon was found to be the boundary between zero application and 75 kg-N application. GIS techniques made it possible to model and extrapolate the results using the available data. The extrapolations still need to be verified with more ground data from fertiliser trials. Data gaps in the soil map left some soil mapping units with no recommendations. Elevation was observed to influence yields and it should be included in future extrapolations by clustering digital elevation models with rainfall data in a spatial model at the district scale.
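The %C decision rule mentioned above can be sketched as a one-line classifier. The abstract states only that 3% organic carbon separates the zero and 75 kg-N domains; the direction assumed here (high-carbon soils need no nitrogen) is a plausible reading, labelled as an assumption, not a statement from the study.

```python
def nitrogen_recommendation(percent_carbon, boundary=3.0):
    """Classify a soil mapping unit by the %C decision boundary.

    Assumption (not explicit in the abstract): soils at or above the
    3% organic-carbon boundary are fertile enough to need no N;
    soils below it receive 75 kg ha^-1.
    """
    return 0 if percent_carbon >= boundary else 75

print(nitrogen_recommendation(4.2))  # 0
print(nitrogen_recommendation(1.8))  # 75
```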

  4. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    International Nuclear Information System (INIS)

    Jeffs, S.P.; Lancaster, R.J.; Garcia, T.E.

    2015-01-01

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman-Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep-dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has proven to be an effective tool across several material systems. The current work explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling in ABAQUS, based on the uniaxial creep data, has also been implemented to characterise the SP deformation and help corroborate the experimental results.
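The Wilshire-equation form referred to above relates normalised stress to temperature-compensated rupture time, sigma/sigma_TS = exp(-k1 * (t_f * exp(-Q*/RT))**u), and can be inverted to predict rupture life. The sketch below does exactly that, with all material constants (k1, u, Q*) invented for illustration rather than fitted to any alloy in the study.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def wilshire_rupture_time(stress, sigma_ts, k1, u, q_act, temp_k):
    """Invert sigma/sigma_TS = exp(-k1 * (t_f * exp(-Q*/RT))**u) for t_f."""
    theta = (-math.log(stress / sigma_ts) / k1) ** (1.0 / u)
    return theta * math.exp(q_act / (R * temp_k))

# Invented constants for a notional superalloy at 850 C (1123 K).
k1, u, q_act = 50.0, 0.15, 300e3
t_low = wilshire_rupture_time(400e6, 1000e6, k1, u, q_act, 1123.0)
t_high = wilshire_rupture_time(600e6, 1000e6, k1, u, q_act, 1123.0)
print(t_low > t_high)  # lower stress gives a longer predicted life
```

The appeal in the lifing context is that k1 and u are fitted to short-term data, after which the same closed form extrapolates to long-term stresses.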

  5. Problems in the extrapolation of laboratory rheological data

    Science.gov (United States)

    Paterson, M. S.

    1987-02-01

    The many types of variables and deformation regimes that need to be taken into account in extrapolating rheological behaviour from the laboratory to the Earth are reviewed. The problems of extrapolation are then illustrated with two particular cases. In the case of olivine-rich rocks, recent experimental work indicates that, within present uncertainties of extrapolation, the flow in the upper mantle could be either grain-size dependent and near-Newtonian or grain-size independent and distinctly non-Newtonian. Both types of behaviour would be influenced by the presence of trace amounts of water. In the case of quartz-rich rocks, the uncertainties are even greater and it is still premature to attempt any extrapolation to geological conditions except as an upper bound; the fugacity and the scale of dispersion of the water are probably two important variables, but the quantitative laws governing their influence are not yet clear.

  6. Guided wave tomography in anisotropic media using recursive extrapolation operators

    Science.gov (United States)

    Volker, Arno

    2018-04-01

    Guided wave tomography is an advanced technology for quantitative wall thickness mapping to image wall loss due to corrosion or erosion. An inversion approach is used to match the measured phase (time) at a specific frequency to a model. The accuracy of the model determines the sizing accuracy. Particularly for seam welded pipes there is a measurable amount of anisotropy. Moreover, for small defects a ray-tracing based modelling approach is no longer accurate. Both issues are solved by applying a recursive wave field extrapolation operator assuming vertical transverse anisotropy. The inversion scheme is extended by not only estimating the wall loss profile but also the anisotropy, local material changes and transducer ring alignment errors. This makes the approach more robust. The approach will be demonstrated experimentally on different defect sizes, and a comparison will be made between this new approach and an isotropic ray-tracing approach. An example is given in Fig. 1 for a 75 mm wide, 5 mm deep defect. The wave field extrapolation based tomography clearly provides superior results.

  7. 40 CFR 86.435-78 - Extrapolated emission values.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.435-78 Extrapolated emission values...

  8. The optimized expansion method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2013-01-01

    Spectral methods are fast becoming an indispensable tool for wave-field extrapolation, especially in anisotropic media, because of their dispersion-free, artifact-free and highly accurate solutions of the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain operator. In this abstract, we propose an optimized expansion method that can approximate this operator with its low-rank representation. The rank defines the number of inverse FFTs required per time extrapolation step, and thus a lower rank admits faster extrapolations. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its low-rank representation. Thus, we obtain more accurate wavefields using a lower-rank representation, and thus cheaper extrapolations. The optimization operation to define the low-rank representation depends only on the velocity model; it is done only once, and is valid for a full reverse time migration (many shots) or one iteration of full waveform inversion. Applications on the BP model yielded superior results to those obtained using the decomposition approach. For transversely isotropic media, the solutions were free of the shear-wave artifacts and do not require that eta > 0.
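The low-rank idea in this abstract can be shown on a toy example. The sketch below is not the authors' optimization scheme; it uses a plain truncated SVD on a hypothetical two-layer velocity model (all names and parameters invented) to show why a mixed space-wavenumber phase operator admits a low-rank separated form:

```python
import numpy as np

# Hypothetical 1-D setup: a two-layer velocity model v(x) and a wavenumber axis k.
nx, nk, dt = 128, 128, 0.002
x = np.linspace(0.0, 1.27, nx)
v = 1500.0 + 1000.0 * (x > 0.6)                  # velocity jumps at x = 0.6 (m/s)
k = 2.0 * np.pi * np.fft.fftfreq(nk, d=0.01)     # wavenumbers (rad/m)

# Full mixed space-wavenumber phase operator W[x, k] = exp(i v(x) |k| dt).
W = np.exp(1j * np.outer(v, np.abs(k)) * dt)

# Low-rank separated representation W ~ A @ B via truncated SVD; the rank sets
# the number of inverse FFTs needed per extrapolation step.
U, s, Vh = np.linalg.svd(W, full_matrices=False)
r = 2                                            # two distinct velocities -> rank 2 suffices
W_lr = (U[:, :r] * s[:r]) @ Vh[:r, :]

rel_err = np.linalg.norm(W - W_lr) / np.linalg.norm(W)
print(rel_err)                                   # essentially zero for this model
```

The abstract's method instead finds the separated representation by optimizing over selected velocities and wavenumbers, avoiding both the full matrix and its decomposition; the SVD above only illustrates why a small rank can capture the operator.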

  9. Linear extrapolation distance for a black cylindrical control rod with the pulsed neutron method

    International Nuclear Information System (INIS)

    Loewenhielm, G.

    1978-03-01

    The objective of this experiment was to measure the linear extrapolation distance for a central black cylindrical control rod in a cylindrical water moderator. The radius of both the control rod and the moderator was varied. The pulsed neutron technique was used and the decay constant was measured for both a homogeneous and a heterogeneous system. From the difference in the decay constants the extrapolation distance could be calculated. The conclusion is that within experimental error it is safe to use the approximate formula given by Pellaud or the more exact one given by Kavenoky. We can also conclude that linear anisotropic scattering is accounted for in a correct way in the approximate formulas given by Pellaud and by Prinja and Williams.

  10. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

    Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create strong biasing in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing), and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled counts per second (CPS), based on backward extrapolation of the losses, created by increasingly growing artificially imposed dead time on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1–2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
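The backward-extrapolation idea can be sketched on synthetic data. Assuming an ideal Poisson event stream and a non-paralyzing dead-time model (simplifications the paper's model-free method does not require), imposing growing artificial dead times and fitting the inverse count rate linearly in tau recovers the dead-time-free rate at tau = 0:

```python
import numpy as np

rng = np.random.default_rng(0)
true_rate = 5.0e4                                # assumed true count rate (CPS)
t = np.cumsum(rng.exponential(1.0 / true_rate, size=200000))  # ideal Poisson arrivals

def apply_dead_time(times, tau):
    """Impose a non-paralyzing dead time: drop events within tau of the last kept one."""
    kept = [times[0]]
    for ti in times[1:]:
        if ti - kept[-1] >= tau:
            kept.append(ti)
    return np.asarray(kept)

# Impose increasingly large artificial dead times and record the observed CPS.
taus = np.array([2e-6, 3e-6, 4e-6, 5e-6, 6e-6])
duration = t[-1] - t[0]
cps = np.array([apply_dead_time(t, tau).size / duration for tau in taus])

# For this model the measured rate m obeys 1/m = 1/n + tau, so a straight-line
# fit of 1/m against tau, extrapolated back to tau = 0, recovers the true rate n.
slope, intercept = np.polyfit(taus, 1.0 / cps, 1)
corrected_rate = 1.0 / intercept
print(corrected_rate)                            # close to true_rate
```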

  11. Ground-state inversion method applied to calculation of molecular photoionization cross-sections by atomic extrapolation: Interference effects at low energies

    International Nuclear Information System (INIS)

    Hilton, P.R.; Nordholm, S.; Hush, N.S.

    1980-01-01

    The ground-state inversion method, which we have previously developed for the calculation of atomic cross-sections, is applied to the calculation of molecular photoionization cross-sections. These are obtained as a weighted sum of atomic subshell cross-sections plus multi-centre interference terms. The atomic cross-sections are calculated directly for the atomic functions which when summed over centre and symmetry yield the molecular orbital wave function. The use of the ground-state inversion method for this allows the effect of the molecular environment on the atomic cross-sections to be calculated. Multi-centre terms are estimated on the basis of an effective plane-wave expression for this contribution to the total cross-section. Finally the method is applied to the range of photon energies from 0 to 44 eV where atomic extrapolation procedures have not previously been tested. Results obtained for H2, N2 and CO show good agreement with experiment, particularly when interference effects and effects of the molecular environment on the atomic cross-sections are included. The accuracy is very much better than that of previous plane-wave and orthogonalized plane-wave methods, and can stand comparison with that of recent more sophisticated approaches. It is a feature of the method that calculation of cross-sections either of atoms or of large molecules requires very little computer time, provided that good quality wave functions are available, and it is then of considerable potential practical interest for photoelectron spectroscopy. (orig.)

  12. Resolution enhancement in digital holography by self-extrapolation of holograms.

    Science.gov (United States)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2013-03-25

    It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.
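A hedged 1-D analogue of this self-extrapolation: a band-limited signal known only on a central window is extended by alternating between the band-limit constraint and the measured samples (a Gerchberg-Papoulis-style iteration; the actual method pads and iteratively reconstructs 2-D holograms):

```python
import numpy as np

# Build a band-limited "record" and pretend only a central window was captured.
rng = np.random.default_rng(1)
n = 256
k = np.fft.fftfreq(n)
band = np.abs(k) <= 0.1                          # Fourier support of the signal
spec = np.zeros(n, dtype=complex)
spec[band] = rng.standard_normal(band.sum()) + 1j * rng.standard_normal(band.sum())
signal = np.fft.ifft(spec)                       # ground truth

window = np.zeros(n, dtype=bool)
window[64:192] = True                            # the "captured" region

estimate = np.where(window, signal, 0.0)         # padded start: zeros outside
err0 = np.linalg.norm(estimate - signal) / np.linalg.norm(signal)

for _ in range(200):                             # alternate the two constraints
    s = np.fft.fft(estimate)
    s[~band] = 0.0                               # enforce the band limit
    estimate = np.fft.ifft(s)
    estimate[window] = signal[window]            # re-impose the measured samples

err = np.linalg.norm(estimate - signal) / np.linalg.norm(signal)
print(err0, err)                                 # the error outside the window shrinks
```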

  13. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry and in particular nuclear where large static loads are applied, equally a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work now explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  14. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    KAUST Repository

    Zhang, Zhendong

    2017-12-17

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyze the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artifacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration (RTM) applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modeling engine performs better than an isotropic migration.

  15. Efficient and stable extrapolation of prestack wavefields

    KAUST Repository

    Wu, Zedong

    2013-09-22

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers and the image point, or in other words, prestack wavefields. Extrapolating such wavefields in time, nevertheless, is a big challenge because the radicand can be negative, reducing to a complex phase velocity, which makes the rank of the mixed-domain matrix very high. Using the vertical offset between the sources and receivers, we introduce a method for deriving the DSR formulation, which gives us the opportunity to derive approximations for the mixed-domain operator. The method extrapolates prestack wavefields by combining all data into one wave extrapolation procedure, allowing both upgoing and downgoing wavefields since the extrapolation is done in time, and doesn’t have the v(z) assumption in the offset axis of the media. Thus, the imaging condition is imposed by taking the zero-time and zero-offset slice from the multi-dimensional prestack wavefield. Unlike reverse time migration (RTM), no crosscorrelation is needed and we also have access to the subsurface offset information, which is important for migration velocity analysis. Numerical examples show the capability of this approach in dealing with complex velocity models and can provide a better quality image compared to RTM more efficiently.

  16. Efficient and stable extrapolation of prestack wavefields

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2013-01-01

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers and the image point, or in other words, prestack wavefields. Extrapolating such wavefields in time, nevertheless, is a big challenge because the radicand can be negative, reducing to a complex phase velocity, which makes the rank of the mixed-domain matrix very high. Using the vertical offset between the sources and receivers, we introduce a method for deriving the DSR formulation, which gives us the opportunity to derive approximations for the mixed-domain operator. The method extrapolates prestack wavefields by combining all data into one wave extrapolation procedure, allowing both upgoing and downgoing wavefields since the extrapolation is done in time, and doesn’t have the v(z) assumption in the offset axis of the media. Thus, the imaging condition is imposed by taking the zero-time and zero-offset slice from the multi-dimensional prestack wavefield. Unlike reverse time migration (RTM), no crosscorrelation is needed and we also have access to the subsurface offset information, which is important for migration velocity analysis. Numerical examples show the capability of this approach in dealing with complex velocity models and can provide a better quality image compared to RTM more efficiently.

  17. Measurement of absorbed dose with a bone-equivalent extrapolation chamber

    International Nuclear Information System (INIS)

    DeBlois, Francois; Abdel-Rahman, Wamied; Seuntjens, Jan P.; Podgorsak, Ervin B.

    2002-01-01

    A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water™ and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to ∼2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water™ PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.

  18. One-step lowrank wave extrapolation

    KAUST Repository

    Sindi, Ghada Atif; Alkhalifah, Tariq Ali

    2014-01-01

    Wavefield extrapolation is at the heart of modeling, imaging, and Full waveform inversion. Spectral methods gained well deserved attention due to their dispersion free solutions and their natural handling of anisotropic media. We propose a scheme a

  19. Extrapolated stabilized explicit Runge-Kutta methods

    Science.gov (United States)

    Martín-Vaquero, J.; Kleefeld, B.

    2016-12-01

    Extrapolated Stabilized Explicit Runge-Kutta methods (ESERK) are proposed to solve multi-dimensional nonlinear partial differential equations (PDEs). In such methods it is necessary to evaluate the function nt times per step, but the stability region is O(nt^2). Hence, the computational cost is O(nt) times lower than for a traditional explicit algorithm. In that way stiff problems can be integrated by the use of simple explicit evaluations, in cases where implicit methods usually had to be used. Therefore, they are especially well-suited for the method of lines (MOL) discretizations of parabolic nonlinear multi-dimensional PDEs. In this work, first, s-stage first-order methods with extended stability along the negative real axis are obtained. They have slightly shorter stability regions than other traditional first-order stabilized explicit Runge-Kutta algorithms (also called Runge-Kutta-Chebyshev codes). Later, they are used to derive nt-stage second- and fourth-order schemes using Richardson extrapolation. The stability regions of these fourth-order codes include the interval [-0.01 nt^2, 0] (nt being the total number of function evaluations), which are shorter than the stability regions of ROCK4 methods, for example. However, the new algorithms neither suffer from propagation of errors (as other Runge-Kutta-Chebyshev codes such as ROCK4 or DUMKA) nor internal instabilities. Additionally, many other types of higher-order (and also lower-order) methods can be obtained easily in a similar way. These methods also allow adaptation of the length of the step with no extra cost. Hence, the stability domain is adapted precisely to the spectrum of the problem at the current time of integration in an optimal way, i.e., with a minimal number of additional stages. We compare the new techniques with other well-known algorithms, with good results in very stiff diffusion or reaction-diffusion multi-dimensional nonlinear equations.
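The Richardson-extrapolation step used to raise the order can be shown in miniature. This is not an ESERK scheme, just first-order explicit Euler on the test problem y' = -y promoted to second order by combining step sizes h and h/2:

```python
import numpy as np

def euler(f, y0, t1, n):
    """Explicit Euler with n steps on [0, t1]; first-order accurate."""
    y, h = y0, t1 / n
    for _ in range(n):
        y = y + h * f(y)
    return y

f = lambda y: -y                    # test problem y' = -y, y(0) = 1
exact = np.exp(-1.0)

coarse = euler(f, 1.0, 1.0, 50)     # step h
fine = euler(f, 1.0, 1.0, 100)      # step h/2
extrap = 2.0 * fine - coarse        # Richardson: cancels the O(h) error term

print(abs(fine - exact), abs(extrap - exact))
```

Halving h only halves the Euler error, while the extrapolated combination removes the leading error term entirely, which is exactly how higher-order ESERK schemes are built from lower-order stabilized ones.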

  20. A method of creep rupture data extrapolation based on physical processes

    International Nuclear Information System (INIS)

    Leinster, M.G.

    2008-01-01

    There is a need for a reliable method to extrapolate generic creep rupture data to failure times in excess of the currently published times. A method based on well-understood and mathematically described physical processes is likely to be stable and reliable. Creep process descriptions have been developed based on accepted theory, to the extent that good fits with published data have been obtained. Methods have been developed to apply these descriptions to extrapolate creep rupture data to stresses below the published values. The relationship creep life parameter = f(ln(sinh(stress))) has been shown to be justifiable over the stress ranges of most interest, and gives realistic results at high temperatures and long times to failure. In the interests of continuity with past and present practice, the suggested method is intended to extend existing polynomial descriptions of life parameters at low stress. Where no polynomials exist, the method can be used to describe the behaviour of life parameters throughout the full range of a particular failure mode in the published data.
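As a hypothetical illustration of the suggested functional form, synthetic rupture data can be fitted with a low-order polynomial in ln(sinh(stress)) and then extrapolated below the lowest measured stress (all numbers below are invented):

```python
import numpy as np

# Invented rupture data: applied stress (MPa) and a creep life parameter
# (e.g. a Larson-Miller-style quantity, in arbitrary units).
stress = np.array([250.0, 200.0, 160.0, 130.0, 100.0])
life_param = np.array([20.1, 21.0, 21.8, 22.5, 23.4])

# Fit the life parameter as a low-order polynomial in ln(sinh(stress)),
# normalising stress by an assumed 100 MPa reference.
x = np.log(np.sinh(stress / 100.0))
coeffs = np.polyfit(x, life_param, 2)

# Extrapolate below the lowest measured stress:
x_new = np.log(np.sinh(80.0 / 100.0))
predicted = np.polyval(coeffs, x_new)
print(predicted)                     # larger than any measured life parameter
```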

  1. A Method for Extrapolation of Atmospheric Soundings

    Science.gov (United States)

    2014-05-01

    case are not shown here. We also briefly examined data for the Anchorage, AK (PANC), radiosonde site for the case of the inversion height equal to... or greater than the extrapolation depth (i.e., h_inv ≥ h_ext). PANC lies at the end of a broad inlet extending northward from the Gulf of Alaska at... type of terrain can affect the model and in turn affect the extrapolation. We examined a sounding from PANC (61.16 N and –150.01 W, elevation of 40

  2. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
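The flavour of such sequence-extrapolation algorithms can be shown with a Neville-style polynomial table that extrapolates estimates f(h) to h -> 0 (the Bulirsch-Stoer algorithm uses rational functions instead, but the table structure is the same):

```python
import numpy as np

def extrapolate(h, f):
    """Neville-style polynomial extrapolation of estimates f(h) to h -> 0."""
    h = np.asarray(h, dtype=float)
    t = list(np.asarray(f, dtype=float))
    n = len(t)
    for m in range(1, n):
        for i in range(n - m):
            # T[i, m] combines T[i, m-1] (in t[i]) and T[i+1, m-1] (in t[i+1])
            t[i] = t[i + 1] + (t[i + 1] - t[i]) * h[i + m] / (h[i] - h[i + m])
    return t[0]

# Example: pi from perimeters of inscribed regular n-gons, whose error is a
# power series in h = 1/n^2.
ns = np.array([4, 8, 16, 32, 64])
h = 1.0 / ns.astype(float) ** 2
f = ns * np.sin(np.pi / ns)
limit = extrapolate(h, f)
print(limit - np.pi, f[-1] - np.pi)   # extrapolated error is far smaller
```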

  3. A nowcasting technique based on application of the particle filter blending algorithm

    Science.gov (United States)

    Chen, Yuanzhao; Lan, Hongping; Chen, Xunlai; Zhang, Wenhai

    2017-10-01

    To improve the accuracy of nowcasting, a new extrapolation technique called particle filter blending was configured in this study and applied to experimental nowcasting. Radar echo extrapolation was performed by using the radar mosaic at an altitude of 2.5 km obtained from the radar images of 12 S-band radars in Guangdong Province, China. First, a bilateral filter was applied for quality control of the radar data; an optical flow method based on the Lucas-Kanade algorithm and the Harris corner detection algorithm was used to track radar echoes and retrieve the echo motion vectors; then, the motion vectors were blended with the particle filter blending algorithm to estimate the optimal motion vector of the true echo motions; finally, semi-Lagrangian extrapolation was used for radar echo extrapolation based on the obtained motion vector field. A comparative study of the extrapolated forecasts of four precipitation events in 2016 in Guangdong was conducted. The results indicate that the particle filter blending algorithm could realistically reproduce the spatial pattern, echo intensity, and echo location at 30- and 60-min forecast lead times. The forecasts agreed well with observations, and the results were of operational significance. Quantitative evaluation of the forecasts indicates that the particle filter blending algorithm performed better than the cross-correlation method and the optical flow method. Therefore, the particle filter blending method is proved to be superior to the traditional forecasting methods and it can be used to enhance the ability of nowcasting in operational weather forecasts.
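The final extrapolation step can be sketched in isolation. The fragment below covers only the semi-Lagrangian advection of an echo field by a (here constant, assumed) motion-vector field; the quality control, optical-flow tracking and particle filter blending stages are beyond a short example:

```python
import numpy as np

def semi_lagrangian(field, u, v, steps=1):
    """Advect a 2-D field by (u, v) grid cells per step via backward tracing."""
    ny, nx = field.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    out = field
    for _ in range(steps):
        ys = np.clip(yy - v, 0, ny - 1)          # trace each point backwards
        xs = np.clip(xx - u, 0, nx - 1)
        y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
        y1, x1 = np.minimum(y0 + 1, ny - 1), np.minimum(x0 + 1, nx - 1)
        wy, wx = ys - y0, xs - x0                # bilinear interpolation weights
        out = ((1 - wy) * (1 - wx) * out[y0, x0] + (1 - wy) * wx * out[y0, x1]
               + wy * (1 - wx) * out[y1, x0] + wy * wx * out[y1, x1])
    return out

echo = np.zeros((50, 50))
echo[20:25, 10:15] = 40.0                        # a synthetic 40 dBZ cell
forecast = semi_lagrangian(echo, u=2.0, v=1.0, steps=5)
print(np.unravel_index(forecast.argmax(), forecast.shape))
```

After five steps with motion (2, 1) cells per step, the cell appears displaced by (10, 5) cells, which is the essence of echo extrapolation once a motion field has been estimated.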

  4. Load Extrapolation During Operation for Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2008-01-01

    In recent years load extrapolation for wind turbines has been widely considered in the wind turbine industry. Loads on wind turbines during operation are normally dependent on the mean wind speed, the turbulence intensity and the type and settings of the control system. All these parameters must be taken into account when characteristic load effects during operation are determined. In the wind turbine standard IEC 61400-1 a method for load extrapolation using the peak over threshold method is recommended. In this paper this method is considered and some of the assumptions are examined.
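A schematic peak-over-threshold sketch (not the IEC 61400-1 procedure itself): excesses over a high threshold are fitted with an exponential tail, a special case of the generalized Pareto family, and the fit is extrapolated to a rare load level (the data and threshold choice below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
loads = rng.gumbel(loc=10.0, scale=1.0, size=100000)   # synthetic 10-min load maxima

u = np.quantile(loads, 0.98)              # threshold at the top 2% of loads
excesses = loads[loads > u] - u
scale = excesses.mean()                   # MLE of an exponential excess model
rate = excesses.size / loads.size         # exceedance probability per sample

# Load level exceeded with probability p per sample, from the fitted tail:
p = 1.0e-5
characteristic = u + scale * np.log(rate / p)
print(characteristic)
```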

  5. Residual extrapolation operators for efficient wavefield construction

    KAUST Repository

    Alkhalifah, Tariq Ali

    2013-02-27

    Solving the wave equation using finite-difference approximations allows for fast extrapolation of the wavefield for modelling, imaging and inversion in complex media. It, however, suffers from dispersion and stability-related limitations that might hamper its efficient or proper application to high frequencies. Spectral-based time extrapolation methods tend to mitigate these problems, but at an additional cost to the extrapolation. I investigate the prospect of using a residual formulation of the spectral approach, along with Shanks-transform-based expansions that adhere to the residual requirements, to improve accuracy and reduce the cost. Utilizing the fact that spectral methods excel (time steps are allowed to be large) in homogeneous and smooth media, the residual implementation based on velocity perturbation optimizes the use of this feature. Most of the other implementations based on the spectral approach are focussed on reducing cost by reducing the number of inverse Fourier transforms required in every step of the spectral-based implementation. The approach here fixes that by improving the accuracy of each, potentially longer, time step.
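The Shanks transform mentioned here can be illustrated on its own. Applied twice to the slowly convergent alternating series for ln 2, it sharpens the estimate of the limit:

```python
import numpy as np

def shanks(a):
    """One pass of the Shanks transform of a sequence of partial sums."""
    a = np.asarray(a, dtype=float)
    num = a[2:] * a[:-2] - a[1:-1] ** 2
    den = a[2:] - 2.0 * a[1:-1] + a[:-2]
    return num / den

n = np.arange(1, 12)
partial = np.cumsum((-1.0) ** (n + 1) / n)   # partial sums of 1 - 1/2 + 1/3 - ...
once = shanks(partial)                       # each pass accelerates convergence
twice = shanks(once)
print(abs(partial[-1] - np.log(2.0)), abs(twice[-1] - np.log(2.0)))
```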

  6. Flow Forecasting in Drainage Systems with Extrapolated Radar Rainfall Data and Auto Calibration on Flow Observations

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Grum, M.; Rasmussen, Michael R.

    2011-01-01

    Forecasting of flows, overflow volumes, water levels, etc. in drainage systems can be applied in real time control of drainage systems in the future climate in order to fully utilize system capacity and thus save possible construction costs. An online system for forecasting flows and water levels in a small urban catchment has been developed. The forecast is based on application of radar rainfall data, which, by a correlation based technique, is extrapolated with a lead time of up to two hours. The runoff forecast in the drainage system is based on a fully distributed MOUSE model which is auto-calibrated on flow measurements in order to produce the best possible forecast for the drainage system at all times. The system shows great potential for the implementation of real time control in drainage systems and forecasting flows and water levels.

  7. Chemical vapor deposition: A technique for applying protective coatings

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, T.C. Sr.; Bowman, M.G.

    1979-01-01

    Chemical vapor deposition is discussed as a technique for applying coatings for materials protection in energy systems. The fundamentals of the process are emphasized in order to establish a basis for understanding the relative advantages and limitations of the technique. Several examples of the successful application of CVD coating are described. 31 refs., and 18 figs.

  8. An extrapolation scheme for solid-state NMR chemical shift calculations

    Science.gov (United States)

    Nakajima, Takahito

    2017-06-01

    Conventional quantum chemical and solid-state physics approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches but reduces their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values show only a small dependence on the low-level density functional theory calculation used in the extrapolation scheme. Thus, our approach is efficient because a rough low-level calculation suffices within the extrapolation scheme.
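The composite idea, correcting a cheap periodic calculation with a cluster-level difference, reduces to simple bookkeeping. The numbers below are invented purely to show the arithmetic, not results from the paper:

```python
def extrapolated_shielding(low_periodic, low_cluster, high_cluster):
    """sigma ~ sigma_low(solid) + [sigma_high(cluster) - sigma_low(cluster)]."""
    return low_periodic + (high_cluster - low_cluster)

# Made-up magnetic shielding constants (ppm), purely illustrative:
sigma = extrapolated_shielding(low_periodic=165.2,
                               low_cluster=170.8,
                               high_cluster=176.3)
print(sigma)   # 165.2 + 5.5 = 170.7 ppm
```

The low-level method's systematic error largely cancels in the bracketed difference, which is why the final estimate depends only weakly on the low-level choice.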

  9. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. Recovering distorted video is very important because video is used in applications such as video telephony, video conferencing, TV, DVD, internet video streaming and video games. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both are evaluated on video frames with manually introduced errors as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are compared with both error concealment algorithms. According to simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the Block Matching algorithm.
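PSNR, the first quality metric quoted above, is easy to compute directly (SSIM requires windowed statistics and is omitted from this short sketch):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit frames."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

frame = np.full((8, 8), 100, dtype=np.uint8)
corrupted = frame.copy()
corrupted[0, 0] = 150                       # one damaged pixel
print(round(psnr(frame, corrupted), 2))     # 32.21
```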

  10. Lowrank seismic-wave extrapolation on a staggered grid

    KAUST Repository

    Fang, Gang

    2014-05-01

    © 2014 Society of Exploration Geophysicists. We evaluated a new spectral method and a new finite-difference (FD) method for seismic-wave extrapolation in time. Using staggered temporal and spatial grids, we derived a wave-extrapolation operator using a lowrank decomposition for a first-order system of wave equations and designed the corresponding FD scheme. The proposed methods extend previously proposed lowrank and lowrank FD wave extrapolation methods from the cases of constant density to those of variable density. Dispersion analysis demonstrated that the proposed methods have high accuracy for a wide wavenumber range and significantly reduce the numerical dispersion. The method of manufactured solutions coupled with mesh refinement was used to verify each method and to compare numerical errors. Tests on 2D synthetic examples demonstrated that the proposed method is highly accurate and stable. The proposed methods can be used for seismic modeling or reverse-time migration.

  11. Lowrank seismic-wave extrapolation on a staggered grid

    KAUST Repository

    Fang, Gang; Fomel, Sergey; Du, Qizhen; Hu, Jingwei

    2014-01-01

    © 2014 Society of Exploration Geophysicists. We evaluated a new spectral method and a new finite-difference (FD) method for seismic-wave extrapolation in time. Using staggered temporal and spatial grids, we derived a wave-extrapolation operator using a lowrank decomposition for a first-order system of wave equations and designed the corresponding FD scheme. The proposed methods extend previously proposed lowrank and lowrank FD wave extrapolation methods from the cases of constant density to those of variable density. Dispersion analysis demonstrated that the proposed methods have high accuracy for a wide wavenumber range and significantly reduce the numerical dispersion. The method of manufactured solutions coupled with mesh refinement was used to verify each method and to compare numerical errors. Tests on 2D synthetic examples demonstrated that the proposed method is highly accurate and stable. The proposed methods can be used for seismic modeling or reverse-time migration.

  12. Establishing macroecological trait datasets: digitalization, extrapolation, and validation of diet preferences in terrestrial mammals worldwide

    DEFF Research Database (Denmark)

    Kissling, W. Daniel; Dalby, Lars; Fløjgaard, Camilla

    2014-01-01

    , the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals (“MammalDIET”). Diet information was digitized from two global...... species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally...... information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external validation showed that: (1) extrapolations were most reliable for primary food items; (2) several diet categories (“Animal”, “Mammal...

  13. Early counterpulse technique applied to vacuum interrupters

    International Nuclear Information System (INIS)

    Warren, R.W.

    1979-11-01

    Interruption of dc currents using counterpulse techniques is investigated with vacuum interrupters and a novel approach in which the counterpulse is applied before contact separation. Important increases have been achieved in this way in the maximum interruptible current as well as large reductions in contact erosion. The factors establishing these new limits are presented and ways are discussed to make further improvements

  14. Effective orthorhombic anisotropic models for wavefield extrapolation

    KAUST Repository

    Ibanez-Jacome, W.

    2014-07-18

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth-order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  15. Effective orthorhombic anisotropic models for wavefield extrapolation

    KAUST Repository

    Ibanez-Jacome, W.; Alkhalifah, Tariq Ali; Waheed, Umair bin

    2014-01-01

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth-order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  16. Comparison among creep rupture strength extrapolation methods with application to data for AISI 316 SS from Italy, France, U.K. and F.R.G

    International Nuclear Information System (INIS)

    Brunori, G.; Cappellato, S.; Vacchiano, S.; Guglielmi, F.

    1982-01-01

    Within Activity 3 ''Materials'' of the WGCS, the member states UK and FRG carried out a study of extrapolation methods for creep data, comparing the extrapolation methods in use in their countries by applying them to creep rupture strength data on AISI 316 SS obtained in the UK and FRG. The study was issued in April 1978 and distributed by the Community to all Activity 3 members. Italy, represented by NIRA S.p.A., has received from the European Community a contract to extend the work to Italian and French data, using the extrapolation methods currently in use in Italy. The work covers the following points: - Collection of Italian experimental data; - Chemical analysis of Italian specimens; - Comparison of the Italian experimental data with the French, FRG and UK data; - Description of the extrapolation methods in use in Italy; - Application of these extrapolation methods to the Italian, French, British and German data; - Preparation of a final report

  17. Early counterpulse technique applied to vacuum interrupters

    International Nuclear Information System (INIS)

    Warren, R.W.

    1979-01-01

    Interruption of dc currents using counterpulse techniques is investigated with vacuum interrupters and a novel approach in which the counterpulse is applied before contact separation. Important increases have been achieved in this way in the maximum interruptible current and large reductions in contact erosion. The factors establishing these new limits are presented and ways are discussed to make further improvements to the maximum interruptible current

  18. Making the most of what we have: application of extrapolation approaches in radioecological wildlife transfer models

    International Nuclear Information System (INIS)

    Beresford, Nicholas A.; Wood, Michael D.; Vives i Batlle, Jordi; Yankovich, Tamara L.; Bradshaw, Clare; Willey, Neil

    2016-01-01

    We will never have data to populate all of the potential radioecological modelling parameters required for wildlife assessments. Therefore, we need robust extrapolation approaches which allow us to make best use of our available knowledge. This paper reviews and, in some cases, develops, tests and validates some of the suggested extrapolation approaches. The concentration ratio (CR_product-diet or CR_wo-diet) is shown to be a generic (trans-species) parameter which should enable the more abundant data for farm animals to be applied to wild species. An allometric model for predicting the biological half-life of radionuclides in vertebrates is further tested and generally shown to perform acceptably. However, to fully exploit allometry we need to understand why some elements do not scale to expected values. For aquatic ecosystems, the relationship between log_10(a) (a parameter from the allometric relationship for the organism-water concentration ratio) and log(K_d) presents a potential opportunity to estimate concentration ratios using K_d values. An alternative approach to the CR_wo-media model proposed for estimating the transfer of radionuclides to freshwater fish is used to satisfactorily predict activity concentrations in fish of different species from three lakes. We recommend that this approach (REML modelling) be further investigated and developed for other radionuclides and across a wider range of organisms and ecosystems. Ecological stoichiometry shows potential as an extrapolation method in radioecology, either from one element to another or from one species to another. Although some of the approaches considered require further development and testing, we demonstrate the potential to significantly improve predictions of radionuclide transfer to wildlife by making better use of available data. - Highlights: • Robust extrapolation approaches allowing best use of available knowledge are needed. • Extrapolation approaches are
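As one concrete instance of the allometric approach discussed above, a biological half-life can be modelled as T_1/2 = a * M^b and fitted by least squares in log-log space. The masses and coefficients below are made-up illustrative values, not data from the paper, and `fit_allometric` is a hypothetical helper name.

```python
import numpy as np

def fit_allometric(mass_kg, halflife_d):
    # least-squares fit of log(T) = log(a) + b * log(M)
    b, log_a = np.polyfit(np.log(mass_kg), np.log(halflife_d), 1)
    return np.exp(log_a), b

# hypothetical data generated exactly from T = 20 * M**0.25 (days)
mass = np.array([0.02, 0.2, 2.0, 20.0, 200.0])
t_half = 20.0 * mass ** 0.25
a, b = fit_allometric(mass, t_half)

# extrapolate to an unmeasured 5 kg animal
predicted = a * 5.0 ** b
```

On real data the fit would carry scatter, and (as the abstract notes) some elements deviate from the expected scaling, so the residuals themselves are diagnostic.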

  19. In situ LTE exposure of the general public: Characterization and extrapolation.

    Science.gov (United States)

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
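The extrapolation step can be sketched as follows. The factor sqrt(12 * N_RB) assumes equal power on every subcarrier (12 per resource block), which is one common way to bound a fully loaded downlink; the exact factor used in the paper may differ, and the numbers below are illustrative, chosen only to reproduce the order of magnitude quoted in the abstract.

```python
import math

def worst_case_lte_field(e_rs, n_rb):
    # Scale a measured per-resource-element reference-signal (RS) field to a
    # fully loaded, maximum-power downlink: powers add over the 12 * n_rb
    # subcarriers, so fields scale with the square root.
    return e_rs * math.sqrt(12 * n_rb)

e_rs = 0.078                                   # measured RS field (V/m), illustrative
e_max = worst_case_lte_field(e_rs, n_rb=50)    # 10 MHz LTE carrier: 50 resource blocks
icnirp = 61.0                                  # ICNIRP (1998) reference level above 2 GHz (V/m)
margin = icnirp / e_max                        # roughly the 32x margin quoted above
```

Because traffic load barely affects the RS and S-SYNC levels (under 1 dB per the abstract), a short measurement of those signals suffices for this worst-case bound.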

  20. The optimized expansion based low-rank method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2014-03-01

    Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion and artifact free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of the shear wave artifacts, and the algorithm does not require that n > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.
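The key premise, that the mixed space-wavenumber phase operator is numerically low rank, can be checked directly with an SVD. This is a toy 1D illustration using a plain matrix decomposition rather than the paper's optimized expansion; the velocity range and time step are arbitrary assumptions.

```python
import numpy as np

nx, nk, dt, dx = 200, 200, 0.002, 5.0
v = np.linspace(1500.0, 3500.0, nx)          # smoothly varying velocity (m/s)
k = np.linspace(0.0, np.pi / dx, nk)         # wavenumbers up to Nyquist
W = np.cos(np.outer(v, k) * dt)              # mixed-domain operator cos(v(x)|k|dt)

U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = int(np.sum(s > 1e-6 * s[0]))          # numerical rank at 1e-6 tolerance
W_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # separated low-rank representation
rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
```

The numerical rank comes out far below the grid size, which is what makes the "few inverse FFTs per time step" cost structure of low-rank extrapolation possible; the optimized expansion in the paper pushes that rank lower still.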

  1. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    Microgrids are expected to become part of the next electric power system evolution, not only in rural and remote areas but also in urban communities. Since microgrids are expected to coexist with traditional power grids (such as district heating does with traditional heating systems......), their planning process must address economic feasibility as a long-term stability guarantee. Planning a microgrid is a complex process due to existing alternatives, goals, constraints and uncertainties. Usually planning goals conflict with each other and, as a consequence, different optimization problems...... appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...

  2. Short-Term Forecasting of Urban Storm Water Runoff in Real-Time using Extrapolated Radar Rainfall Data

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2013-01-01

    Model based short-term forecasting of urban storm water runoff can be applied in real-time control of drainage systems in order to optimize system capacity during rain and minimize combined sewer overflows, improve wastewater treatment or activate alarms if local flooding is impending. A novel online system, which forecasts flows and water levels in real-time with inputs from extrapolated radar rainfall data, has been developed. The fully distributed urban drainage model includes auto-calibration using online in-sewer measurements, which is seen to improve forecast skills significantly. The radar rainfall extrapolation (nowcast) limits the lead time of the system to two hours. In this paper, the model set-up is tested on a small urban catchment for a period of 1.5 years. The 50 largest events are presented.

  3. A comparison of LOD and UT1-UTC forecasts by different combined prediction techniques

    Science.gov (United States)

    Kosek, W.; Kalarus, M.; Johnson, T. J.; Wooden, W. H.; McCarthy, D. D.; Popiński, W.

    Stochastic prediction techniques including autocovariance, autoregressive, autoregressive moving average, and neural networks were applied to the UT1-UTC and Length of Day (LOD) International Earth Rotation and Reference Systems Service (IERS) EOPC04 time series to evaluate the capabilities of each method. All known effects such as leap seconds and solid Earth zonal tides were first removed from the observed values of UT1-UTC and LOD. Two combination procedures were applied to predict the resulting LODR time series: 1) the combination of the least-squares (LS) extrapolation with a stochastic prediction method, and 2) the combination of the discrete wavelet transform (DWT) filtering and a stochastic prediction method. The results of the combination of the LS extrapolation with different stochastic prediction techniques were compared with the results of the UT1-UTC prediction method currently used by the IERS Rapid Service/Prediction Centre (RS/PC). It was found that the prediction accuracy depends on the starting prediction epochs, and for the combined forecast methods, the mean prediction errors for 1 to about 70 days in the future are of the same order as those of the method used by the IERS RS/PC.
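A minimal sketch of combination 1) above: a least-squares extrapolation of a trend-plus-annual-harmonic model, combined with an AR(1) forecast of the residuals. The paper's stochastic components are richer (higher-order AR, ARMA, neural networks), and the series below is synthetic, so this only illustrates the structure of the combination.

```python
import numpy as np

def ls_ar1_forecast(y, horizon, period=365.24):
    # LS part: fit intercept, linear trend, and one annual harmonic
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    # stochastic part: AR(1) coefficient from lag-1 autocorrelation of residuals
    phi = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])
    tf = np.arange(len(y), len(y) + horizon, dtype=float)
    Xf = np.column_stack([np.ones_like(tf), tf,
                          np.sin(2 * np.pi * tf / period),
                          np.cos(2 * np.pi * tf / period)])
    # combined forecast: deterministic extrapolation + decaying residual forecast
    return Xf @ coef + resid[-1] * phi ** np.arange(1, horizon + 1)

rng = np.random.default_rng(0)
t = np.arange(1500)
clean = 0.5 + 1e-3 * t + 0.4 * np.sin(2 * np.pi * t / 365.24)
noise = np.zeros(1500)
for i in range(1, 1500):                      # AR(1) noise, phi = 0.8
    noise[i] = 0.8 * noise[i - 1] + 0.05 * rng.standard_normal()

forecast = ls_ar1_forecast(clean + noise, horizon=70)
tf = t[-1] + 1 + np.arange(70)
truth = 0.5 + 1e-3 * tf + 0.4 * np.sin(2 * np.pi * tf / 365.24)
```

The 70-day horizon mirrors the prediction span evaluated in the abstract.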

  4. WE-A-17A-01: Absorbed Dose Rate-To-Water at the Surface of a Beta-Emitting Planar Ophthalmic Applicator with a Planar, Windowless Extrapolation Chamber

    Energy Technology Data Exchange (ETDEWEB)

    Riley, A [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States); Soares, C [NIST (Retired), Gaithersburg, MD (United States); Micka, J; Culberson, W [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States); DeWerd, L [University of Wisconsin-Madison / ADCL, Madison, WI (United States)

    2014-06-15

    Purpose: Currently there is no primary calibration standard for determining the absorbed dose rate-to-water at the surface of β-emitting concave ophthalmic applicators and plaques. Machining tolerances involved in the design of concave window extrapolation chambers are a limiting factor for development of such a standard. Use of a windowless extrapolation chamber avoids these window-machining tolerance issues. As a windowless extrapolation chamber has never been attempted, this work focuses on proof of principle measurements with a planar, windowless extrapolation chamber to verify the accuracy in comparison to initial calibration, which could be extended to the design of a hemispherical, windowless extrapolation chamber. Methods: The window of an extrapolation chamber defines the electrical field, aids in aligning the source parallel to the collector-guard assembly, and decreases the backscatter due to attenuation of lower electron energy. To create a uniform and parallel electric field in this research, the source was made common to the collector-guard assembly. A precise positioning protocol was designed to enhance the parallelism of the source and collector-guard assembly. Additionally, MCNP5 was used to determine a backscatter correction factor to apply to the calibration. With these issues addressed, the absorbed dose rate-to-water of a Tracerlab 90Sr planar ophthalmic applicator was determined using National Institute of Standards and Technology's (NIST) calibration formalism, and the results of five trials with this source were compared to measurements at NIST with a traditional extrapolation chamber. Results: The absorbed dose rate-to-water of the planar applicator was determined to be 0.473 Gy/s ±0.6%. Comparing these results to NIST's determination of 0.474 Gy/s yields a −0.6% difference. Conclusion: The feasibility of a planar, windowless extrapolation chamber has been demonstrated. A similar principle will be applied to developing a
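At its core, the extrapolation-chamber formalism referenced above reduces to fitting the slope of ionization current versus electrode gap and converting it to an absorbed dose rate to water. The sketch below uses illustrative constants and synthetic currents, not the measured data, and omits the backscatter and other correction factors the paper applies.

```python
import numpy as np

W_OVER_E = 33.97            # J/C, mean energy expended in air per unit charge
RHO_AIR = 1.197             # kg/m^3, air density at reference conditions (illustrative)
S_W_AIR = 1.112             # water/air stopping-power ratio for 90Sr/90Y betas (illustrative)
AREA = np.pi * 0.010 ** 2   # m^2, 20 mm diameter collecting electrode

def surface_dose_rate(gaps_m, currents_a):
    # Fit I(l) linearly and take the zero-gap slope dI/dl, then convert:
    #   D_w = (W/e) * s_w,air * (1 / (rho_air * area)) * dI/dl
    slope = np.polyfit(gaps_m, currents_a, 1)[0]   # dI/dl in A/m
    return W_OVER_E * S_W_AIR * slope / (RHO_AIR * AREA)

gaps = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])  # electrode gaps (m)
currents = 4.71e-6 * gaps + 2.0e-12                # synthetic, exactly linear (A)
dose_rate = surface_dose_rate(gaps, currents)      # ~0.47 Gy/s for this slope
```

The synthetic slope was chosen so the result lands near the 0.473 Gy/s scale reported in the abstract; real measurements would include divergence, backscatter, and humidity corrections.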

  5. Establishing macroecological trait datasets: digitalization, extrapolation, and validation of diet preferences in terrestrial mammals worldwide.

    Science.gov (United States)

    Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian

    2014-07-01

    Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species' evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals ("MammalDIET"). Diet information was digitized from two global and cladewide data sources and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family) and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external
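The hierarchical extrapolation procedure (species, then genus, then family) can be sketched with plain dictionaries. The species names and diet assignments below are hypothetical examples, not MammalDIET records, and `fill_diet` is an invented helper name.

```python
from collections import Counter

def fill_diet(taxonomy, diet):
    """taxonomy: species -> (genus, family); diet: species -> diet category.
    Species without a recorded diet inherit the most common diet in their
    genus, falling back to the most common diet in their family."""
    def mode_by(idx):
        groups = {}
        for sp, d in diet.items():
            groups.setdefault(taxonomy[sp][idx], []).append(d)
        return {key: Counter(ds).most_common(1)[0][0] for key, ds in groups.items()}

    genus_mode, family_mode = mode_by(0), mode_by(1)
    filled = {}
    for sp, (genus, family) in taxonomy.items():
        if sp in diet:
            filled[sp] = (diet[sp], "species")     # recorded at species level
        elif genus in genus_mode:
            filled[sp] = (genus_mode[genus], "genus")
        elif family in family_mode:
            filled[sp] = (family_mode[family], "family")
    return filled

taxonomy = {"Panthera leo": ("Panthera", "Felidae"),
            "Panthera onca": ("Panthera", "Felidae"),
            "Felis catus": ("Felis", "Felidae")}
diet = {"Panthera leo": "Animal"}
filled = fill_diet(taxonomy, diet)
```

Tracking the source level alongside each filled value is what makes the jack-knife and external validations described above possible.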

  6. Applying of USB interface technique in nuclear spectrum acquisition system

    International Nuclear Information System (INIS)

    Zhou Jianbin; Huang Jinhua

    2004-01-01

    This paper introduces the application of USB interface techniques to the construction of a nuclear spectrum acquisition system connected through a PC's USB interface. The authors chose the USB100 interface module and the W77E58 microcontroller for the key components. The USB interface technique is easy to apply when the USB100 module is used: the module can be treated as a common I/O component by the microcontroller, and as a communication (COM) interface when connected to the PC's USB port. The PC program is also easy to modify for the new system with the USB100 module, allowing a smooth change from the ISA and RS232 buses to the USB bus. (authors)

  7. Effective Orthorhombic Anisotropic Models for Wave field Extrapolation

    KAUST Repository

    Ibanez Jacome, Wilson

    2013-05-01

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models, to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, I generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, I develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic one, is represented by a sixth-order polynomial equation that includes the fastest solution corresponding to outgoing P-waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, which is done by explicitly solving the isotropic eikonal equation for the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
I extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the

  8. Predicting structural properties of fluids by thermodynamic extrapolation

    Science.gov (United States)

    Mahynski, Nathan A.; Jiao, Sally; Hatch, Harold W.; Blanco, Marco A.; Shen, Vincent K.

    2018-05-01

    We describe a methodology for extrapolating the structural properties of multicomponent fluids from one thermodynamic state to another. These properties generally include features of a system that may be computed from an individual configuration such as radial distribution functions, cluster size distributions, or a polymer's radius of gyration. This approach is based on the principle of using fluctuations in a system's extensive thermodynamic variables, such as energy, to construct an appropriate Taylor series expansion for these structural properties in terms of intensive conjugate variables, such as temperature. Thus, one may extrapolate these properties from one state to another when the series is truncated to some finite order. We demonstrate this extrapolation for simple and coarse-grained fluids in both the canonical and grand canonical ensembles, in terms of both temperatures and the chemical potentials of different components. The results show that this method is able to reasonably approximate structural properties of such fluids over a broad range of conditions. Consequently, this methodology may be employed to increase the computational efficiency of molecular simulations used to measure the structural properties of certain fluid systems, especially those used in high-throughput or data-driven investigations.
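In the canonical ensemble the first derivative of an averaged property with respect to inverse temperature is a fluctuation, d<X>/dbeta = -Cov(X, E), so a first-order extrapolation needs only samples collected at the reference state. The sketch below demonstrates this on a two-level toy system, not the paper's coarse-grained fluids, and truncates the Taylor series at first order.

```python
import numpy as np

def extrapolate_mean(x, e, beta0, beta):
    # first-order Taylor expansion of <X>(beta) around beta0, using the
    # canonical-ensemble fluctuation identity d<X>/dbeta = -Cov(X, E)
    cov = np.mean(x * e) - np.mean(x) * np.mean(e)
    return np.mean(x) - cov * (beta - beta0)

# toy two-level system with energies {0, 1}; sample at beta0 = 1.0
rng = np.random.default_rng(1)
beta0 = 1.0
p1 = np.exp(-beta0) / (1.0 + np.exp(-beta0))   # Boltzmann weight of the E = 1 state
e = (rng.random(200_000) < p1).astype(float)   # sampled energies
x = e                                          # property X = E for this toy case

est = extrapolate_mean(x, e, beta0, beta=1.1)
exact = np.exp(-1.1) / (1.0 + np.exp(-1.1))    # closed form for the toy system
```

The truncation error grows quadratically in (beta - beta0), which is why the methodology works best over a neighborhood of the sampled state or with higher-order terms included.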

  9. Properties of an extrapolation chamber for beta radiation dosimetry

    International Nuclear Information System (INIS)

    Caldas, L.V.E.

    The properties of a commercial extrapolation chamber were studied, and the possibility is shown of its use in beta radiation dosimetry. The chamber calibration factors were determined for several sources ( 90 Sr, 90 Y- 204 Tl and 147 Pm) making known the dependence of its response on the energy of the incident radiation. Extrapolation curves allow to obtain independence on energy for each source. One of such curves, shown for the 90 Sr- 90 Y source at 50 cm from the detector, is obtained through the variation of the chamber window thickness and the extrapolation to the null distance (determined graphically). Different curves shown also: 1) the dependence of the calibration factor on the average energy of beta radiation; 2) the variation of ionization current with the distance between the chamber and the sources; 3) the effect of the collecting electrode area on the value of calibration factors for the different sources. (I.C.R.) [pt

  10. Medical extrapolation chamber dosimeter model XW6012A

    International Nuclear Information System (INIS)

    Jin Tao; Wang Mi; Wu Jinzheng; Guo Qi

    1992-01-01

    An extrapolation chamber dosimeter has been developed for clinical dosimetry of electron beams and X-rays from medical linear accelerators. It consists of a new type extrapolation chamber, a water phantom and an intelligent portable instrument. With a thin entrance window and a φ20 mm collecting electrode made of polystyrene, the electrode spacing can be varied from 0.2 to 6 mm. The dosimeter can accomplish dose measurement automatically, and has functions of error self-diagnosis and dose self-recording. The energy range applicable is 0.5-20 MeV, and the dose-rate range 0.02-40 Gy/min. The total uncertainty is 2.7%

  11. Extrapolation of zircon fission-track annealing models

    International Nuclear Information System (INIS)

    Palissari, R.; Guedes, S.; Curvo, E.A.C.; Moreira, P.A.F.P.; Tello, C.A.; Hadler, J.C.

    2013-01-01

    One of the purposes of this study is to give further constraints on the temperature range of the zircon partial annealing zone over a geological time scale using data from borehole zircon samples, which have experienced stable temperatures for ∼1 Ma. In this way, the extrapolation problem is explicitly addressed by fitting the zircon annealing models with geological timescale data. Several empirical model formulations have been proposed to perform these calibrations and have been compared in this work. The basic form proposed for annealing models is the Arrhenius-type model. There are other annealing models that are based on the same general formulation. These empirical model equations have been preferred due to the great number of phenomena from track formation to chemical etching that are not well understood. However, there are two other models, which try to establish a direct correlation between their parameters and the related phenomena. To compare the response of the different annealing models, thermal indexes, such as closure temperature, total annealing temperature and the partial annealing zone, have been calculated and compared with field evidence. After comparing the different models, it was concluded that the fanning curvilinear models yield the best agreement between predicted index temperatures and field evidence. - Highlights: ► Geological data were used along with lab data for improving model extrapolation. ► Index temperatures were simulated for testing model extrapolation. ► Curvilinear Arrhenius models produced better geological temperature predictions

  12. Endangered species toxicity extrapolation using ICE models

    Science.gov (United States)

    The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...

  13. Line-of-sight extrapolation noise in dust polarization

    Energy Technology Data Exchange (ETDEWEB)

    Poh, Jason; Dodelson, Scott

    2017-05-19

    The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
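The mechanism can be reproduced with a small Monte Carlo: each sightline sums two modified-blackbody dust clouds with independent temperatures and polarization angles, and the 350 to 150 GHz extrapolation uses a single fiducial-temperature scaling factor. All parameter choices below (temperature range, amplitudes, spectral index) are illustrative assumptions, not Planck values.

```python
import numpy as np

def dust_sed(nu_ghz, temp_k, beta=1.6):
    # modified blackbody: nu^beta * B_nu(T); h*nu/(k*T) = 0.04799 * nu_GHz / T
    x = 0.04799 * nu_ghz / temp_k
    return nu_ghz ** (3.0 + beta) / np.expm1(x)

def simulate(n_sightlines, t_low, t_high, seed=2):
    rng = np.random.default_rng(seed)
    temps = rng.uniform(t_low, t_high, size=(n_sightlines, 2))  # two clouds per LOS
    amps = rng.uniform(0.5, 1.0, size=(n_sightlines, 2))
    psi = rng.uniform(0.0, np.pi, size=(n_sightlines, 2))       # polarization angles
    q350 = np.sum(amps * dust_sed(350.0, temps) * np.cos(2 * psi), axis=1)
    q150_true = np.sum(amps * dust_sed(150.0, temps) * np.cos(2 * psi), axis=1)
    scale = dust_sed(150.0, 20.0) / dust_sed(350.0, 20.0)       # fiducial T = 20 K
    return q150_true, q350 * scale

q_true, q_ext = simulate(5000, 15.0, 25.0)     # temperature scatter: imperfect
r = np.corrcoef(q_true, q_ext)[0, 1]           # decorrelation is small but nonzero
q_true0, q_ext0 = simulate(5000, 20.0, 20.0)   # single temperature: extrapolation exact
```

With a single cloud temperature the fiducial scaling is exact; with cloud-to-cloud temperature scatter the extrapolated Stokes Q decorrelates slightly from the true 150 GHz signal, which is the line-of-sight extrapolation noise quantified in the paper.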

  14. Diagonal ordering operation technique applied to Morse oscillator

    Energy Technology Data Exchange (ETDEWEB)

    Popov, Dušan, E-mail: dusan_popov@yahoo.co.uk [Politehnica University Timisoara, Department of Physical Foundations of Engineering, Bd. V. Parvan No. 2, 300223 Timisoara (Romania); Dong, Shi-Hai [CIDETEC, Instituto Politecnico Nacional, Unidad Profesional Adolfo Lopez Mateos, Mexico D.F. 07700 (Mexico); Popov, Miodrag [Politehnica University Timisoara, Department of Steel Structures and Building Mechanics, Traian Lalescu Street, No. 2/A, 300223 Timisoara (Romania)

    2015-11-15

We generalize the technique known as the integration within a normally ordered product (IWOP) of operators, which refers to the creation and annihilation operators of the harmonic oscillator coherent states, to a new operatorial approach: the diagonal ordering operation technique (DOOT), concerning calculations with the normally ordered product of the generalized creation and annihilation operators that generate the generalized hypergeometric coherent states. We apply this technique to the coherent states of the Morse oscillator, including the mixed (thermal) state case, and recover the well-known results achieved by other methods in the corresponding coherent state representation. In the last section we construct the coherent states for the continuous dynamics of the Morse oscillator by two new methods: the discrete–continuous limit and the solution of a finite difference equation. Finally, we construct the coherent states corresponding to the whole Morse spectrum (discrete plus continuous) and demonstrate their properties according to Klauder’s prescriptions.

  15. Extrapolation of dynamic load behaviour on hydroelectric turbine blades with cyclostationary modelling

    Science.gov (United States)

    Poirier, Marc; Gagnon, Martin; Tahan, Antoine; Coutu, André; Chamberland-lauzon, Joël

    2017-01-01

In this paper, we present the application of cyclostationary modelling for the extrapolation of short stationary load strain samples measured in situ on hydraulic turbine blades. Long periods of measurement allow a wide range of fluctuations representative of long-term reality to be considered. However, sampling over short periods limits the dynamic strain fluctuations available for analysis. The purpose of the technique presented here is therefore to generate a representative signal containing proper long-term characteristics and the expected spectrum, starting from a much shorter signal period. The final objective is to obtain a strain history that can be used to estimate the long-term fatigue behaviour of hydroelectric turbine runners.

  16. Applied potential tomography. A new noninvasive technique for measuring gastric emptying

    International Nuclear Information System (INIS)

    Avill, R.; Mangnall, Y.F.; Bird, N.C.; Brown, B.H.; Barber, D.C.; Seagar, A.D.; Johnson, A.G.; Read, N.W.

    1987-01-01

    Applied potential tomography is a new, noninvasive technique that yields sequential images of the resistivity of gastric contents after subjects have ingested a liquid or semisolid meal. This study validates the technique as a means of measuring gastric emptying. Experiments in vitro showed an excellent correlation between measurements of resistivity and either the square of the radius of a glass rod or the volume of water in a spherical balloon when both were placed in an oval tank containing saline. Altering the lateral position of the rod in the tank did not alter the values obtained. Images of abdominal resistivity were also directly correlated with the volume of air in a gastric balloon. Profiles of gastric emptying of liquid meals obtained using applied potential tomography were very similar to those obtained using scintigraphy or dye dilution techniques, provided that acid secretion was inhibited by cimetidine. Profiles of emptying of a mashed potato meal using applied potential tomography were also very similar to those obtained by scintigraphy. Measurements of the emptying of a liquid meal from the stomach were reproducible if acid secretion was inhibited by cimetidine. Thus, applied potential tomography is an accurate and reproducible method of measuring gastric emptying of liquids and particulate food. It is inexpensive, well tolerated, easy to use, and ideally suited for multiple studies in patients, even those who are pregnant

  17. Applied potential tomography. A new noninvasive technique for measuring gastric emptying

    Energy Technology Data Exchange (ETDEWEB)

    Avill, R.; Mangnall, Y.F.; Bird, N.C.; Brown, B.H.; Barber, D.C.; Seagar, A.D.; Johnson, A.G.; Read, N.W.

    1987-04-01

    Applied potential tomography is a new, noninvasive technique that yields sequential images of the resistivity of gastric contents after subjects have ingested a liquid or semisolid meal. This study validates the technique as a means of measuring gastric emptying. Experiments in vitro showed an excellent correlation between measurements of resistivity and either the square of the radius of a glass rod or the volume of water in a spherical balloon when both were placed in an oval tank containing saline. Altering the lateral position of the rod in the tank did not alter the values obtained. Images of abdominal resistivity were also directly correlated with the volume of air in a gastric balloon. Profiles of gastric emptying of liquid meals obtained using applied potential tomography were very similar to those obtained using scintigraphy or dye dilution techniques, provided that acid secretion was inhibited by cimetidine. Profiles of emptying of a mashed potato meal using applied potential tomography were also very similar to those obtained by scintigraphy. Measurements of the emptying of a liquid meal from the stomach were reproducible if acid secretion was inhibited by cimetidine. Thus, applied potential tomography is an accurate and reproducible method of measuring gastric emptying of liquids and particulate food. It is inexpensive, well tolerated, easy to use, and ideally suited for multiple studies in patients, even those who are pregnant.

  18. Time extrapolation aspects in the performance assessment of high and medium level radioactive waste disposal in the Boom Clay at Mol (Belgium)

    International Nuclear Information System (INIS)

    Volckaert, G.

    2000-01-01

SCK-CEN is studying the disposal of high and long-lived medium level waste in the Boom Clay at Mol, Belgium. In the performance assessment for such a repository, time extrapolation is an inherent problem due to the extremely long half-life of some important radionuclides. To increase confidence in these time extrapolations, SCK-CEN applies a combination of different experimental and modelling approaches, including laboratory and in situ experiments, natural analogue studies, deterministic (or mechanistic) models and stochastic models. An overview of these approaches is given, together with examples of their application to the different repository system components. (author)

  19. Unified Scaling Law for flux pinning in practical superconductors: III. Minimum datasets, core parameters, and application of the Extrapolative Scaling Expression

    Science.gov (United States)

    Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene

    2017-03-01

In Part 2 of these articles, an extensive analysis of pinning-force curves and raw scaling data was used to derive the Extrapolative Scaling Expression (ESE). This is a parameterization of the Unified Scaling Law (USL) that has the extrapolation capability of fundamental unified scaling, coupled with the application ease of a simple fitting equation. Here in Part 3, the accuracy of the ESE relation to interpolate and extrapolate limited critical-current data to obtain complete Ic(B,T,ɛ) datasets is evaluated and compared with present fitting equations. Accuracy is analyzed in terms of root mean square (RMS) error and fractional deviation statistics. Highlights from 92 test cases are condensed and summarized, covering most fitting protocols and proposed parameterizations of the USL. The results show that ESE reliably extrapolates critical currents at fields B, temperatures T, and strains ɛ that are remarkably different from the fitted minimum dataset. Depending on whether the conductor is moderate-Jc or high-Jc, effective RMS extrapolation errors for ESE are in the range 2-5 A at 12 T, which approaches the Ic measurement error (1-2%). The minimum dataset for extrapolating full Ic(B,T,ɛ) characteristics is also determined from raw scaling data. It consists of one set of Ic(B,ɛ) data at a fixed temperature (e.g., liquid helium temperature), and one set of Ic(B,T) data at a fixed strain (e.g., zero applied strain). Error analysis of extrapolations from the minimum dataset with different fitting equations shows that ESE reduces the percentage extrapolation errors at individual data points at high fields, temperatures, and compressive strains down to 1/10th to 1/40th the size of those for extrapolations with present fitting equations. Depending on the conductor, percentage fitting errors for interpolations are also reduced to as little as 1/15th the size. The extrapolation accuracy of the ESE relation offers the prospect of straightforward implementation of

  20. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, Beijing (China); CAEP Software Center for High Performance Numerical Simulation, Beijing (China); Gao, Xingyu; Song, Haifeng [Institute of Applied Physics and Computational Mathematics, Beijing (China); CAEP Software Center for High Performance Numerical Simulation, Beijing (China); Laboratory of Computational Physics, Beijing (China)

    2016-06-28

Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn–Sham density functional theory. Contrary to the intuition that a higher extrapolation order yields better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or stricter SCF convergence criteria. Using example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when the MD time step or the SCF convergence criterion is varied. We therefore suggest that BOMD simulation packages open the user interface and provide more choices of extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes, so that implementing either of them leads to no essential difference in the extrapolation accuracy.
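
    The order trade-off described in this record can be illustrated with a minimal numerical sketch. A smooth sine trajectory with additive noise stands in for an SCF-converged quantity (the time step h and noise level σ below are assumed values, not from the paper): one-step polynomial extrapolation error first drops with increasing order and then grows again as the extrapolation weights amplify the noise, so the best order is strictly interior.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    h, sigma = 0.2, 1e-4           # "MD time step" and noise level (assumed)
    orders = list(range(1, 8))
    mean_err = []
    for p in orders:
        errs = []
        for _ in range(200):
            # p+1 noisy past samples of a smooth quantity; extrapolate to t = 0
            t_hist = -h * np.arange(1, p + 2)
            y_hist = np.sin(t_hist) + rng.normal(0.0, sigma, p + 1)
            coef = np.polyfit(t_hist, y_hist, p)   # exact-degree fit
            errs.append(abs(np.polyval(coef, 0.0) - np.sin(0.0)))
        mean_err.append(float(np.mean(errs)))

    best = orders[int(np.argmin(mean_err))]
    print(best, mean_err)  # the optimal order lies strictly inside the range
    ```

    Low orders are limited by truncation error, high orders by noise amplification, mirroring the paper's conclusion that an optimal order always exists and depends on the time step and convergence tolerance.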

  1. UFOs: Observations, Studies and Extrapolations

    CERN Document Server

    Baer, T; Barnes, M J; Bartmann, W; Bracco, C; Carlier, E; Cerutti, F; Dehning, B; Ducimetière, L; Ferrari, A; Ferro-Luzzi, M; Garrel, N; Gerardin, A; Goddard, B; Holzer, E B; Jackson, S; Jimenez, J M; Kain, V; Zimmermann, F; Lechner, A; Mertens, V; Misiowiec, M; Nebot Del Busto, E; Morón Ballester, R; Norderhaug Drosdal, L; Nordt, A; Papotti, G; Redaelli, S; Uythoven, J; Velghe, B; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zerlauth, M; Fuster Martinez, N

    2012-01-01

UFOs (“Unidentified Falling Objects”) could be one of the major performance limitations for nominal LHC operation. Therefore, in 2011, the diagnostics for UFO events were significantly improved, and dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge is summarized and extrapolations for LHC operation in 2012 and beyond are presented. Mitigation strategies are proposed and related tests and measures for 2012 are specified.

  2. Extrapolation bias and the predictability of stock returns by price-scaled variables

    NARCIS (Netherlands)

    Cassella, Stefano; Gulen, H.

    Using survey data on expectations of future stock returns, we recursively estimate the degree of extrapolative weighting in investors' beliefs (DOX). In an extrapolation framework, DOX determines the relative weight investors place on recent-versus-distant past returns. DOX varies considerably over

  3. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

    2010-01-01

    Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linke...

  4. Dielectric spectroscopy technique applied to study the behaviour of irradiated polymer

    International Nuclear Information System (INIS)

    Saoud, R.; Soualmia, A.; Guerbi, C.A.; Benrekaa, N.

    2006-01-01

Relaxation spectroscopy provides an excellent method for the study of motional processes in materials and has been widely applied to macromolecules and polymers. The technique is potentially of most interest when applied to irradiated systems. Application to the study of the structure of beam-irradiated Teflon is thus an outstanding opportunity for the dielectric relaxation technique, particularly as this material exhibits clamping problems when subjected to dynamic mechanical relaxation studies. A very wide frequency range is necessary to resolve dipolar effects. In this paper, we discuss some significant results on the behaviour and the structural modification of Teflon subjected to low-energy radiation

  5. Statistical Techniques Used in Three Applied Linguistics Journals: "Language Learning,""Applied Linguistics" and "TESOL Quarterly," 1980-1986: Implications for Readers and Researchers.

    Science.gov (United States)

    Teleni, Vicki; Baldauf, Richard B., Jr.

    A study investigated the statistical techniques used by applied linguists and reported in three journals, "Language Learning,""Applied Linguistics," and "TESOL Quarterly," between 1980 and 1986. It was found that 47% of the published articles used statistical procedures. In these articles, 63% of the techniques used could be called basic, 28%…

  6. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits...... that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. Even...... the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered and (3) non-formal modelling techniques. This study analyses the impact to companies from increased availability of product knowledge and improved control...

  7. Human risk assessment of dermal and inhalation exposures to chemicals assessed by route-to-route extrapolation: the necessity of kinetic data.

    Science.gov (United States)

    Geraets, Liesbeth; Bessems, Jos G M; Zeilmaker, Marco J; Bos, Peter M J

    2014-10-01

    In toxicity testing the oral route is in general the first choice. Often, appropriate inhalation and dermal toxicity data are absent. Risk assessment for these latter routes usually has to rely on route-to-route extrapolation starting from oral toxicity data. Although it is generally recognized that the uncertainties involved are (too) large, route-to-route extrapolation is applied in many cases because of a strong need of an assessment of risks linked to a given exposure scenario. For an adequate route-to-route extrapolation the availability of at least some basic toxicokinetic data is a pre-requisite. These toxicokinetic data include all phases of kinetics, from absorption (both absorbed fraction and absorption rate for both the starting route and route of interest) via distribution and biotransformation to excretion. However, in practice only differences in absorption between the different routes are accounted for. The present paper demonstrates the necessity of route-specific absorption data by showing the impact of its absence on the uncertainty of the human health risk assessment using route-to-route extrapolation. Quantification of the absorption (by in vivo, in vitro or in silico methods), particularly for the starting route, is considered essential. Copyright © 2014 Elsevier Inc. All rights reserved.
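
    The absorption-only correction that this record argues is insufficient on its own reduces to a two-line calculation: scale the oral point of departure to an internal dose, then back out to the route of interest. All numbers below are invented for illustration, not data from the study.

    ```python
    # All values are assumptions for illustration; real assessments need
    # measured, route-specific kinetic data, as the paper argues.
    oral_noael = 50.0       # mg/kg bw/day, point of departure from an oral study
    f_abs_oral = 0.5        # absorbed fraction via the oral route (assumed)
    f_abs_dermal = 0.05     # absorbed fraction via the dermal route (assumed)

    internal_noael = oral_noael * f_abs_oral          # systemic dose at the NOAEL
    dermal_noael = internal_noael / f_abs_dermal      # external dermal equivalent
    print(dermal_noael)  # 500.0 mg/kg bw/day
    ```

    The paper's point is that this ratio is only as good as the absorption data behind it, and that distribution, biotransformation, and excretion differences are usually ignored entirely.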

  8. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

    Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
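
    A rough sketch of the accelerator idea on its own: plain power iteration on a toy Google matrix with a periodic componentwise Aitken Δ² extrapolation step, without the multilevel aggregation the paper combines it with. The matrix, damping factor, and extrapolation schedule below are illustrative assumptions.

    ```python
    import numpy as np
    from collections import deque

    def pagerank(P, alpha=0.85, tol=1e-12, extrap_every=10, max_iter=1000):
        """Power iteration accelerated by a periodic componentwise Aitken
        delta-squared extrapolation step. P must be column-stochastic."""
        n = P.shape[0]
        G = alpha * P + (1.0 - alpha) / n          # dense Google matrix (toy scale)
        x = np.full(n, 1.0 / n)
        hist = deque(maxlen=3)                     # last three power iterates
        for it in range(1, max_iter + 1):
            x_new = G @ x
            hist.append(x_new.copy())
            if it % extrap_every == 0 and len(hist) == 3:
                x0, x1, x2 = hist
                d1, d2 = x1 - x0, x2 - x1
                denom = d2 - d1
                safe = np.abs(denom) > 1e-14       # avoid division by ~0
                x_new = x2.copy()
                x_new[safe] = x2[safe] - d2[safe] ** 2 / denom[safe]
                x_new = np.clip(x_new, 0.0, None)  # keep it a probability vector
                x_new /= x_new.sum()
            if np.linalg.norm(x_new - x, 1) < tol:
                return x_new, it
            x = x_new
        return x, max_iter

    # toy 4-page link matrix (each column: outgoing links of a page, normalized)
    P = np.array([[0.0, 0.0, 1.0, 0.0],
                  [0.5, 0.0, 0.0, 0.0],
                  [0.5, 1.0, 0.0, 1.0],
                  [0.0, 0.0, 0.0, 0.0]])
    x, iters = pagerank(P)
    print(x, iters)
    ```

    The extrapolated vector is clipped and renormalized because the Δ² step is not guaranteed to stay in the probability simplex; subsequent power iterations then continue from the accelerated iterate.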

  9. Loop integration results using numerical extrapolation for a non-scalar integral

    International Nuclear Information System (INIS)

    Doncker, E. de; Shimizu, Y.; Fujimoto, J.; Yuasa, F.; Kaugars, K.; Cucos, L.; Van Voorst, J.

    2004-01-01

    Loop integration results have been obtained using numerical integration and extrapolation. An extrapolation to the limit is performed with respect to a parameter in the integrand which tends to zero. Results are given for a non-scalar four-point diagram. Extensions to accommodate loop integration by existing integration packages are also discussed. These include: using previously generated partitions of the domain and roundoff error guards

  10. Determination of palladium in biological samples applying nuclear analytical techniques

    International Nuclear Information System (INIS)

    Cavalcante, Cassio Q.; Sato, Ivone M.; Salvador, Vera L. R.; Saiki, Mitiko

    2008-01-01

    This study presents Pd determinations in bovine tissue samples containing palladium prepared in the laboratory, and CCQM-P63 automotive catalyst materials of the Proficiency Test, using instrumental thermal and epithermal neutron activation analysis and energy dispersive X-ray fluorescence techniques. Solvent extraction and solid phase extraction procedures were also applied to separate Pd from interfering elements before the irradiation in the nuclear reactor. The results obtained by different techniques were compared against each other to examine sensitivity, precision and accuracy. (author)

  11. Applying effective teaching and learning techniques to nephrology education.

    Science.gov (United States)

    Rondon-Berrios, Helbert; Johnston, James R

    2016-10-01

    The interest in nephrology as a career has declined over the last several years. Some of the reasons cited for this decline include the complexity of the specialty, poor mentoring and inadequate teaching of nephrology from medical school through residency. The purpose of this article is to introduce the reader to advances in the science of adult learning, illustrate best teaching practices in medical education that can be extrapolated to nephrology and introduce the basic teaching methods that can be used on the wards, in clinics and in the classroom.

  12. Biosimilars: From Extrapolation into Off Label Use.

    Science.gov (United States)

    Zhao, Sizheng; Nair, Jagdish R; Moots, Robert J

    2017-01-01

    Biologic drugs have revolutionised the management of many inflammatory conditions. Patent expirations have stimulated development of highly similar but non-identical molecules, the biosimilars. Extrapolation of indications is a key concept in the development of biosimilars. However, this has been met with concerns around mechanisms of action, equivalence in efficacy and immunogenicity, which are reviewed in this article. Narrative overview composed from literature search and the authors' experience. Literature search included Pubmed, Web of Science, and online document archives of the Food and Drug Administration and European Medicines Agency. The concepts of biosimilarity and extrapolation of indications are revisited. Concerns around extrapolation are exemplified using the biosimilar infliximab, CT-P13, focusing on mechanisms of action, immunogenicity and trial design. The opportunities and cautions for using biologics and biosimilars in unlicensed inflammatory conditions are reviewed. Biosimilars offer many potential opportunities in improving treatment access and increasing treatment options. The high cost associated with marketing approval means that many bio-originators may never become licenced for rarer inflammatory conditions, despite clinical efficacy. Biosimilars, with lower acquisition cost, may improve access for off-label use of biologics in the management of these patients. They may also provide opportunities to explore off-label treatment of conditions where biologic therapy is less established. However, this potential advantage must be balanced with the awareness that off-label prescribing can potentially expose patients to risky and ineffective treatments. Post-marketing surveillance is critical to developing long-term evidence to provide assurances on efficacy as well as safety. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  13. Characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Tamayo Garcia, J. A.

    2015-01-01

The extrapolation chamber is a parallel-plate, variable-volume chamber based on the Bragg-Gray theory. It determines the absorbed dose in absolute mode, with high accuracy, by extrapolating the measured ionization current to a null distance between the electrodes. This chamber is used for dosimetry of external beta rays for radiation protection. This paper presents the characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field. The absorbed dose rate to tissue at a depth of 0.07 mm was calculated and is (0.13206±0.0028) μGy. The extrapolation chamber null depth was determined and its value is 60 μm. The influence of temperature, pressure and humidity on the value of the corrected current was also evaluated. Temperature is the parameter with the greatest influence on this value, while the influence of pressure and humidity is not very significant. Extrapolation curves were obtained. (Author)
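
    The extrapolation step itself is a straight-line fit of ionization current against electrode separation, with the slope dI/dd, which is proportional to the dose rate, taken in the zero-gap limit. The readings below are invented for illustration, not the chamber data from this work.

    ```python
    import numpy as np

    # Hypothetical ionization-current readings at several electrode separations
    gap_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
    current_pA = np.array([0.42, 0.81, 1.22, 1.60, 2.01])

    # least-squares straight line; the slope dI/dd in the zero-gap limit is
    # proportional to the absorbed dose rate at the chamber window
    slope, intercept = np.polyfit(gap_mm, current_pA, 1)
    print(slope, intercept)  # slope in pA/mm
    ```

    In practice the fitted slope is combined with the electrode area, air density, and the mean energy per ion pair (W/e) to obtain the absolute dose rate.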

  14. NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.

    Science.gov (United States)

    Hinrichs, R N; McLean, S P

    1995-10-01

    This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that when possible one should use the DLT with a control object, sufficiently large as to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.

  15. Effect of extrapolation length on the phase transformation of epitaxial ferroelectric thin films

    International Nuclear Information System (INIS)

    Hu, Z.S.; Tang, M.H.; Wang, J.B.; Zheng, X.J.; Zhou, Y.C.

    2008-01-01

    Effects of extrapolation length on the phase transformation of epitaxial ferroelectric thin films on dissimilar cubic substrates have been studied on the basis of the mean-field Landau-Ginzburg-Devonshire (LGD) thermodynamic theory by taking an uneven distribution of the interior stress with thickness into account. It was found that the polarization of epitaxial ferroelectric thin films is strongly dependent on the extrapolation length of films. The physical origin of the extrapolation length during the phase transformation from paraelectric to ferroelectric was revealed in the case of ferroelectric thin films

  16. A stabilized MFE reduced-order extrapolation model based on POD for the 2D unsteady conduction-convection problem.

    Science.gov (United States)

    Xia, Hong; Luo, Zhendong

    2017-01-01

In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model holding few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique, analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and reliability of the SMFEROE model by means of numerical simulations.

  17. Testing a solar coronal magnetic field extrapolation code with the Titov–Démoulin magnetic flux rope model

    International Nuclear Information System (INIS)

    Jiang, Chao-Wei; Feng, Xue-Shang

    2016-01-01

    In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. Up to the present, the extrapolation of the magnetic field from boundary data has been the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover the coronal magnetic flux rope is important for coronal field extrapolation. In this paper, our coronal field extrapolation code is examined with an analytical magnetic flux rope model proposed by Titov and Démoulin, which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. By only using the vector field at the bottom boundary as input, we test our code with the model in a representative range of parameter space and find that the model field can be reconstructed with high accuracy. In particular, the magnetic topological interfaces formed between the flux rope and the surrounding arcade, i.e., the “hyperbolic flux tube” and “bald patch separatrix surface,” are also reliably reproduced. By this test, we demonstrate that our CESE–MHD–NLFFF code can be applied to recovering the magnetic flux rope in the solar corona as long as the vector magnetogram satisfies the force-free constraints. (paper)

  18. SNSEDextend: SuperNova Spectral Energy Distributions extrapolation toolkit

    Science.gov (United States)

    Pierel, Justin D. R.; Rodney, Steven A.; Avelino, Arturo; Bianco, Federica; Foley, Ryan J.; Friedman, Andrew; Hicken, Malcolm; Hounsell, Rebekah; Jha, Saurabh W.; Kessler, Richard; Kirshner, Robert; Mandel, Kaisey; Narayan, Gautham; Filippenko, Alexei V.; Scolnic, Daniel; Strolger, Louis-Gregory

    2018-05-01

    SNSEDextend extrapolates core-collapse and Type Ia Spectral Energy Distributions (SEDs) into the UV and IR for use in simulations and photometric classifications. The user provides a library of existing SED templates (such as those in the authors' SN SED Repository) along with new photometric constraints in the UV and/or NIR wavelength ranges. The software then extends the existing template SEDs so their colors match the input data at all phases. SNSEDextend can also extend the SALT2 spectral time-series model for Type Ia SN for a "first-order" extrapolation of the SALT2 model components, suitable for use in survey simulations and photometric classification tools; as the code does not do a rigorous re-training of the SALT2 model, the results should not be relied on for precision applications such as light curve fitting for cosmology.

  19. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2015-07-01

Full Text Available The paper focuses on BI techniques and especially data mining algorithms that can support and improve the decision making process, with applications within the financial sector. We consider data mining techniques to be more efficient, and thus we applied several supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with focus on the management of lending activities.

  20. Why does the Aitken extrapolation often help to attain convergence in self-consistent field calculations?

    International Nuclear Information System (INIS)

    Cioslowski, J.

    1988-01-01

The Aitken (three-point) extrapolation is one of the most popular convergence accelerators in SCF calculations. The conditions under which the Aitken extrapolation guarantees unconditional convergence of the SCF process are examined. A classification of SCF divergences is presented and it is shown that the extrapolation can be expected to work properly only in the case of oscillatory divergence
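
    For reference, the three-point Aitken Δ² update applied to a scalar fixed-point iteration looks as follows; the iteration x = cos(x) is a toy stand-in for the SCF cycle, not the actual SCF equations.

    ```python
    import math

    def aitken(x0, x1, x2):
        """Aitken delta-squared estimate from three successive iterates."""
        denom = (x2 - x1) - (x1 - x0)
        return x2 if denom == 0.0 else x2 - (x2 - x1) ** 2 / denom

    # fixed-point iteration x = cos(x) as a toy stand-in for the SCF cycle
    x = 1.0
    for _ in range(5):
        x0, x1, x2 = x, math.cos(x), math.cos(math.cos(x))
        x = aitken(x0, x1, x2)  # extrapolated restart (Steffensen's scheme)
    print(x)  # ≈ 0.7390851332151607, the fixed point of cos
    ```

    Restarting the iteration from each extrapolated value (Steffensen's scheme) converges in a handful of cycles, whereas the plain iteration needs dozens; for an oscillating divergence the Δ² step can likewise pull the sequence back toward the fixed point, which matches the abstract's conclusion about oscillatory divergence.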

  1. Extrapolation of π-meson form factor, zeros in the analyticity domain

    International Nuclear Information System (INIS)

    Morozov, P.T.

    1978-01-01

The problem of a stable extrapolation from the cut to an arbitrary interior point of the analyticity domain for the pion form factor is formulated and solved. As is shown, a stable solution can be derived if modulus representations with the Carleman weight function are used as the analyticity conditions. The case when the form factor has zeros is discussed. If there are zeros in the complex plane, they must be taken into account when determining the extrapolation function

  2. Extrapolation method in the Monte Carlo Shell Model and its applications

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-01-01

We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni with the pf-shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as 72Ge with the f5pg9-shell. The structure of 72Se is also studied, including a discussion of the shape-coexistence phenomenon.

  3. Oral-to-inhalation route extrapolation in occupational health risk assessment: A critical assessment

    NARCIS (Netherlands)

    Rennen, M.A.J.; Bouwman, T.; Wilschut, A.; Bessems, J.G.M.; Heer, C.de

    2004-01-01

    Due to a lack of route-specific toxicity data, the health risks resulting from occupational exposure are frequently assessed by route-to-route (RtR) extrapolation based on oral toxicity data. Insight into the conditions for and the uncertainties connected with the application of RtR extrapolation

  4. Response Load Extrapolation for Wind Turbines during Operation Based on Average Conditional Exceedance Rates

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Naess, Arvid; Saha, Nilanjan

    2011-01-01

    The paper explores a recently developed method for statistical response load (load effect) extrapolation for application to extreme response of wind turbines during operation. The extrapolation method is based on average conditional exceedance rates and is in the present implementation restricted to cases where the Gumbel distribution is the appropriate asymptotic extreme value distribution. However, two extra parameters are introduced, by which a more general and flexible class of extreme value distributions is obtained, with the Gumbel distribution as a subclass. The general method is implemented within a hierarchical model where the variables that influence the loading are divided into ergodic variables and time-invariant non-ergodic variables. The presented method for statistical response load extrapolation was compared with the existing methods based on peak extrapolation for the blade out...

  5. Outlier robustness for wind turbine extrapolated extreme loads

    DEFF Research Database (Denmark)

    Natarajan, Anand; Verelst, David Robert

    2012-01-01

    Stochastic identification of numerical artifacts in simulated loads is demonstrated using the method of principal component analysis. The extrapolation methodology is made robust to outliers through a weighted-loads approach, whereby the eigenvalues of the correlation matrix obtained using the loads with its...

  6. A stabilized MFE reduced-order extrapolation model based on POD for the 2D unsteady conduction-convection problem

    Directory of Open Access Journals (Sweden)

    Hong Xia

    2017-05-01

    Full Text Available Abstract In this study, we devote ourselves to establishing a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model, with very few unknowns, for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique. We analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and reliability of the SMFEROE model by means of numerical simulations.

  7. Applying DEA Technique to Library Evaluation in Academic Research Libraries.

    Science.gov (United States)

    Shim, Wonsik

    2003-01-01

    This study applied an analytical technique called Data Envelopment Analysis (DEA) to calculate the relative technical efficiency of 95 academic research libraries, all members of the Association of Research Libraries. DEA, with the proper model of library inputs and outputs, can reveal best practices in the peer groups, as well as the technical…

  8. An efficient wave extrapolation method for anisotropic media with tilt

    KAUST Repository

    Waheed, Umair bin

    2015-03-23

    Wavefield extrapolation operators for elliptically anisotropic media offer significant cost reduction compared with that for the transversely isotropic case, particularly when the axis of symmetry exhibits tilt (from the vertical). However, elliptical anisotropy does not provide accurate wavefield representation or imaging for transversely isotropic media. Therefore, we propose effective elliptically anisotropic models that correctly capture the kinematic behaviour of wavefields for transversely isotropic media. Specifically, we compute source-dependent effective velocities for the elliptic medium using kinematic high-frequency representation of the transversely isotropic wavefield. The effective model allows us to use cheaper elliptic wave extrapolation operators. Despite the fact that the effective models are obtained by matching kinematics using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy trade-off for wavefield computations in transversely isotropic media, particularly for media of low to moderate complexity. In addition, the wavefield solution is free from shear-wave artefacts as opposed to the conventional finite-difference-based transversely isotropic wave extrapolation scheme. We demonstrate these assertions through numerical tests on synthetic tilted transversely isotropic models.

  10. Surface analytical techniques applied to minerals processing

    International Nuclear Information System (INIS)

    Smart, R.St.C.

    1991-01-01

    An understanding of the chemical and physical forms of the chemically altered layers on the surfaces of base metal sulphides, particularly hydroxides, oxyhydroxides and oxides, and of the changes that occur in them during minerals processing, lies at the core of a complete description of flotation chemistry. This paper reviews the application of a variety of surface-sensitive techniques and methodologies to the study of surface layers on single minerals, mixed minerals, synthetic ores and real ores. Evidence from combined XPS/SAM/SEM studies has provided images and analyses of three forms of oxide, oxyhydroxide and hydroxide products on the surfaces of single sulphide minerals, mineral mixtures and complex sulphide ores. 4 refs., 2 tabs., 4 figs

  11. Design and construction of an interface system for the extrapolation chamber of the beta secondary standard

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez C, L F

    1995-10-01

    The Interface System for the Extrapolation Chamber (SICE) comprises several devices controlled by a personal computer (PC) and acquires the data required to calculate the absorbed dose due to beta radiation. The main functions of the system are: (a) measuring the ionization current or charge stored in the extrapolation chamber; (b) adjusting the distance between the plates of the extrapolation chamber automatically; (c) adjusting the bias voltage of the extrapolation chamber automatically; (d) acquiring the temperature, atmospheric pressure and relative humidity of the environment, and the voltage applied between the plates of the extrapolation chamber; (e) calculating the effective area of the plates of the extrapolation chamber and the real distance between them; (f) storing all the obtained information on hard disk or diskette. A comparison between the desired distance and the distance on the dial of the extrapolation chamber shows that the resolution of the system is 20 µm. The voltage can be varied between -399.9 V and +399.9 V with an error of less than 3% and a resolution of 0.1 V. These uncertainties are within the limits accepted for the determination of the absolute absorbed dose due to beta radiation. (Author)
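The dose evaluation behind such a system can be sketched as a linear fit: the chamber current is measured at several plate separations, and the slope dI/dd, taken toward zero chamber volume, is proportional to the absorbed dose rate. A minimal illustration, with invented readings (the numbers are hypothetical, not from this record):

```python
def chamber_slope(separations, currents):
    """Least-squares slope dI/dd of ionization current vs plate separation.
    In extrapolation-chamber dosimetry the surface dose rate is proportional
    to this slope in the limit of zero chamber volume."""
    n = len(separations)
    mx = sum(separations) / n
    my = sum(currents) / n
    sxx = sum((x - mx) ** 2 for x in separations)
    sxy = sum((x - mx) * (y - my) for x, y in zip(separations, currents))
    return sxy / sxx

# Hypothetical readings: plate separation in mm, measured current in pA.
d = [0.5, 1.0, 1.5, 2.0, 2.5]
i = [0.52, 1.01, 1.55, 2.02, 2.49]
s = chamber_slope(d, i)
print(s)  # ≈ 0.99 pA/mm
```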

  12. The correlated k-distribution technique as applied to the AVHRR channels

    Science.gov (United States)

    Kratz, David P.

    1995-01-01

    Correlated k-distributions have been created to account for the molecular absorption found in the spectral ranges of the five Advanced Very High Resolution Radiometer (AVHRR) satellite channels. The production of the k-distributions was based upon an exponential-sum fitting of transmissions (ESFT) technique applied to reference line-by-line absorptance calculations. To account for the overlap of spectral features from different molecular species, the present routines make use of the multiplication transmissivity property, which allows for considerable flexibility, especially when altering the relative mixing ratios of the various molecular species. To determine the accuracy of the correlated k-distribution technique compared with the line-by-line procedure, atmospheric flux and heating rate calculations were run for a wide variety of atmospheric conditions. For the atmospheric conditions taken into consideration, the correlated k-distribution technique yielded results within about 0.5%, both where the satellite spectral response functions were applied and where they were not. The correlated k-distribution's principal advantage is that it can be incorporated directly into multiple-scattering routines that consider scattering as well as absorption by clouds and aerosol particles.

  13. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis-set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol^-1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol^-1.
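Solving E(L) = E_CBS + B/L^α at two cardinal numbers L and eliminating B gives a closed form for the basis-set limit. A small sketch of the global two-point extrapolation (α = 3 is a common default for correlation energies, not a value taken from this abstract):

```python
def two_point_cbs(e_small, e_large, l_small, l_large, alpha=3.0):
    """Two-point basis-set extrapolation E(L) = E_CBS + B / L**alpha.

    Eliminating B between the two equations yields
        E_CBS = (E_large*L_large**alpha - E_small*L_small**alpha)
                / (L_large**alpha - L_small**alpha).
    """
    num = e_large * l_large ** alpha - e_small * l_small ** alpha
    return num / (l_large ** alpha - l_small ** alpha)

# Synthetic check: energies generated from E_CBS = -1.0, B = 0.5,
# so the extrapolation should recover -1.0 exactly.
e_dz = -1.0 + 0.5 / 2 ** 3   # L = 2 (DZ)
e_tz = -1.0 + 0.5 / 3 ** 3   # L = 3 (TZ)
cbs = two_point_cbs(e_dz, e_tz, 2, 3)
print(cbs)  # → -1.0 (up to rounding)
```

The system-dependent variant described above would instead fit α per system from cheaper MP2 energies before applying the same formula.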

  15. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    Science.gov (United States)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

    In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes with extrapolation methods to obtain a higher order of accuracy preserves their qualitative properties with respect to dissipation, dispersion, and stability. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations, and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.
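Richardson extrapolation itself is simple to state: combine one step of size h with two steps of size h/2 of a p-th order method to cancel the leading error term. A generic sketch, with explicit Euler standing in for the Runge-Kutta schemes discussed above (the scheme choice and test problem are ours, not the paper's):

```python
import math

def richardson_step(step, y, t, h, p):
    """One Richardson-extrapolated step of a p-th order one-step method:
        y_R = y_half + (y_half - y_full) / (2**p - 1),
    where y_full uses step size h and y_half two steps of size h/2."""
    y_full = step(y, t, h)
    y_half = step(step(y, t, h / 2), t + h / 2, h / 2)
    return y_half + (y_half - y_full) / (2 ** p - 1)

# Test problem: y' = -y with explicit Euler (order p = 1); exact solution exp(-t).
euler = lambda y, t, h: y + h * (-y)
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = richardson_step(euler, y, t, h, p=1)
    t += h
err = abs(y - math.exp(-1.0))
print(err)  # about 7e-4, versus roughly 2e-2 for plain Euler at the same h
```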

  16. Technique applied in electrical power distribution for Satellite Launch Vehicle

    Directory of Open Access Journals (Sweden)

    João Maurício Rosário

    2010-09-01

    Full Text Available The Satellite Launch Vehicle electrical network, currently under development in Brazil, is subdivided for analysis into the following parts: Service Electrical Network, Controlling Electrical Network, Safety Electrical Network and Telemetry Electrical Network. During the pre-launching and launching phases, these electrical networks are associated electrically and mechanically with the structure of the vehicle. To succeed in integrating these electrical networks, it is necessary to employ electrical power distribution techniques appropriate to Launch Vehicle systems. This work presents the most important techniques to be considered in characterizing the electrical power supply applied to Launch Vehicle systems. Such techniques are primarily designed to allow the electrical networks, when subjected to a single-phase fault to ground, to maintain the power supply to the loads.

  17. Standardization of electron-capture and complex beta-gamma radionuclides by the efficiency extrapolation method

    International Nuclear Information System (INIS)

    Grigorescu, L.

    1976-07-01

    The efficiency extrapolation method was improved by establishing "linearity conditions" for the discrimination on the gamma channel of the coincidence equipment. These conditions were proved to eliminate the systematic error of the method. A control procedure for verifying the fulfilment of the linearity conditions and estimating the residual systematic error was given. For low-energy gamma transitions an "equivalent scheme principle" was established, which allows a correct application of the method. Solutions of Cs-134, Co-57, Ba-133 and Zn-65 were standardized with an "effective standard deviation" of 0.3-0.7 per cent. For Zn-65, "special linearity conditions" were applied. (author)

  18. Assessment of load extrapolation methods for wind turbines

    DEFF Research Database (Denmark)

    Toft, H.S.; Sørensen, John Dalsgaard; Veldkamp, D.

    2010-01-01

    an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima and the peak over...
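As a sketch of the block-maxima approach compared above (the simulated data and the method-of-moments Gumbel fit are our own illustrative choices, not taken from the paper), block maxima can be fitted to a Gumbel distribution and the fit used to extrapolate to rarer load levels:

```python
import math
import random
import statistics

def gumbel_fit_moments(maxima):
    """Method-of-moments Gumbel fit: scale = std*sqrt(6)/pi and
    loc = mean - gamma*scale, with gamma the Euler-Mascheroni constant."""
    gamma = 0.5772156649015329
    scale = statistics.stdev(maxima) * math.sqrt(6) / math.pi
    loc = statistics.mean(maxima) - gamma * scale
    return loc, scale

def gumbel_quantile(loc, scale, p):
    """Load level with non-exceedance probability p per block."""
    return loc - scale * math.log(-math.log(p))

# Hypothetical response process: block maxima of simulated Gaussian noise
# (600 samples per block, 200 blocks).
random.seed(1)
maxima = [max(random.gauss(0.0, 1.0) for _ in range(600)) for _ in range(200)]
loc, scale = gumbel_fit_moments(maxima)
rare_load = gumbel_quantile(loc, scale, 0.999)  # extrapolated rare level
```

Peak-over-threshold methods would instead fit the excesses above a high threshold; the block-maxima route shown here is the simplest to state.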

  19. Assessment of Load Extrapolation Methods for Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard; Veldkamp, Dick

    2011-01-01

    , an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper, three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima, and the peak over...

  20. On extrapolation blowups in the $L_p$ scale

    Czech Academy of Sciences Publication Activity Database

    Capone, C.; Fiorenza, A.; Krbec, Miroslav

    2006-01-01

    Roč. 9, č. 4 (2006), s. 1-15 ISSN 1025-5834 R&D Projects: GA ČR(CZ) GA201/01/1201 Institutional research plan: CEZ:AV0Z10190503 Keywords : extrapolation * Lebesgue spaces * small Lebesgue spaces Subject RIV: BA - General Mathematics Impact factor: 0.349, year: 2004

  1. [Technique and value of direct MR arthrography applying articular distraction].

    Science.gov (United States)

    Becce, Fabio; Wettstein, Michael; Guntern, Daniel; Mouhsine, Elyazid; Palhais, Nuno; Theumann, Nicolas

    2010-02-24

    Direct MR arthrography has a better diagnostic accuracy than MR imaging alone. However, contrast material is not always homogeneously distributed in the articular space. Lesions of cartilage surfaces or intra-articular soft tissues can thus be misdiagnosed. Concomitant application of axial traction during MR arthrography leads to articular distraction. This enables better distribution of contrast material in the joint and better delineation of intra-articular structures. Therefore, this technique improves detection of cartilage lesions. Moreover, the axial stress applied on articular structures may reveal lesions invisible on MR images without traction. Based on our clinical experience, we believe that this relatively unknown technique is promising and should be further developed.

  2. Optimization technique applied to interpretation of experimental data and research of constitutive laws

    International Nuclear Information System (INIS)

    Grossette, J.C.

    1982-01-01

    The feasibility of an identification technique applied to one-dimensional numerical analysis of the split-Hopkinson pressure bar experiment is demonstrated. A general 1-D elastic-plastic-viscoplastic computer program was written to give an adequate solution for the elastic-plastic-viscoplastic response of a pressure bar subjected to a Heaviside step loading function in time applied at one end of the bar. Special emphasis is placed on the response of the specimen during the first microseconds, where no equilibrium conditions can be assumed. During this transient phase, discontinuity conditions related to wave propagation are encountered and must be carefully taken into account. Having derived an adequate numerical model, the Pontryagin identification technique is applied in such a way that the unknowns are physical parameters. The solutions depend mainly on the selection of a class of proper eigen objective functionals (cost functions), which may be combined to obtain a convenient numerical objective function. A number of significant questions arising in the choice of parameter-adjustment algorithms are discussed. In particular, this technique leads to a two-point boundary value problem, which is solved using an iterative gradient-like technique usually referred to as a double-operator gradient method. This method combines the classical Fletcher-Powell technique and a partial quadratic technique with automatic step-size selection, and is much more efficient than the usual ones. Numerical experimentation with simulated data was performed to test the accuracy and stability of the identification algorithm and to determine the most adequate type and quantity of data for estimation purposes.

  3. Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?

    Science.gov (United States)

    Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.

    2016-02-01

    It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.

  4. Biosimilars in Inflammatory Bowel Disease: Facts and Fears of Extrapolation.

    Science.gov (United States)

    Ben-Horin, Shomron; Vande Casteele, Niels; Schreiber, Stefan; Lakatos, Peter Laszlo

    2016-12-01

    Biologic drugs such as infliximab and other anti-tumor necrosis factor monoclonal antibodies have transformed the treatment of immune-mediated inflammatory conditions such as Crohn's disease and ulcerative colitis (collectively known as inflammatory bowel disease [IBD]). However, the complex manufacturing processes involved in producing these drugs mean their use in clinical practice is expensive. Recent or impending expiration of patents for several biologics has led to development of biosimilar versions of these drugs, with the aim of providing substantial cost savings and increased accessibility to treatment. Biosimilars undergo an expedited regulatory process. This involves proving structural, functional, and biological biosimilarity to the reference product (RP). It is also expected that clinical equivalency/comparability will be demonstrated in a clinical trial in one (or more) sensitive population. Once these requirements are fulfilled, extrapolation of biosimilar approval to other indications for which the RP is approved is permitted without the need for further clinical trials, as long as this is scientifically justifiable. However, such justification requires that the mechanism(s) of action of the RP in question should be similar across indications and also comparable between the RP and the biosimilar in the clinically tested population(s). Likewise, the pharmacokinetics, immunogenicity, and safety of the RP should be similar across indications and comparable between the RP and biosimilar in the clinically tested population(s). To date, most anti-tumor necrosis factor biosimilars have been tested in trials recruiting patients with rheumatoid arthritis. Concerns have been raised regarding extrapolation of clinical data obtained in rheumatologic populations to IBD indications. In this review, we discuss the issues surrounding indication extrapolation, with a focus on extrapolation to IBD.

  5. An optimization planning technique for Suez Canal Network in Egypt

    Energy Technology Data Exchange (ETDEWEB)

    Abou El-Ela, A.A.; El-Zeftawy, A.A.; Allam, S.M.; Atta, Gasir M. [Electrical Engineering Dept., Faculty of Eng., Shebin El-Kom (Egypt)

    2010-02-15

    This paper introduces a proposed optimization technique (POT) for predicting peak load demand and planning transmission-line systems. Many traditional methods have been presented for long-term load forecasting of electrical power systems, but their results are approximate. Therefore, the artificial neural network (ANN) technique for long-term peak load forecasting is modified and discussed as a modern technique in long-term load forecasting. The modified technique is applied to the Egyptian electrical network, based on its historical data, to predict the electrical peak load demand up to the year 2017. This technique is compared with extrapolation of trend curves as a traditional method. The POT is also applied to obtain the optimal planning of transmission lines for the 220 kV Suez Canal Network (SCN) using the ANN technique. Minimization of the transmission network costs is taken as the objective function, while the transmission-line (TL) planning constraints are satisfied. The Zafarana site on the Red Sea coast is considered an optimal site for installing large wind farm (WF) units in Egypt. The POT is therefore applied to plan both the peak load and the electrical transmission of the SCN with and without WF units, to assess the impact of WF units on the Egyptian transmission system, considering the reliability constraints, which were treated as a separate model in previous techniques. The application to the SCN shows the capability and efficiency of the proposed techniques in predicting peak load demand and obtaining the optimal planning of transmission lines of the SCN up to the year 2017. (author)

  6. Evaluation of irradiation damage effect by applying electric properties based techniques

    International Nuclear Information System (INIS)

    Acosta, B.; Sevini, F.

    2004-01-01

    The most important effect of radiation degradation is the decrease in ductility of the reactor pressure vessel (RPV) ferritic steels. The main way to determine the mechanical behaviour of RPV steels is tensile and impact testing, from which the ductile-to-brittle transition temperature (DBTT) and its increase due to neutron irradiation can be calculated. These tests are destructive and are regularly applied to surveillance specimens to assess the integrity of the RPV. The possibility of applying validated non-destructive ageing-monitoring techniques would, however, facilitate the surveillance of the materials that form the reactor vessel. The JRC-IE has developed two devices, based on the measurement of electrical properties, to assess non-destructively the embrittlement state of materials. The first technique, called Seebeck and Thomson Effects on Aged Material (STEAM), is based on the measurement of the Seebeck coefficient, which is characteristic of the material and related to the microstructural changes induced by irradiation embrittlement. With the same aim, the second technique, named Resistivity Effects on Aged Material (REAM), measures the resistivity of the material. The purpose of this research is to correlate the results of the impact tests and the STEAM and REAM measurements with the change in mechanical properties due to neutron irradiation. These results will make it possible to improve such techniques, based on the measurement of material electrical properties, for application to irradiation-embrittlement assessment.

  7. Statistically extrapolated nowcasting of summertime precipitation over the Eastern Alps

    Science.gov (United States)

    Chen, Min; Bica, Benedikt; Tüchler, Lukas; Kann, Alexander; Wang, Yong

    2017-07-01

    This paper presents a new multiple linear regression (MLR) approach to updating the hourly extrapolated precipitation forecasts generated by the INCA (Integrated Nowcasting through Comprehensive Analysis) system for the Eastern Alps. The generalized form of the model approximates the updated precipitation forecast as a linear response to combinations of predictors selected through a backward elimination algorithm from a pool of predictors. The predictors comprise the raw output of the extrapolated precipitation forecast, the latest radar observations, the convective analysis, and the precipitation analysis. For every MLR model, bias and distribution correction procedures are designed to further correct the systematic regression errors. Applications of the MLR models to a verification dataset containing two months of qualified samples, and to one month of gridded data, are performed and evaluated. Generally, MLR yields slight but definite improvements in the intensity accuracy of forecasts during the late evening to morning period, and significantly improves the forecasts for large thresholds. The structure-amplitude-location scores, used to evaluate the performance of the MLR approach based on its simulation of morphological features, indicate that MLR typically reduces the overestimation of amplitudes, generates similar horizontal structures in precipitation patterns, and gives slightly degraded location forecasts, when compared with the extrapolated nowcasting.
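The backward-elimination step can be sketched generically: starting from the full predictor pool, repeatedly drop the predictor whose removal increases the residual sum of squares the least, until any further removal would cost too much. This toy version is our own simplification (the INCA post-processing also applies bias and distribution corrections not shown here) and uses plain ordinary least squares:

```python
def ols(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gauss-Jordan
    elimination (adequate for a handful of predictors)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * t for r, t in zip(X, y))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(k):
            if r != c and A[r][c]:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][k] / A[i][i] for i in range(k)]

def rss(X, y, b):
    """Residual sum of squares for coefficients b."""
    return sum((t - sum(bi * xi for bi, xi in zip(b, r))) ** 2
               for r, t in zip(X, y))

def backward_eliminate(X, y, names, max_rss_increase=1e-3):
    """Drop predictors one at a time, always the cheapest to remove,
    while the RSS increase stays below the threshold."""
    cols = list(range(len(names)))
    while len(cols) > 1:
        sub = lambda ks: [[r[c] for c in ks] for r in X]
        base = rss(sub(cols), y, ols(sub(cols), y))
        cost = {c: rss(sub([k for k in cols if k != c]), y,
                       ols(sub([k for k in cols if k != c]), y)) - base
                for c in cols}
        cheapest = min(cost, key=cost.get)
        if cost[cheapest] > max_rss_increase:
            break
        cols.remove(cheapest)
    return [names[c] for c in cols]

# Toy data: y depends on x1 only; x2 is an irrelevant predictor,
# so elimination should keep the intercept and x1.
X = [[1, 0, 0.3], [1, 1, -0.2], [1, 2, 0.5], [1, 3, 0.1], [1, 4, -0.4]]
y = [2 + 3 * r[1] for r in X]
kept = backward_eliminate(X, y, ["const", "x1", "x2"])
print(kept)  # → ['const', 'x1']
```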

  8. Applying field mapping refractive beam shapers to improve holographic techniques

    Science.gov (United States)

    Laskin, Alexander; Williams, Gavin; McWilliam, Richard; Laskin, Vadim

    2012-03-01

    The performance of various holographic techniques can be substantially improved by homogenizing the intensity profile of the laser beam using beam-shaping optics, for example achromatic field-mapping refractive beam shapers such as the πShaper. The operational principle of these devices presumes transformation of the laser beam intensity from a Gaussian to a flattop profile with high flatness of the output wavefront, preservation of beam consistency, a collimated output beam of low divergence, high transmittance, extended depth of field, and negligible residual wave aberration, while the achromatic design provides the capability to work with several laser sources of different wavelengths simultaneously. Applying these beam shapers brings serious benefits to Spatial Light Modulator based techniques such as Computer Generated Holography or Dot-Matrix mastering of security holograms, since uniform illumination of an SLM simplifies the mathematical calculations and increases the predictability and reliability of the imaging results. Another example is multicolour Denisyuk holography, where the achromatic πShaper provides uniform illumination of a field at various wavelengths simultaneously. This paper describes some design basics of the field-mapping refractive beam shapers and optical layouts for their application in holographic systems. Examples of real implementations and experimental results are presented as well.

  9. Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-10-08

    Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.

  10. Just-in-Time techniques as applied to hazardous materials management

    OpenAIRE

    Spicer, John S.

    1996-01-01

    Approved for public release; distribution is unlimited. This study investigates the feasibility of integrating JIT techniques in the context of hazardous materials management. It provides a description of JIT, a description of environmental compliance issues and the outgrowth of related HAZMAT policies, and a broad perspective on strategies for applying JIT to HAZMAT management. http://archive.org/details/justintimetechn00spic Lieutenant Commander, United States Navy

  11. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    Science.gov (United States)

    Mueller, David S.

    2013-04-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
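    The power-law step can be sketched as follows (a minimal illustration on synthetic normalized-profile data, not the extrap program itself; the exponent is fitted in log space and the unmeasured top and bottom discharges are obtained by integrating the fitted law):

```python
import numpy as np

# Synthetic normalized profile: z = height above the streambed divided by
# depth, u = normalized velocity. Power law: u(z) = a * z**b (b ~ 1/6).
z = np.array([0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85])
u = 1.0 * z ** (1 / 6)  # synthetic profile following the power law exactly

# Fit the exponent by linear regression in log space: ln u = ln a + b ln z
b, ln_a = np.polyfit(np.log(z), np.log(u), 1)
a = np.exp(ln_a)

def unit_discharge(a, b, z1, z2):
    """Integral of the fitted power law between normalized heights z1 and z2."""
    return a / (b + 1) * (z2 ** (b + 1) - z1 ** (b + 1))

q_bottom = unit_discharge(a, b, 0.0, z[0])    # unmeasured bottom part
q_top = unit_discharge(a, b, z[-1], 1.0)      # unmeasured top part
q_measured = np.sum((u[1:] + u[:-1]) / 2 * np.diff(z))  # trapezoid rule

print(b, q_bottom, q_top)  # b recovers ~1/6 on this synthetic profile
```

    In practice the fitted exponent and the choice of extrapolation law would be reviewed against the measured ensemble profile, which is the purpose of the extrap GUI.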

  12. Three-dimensional integrated CAE system applying computer graphic technique

    International Nuclear Information System (INIS)

    Kato, Toshisada; Tanaka, Kazuo; Akitomo, Norio; Obata, Tokayasu.

    1991-01-01

    A three-dimensional CAE system for nuclear power plant design is presented. This system utilizes high-speed computer graphic techniques for the plant design review, and an integrated engineering database for handling the large amount of nuclear power plant engineering data in a unified data format. Applying this system makes it possible to construct a nuclear power plant using only computer data from the basic design phase to the manufacturing phase, and it increases the productivity and reliability of the nuclear power plants. (author)

  13. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time-intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
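    The iterated-quadrature idea can be sketched on a toy nearest-neighbour model (an open chain of rotors with Boltzmann weight exp(β cos(x_i − x_{i+1})); this analogue and the parameter values are illustrative assumptions, not the observables studied in the paper):

```python
import numpy as np

def rni_chain(beta, n_sites, m):
    """Recursive numerical integration for an open chain with
    nearest-neighbour weight f(x, y) = exp(beta * cos(x - y)).
    A low-dimensional Gauss-Legendre rule with m points is applied
    iteratively, integrating out one site at a time."""
    t, w = np.polynomial.legendre.leggauss(m)
    x = np.pi * t                  # map nodes from [-1, 1] to [-pi, pi]
    w = np.pi * w                  # rescale weights accordingly
    T = np.exp(beta * np.cos(x[:, None] - x[None, :]))  # coupling matrix
    v = np.ones(m)
    for _ in range(n_sites - 1):   # one matrix-vector product per site
        v = T @ (w * v)
    return w @ v                   # final weighted sum closes the last integral

beta, n_sites, m = 1.0, 8, 32
z = rni_chain(beta, n_sites, m)
# Exact result for the open chain: Z = 2*pi * (2*pi * I_0(beta))**(n_sites - 1)
z_exact = 2 * np.pi * (2 * np.pi * np.i0(beta)) ** (n_sites - 1)
print(z / z_exact)  # converges (super-)exponentially fast in m
```

    The cost grows linearly in the number of sites instead of exponentially in the dimension, which is the point of the recursive scheme.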

  14. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo (MCMC) techniques with N samples behaves like 1/√(N). This scaling often makes it very time-intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  15. Calibration of the 90Sr+90Y ophthalmic and dermatological applicators with an extrapolation ionization minichamber

    International Nuclear Information System (INIS)

    Antonio, Patrícia L.; Oliveira, Mércia L.; Caldas, Linda V.E.

    2014-01-01

    90Sr+90Y clinical applicators are used for brachytherapy in Brazilian clinics even though they are no longer manufactured. Such sources must be calibrated periodically, and one of the calibration methods in use is ionometry with extrapolation ionization chambers. 90Sr+90Y clinical applicators were calibrated using an extrapolation minichamber developed at the Calibration Laboratory at IPEN. The results obtained agree satisfactorily with the data provided in the calibration certificates of the sources. - Highlights: • 90Sr+90Y clinical applicators were calibrated using a mini-extrapolation chamber. • An extrapolation curve was obtained for each applicator during its calibration. • The results were compared with those provided by the calibration certificates. • All results for the dermatological applicators presented differences lower than 5%

  16. Generalized empirical equation for the extrapolated range of electrons in elemental and compound materials

    International Nuclear Information System (INIS)

    Lima, W. de; Poli CR, D. de

    1999-01-01

    The extrapolated range R_ex of electrons is useful for various purposes in research and in the application of electrons, for example, in polymer modification, electron energy determination and estimation of effects associated with deep penetration of electrons. A number of works have used empirical equations to express the extrapolated range for some elements. In this work a generalized empirical equation, very simple and accurate, in the energy region 0.3 keV - 50 MeV is proposed. The extrapolated range for elements, in organic or inorganic molecules and compound materials, can be well expressed as a function of the atomic number Z, or of two empirical parameters Z_m for molecules and Z_c for compound materials in place of Z. (author)

  17. Development of technique to apply induction heating stress improvement to recirculation inlet nozzle

    International Nuclear Information System (INIS)

    Chiba, Kunihiko; Nihei, Kenichi; Ootaka, Minoru

    2009-01-01

    Stress corrosion cracking (SCC) has been found in the primary loop recirculation (PLR) systems of boiling water reactors (BWR). Residual stress in the welding heat-affected zone is one of the factors in SCC, and residual stress improvement is one of the most effective methods of preventing it. Induction heating stress improvement (IHSI) is one of the techniques used to reduce residual stress. However, it is difficult to apply IHSI in places such as the recirculation inlet nozzle where the flow stagnates. In the present study, a technique to apply IHSI to the recirculation inlet nozzle was developed using a water jet blown into the crevice between the nozzle safe end and the thermal sleeve. (author)

  18. SU-F-T-64: An Alternative Approach to Determining the Reference Air-Kerma Rate from Extrapolation Chamber Measurements

    International Nuclear Information System (INIS)

    Schneider, T

    2016-01-01

    Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of 125I brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range x_0 of secondary electrons originating from the electrode. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit to the so-called 'extrapolation curve', the measured ionization charges Q vs. plate separations x; or (2) to differentiate Q(x) and to derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, a precondition is that the measured data be corrected for all known influencing effects before the evaluation method is applied. However, the discrepancy between their results is larger than the uncertainty given for the determination of the RAKR with either method. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x_1, x_2) is applied to the difference of the charges measured at the two plate separations x_1 and x_2. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma Ka(0) for plate separation zero, and the mean air kermas at the plate separations x_1 and x_2, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method shall be established in the future.
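    The two conventional evaluation routes can be contrasted on synthetic data (a hypothetical extrapolation curve with a small quadratic term added so that the methods disagree; all values and units are illustrative only, not PTB measurement data):

```python
import numpy as np

# Hypothetical corrected extrapolation-curve data: ionization charge Q (pC)
# at plate separations x (mm), with a small quadratic depth dependence.
x = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])
Q = 5.0 * x + 0.08 * x**2

# Method 1: slope of a straight-line fit to Q(x) over the whole range.
slope_fit = np.polyfit(x, Q, 1)[0]

# Method 2: differentiate Q(x) numerically, then extrapolate dQ/dx
# linearly towards zero plate separation.
dQdx = np.gradient(Q, x, edge_order=2)
slope_zero = np.polyfit(x, dQdx, 1)[1]  # intercept of dQ/dx at x = 0

print(slope_fit, slope_zero)  # method 1 -> 5.4, method 2 -> 5.0 here
```

    The RAKR is proportional to the evaluated slope (times calibration constants omitted here), so any curvature in Q(x) splits the two methods exactly as the abstract describes.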

  19. Hydrologic nonstationarity and extrapolating models to predict the future: overview of session and proceeding

    Directory of Open Access Journals (Sweden)

    F. H. S. Chiew

    2015-06-01

    Full Text Available This paper provides an overview of this IAHS symposium and PIAHS proceeding on "hydrologic nonstationarity and extrapolating models to predict the future". The paper provides a brief review of research on this topic, presents approaches used to account for nonstationarity when extrapolating models to predict the future, and summarises the papers in this session and proceeding.

  20. Effective ellipsoidal models for wavefield extrapolation in tilted orthorhombic media

    KAUST Repository

    Waheed, Umair Bin

    2016-04-22

    Wavefield computations using the ellipsoidally anisotropic extrapolation operator offer significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate wavefield representation or imaging for media of orthorhombic symmetry. Therefore, we propose the use of ‘effective ellipsoidally anisotropic’ models that correctly capture the kinematic behaviour of wavefields for tilted orthorhombic (TOR) media. We compute effective velocities for the ellipsoidally anisotropic medium using a kinematic high-frequency representation of the TOR wavefield, obtained by solving the TOR eikonal equation. The effective model allows us to use the cheaper ellipsoidally anisotropic wave extrapolation operators. Although the effective models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including frequency dependency and caustics, if present, with reasonable accuracy. The proposed methodology offers a much better cost versus accuracy trade-off for wavefield computations in TOR media, particularly for media of low to moderate anisotropic strength. Furthermore, the computed wavefield solution is free from shear-wave artefacts as opposed to the conventional finite-difference based TOR wave extrapolation scheme. We demonstrate the applicability and usefulness of our formulation through numerical tests on synthetic TOR models. © 2016 Institute of Geophysics of the ASCR, v.v.i.

  1. Airflow measurement techniques applied to radon mitigation problems

    International Nuclear Information System (INIS)

    Harrje, D.T.; Gadsby, K.J.

    1989-01-01

    During the past decade a multitude of diagnostic procedures associated with the evaluation of air infiltration and air leakage sites have been developed. The spirit of international cooperation and exchange of ideas within the AIC-AIVC conferences has greatly facilitated the adoption and use of these measurement techniques in the countries participating in Annex V. But wide application of such diagnostic methods is not limited to air infiltration alone. The subject of this paper concerns ways to evaluate and improve radon reduction in buildings using diagnostic methods directly related to developments familiar to the AIVC. Radon problems are certainly not unique to the United States, and the methods described here have to a degree been applied by researchers in other countries faced with similar problems. The radon problem involves more than a harmful pollutant in the living spaces of our buildings; it also involves the energy needed to operate radon removal equipment and the resulting loss of conditioned interior air. The techniques used for air infiltration evaluation will be shown to be very useful in dealing with the radon mitigation challenge. 10 refs., 7 figs., 1 tab

  2. A special mini-extrapolation chamber for calibration of 90Sr+90Y sources

    International Nuclear Information System (INIS)

    Oliveira, Mercia L; Caldas, Linda V E

    2005-01-01

    90Sr+90Y applicators are commonly utilized in brachytherapy, including ophthalmic procedures. The recommended instruments for the calibration of these applicators are extrapolation chambers, which are ionization chambers that allow the variation of their sensitive volume. Using the extrapolation method, the absorbed dose rate at the applicator surface can be determined. The aim of the present work was to develop a mini-extrapolation chamber for the calibration of 90Sr+90Y beta-ray applicators. The developed mini-chamber has a 3.0 cm outer diameter and is 11.3 cm in length. An aluminized polyester foil is used as the entrance window, while the collecting electrode is made of graphited polymethylmethacrylate. This mini-chamber was tested in 90Sr+90Y radiation beams from a beta-particle check source and with a plane ophthalmic applicator, showing adequate results

  3. Database 'catalogue of techniques applied to materials and products of nuclear engineering'

    International Nuclear Information System (INIS)

    Lebedeva, E.E.; Golovanov, V.N.; Podkopayeva, I.A.; Temnoyeva, T.A.

    2002-01-01

    The database 'Catalogue of techniques applied to materials and products of nuclear engineering' (IS MERI) was developed to provide informational support for SSC RF RIAR and other enterprises in scientific investigations. This database contains information on the techniques used at RF Minatom enterprises for the investigation of reactor material properties. The main purpose of the system is to assess the current status of the reactor materials science experimental base for further planning of experimental activities and improvement of methodical support. (author)

  4. In vitro to in vivo extrapolation of biotransformation rates for assessing bioaccumulation of hydrophobic organic chemicals in mammals.

    Science.gov (United States)

    Lee, Yung-Shan; Lo, Justin C; Otton, S Victoria; Moore, Margo M; Kennedy, Chris J; Gobas, Frank A P C

    2017-07-01

    Incorporating biotransformation in bioaccumulation assessments of hydrophobic chemicals in both aquatic and terrestrial organisms in a simple, rapid, and cost-effective manner is urgently needed to improve bioaccumulation assessments of potentially bioaccumulative substances. One approach to estimate whole-animal biotransformation rate constants is to combine in vitro measurements of hepatic biotransformation kinetics with in vitro to in vivo extrapolation (IVIVE) and bioaccumulation modeling. An established IVIVE modeling approach exists for pharmaceuticals (referred to in the present study as IVIVE-Ph) and has recently been adapted for chemical bioaccumulation assessments in fish. The present study proposes and tests an alternative IVIVE-B technique to support bioaccumulation assessment of hydrophobic chemicals with a log octanol-water partition coefficient (K_OW) ≥ 4 in mammals. The IVIVE-B approach requires fewer physiological and physiochemical parameters than the IVIVE-Ph approach and does not involve interconversions between clearance and rate constants in the extrapolation. Using in vitro depletion rates, the results show that the IVIVE-B and IVIVE-Ph models yield similar estimates of rat whole-organism biotransformation rate constants for hypothetical chemicals with log K_OW ≥ 4. The IVIVE-B approach generated in vivo biotransformation rate constants and biomagnification factors (BMFs) for benzo[a]pyrene that are within the range of empirical observations. The proposed IVIVE-B technique may be a useful tool for assessing BMFs of hydrophobic organic chemicals in mammals. Environ Toxicol Chem 2017;36:1934-1946. © 2016 SETAC.
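    The general scaling chain in such IVIVE approaches can be sketched as follows (every parameter value and scaling factor below is a hypothetical placeholder chosen for illustration; this is not the IVIVE-B model of the study):

```python
# Illustrative IVIVE sketch: scale an in vitro depletion rate constant
# measured in a liver S9 incubation up to a whole-organism
# biotransformation rate constant. All values are assumed, not measured.

k_dep = 0.5          # in vitro depletion rate constant (1/h), hypothetical
v_inc = 0.5e-3       # incubation volume (L), hypothetical
p_inc = 1.0e-3       # S9 protein per incubation (g), hypothetical
cl_in_vitro = k_dep * v_inc / p_inc      # clearance per g S9 protein (L/h/g)

p_liver = 80.0       # g S9 protein per kg liver (assumed scaling factor)
m_liver = 0.04       # liver mass as a fraction of body mass (assumed, rat)
cl_int = cl_in_vitro * p_liver * m_liver # whole-liver clearance (L/h/kg body)

v_d = 5.0            # apparent volume of distribution (L/kg, assumed)
k_b = cl_int / v_d   # whole-organism biotransformation rate constant (1/h)
print(k_b)           # 0.16 1/h for these placeholder inputs
```

    The IVIVE-B point made in the abstract is that, for log K_OW ≥ 4, such a chain can be formulated directly in rate constants, avoiding clearance-to-rate-constant interconversions.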

  5. π π scattering by pole extrapolation methods

    International Nuclear Information System (INIS)

    Lott, F.W. III.

    1978-01-01

    A 25-inch hydrogen bubble chamber was used at the Lawrence Berkeley Laboratory Bevatron to produce 300,000 pictures of π⁺p interactions at an incident π⁺ momentum of 2.67 GeV/c. The 2-prong events were processed using the FSD and the FOG-CLOUDY-FAIR data reduction system. Events of the type π⁺p → π⁺pπ⁰ and π⁺p → π⁺π⁺n with momentum transfer to the proton -t ≤ 0.238 GeV² were selected. These events were used to extrapolate to the pion pole (t = m_π²) in order to investigate the ππ interaction with isospins of both T=1 and T=2. Two methods were used for the extrapolation: the original Chew-Low method developed in 1959 and the Durr-Pilkuhn method developed in 1965, which takes into account centrifugal-barrier penetration factors. At first it seemed that, while the Durr-Pilkuhn method gave better values for the total ππ cross section, the Chew-Low method gave better values for the angular distribution. Further analysis, however, showed that, if the requirement of total OPE (one-pion-exchange) was dropped, then the Durr-Pilkuhn method gave more reasonable values for the angular distribution as well as for the total ππ cross section

  6. π π scattering by pole extrapolation methods

    International Nuclear Information System (INIS)

    Lott, F.W. III.

    1977-01-01

    A 25-inch hydrogen bubble chamber was used at the Lawrence Berkeley Laboratory Bevatron to produce 300,000 pictures of π⁺p interactions at an incident π⁺ momentum of 2.67 GeV/c. The 2-prong events were processed using the FSD and the FOG-CLOUDY-FAIR data reduction system. Events of the type π⁺p → π⁺pπ⁰ and π⁺p → π⁺π⁺n with momentum transfer to the proton -t ≤ 0.238 GeV² were selected. These events were used to extrapolate to the pion pole (t = m_π²) in order to investigate the ππ interaction with isospins of both T = 1 and T = 2. Two methods were used for the extrapolation: the original Chew-Low method developed in 1959 and the Durr-Pilkuhn method developed in 1965, which takes into account centrifugal-barrier penetration factors. At first it seemed that, while the Durr-Pilkuhn method gave better values for the total ππ cross section, the Chew-Low method gave better values for the angular distribution. Further analysis, however, showed that if the requirement of total OPE (one-pion-exchange) were dropped, then the Durr-Pilkuhn method gave more reasonable values for the angular distribution as well as for the total ππ cross section

  7. Correction method for critical extrapolation of control-rods-rising during physical start-up of reactor

    International Nuclear Information System (INIS)

    Zhang Fan; Chen Wenzhen; Yu Lei

    2008-01-01

    During the physical start-up of a nuclear reactor, the curve obtained by lifting the control rods and extrapolating to the critical state is often convex (protruding), which can lead to supercritical phenomena. In this paper, the reason why the curve protrudes is analysed. A correction method is introduced, and calculations were carried out with practical data from a nuclear power plant. The results show that the correction method reverses the protruding shape of the extrapolation curve, and the risk of supercritical phenomena during the physical start-up of the reactor can be reduced by using the extrapolation curve obtained with the correction method. (authors)
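    The underlying extrapolation is the classic inverse-multiplication (1/M) plot. A minimal sketch (with hypothetical count rates; the paper's specific correction is not reproduced here) in which only the most recent points are used for the straight-line fit, reducing the bias from the curve's convexity:

```python
import numpy as np

# Hypothetical count rates C at successive control-rod positions h (cm).
# Source multiplication makes C diverge as the core approaches critical.
h = np.array([0.0, 10.0, 20.0, 30.0])
C = np.array([100.0, 140.0, 230.0, 520.0])

m_inv = C[0] / C                          # inverse multiplication 1/M
coef = np.polyfit(h[-2:], m_inv[-2:], 1)  # fit only the latest two points
h_crit = -coef[1] / coef[0]               # extrapolate 1/M -> 0

print(h_crit)  # predicted critical rod position, ~37.9 cm here
```

    Using all points of a convex 1/M curve in the fit would push the predicted critical position too far out, which is exactly the supercritical risk the correction method addresses.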

  8. Extrapolation of Extreme Response for Wind Turbines based on Field Measurements

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2009-01-01

    The characteristic loads on wind turbines during operation are, among others, dependent on the mean wind speed, the turbulence intensity and the type and settings of the control system. These parameters must be taken into account in the assessment of the characteristic load. The characteristic load ... extrapolation are presented. The first method is based on the same assumptions as the existing method, but the statistical extrapolation is only performed for a limited number of mean wind speeds where the extreme load is likely to occur. For the second method the mean wind speeds are divided into storms, which are assumed independent, and the characteristic loads are determined from the extreme load in each storm.

  9. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS and Ulysses proton density, temperature and bulk velocities back to the corona. Using simple mass-flux conservation, we show very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy-balance equations, which arise from these empirical observational models.
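    The mass-flux-conservation step can be sketched in a few lines (all numerical values are illustrative placeholders, not HELIOS or Ulysses measurements):

```python
# Mass-flux conservation n(r) * v(r) * r**2 = const lets in situ
# measurements be mapped back toward the corona.

R_SUN_AU = 1.0 / 215.0          # solar radius in AU (approximate)

n_1au = 5.0                     # proton density at 1 AU (cm^-3), illustrative
v_1au = 400.0                   # bulk speed at 1 AU (km/s), illustrative
flux = n_1au * v_1au * 1.0**2   # conserved flux (r measured in AU)

r = 10 * R_SUN_AU               # map back to 10 solar radii
v_r = 200.0                     # assumed slower wind speed there (km/s)
n_r = flux / (v_r * r**2)
print(n_r)                      # density rises by roughly three orders of magnitude
```

    A velocity profile v(r) (rather than a single assumed value) is what the empirical models constrain; the sketch only shows the conservation law that ties the quantities together.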

  10. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε_0 of the base sample of volume V_0. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V_0, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate ε = 1/2[ε_h + ε_L] ± 1/2[ε_h - ε_L] (general), where the uncertainty Δε = 1/2[ε_h - ε_L] brackets the limits of the maximum possible error. Both ε_h and ε_L diverge from ε_0 as V deviates from V_0, causing Δε to increase accordingly. The above concepts guided the development of both conservative and refined estimates for ε
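    The bracketing scheme can be sketched as follows (the high/low bounds used here, no change at best and inverse-volume scaling at worst, are illustrative assumptions, not the actual extrapolation functions of the method):

```python
def efficiency_bracket(eff0, v0, v):
    """Bracket the efficiency for sample volume v, extrapolated from a
    single calibration at volume v0. Bounds are assumed for illustration:
    unchanged efficiency at best, inverse-volume scaling at worst."""
    if v >= v0:
        eff_h, eff_l = eff0, eff0 * v0 / v   # larger sample: efficiency drops
    else:
        eff_h, eff_l = eff0 * v0 / v, eff0   # smaller sample: efficiency rises
    eff = 0.5 * (eff_h + eff_l)              # central estimate
    d_eff = 0.5 * (eff_h - eff_l)            # maximum-possible-error bracket
    return eff, d_eff

eff, d_eff = efficiency_bracket(0.05, 100.0, 200.0)
print(eff, d_eff)  # 0.0375 +/- 0.0125 for these inputs
```

    As in the abstract, the bracket Δε widens as the new volume moves away from the calibrated one, which is what makes the estimate useful only for analyses that tolerate a stated maximum error.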

  11. Applying Brainstorming Techniques to EFL Classroom

    OpenAIRE

    Toshiya, Oishi; 湘北短期大学; aPart-time Lecturer at Shohoku College

    2015-01-01

    This paper focuses on brainstorming techniques for English language learners. From the author's teaching experiences at Shohoku College during the academic year 2014-2015, the importance of brainstorming techniques was made evident. The author explored three elements of brainstorming techniques for writing using literature reviews: lack of awareness, connecting to prior knowledge, and creativity. The literature reviews showed the advantage of using brainstorming techniques in an English compos...

  12. Strategies and techniques of communication and public relations applied to non-profit sector

    Directory of Open Access Journals (Sweden)

    Ioana – Julieta Josan

    2010-05-01

    Full Text Available The aim of this paper is to summarize the strategies and techniques of communication and public relations applied to the non-profit sector. The approach of the paper is to identify the most appropriate strategies and techniques that the non-profit sector can use to accomplish its objectives, to highlight specific differences between the strategies and techniques of the profit and non-profit sectors, and to identify potential communication and public relations actions that increase visibility among the target audience, create brand awareness and shift target perceptions of the non-profit sector towards a positive brand sentiment.

  13. Detectors for LEP: methods and techniques

    International Nuclear Information System (INIS)

    Fabjan, C.

    1979-01-01

    This note surveys detection methods and techniques of relevance to the LEP physics programme. The basic principles of the detector physics are sketched, as recent improvements in understanding point towards both improvements and limitations in performance. The development and present status of large detector systems are presented and permit some conservative extrapolations. State-of-the-art techniques and technologies are presented and their potential use in the LEP physics programme is assessed. (Auth.)

  14. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Morales P, R.

    1992-06-01

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmologic applicators (90Sr/90Y, 1850 MBq) is determined using a variable-electrode extrapolation chamber. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are observed to be underestimated by 17.7% to 20.4% relative to the estimates obtained with a second-degree polynomial regression model, while the standard error improves by up to 50% for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. It can be concluded that, in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
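    The effect described above can be reproduced numerically (synthetic current-vs-separation data with an assumed negative curvature; all values are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical extrapolation curve: current I (pA) vs plate separation d (mm),
# with a curvature term so that the linear model biases the zero-separation slope.
d = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
I = 12.0 * d - 0.8 * d**2   # true slope at d = 0 is 12

slope_lin = np.polyfit(d, I, 1)[0]     # linear model: single global slope
slope_quad = np.polyfit(d, I, 2)[1]    # quadratic model: dI/dd at d = 0

print(slope_lin, slope_quad)  # the linear fit underestimates the true slope
```

    Since the dose is proportional to the slope at zero separation, the biased linear slope translates directly into a biased dose, which is the paper's argument for the quadratic model in contact geometries.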

  15. 131I-SPGP internal dosimetry: animal model and human extrapolation

    International Nuclear Information System (INIS)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos; Figueiredo, Suely Gomes de

    2009-01-01

    Scorpaena plumieri, commonly called moreia-ati or manganga, is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP, a protein isolated from S. plumieri, possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the design of antitumoral drugs. In the present work, Soprani's 125I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to 131I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. The doses obtained for the animal model were extrapolated to humans assuming a similar ratio for the various mouse and human tissues; for the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I were considered. (author)
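    The organ-mass extrapolation described can be caricatured with a relative-mass scaling often used alongside the MIRD formalism; whether the record uses exactly this scaling is an assumption, and all masses and fractions below are illustrative:

```python
# Relative-organ-mass scaling: the fraction of injected activity in a human
# organ is assumed to scale with the organ-to-whole-body mass ratio.
MOUSE_BODY_G = 25.0      # typical mouse body mass (assumed)
HUMAN_BODY_G = 73000.0   # Cristy/Eckerman adult phantom, ~73 kg

def human_uptake_fraction(animal_fraction, m_organ_animal_g, m_organ_human_g):
    """Scale an organ uptake fraction measured in the animal to a human
    estimate via relative organ masses."""
    rel_human = m_organ_human_g / HUMAN_BODY_G
    rel_animal = m_organ_animal_g / MOUSE_BODY_G
    return animal_fraction * rel_human / rel_animal

# e.g. 10 % of injected activity in a 1.3 g mouse liver -> 1800 g human liver
print(round(human_uptake_fraction(0.10, 1.3, 1800.0), 4))
```

    The scaled uptake then feeds the usual MIRD dose calculation for penetrating and non-penetrating emissions.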

  16. Extrapolating Satellite Winds to Turbine Operating Heights

    DEFF Research Database (Denmark)

    Badger, Merete; Pena Diaz, Alfredo; Hahmann, Andrea N.

    2016-01-01

    Ocean wind retrievals from satellite sensors are typically performed for the standard level of 10 m. This restricts their full exploitation for wind energy planning, which requires wind information at much higher levels where wind turbines operate. A new method is presented for the vertical...... extrapolation of satellite-based wind maps. Winds near the sea surface are obtained from satellite data and used together with an adaptation of the Monin–Obukhov similarity theory to estimate the wind speed at higher levels. The thermal stratification of the atmosphere is taken into account through a long
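    The neutral-stability core of such a vertical extrapolation reduces to the logarithmic wind profile; the published method adds a Monin-Obukhov stability correction, omitted in this sketch, and the roughness length below is an assumed typical open-sea value:

```python
import math

def extrapolate_wind(u_ref, z_ref, z_target, z0=0.0002):
    """Extrapolate wind speed from z_ref to z_target (metres) using the
    neutral logarithmic profile u(z) = (u*/k) * ln(z/z0).  The friction
    velocity cancels in the ratio, so only the roughness length z0 (m)
    is needed.  Stability corrections psi(z/L) are omitted."""
    return u_ref * math.log(z_target / z0) / math.log(z_ref / z0)

# 8 m/s at the standard 10 m satellite level, extrapolated to 100 m hub height
print(round(extrapolate_wind(8.0, 10.0, 100.0), 2))
```

    In stable or unstable stratification the true profile departs from this logarithmic shape, which is why the thermal correction matters offshore.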

  17. Evaluation of functioning of an extrapolation chamber using Monte Carlo method

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Alfonso Laguardia, R.

    2015-01-01

    The extrapolation chamber is a parallel-plate chamber of variable volume based on the Bragg-Gray theory. It determines the absorbed dose in absolute mode, with high accuracy, by extrapolating the ionization current measured to a null distance between the electrodes. This chamber is used for the dosimetry of external beta rays in radiation protection. This paper presents a simulation for evaluating the functioning of a PTW type 23392 extrapolation chamber, using the MCNPX Monte Carlo code. In the simulation, the fluence in the air collector cavity of the chamber was obtained. The influence of the materials that compose the chamber on its response to a beta radiation beam was also analysed, and the contributions of primary and secondary radiation were compared. The energy deposition in the air collector cavity was calculated for different depths. The component with the highest energy deposition is the polymethyl methacrylate block; the energy deposition in the air collector cavity is greatest at a chamber depth of 2500 μm, with a value of 9.708E-07 MeV. The fluence in the air collector cavity decreases with depth; its value is 1.758E-04 1/cm² at a chamber depth of 500 μm. The values reported are for individual electron and photon histories. Graphics of the simulated parameters are presented in the paper. (Author)

  18. Extrapolations of nuclear binding energies from new linear mass relations

    DEFF Research Database (Denmark)

    Hove, D.; Jensen, A. S.; Riisager, K.

    2013-01-01

    We present a method to extrapolate nuclear binding energies from known values for neighboring nuclei. We select four specific mass relations constructed to eliminate the smooth variation of the binding energy as a function of the nucleon numbers. The fast odd-even variations are avoided by comparing nuclei
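    The flavour of such mass relations can be illustrated with a deliberately simple one (not one of the paper's four): linear extrapolation in neutron number at fixed Z, stepping by two neutrons so that odd-even pairing staggering cancels:

```python
def extrapolate_B(B, N, Z):
    """Estimate the binding energy B(N + 2, Z) from known values at
    (N, Z) and (N - 2, Z).  Stepping by two neutrons keeps the parity
    fixed, so fast odd-even variations drop out; smooth variation is
    removed only to first order in this caricature."""
    return 2 * B[(N, Z)] - B[(N - 2, Z)]

# Toy table of smooth "binding energies" (MeV), purely illustrative.
B = {(n, 50): 8.5 * (n + 50) - 0.01 * (n - 60) ** 2 for n in range(56, 66, 2)}
print(round(extrapolate_B(B, 62, 50), 2), round(B[(64, 50)], 2))
```

    On this smooth toy table the extrapolation lands within 0.1 MeV of the tabulated value; the paper's relations are built to cancel smooth variation more completely.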

  19. Improvement technique of sensitized HAZ by GTAW cladding applied to a BWR power plant

    International Nuclear Information System (INIS)

    Tujimura, Hiroshi; Tamai, Yasumasa; Furukawa, Hideyasu; Kurosawa, Kouichi; Chiba, Isao; Nomura, Keiichi.

    1995-01-01

    An SCC (stress corrosion cracking)-resistant technique was developed in which a sleeve installed by expansion is melted by a GTAW process without filler metal, with water cooling applied on the outside. The technique was applied to the ICM (in-core monitor) housings of a BWR power plant in 1993. The ICM housings, made of Type 304 stainless steel, are sensitized and carry high tensile residual stresses from welding to the RPV (reactor pressure vessel); as a result, they have a potential for SCC initiation, and a technique to improve their SCC resistance was needed. The technique improves the chemical composition of the housing inside and the residual stresses of the housing outside at the same time. Sensitization of the inner surface area is eliminated by replacing it with a low-carbon clad of proper ferrite microstructure, while the high tensile residual stresses of the outer surface area are shifted to the compressive side; the compressive stresses are induced by the thermal stresses caused by inside cladding with outside water cooling. The clad is required to be a low-carbon metal with proper ferrite content and must not leave a newly sensitized HAZ (heat-affected zone) on the surface after cladding. The effectiveness of the technique was qualified by SCC tests, chemical composition checks, ferrite content measurements, residual stress measurements, etc. All equipment for remote application was developed and qualified as well. The technique was successfully applied to a BWR plant after sufficient training

  20. Applying AI techniques to improve alarm display effectiveness

    International Nuclear Information System (INIS)

    Gross, J.M.; Birrer, S.A.; Crosberg, D.R.

    1987-01-01

    The Alarm Filtering System (AFS) addresses the problem of information overload in a control room during abnormal operations. Since operators can miss vital information during these periods, systems which emphasize important messages are beneficial. AFS uses the artificial intelligence (AI) technique of object-oriented programming to filter and dynamically prioritize alarm messages. When an alarm's status changes, AFS determines the relative importance of that change according to the current process state, basing it on the relationships the newly changed alarm has with other activated alarms. Evaluations of alarm importance take place without regard to the activation sequence of the alarm signals. The United States Department of Energy has applied for a patent on the approach used in this software. The approach was originally developed by EG&G Idaho for a nuclear reactor control room

  1. Projecting species' vulnerability to climate change: Which uncertainty sources matter most and extrapolate best?

    Science.gov (United States)

    Steen, Valerie; Sofaer, Helen R; Skagen, Susan K; Ray, Andrea J; Noon, Barry R

    2017-11-01

    Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs to represent a unique combination of: future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was that of future climate change. Large uncertainty was also attributed to climate covariate class with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given uncertainty in projected vulnerability and resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water

  2. Evaluation of extrapolation methods for actual state expenditures on health care in Russian Federation

    Directory of Open Access Journals (Sweden)

    S. A. Banin

    2016-01-01

    Full Text Available Forecasting methods, extrapolation methods in particular, are used in health care for medical, biological and clinical research. The author has not found a single publicly accessible publication devoted to the extrapolation of financial parameters of health care activities, which determines the relevance of the material presented in this article: based on the dynamics of health care financing in Russia in 2000–2010, the author examined the applicability of three basic forward extrapolation methods, namely moving average, exponential smoothing and least squares. It is hypothesized that all three methods can equally well forecast actual public expenditures on health care in the medium term under Russia's current financial and economic conditions. The results were evaluated over two time frames: within the studied interval and over a five-year period beyond it. It was found that within the study period all methods have an average relative extrapolation error of 3–5%, which means a highly precise forecast. The study revealed a specific feature of the least squares method: its results accumulate gradually, so their economic interpretation became possible only at the end of the studied period. For this reason the extrapolation results obtained by the least squares method are not applicable within the study period itself and there have rather theoretical value. Beyond the study period, however, this feature was found to correspond best to the real situation, and it was the least squares method that proved most appropriate for economic interpretation of the forecast results of actual public expenditures on health care. The hypothesis was not confirmed: the author obtained three differently directed results, each method having independent significance, with its application depending on the objectives of the evaluation study and the actual social, economic and financial situation in the Russian health care system.
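    The three methods compared in the record can be sketched on an invented expenditure series (the real series is Russia's 2000–2010 health care financing, not reproduced here):

```python
import numpy as np

x = np.array([100.0, 112.0, 121.0, 135.0, 148.0, 160.0, 174.0])  # toy series

def moving_average(series, window=3):
    """Forecast the next value as the mean of the last `window` points."""
    return series[-window:].mean()

def exponential_smoothing(series, alpha=0.5):
    """Simple exponential smoothing; the final smoothed level is the forecast."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def least_squares(series):
    """Fit a linear trend by least squares and extrapolate one step ahead."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return slope * len(series) + intercept

print(moving_average(x), exponential_smoothing(x), least_squares(x))
```

    Even on this well-behaved toy series the three forecasts disagree, echoing the record's conclusion that each method has independent significance.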

  3. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    Energy Technology Data Exchange (ETDEWEB)

    Kadoura, Ahmad; Sun, Shuyu, E-mail: shuyu.sun@kaust.edu.sa; Salama, Amgad

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring those of the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models was proposed for methane, nitrogen and carbon monoxide.
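    The reweighting half of the technique can be caricatured in a few lines. The sketch below reweights canonical-ensemble averages from one temperature to a neighbouring one in reduced units; reconstructing full Markov chains at the new conditions, the other half of the method, is not shown:

```python
import numpy as np

k_B = 1.0  # reduced (Lennard-Jones) units

def reweight_average(energies, observable, T0, T1):
    """Estimate <observable> at temperature T1 from samples generated at
    T0, using Boltzmann weights exp(-(beta1 - beta0) * E).  Shifting by
    the minimum energy stabilises the exponentials."""
    beta0, beta1 = 1.0 / (k_B * T0), 1.0 / (k_B * T1)
    w = np.exp(-(beta1 - beta0) * (energies - energies.min()))
    return np.sum(w * observable) / np.sum(w)
```

    With `T1 == T0` the weights are uniform and the plain sample mean is recovered; reweighting toward a lower temperature shifts the average toward low-energy configurations. The scheme only works for conditions close to the original ones, where the sampled energies still overlap the target distribution.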

  4. Accelerating Monte Carlo molecular simulations by reweighting and reconstructing Markov chains: Extrapolation of canonical ensemble averages and second derivatives to different temperature and density conditions

    KAUST Repository

    Kadoura, Ahmad Salim

    2014-08-01

    Accurate determination of thermodynamic properties of petroleum reservoir fluids is of great interest to many applications, especially in petroleum engineering and chemical engineering. Molecular simulation has many appealing features, especially its requirement of fewer tuned parameters yet better predictive capability; however, it is well known that molecular simulation is very CPU expensive compared to equation of state approaches. We have recently introduced an efficient thermodynamically consistent technique to rapidly regenerate Monte Carlo Markov Chains (MCMCs) at different thermodynamic conditions from existing data points that have been pre-computed with expensive classical simulation. This technique can speed up the simulation more than a million times, making the regenerated molecular simulation almost as fast as equation of state approaches. In this paper, this technique is first briefly reviewed and then numerically investigated in its capability of predicting ensemble averages of primary quantities at thermodynamic conditions neighboring those of the original simulated MCMCs. Moreover, this extrapolation technique is extended to predict second derivative properties (e.g. heat capacity and fluid compressibility). The method works by reweighting and reconstructing generated MCMCs in the canonical ensemble for Lennard-Jones particles. In this paper, the system's potential energy, pressure, isochoric heat capacity and isothermal compressibility along isochors, isotherms and paths of changing temperature and density from the original simulated points were extrapolated. Finally, an optimized set of Lennard-Jones parameters (ε, σ) for single site models was proposed for methane, nitrogen and carbon monoxide. © 2014 Elsevier Inc.

  5. Making the most of what we have: application of extrapolation approaches in wildlife transfer models

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway); Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria); Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)

    2014-07-01

    Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate if phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted {sup 137}Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. The application of allometric, or mass
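    The allometric branch of this work typically models transfer as a power law of organism mass; the form below is the standard one, but the coefficients are invented for illustration rather than taken from the record's REML analysis:

```python
def allometric_cr(mass_kg, a=2.5, b=-0.25):
    """Concentration ratio as a power law of body mass, CR = a * M**b.
    Both coefficients are hypothetical placeholders, not fitted values."""
    return a * mass_kg ** b

# e.g. extrapolating from small to large organisms of the same broad type
print(allometric_cr(0.1), allometric_cr(16.0))
```

    A negative exponent, as assumed here, makes the predicted concentration ratio fall with increasing body mass; the fitted sign and magnitude are an empirical matter.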

  6. 131I-CRTX internal dosimetry: animal model and human extrapolation

    International Nuclear Information System (INIS)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos

    2009-01-01

    Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. 125I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, 125I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumor) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice, as well as in the implanted tumor, were determined for 131I-Crtx. The dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among the various tissues between mouse and human; in the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I in the tissue were considered in the dose calculations. (author)

  7. Extrapolation of rate constants of reactions producing H2 and O2 in radiolysis of water at high temperatures

    International Nuclear Information System (INIS)

    Leblanc, R.; Ghandi, K.; Hackman, B.; Liu, G.

    2014-01-01

    One target of our research is to extrapolate known data on the rate constants of reactions, adding corrections to estimate the rate constants at the higher temperatures reached by SCWR reactors. The focus of this work was to extrapolate known data on the rate constants of reactions that produce hydrogen or oxygen with a rate constant below 10^10 mol^-1 s^-1 at room temperature. The extrapolation is done taking into account the change in the diffusion rate of the interacting species and the cage effect with thermodynamic conditions. The extrapolations are done over a wide temperature range and under isobaric conditions. (author)
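    A standard ingredient for such diffusion corrections (assumed here; the record's treatment also includes the cage effect) is the diffusion-limited rate constant obtained by combining the Smoluchowski and Stokes-Einstein relations, which makes the temperature dependence explicit through the solvent viscosity:

```python
R = 8.314  # gas constant, J mol^-1 K^-1

def k_diffusion(T_kelvin, eta_pa_s):
    """Diffusion-limited rate constant k = 8*R*T / (3*eta) for two
    similar-sized neutral reactants, converted to L mol^-1 s^-1."""
    return 8.0 * R * T_kelvin / (3.0 * eta_pa_s) * 1000.0  # m^3 -> L

# Water at 25 C (viscosity ~8.9e-4 Pa s) gives the familiar ~7e9 value;
# at higher temperatures the falling viscosity pushes k_diff up sharply.
print(f"{k_diffusion(298.0, 8.9e-4):.2e}")
```

    Under supercritical conditions both the viscosity and the water structure change drastically, which is why simple room-temperature extrapolation needs the corrections the record describes.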

  8. Performance of a prototype of an extrapolation minichamber in various radiation beams

    International Nuclear Information System (INIS)

    Oliveira, M.L.; Caldas, L.V.E.

    2007-01-01

    An extrapolation minichamber was developed for measuring doses from weakly penetrating types of radiation. The chamber was tested at the radiotherapeutic dose level in a beam from a 90Sr+90Y check source, in a beam from a plane 90Sr+90Y ophthalmic applicator, and in several reference beams from an X-ray tube. Saturation, ion collection efficiency, stabilization time, extrapolation curves, linearity of chamber response vs. air kerma rate, and dependences of the response on the energy and irradiation angle were characterized. The results are satisfactory; they show that the chamber can be used in the dosimetry of 90Sr+90Y beta particles and low-energy X-ray beams

  9. Windtunnel Rebuilding And Extrapolation To Flight At Transsonic Speed For ExoMars

    Science.gov (United States)

    Fertig, Markus; Neeb, Dominik; Gulhan, Ali

    2011-05-01

    The static as well as the dynamic behaviour of the ExoMars vehicle in the transonic velocity regime has been investigated experimentally by the Supersonic and Hypersonic Technology Department of DLR, in order to characterize the behaviour prior to parachute opening. Since the experimental work was performed in air, a numerical extrapolation to flight by means of CFD is necessary. At low supersonic speed this extrapolation to flight was performed by the Spacecraft Department of the Institute of Flow Technology of DLR employing the CFD code TAU. Numerical as well as experimental results for the wind tunnel test at Mach 1.2 are compared and discussed for three different angles of attack.

  10. Validation and qualification of surface-applied fibre optic strain sensors using application-independent optical techniques

    International Nuclear Information System (INIS)

    Schukar, Vivien G; Kadoke, Daniel; Kusche, Nadine; Münzenberger, Sven; Gründer, Klaus-Peter; Habel, Wolfgang R

    2012-01-01

    Surface-applied fibre optic strain sensors were investigated using a unique validation facility equipped with application-independent optical reference systems. First, different adhesives for the sensor's application were analysed regarding their material properties. Measurements resulting from conventional measurement techniques, such as thermo-mechanical analysis and dynamic mechanical analysis, were compared with measurements resulting from digital image correlation, which has the advantage of being a non-contact technique. Second, fibre optic strain sensors were applied to test specimens with the selected adhesives. Their strain-transfer mechanism was analysed in comparison with conventional strain gauges. Relative movements between the applied sensor and the test specimen were visualized easily using optical reference methods, digital image correlation and electronic speckle pattern interferometry. Conventional strain gauges showed limited opportunities for an objective strain-transfer analysis because they are also affected by application conditions. (paper)

  11. Comparative studies of parameters based on the most probable versus an approximate linear extrapolation distance estimates for circular cylindrical absorbing rod

    International Nuclear Information System (INIS)

    Wassef, W.A.

    1982-01-01

    Estimates and techniques that are valid for calculating the linear extrapolation distance for an infinitely long circular cylindrical absorbing region are reviewed. Two estimates in particular are considered: the most probable value, and the value resulting from an approximate technique based on matching the integral transport equation inside the absorber with the diffusion approximation in the surrounding infinite scattering medium. From these, the effective diffusion parameters and the blackness of the cylinder are derived and subjected to comparative studies. A computer code was set up to calculate and compare the different parameters; it is useful in reactor analysis and serves to establish reliable estimates that are amenable to direct application in reactor design codes

  12. Irradiated food: validity of extrapolating wholesomeness data

    International Nuclear Information System (INIS)

    Taub, I.A.; Angelini, P.; Merritt, C. Jr.

    1976-01-01

    Criteria are considered for validly extrapolating the conclusions reached on the wholesomeness of an irradiated food receiving high doses to the same food receiving a lower dose. A consideration first is made of the possible chemical mechanisms that could give rise to different functional dependences of radiolytic products on dose. It is shown that such products should increase linearly with dose and the ratio of products should be constant throughout the dose range considered. The assumption, generally accepted in pharmacology, then is made that if any adverse effects related to the food are discerned in the test animals, then the intensity of these effects would increase with the concentration of radiolytic products in the food. Lastly, the need to compare data from animal studies with foods irradiated to several doses against chemical evidence obtained over a comparable dose range is considered. It is concluded that if the products depend linearly on dose and if feeding studies indicate no adverse effects, then an extrapolation to lower doses is clearly valid. This approach is illustrated for irradiated codfish. The formation of selected volatile products in samples receiving between 0.1 and 3 Mrads was examined, and their concentrations were found to increase linearly at least up to 1 Mrad. These data were compared with results from animal feeding studies establishing the wholesomeness of codfish and haddock irradiated to 0.2, 0.6 and 2.8 Mrads. It is stated, therefore, that if ocean fish, currently under consideration for onboard processing, were irradiated to 0.1 Mrad, it would be correspondingly wholesome

  13. Experiences and extrapolations from Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Harwell, C.C.

    1985-01-01

    This paper examines the events following the atomic bombings of Hiroshima and Nagasaki in 1945 and extrapolates from these experiences to further understand the possible consequences of detonations on a local area from weapons in the current world nuclear arsenal. The first section deals with a report of the events that occurred in Hiroshima and Nagasaki just after the 1945 bombings with respect to the physical conditions of the affected areas, the immediate effects on humans, the psychological response of the victims, and the nature of outside assistance. Because there can be no experimental data to validate the effects on cities and their populations of detonations from current weapons, the data from the actual explosions on Hiroshima and Nagasaki provide a point of departure. The second section examines possible extrapolations from and comparisons with the Hiroshima and Nagasaki experiences. The limitations of drawing upon the Hiroshima and Nagasaki experiences are discussed. A comparison is made of the scale of effects from other major disasters for urban systems, such as damages from the conventional bombings of cities during World War II, the consequences of major earthquakes, the historical effects of the Black Plague and widespread famines, and other extreme natural events. The potential effects of detonating a modern 1 MT warhead on the city of Hiroshima as it exists today are simulated. This is extended to the local effects on a targeted city from a global nuclear war, and attention is directed to problems of estimating the societal effects from such a war

  14. Analytical techniques applied to study cultural heritage objects

    Energy Technology Data Exchange (ETDEWEB)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N., E-mail: rizzutto@if.usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica

    2015-07-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects, initially using ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded its studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analysis, mainly used to examine and document artistic and cultural heritage objects, is performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Further expanding the possibilities of analysis, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses provide valuable information on the manufacturing process and have yielded new information on objects from different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup

  15. Analytical techniques applied to study cultural heritage objects

    International Nuclear Information System (INIS)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N.

    2015-01-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects, initially using ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded its studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analysis, mainly used to examine and document artistic and cultural heritage objects, is performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Further expanding the possibilities of analysis, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses provide valuable information on the manufacturing process and have yielded new information on objects from different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup

  16. Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation

    Czech Academy of Sciences Publication Activity Database

    Csonka, G. I.; Kaminský, Jakub

    2011-01-01

    Roč. 7, č. 4 (2011), s. 988-997 ISSN 1549-9618 Institutional research plan: CEZ:AV0Z40550506 Keywords : MP2 * basis set extrapolation * saccharides Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 5.215, year: 2011

  17. Applying Metrological Techniques to Satellite Fundamental Climate Data Records

    Science.gov (United States)

    Woolliams, Emma R.; Mittaz, Jonathan PD; Merchant, Christopher J.; Hunt, Samuel E.; Harris, Peter M.

    2018-02-01

    Quantifying long-term environmental variability, including climatic trends, requires decadal-scale time series of observations. The reliability of such trend analysis depends on the long-term stability of the data record, and understanding the sources of uncertainty in historic, current and future sensors. We give a brief overview on how metrological techniques can be applied to historical satellite data sets. In particular we discuss the implications of error correlation at different spatial and temporal scales and the forms of such correlation and consider how uncertainty is propagated with partial correlation. We give a form of the Law of Propagation of Uncertainties that considers the propagation of uncertainties associated with common errors to give the covariance associated with Earth observations in different spectral channels.
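
    The form of the Law of Propagation of Uncertainties referred to above, extended to partially correlated errors, can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the sensitivity coefficients, uncertainties and correlation matrices below are invented examples.

```python
import numpy as np

def combined_uncertainty(sensitivities, uncertainties, correlation):
    """Law of Propagation of Uncertainties with correlated inputs:
    u_c^2 = c^T (S R S) c, where c holds the sensitivity coefficients,
    S = diag(u_i) and R is the input correlation matrix."""
    c = np.asarray(sensitivities, dtype=float)
    S = np.diag(np.asarray(uncertainties, dtype=float))
    cov = S @ np.asarray(correlation, dtype=float) @ S  # input covariance
    return float(np.sqrt(c @ cov @ c))

# Two spectral channels: independent noise adds in quadrature, while a
# common (fully correlated) calibration error adds linearly.
c, u = [1.0, 1.0], [0.5, 0.5]
print(combined_uncertainty(c, u, np.eye(2)))        # ~0.707 (quadrature)
print(combined_uncertainty(c, u, np.ones((2, 2))))  # 1.0 (linear addition)
```

    The same input covariance, evaluated across channels rather than within one, is what yields the inter-channel covariance of Earth observations mentioned in the abstract.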

  18. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
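
    The authors' model (a multi-stage system of difference equations with F1-sterility) is not given in the abstract, but the core mechanism of sterile-release suppression can be illustrated with Knipling's classic single-stage difference equation. This is a textbook sketch, not the model of the paper; the growth rate and release numbers are invented.

```python
def knipling_step(F, S, R):
    """One generation of the classic Knipling sterile-release model:
    a wild female population F with growth rate R is suppressed because
    only the fraction F / (F + S) of matings is fertile when S sterile
    males are released each generation."""
    return R * F * (F / (F + S))

# With releases above the critical overflooding ratio, the wild
# population collapses within a few generations.
F = 1000.0
for generation in range(5):
    F = knipling_step(F, S=9000.0, R=5.0)
print(F < 1.0)  # True
```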

  19. Extrapolation of vertical target motion through a brief visual occlusion.

    Science.gov (United States)

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.

  20. Extrapolation of ZPR sodium void measurements to the power reactor

    International Nuclear Information System (INIS)

    Beck, C.L.; Collins, P.J.; Lineberry, M.J.; Grasseschi, G.L.

    1976-01-01

    Sodium-voiding measurements of ZPPR assemblies 2 and 5 are analyzed with ENDF/B Version IV data. Computations include directional diffusion coefficients to account for streaming effects resulting from the plate structure of the critical assembly. Bias factors for extrapolating critical assembly data to the CRBR design are derived from the results of this analysis

  1. {sup 131}I-SPGP internal dosimetry: animal model and human extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br; Figueiredo, Suely Gomes de [Universidade Federal do Espirito Santo, (UFES), Vitoria, ES (Brazil). Dept. de Ciencias Fisiologicas. Lab. de Quimica de Proteinas

    2009-07-01

    Scorpaena plumieri, commonly called moreia-ati or manganga, is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP, a protein isolated from S. plumieri, possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the design of antitumoral drugs. In the present work, Soprani's {sup 125}I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to {sup 131}I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans assuming a similar ratio for various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I were considered. (author)
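
    The animal-to-human dose extrapolation described above (scaling by organ masses from the Cristy/Eckerman phantom) is often implemented as relative-organ-mass scaling of the percent injected dose. The sketch below is an assumption: it shows one common scaling rule, not necessarily the authors' exact procedure, and the organ and body masses are hypothetical round numbers.

```python
def human_uptake_from_animal(pid_animal, organ_m_a, body_m_a,
                             organ_m_h, body_m_h):
    """Relative-organ-mass scaling: the percent injected dose per organ
    in humans is taken as the animal value scaled by the ratio of
    organ-to-body mass fractions of the two species."""
    return pid_animal * (organ_m_h / body_m_h) / (organ_m_a / body_m_a)

# Hypothetical example: 10 %ID in a 1.5 g mouse liver (25 g mouse),
# scaled to a 1800 g liver in a 73 kg reference adult.
print(human_uptake_from_animal(10.0, 1.5, 25.0, 1800.0, 73000.0))
```

    The scaled uptake would then feed the MIRD dose equation together with the {sup 131}I S-values for the human phantom.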

  2. Satellite SAR interferometric techniques applied to emergency mapping

    Science.gov (United States)

    Stefanova Vassileva, Magdalena; Riccardi, Paolo; Lecci, Daniele; Giulio Tonolo, Fabio; Boccardo Boccardo, Piero; Chiesa, Giuliana; Angeluccetti, Irene

    2017-04-01

    This paper aims to investigate the capabilities of the currently available SAR interferometric algorithms in the field of emergency mapping. Several tests have been performed exploiting Copernicus Sentinel-1 data using the COTS software ENVI/SARscape 5.3. Emergency Mapping can be defined as the "creation of maps, geo-information products and spatial analyses dedicated to providing situational awareness, emergency management and immediate crisis information for response by means of extraction of reference (pre-event) and crisis (post-event) geographic information/data from satellite or aerial imagery". The conventional differential SAR interferometric technique (DInSAR) and the two currently available multi-temporal SAR interferometric approaches, i.e. Permanent Scatterer Interferometry (PSI) and Small BAseline Subset (SBAS), have been applied to provide crisis information useful for emergency management activities. Depending on the Emergency Management phase considered, a distinction may be drawn between rapid mapping, i.e. the fast provision of geospatial data on the affected area for immediate emergency response, and monitoring mapping, i.e. the detection of phenomena for risk prevention and mitigation activities. In order to evaluate the potential and limitations of the aforementioned SAR interferometric approaches for the specific rapid and monitoring mapping applications, five main factors have been taken into account: crisis information extracted, input data required, processing time and expected accuracy. The results highlight that DInSAR has the capacity to delineate areas affected by large and sudden deformations and fulfills most of the immediate response requirements. The main limiting factor of interferometry is the availability of a suitable SAR acquisition immediately after the event (e.g. the Sentinel-1 mission, characterized by a 6-day revisit time, may not always satisfy the immediate emergency request). PSI and SBAS techniques are suitable to produce

  3. The influence of an extrapolation chamber over the low energy X-ray beam radiation field

    Energy Technology Data Exchange (ETDEWEB)

    Tanuri de F, M. T.; Da Silva, T. A., E-mail: mttf@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Pampulha, Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    Extrapolation chambers are detectors whose sensitive volume can be modified by changing the distance between the electrodes; they have been widely used in primary measurement systems for beta particles. In this work, a Monte Carlo simulation of a PTW 23392 extrapolation chamber was performed by means of the MCNPX code. Although the sensitive volume of an extrapolation chamber can be reduced to a very small size, its packaging is large enough to modify the radiation field and change the measured absorbed dose values. Experiments were performed to calculate correction factors for this purpose. The validation of the Monte Carlo model was done by comparing the spectra obtained with a CdTe detector according to the ISO 4037 criteria. Agreement within 5% for half value layers, 10% for spectral resolution and 1% for mean energy was found. It was verified that the correction factors depend on the X-ray beam quality. (Author)

  4. The influence of an extrapolation chamber over the low energy X-ray beam radiation field

    International Nuclear Information System (INIS)

    Tanuri de F, M. T.; Da Silva, T. A.

    2016-10-01

    Extrapolation chambers are detectors whose sensitive volume can be modified by changing the distance between the electrodes; they have been widely used in primary measurement systems for beta particles. In this work, a Monte Carlo simulation of a PTW 23392 extrapolation chamber was performed by means of the MCNPX code. Although the sensitive volume of an extrapolation chamber can be reduced to a very small size, its packaging is large enough to modify the radiation field and change the measured absorbed dose values. Experiments were performed to calculate correction factors for this purpose. The validation of the Monte Carlo model was done by comparing the spectra obtained with a CdTe detector according to the ISO 4037 criteria. Agreement within 5% for half value layers, 10% for spectral resolution and 1% for mean energy was found. It was verified that the correction factors depend on the X-ray beam quality. (Author)

  5. Archaeometry: nuclear and conventional techniques applied to the archaeological research

    International Nuclear Information System (INIS)

    Esparza L, R.; Cardenas G, E.

    2005-01-01

    The book presented here consists of twelve articles that approach, from different perspectives, topics such as archaeological prospecting, the analysis of pre-Hispanic and colonial ceramics, obsidian and mural painting, as well as dating and questions about the ordering of data. Following the chronological order in which exploration techniques and laboratory studies are required, the texts on the systematic and detailed study of archaeological sites are presented first, followed by topics relating to the application of diverse nuclear techniques such as PIXE, RBS, XRD, NAA, SEM, Moessbauer spectroscopy and other conventional techniques. Multidisciplinarity is an aspect that stands out in this work, owing to the high specialization of the studies presented, which span archaeological work in the open field, including topography, mapping and excavation and, of course, laboratory tests. Most of the articles are the result of several years of investigation, as recorded under the responsibility of each article. The texts gathered here emphasize the technical aspects of each investigation: the modern computer systems applied to prospecting and archaeological mapping, and the chemical and physical analysis of organic materials, metal artifacts, diverse rocks used in the pre-Hispanic epoch, and mural and ceramic paintings, characteristics that justly underline the potential of these collective works. (Author)

  6. English Language Teachers' Perceptions on Knowing and Applying Contemporary Language Teaching Techniques

    Science.gov (United States)

    Sucuoglu, Esen

    2017-01-01

    The aim of this study is to determine the perceptions of English language teachers teaching at a preparatory school in relation to their knowing and applying contemporary language teaching techniques in their lessons. An investigation was conducted of 21 English language teachers at a preparatory school in North Cyprus. The SPSS statistical…

  7. Comparison of precipitation nowcasting by extrapolation and statistical-advection methods

    Czech Academy of Sciences Publication Activity Database

    Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Mejsnar, Jan

    2013-01-01

    Roč. 123, 1 April (2013), s. 17-30 ISSN 0169-8095 R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords : Precipitation forecast * Statistical models * Regression * Quantitative precipitation forecast * Extrapolation forecast Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 2.421, year: 2013 http://www.sciencedirect.com/science/article/pii/S0169809512003390

  8. Design for low dose extrapolation of carcinogenicity data. Technical report No. 24

    International Nuclear Information System (INIS)

    Wong, S.C.

    1979-06-01

    Parameters for modelling dose-response relationships in carcinogenesis models were found to be very complicated, especially for distinguishing low dose effects. The author concluded that extrapolation always bears the danger of providing misleading information

  9. Improving skill development: an exploratory study comparing a philosophical and an applied ethical analysis technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-09-01

    This exploratory study compares and contrasts two types of critical thinking techniques; one is a philosophical and the other an applied ethical analysis technique. The two techniques analyse an ethically challenging situation involving ICT that a recent media article raised to demonstrate their ability to develop the ethical analysis skills of ICT students and professionals. In particular the skill development focused on includes: being able to recognise ethical challenges and formulate coherent responses; distancing oneself from subjective judgements; developing ethical literacy; identifying stakeholders; and communicating ethical decisions made, to name a few.

  10. Determination of dose rates in beta radiation fields using extrapolation chamber and GM counter

    International Nuclear Information System (INIS)

    Borg, J.; Christensen, P.

    1995-01-01

    The extrapolation chamber measurement method is the basic method for the determination of dose rates in beta radiation fields and the method has been used for the establishment of beta calibration fields. The paper describes important details of the method and presents results from the measurements of depth-dose profiles from different beta radiation fields with E max values down to 156 keV. Results are also presented from studies of GM counters for use as survey instruments for monitoring beta dose rates at the workplace. Advantages of GM counters are a simple measurement technique and high sensitivity. GM responses were measured from exposures in different beta radiation fields using different filters in front of the GM detector and the paper discusses the possibility of using the results from GM measurements with two different filters in an unknown beta radiation field to obtain a value of the dose rate. (Author)
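
    The heart of the extrapolation chamber method is a straight-line fit of ionization current against electrode gap, whose limiting slope yields the dose rate. A minimal sketch, with hypothetical readings, an assumed 1 cm^2 collecting area and standard air constants:

```python
import numpy as np

# Hypothetical chamber readings: current I (pA) at several gaps d (mm).
gaps_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
currents_pA = np.array([0.52, 1.01, 1.49, 2.03, 2.51])

# Ideally I grows linearly with d, so a linear fit gives dI/dd; the
# intercept absorbs gap-independent offsets.
slope, intercept = np.polyfit(gaps_mm, currents_pA, 1)  # pA per mm

# Absorbed dose rate to air: D = (W/e) * (dI/dd) / (rho_air * A)
W_over_e = 33.97              # J/C, mean energy per ion pair in air
rho_air = 1.205               # kg/m^3 at 20 C, 101.3 kPa
area_m2 = 1e-4                # assumed 1 cm^2 collecting electrode
dI_dd = slope * 1e-12 / 1e-3  # convert pA/mm to A/m
dose_rate_Gy_s = W_over_e * dI_dd / (rho_air * area_m2)
print(dose_rate_Gy_s)
```

    In practice, further correction factors (beam divergence, backscatter, tissue-to-air stopping power ratio) multiply this result before it is quoted as a tissue dose rate.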

  11. Evaluation of Economic Merger Control Techniques Applied to the European Electricity Sector

    International Nuclear Information System (INIS)

    Vandezande, Leen; Meeus, Leonardo; Delvaux, Bram; Van Calster, Geert; Belmans, Ronnie

    2006-01-01

    With European electricity markets not yet functioning on a competitive basis and consolidation increasing, the European Commission has said it intends to more intensively apply competition law in the electricity sector. Yet economic techniques and theories used in EC merger control fail to take sufficiently into account some specific features of electricity markets. The authors offer suggestions to enhance their reliability and applicability in the electricity sector. (author)

  12. Applying traditional signal processing techniques to social media exploitation for situational understanding

    Science.gov (United States)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  13. Enhanced Confinement Scenarios Without Large Edge Localized Modes in Tokamaks: Control, Performance, and Extrapolability Issues for ITER

    Energy Technology Data Exchange (ETDEWEB)

    Maingi, R [PPPL

    2014-07-01

    Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. The two baseline strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R & D. In addition, recent progress in ELM-free regimes, namely Quiescent H-mode, I-mode, and Enhanced Pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.

  14. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  15. {sup 131}I-CRTX internal dosimetry: animal model and human extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br

    2009-07-01

    Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. {sup 125}I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, {sup 125}I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice, as well as in the implanted tumor, were determined for {sup 131}I-Crtx. The dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I in the tissue were considered in the dose calculations. (author)

  16. Evaluating In Vitro-In Vivo Extrapolation of Toxicokinetics.

    Science.gov (United States)

    Wambaugh, John F; Hughes, Michael F; Ring, Caroline L; MacMillan, Denise K; Ford, Jermaine; Fennell, Timothy R; Black, Sherry R; Snyder, Rodney W; Sipes, Nisha S; Wetmore, Barbara A; Westerhout, Joost; Setzer, R Woodrow; Pearce, Robert G; Simmons, Jane Ellen; Thomas, Russell S

    2018-05-01

    Prioritizing the risk posed by thousands of chemicals potentially present in the environment requires exposure, toxicity, and toxicokinetic (TK) data, which are often unavailable. Relatively high throughput, in vitro TK (HTTK) assays and in vitro-to-in vivo extrapolation (IVIVE) methods have been developed to predict TK, but most of the in vivo TK data available to benchmark these methods are from pharmaceuticals. Here we report on new, in vivo rat TK experiments for 26 non-pharmaceutical chemicals with environmental relevance. Both intravenous and oral dosing were used to calculate bioavailability. These chemicals, and an additional 19 chemicals (including some pharmaceuticals) from previously published in vivo rat studies, were systematically analyzed to estimate in vivo TK parameters (e.g., volume of distribution [Vd], elimination rate). For each of the chemicals, rat-specific HTTK data were available and key TK predictions were examined: oral bioavailability, clearance, Vd, and uncertainty. For the non-pharmaceutical chemicals, predictions for bioavailability were not effective. While no pharmaceutical was absorbed at less than 10%, the fraction bioavailable for non-pharmaceutical chemicals was as low as 0.3%. Total clearance was generally more underestimated for non-pharmaceuticals, and Vd methods calibrated to pharmaceuticals may not be appropriate for other chemicals. However, the steady-state, peak, and time-integrated plasma concentrations of non-pharmaceuticals were predicted with reasonable accuracy. The plasma concentration predictions improved when experimental measurements of bioavailability were incorporated. In summary, HTTK and IVIVE methods are adequately robust to be applied to high throughput in vitro toxicity screening data of environmentally relevant chemicals for prioritizing based on human health risks.
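
    The basic one-compartment relations behind the TK quantities compared above (oral bioavailability from paired IV/oral dosing, steady-state plasma concentration, and elimination half-life from Vd and clearance) can be written in a few lines. This is a textbook sketch of the standard definitions, not the estimation workflow of the study.

```python
import math

def bioavailability(auc_oral, dose_oral, auc_iv, dose_iv):
    """Oral bioavailability from paired dosing:
    F = (AUC_oral / dose_oral) / (AUC_iv / dose_iv)."""
    return (auc_oral / dose_oral) / (auc_iv / dose_iv)

def steady_state_conc(dose_rate, clearance):
    """One-compartment steady state: Css = k0 / CL."""
    return dose_rate / clearance

def half_life(vd, clearance):
    """Elimination half-life: t_half = ln(2) * Vd / CL."""
    return math.log(2) * vd / clearance

# Hypothetical chemical: equal 10 mg/kg doses, oral AUC half the IV AUC.
print(bioavailability(50.0, 10.0, 100.0, 10.0))  # 0.5
```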

  17. Applied research on air pollution using nuclear-related analytical techniques

    International Nuclear Information System (INIS)

    1994-01-01

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which will run from 1992-1996, and will build upon the experience gained by the Agency from the laboratory support that it has been providing for several years to BAPMoN - the Background Air Pollution Monitoring Network programme organized under the auspices of the World Meteorological Organization. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in suspended particulate matter (including air filter samples), rainwater and fog-water samples, and in biological indicators of air pollution (e.g. lichens and mosses). The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for practically-oriented research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural areas). This document reports the discussions held during the first Research Co-ordination Meeting (RCM) for the CRP, which took place at IAEA Headquarters in Vienna. Refs, figs and tabs

  18. Renormalization techniques applied to the study of density of states in disordered systems

    International Nuclear Information System (INIS)

    Ramirez Ibanez, J.

    1985-01-01

    A general scheme for real space renormalization of formal scattering theory is presented and applied to the calculation of density of states (DOS) in some finite width systems. This technique is extended in a self-consistent way, to the treatment of disordered and partially ordered chains. Numerical results of moments and DOS are presented in comparison with previous calculations. In addition, a self-consistent theory for the magnetic order problem in a Hubbard chain is derived and a parametric transition is observed. Properties of localization of the electronic states in disordered chains are studied through various decimation averaging techniques and using numerical simulations. (author) [pt

  19. Extrapolation of rate constants of reactions producing H{sub 2} and O{sub 2} in radiolysis of water at high temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, R.; Ghandi, K.; Hackman, B.; Liu, G. [Mount Allison Univ., Sackville, NB (Canada)

    2014-07-01

    One target of our research is to extrapolate known data on the rate constants of reactions and to add corrections so as to estimate the rate constants at the higher temperatures reached in SCWR reactors. The focus of this work was to extrapolate known data on the rate constants of reactions that produce hydrogen or oxygen with a rate constant below 10{sup 10} mol{sup -1} s{sup -1} at room temperature. The extrapolation is done taking into account the change with thermodynamic conditions of the diffusion rate of the interacting species and of the cage effect. The extrapolations are done over a wide temperature range and under isobaric conditions. (author)
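
    One common way to assemble such an extrapolation, assuming Arrhenius behavior for the activation-controlled part, Stokes-Einstein (T/viscosity) scaling for the diffusion-controlled limit, and the Noyes relation to mix the two, can be sketched as follows. All numerical inputs in the example are hypothetical, and this is not the authors' specific correction scheme.

```python
import math

R_GAS = 8.314  # J/(mol K)

def arrhenius(k_ref, Ea, T_ref, T):
    """Extrapolate an activation-controlled rate constant from T_ref to T."""
    return k_ref * math.exp(-(Ea / R_GAS) * (1.0 / T - 1.0 / T_ref))

def diffusion_limited(k_diff_ref, T_ref, T, eta_ref, eta):
    """Stokes-Einstein scaling of a diffusion-controlled limit:
    k_diff scales as T / eta (absolute temperature over viscosity)."""
    return k_diff_ref * (T / T_ref) * (eta_ref / eta)

def observed_rate(k_act, k_diff):
    """Noyes relation: 1/k_obs = 1/k_act + 1/k_diff."""
    return 1.0 / (1.0 / k_act + 1.0 / k_diff)

# Hypothetical: k = 1e8 /(M s) at 298 K with Ea = 15 kJ/mol, extrapolated
# to 573 K against a diffusion limit of 1e10 /(M s) at 298 K.
k_act = arrhenius(1e8, 15e3, 298.0, 573.0)
k_diff = diffusion_limited(1e10, 298.0, 573.0, 0.89e-3, 0.07e-3)
print(observed_rate(k_act, k_diff))
```

    The cage effect enters through the diffusion-controlled term, which dominates whenever the extrapolated Arrhenius value approaches the diffusion limit.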

  20. Flavor extrapolation in lattice QCD

    International Nuclear Information System (INIS)

    Duffy, W.C.

    1984-01-01

    Explicit calculation of the effect of virtual quark-antiquark pairs in lattice QCD has eluded researchers. To include their effect explicitly one must calculate the determinant of the fermion-fermion coupling matrix. Owing to the large number of sites in a continuum limit size lattice, direct evaluation of this term requires an unrealistic amount of computer time. The effect of the virtual pairs can be approximated by ignoring this term and adjusting lattice couplings to reproduce experimental results. This procedure is called the valence approximation since it ignores all but the minimal number of quarks needed to describe hadrons. In this work the effect of the quark-antiquark pairs has been incorporated in a theory with an effective negative number of quark flavors contributing to the closed loops. Various particle masses and decay constants have been calculated for this theory and for one with no virtual pairs. The author attempts to extrapolate results towards positive numbers of quark flavors. The results show approximate agreement with experimental measurements and demonstrate the smoothness of lattice expectations in the number of quark flavors

  1. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin

    2014-05-01

    Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.

  2. Assessing ecological effects of radionuclides: data gaps and extrapolation issues

    International Nuclear Information System (INIS)

    Garnier-Laplace, Jacqueline; Gilek, Michael; Sundell-Bergman, Synnoeve; Larsson, Carl-Magnus

    2004-01-01

    By inspection of the FASSET database on radiation effects on non-human biota, one of the major difficulties in the implementation of ecological risk assessments for radioactive pollutants is found to be the lack of data for chronic low-level exposure. A critical review is provided of a number of extrapolation issues that arise in undertaking an ecological risk assessment: acute versus chronic exposure regime; radiation quality including relative biological effectiveness and radiation weighting factors; biological effects from an individual to a population level, including radiosensitivity and lifestyle variations throughout the life cycle; single radionuclide versus multi-contaminants. The specificities of the environmental situations of interest (mainly chronic low-level exposure regimes) emphasise the importance of reproductive parameters governing the demography of the population within a given ecosystem and, as a consequence, the structure and functioning of that ecosystem. As an operational conclusion to keep in mind for any site-specific risk assessment, the present state-of-the-art on extrapolation issues allows us to grade the magnitude of the uncertainties as follows: one species to another > acute to chronic = external to internal = mixture of stressors > individual to population > ecosystem structure to function.

  3. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali

    2014-01-01

    The wavefield extrapolation operator for elliptically anisotropic media offers a significant cost reduction compared with that for transversely isotropic (TI) media, especially when the medium exhibits a tilted symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal equation provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Although the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost-versus-accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.

  4. Applying Evolutionary Genetics to Developmental Toxicology and Risk Assessment

    Science.gov (United States)

    Leung, Maxwell C. K.; Procter, Andrew C.; Goldstone, Jared V.; Foox, Jonathan; DeSalle, Robert; Mattingly, Carolyn J.; Siddall, Mark E.; Timme-Laragy, Alicia R.

    2018-01-01

    Evolutionary thinking continues to challenge our views on health and disease. Yet, there is a communication gap between evolutionary biologists and toxicologists in recognizing the connections among developmental pathways, high-throughput screening, and birth defects in humans. To increase our capability in identifying potential developmental toxicants in humans, we propose to apply evolutionary genetics to improve the experimental design and data interpretation with various in vitro and whole-organism models. We review five molecular systems of stress response and update 18 consensual cell-cell signaling pathways that are the hallmark for early development, organogenesis, and differentiation; and revisit the principles of teratology in light of recent advances in high-throughput screening, big data techniques, and systems toxicology. Multiscale systems modeling plays an integral role in the evolutionary approach to cross-species extrapolation. Phylogenetic analysis and comparative bioinformatics are both valuable tools in identifying and validating the molecular initiating events that account for adverse developmental outcomes in humans. The discordance of susceptibility between test species and humans (ontogeny) reflects their differences in evolutionary history (phylogeny). This synthesis not only can lead to novel applications in developmental toxicity and risk assessment, but also can pave the way for applying an evo-devo perspective to the study of developmental origins of health and disease. PMID:28267574

  5. MULTIVARIATE TECHNIQUES APPLIED TO EVALUATION OF LIGNOCELLULOSIC RESIDUES FOR BIOENERGY PRODUCTION

    Directory of Open Access Journals (Sweden)

    Thiago de Paula Protásio

    2013-12-01

    Full Text Available http://dx.doi.org/10.5902/1980509812361 The evaluation of lignocellulosic wastes for bioenergy production demands the consideration of several characteristics and properties that may be correlated. This fact demands the use of multivariate analysis techniques that allow the evaluation of relevant energetic factors. This work aimed to apply cluster analysis and principal components analysis for the selection and evaluation of lignocellulosic wastes for bioenergy production. Eight types of residual biomass were used, for which the elemental composition (C, H, O, N, S), lignin, total extractives and ash contents, basic density, and higher and lower heating values were determined. Both multivariate techniques applied for the evaluation and selection of lignocellulosic wastes were efficient, and similarities were observed between the biomass groups formed by them. Through the interpretation of the first principal component obtained, it was possible to create a global development index for the evaluation of the viability of energetic uses of biomass. The interpretation of the second principal component allowed a contrast between nitrogen and sulfur contents and oxygen content.
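
    As an illustration of the workflow above, the numpy-only sketch below runs a principal components analysis on a synthetic 8 × 5 property matrix (the study's residue data are not reproduced here, so the values are random placeholders) and builds a composite index from the first component scores, analogous to the global index the abstract describes.

```python
import numpy as np

# Synthetic property matrix: rows = residues, columns = measured
# properties (e.g. C, H, O contents, lignin, higher heating value).
# Values are illustrative only, not from the study.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))

# Standardize, then obtain principal components from the SVD.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T               # coordinates of each residue on the PCs
explained = s**2 / np.sum(s**2)  # fraction of variance per component

# A composite index built from the first PC scores, analogous to the
# global development index described in the abstract.
index = scores[:, 0]
```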

  6. A generalized sound extrapolation method for turbulent flows

    Science.gov (United States)

    Zhong, Siyang; Zhang, Xin

    2018-02-01

    Sound extrapolation methods are often used to compute acoustic far-field directivities using near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). A convection operator is used to filter out the acoustically inefficient components in the turbulent flows, and an acoustics-dominant indirect variable Dcp′ is solved. The sound pressure p′ at the far field is computed from Dcp′ based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications: the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex-shedding noise from a cylinder, the off-body predictions by the proposed method match well with the on-body Ffowcs Williams and Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method may be a potential candidate for sound projection in aeroacoustics applications.

  7. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro

    Energy Technology Data Exchange (ETDEWEB)

    Manwaring, John, E-mail: manwaring.jd@pg.com [Procter & Gamble Inc., Mason Business Center, Mason, OH 45040 (United States); Rothe, Helga [Procter & Gamble Service GmbH, Sulzbacher Str. 40, 65823 Schwalbach am Taunus (Germany); Obringer, Cindy; Foltz, David J.; Baker, Timothy R.; Troutman, John A. [Procter & Gamble Inc., Mason Business Center, Mason, OH 45040 (United States); Hewitt, Nicola J. [SWS, Erzhausen (Germany); Goebel, Carsten [Procter & Gamble Service GmbH, Sulzbacher Str. 40, 65823 Schwalbach am Taunus (Germany)

    2015-09-01

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as it offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte K_m and V_max values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and C_max was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human
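
    The toxicokinetic chain described above (intrinsic clearance from K_m/V_max, scaling by hepatocellularity and liver mass, the well-stirred liver model, then AUC from the internal dose) can be sketched as follows. Every numeric parameter is an assumed placeholder, not a value from the study; only the sequence of formulas follows the abstract's workflow.

```python
# Hypothetical parameter values; unit bookkeeping is simplified so the
# scaled result can be read directly in L/min.
vmax = 200.0                # pmol/min per 1e6 hepatocytes (assumed)
km = 50.0                   # uM (assumed)
hepatocellularity = 120e6   # cells per g liver (assumed)
liver_mass = 1800.0         # g (assumed)
q_h = 1.5                   # hepatic blood flow, L/min (assumed)
fu = 0.3                    # fraction unbound in blood (assumed)

# Intrinsic clearance at low substrate concentration: CLint = Vmax / Km,
# scaled from per-cell units to the whole liver.
clint_per_cell = vmax / km                                   # uL/min per 1e6 cells
clint = clint_per_cell * (hepatocellularity / 1e6) * liver_mass * 1e-6  # L/min

# Well-stirred liver model for hepatic clearance.
cl_h = q_h * fu * clint / (q_h + fu * clint)

# Systemic exposure from the internal (dermally absorbed) dose.
internal_dose = 0.5          # mg (assumed)
auc = internal_dose / cl_h   # mg*min/L
```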

  8. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro

    International Nuclear Information System (INIS)

    Manwaring, John; Rothe, Helga; Obringer, Cindy; Foltz, David J.; Baker, Timothy R.; Troutman, John A.; Hewitt, Nicola J.; Goebel, Carsten

    2015-01-01

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as it offers an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In a next step, in vitro hepatocyte K_m and V_max values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and C_max was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human skin explants and

  9. Testing an extrapolation chamber in computed tomography standard beams

    Science.gov (United States)

    Castro, M. C.; Silva, N. F.; Caldas, L. V. E.

    2018-03-01

    Computed tomography (CT) is responsible for the highest dose values delivered to patients; therefore, the radiation doses in this procedure must be determined accurately. However, there is no primary standard system for this kind of radiation beam yet. In the search for a CT primary standard, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) of the Instituto de Pesquisas Energéticas e Nucleares (IPEN) was tested in this work. The results were within the internationally recommended limits.

  10. Failure of the straight-line DCS boundary when extrapolated to the hypobaric realm.

    Science.gov (United States)

    Conkin, J; Van Liew, H D

    1992-11-01

    The lowest pressure (P2) to which a diver can ascend without developing decompression sickness (DCS) after becoming equilibrated at some higher pressure (P1) is described by a straight line with a negative y-intercept. We tested whether extrapolation of such a line also predicts safe decompression to altitude. We substituted tissue nitrogen pressure (P1N2) calculated for a compartment with a 360-min half-time for P1 values; this allows data from hypobaric exposures to be plotted on a P2 vs. P1N2 graph, even if the subject breathes oxygen before ascent. In literature sources, we found 40 reports of human exposures in hypobaric chambers that fell in the region of a P2 vs. P1N2 plot where the extrapolation from hyperbaric data predicted that the decompression should be free of DCS. Of 4,576 exposures, 785 persons suffered decompression sickness (17%), indicating that extrapolation of the diver line to altitude is not valid. Over the pressure range spanned by human hypobaric exposures and hyperbaric air exposures, the best separation between no DCS and DCS on a P2 vs. P1N2 plot seems to be a curve which approximates a straight line in the hyperbaric region but bends toward the origin in the hypobaric region.
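
    The 360-min compartment used for P1N2 above follows standard single-compartment exponential uptake/washout kinetics. A minimal sketch (illustrative pressures, not the paper's dataset):

```python
import math

def tissue_n2(p_alveolar, p_initial, minutes, half_time=360.0):
    """Single-compartment exponential gas kinetics:
    P(t) = Pa + (P0 - Pa) * exp(-k * t), with k = ln(2) / half-time."""
    k = math.log(2.0) / half_time
    return p_alveolar + (p_initial - p_alveolar) * math.exp(-k * minutes)

# Equilibrated at sea level: alveolar N2 ~ 0.79 * (760 - 47) mmHg
# (a common approximation; values here are illustrative).
p0 = 0.79 * (760.0 - 47.0)

# Breathing 100% O2 for 3 h before ascent washes nitrogen out of the
# slow compartment, lowering the P1N2 plotted against P2.
p1n2 = tissue_n2(0.0, p0, 180.0)
```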

  11. Free magnetic energy and relative magnetic helicity diagnostics for the quality of NLFF field extrapolations

    Science.gov (United States)

    Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.

    We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration. This research has been co-financed by the European Union (European Social Fund - ESF) and Greek national funds through the Operational Program "Education and Lifelong Learning" of the National Strategic Reference Framework (NSRF) - Research Funding Program: Thales. Investing in knowledge society through the European Social Fund.

  12. Application of the largest Lyapunov exponent and non-linear fractal extrapolation algorithm to short-term load forecasting

    International Nuclear Information System (INIS)

    Wang Jianzhou; Jia Ruiling; Zhao Weigang; Wu Jie; Dong Yao

    2012-01-01

    Highlights: ► The maximal predictive step size is determined by the largest Lyapunov exponent. ► A proper forecasting step size is applied to load demand forecasting. ► The improved approach is validated by the actual load demand data. ► Non-linear fractal extrapolation method is compared with three forecasting models. ► Performance of the models is evaluated by three different error measures. - Abstract: Precise short-term load forecasting (STLF) plays a key role in unit commitment, maintenance and economic dispatch problems. Employing a subjective and arbitrary predictive step size is one of the most important factors causing the low forecasting accuracy. To solve this problem, the largest Lyapunov exponent is adopted to estimate the maximal predictive step size so that the step size in the forecasting is no more than this maximal one. In addition, in this paper a seldom used forecasting model, which is based on the non-linear fractal extrapolation (NLFE) algorithm, is considered to develop the accuracy of predictions. The suitability and superiority of the two solutions are illustrated through an application to real load forecasting using New South Wales electricity load data from the Australian National Electricity Market. Meanwhile, three forecasting models: the gray model, the seasonal autoregressive integrated moving average approach and the support vector machine method, which received high approval in STLF, are selected to compare with the NLFE algorithm. Comparison results also show that the NLFE model is outstanding, effective, practical and feasible.
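
    The role of the largest Lyapunov exponent as a bound on the predictive step size can be illustrated on a textbook chaotic system. The logistic map below stands in for the load series (an assumption for illustration only); its exponent is estimated as the orbit average of ln|f'(x)|, and a common heuristic takes the useful prediction horizon to be on the order of 1/λ steps.

```python
import math

def largest_lyapunov_logistic(r=4.0, n=100000, x0=0.3):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of ln|f'(x)| = ln|r*(1-2x)|.
    For r = 4 the exact value is ln(2)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n

lam = largest_lyapunov_logistic()
max_step = 1.0 / lam   # heuristic maximal predictive horizon, in steps
```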

  13. Edge database analysis for extrapolation to ITER

    International Nuclear Information System (INIS)

    Shimada, M.; Janeschitz, G.; Stambaugh, R.D.

    1999-01-01

    An edge database has been archived to facilitate cross-machine comparisons of SOL and edge pedestal characteristics, and to enable comparison with theoretical models with an aim to extrapolate to ITER. The SOL decay lengths of power, density and temperature become broader for increasing density and q95. The power decay length is predicted to be 1.4-3.5 cm (L-mode) and 1.4-2.7 cm (H-mode) at the midplane in ITER. Analysis of Type I ELMs suggests that each giant ELM on ITER would exceed the ablation threshold of the divertor plates. Theoretical models are proposed for the H-mode transition and for Type I and Type III ELMs, and are compared with the edge pedestal database. (author)

  14. Source‐receiver two‐way wave extrapolation for prestack exploding‐reflector modeling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali; Fomel, Sergey

    2010-01-01

    While most modern seismic imaging methods perform imaging by separating input data into parts (shot gathers), we develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield backward in time. While computationally intensive, this approach has the potential of generating accurate images, free of artifacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver-time domain. The fourth-order nature of the extrapolation in time yields four solutions, two of which correspond to the ingoing and outgoing P-waves; the formulation reduces to the zero-offset exploding-reflector solution when the source coincides with the receiver. Using asymptotic approximations, we develop an approach to extrapolating the full prestack wavefield forward or backward in time.

  15. Source‐receiver two‐way wave extrapolation for prestack exploding‐reflector modeling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali

    2010-10-17

    While most modern seismic imaging methods perform imaging by separating input data into parts (shot gathers), we develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield backward in time. While computationally intensive, this approach has the potential of generating accurate images, free of artifacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver-time domain. The fourth-order nature of the extrapolation in time yields four solutions, two of which correspond to the ingoing and outgoing P-waves; the formulation reduces to the zero-offset exploding-reflector solution when the source coincides with the receiver. Using asymptotic approximations, we develop an approach to extrapolating the full prestack wavefield forward or backward in time.
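
    Extrapolating a wavefield in time is, at its core, a time-stepping recursion. The minimal second-order acoustic sketch below (isotropic constant velocity, illustrative grid parameters; not the authors' fourth-order source-receiver formulation) shows the basic mechanics of stepping a field forward from two known time levels.

```python
import numpy as np

# Second-order-in-time, second-order-in-space acoustic wave equation:
# u(t+dt) = 2u(t) - u(t-dt) + (v*dt/dx)^2 * Laplacian(u).
nx = nz = 101
dx = 10.0     # m
dt = 0.001    # s
v = 2000.0    # m/s; CFL number v*dt/dx = 0.2 < 1/sqrt(2), so stable
c2 = (v * dt / dx) ** 2

prev = np.zeros((nz, nx))
curr = np.zeros((nz, nx))
curr[50, 50] = 1.0   # impulsive source at the grid center

for _ in range(100):
    # Five-point Laplacian; np.roll imposes periodic boundaries, which
    # is harmless here because the wavefront never reaches the edge.
    lap = (-4.0 * curr
           + np.roll(curr, 1, axis=0) + np.roll(curr, -1, axis=0)
           + np.roll(curr, 1, axis=1) + np.roll(curr, -1, axis=1))
    nxt = 2.0 * curr - prev + c2 * lap
    prev, curr = curr, nxt
```

    Running the same recursion with reversed time levels extrapolates the field backward, which is the sense in which recorded data are "propagated backward in time" above.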

  16. Investigation of the shear bond strength to dentin of universal adhesives applied with two different techniques

    Directory of Open Access Journals (Sweden)

    Elif Yaşa

    2017-09-01

    Full Text Available Objective: The aim of this study was to evaluate the shear bond strength to dentin of universal adhesives applied with self-etch and etch&rinse techniques. Materials and Method: Forty-eight sound extracted human third molars were used in this study. Occlusal enamel was removed in order to expose the dentinal surface, and the surface was flattened. Specimens were randomly divided into four groups and were sectioned vestibulo-lingually using a diamond disc. The universal adhesives All Bond Universal (Groups 1a and 1b), Gluma Bond Universal (Groups 2a and 2b) and Single Bond Universal (Groups 3a and 3b) were applied onto the tooth specimens either with the self-etch technique (a) or with the etch&rinse technique (b) according to the manufacturers' instructions. Clearfil SE Bond (Group 4a; self-etch) and Optibond FL (Group 4b; etch&rinse) were used as control groups. The specimens were then restored with a nanohybrid composite resin (Filtek Z550). After thermocycling, the shear bond strength test was performed with a universal test machine at a crosshead speed of 0.5 mm/min. Fracture analysis was done under a stereomicroscope (×40 magnification). Data were analyzed using two-way ANOVA and post-hoc Tukey tests. Results: Statistical analysis showed significant differences in shear bond strength values between the universal adhesives (p<0.05). Significantly higher bond strength values were observed in the self-etch groups (a) in comparison to the etch&rinse groups (b) (p<0.05). Among all groups, Single Bond Universal showed the greatest shear bond strength values, whereas All Bond Universal showed the lowest with both application techniques. Conclusion: Dentin bond strengths of universal adhesives applied with different techniques may vary depending on the adhesive material. For the universal bonding agents tested in this study, the etch&rinse technique negatively affected the bond strength to dentin.
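
    The two-way ANOVA used above can be sketched from first principles for a balanced design (factor A: adhesive; factor B: application technique). The bond-strength values below are synthetic placeholders, not the study's measurements; the sketch just shows the sum-of-squares decomposition behind the analysis.

```python
import numpy as np

# Synthetic balanced design: 3 adhesives x 2 techniques x 8 replicates,
# with assumed additive effects plus noise (illustrative only).
rng = np.random.default_rng(2)
a_levels, b_levels, reps = 3, 2, 8
effect_a = np.array([0.0, 2.0, 4.0])   # assumed adhesive effects, MPa
effect_b = np.array([0.0, -1.5])       # assumed technique effect, MPa
data = (20.0
        + effect_a[:, None, None]
        + effect_b[None, :, None]
        + rng.normal(scale=1.0, size=(a_levels, b_levels, reps)))

grand = data.mean()
mean_a = data.mean(axis=(1, 2))
mean_b = data.mean(axis=(0, 2))
mean_ab = data.mean(axis=2)

# Sums of squares for main effects, interaction, and error.
ss_a = b_levels * reps * np.sum((mean_a - grand) ** 2)
ss_b = a_levels * reps * np.sum((mean_b - grand) ** 2)
ss_ab = reps * np.sum((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2)
ss_err = np.sum((data - mean_ab[:, :, None]) ** 2)
ss_tot = np.sum((data - grand) ** 2)

# F statistic for the adhesive factor (compare against an F distribution
# with (a_levels-1, a_levels*b_levels*(reps-1)) degrees of freedom).
f_a = (ss_a / (a_levels - 1)) / (ss_err / (a_levels * b_levels * (reps - 1)))
```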

  17. Characterization of an extrapolation chamber for low-energy X-rays: Experimental and Monte Carlo preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Neves, Lucio P., E-mail: lpneves@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Silva, Eric A.B., E-mail: ebrito@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Perini, Ana P., E-mail: aperini@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Maidana, Nora L., E-mail: nmaidana@if.usp.br [Universidade de Sao Paulo, Instituto de Fisica, Travessa R 187, 05508-900 Sao Paulo, SP (Brazil); Caldas, Linda V.E., E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil)

    2012-07-15

    The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response up to 11.0%. - Highlights: ► A homemade extrapolation chamber was studied experimentally and with Monte Carlo. ► It was characterized as a secondary dosimetry standard, for low energy X-rays. ► Several characterization tests were performed and the results were satisfactory. ► Simulation showed that its components may influence the response up to 11.0%. ► This chamber may be used as a secondary standard at our laboratory.

  18. The digital geometric phase technique applied to the deformation evaluation of MEMS devices

    International Nuclear Information System (INIS)

    Liu, Z W; Xie, H M; Gu, C Z; Meng, Y G

    2009-01-01

    Quantitative evaluation of the structural deformation of microfabricated electromechanical systems is of importance for the design and functional control of microsystems. In this investigation, a novel digital geometric phase technique was developed to meet the deformation evaluation requirements of microelectromechanical systems (MEMS). The technique is performed on the basis of regular artificial lattices, instead of a natural atomic lattice. Regular artificial lattices with a pitch ranging from micrometers to nanometers are directly fabricated on the measured surface of MEMS devices using a focused ion beam (FIB). Phase information can be obtained from the Bragg-filtered images after fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT) of the scanning electron microscope (SEM) images. Then the in-plane displacement field and the local strain field related to the phase information can be evaluated. The obtained results show that the technique can be well applied to deformation measurement with nanometer sensitivity and stiction force estimation of a MEMS device.
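
    The FFT → Bragg filter → IFFT → phase pipeline can be illustrated in one dimension: a lattice of known pitch carries an imposed displacement u(x), and demodulating at the lattice frequency recovers u from the local phase via phi = 2*pi*u/pitch. All parameters below are illustrative.

```python
import numpy as np

# Synthetic 1-D "lattice image" with a slowly varying displacement.
n = 2048
x = np.arange(n, dtype=float)
pitch = 16.0                              # lattice period, samples
u_true = 0.5 * np.sin(2 * np.pi * x / n)  # imposed displacement field
signal = np.cos(2 * np.pi * (x + u_true) / pitch)

# "Bragg filtering": demodulate at the lattice frequency, then low-pass
# in the Fourier domain to keep only the slowly varying phase term.
carrier = np.exp(-2j * np.pi * x / pitch)
spectrum = np.fft.fft(signal * carrier)
cutoff = n // 64
spectrum[cutoff:-cutoff] = 0.0            # keep only low frequencies
phi = np.angle(np.fft.ifft(spectrum))

# Local phase maps back to displacement.
u_rec = phi * pitch / (2 * np.pi)
```

    In the 2-D technique the same demodulation is done around a Bragg peak of the SEM image, and the gradient of the recovered displacement gives the local strain field.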

  19. Early tumour detection: a transillumination, time-resolved technique

    International Nuclear Information System (INIS)

    Behin-Ain, S.; Van Doorn, T.; Patterson, J.

    2000-01-01

    Full text: Research into transillumination techniques for the detection of tumours in soft tissue has been ongoing for over 70 years; the resolution and contrast, however, remain severely limited by scatter. Single-photon detection techniques, with ideally infinite extinction coefficients, have been proposed to accumulate sub-hertz transmitted photon frequencies in the early part of a transmitted pulse. Computer-based simulations have been undertaken to examine the theoretical performance requirements of the detector and the resultant image qualities that may be expected with this imaging technique. This paper reports on the computational techniques required for implementing these simulations in an efficient manner. Controlled Monte Carlo (CMC) and Convolution of Layers (CL) techniques were employed to constrain the tracked photons to those having a greater chance of detection and hence enhance the detection statistics. Extrapolation techniques are proposed to reconstruct the early part of the temporal profile. Computational methods were implemented to evaluate path integrals, which are otherwise overly complex to evaluate. CMC and CL reduce the computational time by more than 10 orders of magnitude by tracking only those photons more likely to reach the detector. In the case of an optically thick medium with a high scattering coefficient, extrapolation techniques are used to reconstruct the early part of the temporal profile. Analytical solutions were found to be too involved for even the simplest geometries; however, CL and the implementation of computational techniques make path integrals a useful analytical tool to complement full Monte Carlo techniques. Results have shown that these methods collectively enable detection of small inhomogeneities within soft tissues. Reduced computation times and full reconstruction of the temporal profile of photons transmitted through an optically thick medium enable fast simulations of single-photon detectors to be achieved with the above described
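
    A toy version of the time-resolved transillumination idea (not the paper's CMC/CL machinery): random-walk photons through a scattering slab and record the transit times of those that emerge on the far side. The earliest arrivals are bounded below by the ballistic time, which is why the early part of the pulse carries the least scatter blur. Parameters are assumed placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
c = 0.3            # mm/ps, approximate speed of light in tissue
mfp = 1.0          # scattering mean free path, mm (assumed)
thickness = 10.0   # slab thickness, mm (assumed)

times = []
for _ in range(2000):
    z, path = 0.0, 0.0
    while 0.0 <= z <= thickness:
        step = rng.exponential(mfp)          # free path between scatters
        z += step * rng.uniform(-1.0, 1.0)   # z-projection (isotropic cos)
        path += step
    if z > thickness:                        # transmitted photon
        times.append(path / c)

times = np.array(times)
# The earliest possible arrival is the ballistic transit time; "early"
# photons close to it travelled nearly straight through the slab.
t_ballistic = thickness / c
early = times[times < 2.0 * t_ballistic]
```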

  20. Combining monoenergetic extrapolations from dual-energy CT with iterative reconstructions. Reduction of coil and clip artifacts from intracranial aneurysm therapy

    Energy Technology Data Exchange (ETDEWEB)

    Winklhofer, Sebastian; Baltsavias, Gerasimos; Michels, Lars; Valavanis, Antonios [University of Zurich, Department of Neuroradiology, University Hospital Zurich, Zurich (Switzerland); Hinzpeter, Ricarda; Stocker, Daniel; Alkadhi, Hatem [University of Zurich, Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich (Switzerland); Burkhardt, Jan-Karl; Regli, Luca [University of Zurich, Department of Neurosurgery, University Hospital Zurich, Zurich (Switzerland)

    2018-03-15

    To compare and to combine iterative metal artifact reduction (MAR) and virtual monoenergetic extrapolations (VMEs) from dual-energy computed tomography (DECT) for reducing metal artifacts from intracranial clips and coils. Fourteen clips and six coils were scanned in a phantom model with DECT at 100 kVp and Sn150 kVp. Four datasets were reconstructed: non-corrected images (filtered-back projection), iterative MAR, VME from DECT at 120 keV, and combined iterative MAR + VME images. Artifact severity scores and visibility of simulated, contrast-filled, adjacent vessels were assessed qualitatively and quantitatively by two independent, blinded readers. Iterative MAR, VME, and combined iterative MAR + VME resulted in a significant reduction of qualitative (p < 0.001) and quantitative clip artifacts (p < 0.005) and improved the visibility of adjacent vessels (p < 0.05) compared with non-corrected images, with the lowest artifact scores found in combined iterative MAR + VME images. Titanium clips demonstrated fewer artifacts than Phynox clips (p < 0.05), and artifact scores increased with clip size. Coil artifacts increased with coil size but were reducible when applying iterative MAR + VME compared with non-corrected images. However, no technique improved the severe artifacts from large, densely packed coils. Combining iterative MAR with VME allows for improved metal artifact reduction from clips and smaller, loosely packed coils. Limited value was found for large and densely packed coils. (orig.)
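
    In its simplest textbook form, a virtual monoenergetic image is a weighted combination of the two kV images, with negative weights extrapolating beyond the measured pair toward higher virtual keV, where metal artifacts are typically reduced. The weight and HU values below are illustrative placeholders, not a vendor calibration.

```python
import numpy as np

# Synthetic 2x2 HU patches from a dual-energy scan; the bright pixel
# mimics a metal-artifact region (values are illustrative).
low_kv = np.array([[100.0, 400.0], [120.0, 3000.0]])
high_kv = np.array([[ 90.0, 250.0], [110.0, 1500.0]])

def vme(low, high, w):
    """Blend I = w*low + (1-w)*high; with w < 0 the result extrapolates
    beyond the high-energy image toward higher virtual keV."""
    return w * low + (1.0 - w) * high

# A hypothetical weight for a high virtual keV (e.g. ~120 keV).
img_high_kev = vme(low_kv, high_kv, -0.3)
```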

  1. Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora

    Directory of Open Access Journals (Sweden)

    Ryosuke Takahira

    2016-10-01

Full Text Available One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than the estimates obtained without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg's hypothesis.
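The extrapolation idea in this record can be sketched numerically: fit a decaying ansatz to compression rates measured at growing data lengths, and read off the asymptote as the entropy-rate estimate. The ansatz family, the grid-search fit, and all numbers below are illustrative assumptions, not the authors' exact function or data.

```python
import numpy as np

def fit_ansatz(n, r, betas=np.linspace(0.1, 0.9, 81)):
    """Fit r(n) ~ h + A * n**(beta - 1) by grid search over beta.

    For a fixed beta the model is linear in (h, A), so ordinary least
    squares gives the best pair; we keep the beta with the smallest
    residual.  h is the extrapolated rate as n -> infinity.
    """
    best = None
    for beta in betas:
        X = np.column_stack([np.ones_like(n), n ** (beta - 1.0)])
        coef, *_ = np.linalg.lstsq(X, r, rcond=None)
        rss = float(np.sum((X @ coef - r) ** 2))
        if best is None or rss < best[0]:
            best = (rss, beta, coef[0], coef[1])
    _, beta, h, A = best
    return beta, h, A

# Synthetic compression-rate curve: true rate 1.2 bits/char, beta = 0.4.
n = np.logspace(3, 9, 25)          # data lengths from 10^3 to 10^9
r = 1.2 + 3.0 * n ** (0.4 - 1.0)   # measured rates, decaying toward 1.2
beta, h, A = fit_ansatz(n, r)      # h recovers the asymptotic rate
```

With noise-free synthetic data the fit recovers both the exponent and the asymptote exactly; real compression curves would of course scatter around the model.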

  2. Méthodologie de l'extrapolation des réacteurs chimiques Methodology for Scaling Up Chemical Reactors

    Directory of Open Access Journals (Sweden)

    Trambouze P.

    2006-11-01

Full Text Available After giving a general description of process-development methodology applicable to scaling up reactors, this article makes a quick critical examination of the two main techniques involved, i.e.: (a) the theory of similarity, and (b) the compiling of mathematical models. Two practical examples, relating to homogeneous reactors and trickle-bed catalytic reactors, are then examined in the light of the preceding general considerations.

  3. Neutron scattering techniques for betaine calcium chloride dihydrate under applied external field (temperature, electric field and hydrostatic pressure)

    International Nuclear Information System (INIS)

    Hernandez, O.

    1997-01-01

We have studied betaine calcium chloride dihydrate (BCCD) with neutron scattering techniques; BCCD is a dielectric aperiodic crystal which displays a Devil's staircase type phase diagram made up of several incommensurate and commensurate phases, whose ranges of stability are very sensitive to temperature, electric field and hydrostatic pressure. We have measured a global hysteresis of δ(T) of about 2-3 K in the two incommensurate phases. A structural study of the modulated commensurate phases 1/4 and 1/5 allows us to show that the atomic modulation functions are anharmonic. The relevance of modelling the modulated structure by polar Ising pseudo-spins is thereby directly established. On the basis of group theory calculations in the four-dimensional superspace, we interpret this anharmonic modulation as a soliton regime with respect to the lowest-temperature non-modulated ferroelectric phase. The continuous character of the transition to the lowest-temperature non-modulated phase and the diffuse scattering observed in this phase are accounted for by the presence of ferroelectric domains separated by discommensurations. Furthermore, we have shown that X-rays induce in BCCD a strong variation of the intensity of satellite peaks with irradiation time, most markedly for third-order ones. This is why the 'X-ray' structural model is found to be more harmonic than the 'neutron' one. Under an electric field applied along the b axis, we confirm that commensurate phases with δ = even/odd are favoured and hence are polar along this direction. At 10 kV/cm we have evidenced two new higher-order commensurate phases in the INC2 phase, corroborating the idea of a 'complete' Devil's staircase phase diagram. A phenomenon of generalized phase coexistence occurs above 5 kV/cm. We have characterized at high field phase transitions between 'coexisting' phases, which are distinguishable from classical lock-in transitions. Under hydrostatic pressure, our results contradict

  4. [Molecular techniques applied in species identification of Toxocara].

    Science.gov (United States)

    Fogt, Renata

    2006-01-01

Toxocarosis is still an important and current problem in human medicine. It can manifest as visceral (VLM), ocular (OLM) or covert (CT) larva migrans syndromes. The complicated life cycle of Toxocara, the lack of easy and practical methods for species differentiation of the adult nematode, and difficulties in recognizing the infection in definitive hosts all hamper the fight against the infection. Although studies on human toxocarosis have continued for over 50 years, there is no conclusive answer as to which species--T. canis or T. cati--constitutes the greater risk of transmission of the nematode to man. Neither serological blood examinations nor microscopic observation of the morphological features of the nematode gives a satisfactory answer to this question. Since the 1990s, molecular methods have been developed for species identification and have become useful tools widely applied in parasitological diagnosis. This paper surveys the methods of DNA analysis used for identification of Toxocara species. The review may be helpful for researchers focused on Toxocara and toxocarosis as well as on the detection of new species. The following techniques are described: PCR (Polymerase Chain Reaction), RFLP (Restriction Fragment Length Polymorphism), RAPD (Random Amplified Polymorphic DNA) and SSCP (Single Strand Conformation Polymorphism).

  5. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

We propose an extrapolation method utilizing the energy variance in the Monte Carlo shell model to estimate energy eigenvalues and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
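The energy-variance extrapolation described in this record can be illustrated with a minimal numerical sketch: compute energies and energy variances for a sequence of increasingly accurate approximate wave functions, fit a low-order polynomial in the variance, and evaluate it at zero variance, where the exact eigenstate lies. The data below are synthetic, generated from an assumed quadratic purely to show the fit, and are not from the paper.

```python
import numpy as np

# Synthetic variational results (arbitrary units): as the wave function
# improves, the energy variance <H^2> - <H>^2 shrinks and the energy
# approaches the true eigenvalue, here chosen as -205.8.
true_e0 = -205.8
var = np.array([0.8, 0.5, 0.3, 0.15, 0.08])
energy = true_e0 + 6.0 * var - 1.5 * var**2

# An exact eigenstate has zero energy variance, so fit E as a function
# of the variance and extrapolate to variance = 0.
coeffs = np.polyfit(var, energy, deg=2)
e0_extrapolated = np.polyval(coeffs, 0.0)
```

The key design choice, as in the record, is that the variance is computable for each trial state, which turns the unknown truncation error into a controlled extrapolation variable.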

  6. SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J [Henry Ford Health System, Detroit, MI (United States)

    2014-06-01

Purpose: One primary limitation of using CBCT images for H&N adaptive radiotherapy (ART) is the limited field of view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations in order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the averages from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion region were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger, less predictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.

  7. Neural extrapolation of motion for a ball rolling down an inclined plane.

    Science.gov (United States)

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.

  8. Neural extrapolation of motion for a ball rolling down an inclined plane.

    Directory of Open Access Journals (Sweden)

    Barbara La Scaleia

Full Text Available It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.

  9. Vibration monitoring/diagnostic techniques, as applied to reactor coolant pumps

    International Nuclear Information System (INIS)

    Sculthorpe, B.R.; Johnson, K.M.

    1986-01-01

With the increased awareness of reactor coolant pump (RCP) cracked shafts brought about by the catastrophic shaft failure at Crystal River Unit 3, Florida Power and Light Company, in conjunction with Bently Nevada Corporation, undertook a test program at St. Lucie Nuclear Unit 2 to confirm the integrity of all four RCP pump shafts. Reactor coolant pumps play a major role in the operation of nuclear-powered generation facilities. The time required to disassemble and physically inspect a single RCP shaft would be lengthy, monetarily costly to the utility and its customers, and a cause of possibly unnecessary man-rem exposure to plant personnel. When properly applied, vibration instrumentation can increase unit availability/reliability, as well as provide enhanced diagnostic capability. This paper reviews monitoring benefits and diagnostic techniques applicable to RCPs/motor drives

  10. Volcanic Monitoring Techniques Applied to Controlled Fragmentation Experiments

    Science.gov (United States)

    Kueppers, U.; Alatorre-Ibarguengoitia, M. A.; Hort, M. K.; Kremers, S.; Meier, K.; Scharff, L.; Scheu, B.; Taddeucci, J.; Dingwell, D. B.

    2010-12-01

    Volcanic eruptions are an inevitable natural threat. The range of eruptive styles is large and short term fluctuations of explosivity or vent position pose a large risk that is not necessarily confined to the immediate vicinity of a volcano. Explosive eruptions rather may also affect aviation, infrastructure and climate, regionally as well as globally. Multiparameter monitoring networks are deployed on many active volcanoes to record signs of magmatic processes and help elucidate the secrets of volcanic phenomena. However, our mechanistic understanding of many processes hiding in recorded signals is still poor. As a direct consequence, a solid interpretation of the state of a volcano is still a challenge. In an attempt to bridge this gap, we combined volcanic monitoring and experimental volcanology. We performed 15 well-monitored, field-based, experiments and fragmented natural rock samples from Colima volcano (Mexico) by rapid decompression. We used cylindrical samples of 60 mm height and 25 mm and 60 mm diameter, respectively, and 25 and 35 vol.% open porosity. The applied pressure range was from 4 to 18 MPa. Using different experimental set-ups, the pressurised volume above the samples ranged from 60 - 170 cm3. The experiments were performed at ambient conditions and at controlled sample porosity and size, confinement geometry, and applied pressure. The experiments have been thoroughly monitored with 1) Doppler Radar (DR), 2) high-speed and high-definition cameras, 3) acoustic and infrasound sensors, 4) pressure transducers, and 5) electrically conducting wires. Our aim was to check for common results achieved by the different approaches and, if so, calibrate state-of-the-art monitoring tools. We present how the velocity of the ejected pyroclasts was measured by and evaluated for the different approaches and how it was affected by the experimental conditions and sample characteristics. We show that all deployed instruments successfully measured the pyroclast

  11. Removal of benzaldehyde from a water/ethanol mixture by applying scavenging techniques

    DEFF Research Database (Denmark)

    Mitic, Aleksandar; Skov, Thomas; Gernaey, Krist V.

    2017-01-01

The presence of carbonyl compounds is very common in the food industry. The nature of such compounds is to be reactive, and thus many products involve aldehydes/ketones in their synthetic routes. By contrast, the high reactivity of carbonyl compounds can also lead to the formation of undesired compounds, such as genotoxic impurities. It can therefore be important to remove carbonyl compounds by implementing suitable removal techniques, with the aim of protecting final product quality. This work is focused on benzaldehyde as a model component, studying its removal from a water/ethanol mixture by applying different...

  12. Wire-mesh and ultrasound techniques applied for the characterization of gas-liquid slug flow

    Energy Technology Data Exchange (ETDEWEB)

    Ofuchi, Cesar Y.; Sieczkowski, Wytila Chagas; Neves Junior, Flavio; Arruda, Lucia V.R.; Morales, Rigoberto E.M.; Amaral, Carlos E.F.; Silva, Marco J. da [Federal University of Technology of Parana, Curitiba, PR (Brazil)], e-mails: ofuchi@utfpr.edu.br, wytila@utfpr.edu.br, neves@utfpr.edu.br, lvrarruda@utfpr.edu.br, rmorales@utfpr.edu.br, camaral@utfpr.edu.br, mdasilva@utfpr.edu.br

    2010-07-01

    Gas-liquid two-phase flows are found in a broad range of industrial applications, such as chemical, petrochemical and nuclear industries and quite often determine the efficiency and safety of process and plants. Several experimental techniques have been proposed and applied to measure and quantify two-phase flows so far. In this experimental study the wire-mesh sensor and an ultrasound technique are used and comparatively evaluated to study two-phase slug flows in horizontal pipes. The wire-mesh is an imaging technique and thus appropriated for scientific studies while ultrasound-based technique is robust and non-intrusive and hence well suited for industrial applications. Based on the measured raw data it is possible to extract some specific slug flow parameters of interest such as mean void fraction and characteristic frequency. The experiments were performed in the Thermal Sciences Laboratory (LACIT) at UTFPR, Brazil, in which an experimental two-phase flow loop is available. The experimental flow loop comprises a horizontal acrylic pipe of 26 mm diameter and 9 m length. Water and air were used to produce the two phase flow under controlled conditions. The results show good agreement between the techniques. (author)

  13. Extrapolated surface dose measurements using a NdFeB magnetic deflector for 6 MV x-ray beams.

    Science.gov (United States)

    Damrongkijudom, N; Butson, M; Rosenfeld, A

    2007-03-01

    Extrapolated surface dose measurements have been performed using radiographic film to measure 2-Dimensional maps of skin and surface dose with and without a magnetic deflector device aimed at reducing surface dose. Experiments are also performed using an Attix parallel plate ionisation chamber for comparison to radiographic film extrapolation surface dose analysis. Extrapolated percentage surface dose assessments from radiographic film at the central axis of a 6 MV x-ray beam with magnetic deflector for field size 10 x 10 cm2, 15 x 15 cm2 and 20 x 20 cm2 are 9 +/- 3%, 13 +/- 3% and 16 +/- 3%, these compared to 14 +/- 3%, 19 +/- 3%, and 27 +/- 3% for open fields, respectively. Results from Attix chamber for the same field size are 12 +/- 1%, 15 +/- 1% and 18 +/- 1%, these compared to 16 +/- 1%, 21 +/- 1% and 27 +/- 1% for open fields, respectively. Results are also shown for profiles measured in-plane and cross-plane to the magnetic deflector and compared to open field data. Results have shown that the surface dose is reduced at all sites within the treatment field with larger reductions seen on one side of the field due to the sweeping nature of the designed magnetic field. Radiographic film extrapolation provides an advanced surface dose assessment and has matched well with Attix chamber results. Film measurement allows for easy 2 dimensional dose assessments.
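The film-stack extrapolation used in this record (and in record 1 of this listing) amounts to measuring dose with an increasing number of film layers and extrapolating the dose-versus-effective-depth trend back to zero depth. A minimal sketch with illustrative numbers (not measured data from the paper):

```python
import numpy as np

# Hypothetical percentage doses measured with 1, 2 and 3 stacked film
# layers; each added layer moves the effective measurement point deeper.
depth_mm = np.array([0.18, 0.36, 0.54])   # effective depth of each stack
dose_pct = np.array([19.0, 23.5, 28.0])   # measured % of Dmax

# Surface dose = linear extrapolation of dose vs. depth to 0 mm, which
# removes the contribution of the film's own finite thickness.
slope, intercept = np.polyfit(depth_mm, dose_pct, 1)
surface_dose = intercept   # extrapolated % dose at the surface
```

The same fit applied pixel-by-pixel across scanned films yields the 2-dimensional surface-dose maps the record describes.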

  14. Case study: how to apply data mining techniques in a healthcare data warehouse.

    Science.gov (United States)

    Silver, M; Sakata, T; Su, H C; Herman, C; Dolins, S B; O'Shea, M J

    2001-01-01

    Healthcare provider organizations are faced with a rising number of financial pressures. Both administrators and physicians need help analyzing large numbers of clinical and financial data when making decisions. To assist them, Rush-Presbyterian-St. Luke's Medical Center and Hitachi America, Ltd. (HAL), Inc., have partnered to build an enterprise data warehouse and perform a series of case study analyses. This article focuses on one analysis, which was performed by a team of physicians and computer science researchers, using a commercially available on-line analytical processing (OLAP) tool in conjunction with proprietary data mining techniques developed by HAL researchers. The initial objective of the analysis was to discover how to use data mining techniques to make business decisions that can influence cost, revenue, and operational efficiency while maintaining a high level of care. Another objective was to understand how to apply these techniques appropriately and to find a repeatable method for analyzing data and finding business insights. The process used to identify opportunities and effect changes is described.

  15. Nowcasting of precipitation by an NWP model using assimilation of extrapolated radar reflectivity

    Czech Academy of Sciences Publication Activity Database

    Sokol, Zbyněk; Zacharov, Petr, jr.

    2012-01-01

    Roč. 138, č. 665 (2012), s. 1072-1082 ISSN 0035-9009 Institutional support: RVO:68378289 Keywords : precipitation forecast * radar extrapolation Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 3.327, year: 2012 http://onlinelibrary.wiley.com/doi/10.1002/qj.970/abstract

  16. Extrapolated experimental critical parameters of unreflected and steel-reflected massive enriched uranium metal spherical and hemispherical assemblies

    International Nuclear Information System (INIS)

    Rothe, R.E.

    1997-12-01

    Sixty-nine critical configurations of up to 186 kg of uranium are reported from very early experiments (1960s) performed at the Rocky Flats Critical Mass Laboratory near Denver, Colorado. Enriched (93%) uranium metal spherical and hemispherical configurations were studied. All were thick-walled shells except for two solid hemispheres. Experiments were essentially unreflected; or they included central and/or external regions of mild steel. No liquids were involved. Critical parameters are derived from extrapolations beyond subcritical data. Extrapolations, rather than more precise interpolations between slightly supercritical and slightly subcritical configurations, were necessary because experiments involved manually assembled configurations. Many extrapolations were quite long; but the general lack of curvature in the subcritical region lends credibility to their validity. In addition to delayed critical parameters, a procedure is offered which might permit the determination of prompt critical parameters as well for the same cases. This conjectured procedure is not based on any strong physical arguments
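One standard way such subcritical-to-critical extrapolations are performed (assumed here for illustration; the report's exact procedure may differ) is the inverse-multiplication method: the measured neutron multiplication M diverges as the assembly approaches critical, so 1/M is plotted against fuel mass and extrapolated linearly to 1/M = 0. A sketch with synthetic numbers:

```python
import numpy as np

# Synthetic subcritical data: multiplication M at four fuel masses,
# constructed so that 1/M falls exactly on a line reaching zero at 172 kg.
mass = np.array([120.0, 140.0, 155.0, 165.0])   # kg of fuel
M = 1.0 / (0.005 * (172.0 - mass))              # measured multiplication

# As mass -> critical, M -> infinity and 1/M -> 0; a straight-line fit
# of 1/M vs. mass extrapolated to zero estimates the critical mass.
inv_m = 1.0 / M
slope, intercept = np.polyfit(mass, inv_m, 1)
critical_mass = -intercept / slope
```

The record's caution applies here too: long extrapolations are only trustworthy when the subcritical trend shows little curvature, as these experiments did.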

  17. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
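Extrapolation solvers of the kind compared in this record build high order by combining low-order results computed with different step sizes. A one-level Richardson sketch using explicit Euler (far simpler than the GBS/ODEX machinery, but the same principle):

```python
import math

def euler(f, y0, t1, n):
    """Explicit Euler with n steps on [0, t1] for the scalar ODE y' = f(y)."""
    h = t1 / n
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

f = lambda y: y                     # y' = y, exact solution y(1) = e
y_h  = euler(f, 1.0, 1.0, 100)      # step size h
y_h2 = euler(f, 1.0, 1.0, 200)      # step size h/2

# Euler is first order, so one Richardson step cancels the O(h) error:
y_ext = 2.0 * y_h2 - y_h

err_h   = abs(y_h  - math.e)        # ~1.3e-2
err_ext = abs(y_ext - math.e)       # ~6e-5, two orders smaller
```

Repeating this elimination across a whole tableau of step sizes is what turns a first- or second-order base method into the arbitrarily high-order extrapolation solvers benchmarked in the paper.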

  18. Nuclear lattice simulations using symmetry-sign extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Laehde, Timo A.; Luu, Thomas [Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik, and Juelich Center for Hadron Physics, Juelich (Germany); Lee, Dean [North Carolina State University, Department of Physics, Raleigh, NC (United States); Meissner, Ulf G. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik, and Juelich Center for Hadron Physics, Juelich (Germany); Forschungszentrum Juelich, JARA - High Performance Computing, Juelich (Germany); Epelbaum, Evgeny; Krebs, Hermann [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Bochum (Germany); Rupak, Gautam [Mississippi State University, Department of Physics and Astronomy, Mississippi State, MS (United States)

    2015-07-15

Projection Monte Carlo calculations of lattice Chiral Effective Field Theory suffer from sign oscillations to a varying degree dependent on the number of protons and neutrons. Hence, such studies have hitherto been concentrated on nuclei with equal numbers of protons and neutrons, and especially on the alpha nuclei where the sign oscillations are smallest. Here, we introduce the "symmetry-sign extrapolation" method, which allows us to use the approximate Wigner SU(4) symmetry of the nuclear interaction to systematically extend the Projection Monte Carlo calculations to nuclear systems where the sign problem is severe. We benchmark this method by calculating the ground-state energies of the 12C, 6He and 6Be nuclei, and discuss its potential for studies of neutron-rich halo nuclei and asymmetric nuclear matter. (orig.)

  19. Ultrasonic computerized tomography (CT) for temperature measurements with limited projection data based on extrapolated filtered back projection (FBP) method

    International Nuclear Information System (INIS)

    Zhu Ning; Jiang Yong; Kato, Seizo

    2005-01-01

This study uses ultrasound in combination with tomography to obtain three-dimensional temperature measurements from projection data acquired over a limited projection angle. The main feature of the new computerized tomography (CT) reconstruction algorithm is to employ an extrapolation scheme to make up for the incomplete projection data; it builds on the conventional filtered back projection (FBP) method while additionally taking into account the correlation between the projection data through a Fourier-transform-based extrapolation. Computer simulation is conducted to verify the above algorithm. An experimental 3D temperature distribution measurement is also carried out to validate the proposed algorithm. The simulation and experimental results demonstrate that the extrapolated FBP CT algorithm is highly effective in dealing with projection data from a limited projection angle

  20. Discrete classification technique applied to TV advertisements liking recognition system based on low-cost EEG headsets.

    Science.gov (United States)

    Soria Morillo, Luis M; Alvarez-Garcia, Juan A; Gonzalez-Abril, Luis; Ortega Ramírez, Juan A

    2016-07-15

In this paper a new approach is applied to the area of marketing research. The aim of this paper is to recognize how brain activity responds during the visualization of short video advertisements using discrete classification techniques. By means of low-cost electroencephalography (EEG) devices, the activation level of some brain regions has been studied while the ads are shown to users. We may wonder how useful neuroscience knowledge is in marketing, what neuroscience could provide to the marketing sector, or why this approach can improve the accuracy and the final user acceptance compared to other works. By using discrete techniques over the EEG frequency bands of a generated dataset, C4.5, ANN and a new recognition system based on Ameva, a discretization algorithm, are applied to obtain the score given by subjects to each TV ad. The proposed technique reaches more than 75% accuracy, which is an excellent result taking into account the typology of the EEG sensors used in this work. Furthermore, the time consumption of the proposed algorithm is reduced by up to 30% compared to other techniques presented in this paper. This brings about a battery lifetime improvement on the devices where the algorithm is running, extending the experience in the ubiquitous context where the new approach has been tested.

  1. Potential Hydraulic Modelling Errors Associated with Rheological Data Extrapolation in Laminar Flow

    International Nuclear Information System (INIS)

    Shadday, Martin A. Jr.

    1997-01-01

    The potential errors associated with the modelling of flows of non-Newtonian slurries through pipes, due to inadequate rheological models and extrapolation outside of the ranges of data bases, are demonstrated. The behaviors of both dilatant and pseudoplastic fluids with yield stresses, and the errors associated with treating them as Bingham plastics, are investigated

  2. Solution of the fully fuzzy linear systems using iterative techniques

    International Nuclear Information System (INIS)

    Dehghan, Mehdi; Hashemi, Behnam; Ghatee, Mehdi

    2007-01-01

This paper mainly intends to discuss the iterative solution of fully fuzzy linear systems, which we call FFLS. We employ Dubois and Prade's approximate arithmetic operators on LR fuzzy numbers for finding a positive fuzzy vector x-tilde which satisfies A-tilde x-tilde = b-tilde, where A-tilde and b-tilde are a fuzzy matrix and a fuzzy vector, respectively. Please note that the positivity assumption is not so restrictive in applied problems. We transform FFLS and propose iterative techniques such as Richardson, Jacobi, Jacobi overrelaxation (JOR), Gauss-Seidel, successive overrelaxation (SOR), accelerated overrelaxation (AOR), symmetric and unsymmetric SOR (SSOR and USSOR) and extrapolated modified Aitken (EMA) for solving FFLS. In addition, the methods of Newton, quasi-Newton and conjugate gradient are proposed from nonlinear programming for solving a fully fuzzy linear system. Various numerical examples are also given to show the efficiency of the proposed schemes
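For a crisp (non-fuzzy) system, the iterative schemes listed in this record all follow the same matrix-splitting pattern; a minimal Jacobi sketch (illustrative only, without the paper's LR fuzzy arithmetic):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k),
    where D is the diagonal of A."""
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    D = np.diag(A)                 # diagonal entries
    R = A - np.diag(D)             # off-diagonal remainder
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant system, which guarantees convergence.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([6.0, 17.0, 22.0])
x = jacobi(A, b)   # converges to [1, 2, 3]
```

The relaxation variants in the record (JOR, SOR, AOR, ...) modify only the splitting and add an overrelaxation parameter; the fixed-point structure stays the same.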

  3. Mass extrapolation of quarks and leptons to higher generations

    Energy Technology Data Exchange (ETDEWEB)

    Barik, N [Utkal Univ., Bhubaneswar (India). Dept. of Physics

    1981-05-01

An empirical mass formula is tested for the basic fermion sequences of charged quarks and leptons. This relation is a generalization of Barut's mass formula for the lepton sequence (e, μ, τ, ...). It is found that successful mass extrapolation to the third and possibly to other higher generations (N > 2) can be obtained with the first and second generation masses as inputs, which predicts the top quark mass m_t to be around 20 GeV. This also leads to the mass ratios between members of two different sequences (i) and (i') corresponding to the same higher generations (N > 2).

  4. Mass extrapolation of quarks and leptons to higher generations

    International Nuclear Information System (INIS)

    Barik, N.

    1981-01-01

    An empirical mass formula is tested for the basic fermion sequences of charged quarks and leptons. This relation is a generalization of Barut's mass formula for the lepton sequence (e, μ, τ, ...). It is found that successful mass extrapolation to the third and possibly to other higher generations (N > 2) can be obtained with the first and second generation masses as inputs, which predicts the top quark mass m_t to be around 20 GeV. This also leads to the mass ratios between members of two different sequences (i) and (i') corresponding to the same higher generations (N > 2). (author)
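
    Barut's lepton mass formula, which these records generalize, can be written m(N) = m_e [1 + (3/2α) Σ_{n=0}^{N} n⁴], with N = 0, 1, 2 for e, μ, τ. A quick numerical check (our own sketch, using standard constants) reproduces the muon and tau masses from the electron mass alone:

```python
# Barut's empirical lepton mass formula:
#   m(N) = m_e * (1 + (3 / (2*alpha)) * sum_{n=0}^{N} n**4)
# N = 0 -> electron, N = 1 -> muon, N = 2 -> tau.
ALPHA = 1.0 / 137.036       # fine-structure constant
M_E = 0.5109989             # electron mass in MeV

def barut_mass(N):
    return M_E * (1.0 + 1.5 / ALPHA * sum(n**4 for n in range(N + 1)))

m_mu = barut_mass(1)    # ~105.5 MeV (measured: 105.66 MeV)
m_tau = barut_mass(2)   # ~1786 MeV (measured: 1776.9 MeV)
```

    The n⁴ growth is what makes extrapolation to higher generations (the subject of the record) so sensitive to the assumed sequence.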

  5. Chiral and continuum extrapolation of partially-quenched hadron masses

    International Nuclear Information System (INIS)

    Chris Allton; Wes Armour; Derek Leinweber; Anthony Thomas; Ross Young

    2005-01-01

    Using the finite-range regularization (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyze the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement (∼1%) with the experimental value of M_ρ from the former approach. These results are extended to the case of the nucleon mass.

  6. Chiral and continuum extrapolation of partially-quenched hadron masses

    Energy Technology Data Exchange (ETDEWEB)

    Chris Allton; Wes Armour; Derek Leinweber; Anthony Thomas; Ross Young

    2005-09-29

    Using the finite-range regularization (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyze the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement (∼1%) with the experimental value of M_ρ from the former approach. These results are extended to the case of the nucleon mass.

  7. EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.

    Science.gov (United States)

    Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T

    2018-01-01

    The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has seen only limited expansion in European countries since the debut of LTE, renewed commercial interest in the technology has recently emerged. The development of extrapolation procedures optimised for TDD systems therefore becomes crucial, especially for regulatory authorities. This article presents an extrapolation method for assessing exposure to LTE TDD sources, based on detection of the Cell-Specific Reference Signal power level. The method introduces a β_TDD parameter that quantifies the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A facility in Rome.
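
    A simplified sketch of the kind of extrapolation involved (our own illustration, not the authors' exact procedure; the function name, scaling assumptions and numbers are hypothetical): the per-subcarrier reference-signal field is scaled up to a fully loaded downlink frame for the maximal value, while the downlink duty factor β_TDD (and, for 24-h averages, a traffic factor) reduces the time-averaged power, hence the field by a square root:

```python
import math

def extrapolated_field(e_rs, n_sc_total, beta_tdd=1.0, alpha_24h=1.0):
    """Hedged illustration of reference-signal-based extrapolation.

    The maximal E-field scales the per-subcarrier Cell-Specific Reference
    Signal field e_rs up to all n_sc_total downlink subcarriers (powers add,
    so the field scales with a square root); beta_tdd (downlink fraction of
    the TDD frame) and a 24-h traffic factor alpha_24h reduce the
    time-averaged power, hence the averaged field by sqrt(...).
    """
    e_max = e_rs * math.sqrt(n_sc_total)             # maximal: all REs active
    e_avg = e_max * math.sqrt(beta_tdd * alpha_24h)  # time-averaged value
    return e_max, e_avg

# 10 MHz LTE carrier: 50 resource blocks x 12 subcarriers = 600 subcarriers
e_max, e_avg = extrapolated_field(e_rs=0.05, n_sc_total=600, beta_tdd=0.74)
```

    The split between e_max and e_avg mirrors the article's distinction between maximal and 24-h average exposure.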

  8. Mathematical Model and Artificial Intelligent Techniques Applied to a Milk Industry through DSM

    Science.gov (United States)

    Babu, P. Ravi; Divya, V. P. Sree

    2011-08-01

    The resources for electrical energy are depleting and the gap between supply and demand is continuously increasing. Under such circumstances, the option left is optimal utilization of the available energy resources. The main objective of this chapter is to discuss peak load management and how to overcome the problems associated with it in processing industries, such as the milk industry, with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to operating constraints. The work also presents the results of applying Neural Network, Fuzzy Logic and Demand Side Management (DSM) techniques to a medium-scale milk industry consumer in India, achieving an improvement in load factor, a reduction in Maximum Demand (MD) and savings in the consumer's energy bill.

  9. WE-DE-201-05: Evaluation of a Windowless Extrapolation Chamber Design and Monte Carlo Based Corrections for the Calibration of Ophthalmic Applicators

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, J; Culberson, W; DeWerd, L [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States)]; Soares, C [NIST (retired), Gaithersburg, MD (United States)]

    2016-06-15

    Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a ⁹⁰Sr/⁹⁰Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both the EGSnrc Monte Carlo user code and the Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: The EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experimental results suggest that an entrance window is not needed in order for an extrapolation
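
    The measurement principle described here — current as a function of air-gap width, converted to surface dose rate through Bragg-Gray cavity theory — can be sketched as a linear fit whose zero-gap slope gives the dose rate. All numbers below are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

# Hypothetical chamber readings: current I (A) at several air-gap widths z (m).
# For an ideal extrapolation chamber, I grows linearly with z, and the surface
# dose rate follows from the zero-gap slope via Bragg-Gray cavity theory:
#   D_w = (W/e) * s_wa / (rho_air * A) * dI/dz
W_OVER_E = 33.97          # J/C, mean energy per ion pair in dry air
S_WA = 1.112              # water-to-air mass stopping power ratio (assumed)
RHO_AIR = 1.196           # kg/m^3 at reference conditions
AREA = 7.7e-5             # m^2, collecting-electrode area (assumed)

z = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])          # air gaps (m)
I = np.array([0.99e-12, 2.01e-12, 3.00e-12, 4.02e-12])  # currents (A)

slope, intercept = np.polyfit(z, I, 1)   # dI/dz from a linear fit
dose_rate = W_OVER_E * S_WA / (RHO_AIR * AREA) * slope  # Gy/s
```

    In practice the fit is restricted to the smallest gaps (where the paper found EGSnrc agreed best) and the correction factors discussed above multiply the result.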

  10. UFOs in the LHC: Observations, studies and extrapolations

    CERN Document Server

    Baer, T; Cerutti, F; Ferrari, A; Garrel, N; Goddard, B; Holzer, EB; Jackson, S; Lechner, A; Mertens, V; Misiowiec, M; Nebot del Busto, E; Nordt, A; Uythoven, J; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zimmermann, F; Fuster, N

    2012-01-01

    Unidentified falling objects (UFOs) are potentially a major luminosity limitation for nominal LHC operation. They are presumably micrometer sized dust particles which lead to fast beam losses when they interact with the beam. With large-scale increases and optimizations of the beam loss monitor (BLM) thresholds, their impact on LHC availability was mitigated from mid 2011 onwards. For higher beam energy and lower magnet quench limits, the problem is expected to be considerably worse, though. In 2011/12, the diagnostics for UFO events were significantly improved: dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge, extrapolations for nominal LHC operation and mitigation strategies are presented

  11. Scintillation counting: an extrapolation into the future

    International Nuclear Information System (INIS)

    Ross, H.H.

    1983-01-01

    Progress in scintillation counting is intimately related to advances in a variety of other disciplines such as photochemistry, photophysics, and instrumentation. And while there is steady progress in the understanding of luminescent phenomena, there is a virtual explosion in the application of semiconductor technology to detectors, counting systems, and data processing. The exponential growth of this technology has had, and will continue to have, a profound effect on the art of scintillation spectroscopy. This paper reviews key events in technology that have had an impact on the development of scintillation science (solid and liquid) and attempts to extrapolate future directions based on existing and projected capability in associated fields. Along the way there have been occasional pitfalls and several false starts; these too are discussed as a reminder that if you want the future to be different from the past, study the past.

  12. Communication: Predicting virial coefficients and alchemical transformations by extrapolating Mayer-sampling Monte Carlo simulations

    Science.gov (United States)

    Hatch, Harold W.; Jiao, Sally; Mahynski, Nathan A.; Blanco, Marco A.; Shen, Vincent K.

    2017-12-01

    Virial coefficients are predicted over a large range of both temperatures and model parameter values (i.e., alchemical transformation) from an individual Mayer-sampling Monte Carlo simulation by statistical mechanical extrapolation with minimal increase in computational cost. With this extrapolation method, a Mayer-sampling Monte Carlo simulation of the SPC/E (extended simple point charge) water model quantitatively predicted the second virial coefficient as a continuous function spanning over four orders of magnitude in value and over three orders of magnitude in temperature with less than a 2% deviation. In addition, the same simulation predicted the second virial coefficient if the site charges were scaled by a constant factor, from an increase of 40% down to zero charge. This method is also shown to perform well for the third virial coefficient and the exponential parameter for a Lennard-Jones fluid.
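
    The idea of predicting a virial coefficient over a temperature range from data gathered at a single state point can be illustrated without any Monte Carlo at all: below, B2(β) for a Lennard-Jones fluid is computed by direct quadrature, then extrapolated in β by a second-order Taylor expansion about one reference temperature. This is a sketch of the extrapolation idea only; the paper obtains the required derivatives from simulation averages rather than finite differences:

```python
import numpy as np

# Second virial coefficient of a Lennard-Jones fluid (reduced units):
#   B2(beta) = -2*pi * Integral (exp(-beta*u(r)) - 1) r^2 dr,  beta = 1/T
def u_lj(r):
    return 4.0 * (r**-12 - r**-6)

# Lower cutoff avoids overflow in r**-12; the hard core contributes f = -1.
r = np.linspace(0.3, 12.0, 20001)
dr = r[1] - r[0]

def b2(beta):
    mayer_f = np.exp(-beta * u_lj(r)) - 1.0   # Mayer f-function
    return -2.0 * np.pi * np.sum(mayer_f * r**2) * dr

# Taylor-extrapolate B2 in beta about a single reference point beta0.
beta0, h = 1.0 / 1.5, 1e-3
d1 = (b2(beta0 + h) - b2(beta0 - h)) / (2.0 * h)
d2 = (b2(beta0 + h) - 2.0 * b2(beta0) + b2(beta0 - h)) / h**2

def b2_extrap(beta):
    db = beta - beta0
    return b2(beta0) + d1 * db + 0.5 * d2 * db**2

beta1 = 1.0 / 1.4                        # nearby target temperature
err = abs(b2_extrap(beta1) - b2(beta1))  # small compared with |B2| itself
```

    In the Mayer-sampling setting, the same β-derivatives come essentially for free from energy moments accumulated during the one simulation, which is what makes the continuous-in-T prediction cheap.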

  13. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    OpenAIRE

    Amany AlShawi

    2016-01-01

    The popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. The findings of the research showed that security in the cloud could be enhanced with the single cache system. In future work, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers...

  14. Non destructive assay techniques applied to nuclear materials

    International Nuclear Information System (INIS)

    Gavron, A.

    2001-01-01

    Nondestructive assay is a suite of techniques that has matured and become precise, easily implementable, and remotely usable. These techniques provide elaborate safeguards of nuclear material by providing the necessary information for materials accounting. NDA techniques are ubiquitous, reliable, essentially tamper proof, and simple to use. They make the world a safer place to live in, and they make nuclear energy viable. (author)

  15. Effect of the reinforcement bar arrangement on the efficiency of electrochemical chloride removal technique applied to reinforced concrete structures

    International Nuclear Information System (INIS)

    Garces, P.; Sanchez de Rojas, M.J.; Climent, M.A.

    2006-01-01

    This paper reports on the research done to find out the effect that different bar arrangements may have on the efficiency of the electrochemical chloride removal (ECR) technique when applied to a reinforced concrete structural member. Five different types of bar arrangements were considered, corresponding to typical structural members such as columns (with single and double bar reinforcing), slabs, beams and footings. ECR was applied in several steps. We observe that the extraction efficiency depends on the reinforcing bar arrangement. A uniform layer set-up favours chloride extraction. Electrochemical techniques were also used to estimate the reinforcing bar corrosion states, as well as measure the corrosion potential, and instant corrosion rate based on the polarization resistance technique. After ECR treatment, a reduction in the corrosion levels is observed falling short of the depassivation threshold

  16. Parametric methods of describing and extrapolating the characteristics of long-term strength of refractory materials

    International Nuclear Information System (INIS)

    Tsvilyuk, I.S.; Avramenko, D.S.

    1986-01-01

    This paper carries out a comparative analysis of the suitability of parametric methods for describing and extrapolating the results of long-term tests on refractory materials. Diagrams are presented of the long-term strength of niobium-based alloys tested in a vacuum of 1.3 × 10⁻³ Pa. The predicted values and the variance of the estimated endurance of refractory alloys are given by parametric dependences. The long-term strength characteristics are described most adequately by the Manson-Succop and Sherby-Dorn methods. Several methods must be used to ensure reliable extrapolation of the long-term strength characteristics to time periods an order of magnitude longer than the experimental data. The most suitable method cannot always be selected on the basis of the correlation ratio alone.
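
    As an illustration of how this family of time-temperature parameters is used for extrapolation, consider the widely used Larson-Miller parameter, a relative of the Manson-Succop and Sherby-Dorn methods named in the record (the constant C and the rupture data below are hypothetical): short-term tests are collapsed onto a master curve, which is then inverted at the service temperature:

```python
import math

# Larson-Miller time-temperature parameter:
#   P = T * (C + log10(t_r)),  T in kelvin, t_r = rupture time in hours
C = 20.0  # commonly assumed constant for many metals

def lmp(T_kelvin, t_rupture_h):
    return T_kelvin * (C + math.log10(t_rupture_h))

def rupture_time(T_kelvin, P):
    """Invert the parameter to extrapolate life at a new temperature."""
    return 10.0 ** (P / T_kelvin - C)

# Hypothetical short-term test: rupture after 100 h at 1100 K
P = lmp(1100.0, 100.0)
# Extrapolated life at the same stress but a 1000 K service temperature
t_service = rupture_time(1000.0, P)
```

    The record's warning applies directly: different parameters (Larson-Miller, Manson-Succop, Sherby-Dorn) can diverge by large factors when pushed an order of magnitude beyond the test data, which is why several should be compared.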

  17. Motion Capture Technique Applied Research in Sports Technique Diagnosis

    Directory of Open Access Journals (Sweden)

    Zhiwu LIU

    2014-09-01

    The motion capture technology system is defined and its components are described. Key parameters are obtained from the captured motion, quantitative analyses are made of technical movements, and a motion capture method is proposed for sports technique diagnosis. The motion capture procedure includes calibrating the system, attaching landmarks to the tester, capturing trajectories, and analyzing the collected data.

  18. Nuclear radioactive techniques applied to materials research

    CERN Document Server

    Correia, João Guilherme; Wahl, Ulrich

    2011-01-01

    In this paper we review materials characterization techniques using radioactive isotopes at the ISOLDE/CERN facility. At ISOLDE intense beams of chemically clean radioactive isotopes are provided by selective ion-sources and high-resolution isotope separators, which are coupled on-line with particle accelerators. There, new experiments are performed by an increasing number of materials researchers, who use nuclear spectroscopic techniques such as Mössbauer, Perturbed Angular Correlations (PAC), beta-NMR and Emission Channeling with short-lived isotopes not available elsewhere. Additionally, diffusion studies and traditionally non-radioactive techniques such as Deep Level Transient Spectroscopy, Hall effect and Photoluminescence measurements are performed on radioactive doped samples, providing in this way the element signature upon correlation of the time dependence of the signal with the isotope transmutation half-life. Current developments, applications and perspectives of using radioactive ion beams and tech...

  19. H/V spectral ratios technique application in the city of Bucharest: Can we get rid of source effect?

    International Nuclear Information System (INIS)

    Grecu, B.; Radulian, M.; Mandrescu, N.; Panza, G.F.

    2006-06-01

    The main aim of this paper is to show that, contrary to many examples of monitored strong earthquakes in different urban areas, the intensity and spectral characteristics of the strong ground motion induced in the Bucharest area by Vrancea intermediate-depth earthquakes are controlled by the coupled source-site properties rather than by the local site conditions alone. Our results have important implications for the strategy to follow when assessing seismic microzoning for the city of Bucharest: we recommend the application of deterministic approaches rather than empirical techniques like H/V spectral ratios. However, when applied to noise data, the H/V spectral technique succeeds in reproducing the predominant frequency response characteristic of the sedimentary cover beneath the city and the relatively uniform distribution of this structure over the city area. Our results strongly disagree with any strategy of extrapolating from small and moderate earthquakes to strong earthquakes for microzoning purposes. (author)
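
    The H/V technique itself is simple to sketch: the quadratic mean of the two horizontal amplitude spectra is divided by the vertical spectrum, and the predominant frequency is read off the peak. A minimal synthetic illustration (not the authors' processing chain, which involves windowing and smoothing of real records):

```python
import numpy as np

def hv_ratio(h_ns, h_ew, v, fs):
    """Minimal H/V (Nakamura-type) spectral ratio from three components."""
    n = len(v)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    H = np.sqrt((np.abs(np.fft.rfft(h_ns))**2 +
                 np.abs(np.fft.rfft(h_ew))**2) / 2.0)
    V = np.abs(np.fft.rfft(v))
    V_floor = V + 1e-6 * V.max()   # guard against division by near-zero bins
    return freqs, H / V_floor

# Synthetic test: horizontals resonate at 2 Hz, vertical carries 2 and 5 Hz
fs = 100.0
t = np.arange(0, 20.0, 1.0 / fs)
h_ns = np.sin(2 * np.pi * 2.0 * t)
h_ew = np.cos(2 * np.pi * 2.0 * t)
v = 0.5 * np.sin(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)

freqs, hv = hv_ratio(h_ns, h_ew, v, fs)
f0 = freqs[np.argmax(hv)]   # predominant frequency of the ratio
```

    The paper's point is precisely that this peak reflects the sedimentary cover well for noise data, but not the source-controlled spectra of strong Vrancea events.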

  20. Applying decision-making techniques to Civil Engineering Projects

    Directory of Open Access Journals (Sweden)

    Fam F. Abdel-malak

    2017-12-01

    Multi-Criteria Decision-Making (MCDM) techniques are useful tools in project managers' hands for overcoming decision-making (DM) problems in Civil Engineering Projects (CEPs). The main contribution of this paper is selecting and studying popular MCDM techniques that use wide-ranging data types in CEPs. A detailed study, including the advantages and pitfalls of using the Analytic Hierarchy Process (AHP) and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (Fuzzy TOPSIS), is presented. These two techniques are selected to form a package that covers most data types available in CEPs. The results indicate that AHP has a structure that simplifies complicated problems, while Fuzzy TOPSIS uses the advantages of linguistic variables to address undocumented data and ill-defined problems. Furthermore, AHP is a simple technique that depends on pairwise comparisons of factors and natural attributes, and it is preferable for widely spread hierarchies. On the other hand, Fuzzy TOPSIS needs more information but works well for a one-tier decision tree and shows more flexibility in fuzzy environments. The two techniques can be integrated and combined in a new module to support most of the decisions required in CEPs. Keywords: Decision-making, AHP, Fuzzy TOPSIS, CBA, Civil Engineering Projects
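
    The AHP side of such a package reduces, at its core, to deriving priority weights from a pairwise comparison matrix via Saaty's principal eigenvector, plus a consistency check. A minimal sketch with a made-up 3×3 judgement matrix (the matrix values are ours, for illustration only):

```python
import numpy as np

# Pairwise comparison matrix (reciprocal, Saaty 1-9 scale): criterion 1 is
# moderately preferred to 2 (3) and strongly preferred to 3 (5), etc.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   3.0],
              [1/5.0, 1/3.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()                      # normalized priority weights

# Saaty's consistency check: CI = (lambda_max - n)/(n - 1), CR = CI/RI
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58                       # random index RI = 0.58 for n = 3
```

    A CR below 0.1 is the usual threshold for accepting the judgements as consistent; here the weights come out roughly 0.64, 0.26, 0.10.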

  1. Applying Nonverbal Techniques to Organizational Diagnosis.

    Science.gov (United States)

    Tubbs, Stewart L.; Koske, W. Cary

    Ongoing research programs conducted at General Motors Institute are motivated by the practical objective of improving the company's organizational effectiveness. Computer technology is being used whenever possible; for example, a technique developed by Herman Chernoff was used to process data from a survey of employee attitudes into 18 different…

  2. A novel source convergence acceleration scheme for Monte Carlo criticality calculations, part I: Theory

    International Nuclear Information System (INIS)

    Griesheimer, D. P.; Toth, B. E.

    2007-01-01

    A novel technique for accelerating the convergence rate of the iterative power method for solving eigenvalue problems is presented. Smoothed Residual Acceleration (SRA) is based on a modification to the well known fixed-parameter extrapolation method for power iterations. In SRA the residual vector is passed through a low-pass filter before the extrapolation step. Filtering limits the extrapolation to the lower order Eigenmodes, improving the stability of the method and allowing the use of larger extrapolation parameters. In simple tests SRA demonstrates superior convergence acceleration when compared with an optimal fixed-parameter extrapolation scheme. The primary advantage of SRA is that it can be easily applied to Monte Carlo criticality calculations in order to reduce the number of discard cycles required before a stationary fission source distribution is reached. A simple algorithm for applying SRA to Monte Carlo criticality problems is described. (authors)
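
    A toy analogue of the scheme can be sketched in a few lines (our construction, not the authors' implementation: a small symmetric matrix stands in for the fission-source iteration, and the low-pass filter is a simple three-point moving average applied to the residual vector):

```python
import numpy as np

def smoothed_residual_power(A, alpha=1.0, n_iter=200, smooth=True):
    """Power iteration with fixed-parameter residual extrapolation,
    x <- x_new + alpha * F(x_new - x), where F is a three-point
    moving-average filter standing in for the low-pass step of SRA."""
    rng = np.random.default_rng(0)
    x = rng.random(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = A @ x
        y /= np.linalg.norm(y)
        r = y - x                                   # residual vector
        if smooth:                                  # low-pass filter it
            r = np.convolve(r, np.ones(3) / 3.0, mode="same")
        x = y + alpha * r                           # extrapolation step
        x /= np.linalg.norm(x)
    lam = x @ A @ x                                 # Rayleigh quotient
    return lam, x

# Symmetric test operator with known dominant eigenvalue 4
A = np.diag([4.0, 3.0, 2.0, 1.0])
lam, x = smoothed_residual_power(A)
```

    Filtering the residual before extrapolating is what lets SRA use larger alpha without exciting high-order modes; in the Monte Carlo setting the same update is applied to the binned fission-source distribution between cycles.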

  3. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

    Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of  rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequential and collaboration diagrams are used to explain the dynamic and static aspects of the software system.    This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-VIEW-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling.  It may be highly useful to undergraduate and graduate students as t...

  4. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques.

  5. An efficient wave extrapolation method for tilted orthorhombic media using effective ellipsoidal models

    KAUST Repository

    Waheed, Umair bin

    2014-08-01

    The wavefield extrapolation operator for ellipsoidally anisotropic (EA) media offers significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate focusing for media of orthorhombic anisotropy. Therefore, we develop effective EA models that correctly capture the kinematic behavior of the wavefield for tilted orthorhombic (TOR) media. Specifically, we compute effective source-dependent velocities for the EA model using a kinematic high-frequency representation of the TOR wavefield. The effective model allows us to use the cheaper EA wavefield extrapolation operator to obtain approximate wavefield solutions for a TOR model. Despite the fact that the effective EA models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TOR media, particularly for media of low to moderate complexity. We demonstrate the applicability of the proposed approach on a layered TOR model.

  6. An efficient wave extrapolation method for tilted orthorhombic media using effective ellipsoidal models

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali

    2014-01-01

    The wavefield extrapolation operator for ellipsoidally anisotropic (EA) media offers significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate focusing for media of orthorhombic anisotropy. Therefore, we develop effective EA models that correctly capture the kinematic behavior of the wavefield for tilted orthorhombic (TOR) media. Specifically, we compute effective source-dependent velocities for the EA model using a kinematic high-frequency representation of the TOR wavefield. The effective model allows us to use the cheaper EA wavefield extrapolation operator to obtain approximate wavefield solutions for a TOR model. Despite the fact that the effective EA models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TOR media, particularly for media of low to moderate complexity. We demonstrate the applicability of the proposed approach on a layered TOR model.

  7. Recent developments and evaluation of selected geochemical techniques applied to uranium exploration

    International Nuclear Information System (INIS)

    Wenrich-Verbeek, K.J.; Cadigan, R.A.; Felmlee, J.K.; Reimer, G.M.; Spirakis, C.S.

    1976-01-01

    Various geochemical techniques for uranium exploration are currently under study by the geochemical techniques team of the Branch of Uranium and Thorium Resources, US Geological Survey. Radium-226 and its parent uranium-238 occur in mineral spring water largely independently of the geochemistry of the solutions and thus are potential indicators of uranium in source rocks. Many radioactive springs, hot or cold, are believed to be related to hydrothermal systems which contain uranium at depth. Radium, when present in the water, is co-precipitated in iron and/or manganese oxides and hydroxides or in barium sulphate associated with calcium carbonate spring deposits. Studies of surface water samples have resulted in improved standardized sample treatment and collection procedures. Stream discharge has been shown to have a significant effect on uranium concentration, while conductivity shows promise as a 'pathfinder' for uranium. Turbid samples behave differently and consequently must be treated with more caution than samples from clear streams. Both water and stream sediments should be sampled concurrently, as anomalous uranium concentrations may occur in only one of these media and would be overlooked if only one, the wrong one, were analysed. The fission-track technique has been applied to uranium determinations in the above water studies. The advantages of the designed sample collecting system are that only a small quantity, typically one drop, of water is required and sample manipulation is minimized, thereby reducing contamination risks. The fission-track analytical technique is effective at the uranium concentration levels commonly found in natural waters (0.01-5.0 μg/litre). Landsat data were used to detect alteration associated with uranium deposits. Altered areas were detected but were not uniquely defined. Nevertheless, computer processing of Landsat data did suggest a smaller size target for further evaluation and thus is useful as an exploration tool.

  8. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    Science.gov (United States)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
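
    Among the schemes reviewed, the simplest widely used example is the two-point inverse-cubic extrapolation of the correlation energy (Helgaker-type, assuming E_X = E_CBS + A/X³ for cardinal number X). A numerical sketch with hypothetical CCSD(T) correlation energies:

```python
# Two-point inverse-cubic extrapolation of the correlation energy:
#   E_X = E_CBS + A / X**3  =>  E_CBS = (Y^3 E_Y - X^3 E_X) / (Y^3 - X^3)
def cbs_two_point(e_small, x_small, e_large, x_large):
    x3, y3 = x_small**3, x_large**3
    return (y3 * e_large - x3 * e_small) / (y3 - x3)

# Hypothetical CCSD(T) correlation energies (hartree) with cc-pVTZ (X=3)
# and cc-pVQZ (X=4) basis sets:
e_cbs = cbs_two_point(-0.27500, 3, -0.28200, 4)
```

    The extrapolated value lies below the larger-basis result, as expected for the monotonically convergent correlation energy; single-level schemes of the kind the review examines aim to get comparable accuracy from just one raw energy.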

  9. Quantitative Cross-Species Extrapolation between Humans and Fish: The Case of the Anti-Depressant Fluoxetine

    Science.gov (United States)

    Margiotta-Casaluci, Luigi; Owen, Stewart F.; Cumming, Rob I.; de Polo, Anna; Winter, Matthew J.; Panter, Grace H.; Rand-Weaver, Mariann; Sumpter, John P.

    2014-01-01

    Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of measured internal dose response effect of a pharmaceutical in fish, hence validating the Read-Across hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool to guide the

  10. Quantitative cross-species extrapolation between humans and fish: the case of the anti-depressant fluoxetine.

    Directory of Open Access Journals (Sweden)

    Luigi Margiotta-Casaluci

    Full Text Available Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in humans and fish at a similar level of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, within and above the range of Human Therapeutic Plasma Concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was comparable to that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across Hypothesis as applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool
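    The core decision rule of the Read-Across Hypothesis, positioning a measured fish plasma concentration relative to the human therapeutic plasma concentration range, can be sketched as code; the HTPC bounds below are placeholders for illustration, not values taken from the study.

```python
def read_across_flag(fish_plasma, htpc_low, htpc_high):
    """Position a measured fish plasma concentration relative to the
    human therapeutic plasma concentration (HTPC) range; all values
    must share the same units."""
    if fish_plasma < htpc_low:
        return "below HTPC range"
    if fish_plasma > htpc_high:
        return "above HTPC range"
    return "within HTPC range"

# Placeholder HTPC bounds in ug/mL (illustrative only)
low, high = 0.03, 0.5
for c in (0.01, 0.1, 1.0):
    print(c, "->", read_across_flag(c, low, high))
```

    Under the hypothesis, target-mediated effects are expected only when the measured fish plasma concentration reaches the HTPC range, which matches the study's finding that anxiolytic responses appeared only above it.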

  11. Spatial extrapolation of light use efficiency model parameters to predict gross primary production

    Directory of Open Access Journals (Sweden)

    Karsten Schulz

    2011-12-01

    Full Text Available To capture the spatial and temporal variability of gross primary production, a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has been shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land-class-dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained from meteorological measurements, ecological estimates and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, leading to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data, ultimately allowing a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to capture the variability of gross primary production across the study sites.
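    The regionalization step described above, learning a map from site characteristics to a calibrated model parameter, can be sketched with kernel ridge regression as a simple stand-in for support vector regression; the site attributes and the target "maximum light use efficiency" values below are synthetic.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise squared distances -> Gaussian (RBF) kernel matrix
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=1e-2, gamma=0.5):
    """Kernel ridge regression: solve (K + lam*I) alpha = y."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

rng = np.random.default_rng(0)
# Toy "site characteristics" (e.g. normalized mean temperature, fAPAR)
X = rng.uniform(0, 1, size=(40, 2))
# Synthetic parameter to be regionalized (a smooth function of the attributes)
eps_max = 1.2 + 0.8 * X[:, 0] - 0.5 * X[:, 1]
predict = fit_krr(X, eps_max)
# Estimate the parameter at an "uncalibrated site" with known attributes
print(predict(np.array([[0.5, 0.5]])))
```

    The prediction at (0.5, 0.5) should land close to the true synthetic value of 1.35, illustrating how a parameter can be estimated at sites without eddy covariance data.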

  12. NEW TECHNIQUES APPLIED IN ECONOMICS. ARTIFICIAL NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    Constantin Ilie

    2009-05-01

    Full Text Available The present paper aims to inform the public about the use of new techniques for the modeling, simulation and forecasting of systems from different fields of activity. One of those techniques is the Artificial Neural Network, one of the artificial intelligence techniques

  13. Application of Avco data analysis and prediction techniques (ADAPT) to prediction of sunspot activity

    Science.gov (United States)

    Hunter, H. E.; Amato, R. A.

    1972-01-01

    The results of applying Avco Data Analysis and Prediction Techniques (ADAPT) to the derivation of new algorithms for the prediction of future sunspot activity are presented. The ADAPT-derived algorithms show a factor of 2 to 3 reduction in the expected 2-sigma errors in the estimates of the 81-day running average of the Zurich sunspot numbers. The report presents: (1) the best estimates for sunspot cycles 20 and 21, (2) a comparison of the ADAPT performance with conventional techniques, and (3) specific approaches to further reduction of the errors in estimated sunspot activity and to recovery of earlier sunspot historical data. The ADAPT programs are used both to derive regression algorithms for predicting an entire 11-year sunspot cycle from the preceding two cycles and to derive extrapolation algorithms for extrapolating a given sunspot cycle from any available portion of the cycle.
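    The 81-day running average referred to above is a simple centered moving mean. A minimal sketch, with a synthetic daily series standing in for Zurich sunspot numbers:

```python
import numpy as np

def running_mean(x, window=81):
    """Centered 81-day running average used to smooth daily sunspot numbers."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

days = np.arange(365)
# Toy daily series: base level plus a 27-day solar-rotation modulation
ssn = 60 + 20 * np.sin(2 * np.pi * days / 27)
smooth = running_mean(ssn)
print(smooth.round(1)[:3])  # the 27-day modulation averages out to 60.0
```

    Because 81 days spans exactly three 27-day rotations, the modulation cancels and the smoothed series sits at the base level; on real data the window instead suppresses rotational and observational noise while preserving the cycle envelope.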

  14. Diagnostic extrapolation of gross primary production from flux tower sites to the globe

    Science.gov (United States)

    Beer, Christian; Reichstein, Markus; Tomelleri, Enrico; Ciais, Philippe; Jung, Martin; Carvalhais, Nuno; Rödenbeck, Christian; Baldocchi, Dennis; Luyssaert, Sebastiaan; Papale, Dario

    2010-05-01

    The uptake of atmospheric CO2 by plant photosynthesis is the largest global carbon flux and is thought to drive most terrestrial carbon cycle processes. While the photosynthesis processes at the leaf and canopy levels are quite well understood, so far only very crude estimates of its global integral, the Gross Primary Production (GPP), can be found in the literature, and existing estimates have lacked a sound empirical basis. Reasons for such limitations lie in the absence of direct estimates of ecosystem-level GPP and in methodological difficulties in scaling local carbon flux measurements to the global scale across heterogeneous vegetation. Here, we present global estimates of GPP based on different diagnostic approaches. These up-scaling schemes integrated high-resolution remote sensing products, such as land cover, the fraction of absorbed photosynthetically active radiation (fAPAR) and leaf-area index, with carbon flux measurements from the global network of eddy covariance stations (FLUXNET). In addition, meteorological datasets from diverse sources and river runoff observations were used. All of these approaches were also capable of estimating uncertainties. With six novel or newly parameterized and highly diverse up-scaling schemes we consistently estimated a global GPP of 122 Pg C y-1. In quantifying the total uncertainties, we considered uncertainties arising from the measurement technique and data processing (i.e. partitioning into GPP and respiration). Furthermore, we accounted for the uncertainties of drivers and the structural uncertainties of the extrapolation approach. The total propagation led to a global uncertainty of 15 % of the mean value. Although our mean GPP estimate of 122 Pg C y-1 is similar to the previous postulate by the Intergovernmental Panel on Climate Change in 2001, we estimated a different variability among ecoregions. The tropics accounted for 32 % of GPP showing a greater importance of tropical ecosystems for the global carbon

  15. Extrapolation for exposure duration in oral toxicity: A quantitative analysis of historical toxicity data

    NARCIS (Netherlands)

    Groeneveld, C.N.; Hakkert, B.C.; Bos, P.M.J.; Heer, C.de

    2004-01-01

    For human risk assessment, experimental data often have to be extrapolated for exposure duration, which is generally done by means of default values. The purpose of the present study was twofold. First, to derive a statistical distribution for differences in exposure duration that can be used in a

  16. Hazard characterisation of chemicals in food and diet : dose response, mechanisms and extrapolation issues

    NARCIS (Netherlands)

    Dybing, E.; Doe, J.; Groten, J.; Kleiner, J.; O'Brien, J.; Renwick, A.G.; Schlatter, J.; Steinberg, P.; Tritscher, A.; Walker, R.; Younes, M.

    2002-01-01

    Hazard characterisation of low molecular weight chemicals in food and diet generally use a no-observed-adverse-effect level (NOAEL) or a benchmark dose as the starting point. For hazards that are considered not to have thresholds for their mode of action, low-dose extrapolation and other modelling

  17. Time-series-analysis techniques applied to nuclear-material accounting

    International Nuclear Information System (INIS)

    Pike, D.H.; Morrison, G.W.; Downing, D.J.

    1982-05-01

    This document is designed to introduce the reader to the application of time series analysis techniques to nuclear material accountability data. Time series analysis techniques are designed to extract information from a collection of random variables ordered in time by seeking to identify any trends, patterns, or other structure in the series. Since nuclear material accountability data form a time series, one can extract more information using time series analysis techniques than with other statistical techniques. Specifically, the objective of this document is to examine the applicability of time series analysis techniques to enhance loss detection of special nuclear materials. An introductory section examines the current industry approach, which utilizes inventory differences, and presents the error structure of inventory differences. The time series analysis techniques discussed include the Shewhart control chart, the cumulative sum (CUSUM) of inventory-difference statistics, and the Kalman filter and linear smoother.
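    A one-sided CUSUM over standardized inventory differences (IDs) illustrates how a small protracted loss accumulates into an alarm. The allowance k and threshold h below are conventional textbook choices, and the ID values are invented for illustration:

```python
def cusum(id_series, target=0.0, k=0.5, h=5.0, sigma=1.0):
    """One-sided upper CUSUM on standardized inventory differences.
    k: allowance and h: decision threshold, both in units of sigma."""
    s = 0.0
    alarms = []
    for t, x in enumerate(id_series):
        z = (x - target) / sigma
        s = max(0.0, s + z - k)   # accumulate only positive drift
        if s > h:
            alarms.append(t)      # possible protracted loss detected
    return alarms

# Five in-control periods followed by a sustained 1.5-sigma shift
ids = [0.2, -0.1, 0.3, 0.0, -0.2, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5, 1.5]
print(cusum(ids))  # → [10, 11]
```

    A Shewhart chart would flag only individual IDs exceeding roughly 3 sigma and would miss this shift entirely; the CUSUM trades immediate detection of large losses for sensitivity to small sustained ones.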

  18. Applying the GNSS Volcanic Ash Plume Detection Technique to Consumer Navigation Receivers

    Science.gov (United States)

    Rainville, N.; Palo, S.; Larson, K. M.

    2017-12-01

    Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) rely on predictably structured, constant-power RF signals to fulfill their primary use for navigation and timing. When the received strength of GNSS signals deviates from the expected baseline, it is typically due to a change in the local environment. This can occur when signal reflections from the ground are modified by changes in snow or soil moisture content, as well as by attenuation of the signal by volcanic ash. This effect allows GNSS signals to be used as a source for passive remote sensing. Larson et al. (2017) have developed a detection technique for volcanic ash plumes based on the attenuation seen at existing geodetic GNSS sites. Since these existing networks are relatively sparse, the technique has been extended to use lower-cost consumer GNSS receiver chips to enable higher-density measurements of volcanic ash. These low-cost receiver chips have been integrated into a fully stand-alone sensor, with independent power, communications, and logging capabilities, as part of a Volcanic Ash Plume Receiver (VAPR) network. A mesh network of these sensors transmits data to a local base station, which then streams the data in real time to a web-accessible server. Initial testing of this sensor network has shown that a different detection approach is necessary when using consumer GNSS receivers and antennas. The techniques to filter and process the lower-quality data from consumer receivers will be discussed and applied to initial results from a functioning VAPR network installation.

  19. Measurement of the surface field on open magnetic samples by the extrapolation method

    Czech Academy of Sciences Publication Activity Database

    Perevertov, Oleksiy

    2005-01-01

    Roč. 76, - (2005), 104701/1-104701/7 ISSN 0034-6748 R&D Projects: GA ČR(CZ) GP202/04/P010; GA AV ČR(CZ) 1QS100100508 Institutional research plan: CEZ:AV0Z10100520 Keywords : magnetic field measurement * extrapolation * air gaps * magnetic permeability Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.235, year: 2005

  20. Extrapolation of bulk rock elastic moduli of different rock types to high pressure conditions and comparison with texture-derived elastic moduli

    Science.gov (United States)

    Ullemeyer, Klaus; Lokajíček, Tomás; Vasin, Roman N.; Keppler, Ruth; Behrmann, Jan H.

    2018-02-01

    In this study, the elastic moduli of three rock types of simple (calcite marble) and more complex (amphibolite, micaschist) mineralogical composition were determined by modeling based on texture (crystallographic preferred orientation; CPO) data, by experimental investigation, and by extrapolation. 3D models were calculated using single-crystal elastic moduli and CPO measured by time-of-flight neutron diffraction at the SKAT diffractometer in Dubna (Russia) and subsequently analyzed using Rietveld Texture Analysis. To identify extrinsic factors influencing elastic behaviour, P-wave and S-wave velocity anisotropies were experimentally determined at 200, 400 and 600 MPa confining pressure. Functions describing the variation of the elastic moduli with confining pressure were then used to predict elastic properties at 1000 MPa, revealing anisotropies in a supposedly crack-free medium. In the calcite marble, elastic anisotropy is dominated by the CPO. Velocities continuously increase, while anisotropies decrease, from measured, over extrapolated, to CPO-derived data. Differences in velocity patterns with sample orientation suggest that the foliation forms an important mechanical anisotropy. The amphibolite sample shows similar magnitudes of extrapolated and CPO-derived velocities; however, the pattern of the CPO-derived velocity is closer to that measured at 200 MPa. Anisotropy decreases from the extrapolated to the CPO-derived data. In the micaschist, velocities are higher and anisotropies are lower in the extrapolated data in comparison to the data from measurements at lower pressures. Generally, our results show that predictions of the elastic behavior of rocks at great depths are possible based on experimental data and on data computed from CPO. The elastic properties of the lower crust can thus be characterized with an improved degree of confidence using extrapolations. Anisotropically distributed spherical micro-pores are likely to be preserved, affecting
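    The pressure-extrapolation step can be illustrated with a deliberately simplified sketch: hypothetical P-wave velocities at the three measured confining pressures, with the nearly crack-free high-pressure limb treated as linear and extended to 1000 MPa. Real studies typically fit a combined linear-plus-exponential crack-closure function instead.

```python
import numpy as np

# Hypothetical P-wave velocities (km/s) at the confining pressures used
# in the study; values are placeholders, not measured data.
P = np.array([200.0, 400.0, 600.0])   # MPa
vp = np.array([6.10, 6.30, 6.38])

# Fit the high-pressure limb (cracks assumed closed) with the two
# highest pressures only, then extrapolate the trend to 1000 MPa
slope, intercept = np.polyfit(P[1:], vp[1:], 1)
vp_1000 = slope * 1000.0 + intercept
print(round(vp_1000, 2))  # → 6.54
```

    The steep low-pressure rise (crack closure) is deliberately excluded from the fit, so the extrapolated value approximates the intrinsic, crack-free velocity that the CPO-derived models aim to reproduce.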

  1. How Can Synchrotron Radiation Techniques Be Applied for Detecting Microstructures in Amorphous Alloys?

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

    Full Text Available This work studies how synchrotron radiation techniques can be applied to detect the microstructure of metallic glass (MG). Unit cells are the basic structural units in crystals, whereas in MGs the co-existence of various clusters has been suggested to be the universal structural feature. It is therefore a challenge to detect the microstructure of an MG, even at the short-range scale, by directly using synchrotron radiation techniques such as X-ray diffraction and X-ray absorption. Here, a feasible scheme is developed in which state-of-the-art synchrotron radiation-based experiments are combined with simulations to investigate the microstructure of MGs. By studying a typical MG composition (Zr70Pd30), it is found that various clusters do co-exist in its microstructure, with icosahedral-like clusters being the most common structural units. This is the structural origin of the precipitation of an icosahedral quasicrystalline phase prior to the glass-to-crystal transformation when Zr70Pd30 MG is heated.

  2. A new mini-extrapolation chamber for beta source uniformity measurements

    International Nuclear Information System (INIS)

    Oliveira, M.L.; Caldas, L.V.E.

    2006-01-01

    According to recent international recommendations, beta particle sources should be specified in terms of absorbed dose rate to water at the reference point. However, because of the clinical use of these sources, additional information, including the source uniformity, should be supplied in the calibration reports. A new small-volume extrapolation chamber was designed and constructed at the Calibration Laboratory of the Instituto de Pesquisas Energeticas e Nucleares (IPEN), Brazil, for the calibration of 90Sr+90Y ophthalmic plaques. This chamber can be used as a primary standard for the calibration of this type of source. Recent additional studies showed that this chamber can also be used to perform source uniformity measurements: because of the small effective electrode area, independent measurements can be made by moving the chamber in small steps. The aim of the present work was to study the uniformity of a 90Sr+90Y plane ophthalmic plaque using the mini extrapolation chamber developed at IPEN. The uniformity measurements were performed by moving the chamber in steps of 2 mm along the source central axes (x- and y-directions) and in steps of 3 mm off-axis. The results showed that this small-volume chamber can be used for this purpose with a great advantage: it is a direct method, requiring no previous calibration of the measuring device against a reference instrument, and it provides real-time results, reducing the time needed for the study and the uncertainties related to the measurements. (authors)

  3. Improving in vitro to in vivo extrapolation by incorporating toxicokinetic measurements: A case study of lindane-induced neurotoxicity

    Energy Technology Data Exchange (ETDEWEB)

    Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.; Mundy, William R.; Eklund, Chris R.; Johnstone, Andrew F.M.; Mack, Cina M.; Pegram, Rex A., E-mail: pegram.rex@epa.gov

    2015-02-15

    Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC{sub 50} for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC{sub 50} were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC{sub 50} dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC

  4. Applying advanced digital signal processing techniques in industrial radioisotopes applications

    International Nuclear Information System (INIS)

    Mahmoud, H.K.A.E.

    2012-01-01

    Radioisotopes can be used to obtain signals or images that reveal what happens inside industrial systems. The main problems with these techniques are the difficulty of identifying the obtained signals or images and the need for skilled experts to interpret the output data. At present, this interpretation is performed mainly manually, depending heavily on the skills and experience of trained operators; the process is time consuming and the results typically suffer from inconsistency and errors. The objective of the thesis is to apply advanced digital signal processing techniques to improve the treatment and interpretation of the output data from different Industrial Radioisotope Applications (IRA). This thesis focuses on two IRAs: Residence Time Distribution (RTD) measurement and defect inspection of welded pipes using a gamma source (gamma radiography). For the RTD measurement application, the thesis presents methods for signal pre-processing and modeling of RTD signals. Simulation results are presented for two case studies: a laboratory experiment measuring the RTD in a water flow rig, and an experiment measuring the RTD in a phosphate production unit. The thesis proposes an approach for RTD signal identification in the presence of noise. In this approach, after signal processing, the Mel Frequency Cepstral Coefficients (MFCCs) and polynomial coefficients are extracted from the processed signal or from one of its transforms. The Discrete Wavelet Transform (DWT), Discrete Cosine Transform (DCT), and Discrete Sine Transform (DST) have been tested and compared for efficient feature extraction. Neural networks have been used for matching of the extracted features. Furthermore, the Power Density Spectrum (PDS) of the RTD signal has been also used instead of the discrete
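    As an illustration of the transform-domain feature extraction mentioned above, a naive DCT-II applied to a synthetic RTD-like curve concentrates most of the signal energy in the first few coefficients; the signal shape and the number of retained coefficients are arbitrary illustrative choices.

```python
import numpy as np

def dct_ii(x):
    """Naive O(N^2) DCT-II, one of the transforms compared for feature extraction."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(N)])

t = np.linspace(0, 10, 64)
rtd = t**2 * np.exp(-t)        # toy gamma-shaped tracer response curve
coeffs = dct_ii(rtd)
features = coeffs[:8]          # keep only the low-order coefficients
share = (features**2).sum() / (coeffs**2).sum()
print(round(float(share), 4))
```

    Because the RTD curve is smooth, almost all of its energy is captured by the low-order coefficients, which is why a short DCT feature vector suffices as input to the neural network matcher.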

  5. Image computing techniques to extrapolate data for dust tracking in case of an experimental accident simulation in a nuclear fusion plant.

    Science.gov (United States)

    Camplani, M; Malizia, A; Gelfusa, M; Barbato, F; Antonelli, L; Poggi, L A; Ciparisse, J F; Salgado, L; Richetta, M; Gaudio, P

    2016-01-01

    In this paper, a preliminary shadowgraph-based analysis of dust particle re-suspension due to a loss of vacuum accident (LOVA) in ITER-like nuclear fusion reactors is presented. Dust particles are produced through different mechanisms in nuclear fusion devices, and one of the main issues is that they can be re-suspended by events such as a LOVA. Shadowgraphy is based on an expanded collimated beam of light, emitted by a laser or a lamp, directed transversely to the flow direction. In the STARDUST facility, the dust moving in the flow causes variations of the refractive index that can be detected with a CCD camera. The STARDUST fast camera setup allows dust particles moving in the vessel to be detected and tracked, and thus yields information about the velocity field of the mobilized dust. In particular, the acquired images are processed so that in each frame the moving dust particles are detected by applying a background subtraction technique based on the mixture of Gaussians algorithm. The resulting foreground masks are then filtered with morphological operations. Finally, a multi-object tracking algorithm is used to track the detected particles throughout the experiment. A Kalman filter-based tracker is applied to each particle, with the particle dynamics described by position, velocity, and acceleration as state variables. The results demonstrate that it is possible to obtain the dust particle velocity field during a LOVA by automatically processing the data obtained with the shadowgraph approach.
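    The per-particle tracker described above can be sketched as a constant-acceleration Kalman filter. For brevity the sketch is one-dimensional (position-only measurements), and the noise covariances and pixel positions are made up:

```python
import numpy as np

dt = 1.0  # frame interval
# Constant-acceleration state [position, velocity, acceleration];
# the camera measures position only
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1, dt],
              [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0]])
Q = 1e-3 * np.eye(3)    # process noise (illustrative)
R = np.array([[0.25]])  # measurement noise, pixels^2 (illustrative)

x = np.zeros(3)
P = np.eye(3)
track = []
for z in [0.0, 1.1, 3.9, 9.2, 15.8]:  # noisy per-frame particle positions
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new detection
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z]) - H @ x)).ravel()
    P = (np.eye(3) - K @ H) @ P
    track.append(float(x[0]))
print(np.round(track, 1))
```

    In the actual multi-object setting, one such filter runs per detected particle and the predicted positions are matched to new foreground detections frame by frame.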

  6. A line array based near field imaging technique for characterising acoustical properties of elongated targets

    NARCIS (Netherlands)

    Driessen, F.P.G.

    1995-01-01

    With near field imaging techniques the acoustical pressure waves at distances other than the recorded can be calculated. Normally, acquisition on a two dimensional plane is necessary and extrapolation is performed by a Rayleigh integral. A near field single line instead of two dimensional plane

  7. Extrapolation of systemic bioavailability assessing skin absorption and epidermal and hepatic metabolism of aromatic amine hair dyes in vitro.

    Science.gov (United States)

    Manwaring, John; Rothe, Helga; Obringer, Cindy; Foltz, David J; Baker, Timothy R; Troutman, John A; Hewitt, Nicola J; Goebel, Carsten

    2015-09-01

    Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as they offer an opportunity to refine that assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound were assessed in human skin explants and in human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin, the following toxicokinetically relevant parameters were applied: a) Michaelis-Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated mean residence time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In the next step, in vitro hepatocyte Km and Vmax values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and the fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and Cmax was extrapolated (as a conservative overestimate) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. Copyright © 2015. Published by Elsevier Inc.
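    The hepatic-clearance scaling step can be sketched with the well-stirred liver model. All numerical values below (enzyme kinetics, hepatocellularity, liver mass, blood flow, fraction unbound, internal dose) are hypothetical placeholders, not values from the study:

```python
def hepatic_clearance(vmax, km, c_unbound, hpgl, liver_g, q_h, fu):
    """Scale in vitro hepatocyte kinetics to whole-body hepatic clearance
    with the well-stirred liver model.
    vmax: pmol/min/10^6 cells, km and c_unbound: uM,
    hpgl: 10^6 hepatocytes per gram liver, liver_g: liver mass (g),
    q_h: hepatic blood flow (mL/min), fu: fraction unbound in blood."""
    clint_cell = vmax / (km + c_unbound)          # uL/min/10^6 cells
    clint = clint_cell * hpgl * liver_g / 1000.0  # scale up to mL/min
    return q_h * fu * clint / (q_h + fu * clint)  # well-stirred model

# Hypothetical inputs, for illustration only
cl_h = hepatic_clearance(vmax=200.0, km=10.0, c_unbound=1.0,
                         hpgl=120.0, liver_g=1800.0, q_h=1500.0, fu=0.1)
dose_internal = 2000.0       # ug reaching the circulation through skin
auc = dose_internal / cl_h   # ug*min/mL
print(round(cl_h, 1), round(auc, 2))
```

    The division of internal dose by hepatic clearance in the last step mirrors the AUC extrapolation described in the abstract; the well-stirred model caps clearance at hepatic blood flow, so highly metabolized compounds become flow-limited.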

  8. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available The popularity of cloud computing is increasing steadily. The purpose of this research was to enhance the security of the cloud using techniques such as data mining, with specific reference to the single cache system. The findings of the research show that security in the cloud can be enhanced with the single cache system. In future work, an Apriori algorithm can be applied to the single cache system; this can be done by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  9. Ion backscattering techniques applied in materials science research

    International Nuclear Information System (INIS)

    Sood, D.K.

    1978-01-01

    The applications of the Ion Backscattering Technique (IBT) to materials analysis have expanded rapidly during the last decade. It is now regarded as an analysis tool indispensable for a versatile materials research program. The technique consists of simply shooting a beam of monoenergetic ions (usually 4He+ ions at about 2 MeV) onto a target and measuring their energy distribution after backscattering at a fixed angle. Simple Rutherford scattering analysis of the backscattered ion spectrum yields information on the mass, the absolute amount and the depth profile of the elements present in the top few microns of the target surface. The technique is nondestructive, quick, quantitative and the only known method of analysis which gives quantitative results without recourse to calibration standards. Its major limitations are the inability to separate elements of similar mass and a complete absence of chemical-binding information. A typical experimental set-up and spectrum analysis are described. Examples, some of them based on work at the Bhabha Atomic Research Centre, Bombay, are given to illustrate the applications of this technique to semiconductor technology, thin-film materials science and nuclear energy materials. Limitations of IBT are illustrated and a few remedies to partly overcome these limitations are presented. (auth.)
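    The mass sensitivity described above comes from the kinematic factor K = E1/E0 of elastic backscattering, which depends only on the projectile-to-target mass ratio and the scattering angle. The sketch below evaluates the standard formula for 4He backscattered at 170 degrees from Si and Au:

```python
import math

def kinematic_factor(M1, M2, theta_deg):
    """Elastic-scattering kinematic factor K = E1/E0 for a projectile of
    mass M1 backscattered by a target atom of mass M2 (M2 > M1) at the
    laboratory scattering angle theta."""
    t = math.radians(theta_deg)
    root = math.sqrt(M2**2 - (M1 * math.sin(t))**2)
    return ((root + M1 * math.cos(t)) / (M1 + M2))**2

# 4He (M1 = 4) at a 170-degree scattering angle: heavier target atoms
# return more of the beam energy, which is what separates masses
# in the backscattering spectrum
print(round(kinematic_factor(4, 28, 170), 3))   # Si → 0.565
print(round(kinematic_factor(4, 197, 170), 3))  # Au → 0.923
```

    The small energy separation between neighbouring heavy masses (K changes slowly with M2 at large M2) is precisely the "similar mass" limitation noted in the abstract.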

  10. Markov chain Monte Carlo techniques applied to parton distribution functions determination: Proof of concept

    Science.gov (United States)

    Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane

    2017-07-01

    We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
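    A minimal sketch of the Hybrid/Hamiltonian Monte Carlo algorithm named above, sampling a one-dimensional standard normal rather than a PDF parameter space; the step size, trajectory length and sample count are arbitrary illustrative choices.

```python
import numpy as np

def hmc_sample(logp_grad, x0, n=2000, eps=0.2, L=10, seed=0):
    """Minimal Hybrid/Hamiltonian Monte Carlo for a 1-D target.
    logp_grad(x) must return (log p(x), d log p / dx)."""
    rng = np.random.default_rng(seed)
    x = x0
    out = []
    lp, g = logp_grad(x)
    for _ in range(n):
        p = rng.normal()                    # resample momentum
        x_new, g_new = x, g
        p_new = p + 0.5 * eps * g_new       # half momentum step
        for i in range(L):                  # leapfrog integration
            x_new = x_new + eps * p_new
            lp_new, g_new = logp_grad(x_new)
            if i < L - 1:
                p_new = p_new + eps * g_new
        p_new = p_new + 0.5 * eps * g_new   # final half step
        # Metropolis accept/reject on the total "energy" H = -log p + p^2/2
        h_old = -lp + 0.5 * p * p
        h_new = -lp_new + 0.5 * p_new * p_new
        if rng.random() < np.exp(h_old - h_new):
            x, lp, g = x_new, lp_new, g_new
        out.append(x)
    return np.array(out)

std_normal = lambda x: (-0.5 * x * x, -x)
samples = hmc_sample(std_normal, x0=0.0)
print(round(float(samples.mean()), 2), round(float(samples.std()), 2))
```

    The gradient-guided leapfrog trajectories are what keep the acceptance rate high in many dimensions, which is the property the paper exploits for the high-dimensional PDF parameter space.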

  11. Direct observations of the viscosity of Earth's outer core and extrapolation of measurements of the viscosity of liquid iron

    International Nuclear Information System (INIS)

    Smylie, D E; Brazhkin, Vadim V; Palmer, Andrew

    2009-01-01

    Estimates vary widely as to the viscosity of Earth's outer fluid core. Directly observed viscosities are usually orders of magnitude higher than the values extrapolated from high-pressure, high-temperature laboratory experiments, which are close to those for liquid iron at atmospheric pressure. It turns out that this discrepancy can be removed by extrapolating via the widely known Arrhenius activation model, modified by lifting the commonly used assumption of a pressure-independent activation volume; this is motivated by the discovery that at high pressures the activation volume increases strongly with pressure, resulting in 10^2 Pa s at the top of the fluid core and 10^11 Pa s at its bottom. There are of course many uncertainties affecting this extrapolation process. This paper reviews two viscosity determination methods, one for the top and the other for the bottom of the outer core: the former relies on the decay of free core nutations and yields 2371 ± 1530 Pa s, while the latter relies on the reduction in the rotational splitting of the two equatorial translational modes of the solid inner core oscillations and yields an average of (1.247 ± 0.035) × 10^11 Pa s. Encouraged by the good performance of the Arrhenius extrapolation, a differential form of the Arrhenius activation model is used to interpolate along the melting temperature curve and to find the viscosity profile across the entire outer core. The viscosity variation is found to be nearly log-linear between the measured boundary values. (methodological notes)
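    The modified Arrhenius model can be sketched as follows. The activation energy, prefactor and the exponential pressure dependence of the activation volume are invented for illustration, tuned only to reproduce the orders of magnitude quoted above:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def viscosity(P_gpa, T, eta0=1e-3, E_a=100e3, V0=1.375e-6, beta=0.00335):
    """Arrhenius viscosity with a pressure-dependent activation volume
    V(P) = V0 * exp(beta * P); all constants here are illustrative,
    not fitted values from the paper."""
    V = V0 * math.exp(beta * P_gpa)       # m^3/mol, grows with pressure
    return eta0 * math.exp((E_a + P_gpa * 1e9 * V) / (R * T))

top = viscosity(135.0, 4100.0)     # near the core-mantle boundary
bottom = viscosity(330.0, 5500.0)  # near the inner-core boundary
print(f"{top:.1e} Pa s, {bottom:.1e} Pa s")
```

    With a pressure-independent activation volume the same formula would change by only a few orders of magnitude across the core; letting V grow with P is what produces the roughly nine-decade contrast between the top and bottom values.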

  12. Enhanced performance of CdS/CdTe thin-film devices through temperature profiling techniques applied to close-spaced sublimation deposition

    Energy Technology Data Exchange (ETDEWEB)

    Xiaonan Li; Sheldon, P.; Moutinho, H.; Matson, R. [National Renewable Energy Lab., Golden, CO (United States)

    1996-05-01

    The authors describe a methodology developed and applied to the close-spaced sublimation technique for thin-film CdTe deposition. The developed temperature profiles consisted of three discrete temperature segments, which the authors called the nucleation, plugging, and annealing temperatures. They have demonstrated that these temperature profiles can be used to grow large-grain material, plug pinholes, and improve CdS/CdTe photovoltaic device performance by about 15%. The improved material and device properties have been obtained while maintaining deposition temperatures compatible with commercially available substrates. This temperature profiling technique can be easily applied to a manufacturing environment by adjusting the temperature as a function of substrate position instead of time.

  13. Dynamic Aperture Extrapolation in Presence of Tune Modulation

    CERN Document Server

    Giovannozzi, Massimo; Todesco, Ezio

    1998-01-01

    In hadron colliders, such as the Large Hadron Collider (LHC) to be built at CERN, the long-term stability of the single-particle motion is mostly determined by the field-shape quality of the superconducting magnets. The mechanism of particle loss may be largely enhanced by modulation of the betatron tunes, induced either by synchro-betatron coupling (via the residual uncorrected chromaticity) or by unavoidable power supply ripple. This harmful effect is investigated in a simple dynamical system model, the Henon map with modulated linear frequencies. Then, a realistic accelerator model describing the injection optics of the LHC lattice is analyzed. Orbital data obtained with long-term tracking simulations ($10^5$-$10^7$ turns) are post-processed to obtain the dynamic aperture. It turns out that the dynamic aperture can be interpolated using a simple empirical formula, and it decays proportionally to a power of the inverse logarithm of the number of turns. Furthermore, the extrapolation of tracking data at $10^5$ t...
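    The inverse-logarithm decay law mentioned above has the form D(N) = A + B/(log10 N)^κ. A sketch of fitting it to synthetic tracking data and extrapolating to 10^7 turns, scanning κ on a grid and solving the remaining linear problem in (A, B) by least squares:

```python
import numpy as np

def fit_da(N, D, kappas=np.linspace(0.5, 3.0, 26)):
    """Fit D(N) = A + B / (log10 N)**kappa: scan kappa on a grid and
    solve for (A, B) by linear least squares at each grid point."""
    best = None
    for k in kappas:
        X = np.column_stack([np.ones_like(D), np.log10(N) ** -k])
        coef, *_ = np.linalg.lstsq(X, D, rcond=None)
        r = ((X @ coef - D) ** 2).sum()
        if best is None or r < best[0]:
            best = (r, coef[0], coef[1], k)
    return best[1:]  # A, B, kappa

N = np.logspace(3, 5, 9)             # 10^3 .. 10^5 turns of tracking data
D = 6.0 + 30.0 / np.log10(N) ** 1.5  # synthetic dynamic aperture (in sigma)
A, B, k = fit_da(N, D)
D_1e7 = A + B / 7.0 ** k             # extrapolate to 10^7 turns
print(round(float(A), 2), round(float(k), 2), round(float(D_1e7), 2))
```

    The asymptotic term A is the predicted dynamic aperture for infinitely many turns, so the fit turns affordable medium-length tracking runs into a long-term stability estimate.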

  14. Application of the EXtrapolated Efficiency Method (EXEM) to infer the gamma-cascade detection efficiency in the actinide region

    Energy Technology Data Exchange (ETDEWEB)

    Ducasse, Q. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); CEA-Cadarache, DEN/DER/SPRC/LEPh, 13108 Saint Paul lez Durance (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Mathieu, L.; Marini, P. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Morillon, B. [CEA DAM DIF, 91297 Arpajon (France); Aiche, M.; Tsekhanovich, I. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France)

    2016-08-01

    The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the {sup 238}U(d,p){sup 239}U and {sup 238}U({sup 3}He,d){sup 239}Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of {sup 239}Np below the neutron separation energy allowed us to validate the EXEM.

  15. Application of the EXtrapolated Efficiency Method (EXEM) to infer the gamma-cascade detection efficiency in the actinide region

    International Nuclear Information System (INIS)

    Ducasse, Q.; Jurado, B.; Mathieu, L.; Marini, P.; Morillon, B.; Aiche, M.; Tsekhanovich, I.

    2016-01-01

    The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the ²³⁸U(d,p)²³⁹U and ²³⁸U(³He,d)²³⁹Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of ²³⁹Np below the neutron separation energy allowed us to validate the EXEM.

  16. Emotional experience is subject to social and technological change: extrapolating to the future

    OpenAIRE

    Scherer, Klaus R.

    2001-01-01

    While the emotion mechanism is generally considered to be evolutionarily continuous, suggesting a certain degree of universality of emotional responding, there is evidence that emotional experience may differ across cultures and historical periods. This article extrapolates potential changes in future emotional experiences that can be expected to be caused by rapid social and technological change. Specifically, four issues are discussed: (1) the effect of social change on emotions that are st...

  17. Cross-Species Extrapolation of Models for Predicting Lead Transfer from Soil to Wheat Grain.

    Directory of Open Access Journals (Sweden)

    Ke Liu

    Full Text Available The transfer of Pb from the soil to crops is a serious food hygiene security problem in China because of industrial, agricultural, and historical contamination. In this study, the characteristics of exogenous Pb transfer from 17 Chinese soils to a popular wheat variety (Xiaoyan 22 were investigated. In addition, bioaccumulation prediction models of Pb in grain were obtained based on soil properties. The results of the analysis showed that pH and organic carbon (OC were the most important factors contributing to Pb uptake by wheat grain. Using a cross-species extrapolation approach, the Pb uptake prediction models for cultivar Xiaoyan 22 in different soil Pb levels were satisfactorily applied to six additional non-modeled wheat varieties to develop a prediction model for each variety. Normalization of the bioaccumulation factor (BAF to specific soil physico-chemistry is essential, because doing so could significantly reduce the intra-species variation of different wheat cultivars in predicted Pb transfer and eliminate the influence of soil properties on ecotoxicity parameters for organisms of interest. Finally, the prediction models were successfully verified against published data (including other wheat varieties and crops and used to evaluate the ecological risk of Pb for wheat in contaminated agricultural soils.

  18. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    Science.gov (United States)

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or a radical equation y = B1x^0.5 + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was best fit by a radical equation of the form y = B1x^0.5 + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
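
    As a sketch of the comparison above: both the ordinary least-squares line and the radical equation y = B1·x^0.5 + B0 are linear in their parameters, so each can be fitted with a simple polynomial fit. The recovery numbers below are invented for illustration.

```python
import numpy as np

# Hypothetical analyte recovery (% of initial) over the first 20% of a
# 30-month shelf life, generated from the radical model so the example
# is self-checking.
months = np.array([0.0, 1.0, 2.0, 4.0, 6.0])
recovery = 100.0 - 4.0 * np.sqrt(months)     # rate of change slows over time

# Ordinary least squares on x, and the radical form on sqrt(x):
b1_lin, b0_lin = np.polyfit(months, recovery, 1)
b1_rad, b0_rad = np.polyfit(np.sqrt(months), recovery, 1)

shelf_life = 30.0
pred_linear = b1_lin * shelf_life + b0_lin
pred_radical = b1_rad * np.sqrt(shelf_life) + b0_rad

# The linear extrapolation gives the more conservative (lower) prediction,
# while the radical fit recovers the generating curve exactly here.
print(pred_linear < pred_radical)  # True
```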

  19. Turbulent flux modelling with a simple 2-layer soil model and extrapolated surface temperature applied at Nam Co Lake basin on the Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    T. Gerken

    2012-04-01

    Full Text Available This paper introduces a surface model with two soil-layers for use in a high-resolution circulation model that has been modified with an extrapolated surface temperature, to be used for the calculation of turbulent fluxes. A quadratic temperature profile based on the layer mean and base temperature is assumed in each layer and extended to the surface. The model is tested at two sites on the Tibetan Plateau near Nam Co Lake over four days during the 2009 monsoon season. In comparison to a two-layer model without an explicit surface temperature estimate, there is a greatly reduced delay in diurnal flux cycles and the modelled surface temperature is much closer to observations. Comparison with a SVAT model and eddy covariance measurements shows an overall reasonable model performance based on RMSD and cross correlation comparisons between the modified and original model. A potential limitation of the model is the need for careful initialisation of the soil temperature profile, which requires field measurements. We show that the modified model is capable of reproducing fluxes of similar magnitudes and dynamics when compared to more complex methods chosen as a reference.

  20. Characterization of an extrapolation chamber and radiochromic films for verifying the metrological coherence among beta radiation fields

    International Nuclear Information System (INIS)

    Castillo, Jhonny Antonio Benavente

    2011-01-01

    The metrological coherence among standard systems is a requirement for assuring the reliability of dosimetric quantities measurements in ionizing radiation fields. Scientific and technological improvements occurred in beta radiation metrology with the installation of the new beta secondary standard BSS2 in Brazil and with the adoption of the internationally recommended beta reference radiations. The Dosimeter Calibration Laboratory of the Development Center for Nuclear Technology (LCD/CDTN), in Belo Horizonte, implemented the BSS2, and methodologies are investigated for characterizing the beta radiation fields by determining the field homogeneity, the accuracy and the uncertainties in the absorbed dose in air measurements. In this work, a methodology to be used for verifying the metrological coherence among beta radiation fields in standard systems was investigated; an extrapolation chamber and radiochromic films were used and measurements were done in terms of absorbed dose in air. The reliability of both the extrapolation chamber and the radiochromic film was confirmed and their calibrations were done in the LCD/CDTN in 90 Sr/ 90 Y, 85 Kr and 147 Pm beta radiation fields. The angular coefficients of the extrapolation curves were determined with the chamber; the field mapping and homogeneity were obtained from dose profiles and isodoses with the radiochromic films. A preliminary comparison between the LCD/CDTN and the Instrument Calibration Laboratory of the Nuclear and Energy Research Institute / Sao Paulo (LCI/IPEN) was carried out. Extrapolation chamber measurements of absorbed dose in air rates showed differences between the two laboratories of up to -1% and 3%, for the 90 Sr/ 90 Y, 85 Kr and 147 Pm beta radiation fields, respectively. Results with the EBT radiochromic films for 0.1, 0.3 and 0.15 Gy absorbed dose in air, for the same beta radiation fields, showed differences up to 3%, -9% and -53%. The beta radiation field mappings with

  1. Advanced gamma spectrum processing technique applied to the analysis of scattering spectra for determining material thickness

    International Nuclear Information System (INIS)

    Hoang Duc Tam; VNUHCM-University of Science, Ho Chi Minh City; Huynh Dinh Chuong; Tran Thien Thanh; Vo Hoang Nguyen; Hoang Thi Kieu Trang; Chau Van Tao

    2015-01-01

    In this work, an advanced gamma spectrum processing technique is applied to analyze experimental scattering spectra for determining the thickness of C45 heat-resistant steel plates. The single scattering peak of the scattering spectra is exploited to measure the intensity of single-scattering photons. Based on these results, the thickness of the steel plates is determined with a maximum deviation between real and measured thickness of about 4 %. Monte Carlo simulation using the MCNP5 code is also performed to cross check the results, which yields a maximum deviation of 2 %. These results strongly confirm the capability of this technique in analyzing gamma scattering spectra, which is a simple, effective and convenient method for determining material thickness. (author)

  2. -Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Lee HyunYoung

    2010-01-01

    Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely, symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of discontinuous Galerkin approximations in both the spatial and temporal directions.
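
    The extrapolated Crank-Nicolson idea can be illustrated on a scalar nonlinear ODE (a minimal sketch only; the paper treats Sobolev equations in a discontinuous Galerkin setting): the implicit value f(u^{n+1}) in the trapezoidal rule is replaced by f evaluated at the linear extrapolation 2u^n - u^{n-1}, so each step is explicit in the nonlinearity while second-order accuracy is retained.

```python
import numpy as np

def f(u):
    return -u * u          # test ODE u' = -u^2, exact solution 1/(1+t)

def extrapolated_cn(u0, dt, n_steps):
    u = np.empty(n_steps + 1)
    u[0] = u0
    # One Heun (explicit trapezoid) step supplies the second starting value.
    pred = u0 + dt * f(u0)
    u[1] = u0 + 0.5 * dt * (f(u0) + f(pred))
    for n in range(1, n_steps):
        u_star = 2.0 * u[n] - u[n - 1]            # extrapolated u^{n+1}
        u[n + 1] = u[n] + 0.5 * dt * (f(u[n]) + f(u_star))
    return u

dt, T = 0.01, 1.0
u = extrapolated_cn(1.0, dt, int(T / dt))
error = abs(u[-1] - 1.0 / (1.0 + T))
print(error)  # second-order accurate: error shrinks ~4x when dt is halved
```

    The extrapolation costs one stored past value but removes the nonlinear solve that a standard Crank-Nicolson step would require.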

  3. Basic principles of applied nuclear techniques

    International Nuclear Information System (INIS)

    Basson, J.K.

    1976-01-01

    The technological applications of radioactive isotopes and radiation in South Africa have grown steadily since the first consignment of man-made radioisotopes reached this country in 1948. By the end of 1975 there were 412 authorised non-medical organisations (327 industries) using hundreds of sealed sources as well as their fair share of the thousands of radioisotope consignments, annually either imported or produced locally (mainly for medical purposes). Consequently, it is necessary for South African technologists to understand the principles of radioactivity in order to appreciate the industrial applications of nuclear techniques.

  4. Verification of absorbed dose rates in reference beta radiation fields: measurements with an extrapolation chamber and radiochromic film

    International Nuclear Information System (INIS)

    Reynaldo, S. R.; Benavente C, J. A.; Da Silva, T. A.

    2015-10-01

    Beta Secondary Standard 2 (BSS2) provides beta radiation fields with certified values of absorbed dose to tissue and the derived operational radiation protection quantities. As part of the quality assurance, metrology laboratories are required to verify the reliability of the BSS2 system by performing additional verification measurements. In the CDTN Calibration Laboratory, the absorbed dose rates and their angular variation in the 90 Sr/ 90 Y and 85 Kr beta radiation fields were studied. Measurements were done with a 23392 model PTW extrapolation chamber and with Gafchromic radiochromic films on a PMMA slab phantom. In comparison to the certified values provided by the BSS2, absorbed dose rates measured with the extrapolation chamber differed from -1.4 to 2.9% for the 90 Sr/ 90 Y and -0.3% for the 85 Kr fields; their angular variation showed differences lower than 2% for incidence angles up to 40-degrees and it reached 11% for higher angles, when compared to ISO values. Measurements with the radiochromic film showed an asymmetry of the radiation field that is caused by a misalignment. Differences between the angular variations of absorbed dose rates determined by the two dosimetry systems suggested that correction factors for the extrapolation chamber, not yet considered, should be determined. (Author)

  5. Determination of the most appropriate method for extrapolating overall survival data from a placebo-controlled clinical trial of lenvatinib for progressive, radioiodine-refractory differentiated thyroid cancer.

    Science.gov (United States)

    Tremblay, Gabriel; Livings, Christopher; Crowe, Lydia; Kapetanakis, Venediktos; Briggs, Andrew

    2016-01-01

    Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. A piecewise model, in which the Kaplan-Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and populate future cost-effectiveness analyses.
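
    The selected piecewise form can be sketched as follows, with invented event times, censoring flags and tail rate: the Kaplan-Meier survivor function is used within the 34-month follow-up, and an Exponential tail is attached at the end of follow-up.

```python
import numpy as np

# Toy survival data: follow-up times (months) and event flags (1 = death,
# 0 = censored).  All numbers are invented for the sketch.
times  = np.array([2.0, 5.0, 8.0, 12.0, 15.0, 20.0, 24.0, 30.0, 33.0, 34.0])
events = np.array([1,   1,   0,   1,    1,    0,    1,    0,    1,    0])

# Kaplan-Meier estimate: at each event time the survivor function drops
# by a factor (1 - 1/n_at_risk); censored subjects only leave the risk set.
order = np.argsort(times)
at_risk = len(times)
km_t, km_s = [0.0], [1.0]
s = 1.0
for t, e in zip(times[order], events[order]):
    if e:
        s *= 1.0 - 1.0 / at_risk
        km_t.append(t)
        km_s.append(s)
    at_risk -= 1

def survivor(t, follow_up=34.0, tail_rate=0.02):
    """Piecewise survivor: KM inside follow-up, exponential tail beyond."""
    if t <= follow_up:
        return km_s[np.searchsorted(km_t, t, side="right") - 1]
    return km_s[-1] * np.exp(-tail_rate * (t - follow_up))

print(survivor(10.0))   # KM step value inside the trial period
print(survivor(60.0))   # extrapolated tail beyond follow-up
```

    In the actual analysis the tail rate would be estimated from the trial data (e.g. by maximum likelihood over the final interval) rather than assumed.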

  6. Direct activity determination of Mn-54 and Zn-65 by a non-extrapolation liquid scintillation method

    CSIR Research Space (South Africa)

    Simpson, BRS

    2004-02-01

    Full Text Available . The simple decay scheme exhibited by these radionuclides, with the emission of an energetic gamma ray, allows the absolute activity to be determined from 4pie-gamma data by direct calculation without the need for efficiency extrapolation. The method, which...

  7. Bulk rock elastic moduli at high pressures, derived from the mineral textures and from extrapolated laboratory data

    International Nuclear Information System (INIS)

    Ullemeyer, K; Keppler, R; Lokajíček, T; Vasin, R N; Behrmann, J H

    2015-01-01

    The elastic anisotropy of bulk rock depends on the mineral textures, the crack fabric and external parameters like, e.g., confining pressure. The texture-related contribution to elastic anisotropy can be predicted from the mineral textures; the largely sample-dependent contribution of the other parameters must be determined experimentally. Laboratory measurements of the elastic wave velocities are mostly limited to pressures of the intermediate crust. We describe a method by which the elastic wave velocity trends, and thereby the elastic constants, can be extrapolated to the pressure conditions of the lower crust. The extrapolated elastic constants are compared to the texture-derived ones. Pronounced elastic anisotropy is evident for phyllosilicate minerals; hence, the approach is demonstrated for two phyllosilicate-rich gneisses with approximately identical volume fractions of the phyllosilicates but different texture types. (paper)

  8. Condition monitoring and signature analysis techniques as applied to Madras Atomic Power Station (MAPS) [Paper No.: VIA - 1

    International Nuclear Information System (INIS)

    Rangarajan, V.; Suryanarayana, L.

    1981-01-01

    The technique of vibration signature analysis for identifying machine troubles in their early stages is explained. The advantage is that timely corrective action can be planned to avoid breakdowns and unplanned shutdowns. At the Madras Atomic Power Station (MAPS), this technique is applied to regularly monitor vibrations of equipment and thus serves as a tool for corrective maintenance of equipment. Case studies of application of this technique to main boiler feed pumps, moderator pump motors, centrifugal chiller, ventilation system fans, thermal shield ventilation fans, filtered water pumps, emergency process sea water pumps, and antifriction bearings of MAPS are presented. Condition monitoring during commissioning and subsequent operation could indicate defects. Corrective actions which were taken are described. (M.G.B.)

  9. Large-timestep techniques for particle-in-cell simulation of systems with applied fields that vary rapidly in space

    International Nuclear Information System (INIS)

    Friedman, A.; Grote, D.P.

    1996-10-01

    Under conditions which arise commonly in space-charge-dominated beam applications, the applied focusing, bending, and accelerating fields vary rapidly with axial position, while the self-fields (which are, on average, comparable in strength to the applied fields) vary smoothly. In such cases it is desirable to employ timesteps which advance the particles over distances greater than the characteristic scales over which the applied fields vary. Several related concepts are potentially applicable: sub-cycling of the particle advance relative to the field solution, a higher-order time-advance algorithm, force-averaging by integration along approximate orbits, and orbit-averaging. We report on our investigations into the utility of such techniques for systems typical of those encountered in accelerator studies for heavy-ion beam-driven inertial fusion

  10. Using wavelet denoising and mathematical morphology in the segmentation technique applied to blood cells images.

    Science.gov (United States)

    Boix, Macarena; Cantó, Begoña

    2013-04-01

    Accurate image segmentation is used in medical diagnosis since this technique is a noninvasive pre-processing step for biomedical treatment. In this work we present an efficient segmentation method for medical image analysis. In particular, with this method blood cells can be segmented. For that, we combine the wavelet transform with morphological operations. Moreover, the wavelet thresholding technique is used to eliminate the noise and prepare the image for suitable segmentation. In wavelet denoising we determine the best wavelet that shows a segmentation with the largest area in the cell. We study different wavelet families and we conclude that the wavelet db1 is the best and it can serve for posterior works on blood pathologies. The proposed method produces good results when applied to several images. Finally, the proposed algorithm, implemented in the MATLAB environment, is verified on a selection of blood-cell images.
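
    A minimal, self-contained sketch of such a pipeline, with a one-level 2-D Haar transform standing in for the db1 wavelet (db1 and Haar use the same filter), soft thresholding of the detail coefficients, a fixed segmentation threshold, and a 3x3 morphological opening; the "cell" image and noise are synthetic.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar decomposition into approximation + 3 detail bands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2(a, h, v, d):
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def binary_opening(mask):
    """3x3 erosion followed by 3x3 dilation, via padded shifts."""
    def shifted(m, pad):
        h, w = m.shape
        p = np.pad(m, 1, constant_values=pad)
        return np.stack([p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)])
    eroded = shifted(mask, False).all(axis=0)
    return shifted(eroded, False).any(axis=0)

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0             # the synthetic "cell"
img[1, 1] = img[13, 2] = 1.0      # isolated salt-noise pixels

a, h, v, d = haar2(img)
den = ihaar2(a, soft(h, 0.2), soft(v, 0.2), soft(d, 0.2))
mask = den > 0.5                  # fixed-threshold segmentation
segmented = binary_opening(mask)  # remove residual specks, keep the cell
print(int(segmented.sum()))  # 64
```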

  11. The use of natural analogues in the long-term extrapolation of glass corrosion processes

    International Nuclear Information System (INIS)

    Lutze, W.; Grambow, B.; Ewing, R.C.; Jercinovic, M.J.

    1987-01-01

    One of the most critical aspects of nuclear waste management is the extrapolation of materials and systems behavior from short term experiments, typically on the order of one year, over comparatively very long periods of time. Safety and risk analyses have to rely on extrapolations, and the respective findings have to be evaluated within the framework of licensing procedures. In this unique situation, any source of information that can lend support to the credibility of predicted behavior should be exploited and investigated with great care. There are natural systems, e.g. the Oklo reactor, which can provide evidence of radionuclide migration over very long periods of time and thus help to answer specific questions of interest. Natural glasses and minerals can serve as analogues for both glass and crystalline nuclear waste forms, and the alteration of the natural materials can be studied to infer information on the behavior of the man-made products in geologic environments. This paper reviews most of the work performed by the authors and their colleagues in this field together with information available from literature and discusses the extent to which natural glasses can be used to validate or verify predictions. (author)

  12. Extrapolation of model tests measurements of whipping to identify the dimensioning sea states for container ships

    DEFF Research Database (Denmark)

    Storhaug, Gaute; Andersen, Ingrid Marie Vincent

    2015-01-01

    to small storms. Model tests of three container ships have been carried out in different sea states under realistic assumptions. Preliminary extrapolation of the measured data suggested that moderate storms are dimensioning when whipping is included due to higher maximum speed in moderate storms...

  13. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    Science.gov (United States)

    Mueller, David S.

    2013-01-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity
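
    Although the record is truncated here, a plausible sketch of the underlying idea can be given: a power law v(z) = a·(z/D)^b is fitted to the normalized measured velocities and then supplies velocities in the unmeasured top and bottom portions of the profile. The profile data below are synthetic, and the 1/6 exponent used to generate them is the conventional default for open-channel flow.

```python
import numpy as np

# Normalized height above the bed (z/D) and synthetic normalized velocity
# measurements generated from a 1/6-power profile.
z_norm = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])
v_norm = 1.1 * z_norm ** (1.0 / 6.0)

# The power law is linear in log space: log v = log a + b * log z.
b, log_a = np.polyfit(np.log(z_norm), np.log(v_norm), 1)
a = np.exp(log_a)

# Depth-averaged velocity from the fitted profile: the integral of
# a * z**b over z in [0, 1] is a / (b + 1), which implicitly fills the
# unmeasured top and bottom parts.
v_mean = a / (b + 1.0)
print(round(b, 3))   # exponent recovered from the synthetic profile
```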

  14. Biomechanical study of the funnel technique applied in thoracic ...

    African Journals Online (AJOL)

    of vertebra was made for injury model of anterior and central column ... data were collected to eliminate creep and relaxation of soft tissues in ... Fig. 3: Pullout strength curve for Magerl technique (A) and Funnel technique (B) ...

  15. A simple pulse shape discrimination technique applied to a silicon strip detector

    International Nuclear Information System (INIS)

    Figuera, P.; Lu, J.; Amorini, F.; Cardella, G.; DiPietro, A.; Papa, M.; Musumarra, A.; Pappalardo, G.; Rizzo, F.; Tudisco, S.

    2001-01-01

    Full text: Since the early sixties, it has been known that the shape of signals from solid state detectors can be used for particle identification. Recently, this idea has been revisited in a group of papers where it has been shown that the shape of current signals from solid state detectors is mainly governed by the combination of plasma erosion time and charge carrier collection time effects. We will present the results of a systematic study on a pulse shape identification method which, contrary to previously proposed techniques, is based on the use of the same electronic chain normally used in the conventional time of flight technique. The method is based on the use of charge preamplifiers, low polarization voltages (i.e. just above full depletion ones), rear side injection of the incident particles, and on a proper setting of the constant fraction discriminators which enhances the dependence of the timing output on the rise time of the input signals (which depends on the charge and energy of the incident ions). The method has been applied to an annular Si strip detector with an inner radius of about 16 mm and an outer radius of about 88 mm. The detector, manufactured by Eurisys Measures (Type Ips.73.74.300.N9), is 300 microns thick and consists of 8 independent sectors each divided into 9 circular strips. In-beam tests have been performed at the cyclotron of the Laboratori Nazionali del Sud in Catania using a 25.7 MeV/nucleon 58 Ni beam impinging on a 51 V and 45 Sc composite target. Excellent charge identification from H up to the Ni projectile has been observed and typical charge identification thresholds are: ∼ 1.7 MeV/nucleon for Z ≅ 6, ∼ 3.0 MeV/nucleon for Z ≅ 11, and ∼ 5.5 MeV/nucleon for Z ≅ 20. Isotope identification up to A ≅ 13 has been observed with an energy threshold of about 6 MeV/nucleon. The identification quality has been studied as a function of the constant fraction settings. The method has been applied to all the 72 independent strips

  16. Applying machine-learning techniques to Twitter data for automatic hazard-event classification.

    Science.gov (United States)

    Filgueira, R.; Bee, E. J.; Diaz-Doce, D.; Poole, J., Sr.; Singh, A.

    2017-12-01

    The constant flow of information offered by tweets provides valuable information about all sorts of events at a high temporal and spatial resolution. Over the past year we have been analyzing in real-time geological hazards/phenomena, such as earthquakes, volcanic eruptions, landslides, floods or the aurora, as part of the GeoSocial project, by geo-locating tweets filtered by keywords in a web-map. However, not all the filtered tweets are related to hazard/phenomenon events. This work explores two classification techniques for automatic hazard-event categorization based on tweets about the "Aurora". First, tweets were filtered using aurora-related keywords, removing stop words and selecting the ones written in English. For classifying the remaining tweets into "aurora-event" or "no-aurora-event" categories, we compared two state-of-the-art techniques: Support Vector Machine (SVM) and Deep Convolutional Neural Network (CNN) algorithms. Both approaches belong to the family of supervised learning algorithms, which make predictions based on a labelled training dataset. Therefore, we created a training dataset by assigning 1200 tagged tweets to the two categories. We compared the performance of four different classifiers (Linear Regression, Logistic Regression, Multinomial Naïve Bayes and Stochastic Gradient Descent) provided by the Scikit-Learn library using our training dataset to build the SVM classifier. The results showed that Logistic Regression (LR) gets the best accuracy (87%). So, we selected the SVM-LR classifier to categorise a large collection of tweets using the "dispel4py" framework. Later, we developed a CNN classifier, where the first layer embeds words into low-dimensional vectors. The next layer performs convolutions over the embedded word vectors. Results from the convolutional layer are max-pooled into a long feature vector, which is classified using a softmax layer. 
The CNN's accuracy
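
    As an illustration of one of the classifiers compared above, the following is a from-scratch Multinomial Naive Bayes with Laplace smoothing on invented labelled tweets, a toy stand-in for the scikit-learn pipeline used in the project.

```python
import math
from collections import Counter, defaultdict

# Invented labelled tweets for the two categories used in the project.
train = [
    ("amazing aurora borealis over iceland tonight", "aurora-event"),
    ("green aurora lights dancing in the sky", "aurora-event"),
    ("strong geomagnetic storm aurora visible now", "aurora-event"),
    ("new aurora brand nail polish just arrived", "no-aurora-event"),
    ("listening to the song aurora on repeat", "no-aurora-event"),
    ("aurora is such a pretty baby name", "no-aurora-event"),
]

word_counts = defaultdict(Counter)   # per-class word frequencies
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Class with the highest log posterior under multinomial Naive Bayes."""
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing keeps unseen words from zeroing the product.
            lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

print(predict("aurora visible in the night sky tonight"))  # aurora-event
print(predict("aurora released a new song"))               # no-aurora-event
```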

  17. Applying Mixed Methods Techniques in Strategic Planning

    Science.gov (United States)

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  18. 2D and 3D optical diagnostic techniques applied to Madonna dei Fusi by Leonardo da Vinci

    Science.gov (United States)

    Fontana, R.; Gambino, M. C.; Greco, M.; Marras, L.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.; Sanapo, C.

    2005-06-01

    3D measurement and modelling have been traditionally applied to statues, buildings, archeological sites or similar large structures, but rarely to paintings. Recently, however, 3D measurements have been performed successfully also on easel paintings, making it possible to detect and document the painting's surface. We used 3D models to integrate the results of various 2D imaging techniques on a common reference frame. These applications show how the 3D shape information, complemented with 2D colour maps as well as with other types of sensory data, provides the most interesting information. The 3D data acquisition was carried out by means of two devices: a high-resolution laser micro-profilometer, composed of a commercial distance meter mounted on a scanning device, and a laser-line scanner. The 2D data acquisitions were carried out using a scanning device for simultaneous RGB colour imaging and IR reflectography, and a UV fluorescence multispectral image acquisition system. We present here the results of the techniques described, applied to the analysis of an important painting of the Italian Renaissance: `Madonna dei Fusi', attributed to Leonardo da Vinci.

  19. Molecular Target Homology as a Basis for Species Extrapolation to Assess the Ecological Risk of Veterinary Drugs

    Science.gov (United States)

    Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...

  20. Forests and methane - at the intersection of science and politics, experimentation and extrapolation, objectivity and subjectivity

    International Nuclear Information System (INIS)

    Peyron, Jean-Luc

    2005-01-01

    According to recent information, vegetation is thought to be a major source of methane. This phenomenon had not been contemplated until now and still remains to be explained. According to the authors and on the basis of rough extrapolations, it may cast light on some missing pieces in the global methane balance. The initial reaction by commentators following this discovery was to discuss its consequences on the strategy to fight the greenhouse effect considering methane's considerable impact on global warming. However, a preliminary analysis based on opinions from a range of experts underscores three aspects - the experimental discovery needs to be confirmed and explained before drawing any hasty conclusions; extrapolations performed so far on a global scale are highly inadequate and probably overestimated; implications for fighting the greenhouse effect are limited because the phenomenon in question is a natural one and not extensive enough to offset the benefits of forests as a sink for carbon dioxide. (authors)

  1. Applying modern psychometric techniques to melodic discrimination testing: Item response theory, computerised adaptive testing, and automatic item generation.

    Science.gov (United States)

    Harrison, Peter M C; Collins, Tom; Müllensiefen, Daniel

    2017-06-15

    Modern psychometric theory provides many useful tools for ability testing, such as item response theory, computerised adaptive testing, and automatic item generation. However, these techniques have yet to be integrated into mainstream psychological practice. This is unfortunate, because modern psychometric techniques can bring many benefits, including sophisticated reliability measures, improved construct validity, avoidance of exposure effects, and improved efficiency. In the present research we therefore use these techniques to develop a new test of a well-studied psychological capacity: melodic discrimination, the ability to detect differences between melodies. We calibrate and validate this test in a series of studies. Studies 1 and 2 respectively calibrate and validate an initial test version, while Studies 3 and 4 calibrate and validate an updated test version incorporating additional easy items. The results support the new test's viability, with evidence for strong reliability and construct validity. We discuss how these modern psychometric techniques may also be profitably applied to other areas of music psychology and psychological science in general.

  2. Verification of absorbed dose rates in reference beta radiation fields: measurements with an extrapolation chamber and radiochromic film

    Energy Technology Data Exchange (ETDEWEB)

    Reynaldo, S. R. [Development Centre of Nuclear Technology, Posgraduate Course in Science and Technology of Radiations, Minerals and Materials / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Benavente C, J. A.; Da Silva, T. A., E-mail: sirr@cdtn.br [Development Centre of Nuclear Technology / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)

    2015-10-15

The Beta Secondary Standard 2 (BSS 2) provides beta radiation fields with certified values of absorbed dose to tissue and the derived operational radiation protection quantities. As part of quality assurance, metrology laboratories are required to verify the reliability of the BSS 2 system by performing additional verification measurements. In the CDTN Calibration Laboratory, the absorbed dose rates and their angular variation in the 90Sr/90Y and 85Kr beta radiation fields were studied. Measurements were made with a PTW model 23392 extrapolation chamber and with Gafchromic radiochromic films on a PMMA slab phantom. In comparison to the certificate values provided by the BSS 2, absorbed dose rates measured with the extrapolation chamber differed by -1.4 to 2.9% for the 90Sr/90Y fields and -0.3% for the 85Kr field; their angular variation showed differences lower than 2% for incidence angles up to 40° and reached 11% for higher angles, when compared to ISO values. Measurements with the radiochromic film showed an asymmetry of the radiation field caused by a misalignment. Differences between the angular variations of absorbed dose rates determined by the two dosimetry systems suggested that some correction factors for the extrapolation chamber, not previously considered, remain to be determined. (Author)
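The core idea of an extrapolation chamber, as used in the record above, is to measure the ionization current at several electrode separations and extrapolate to zero gap, where the slope dI/dd is proportional to the surface dose rate. A minimal sketch with made-up readings (not the CDTN measurements, and not the PTW 23392 procedure):

```python
import numpy as np

# Illustrative gap settings and chamber readings (assumed values).
depths = np.array([0.5, 1.0, 1.5, 2.0, 2.5])          # electrode gap, mm
currents = np.array([0.52, 1.01, 1.49, 2.02, 2.51])   # ionization current, pA

# Linear extrapolation: fit I(d) and read off the slope dI/dd;
# the dose rate is proportional to this slope (via W/e and the air mass).
slope, intercept = np.polyfit(depths, currents, 1)

print(f"dI/dd = {slope:.3f} pA/mm, zero-gap intercept = {intercept:.3f} pA")
```

The zero-gap intercept should be close to zero for a well-behaved chamber; a large intercept signals a measurement problem.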

  3. Solution of the finite Milne problem in stochastic media with RVT Technique

    Science.gov (United States)

    Slama, Howida; El-Bedwhey, Nabila A.; El-Depsy, Alia; Selim, Mustafa M.

    2017-12-01

This paper presents the solution of the Milne problem in the steady state with an isotropic scattering phase function. The properties of the medium are considered stochastic, with Gaussian or exponential distributions, and hence the problem is treated as a stochastic integro-differential equation. To obtain explicit forms for the radiant energy density, linear extrapolation distance, reflectivity and transmissivity in the deterministic case, the problem is solved using the Pomraning-Eddington method. The obtained solution is found to depend on the optical space variable and the thickness of the medium, which are considered random variables. The random variable transformation (RVT) technique is used to find the first probability density function (1-PDF) of the solution process. Then the stochastic linear extrapolation distance, reflectivity and transmissivity are calculated. For illustration, numerical results with conclusions are provided.
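The RVT step can be illustrated with a toy case: for a monotone response Y = g(tau) of a random optical thickness tau, the 1-PDF follows from the change-of-variables formula f_Y(y) = f_tau(g^-1(y)) |dg^-1/dy|. The transmissivity law and distribution below are assumptions for illustration only, not the paper's Pomraning-Eddington solution:

```python
import numpy as np

lam = 2.0  # rate of the assumed exponential distribution of tau

def pdf_T(t):
    """1-PDF of T = exp(-tau) by RVT: tau = -ln t, |d tau/dt| = 1/t,
    so f_T(t) = lam * exp(-lam * (-ln t)) / t  (= lam * t**(lam-1))."""
    return lam * np.exp(-lam * (-np.log(t))) / t

# Monte Carlo check that the transformed density matches sampled data.
rng = np.random.default_rng(0)
samples = np.exp(-rng.exponential(1 / lam, size=200_000))
hist, edges = np.histogram(samples, bins=50, range=(0, 1), density=True)
mid = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - pdf_T(mid))))
```

The printed maximum deviation between the histogram and the transformed PDF is small, confirming the change-of-variables result.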

  4. Determination of the bulk melting temperature of nickel using Monte Carlo simulations: Inaccuracy of extrapolation from cluster melting temperatures

    Science.gov (United States)

    Los, J. H.; Pellenq, R. J. M.

    2010-02-01

We have determined the bulk melting temperature Tm of nickel according to a recent interatomic interaction model via Monte Carlo simulation by two methods: extrapolation from cluster melting temperatures based on the Pavlov model (a variant of the Gibbs-Thomson model), and calculation of the liquid and solid Gibbs free energies via thermodynamic integration. The latter, which is the more reliable method, gives Tm = 2010±35 K, to be compared to the experimental value of 1726 K. The cluster extrapolation method, however, gives a value 325 K higher, Tm = 2335 K. This remarkable discrepancy is shown to be due to a barrier for melting, which is associated with a nonwetting behavior.
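A Pavlov/Gibbs-Thomson-type extrapolation is linear in inverse cluster radius, Tm(R) = Tm,bulk (1 - c/R): fitting cluster melting temperatures against 1/R and reading off the intercept at 1/R → 0 gives the bulk estimate. A sketch with invented cluster data (not the nickel simulation results of this record):

```python
import numpy as np

# Hypothetical cluster melting temperatures vs. cluster radius (illustrative).
radii = np.array([1.0, 1.5, 2.0, 3.0])              # nm
tm_cluster = np.array([1200.0, 1500.0, 1650.0, 1800.0])  # K

# Pavlov-type scaling Tm(R) = Tm_bulk * (1 - c/R) is linear in 1/R.
inv_r = 1.0 / radii
slope, intercept = np.polyfit(inv_r, tm_cluster, 1)

tm_bulk = intercept  # extrapolation to 1/R -> 0 (infinite cluster)
print(f"extrapolated bulk melting temperature: {tm_bulk:.0f} K")
```

The record's point is precisely that this extrapolation can overestimate Tm when cluster melting is hindered by a free-energy barrier.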

  5. Personnel contamination protection techniques applied during the TMI-2 [Three Mile Island Unit 2] cleanup

    International Nuclear Information System (INIS)

    Hildebrand, J.E.

    1988-01-01

    The severe damage to the Three Mile Island Unit 2 (TMI-2) core and the subsequent discharge of reactor coolant to the reactor and auxiliary buildings resulted in extremely hostile radiological environments in the TMI-2 plant. High fission product surface contamination and radiation levels necessitated the implementation of innovative techniques and methods in performing cleanup operations while assuring effective as low as reasonably achievable (ALARA) practices. The approach utilized by GPU Nuclear throughout the cleanup in applying protective clothing requirements was to consider the overall health risk to the worker including factors such as cardiopulmonary stress, visual and hearing acuity, and heat stress. In applying protective clothing requirements, trade-off considerations had to be made between preventing skin contaminations and possibly overprotecting the worker, thus impacting his ability to perform his intended task at maximum efficiency and in accordance with ALARA principles. The paper discusses the following topics: protective clothing-general use, beta protection, skin contamination, training, personnel access facility, and heat stress

  6. Relationship Between Magnitude of Applied Spin Recovery Moment and Ensuing Number of Recovery Turns

    Science.gov (United States)

    Anglin, Ernie L.

    1967-01-01

An analytical study has been made to investigate the relationship between the magnitude of the applied spin recovery moment and the ensuing number of turns made during recovery from a developed spin, with a view toward interpolating or extrapolating spin recovery results to determine the amount of control required for a satisfactory recovery. Five configurations were used which are considered representative of modern airplanes: a delta-wing fighter, a stub-wing research vehicle, a boost-glide configuration, a supersonic trainer, and a sweptback-wing fighter. The results obtained indicate that there is a direct relationship between the magnitude of the applied spin recovery moment and the ensuing number of recovery turns, and that this relationship can be expressed in either simple multiplicative or exponential form. Either type of relationship was adequate for interpolating or extrapolating to predict the turns required for recovery with satisfactory accuracy for configurations having relatively steady recovery motions. Any two recoveries from the same developed spin condition can be used as a basis for the predicted results, provided these recoveries are obtained with the same ratio of recovery control deflections. No such predictive method can be expected to give satisfactory results for oscillatory recoveries.
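The multiplicative relationship described in the report can be used exactly as stated: two recoveries from the same developed spin pin down a power law N = a·M^b, which then predicts the number of turns for other moment magnitudes. A sketch with hypothetical numbers (the report gives no specific data here):

```python
import math

# Two hypothetical recoveries from the same developed spin:
# (applied recovery moment in arbitrary units, ensuing recovery turns).
m1, n1 = 1.0, 4.0
m2, n2 = 2.0, 2.5

# Multiplicative (power-law) form N = a * M**b fitted exactly through both points.
b = math.log(n2 / n1) / math.log(m2 / m1)
a = n1 / m1**b

# Extrapolate to a larger recovery moment.
m3 = 3.0
n3 = a * m3**b
print(f"predicted turns at M = {m3}: {n3:.2f}")
```

As the report cautions, such a fit is only meaningful for steady recovery motions, not oscillatory ones.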

  7. Determination of the most appropriate method for extrapolating overall survival data from a placebo-controlled clinical trial of lenvatinib for progressive, radioiodine-refractory differentiated thyroid cancer

    Directory of Open Access Journals (Sweden)

    Tremblay G

    2016-06-01

Full Text Available Gabriel Tremblay,1 Christopher Livings,2 Lydia Crowe,2 Venediktos Kapetanakis,2 Andrew Briggs3 1Global Health Economics and Health Technology Assessment, Eisai Inc., Woodcliff Lake, NJ, USA; 2Health Economics, Decision Resources Group, Bicester, Oxfordshire, 3Health Economics and Health Technology Assessment, Institute of Health and Wellbeing, University of Glasgow, Glasgow, UK Background: Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. Objectives: This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Methods: Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. Results: A piecewise model, in which the Kaplan–Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. Conclusion: In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and
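The selected piecewise model, a Kaplan-Meier survivor function over the trial window joined to an Exponential tail, can be sketched as follows. The survival points and the tail-rate estimate below are invented for illustration and are not SELECT trial values:

```python
import numpy as np

# Illustrative Kaplan-Meier estimates within a 34-month trial window.
times = np.array([4.0, 8.0, 12.0, 18.0, 24.0, 30.0, 34.0])       # months
surv  = np.array([0.95, 0.88, 0.80, 0.72, 0.65, 0.58, 0.54])      # S(t)

t_cut = 34.0        # end of follow-up
s_cut = surv[-1]    # KM survival at the cut point

# Exponential tail S(t) = S(t_cut) * exp(-lam * (t - t_cut)); here lam is
# estimated from the slope of log S over the last few KM points (one simple choice).
lam = -np.polyfit(times[-3:], np.log(surv[-3:]), 1)[0]

def survival(t):
    """Piecewise survivor function: KM step function, then exponential tail."""
    if t <= t_cut:
        return surv[np.searchsorted(times, t, side="right") - 1] if t >= times[0] else 1.0
    return s_cut * np.exp(-lam * (t - t_cut))

print(f"extrapolated 5-year survival: {survival(60.0):.3f}")
```

The join at t_cut is continuous by construction, which is what makes the piecewise form attractive for cost-effectiveness modelling.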

  8. Calibration of a scintillation dosemeter for beta rays using an extrapolation ionization chamber

    International Nuclear Information System (INIS)

    Hakanen, A.T.; Sipilae, P.M.; Kosunen, A.

    2004-01-01

A scintillation dosemeter is calibrated for 90Sr/90Y beta rays from an ophthalmic applicator, using an extrapolation ionization chamber as a reference instrument. The calibration factor for the scintillation dosemeter agrees with that given by the manufacturer of the dosemeter within ca. 2%. The estimated overall uncertainty of the present calibration is ca. 6% (2 sd). A calibrated beta-ray ophthalmic applicator can be used as a reference source for further calibrations performed in the laboratory or in the hospital.

  9. A Systematic Approach to Applying Lean Techniques to Optimize an Office Process at the Y-12 National Security Complex

    Energy Technology Data Exchange (ETDEWEB)

    Credille, Jennifer [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States); Owens, Elizabeth [Y-12 National Security Complex, Oak Ridge, TN (United States); Univ. of Tennessee, Knoxville, TN (United States)

    2017-10-11

    This capstone offers the introduction of Lean concepts to an office activity to demonstrate the versatility of Lean. Traditionally Lean has been associated with process improvements as applied to an industrial atmosphere. However, this paper will demonstrate that implementing Lean concepts within an office activity can result in significant process improvements. Lean first emerged with the conception of the Toyota Production System. This innovative concept was designed to improve productivity in the automotive industry by eliminating waste and variation. Lean has also been applied to office environments, however the limited literature reveals most Lean techniques within an office are restricted to one or two techniques. Our capstone confronts these restrictions by introducing a systematic approach that utilizes multiple Lean concepts. The approach incorporates: system analysis, system reliability, system requirements, and system feasibility. The methodical Lean outline provides tools for a successful outcome, which ensures the process is thoroughly dissected and can be achieved for any process in any work environment.

  10. Comparison of various state equations for approximation and extrapolation of experimental hydrogen molar volumes in wide temperature and pressure intervals

    International Nuclear Information System (INIS)

    Didyk, A.Yu.; Altynov, V.A.; Wisniewski, R.

    2009-01-01

A numerical analysis of practically all existing formulae, such as expansion series and the Tait, logarithm, Van der Waals and virial equations, for interpolation of experimental molar volumes versus high pressure was carried out. The analysis shows that extrapolated dependences of molar volume on pressure and temperature can remain valid. It was shown that, in contrast to the other equations, virial equations can also be used to fit experimental data at relatively low pressures (P < 3 kbar). Directly solving the resulting cubic equation in volume, using extrapolated virial coefficients, gives good agreement between the existing high-pressure experimental data and the calculated values.
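Truncating the virial expansion at the third coefficient, PV = RT(1 + B/V + C/V²), turns the volume determination into a cubic, P·V³ - RT·V² - RT·B·V - RT·C = 0, which can be solved directly as the record describes. A sketch with assumed (not fitted) coefficients:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def molar_volume(P, T, B, C):
    """Solve PV = RT(1 + B/V + C/V^2), i.e. the cubic
    P*V^3 - R*T*V^2 - R*T*B*V - R*T*C = 0, for the physical positive root."""
    roots = np.roots([P, -R * T, -R * T * B, -R * T * C])
    real = roots[np.isreal(roots)].real
    return real[real > 0].max()

# Hypothetical virial coefficients for a hydrogen-like gas (illustrative only).
B, C = 1.5e-5, 3.0e-10   # m^3/mol and (m^3/mol)^2
P, T = 3.0e8, 300.0      # 3 kbar, 300 K

V = molar_volume(P, T, B, C)
V_ideal = R * T / P      # ideal-gas volume for comparison
print(V, V_ideal)
```

At these conditions the repulsive virial corrections make the computed molar volume noticeably larger than the ideal-gas value.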

  11. The differential dieaway technique applied to the measurement of the fissile content of drums of cement encapsulated waste

    International Nuclear Information System (INIS)

    Swinhoe, M.T.

    1986-01-01

This report describes calculations of the differential dieaway technique as applied to cement encapsulated waste. The main difference from previous applications of the technique is that only one detector position is used (diametrically opposite the neutron source) and the chamber walls are made of concrete. The results show that by rotating the drum the response to fissile material across the central plane of the drum can be made relatively uniform. The absolute size of the response is about 0.4 counts per minute per gram fissile for a neutron source of 10^8 neutrons per second. Problems of neutron and gamma background and water content are considered. (author)

  12. Non-destructive electrochemical techniques applied to the corrosion evaluation of the liner structures in nuclear power plants

    International Nuclear Information System (INIS)

    Martinez, I.; Castillo, A.; Andrade, C.

    2008-01-01

The liner structure in nuclear power plants provides containment for the operation, and therefore the study of its durability and integrity during its service life is an important issue. There are several causes for the deterioration of the liner, which in general involve corrosion due to its metallic nature. The present paper describes the assessment of corrosion problems in two liners from two different nuclear power plants, which were evaluated using non-destructive electrochemical techniques. In spite of the testing difficulties that arose, it can be concluded from the results that the electrochemical techniques applied are adequate for the corrosion evaluation. They provide important information about the integrity of the structure and allow its evolution with time to be assessed.

  13. Combined effect of external irradiation and radiostrontium administration (extrapolation of experimental data)

    International Nuclear Information System (INIS)

    Kiradzhiev, G.

    1987-01-01

    Assessment was made of the activities of strontium-89 and strontium-90, which may aggravate the effect of external irradiation, causing changes in peripheral blood leucocytes. Extrapolation of the results was carried out on the basis of the so called radiosensitivity coefficients (laboratory rat/man). Inference is drawn that summing of the effects of the radiation factors may be expected in cases of external irradiation with 100 Gy and oral administration of 150-200 MBq strontium-89 or 60-90 MBq strontium-90 and through the air passages of 110-150 MBq strontium-89 or 40-60 MBq strontium-90

  14. Modeling the systemic retention of beryllium in rat. Extrapolation to human

    International Nuclear Information System (INIS)

    Montero Prieto, M.; Vidania Munoz, R. de

    1994-01-01

In this work, we analyzed different approaches assayed in order to describe numerically the systemic behaviour of beryllium. The experimental results used in this work were previously obtained by Furchner et al. (1973), using Sprague-Dawley rats and other animal species. Furchner's work includes the obtained model for whole body retention in rats, but not for each target organ. In this work we present the results obtained by modeling the kinetic behaviour of beryllium in several target organs. The results of these models were used to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with those previously published. (Author) 12 refs

  15. Modeling of systematic retention of beryllium in rats. Extrapolation to humans

    International Nuclear Information System (INIS)

    Montero Prieto, M.; Vidania Munoz, R. de.

    1994-01-01

In this work, we analyzed different approaches, assayed in order to numerically describe the systemic behaviour of Beryllium. The experimental results used in this work, were previously obtained by Furchner et al. (1973), using Sprague-Dawley rats, and other animal species. Furchner's work includes the obtained model for whole body retention in rats but not for each target organ. In this work we present the results obtained by modeling the kinetic behaviour of Beryllium in several target organs. The results of these models were used in order to establish correlations among the estimated kinetic constants. The parameters of the model were extrapolated to humans and, finally, compared with other previously published.

  16. Extrapolation of the FOM 1 MW free-electron maser to a multi-megawatt millimeter microwave source

    NARCIS (Netherlands)

    Caplan, M.; Valentini, M.; Verhoeven, A.; Urbanus, W.; Tulupov, A.

    1997-01-01

    A Free-Electron Maser is now under test at the FOM Institute (Rijnhuizen, Netherlands) with the goal of producing 1 MW long pulse to CW microwave output in the range 130-250 GHz with wall plug efficiencies of 60%. An extrapolated version of this device is proposed, which by scaling up beam current

  17. Beyond Astro 101: A First Report on Applying Interactive Education Techniques to an Astrophysics Class for Majors

    Science.gov (United States)

    Perrin, Marshall D.; Ghez, A. M.

    2009-05-01

Learner-centered interactive instruction methods now have a proven track record in improving learning in "Astro 101" courses for non-majors, but have rarely been applied to higher-level astronomy courses. Can we hope for similar gains in classes aimed at astrophysics majors, or is the subject matter too fundamentally different for those techniques to apply? We present here an initial report on an updated calculus-based Introduction to Astrophysics class at UCLA that suggests such techniques can indeed result in increased learning for major students. We augmented the traditional blackboard-derivation lectures and challenging weekly problem sets by adding online questions on pre-reading assignments ("just-in-time teaching") and frequent multiple-choice questions in class ("Think-Pair-Share"). We describe our approach, and present examples of the new Think-Pair-Share questions developed for this more sophisticated material. Our informal observations after one term are that with this approach, students are more engaged and alert, and score higher on exams than typical in previous years. This is anecdotal evidence, not hard data yet, and there is clearly a vast amount of work to be done in this area. But our first impressions strongly encourage us that interactive methods should be able to improve the astrophysics major just as they have improved Astro 101.

  18. Object oriented programming techniques applied to device access and control

    International Nuclear Information System (INIS)

    Goetz, A.; Klotz, W.D.; Meyer, J.

    1992-01-01

In this paper a model, called the device server model, has been presented for solving the problem of device access and control faced by all control systems. Object Oriented Programming techniques were used to achieve a powerful yet flexible solution. The model provides a solution to the problem which hides device dependencies. It defines a software framework which has to be respected by implementors of device classes - this is very useful for developing groupware. The decision to implement remote access in the root class means that device servers can be easily integrated in a distributed control system. A lot of the advantages and features of the device server model are due to the adoption of OOP techniques. The main conclusions that can be drawn from this paper are that (1) the device access and control problem is well suited to being solved with OOP techniques, and (2) OOP techniques offer a distinct advantage over traditional programming techniques for solving the device access problem. (J.P.N.)

  19. NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF A CORONAL MAGNETIC FLUX ROPE SUPPORTING A LARGE-SCALE SOLAR FILAMENT FROM A PHOTOSPHERIC VECTOR MAGNETOGRAM

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Chaowei; Wu, S. T.; Hu, Qiang [Center for Space Plasma and Aeronomic Research, The University of Alabama in Huntsville, Huntsville, AL 35899 (United States); Feng, Xueshang, E-mail: cwjiang@spaceweather.ac.cn, E-mail: wus@uah.edu, E-mail: qh0001@uah.edu, E-mail: fengx@spaceweather.ac.cn [SIGMA Weather Group, State Key Laboratory for Space Weather, Center for Space Science and Applied Research, Chinese Academy of Sciences, Beijing 100190 (China)

    2014-05-10

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along the weak-field region (photospheric field strength ≲ 100 G), where the PIL is very fragmented due to small parasitic polarities on both sides of the PIL and the transverse field has a low signal-to-noise ratio. Thus, extrapolating a large-scale FR in such a case represents a far more difficult challenge. We demonstrate that our CESE-MHD-NLFFF code is sufficient for the challenge. The numerically reproduced magnetic dips of the extrapolated FR match observations of the filament and its barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  20. A systematic review of applying modern software engineering techniques to developing robotic systems

    Directory of Open Access Journals (Sweden)

    Claudia Pons

    2012-01-01

Full Text Available Robots have become collaborators in our daily life. While robotic systems become more and more complex, the need to engineer their software development grows as well. The traditional approaches used in developing these software systems are reaching their limits; currently used methodologies and tools fall short of addressing the needs of such complex software development. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance. This paper presents a systematic literature review (SLR) of the current use of modern software engineering techniques for developing robotic software systems and their actual automation level. The survey was aimed at summarizing existing evidence concerning the application of such technologies to the field of robotic systems, in order to identify gaps in current research, suggest areas for further investigation, and provide a background for positioning new research activities.

  1. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Directory of Open Access Journals (Sweden)

    Jesús Cajigas

    2014-06-01

Full Text Available A preconditioning technique is proposed to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that under certain conditions applying the preconditioner a finite number of times reduces the matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, the point and block versions, exhibit lower iteration counts than the non-symmetric version.
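For context, a baseline Gauss-Seidel iteration on a symmetric positive-definite system looks as follows; the paper's I + K preconditioner is not reproduced here, this is only the un-preconditioned method it accelerates:

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-10, max_iter=10_000):
    """Plain Gauss-Seidel iteration: sweep through rows, using already-updated
    entries of x for the lower-triangular part and old entries for the rest."""
    n = len(b)
    x = np.zeros(n)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k + 1
    return x, max_iter

# Symmetric positive-definite test matrix from a 1D Poisson discretization,
# the kind of PDE-derived system the paper's experiments use.
n = 8
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

x, iters = gauss_seidel(A, b)
print(iters, np.allclose(A @ x, b, atol=1e-8))
```

The iteration count reported here is the quantity the I + K preconditioner is designed to reduce.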

  2. Multiple time step molecular dynamics in the optimized isokinetic ensemble steered with the molecular theory of solvation: Accelerating with advanced extrapolation of effective solvation forces

    International Nuclear Information System (INIS)

    Omelyan, Igor; Kovalenko, Andriy

    2013-01-01

    We develop efficient handling of solvation forces in the multiscale method of multiple time step molecular dynamics (MTS-MD) of a biomolecule steered by the solvation free energy (effective solvation forces) obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model complemented with the Kovalenko-Hirata closure approximation). To reduce the computational expenses, we calculate the effective solvation forces acting on the biomolecule by using advanced solvation force extrapolation (ASFE) at inner time steps while converging the 3D-RISM-KH integral equations only at large outer time steps. The idea of ASFE consists in developing a discrete non-Eckart rotational transformation of atomic coordinates that minimizes the distances between the atomic positions of the biomolecule at different time moments. The effective solvation forces for the biomolecule in a current conformation at an inner time step are then extrapolated in the transformed subspace of those at outer time steps by using a modified least square fit approach applied to a relatively small number of the best force-coordinate pairs. The latter are selected from an extended set collecting the effective solvation forces obtained from 3D-RISM-KH at outer time steps over a broad time interval. The MTS-MD integration with effective solvation forces obtained by converging 3D-RISM-KH at outer time steps and applying ASFE at inner time steps is stabilized by employing the optimized isokinetic Nosé-Hoover chain (OIN) ensemble. Compared to the previous extrapolation schemes used in combination with the Langevin thermostat, the ASFE approach substantially improves the accuracy of evaluation of effective solvation forces and in combination with the OIN thermostat enables a dramatic increase of outer time steps. We demonstrate on a fully flexible model of alanine dipeptide in aqueous solution that the MTS-MD/OIN/ASFE/3D-RISM-KH multiscale method of molecular dynamics

  3. Performance of an extrapolation chamber in computed tomography standard beams

    International Nuclear Information System (INIS)

    Castro, Maysa C.; Silva, Natália F.; Caldas, Linda V.E.

    2017-01-01

Among the medical uses of ionizing radiation, computed tomography (CT) diagnostic exams are responsible for the highest dose values to patients. The dosimetry procedure in CT scanner beams makes use of pencil ionization chambers with sensitive volume lengths of 10 cm. The aim of calibration is to compare the values obtained with the instrument to be calibrated against those of a standard reference system. However, there is no primary standard system for this kind of radiation beam. Therefore, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) was used to establish a CT primary standard. The objective of this work was to perform characterization tests (short- and medium-term stabilities, saturation curve, polarity effect and ion collection efficiency) in the standard X-ray beams established for computed tomography at the LCI. (author)

  4. Performance of an extrapolation chamber in computed tomography standard beams

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Maysa C.; Silva, Natália F.; Caldas, Linda V.E., E-mail: mcastro@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-07-01

Among the medical uses of ionizing radiation, computed tomography (CT) diagnostic exams are responsible for the highest dose values to patients. The dosimetry procedure in CT scanner beams makes use of pencil ionization chambers with sensitive volume lengths of 10 cm. The aim of calibration is to compare the values obtained with the instrument to be calibrated against those of a standard reference system. However, there is no primary standard system for this kind of radiation beam. Therefore, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) was used to establish a CT primary standard. The objective of this work was to perform characterization tests (short- and medium-term stabilities, saturation curve, polarity effect and ion collection efficiency) in the standard X-ray beams established for computed tomography at the LCI. (author)

  5. Corrosion allowances for sodium heated steam generators: evaluation of effects and extrapolation to component life time

    Energy Technology Data Exchange (ETDEWEB)

    Grosser, E E; Menken, G

    1975-07-01

    Steam generator tubes are subjected to two categories of corrosion: metal/sodium reactions and metal/water-steam interactions. The relevant parameters for these environmental conditions are discussed, and their influences on sodium corrosion and water/steam reactions are evaluated. Extrapolations of corrosion values to steam generator design conditions are performed and discussed in detail. (author)

  6. Corrosion allowances for sodium heated steam generators: evaluation of effects and extrapolation to component life time

    International Nuclear Information System (INIS)

    Grosser, E.E.; Menken, G.

    1975-01-01

    Steam generator tubes are subjected to two categories of corrosion: metal/sodium reactions and metal/water-steam interactions. The relevant parameters for these environmental conditions are discussed, and their influences on sodium corrosion and water/steam reactions are evaluated. Extrapolations of corrosion values to steam generator design conditions are performed and discussed in detail. (author)

  7. Laser-Doppler anemometry technique applied to two-phase dispersed flows in a rectangular channel

    International Nuclear Information System (INIS)

    Lee, S.L.; Srinivasan, J.

    1979-01-01

    A new optical technique using laser-Doppler anemometry has been applied to the local measurement of turbulent upward flow of a dilute water droplet-air two-phase dispersion in a vertical rectangular channel. Over 20,000 droplet signals were individually examined from each of ten transversely placed measuring points, the closest of which was 250 μm from the channel wall. Two flows of different patterns due to different imposed flow conditions were investigated, one with and the other without a liquid film formed on the channel wall. Reported are the size and number density distributions and the axial and lateral velocity distributions for the droplets, as well as the axial and lateral velocity distributions for the air.

  8. Digital filtering techniques applied to electric power systems protection; Tecnicas de filtragem digital aplicadas a protecao de sistemas eletricos de potencia

    Energy Technology Data Exchange (ETDEWEB)

    Brito, Helio Glauco Ferreira

    1996-12-31

    This work introduces an analysis and comparative study of some techniques for digital filtering of the voltage and current waveforms from faulted transmission lines. This study is of fundamental importance for the development of algorithms applied to digital protection of electric power systems. The techniques studied are based on discrete Fourier transform theory, Walsh functions and Kalman filter theory. Two aspects were emphasized in this study: first, non-recursive techniques were analyzed, with the implementation of filters based on Fourier theory and Walsh functions; second, recursive techniques were analyzed, with the implementation of filters based on Kalman theory and once more on Fourier theory. (author) 56 refs., 25 figs., 16 tabs.

  9. Statistical validation of engineering and scientific models : bounds, calibration, and extrapolation.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  10. Machine-learning techniques applied to antibacterial drug discovery.

    Science.gov (United States)

    Durrant, Jacob D; Amaro, Rommie E

    2015-01-01

    The emergence of drug-resistant bacteria threatens to revert humanity back to the preantibiotic era. Even now, multidrug-resistant bacterial infections annually result in millions of hospital days, billions in healthcare costs, and, most importantly, tens of thousands of lives lost. As many pharmaceutical companies have abandoned antibiotic development in search of more lucrative therapeutics, academic researchers are uniquely positioned to fill the pipeline. Traditional high-throughput screens and lead-optimization efforts are expensive and labor intensive. Computer-aided drug-discovery techniques, which are cheaper and faster, can accelerate the identification of novel antibiotics, leading to improved hit rates and faster transitions to preclinical and clinical testing. The current review describes two machine-learning techniques, neural networks and decision trees, that have been used to identify experimentally validated antibiotics. We conclude by describing the future directions of this exciting field. © 2015 John Wiley & Sons A/S.

  11. Advanced nondestructive techniques applied for the detection of discontinuities in aluminum foams

    Science.gov (United States)

    Katchadjian, Pablo; García, Alejandro; Brizuela, Jose; Camacho, Jorge; Chiné, Bruno; Mussi, Valerio; Britto, Ivan

    2018-04-01

    Metal foams are finding an increasing range of applications owing to their lightweight structure and their physical, chemical and mechanical properties. Foams can be used to fill closed moulds for manufacturing structural foam parts of complex shape [1]; foam-filled structures are expected to provide good mechanical properties and energy absorption capabilities. The complexity of the foaming process and the number of parameters to control simultaneously demand a wide preliminary experimental campaign to manufacture foamed components of good quality. That is why there are many efforts to improve the structure of foams, in order to obtain a product with good properties. The problem is that even for seemingly identical foaming conditions, the effective foaming can vary significantly from one foaming trial to another. The variation of the foams is often related to structural imperfections, joining regions (foam-foam or foam-mould wall) or difficulties in achieving a complete filling of the mould. That is, in a closed mould, the result of the mould filling and its structure or defects are not known a priori and can vary significantly. These defects can cause a drastic deterioration of the mechanical properties [2] and lead to low performance in application. This work proposes the use of advanced nondestructive techniques for evaluating the foam distribution after filling the mould, in order to improve the manufacturing process. To achieve this purpose, ultrasonic testing (UT) and cone-beam computed tomography (CT) were applied to plates and structures of different thicknesses filled with foam of different porosities. UT was carried out in transmission mode with low-frequency air-coupled transducers [3], in focused and unfocused configurations.

  12. Characterization of an extrapolation chamber in standard X-ray beams, radiodiagnostic level; Caracterizacao de uma camara de extrapolacao em feixes padroes de raios X, nivel radiodiagnostico

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Eric A.B. da; Caldas, Linda V.E., E-mail: ebrito@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-10-26

    The extrapolation chamber is an ionization chamber used for the detection of low-energy radiation, and it can serve as a standard instrument for beta radiation beams. The main characteristic of this type of ionization chamber is its variable sensitive volume. This paper presents a characterization study of a commercial PTW extrapolation chamber in the energy interval of the conventional radiodiagnostic qualities.

  13. Comparison of extrapolation methods for creep rupture stresses of 12Cr and 18Cr10NiTi steels

    International Nuclear Information System (INIS)

    Ivarsson, B.

    1979-01-01

    As part of a Soviet-Swedish research programme, the creep rupture properties of two heat-resisting steels, a 12% Cr steel and an 18% Cr 12% Ni titanium-stabilized steel, have been studied. One heat of each steel from each country was creep tested. The strength of the 12% Cr steels was similar to earlier reported strength values, the Soviet steel being somewhat stronger due to a higher tungsten content. The strength of the Swedish 18/12 Ti steel agreed with earlier results, while the properties of the Soviet steel were inferior to those reported from earlier Soviet creep tests. Three extrapolation methods were compared on creep rupture data collected in both countries. Isothermal extrapolation and an algebraic method of Soviet origin gave rather similar results in many cases, while the parameter method recommended by ISO resulted in higher rupture strength values at longer times. (author)
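
    The parameter method mentioned above belongs to the family of time-temperature parameters. As an illustration (not necessarily the exact formulation recommended by ISO), the well-known Larson-Miller parameter extrapolates rupture life between temperatures at a given stress; the constant C = 20 and the test values below are assumptions.

```python
import math

def larson_miller(T_kelvin, t_hours, C=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t))."""
    return T_kelvin * (C + math.log10(t_hours))

def rupture_time(T_kelvin, lmp, C=20.0):
    """Invert the parameter: predicted time to rupture at temperature T."""
    return 10.0 ** (lmp / T_kelvin - C)

# Hypothetical short-term test: rupture after 1,000 h at 873 K.
lmp = larson_miller(873.0, 1.0e3)

# Extrapolate to a lower service temperature at the same stress level.
t_service = rupture_time(823.0, lmp)
print(f"LMP = {lmp:.0f}; predicted rupture life at 823 K = {t_service:.0f} h")
```

    Isothermal extrapolation, by contrast, fits and extends the stress-rupture curve at a single temperature, which is why the two approaches can diverge at long times.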

  14. Bioremediation techniques applied to aqueous media contaminated with mercury.

    Science.gov (United States)

    Velásquez-Riaño, Möritz; Benavides-Otaya, Holman D

    2016-12-01

    In recent years, the environmental and human health impacts of mercury contamination have driven the search for alternative, eco-efficient techniques different from the traditional physicochemical methods for treating this metal. One of these alternative processes is bioremediation. A comprehensive analysis of the different variables that can affect this process is presented. It focuses on determining the effectiveness of different techniques of bioremediation, with a specific consideration of three variables: the removal percentage, time needed for bioremediation and initial concentration of mercury to be treated in an aqueous medium.

  15. Spatial analysis techniques applied to uranium prospecting in Chihuahua State, Mexico

    Science.gov (United States)

    Hinojosa de la Garza, Octavio R.; Montero Cabrera, María Elena; Sanín, Luz H.; Reyes Cortés, Manuel; Martínez Meyer, Enrique

    2014-07-01

    To estimate the distribution of uranium minerals in Chihuahua, the advanced statistical model "Maximum Entropy Method" (MaxEnt) was applied. A distinguishing feature of this method is that it can fit complex models from small datasets, as is the case for the locations of uranium ores in the State of Chihuahua. For georeferencing the uranium ores, a database from the United States Geological Survey and a workgroup of experts in Mexico was used. The main contribution of this paper is the proposal of maximum entropy techniques to obtain the potential distribution of the mineral. The model used 24 environmental layers, such as topography, gravimetry, climate (WorldClim), soil properties and others, to project the uranium distribution across the study area. The places predicted by the model were validated by comparison with other research of the Mexican Geological Survey, by direct exploration of specific areas, and through interviews with former exploration workers of the enterprise "Uranio de Mexico". Results: new uranium areas predicted by the model were validated, and some relationship was found between the model predictions and geological faults. Conclusions: modeling by spatial analysis provides additional information to the energy and mineral resources sectors.

  16. Tracer techniques applied to groundwater studies

    International Nuclear Information System (INIS)

    Sanchez, W.

    1975-01-01

    The determination of several aquifer characteristics, primarily in the saturated zone, is presented: porosity, permeability, transmissivity, dispersivity, and the direction and velocity of sub-surface water. These techniques are based on the use of artificial radioisotopes. Only field determinations of porosity are considered here; their advantages over laboratory measurements are a better representation of the volume average, insensitivity to local inhomogeneities, and no distortion of the structure due to sampling. The radioisotope dilution method is used to obtain an independent and direct measurement of the filtration velocity in a water-bearing formation under a natural or induced hydraulic gradient. The flow velocity is usually calculated from Darcy's formula through the measurement of gradients and requires knowledge of the permeability of the formation. The filtration velocity, interpreted in conjunction with other parameters, can under favourable conditions provide valuable information on the permeability, transmissivity and amount of water moving through an aquifer.

  17. Proposal of requirements for performance in Brazil for systems of external individual monitoring for neutrons applying the TLD-albedo technique

    International Nuclear Information System (INIS)

    Martins, Marcelo M.; Mauricio, Claudia L.P.; Pereira, Walsan W.; Fonseca, Evaldo S. da; Silva, Ademir X.

    2009-01-01

    This work presents a proposal of criteria and conditions for the regulation in Brazil of individual monitoring systems for neutrons that apply the albedo technique with thermoluminescent detectors. Tests are proposed for the performance characterization of the system, based on the ISO 21909 standard and on the experience of the authors.

  18. Barometric altimetry system as virtual constellation applied in CAPS

    Science.gov (United States)

    Ai, Guoxiang; Sheng, Peixuan; Du, Jinlin; Zheng, Yongguang; Cai, Xiande; Wu, Haitao; Hu, Yonghui; Hua, Yu; Li, Xiaohui

    2009-03-01

    This work describes barometric altimetry as a virtual constellation applied to the Chinese Area Positioning System (CAPS), which uses the transponders of communication satellites to transfer navigation messages to users. Barometric altimetry depends on the relationship between air pressure and altitude in the Earth's atmosphere: once the air pressure at a location is measured, the site altitude can be found. This method is able to enhance and improve the availability of three-dimensional positioning. The difficulty is that the relation between barometric pressure and altitude varies between areas and under different weather conditions. Hence, in order to obtain higher accuracy, we need to acquire the real-time air pressure corresponding to an altimetric region's reference height. On the other hand, if the altimetry method is to be applied to a satellite navigation system, the greatest difficulty lies in how to obtain the real-time air pressure value at the reference height across the broad areas covered by satellite navigation. We propose an innovative method to solve this problem: collect the real-time air pressures and temperatures of the 1860 known-altitude weather observatories over China and its surroundings via satellite communication, and carry out time-extrapolation forecasts uniformly. To reduce the data quantity, we first partition and encode the data and then broadcast this information via navigation message to CAPS users' receivers. Once the interpolations are done in the receivers, the reference air pressure and temperature near the receiver's location are derived. Lastly, combining this with the receiver-observed real air pressure and temperature, the site's altitude can be determined. The work is presented in the following aspects: the calculation principle, formulae, data collection, encoding, prediction, interpolation method, and navigation message transmission, together with error causes and analyses. The advantages and shortcomings of the
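
    The pressure-to-altitude step can be sketched with the standard-atmosphere barometric formula. This is exactly the simplification the paper improves on with real-time reference data, so the constants below are textbook standard-atmosphere values, not CAPS parameters.

```python
# Standard-atmosphere constants for the troposphere.
P0 = 101325.0   # sea-level reference pressure, Pa
T0 = 288.15     # sea-level reference temperature, K
L = 0.0065      # temperature lapse rate, K/m
g = 9.80665     # gravitational acceleration, m/s^2
R = 287.053     # specific gas constant of dry air, J/(kg*K)

def altitude_from_pressure(p_pa, p_ref=P0, t_ref=T0):
    """Invert P = p_ref * (1 - L*h/t_ref)**(g/(R*L)) for the altitude h."""
    return (t_ref / L) * (1.0 - (p_pa / p_ref) ** (R * L / g))

h = altitude_from_pressure(89874.0)   # roughly 1000 m in the standard atmosphere
print(f"altitude = {h:.0f} m")
```

    In the paper's scheme, p_ref and t_ref would be replaced by the interpolated real-time reference values broadcast in the navigation message.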

  19. Applying Cooperative Techniques in Teaching Problem Solving

    Directory of Open Access Journals (Sweden)

    Krisztina Barczi

    2013-12-01

    Full Text Available Teaching how to solve problems – from solving simple equations to solving difficult competition tasks – has been one of the greatest challenges for mathematics education for many years. Trying to find an effective method is an important educational task. Among others, the question arises as to whether a method in which students help each other might be useful. The present article describes part of an experiment that was designed to determine the effects of cooperative teaching techniques on the development of problem-solving skills.

  20. The ordering operator technique applied to open systems

    International Nuclear Information System (INIS)

    Pedrosa, I.A.; Baseia, B.

    1982-01-01

    A normal-ordering technique and the coherent representation are used to describe the time evolution of an open system consisting of a single oscillator linearly coupled to an infinite number of reservoir oscillators, and it is shown how to include dissipation and obtain the exponential decay. (Author) [pt

  1. X-ray diffraction technique applied to nano system metrology

    International Nuclear Information System (INIS)

    Kuznetsov, Alexei Yu.; Machado, Rogerio; Robertis, Eveline de; Campos, Andrea P.C.; Archanjo, Braulio S.; Gomes, Lincoln S.; Achete, Carlos A.

    2009-01-01

    The applications of nano materials are growing fast in all industrial sectors, with a strong need for nano metrology and standardization in the nano material area. The great potential of the X-ray diffraction technique in this field is illustrated with the examples of metals, metal oxides and pharmaceuticals.

  2. Scaling and extrapolation of hydrogen distribution experiments

    International Nuclear Information System (INIS)

    Karwat, H.

    1986-01-01

    The containment plays an important role in predicting the residual risk to the environment under severe accident conditions. Risk analyses show that massive fission product release from the reactor fuel can occur only if, during a loss of coolant, the core is severely damaged and a containment failure is anticipated. Large amounts of hydrogen are inevitably formed during core degradation and will be released into the containment. More combustible gases are produced later when the core melt contacts the containment concrete. Thus a potential for an early containment failure exists if a massive hydrogen deflagration cannot be excluded. A more remote cause of early containment failure may be an energetic steam explosion, which requires a number of independent conditions when the molten core material contacts residual coolant water. The prediction of the containment loads caused by a hydrogen combustion depends on the prediction of the combustion mode. In this paper an attempt is made to identify, on the basis of a dimensional analysis, those areas for which particular care must be exercised when scaled experimental evidence is interpreted and extrapolated with the aid of a computer code or a system of computer codes. The study is restricted to fluid dynamic phenomena of the gas distribution process within the containment atmosphere. The gas sources and the mechanical response of containment structures are considered as given boundary conditions under which the containment is to be analyzed.

  3. Photoacoustic technique applied to the study of skin and leather

    International Nuclear Information System (INIS)

    Vargas, M.; Varela, J.; Hernandez, L.; Gonzalez, A.

    1998-01-01

    In this paper the photoacoustic technique is used in bull skin for the determination of thermal and optical properties as a function of the tanning process steps. Our results show that the photoacoustic technique is sensitive to the study of physical changes in this kind of material due to the tanning process

  4. Multivariable extrapolation of grand canonical free energy landscapes

    Science.gov (United States)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
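
    The Taylor-expansion idea can be sketched in a deliberately minimal setting: one observable, temperature extrapolation only, and a two-level toy system standing in for simulation data (all values here are illustrative, not from the paper).

```python
import math

# Two-level toy system with energies E = 0 and E = 1 (arbitrary units).
# Exact canonical average energy: <U>(beta) = 1 / (1 + exp(beta)).
def exact_mean_energy(beta):
    return 1.0 / (1.0 + math.exp(beta))

beta0, dbeta = 1.0, 0.2

# Moments at beta0 (exact here, standing in for measured simulation averages).
mean_U = exact_mean_energy(beta0)   # <U>
mean_U2 = mean_U                    # <U^2> = <U> because E is either 0 or 1

# Fluctuation formula for the first Taylor coefficient:
#   d<U>/dbeta = <U>^2 - <U^2>
dU_dbeta = mean_U**2 - mean_U2

# First-order Taylor extrapolation from beta0 to beta0 + dbeta.
estimate = mean_U + dU_dbeta * dbeta
exact = exact_mean_energy(beta0 + dbeta)
print(f"extrapolated <U> = {estimate:.4f}, exact <U> = {exact:.4f}")
```

    The paper's method applies the same logic per macrostate of the grand canonical landscape, with higher-order coefficients obtained from higher moments of the fluctuating extensive variables.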

  5. L2-Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Hyun Young Lee

    2010-01-01

    Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L2) error estimates of discontinuous Galerkin approximations in both the spatial and temporal directions.
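
    The extrapolated Crank-Nicolson idea, evaluating the nonlinearity at a value extrapolated from earlier time levels so that each step is linear, can be sketched on a scalar model problem (an illustration only, not the paper's discontinuous Galerkin discretization).

```python
# Model problem: u' = -u^2, u(0) = 1, exact solution u(t) = 1/(1 + t).
dt, n_steps = 0.01, 100

u_prev = 1.0               # u^0
u_curr = 1.0 / (1.0 + dt)  # u^1, bootstrapped here from the exact solution

for n in range(1, n_steps):
    # Extrapolate toward the midpoint value: u* ~ u(t_n + dt/2).
    u_star = 1.5 * u_curr - 0.5 * u_prev
    # Linearized Crank-Nicolson step for u' = -u*u:
    #   (u^{n+1} - u^n)/dt = -u_star * (u^{n+1} + u^n)/2
    u_next = u_curr * (1.0 - 0.5 * dt * u_star) / (1.0 + 0.5 * dt * u_star)
    u_prev, u_curr = u_curr, u_next

print(f"u(1) = {u_curr:.6f} (exact 0.5)")
```

    The extrapolation keeps each step a linear solve while retaining second-order accuracy in time, which is the attraction of the scheme for nonlinear problems.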

  6. Eddy current technique applied to automated tube profilometry

    International Nuclear Information System (INIS)

    Dobbeni, D.; Melsen, C. van

    1982-01-01

    The use of eddy current methods in the first totally automated pre-service inspection of the internal diameter of PWR steam generator tubes is described. The technique was developed at Laborelec, the Belgian Laboratory of the Electricity Supply Industry. Details are given of the data acquisition system and of the automated manipulator. Representative tube profiles are illustrated. (U.K.)

  7. On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology.

    Directory of Open Access Journals (Sweden)

    Paul B Conn

    Full Text Available Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models).
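
    For ordinary linear regression, Cook's original IVH reduces to a leverage check, which can be sketched as follows (the gIVH of the paper generalizes this through prediction variance; the data here are synthetic).

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic design: intercept plus one covariate observed on [0, 1].
x_obs = rng.uniform(0.0, 1.0, size=30)
X = np.column_stack([np.ones_like(x_obs), x_obs])
XtX_inv = np.linalg.inv(X.T @ X)

def leverage(x):
    """Leverage h(x) = x (X'X)^{-1} x' of a candidate prediction point."""
    v = np.array([1.0, x])
    return float(v @ XtX_inv @ v)

# The hull boundary is the largest leverage among the observed design points.
h_max = max(leverage(x) for x in x_obs)

inside = leverage(0.5) <= h_max   # interpolation: prediction can be trusted more
outside = leverage(2.0) > h_max   # extrapolation: flag for extra skepticism
print(f"h_max = {h_max:.3f}, inside: {inside}, outside: {outside}")
```

    A point well outside the observed covariate range exceeds the maximum design leverage and is flagged, which is the behavior the gIVH carries over to generalized models.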

  8. A visual basic program to generate sediment grain-size statistics and to extrapolate particle distributions

    Science.gov (United States)

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2004-01-01

    Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next to last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper water marine environments (e.g. rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft
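
    The method-of-moments statistics such a program reports can be sketched from weight percentages binned in phi classes (the data below are hypothetical; GSSTAT itself also extrapolates the fine tail before computing them).

```python
import math

# Hypothetical weight percentages in whole-phi classes (class midpoints, phi).
phi_mid = [0.5, 1.5, 2.5, 3.5, 4.5]
weight = [10.0, 30.0, 35.0, 20.0, 5.0]   # sums to 100

total = sum(weight)
mean = sum(w * p for w, p in zip(weight, phi_mid)) / total
var = sum(w * (p - mean) ** 2 for w, p in zip(weight, phi_mid)) / total
sd = math.sqrt(var)                      # "sorting" in sedimentological terms
skew = sum(w * (p - mean) ** 3 for w, p in zip(weight, phi_mid)) / (total * sd**3)
kurt = sum(w * (p - mean) ** 4 for w, p in zip(weight, phi_mid)) / (total * sd**4)
print(f"mean={mean:.2f} phi, sorting={sd:.2f}, skewness={skew:.2f}, kurtosis={kurt:.2f}")
```

    Inclusive graphics statistics, by contrast, are computed from selected percentiles of the cumulative curve, which is why they remain usable when the tails are truncated.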

  9. Dutch Young Adults Ratings of Behavior Change Techniques Applied in Mobile Phone Apps to Promote Physical Activity: A Cross-Sectional Survey.

    Science.gov (United States)

    Belmon, Laura S; Middelweerd, Anouk; Te Velde, Saskia J; Brug, Johannes

    2015-11-12

    Interventions delivered through new device technology, including mobile phone apps, appear to be an effective method to reach young adults. Previous research indicates that self-efficacy and social support for physical activity and self-regulation behavior change techniques (BCTs), such as goal setting, feedback, and self-monitoring, are important for promoting physical activity; however, little is known about evaluations by the target population of BCTs applied to physical activity apps and whether these preferences are associated with individual personality characteristics. This study aimed to explore young adults' opinions regarding BCTs (including self-regulation techniques) applied in mobile phone physical activity apps, and to examine associations between personality characteristics and ratings of BCTs applied in physical activity apps. We conducted a cross-sectional online survey among healthy 18 to 30-year-old adults (N=179). Data on participants' gender, age, height, weight, current education level, living situation, mobile phone use, personality traits, exercise self-efficacy, exercise self-identity, total physical activity level, and whether participants met Dutch physical activity guidelines were collected. Items for rating BCTs applied in physical activity apps were selected from a hierarchical taxonomy of BCTs and were clustered into three categories according to factor analysis: "goal setting and goal reviewing," "feedback and self-monitoring," and "social support and social comparison." Most participants were female (n=146), highly educated (n=169), physically active, and had high levels of self-efficacy. In general, we observed high ratings of BCTs aimed at "goal setting and goal reviewing" and "feedback and self-monitoring," but not of BCTs addressing "social support and social comparison." Only 3 (out of 16 tested) significant associations between personality characteristics and BCTs were observed: "agreeableness" was related to

  10. GORE PRECLUDE MVP dura substitute applied as a nonwatertight "underlay" graft for craniotomies: product and technique evaluation.

    Science.gov (United States)

    Chappell, E Thomas; Pare, Laura; Salehpour, Mohammed; Mathews, Marlon; Middlehof, Charles

    2009-01-01

    While watertight closure of the dura is a long-standing tenet of cranial surgery, it is often not possible and sometimes unnecessary. Many graft materials with various attributes and drawbacks have been in use for many years. A novel synthetic dural graft material called GORE PRECLUDE MVP dura substitute (WL Gore & Associates, Inc, Flagstaff, Ariz) (henceforth called "MVP") is designed for use both in traditional watertight dural closure and as a dural "underlay" graft in a nonwatertight fashion. One surface of MVP is engineered to facilitate fibroblast in-growth so that its proximity to the underside of the dura will lead to rapid incorporation, whereas the other surface acts as a barrier to reduce tissue adhesion to the device. A series of 59 human subjects undergoing craniotomy and available for clinical and radiographic follow-up underwent nonwatertight underlay grafting of their durotomy with MVP. This is an assessment of the specific product and technique. No attempt is made to compare this to other products or techniques. The mean follow-up in this group was more than 4 months. All subjects have ultimately experienced excellent outcomes related to use of the graft implanted with the underlay technique. No complications occurred related directly to MVP, but the wound-related complication rate attributed to the underlay technique was higher than expected (17%). However, careful analysis found a high rate of risk factors for wound complications and determined that complications with the underlay technique could be avoided by assuring close approximation of the graft material to the underside of the dura. MVP can be used as an underlay graft in a nonwatertight fashion. However, if used over large voids (relaxed brain or large tumor bed), "tacking" or traditional watertight closure techniques should be used. The underlay application of MVP is best applied over the convexities and is particularly well-suited to duraplasty after hemicraniectomy.

  11. Dose and dose rate extrapolation factors for malignant and non-malignant health endpoints after exposure to gamma and neutron radiation

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Van; Little, Mark P. [National Cancer Institute, Radiation Epidemiology Branch, Rockville, MD (United States)

    2017-11-15

    Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation is significantly elevated for lymphoreticular tumours 1.16 (95% CI 1.06, 1.31), elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76) and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. 
For neutron exposure most endpoints, malignant and
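As a rough illustration of how a low-dose extrapolation factor of this kind can be computed, the sketch below fits a linear-quadratic dose response to synthetic (invented) excess-risk data and compares the slope of a naive linear fit over the full dose range with the fitted low-dose slope. The definition used here is a simplified stand-in, not the paper's exact estimator, and all numbers are made up.

```python
import numpy as np

# Illustrative only: synthetic excess-risk data following a linear-quadratic
# dose response h(d) = alpha*d + beta*d**2 (alpha, beta are invented values,
# not the JANUS estimates).
alpha, beta = 0.5, 0.3
doses = np.linspace(0.0, 4.0, 9)          # Gy
risk = alpha * doses + beta * doses**2    # noise-free for clarity

# Fit the linear-quadratic model: design matrix with columns [d, d^2].
D = np.column_stack([doses, doses**2])
alpha_hat, beta_hat = np.linalg.lstsq(D, risk, rcond=None)[0]

# Naive linear extrapolation: slope of a straight line h(d) ~ s*d forced
# through the data over the whole (high) dose range.
s = np.sum(doses * risk) / np.sum(doses**2)

# Low-dose extrapolation factor: how much linear extrapolation overestimates
# the true low-dose slope. >1 means upward curvature, <1 downward curvature.
ldef = s / alpha_hat
```

With upward curvature (beta > 0), `ldef` exceeds 1, matching the direction of the effect reported for the non-malignant endpoints above.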

  12. Applied research on air pollution using nuclear-related analytical techniques. Report on the second research co-ordination meeting

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-07-01

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which started in 1992, and is scheduled to run until early 1997. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in air particulate matter. The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural area). This document reports the discussions held during the second Research Co-ordination Meeting (RCM) for the CRP which took place at ANSTO in Menai, Australia. (author)

  13. Applied research on air pollution using nuclear-related analytical techniques. Report on the second research co-ordination meeting

    International Nuclear Information System (INIS)

    1995-01-01

    A co-ordinated research programme (CRP) on applied research on air pollution using nuclear-related techniques is a global CRP which started in 1992, and is scheduled to run until early 1997. The purpose of this CRP is to promote the use of nuclear analytical techniques in air pollution studies, e.g. NAA, XRF, and PIXE for the analysis of toxic and other trace elements in air particulate matter. The main purposes of the core programme are i) to support the use of nuclear and nuclear-related analytical techniques for research and monitoring studies on air pollution, ii) to identify major sources of air pollution affecting each of the participating countries with particular reference to toxic heavy metals, and iii) to obtain comparative data on pollution levels in areas of high pollution (e.g. a city centre or a populated area downwind of a large pollution source) and low pollution (e.g. rural area). This document reports the discussions held during the second Research Co-ordination Meeting (RCM) for the CRP which took place at ANSTO in Menai, Australia. (author)

  14. Performance values for non destructive assay (NDA) techniques applied to safeguards: the 2002 evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    Guardini, S.

    2003-01-01

    The first evaluation of NDA performance values undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques (WGNDA) was published in 1993. Almost 10 years later the Working Group decided to review those values, to report about improvements and to issue new performance values for techniques which were not applied in the early nineties, or were at that time only emerging. Non-Destructive Assay techniques have become more and more important in recent years, and they are used to a large extent in nuclear material accountancy and control both by operators and control authorities. As a consequence, the performance evaluation for NDA techniques is of particular relevance to safeguards authorities in optimising Safeguards operations and reducing costs. Performance values are important also for NMAC regulators, to define detection levels, limits for anomalies, goal quantities and to negotiate basic audit rules. This paper presents the latest evaluation of ESARDA Performance Values (EPVs) for the most common NDA techniques currently used for the assay of nuclear materials for Safeguards purposes. The main topics covered by the document are: techniques for plutonium bearing materials: PuO2 and MOX; techniques for U-bearing materials; techniques for U and Pu in liquid form; techniques for spent fuel assay. This issue of the performance values is the result of specific international round robin exercises, field measurements and ad hoc experiments, evaluated and discussed in the ESARDA NDA Working Group. (author)

  15. The importance of inclusion of kinetic information in the extrapolation of high-to-low concentrations for human limit setting.

    NARCIS (Netherlands)

    Geraets, Liesbeth; Zeilmaker, Marco J; Bos, Peter M J

    2018-01-01

    Human health risk assessment of inhalation exposures generally includes a high-to-low concentration extrapolation. Although this is a common step in human risk assessment, it introduces various uncertainties. One of these uncertainties is related to the toxicokinetics. Many kinetic processes such as

  16. Acceleration of nodal diffusion code by Chebychev polynomial extrapolation method; Ubrzanje spoljasnjih iteracija difuzionog nodalnog proracuna Chebisevijevom ekstrapolacionom metodom

    Energy Technology Data Exchange (ETDEWEB)

    Zmijarevic, I; Tomashevic, Dj [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1988-07-01

    This paper presents Chebychev acceleration of the outer iterations of a high-accuracy nodal diffusion code. Extrapolation parameters, unique for all moments, are calculated using the node-integrated distribution of the fission source. Sample calculations are presented, indicating the efficiency of the method. (author)
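The Chebychev (Chebyshev) acceleration of outer iterations described above can be sketched for a generic fixed-point iteration. This is a minimal illustration assuming the iteration matrix has real eigenvalues in [-rho, rho], not the nodal diffusion implementation itself.

```python
import numpy as np

def chebyshev_accelerate(M, g, x0, rho, n_iter):
    """Chebyshev semi-iterative acceleration of x_{k+1} = M x_k + g,
    assuming the eigenvalues of M lie in [-rho, rho] with rho < 1."""
    mu = 1.0 / rho
    t_prev, t_curr = 1.0, mu              # T_0(mu), T_1(mu)
    x_prev, x_curr = x0, M @ x0 + g       # first step is a plain iteration
    for _ in range(n_iter - 1):
        t_next = 2.0 * mu * t_curr - t_prev        # Chebyshev recurrence
        omega = 2.0 * mu * t_curr / t_next         # acceleration parameter
        x_prev, x_curr = x_curr, omega * (M @ x_curr + g - x_prev) + x_prev
        t_prev, t_curr = t_curr, t_next
    return x_curr

# Toy problem: diagonal iteration matrix with spectral radius 0.9.
M = np.diag([0.9, -0.9, 0.5])
g = np.array([1.0, 2.0, 3.0])
x_true = np.linalg.solve(np.eye(3) - M, g)

x_cheb = chebyshev_accelerate(M, g, np.zeros(3), rho=0.9, n_iter=20)

# Plain (unaccelerated) iteration for comparison.
x_plain = np.zeros(3)
for _ in range(20):
    x_plain = M @ x_plain + g

err_cheb = np.linalg.norm(x_cheb - x_true)
err_plain = np.linalg.norm(x_plain - x_true)
```

After 20 iterations the Chebyshev error is bounded by roughly 1/T_20(1/rho) times the initial error, orders of magnitude below the plain iteration's 0.9^20 factor.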

  17. Flash radiographic technique applied to fuel injector sprays

    International Nuclear Information System (INIS)

    Vantine, H.C.

    1977-01-01

    A flash radiographic technique, using 50 ns exposure times, was used to study the pattern and density distribution of a fuel injector spray. The experimental apparatus and method are described. An 85 kVp flash x-ray generator, designed and fabricated at the Lawrence Livermore Laboratory, is utilized. Radiographic images, recorded on standard x-ray films, are digitized and computer processed

  18. Use of Random and Site-Directed Mutagenesis to Probe Protein Structure-Function Relationships: Applied Techniques in the Study of Helicobacter pylori.

    Science.gov (United States)

    Whitmire, Jeannette M; Merrell, D Scott

    2017-01-01

    Mutagenesis is a valuable tool to examine the structure-function relationships of bacterial proteins. As such, a wide variety of mutagenesis techniques and strategies have been developed. This chapter details a selection of random mutagenesis methods and site-directed mutagenesis procedures that can be applied to an array of bacterial species. Additionally, the direct application of the techniques to study the Helicobacter pylori Ferric Uptake Regulator (Fur) protein is described. The varied approaches illustrated herein allow the robust investigation of the structural-functional relationships within a protein of interest.

  19. Quantifying Methane Flux from a Prominent Seafloor Crater with Water Column Imagery Filtering and Bubble Quantification Techniques

    Science.gov (United States)

    Mitchell, G. A.; Gharib, J. J.; Doolittle, D. F.

    2015-12-01

    Methane gas flux from the seafloor to atmosphere is an important variable for global carbon cycle and climate models, yet is poorly constrained. Methodologies used to estimate seafloor gas flux commonly employ a combination of acoustic and optical techniques. These techniques often use hull-mounted multibeam echosounders (MBES) to quickly ensonify large volumes of the water column for acoustic backscatter anomalies indicative of gas bubble plumes. Detection of these water column anomalies with a MBES provides information on the lateral distribution of the plumes, the midwater dimensions of the plumes, and their positions on the seafloor. Seafloor plume locations are targeted for visual investigations using a remotely operated vehicle (ROV) to determine bubble emission rates, venting behaviors, bubble sizes, and ascent velocities. Once these variables are measured in-situ, an extrapolation of gas flux is made over the survey area using the number of remotely-mapped flares. This methodology was applied to a geophysical survey conducted in 2013 over a large seafloor crater that developed in response to an oil well blowout in 1983 offshore Papua New Guinea. The site was investigated by multibeam and sidescan mapping, sub-bottom profiling, 2-D high-resolution multi-channel seismic reflection, and ROV video and coring operations. Numerous water column plumes were detected in the data suggesting vigorously active vents within and near the seafloor crater (Figure 1). This study uses dual-frequency MBES datasets (Reson 7125, 200/400 kHz) and ROV video imagery of the active hydrocarbon seeps to estimate total gas flux from the crater. Plumes of bubbles were extracted from the water column data using threshold filtering techniques. Analysis of video images of the seep emission sites within the crater provided estimates on bubble size, expulsion frequency, and ascent velocity. 
The average gas flux characteristics made from ROV video observations is extrapolated over the number
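In its simplest form, the flux extrapolation described above reduces to per-vent arithmetic scaled by the number of acoustically mapped plumes. All numbers below are invented placeholders, not the survey's measurements.

```python
import math

# Hypothetical per-seep observations from ROV video (all values invented
# for illustration; the paper's actual measurements are not reproduced here).
bubble_radius_m = 0.002        # 2 mm mean bubble radius
bubbles_per_second = 10.0      # expulsion frequency at one vent
n_plumes = 25                  # flares counted in the multibeam water-column data

# Volume flux per vent: bubble volume times expulsion rate.
bubble_volume_m3 = (4.0 / 3.0) * math.pi * bubble_radius_m ** 3
flux_per_vent = bubble_volume_m3 * bubbles_per_second    # m^3/s at depth

# Extrapolate over all acoustically mapped plumes.
total_flux = flux_per_vent * n_plumes                    # m^3/s at depth
```

Note that this is a flux at seafloor pressure; referencing it to the surface would additionally require an equation-of-state correction for bubble expansion during ascent.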

  20. Robust extrapolation scheme for fast estimation of 3D Ising field partition functions: application to within subject fMRI data

    Energy Technology Data Exchange (ETDEWEB)

    Risser, L.; Vincent, T.; Ciuciu, Ph. [NeuroSpin CEA, F-91191 Gif sur Yvette (France); Risser, L.; Vincent, T. [Laboratoire de Neuroimagerie Assistee par Ordinateur (LNAO) CEA - DSV/I2BM/NEUROSPIN (France); Risser, L. [Institut de mecanique des fluides de Toulouse (IMFT), CNRS: UMR5502 - Universite Paul Sabatier - Toulouse III - Institut National Polytechnique de Toulouse - INPT (France); Idier, J. [Institut de Recherche en Communications et en Cybernetique de Nantes (IRCCyN) CNRS - UMR6597 - Universite de Nantes - ecole Centrale de Nantes - Ecole des Mines de Nantes - Ecole Polytechnique de l' Universite de Nantes (France)

    2009-07-01

    In this paper, we present a first numerical scheme to estimate Partition Functions (PF) of 3D Ising fields. Our strategy is applied to the context of the joint detection-estimation of brain activity from functional Magnetic Resonance Imaging (fMRI) data, where the goal is to automatically recover activated regions and estimate region-dependent, hemodynamic filters. For any region, a specific binary Markov random field may embody spatial correlation over the hidden states of the voxels by modeling whether they are activated or not. To make this spatial regularization fully adaptive, our approach is first based upon it, classical path-sampling method to approximate a small subset of reference PFs corresponding to pre-specified regions. Then, file proposed extrapolation method allows its to approximate the PFs associated with the Ising fields defined over the remaining brain regions. In comparison with preexisting approaches, our method is robust; to topological inhomogeneities in the definition of the reference regions. As a result, it strongly alleviates the computational burden and makes spatially adaptive regularization of whole brain fMRI datasets feasible. (authors)

  1. Applying CFD in the Analysis of Heavy-Oil Transportation in Curved Pipes Using Core-Flow Technique

    Directory of Open Access Journals (Sweden)

    S Conceição

    2017-06-01

    Full Text Available Multiphase flow of oil, gas and water occurs in the petroleum industry from the reservoir to the processing units. The occurrence of heavy oils in the world is increasing significantly and points to the need for greater investment in the reservoirs exploitation and, consequently, to the development of new technologies for the production and transport of this oil. Therefore, it is interesting improve techniques to ensure an increase in energy efficiency in the transport of this oil. The core-flow technique is one of the most advantageous methods of lifting and transporting of oil. The core-flow technique does not alter the oil viscosity, but change the flow pattern and thus, reducing friction during heavy oil transportation. This flow pattern is characterized by a fine water pellicle that is formed close to the inner wall of the pipe, aging as lubricant of the oil flowing in the core of the pipe. In this sense, the objective of this paper is to study the isothermal flow of heavy oil in curved pipelines, employing the core-flow technique. A three-dimensional, transient and isothermal mathematical model that considers the mixture and k-e  turbulence models to address the gas-water-heavy oil three-phase flow in the pipe was applied for analysis. Simulations with different flow patterns of the involved phases (oil-gas-water have been done, in order to optimize the transport of heavy oils. Results of pressure and volumetric fraction distribution of the involved phases are presented and analyzed. It was verified that the oil core lubricated by a fine water layer flowing in the pipe considerably decreases pressure drop.

  2. Extrapolation procedures for calculating high-temperature gibbs free energies of aqueous electrolytes

    International Nuclear Information System (INIS)

    Tremaine, P.R.

    1979-01-01

    Methods for calculating high-temperature Gibbs free energies of mononuclear cations and anions from room-temperature data are reviewed. Emphasis is given to species required for oxide solubility calculations relevant to mass transport situations in the nuclear industry. Free energies predicted by each method are compared to selected values calculated from recently reported solubility studies and other literature data. Values for monatomic ions estimated using the assumption C̄p°(T) = C̄p°(298) agree best with experiment up to 423 K. From 423 K to 523 K, free energies from an electrostatic model for ion hydration are more accurate. Extrapolations for hydrolyzed species are limited by a lack of room-temperature entropy data, and expressions for estimating these entropies are discussed. (orig.)
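Under the constant heat capacity assumption C̄p°(T) = C̄p°(298) mentioned above, integrating dG/dT = -S gives a closed-form extrapolation. A minimal sketch with generic values, not tied to any particular ion:

```python
import math

def gibbs_extrapolate(g298, s298, cp, T, T0=298.15):
    """Gibbs free energy at temperature T from room-temperature data,
    assuming a constant heat capacity Cp(T) = Cp(T0).

    Integrating dG/dT = -S with S(T) = S(T0) + Cp*ln(T/T0) gives:
      G(T) = G(T0) - S(T0)*(T - T0) + Cp*((T - T0) - T*ln(T/T0))
    """
    return g298 - s298 * (T - T0) + cp * ((T - T0) - T * math.log(T / T0))
```

With `cp = 0` this reduces to the purely entropic linear extrapolation G(T) = G(T0) - S(T0)(T - T0).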

  3. Performance Values for Non-Destructive Assay (NDA) Technique Applied to Wastes: Evaluation by the ESARDA NDA Working Group

    International Nuclear Information System (INIS)

    Rackham, Jamie; Weber, Anne-Laure; Chard, Patrick

    2012-01-01

    The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later in 2002 the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise), because although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium and HEU bearing waste in addition to information on existing declared material or facilities. Improvements in the measurement performance in recent years, in particular the accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common usage for the assay of waste containing special nuclear material. The main topics covered by the document are: 1. Techniques for plutonium bearing solid wastes; 2. Techniques for uranium bearing solid wastes; 3. Techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements was obtained it was decided to exclude liquid wastes from this report. 
This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA

  4. Performance Values for Non-Destructive Assay (NDA) Technique Applied to Wastes: Evaluation by the ESARDA NDA Working Group

    Energy Technology Data Exchange (ETDEWEB)

    Rackham, Jamie [Babcock International Group, Sellafield, Seascale, Cumbria, (United Kingdom); Weber, Anne-Laure [Institut de Radioprotection et de Surete Nucleaire Fontenay-Aux-Roses (France); Chard, Patrick [Canberra, Forss Business and Technology park, Thurso, Caithness (United Kingdom)

    2012-12-15

    The first evaluation of NDA performance values was undertaken by the ESARDA Working Group for Standards and Non Destructive Assay Techniques and was published in 1993. Almost ten years later in 2002 the Working Group reviewed those values and reported on improvements in performance values and new measurement techniques that had emerged since the original assessment. The 2002 evaluation of NDA performance values did not include waste measurements (although these had been incorporated into the 1993 exercise), because although the same measurement techniques are generally applied, the performance is significantly different compared to the assay of conventional Safeguarded special nuclear material. It was therefore considered more appropriate to perform a separate evaluation of performance values for waste assay. Waste assay is becoming increasingly important within the Safeguards community, particularly since the implementation of the Additional Protocol, which calls for declaration of plutonium and HEU bearing waste in addition to information on existing declared material or facilities. Improvements in the measurement performance in recent years, in particular the accuracy, mean that special nuclear materials can now be accounted for in wastes with greater certainty. This paper presents an evaluation of performance values for the NDA techniques in common usage for the assay of waste containing special nuclear material. The main topics covered by the document are: 1. Techniques for plutonium bearing solid wastes; 2. Techniques for uranium bearing solid wastes; 3. Techniques for assay of fissile material in spent fuel wastes. Originally it was intended to include performance values for measurements of uranium and plutonium in liquid wastes; however, as no performance data for liquid waste measurements was obtained it was decided to exclude liquid wastes from this report. 
This issue of the performance values for waste assay has been evaluated and discussed by the ESARDA

  5. Extrapolation of lattice gauge theories to the continuum limit

    International Nuclear Information System (INIS)

    Duncan, A.; Vaidya, H.

    1978-01-01

    The problem of extrapolating lattice gauge theories from the strong-coupling phase to the continuum critical point is studied for the Abelian (U(1)) and non-Abelian (SU(2)) theories in three (space-time) dimensions. A method is described for obtaining the asymptotic behavior, for large β, of such thermodynamic quantities and correlation functions as the free energy and Wilson loop function. Certain general analyticity and positivity properties (in the complex β-plane) are shown to lead, after appropriate analytic remappings, to a Stieltjes property of these functions. Rigorous theorems then guarantee uniform and monotone convergence of the Padé approximants, with exact pointwise upper and lower bounds. The first three Padé approximants are computed for both the free energy and the Wilson function. For the free energy, satisfactory agreement is found with the asymptotic behavior computed by an explicit lattice calculation. The strong-coupling series for the Wilson function is found to be considerably more unstable in the lower order terms; correspondingly, convergence of the Padé approximants is found to be slower than in the free-energy case. It is suggested that higher-order calculations may allow a reasonably accurate determination of the string constant for the SU(2) theory. 14 references

  6. An accurate technique for the solution of the nonlinear point kinetics equations

    International Nuclear Information System (INIS)

    Picca, Paolo; Ganapol, Barry D.; Furfaro, Roberto

    2011-01-01

    A novel methodology for the solution of the non-linear point kinetics (PK) equations is proposed. The technique is based on a piecewise constant approximation of the PK system of ODEs and explicitly accounts for reactivity feedback effects through an iterative cycle. High accuracy is reached by introducing a sub-mesh for the numerical evaluation of the integrals involved and by correcting the source term to include the non-linear effect on a finer time scale. The use of extrapolation techniques for convergence acceleration is also explored. Results for an adiabatic feedback model are reported and compared with other benchmarks in the literature. The convergence trend makes the algorithm particularly attractive for applications, including multi-point kinetics and quasi-static frameworks. (author)
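A minimal sketch of piecewise-constant propagation for point kinetics follows, with one delayed-neutron group, illustrative parameter values and no feedback; the paper's method adds the feedback iteration and sub-mesh source corrections on top of this idea.

```python
import numpy as np

# One-delayed-group point kinetics with piecewise-constant reactivity:
# y = [n, c],  y' = A(rho) y.  Parameter values are illustrative only.
beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay const, generation time

def step(y, rho, dt):
    """Advance one time step with reactivity held constant: the linear
    system is solved exactly via the matrix exponential, computed here
    through an eigendecomposition of the 2x2 kinetics matrix."""
    A = np.array([[(rho - beta) / Lambda, lam],
                  [beta / Lambda,        -lam]])
    w, V = np.linalg.eig(A)
    return np.real(V @ np.diag(np.exp(w * dt)) @ np.linalg.inv(V) @ y)

# Critical steady state: n = 1, c = beta / (Lambda * lam); with rho = 0
# the exact propagator must leave this state unchanged.
y = np.array([1.0, beta / (Lambda * lam)])
for _ in range(100):
    y = step(y, rho=0.0, dt=0.01)
```

For stiffer multi-group systems one would typically use `scipy.linalg.expm` instead of an explicit eigendecomposition, but the piecewise-constant idea is the same.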

  7. Extrapolation of short term observations to time periods relevant to the isolation of long lived radioactive waste. Results of a co-ordinated research project 1995-2000

    International Nuclear Information System (INIS)

    2000-09-01

    This report addresses safety analysis of the whole repository life-cycle, which may require long term performance assessment of its components and evaluation of potential impacts of the facility on the environment. Generic consideration of procedures for the development of predictive tools is complemented by detailed characterization of selected principles and methods that were applied and presented within the co-ordinated research project (CRP). The project focused on different approaches to extrapolation, considering radionuclide migration/sorption; physical, geochemical and geotechnical characteristics of engineered barriers; irradiated rock and backfill performance; and corrosion of metallic and vitreous materials. This document contains a comprehensive discussion of the overall problem and the practical results of the individual projects performed within the CRP. Each of the papers on the individual projects has been indexed separately

  8. Evaluation of the Microbiologically Influenced Corrosion in a carbon steel making use of electrochemical techniques

    International Nuclear Information System (INIS)

    Diaz S, A.C.; Arganis, C.; Ayala, V.; Gachuz, M.; Merino, J.; Suarez, S.; Brena, M.; Luna, P.

    2001-01-01

    Microbiologically Influenced Corrosion (MIC) has been identified as a problem in nuclear plant systems in recent years. The electrochemical behavior of carbon steel coupons submitted to the action of sulfate-reducing bacteria (SRB) was evaluated using direct-current electrochemical techniques as well as electrochemical noise. The results show little variation between the corrosion rates obtained by Tafel extrapolation and by linear polarization resistance, whereas the electrochemical noise technique revealed important differences between the behavior registered in environments with and without microorganisms. (Author)
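Tafel extrapolation itself is a simple fit-and-extrapolate operation: the linear region of log|i| versus overpotential is fitted and extrapolated back to the corrosion potential. The sketch below recovers a corrosion current from synthetic anodic data obeying ideal Tafel behaviour; all values are invented.

```python
import numpy as np

# Synthetic anodic polarization data following ideal Tafel behaviour:
# log10(i) = log10(i_corr) + eta / b_a  (illustrative values only).
i_corr_true = 1e-6          # corrosion current density, A/cm^2
b_a = 0.060                 # anodic Tafel slope, V/decade
eta = np.linspace(0.05, 0.25, 20)            # overpotential, V (Tafel region)
log_i = np.log10(i_corr_true) + eta / b_a

# Tafel extrapolation: fit the linear region of log|i| vs eta and
# extrapolate back to zero overpotential (the corrosion potential).
slope, intercept = np.polyfit(eta, log_i, 1)
i_corr_est = 10.0 ** intercept               # recovered corrosion current
```

On real data the fit window must be restricted to the genuinely linear (activation-controlled) portion of each branch, well away from the corrosion potential.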

  9. Depth dose distribution in water for clinical applicators of 90Sr + 90Y, with an extrapolation mini chamber

    International Nuclear Information System (INIS)

    Antonio, Patricia de Lara; Caldas, Linda V.E.; Oliveira, Mercia L.

    2009-01-01

    This work determines the depth dose in water for clinical applicators of 90Sr + 90Y, using an extrapolation mini chamber developed at IPEN, Sao Paulo, Brazil, and acrylic plates of different thicknesses. The obtained results were compared with international recommendations and were considered satisfactory
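The extrapolation-chamber principle behind such measurements can be sketched in a few lines: the reading is extrapolated to zero collecting volume by fitting ionization current against electrode gap and taking the limiting slope. The data below are synthetic and illustrative only.

```python
import numpy as np

# Illustrative extrapolation-chamber readings: ionization current as a
# function of electrode gap (synthetic, noise-free data).
gaps_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
currents_pA = 4.0 * gaps_mm + 0.3    # linear response plus a small offset

# The dose rate is proportional to the limiting slope dI/dd as the
# collecting volume is extrapolated to zero gap: fit I(d), take the slope.
slope, offset = np.polyfit(gaps_mm, currents_pA, 1)
```

Converting the slope to absorbed dose rate then requires the chamber geometry, air density and the usual W/e and stopping-power factors, which are omitted here.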

  10. Kinetic Monte Carlo simulations of water ice porosity: extrapolations of deposition parameters from the laboratory to interstellar space

    Science.gov (United States)

    Clements, Aspen R.; Berk, Brandon; Cooke, Ilsa R.; Garrod, Robin T.

    2018-02-01

    Using an off-lattice kinetic Monte Carlo model, we reproduce experimental laboratory trends in the density of amorphous solid water (ASW) for varied deposition angle, rate and surface temperature. Extrapolation of the model to conditions appropriate to protoplanetary disks and interstellar dark clouds indicates that these ices may be less porous than laboratory ices.

  11. Neutron Filter Technique and its use for Fundamental and applied Investigations

    International Nuclear Information System (INIS)

    Gritzay, V.; Kolotyi, V.

    2008-01-01

    At the Kyiv Research Reactor (KRR) the neutron filtered beam technique has been used for more than 30 years and its development continues; the new and updated facilities for neutron cross-section measurements provide neutron cross sections with rather high accuracy: total neutron cross sections with accuracy of 1% or better, and neutron scattering cross sections with 3-6% accuracy. The main purpose of this paper is the presentation of the neutron measurement techniques developed at KRR and the demonstration of some experimental results obtained using these techniques

  12. VIDEOGRAMMETRIC RECONSTRUCTION APPLIED TO VOLCANOLOGY: PERSPECTIVES FOR A NEW MEASUREMENT TECHNIQUE IN VOLCANO MONITORING

    Directory of Open Access Journals (Sweden)

    Emmanuelle Cecchi

    2011-05-01

    Full Text Available This article deals with videogrammetric reconstruction of volcanic structures. As a first step, the method is tested in the laboratory. The objective is to reconstruct small sand and plaster cones, analogous to volcanoes, that deform with time. The initial stage consists in modelling the sensor (internal parameters) and calculating its orientation and position in space, using a multi-view calibration method. In practice two sets of views are taken: a first one around a calibration target and a second one around the studied object. Both sets are combined in the calibration software to simultaneously compute the internal parameters modelling the sensor, and the external parameters giving the spatial location of each view around the cone. Following this first stage, an N-view reconstruction process is carried out. The principle is as follows: an initial 3D model of the cone is created and then iteratively deformed to fit the real object. The deformation of the meshed model is based on a texture coherence criterion. At present, this reconstruction method and its precision are being validated at laboratory scale. The objective will then be to follow analogue model deformation with time using successive reconstructions. In the future, the method will be applied to real volcanic structures. Modifications of the initial code will certainly be required; however, excellent reconstruction accuracy and valuable simplicity and flexibility of the technique are expected, compared to classic stereophotogrammetric techniques used in volcanology.

  13. Dosimetry techniques applied to thermoluminescent age estimation

    International Nuclear Information System (INIS)

    Erramli, H.

    1986-12-01

    The reliability and ease of field application of the measuring techniques of natural radioactivity dosimetry are studied. The natural radioactivity in minerals is composed of the internal dose deposited by alpha and beta radiations issued from the sample itself and the external dose deposited by gamma and cosmic radiations issued from the surroundings of the sample. Two techniques for external dosimetry are examined in detail: TL dosimetry and field gamma dosimetry. Calibration and experimental conditions are presented. A new integrated dosimetric method for internal and external dose measurement is proposed: the TL dosimeter is placed in the soil in exactly the same conditions as the sample, for a time long enough for the total dose evaluation

  14. Prediction of UT1-UTC, LOD and AAM χ3 by combination of least-squares and multivariate stochastic methods

    Science.gov (United States)

    Niedzielski, Tomasz; Kosek, Wiesław

    2008-02-01

    This article presents the application of a multivariate prediction technique for predicting universal time (UT1-UTC), length of day (LOD) and the axial component of atmospheric angular momentum (AAM χ3). The multivariate predictions of LOD and UT1-UTC are generated by means of the combination of (1) least-squares (LS) extrapolation of models for annual, semiannual, 18.6-year, 9.3-year oscillations and for the linear trend, and (2) multivariate autoregressive (MAR) stochastic prediction of LS residuals (LS + MAR). The MAR technique enables the use of the AAM χ3 time-series as the explanatory variable for the computation of LOD or UT1-UTC predictions. In order to evaluate the performance of this approach, two other prediction schemes are also applied: (1) LS extrapolation, (2) combination of LS extrapolation and univariate autoregressive (AR) prediction of LS residuals (LS + AR). The multivariate predictions of AAM χ3 data, however, are computed as a combination of the extrapolation of the LS model for annual and semiannual oscillations and the LS + MAR. The AAM χ3 predictions are also compared with LS extrapolation and LS + AR prediction. It is shown that the predictions of LOD and UT1-UTC based on LS + MAR taking into account the axial component of AAM are more accurate than the predictions of LOD and UT1-UTC based on LS extrapolation or on LS + AR. In particular, the UT1-UTC predictions based on LS + MAR during El Niño/La Niña events exhibit considerably smaller prediction errors than those calculated by means of LS or LS + AR. The AAM χ3 time-series is predicted using LS + MAR with higher accuracy than applying LS extrapolation itself in the case of medium-term predictions (up to 100 days in the future). However, the predictions of AAM χ3 reveal the best accuracy for LS + AR.
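A much-reduced sketch of the LS + AR scheme (a univariate AR(1) model in place of the multivariate MAR model, and a single harmonic in place of the full set of oscillations) might look like:

```python
import numpy as np

def ls_ar_forecast(y, t, period, horizon):
    """Toy LS + AR forecast: least-squares fit of a linear trend plus one
    harmonic, then an AR(1) model on the LS residuals (a univariate
    stand-in for the LS + MAR scheme described above)."""
    # Least-squares model: constant, linear trend, sine/cosine of one period.
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef

    # AR(1) coefficient from a lag-1 least-squares (Yule-Walker style) fit;
    # guard against the degenerate zero-residual case.
    denom = np.dot(resid[:-1], resid[:-1])
    phi = np.dot(resid[1:], resid[:-1]) / denom if denom > 1e-12 else 0.0

    # Extrapolate the deterministic LS part and the stochastic AR part.
    t_fut = t[-1] + np.arange(1, horizon + 1)
    X_fut = np.column_stack([np.ones_like(t_fut), t_fut,
                             np.sin(2 * np.pi * t_fut / period),
                             np.cos(2 * np.pi * t_fut / period)])
    ar_part = resid[-1] * phi ** np.arange(1, horizon + 1)
    return X_fut @ coef + ar_part

# Synthetic daily series: trend + annual term (noise-free for clarity).
t = np.arange(500, dtype=float)
y_obs = 2.0 + 0.01 * t + 3.0 * np.sin(2 * np.pi * t / 365.25)
pred = ls_ar_forecast(y_obs, t, period=365.25, horizon=10)
```

On noise-free data the AR part vanishes and the forecast reproduces the deterministic extension; on real UT1-UTC or LOD residuals the AR (or MAR) part is what carries the short-term improvement.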

  15. MTS-MD of Biomolecules Steered with 3D-RISM-KH Mean Solvation Forces Accelerated with Generalized Solvation Force Extrapolation.

    Science.gov (United States)

    Omelyan, Igor; Kovalenko, Andriy

    2015-04-14

    We developed a generalized solvation force extrapolation (GSFE) approach to speed up multiple time step molecular dynamics (MTS-MD) of biomolecules steered with mean solvation forces obtained from the 3D-RISM-KH molecular theory of solvation (three-dimensional reference interaction site model with the Kovalenko-Hirata closure). GSFE is based on a set of techniques including the non-Eckart-like transformation of coordinate space separately for each solute atom, extension of the force-coordinate pair basis set followed by selection of the best subset, balancing the normal equations by modified least-squares minimization of deviations, and incremental increase of outer time step in motion integration. Mean solvation forces acting on the biomolecule atoms in conformations at successive inner time steps are extrapolated using a relatively small number of best (closest) solute atomic coordinates and corresponding mean solvation forces obtained at previous outer time steps by converging the 3D-RISM-KH integral equations. The MTS-MD evolution steered with GSFE of 3D-RISM-KH mean solvation forces is efficiently stabilized with our optimized isokinetic Nosé-Hoover chain (OIN) thermostat. We validated the hybrid MTS-MD/OIN/GSFE/3D-RISM-KH integrator on solvated organic and biomolecules of different stiffness and complexity: asphaltene dimer in toluene solvent, hydrated alanine dipeptide, miniprotein 1L2Y, and protein G. The GSFE accuracy and the OIN efficiency allowed us to enlarge outer time steps up to huge values of 1-4 ps while accurately reproducing conformational properties. Quasidynamics steered with 3D-RISM-KH mean solvation forces achieves time scale compression of conformational changes coupled with solvent exchange, resulting in further significant acceleration of protein conformational sampling with respect to real time dynamics. Overall, this provided a 50- to 1000-fold effective speedup of conformational sampling for these systems, compared to conventional MD
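
    The extrapolation step can be caricatured in a few lines: approximate the force at a new configuration as a least-squares combination of the closest previously stored coordinate-force pairs. The toy force field, basis size, and neighbour-selection rule below are assumptions for illustration only; the actual GSFE additionally uses per-atom non-Eckart-like coordinate transformations and balanced normal equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "solvation force" for a single atom: a smooth nonlinear function standing
# in for converged 3D-RISM-KH mean solvation forces (an assumption, for illustration).
def true_force(x):
    return np.array([-np.sin(x[0]) + 0.3 * x[1],
                     -0.5 * x[1] + 0.1 * np.cos(x[2]),
                     -0.2 * x[2] ** 3])

# Stored basis: coordinates and forces saved at previous "outer" time steps.
basis_x = [rng.uniform(-1, 1, 3) for _ in range(12)]
basis_f = [true_force(x) for x in basis_x]

def extrapolate_force(x_new, k=6):
    # Pick the k closest stored configurations ...
    d = np.array([np.linalg.norm(x_new - xb) for xb in basis_x])
    idx = np.argsort(d)[:k]
    # ... express x_new as a least-squares combination of them ...
    A = np.column_stack([basis_x[i] for i in idx])
    w, *_ = np.linalg.lstsq(A, x_new, rcond=None)
    # ... and apply the same weights to the stored forces.
    F = np.column_stack([basis_f[i] for i in idx])
    return F @ w

x_new = np.array([0.2, -0.1, 0.3])
approx = extrapolate_force(x_new)
print(np.linalg.norm(approx - true_force(x_new)))
```

    In the MTS-MD setting the cheap extrapolated forces drive the inner time steps, and the expensive converged 3D-RISM-KH forces are recomputed only at the outer steps, which is where the reported speedup comes from.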

  16. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time.

    Science.gov (United States)

    Baker, Stuart G; Sargent, Daniel J; Buyse, Marc; Burzykowski, Tomasz

    2012-03-01

    Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. © 2011, The International Biometric Society No claim to original US government works.
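
    The leave-one-trial-out idea can be sketched simply: fit the prediction model on all but one historical trial, predict the held-out trial from its surrogate, and use the spread of those errors as the extrapolation-error term. The linear model and all numbers below are hypothetical; the paper's models (mixture, linear, principal stratification) are richer.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical historical trials: (surrogate effect, true-endpoint effect) pairs.
surr = rng.normal(0.3, 0.15, size=8)
true_eff = 0.8 * surr + rng.normal(0, 0.03, size=8)

def predict(surr_train, true_train, s_new):
    # Simple linear prediction model fitted to the historical trials.
    slope, intercept = np.polyfit(surr_train, true_train, 1)
    return slope * s_new + intercept

# Leave-one-trial-out estimate of the extrapolation error.
loo_errors = []
for i in range(len(surr)):
    mask = np.arange(len(surr)) != i
    pred = predict(surr[mask], true_eff[mask], surr[i])
    loo_errors.append(pred - true_eff[i])
extrap_sd = np.std(loo_errors, ddof=1)

# Predicted effect in a target trial where only the surrogate is observed,
# with the extrapolation error folded into the interval.
s_target = 0.35
pred_target = predict(surr, true_eff, s_target)
print(f"predicted effect: {pred_target:.3f} +/- {1.96 * extrap_sd:.3f}")
```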

  17. Assessment of ground-based monitoring techniques applied to landslide investigations

    Science.gov (United States)

    Uhlemann, S.; Smith, A.; Chambers, J.; Dixon, N.; Dijkstra, T.; Haslam, E.; Meldrum, P.; Merritt, A.; Gunn, D.; Mackay, J.

    2016-01-01

    A landslide complex in the Whitby Mudstone Formation at Hollin Hill, North Yorkshire, UK is periodically re-activated in response to rainfall-induced pore-water pressure fluctuations. This paper compares long-term measurements (i.e., 2009-2014) obtained from a combination of monitoring techniques that have been employed together for the first time on an active landslide. The results highlight the relative performance of the different techniques, and can provide guidance for researchers and practitioners for selecting and installing appropriate monitoring techniques to assess unstable slopes. Particular attention is given to the spatial and temporal resolutions offered by the different approaches that include: Real Time Kinematic-GPS (RTK-GPS) monitoring of a ground surface marker array, conventional inclinometers, Shape Acceleration Arrays (SAA), tilt meters, active waveguides with Acoustic Emission (AE) monitoring, and piezometers. High spatial resolution information has allowed locating areas of stability and instability across a large slope. This has enabled identification of areas where further monitoring efforts should be focused. High temporal resolution information allowed the capture of 'S'-shaped slope displacement-time behaviour (i.e. phases of slope acceleration, deceleration and stability) in response to elevations in pore-water pressures. This study shows that a well-balanced suite of monitoring techniques that provides high temporal and spatial resolutions on both measurement and slope scale is necessary to fully understand failure and movement mechanisms of slopes. In the case of the Hollin Hill landslide it enabled detailed interpretation of the geomorphological processes governing landslide activity. It highlights the benefit of regularly surveying a network of GPS markers to determine areas for installation of movement monitoring techniques that offer higher resolution both temporally and spatially. 
The small sensitivity of tilt meter measurements

  18. [Progress in transgenic fish techniques and application].

    Science.gov (United States)

    Ye, Xing; Tian, Yuan-Yuan; Gao, Feng-Ying

    2011-05-01

    Transgenic techniques provide a new way for fish breeding. Stable lines of growth hormone (GH) gene-transfer carp, salmon and tilapia, as well as fluorescent protein gene-transfer zebrafish and white cloud mountain minnow, have been produced. The fast-growth characteristic of GH-transgenic fish will be of great importance for promoting aquaculture production and economic efficiency. This paper summarizes the progress in transgenic fish research and ecological assessment. Microinjection is still the most commonly used method, but it often results in multi-site and multi-copy integration. Co-injection of transposon or meganuclease can greatly improve the efficiency of gene transfer and integration. "All-fish" or "auto" gene constructs should be considered for producing transgenic fish, in order to eliminate misgivings about food safety and to benefit expression of the transferred gene. Environmental risk is the biggest obstacle to the commercial application of transgenic fish. Data indicate that transgenic fish have inferior fitness compared with traditional domestic fish. However, because of genotype-by-environment effects, it is difficult to extrapolate from the ecological consequences of transgenic fish determined in the laboratory to the complex ecological interactions that occur in nature. It is critical to establish highly naturalized environments for acquiring reliable data that can be used to evaluate environmental risk. Efficacious physical and biological containment strategies remain crucial approaches for ensuring the safe application of transgenic fish technology.

  19. Study of an extrapolation chamber in a standard diagnostic radiology beam by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Vedovato, Uly Pita; Silva, Rayre Janaina Vieira; Neves, Lucio Pereira; Santos, William S.; Perini, Ana Paula; Belinato, Walmir

    2016-01-01

    In this work, we studied the influence of the components of an extrapolation ionization chamber on its response. This study was undertaken using the MCNP-5 Monte Carlo code and the standard diagnostic radiology quality for direct beams (RQR5). Using tally F6 and 2.1 x 10^9 simulated histories, the results showed that the chamber design and materials do not significantly alter the energy deposited in its sensitive volume. The collecting electrode and support board were the components with the greatest influence on the chamber response. (author)

  20. Containment integrity and leak testing. Procedures applied and experiences gained in European countries

    International Nuclear Information System (INIS)

    1987-01-01

    Containment systems are the ultimate safety barrier for preventing the escape of gaseous, liquid and solid radioactive materials produced in normal operation, not retained in process systems, and for keeping back radioactive materials released by system malfunction or equipment failure. A primary element of the containment shell is therefore its leak-tight design. The report describes the present containment concepts mostly used in European countries. The leak-testing procedures applied and the experiences gained in their application are also discussed. The report refers more particularly to pre-operational testing, periodic testing and extrapolation methods of leak rates measured at test conditions to expected leak rates at calculated accident conditions. The actual problems in periodic containment leak rate testing are critically reviewed. In the appendix to the report a summary is given of the regulations and specifications applied in different member countries

  1. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, Jason; Gunter, Dan; Tierney, Brian; Allcock, Bill; Bester, Joe; Bresnahan, John; Tuecke, Steve

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference
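
    One of the standard WAN-tuning techniques in this setting is sizing TCP buffers to the bandwidth-delay product (BDP), so the pipe stays full across a long round-trip time. A minimal sketch, using a hypothetical 622 Mbit/s, 60 ms path (the numbers are assumptions, not the paper's measurements):

```python
import socket

# Rule of thumb: the TCP send/receive buffer should hold at least one
# bandwidth-delay product, i.e. (bits/s * RTT) / 8 bytes.
def bdp_bytes(bandwidth_bps, rtt_seconds):
    return int(bandwidth_bps * rtt_seconds / 8)

# Example: 622 Mbit/s path with a 60 ms round-trip time.
buf = bdp_bytes(622e6, 0.060)
print(buf)  # one BDP of buffering, in bytes

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request send/receive buffers of at least one BDP (the OS may clamp this).
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF) > 0)
s.close()
```

    Tools in this space typically combine buffer tuning with parallel TCP streams, which is one of the mechanisms GridFTP exposes.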

  2. Beyond the plot: technology extrapolation domains for scaling out agronomic science

    Science.gov (United States)

    Rattalino Edreira, Juan I.; Cassman, Kenneth G.; Hochman, Zvi; van Ittersum, Martin K.; van Bussel, Lenny; Claessens, Lieven; Grassini, Patricio

    2018-05-01

    Ensuring an adequate food supply in systems that protect environmental quality and conserve natural resources requires productive and resource-efficient cropping systems on existing farmland. Meeting this challenge will be difficult without a robust spatial framework that facilitates rapid evaluation and scaling-out of currently available and emerging technologies. Here we develop a global spatial framework to delineate ‘technology extrapolation domains’ based on key climate and soil factors that govern crop yields and yield stability in rainfed crop production. The proposed framework adequately represents the spatial pattern of crop yields and stability when evaluated over the data-rich US Corn Belt. It also facilitates evaluation of cropping system performance across continents, which can improve efficiency of agricultural research that seeks to intensify production on existing farmland. Populating this biophysical spatial framework with appropriate socio-economic attributes provides the potential to amplify the return on investments in agricultural research and development by improving the effectiveness of research prioritization and impact assessment.
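
    At its core, delineating such domains amounts to classifying each location by binning a few climate and soil covariates and treating each bin combination as one domain. A minimal sketch with hypothetical variables and breakpoints (the paper's actual factors and thresholds are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical site attributes: growing-degree days, an aridity index, and
# plant-available water holding capacity (units and breakpoints illustrative).
gdd = rng.uniform(1500, 4000, 10)
aridity = rng.uniform(0.2, 1.2, 10)
pawhc = rng.uniform(50, 250, 10)

def domain(g, a, w):
    # A technology extrapolation domain is simply the combination of class bins.
    g_bin = int(np.digitize(g, [2000, 3000]))
    a_bin = int(np.digitize(a, [0.5, 0.8]))
    w_bin = int(np.digitize(w, [100, 180]))
    return (g_bin, a_bin, w_bin)

domains = [domain(*site) for site in zip(gdd, aridity, pawhc)]
print(len(set(domains)), "distinct domains among", len(domains), "sites")
```

    A result tested at one site can then be tentatively extrapolated to every other site that falls in the same bin combination.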

  3. In vitro to in vivo extrapolation of effective dosimetry in developmental toxicity testing : Application of a generic PBK modelling approach

    NARCIS (Netherlands)

    Fragki, Styliani; Piersma, Aldert H; Rorije, Emiel; Zeilmaker, Marco J

    2017-01-01

    Incorporation of kinetics to quantitative in vitro to in vivo extrapolations (QIVIVE) is a key step for the realization of a non-animal testing paradigm, in the sphere of regulatory toxicology. The use of Physiologically-Based Kinetic (PBK) modelling for determining systemic doses of chemicals at

  5. Zero order and signal processing spectrophotometric techniques applied for resolving interference of metronidazole with ciprofloxacin in their pharmaceutical dosage form.

    Science.gov (United States)

    Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed

    2016-02-05

    Four rapid, simple, accurate and precise spectrophotometric methods were used for the determination of ciprofloxacin in the presence of metronidazole as an interferent. The methods under study are area under the curve and simultaneous equation, in addition to smart signal processing techniques for manipulating ratio spectra, namely Savitzky-Golay filters and continuous wavelet transform. All the methods were validated according to the ICH guidelines, where accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. They can therefore be used for the routine analysis of ciprofloxacin in quality-control laboratories. Copyright © 2015 Elsevier B.V. All rights reserved.
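
    The Savitzky-Golay step can be illustrated with a minimal hand-rolled filter (a sliding polynomial fit evaluated at the window centre). The spectrum below is synthetic and the window/order choices are assumptions, not the paper's settings.

```python
import numpy as np

def savgol_smooth(y, window=11, order=3):
    # Minimal Savitzky-Golay filter: fit a polynomial of the given order to
    # each sliding window and keep its value at the window centre.
    half = window // 2
    ypad = np.pad(y, half, mode="edge")
    out = np.empty_like(y, dtype=float)
    x = np.arange(window) - half
    for i in range(len(y)):
        coeffs = np.polyfit(x, ypad[i:i + window], order)
        out[i] = np.polyval(coeffs, 0.0)
    return out

# Hypothetical "ratio spectrum": a smooth band plus measurement noise.
rng = np.random.default_rng(3)
wavelength = np.linspace(250, 350, 200)
clean = np.exp(-((wavelength - 290) / 12.0) ** 2)
noisy = clean + rng.normal(0, 0.02, wavelength.size)

smooth = savgol_smooth(noisy)
print(np.abs(smooth - clean).mean() < np.abs(noisy - clean).mean())
```

    In practice `scipy.signal.savgol_filter(noisy, 11, 3)` does the same job with precomputed convolution coefficients.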

  6. submitter Unified Scaling Law for flux pinning in practical superconductors: II. Parameter testing, scaling constants, and the Extrapolative Scaling Expression

    CERN Document Server

    Ekin, Jack W; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David

    2016-01-01

    A scaling study of several thousand Nb$_{3}$Sn critical-current $(I_c)$ measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of $I_c$ on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, and offers significant savings in sample characterization time and a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb$_{3}$Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large $I_c(B,T,ε)$ datasets (each with nearly a thousand $I_c$ measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strain...

  7. Nuclear analytical techniques applied to forensic chemistry

    International Nuclear Information System (INIS)

    Nicolau, Veronica; Montoro, Silvia; Pratta, Nora; Giandomenico, Angel Di

    1999-01-01

    Gun shot residues produced by firing guns are mainly composed of visible particles. The individual characterization of these particles allows distinguishing those containing heavy metals, originating from gun shot residues, from those having a different origin or history. In this work, the results obtained from the study of gun shot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has fired a gun or has been in contact with one after the shot was produced. As reference samples, particles collected from the hands of persons engaged in different activities were studied for comparison. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X-Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within times compatible with forensic requirements. (author)

  8. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 μm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  10. Applying Toyota production system techniques for medication delivery: improving hospital safety and efficiency.

    Science.gov (United States)

    Newell, Terry L; Steinmetz-Malato, Laura L; Van Dyke, Deborah L

    2011-01-01

    The inpatient medication delivery system used at a large regional acute care hospital in the Midwest had become antiquated and inefficient. The existing 24-hr medication cart-fill exchange process with delivery to the patients' bedside did not always provide ordered medications to the nursing units when they were needed. In 2007 the principles of the Toyota Production System (TPS) were applied to the system. Project objectives were to improve medication safety and reduce the time needed for nurses to retrieve patient medications. A multidisciplinary team was formed that included representatives from nursing, pharmacy, informatics, quality, and various operational support departments. Team members were educated and trained in the tools and techniques of TPS, and then designed and implemented a new pull system benchmarking the TPS Ideal State model. The newly installed process, providing just-in-time medication availability, has measurably improved delivery processes as well as patient safety and satisfaction. Other positive outcomes have included improved nursing satisfaction, reduced nursing wait time for delivered medications, and improved efficiency in the pharmacy. After a successful pilot on two nursing units, the system is being extended to the rest of the hospital. © 2010 National Association for Healthcare Quality.

  11. Is the climate right for pleistocene rewilding? Using species distribution models to extrapolate climatic suitability for mammals across continents.

    Directory of Open Access Journals (Sweden)

    Orien M W Richmond

    Full Text Available Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. This has implications for modeling range shifts of

  12. Study of the collecting electrode material of an extrapolation chamber by Monte Carlo simulation

    International Nuclear Information System (INIS)

    Vedovato, Uly Pita; Santos, William S.; Perini, Ana Paula; Belinato, Walmir

    2017-01-01

    In this work, the influence of different collecting electrode materials on the response of an extrapolation ionization chamber was evaluated. This ionization chamber was simulated with the MCNP-4C Monte Carlo code, and the spectrum of a standard diagnostic radiology beam (RQR5) was utilized. The differences in the results arise because photon interactions with the different collecting electrode materials contribute different amounts of energy deposited in the sensitive volume of the ionization chamber, depending on the atomic number of the evaluated material. The material that presented the least influence was graphite, the original constituent of the ionization chamber. (author)
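
    The mechanism invoked here (more photon interactions in higher-attenuation electrode materials, hence more energy deposited near the sensitive volume) can be illustrated with a toy Monte Carlo. The attenuation coefficients, slab geometry, and full-absorption assumption below are hypothetical and are not taken from the simulated chamber.

```python
import numpy as np

rng = np.random.default_rng(4)

def deposited_fraction(mu_per_cm, thickness_cm, n_photons=200_000):
    # Toy Monte Carlo: sample exponential free paths and count the photons
    # that interact (and are assumed fully absorbed) inside the slab.
    path = rng.exponential(1.0 / mu_per_cm, size=n_photons)
    return float(np.mean(path < thickness_cm))

# Hypothetical linear attenuation coefficients (illustrative values only).
for material, mu in (("graphite", 0.35), ("aluminium", 0.99)):
    print(material, round(deposited_fraction(mu, 0.1), 3))
```

    Production codes like MCNP track scattering, secondary electrons, and geometry in full; this sketch only shows why the interaction count, and hence the deposited energy, grows with the attenuation coefficient.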

  13. Use of generalized regression models for the analysis of stress-rupture data

    International Nuclear Information System (INIS)

    Booker, M.K.

    1978-01-01

    The design of components for operation in an elevated-temperature environment often requires a detailed consideration of the creep and creep-rupture properties of the construction materials involved. Techniques for the analysis and extrapolation of creep data have been widely discussed. The paper presents a generalized regression approach to the analysis of such data. This approach has been applied to multiple-heat data sets for types 304 and 316 austenitic stainless steel, ferritic 2 1/4 Cr-1 Mo steel, and the high-nickel austenitic alloy 800H. Analyses of data for single heats of several materials are also presented. All results appear good. The techniques presented represent a simple yet flexible and powerful means for the analysis and extrapolation of creep and creep-rupture data.
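
    A generalized regression of this kind can be sketched with a Larson-Miller-style parameterization, chosen here purely for illustration (the paper does not commit to this specific form), in which log rupture time is linear in the fitted parameters. All data values below are hypothetical.

```python
import numpy as np

# Hypothetical stress-rupture data: temperature T (K), stress (MPa),
# rupture time (h). Values are illustrative, not measured.
T = np.array([810.0, 810.0, 866.0, 866.0, 922.0, 922.0])
stress = np.array([200.0, 150.0, 150.0, 100.0, 100.0, 70.0])
t_rupture = np.array([12000.0, 60000.0, 9000.0, 48000.0, 7000.0, 30000.0])

# Larson-Miller-style model: T * (C + log10 t) = f(log10 stress), rearranged as
#   log10 t = -C + (a0 + a1*l + a2*l^2) / T,  with l = log10(stress),
# which is linear in the parameters (-C, a0, a1, a2).
l = np.log10(stress)
X = np.column_stack([np.ones_like(T), 1.0 / T, l / T, l ** 2 / T])
coef, *_ = np.linalg.lstsq(X, np.log10(t_rupture), rcond=None)

def predict_log10_rupture_time(T_new, stress_new):
    ln = np.log10(stress_new)
    return coef[0] + (coef[1] + coef[2] * ln + coef[3] * ln * ln) / T_new

# Extrapolate to a condition outside the fitted data (with the usual caution).
print(round(predict_log10_rupture_time(850.0, 120.0), 2))
```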

  14. Emissions of sulfur gases from marine and freshwater wetlands of the Florida Everglades: Rates and extrapolation using remote sensing

    Science.gov (United States)

    Hines, Mark E.; Pelletier, Ramona E.; Crill, Patrick M.

    1992-01-01

    Rates of emissions of the biogenic sulfur (S) gases carbonyl sulfide (COS), methyl mercaptan (MSH), dimethyl sulfide (DMS), and carbon disulfide (CS2) were measured in a variety of marine and freshwater wetland habitats in the Florida Everglades during a short period in October, using dynamic chambers, cryotrapping techniques, and gas chromatography. The most rapid emissions, greater than 500 nmol m⁻² h⁻¹, occurred in red mangrove-dominated sites that were adjacent to open seawater and contained numerous crab burrows. Poorly drained red mangrove sites exhibited lower fluxes of approximately 60 nmol m⁻² h⁻¹, which were similar to fluxes from the black mangrove areas that dominated the marine-influenced wetland sites in the Everglades. DMS was the dominant organo-S gas emitted, especially in the freshwater areas. Spectral data from a Landsat Thematic Mapper scene were used to map habitats in the Everglades. Six vegetation categories were delineated using geographic information system software, and S gas emissions were extrapolated for the entire Everglades National Park. The black mangrove-dominated areas accounted for the largest portion of S gas emissions. The large areal extent of the saw grass communities (42 percent) accounted for approximately 24 percent of the total S emissions.
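
    The extrapolation step reduces to an area-weighted sum of per-habitat chamber fluxes over the mapped vegetation classes. A sketch with hypothetical areas and fluxes (only the roughly 500 and 60 nmol m⁻² h⁻¹ magnitudes echo the text; everything else is invented for illustration):

```python
# Upscaling chamber fluxes with mapped habitat areas: total emission is the
# area-weighted sum of per-habitat fluxes. Values are illustrative only.
fluxes_nmol_m2_h = {          # mean S-gas flux per habitat
    "red mangrove (drained)": 500.0,
    "red mangrove (poorly drained)": 60.0,
    "black mangrove": 60.0,
    "saw grass": 20.0,
}
areas_km2 = {                 # mapped area per habitat (hypothetical)
    "red mangrove (drained)": 120.0,
    "red mangrove (poorly drained)": 300.0,
    "black mangrove": 900.0,
    "saw grass": 2500.0,
}

total_nmol_h = sum(fluxes_nmol_m2_h[h] * areas_km2[h] * 1e6   # km^2 -> m^2
                   for h in fluxes_nmol_m2_h)
total_mol_per_day = total_nmol_h * 1e-9 * 24                  # nmol/h -> mol/day
print(round(total_mol_per_day, 1))
```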

  15. Applying of Reliability Techniques and Expert Systems in Management of Radioactive Accidents

    International Nuclear Information System (INIS)

    Aldaihan, S.; Alhbaib, A.; Alrushudi, S.; Karazaitri, C.

    1998-01-01

    Accidents involving radioactive exposure vary widely in nature and scale. This makes such accidents complex situations to be handled by radiation protection agencies or any responsible authority. The situation becomes worse with the introduction of advanced technology of high complexity, which provides the operator with huge amounts of information about the system being operated. This paper discusses the application of reliability techniques in radioactive risk management. The event tree technique from the nuclear field is described, as well as two other techniques from non-nuclear fields, Hazard and Operability studies and Quality Function Deployment. The objective is to show the importance and applicability of these techniques in radiation risk management. Finally, expert systems in the field of accident management are explored and classified according to their applications

  16. Applying NISHIJIN historical textile technique for e-Textile.

    Science.gov (United States)

    Kuroda, Tomohiro; Hirano, Kikuo; Sugimura, Kazushige; Adachi, Satoshi; Igarashi, Hidetsugu; Ueshima, Kazuo; Nakamura, Hideo; Nambu, Masayuki; Doi, Takahiro

    2013-01-01

    The e-Textile is a key technology for continuous ambient health monitoring to increase the quality of life of patients with chronic diseases. The authors introduce techniques of the Japanese historical textile NISHIJIN, which can render almost any pattern from one continuous yarn within the machine-weaving process and is suitable for mixed-flow production. Thus, NISHIJIN is well suited to e-Textile production, which requires rapid prototyping and mass production of very complicated patterns. The authors prototyped and evaluated several vests for taking twelve-lead electrocardiograms. The results show that the prototypes obtain electrocardiograms of sufficient quality for diagnosis.

  17. Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization

    Science.gov (United States)

    More, Sushant N.

    New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification, which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced by basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties. We use a class of unitary transformations called the similarity renormalization group (SRG) transformations to
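
    The infrared extrapolations above rely on the energy converging geometrically as the effective hard-wall radius grows. As a toy illustration (not the paper's actual extrapolation formulas), three successive truncated-basis energies with exactly geometric convergence determine the limit via Aitken's delta-squared formula:

```python
# Illustrative sketch: when a truncated-basis energy converges
# geometrically, E_N = E_inf + a * r**N, three successive values
# determine the limit exactly via Aitken's delta-squared formula.

def aitken_limit(e0, e1, e2):
    """Extrapolate the limit of a geometrically converging sequence."""
    denom = e2 - 2.0 * e1 + e0
    return e2 - (e2 - e1) ** 2 / denom

# Hypothetical ground-state energies (MeV) for three successive basis
# sizes, generated from E_inf = -28.3 with a = 4.0, r = 0.6.
E_inf, a, r = -28.3, 4.0, 0.6
energies = [E_inf + a * r ** n for n in range(3)]

print(aitken_limit(*energies))  # recovers -28.3 for exact geometric data
```

    Real basis-truncation data are only approximately geometric, so in practice such formulas are fitted over several basis sizes rather than applied to three points.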

  18. Applying a nonlinear, pitch-catch, ultrasonic technique for the detection of kissing bonds in friction stir welds.

    Science.gov (United States)

    Delrue, Steven; Tabatabaeipour, Morteza; Hettler, Jan; Van Den Abeele, Koen

    2016-05-01

    Friction stir welding (FSW) is a promising technology for the joining of aluminum alloys and other metallic admixtures that are hard to weld by conventional fusion welding. Although FSW generally provides better fatigue properties than traditional fusion welding methods, fatigue properties are still significantly lower than for the base material. Apart from voids, kissing bonds, for instance in the form of closed cracks propagating along the interface of the stirred and heat-affected zone, are inherent features of the weld and can be considered one of the main causes of the reduced fatigue life of FSW in comparison to the base material. The main problem with kissing bond defects in FSW is that they are currently very difficult to detect using existing NDT methods. Besides, in most cases, the defects are not directly accessible from the exposed surface. Therefore, new techniques capable of detecting small kissing bond flaws need to be introduced. In the present paper, a novel and practical approach is introduced based on a nonlinear, single-sided, ultrasonic technique. The proposed inspection technique uses two single-element transducers, with the first transducer transmitting an ultrasonic signal that focuses the ultrasonic waves at the bottom side of the sample where cracks are most likely to occur. The large amount of energy at the focus activates the kissing bond, resulting in the generation of nonlinear features in the wave propagation. These nonlinear features are then captured by the second transducer operating in pitch-catch mode, and are analyzed, using pulse inversion, to reveal the presence of a defect. The performance of the proposed nonlinear pitch-catch technique is first illustrated using a numerical study of an aluminum sample containing simple, vertically oriented, incipient cracks. Later, the proposed technique is also applied experimentally on a real-life friction stir welded butt joint containing a kissing bond flaw. Copyright © 2016
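
    The role of pulse inversion described above can be sketched in a few lines (an illustrative toy model, not the paper's finite-element simulation): summing the responses to a pulse and its inverted copy cancels the linear part of the propagation, leaving only the even-order nonlinearity that a "clapping" kissing bond introduces.

```python
# Toy pulse-inversion sketch: a defect-free path responds linearly, so
# the responses to a pulse and its inverted copy cancel when summed;
# a kissing bond adds a small even-order (quadratic) term that survives.
import math

def response(x, nonlinear):
    # hypothetical material response: linear term plus, for a kissing
    # bond, a small quadratic (clapping-contact) term
    return x + (0.1 * x * x if nonlinear else 0.0)

pulse = [math.sin(2 * math.pi * t / 32) for t in range(64)]

def pulse_inversion_residual(nonlinear):
    summed = [response(x, nonlinear) + response(-x, nonlinear) for x in pulse]
    return max(abs(s) for s in summed)

print(pulse_inversion_residual(False))  # ~0: linear responses cancel
print(pulse_inversion_residual(True))   # nonzero: defect signature
```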

  19. Applied computing in medicine and health

    CERN Document Server

    Al-Jumeily, Dhiya; Mallucci, Conor; Oliver, Carol

    2015-01-01

    Applied Computing in Medicine and Health is a comprehensive presentation of on-going investigations into current applied computing challenges and advances, with a focus on a particular class of applications, primarily artificial intelligence methods and techniques in medicine and health. Applied computing is the use of practical computer science knowledge to enable use of the latest technology and techniques in a variety of different fields ranging from business to scientific research. One of the most important and relevant areas in applied computing is the use of artificial intelligence (AI) in health and medicine. Artificial intelligence in health and medicine (AIHM) is assuming the challenge of creating and distributing tools that can support medical doctors and specialists in new endeavors. The material included covers a wide variety of interdisciplinary perspectives concerning the theory and practice of applied computing in medicine, human biology, and health care. Particular attention is given to AI-bas...

  20. Learning mediastinoscopy: the need for education, experience and modern techniques--interdependency of the applied technique and surgeon's training level.

    Science.gov (United States)

    Walles, Thorsten; Friedel, Godehard; Stegherr, Tobias; Steger, Volker

    2013-04-01

    Mediastinoscopy represents the gold standard for invasive mediastinal staging. While learning and teaching the surgical technique are challenging due to the limited accessibility of the operation field, both benefited from the implementation of video-assisted techniques. However, it has not yet been established whether video-assisted mediastinoscopy improves mediastinal staging in itself. Retrospective single-centre cohort analysis of 657 mediastinoscopies performed at a specialized tertiary care thoracic surgery unit from 1994 to 2006. The number of specimens obtained per procedure and per lymph node station (2, 4, 7, 8 for mediastinoscopy and 2-9 for open lymphadenectomy), the number of lymph node stations examined, and sensitivity and negative predictive value were calculated, with a focus on the technique employed (video-assisted vs standard technique) and the surgeon's experience. Overall sensitivity was 60%, accuracy was 90% and negative predictive value 88%. With the conventional technique, experience alone improved sensitivity from 49 to 57%, most markedly in the right paratracheal region (from 62 to 82%). With the video-assisted technique, however, experienced surgeons raised sensitivity from 57 to 79%, whereas inexperienced surgeons lowered it from 49 to 33%. We found significant differences concerning (i) the total number of specimens taken, (ii) the number of lymph node stations examined, (iii) the number of specimens taken per lymph node station and (iv) true positive mediastinoscopies. The video-assisted technique can significantly improve the results of mediastinoscopy. A thorough education in the modern video-assisted technique is mandatory for thoracic surgeons until they can fully exhaust its potential.
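
    The reported figures follow from the standard confusion-matrix definitions of sensitivity, accuracy and negative predictive value. The sketch below uses hypothetical counts chosen to reproduce the overall 60% / 90% / 88% values, not the study's raw data:

```python
# Standard staging metrics from a confusion matrix (counts are
# hypothetical, picked only to match the abstract's overall figures).

def staging_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),                # detected / all diseased
        "npv": tn / (tn + fn),                        # true negatives / all test-negatives
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = staging_metrics(tp=60, fp=0, tn=300, fn=40)
print(m)  # sensitivity 0.60, npv ~0.88, accuracy 0.90
```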

  1. Impact of entrainment and impingement on fish populations in the Hudson River estuary. Volume III. An analysis of the validity of the utilities' stock-recruitment curve-fitting exercise and prior estimation of beta technique. Environmental Sciences Division publication No. 1792

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.; Kirk, B.L.

    1982-03-01

    This report addresses the validity of the utilities' use of the Ricker stock-recruitment model to extrapolate the combined entrainment-impingement losses of young fish to reductions in the equilibrium population size of adult fish. In our testimony, a methodology was developed and applied to address a single fundamental question: if the Ricker model really did apply to the Hudson River striped bass population, could the utilities' curve-fitting estimates of the parameter alpha (which controls the impact) be considered reliable? In addition, an analysis is included of the efficacy of an alternative means of estimating alpha, termed the technique of prior estimation of beta (used by the utilities in a report prepared for regulatory hearings on the Cornwall Pumped Storage Project). This validation methodology should also be useful in evaluating inferences drawn in the literature from fits of stock-recruitment models to data obtained from other fish stocks.
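
    The Ricker model under discussion has the form R = alpha * S * exp(-beta * S), and the parameter alpha controls how strongly the population compensates for losses. A minimal sketch with illustrative parameter values (not the testimony's estimates) showing the equilibrium stock size implied by alpha and beta:

```python
# Ricker stock-recruitment sketch: at equilibrium, recruitment R
# replaces the stock one-for-one, giving S* = ln(alpha) / beta.
import math

def ricker(S, alpha, beta):
    return alpha * S * math.exp(-beta * S)

alpha, beta = 4.0, 0.001          # hypothetical, illustrative values
S_star = math.log(alpha) / beta   # equilibrium: ricker(S*) = S*
print(S_star, ricker(S_star, alpha, beta))
```

    Because very different (alpha, beta) pairs can fit noisy recruitment data almost equally well, the reliability of a curve-fitted alpha is exactly the kind of question the report examines.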

  2. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with dimensions similar to those of a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated from the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  3. Water spray cooling technique applied on a photovoltaic panel: The performance response

    International Nuclear Information System (INIS)

    Nižetić, S.; Čoko, D.; Yadav, A.; Grubišić-Čabo, F.

    2016-01-01

    Highlights: • An experimental study was conducted on a monocrystalline photovoltaic (PV) panel. • A water spray cooling technique was implemented to determine the PV panel response. • The experimental results showed a favorable cooling effect on panel performance. • The feasibility of the water spray cooling technique was also proven. - Abstract: This paper presents an alternative cooling technique for photovoltaic (PV) panels that applies a water spray over the panel surfaces. The technique is alternative in the sense that both sides of the PV panel were cooled simultaneously, in order to investigate the total water spray cooling effect on PV panel performance at peak solar irradiation levels. A specific experimental setup was elaborated in detail and the developed cooling system for the PV panel was tested in a geographical location with a typical Mediterranean climate. The experimental results show that it is possible to achieve a maximal total increase of 16.3% (effective 7.7%) in electric power output and a total increase of 14.1% (effective 5.9%) in PV panel electrical efficiency by using the proposed cooling technique at peak solar irradiation. Furthermore, it was also possible to decrease the panel temperature from an average of 54 °C (non-cooled PV panel) to 24 °C in the case of simultaneous front and backside PV panel cooling. Economic feasibility was also determined for the proposed water spray cooling technique; a further advantage of the analyzed technique is the self-cleaning effect of the water on the PV panel's surface, which additionally boosts the average delivered electricity.
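
    A plausible reading of the gap between the quoted total and effective gains is that the effective figure nets out the consumption of the water-delivery system itself; that interpretation, and the reference output below, are our assumptions for this sketch, not figures stated by the study.

```python
# Rough arithmetic on the quoted gains (reference panel output is a
# hypothetical 100 W; the 16.3% / 7.7% figures are from the abstract).

P_ref = 100.0            # W, assumed non-cooled panel output
total_gain = 0.163       # gross increase with spray cooling
effective_gain = 0.077   # increase net of cooling-system overhead (assumed meaning)

P_cooled_gross = P_ref * (1 + total_gain)
P_cooled_net = P_ref * (1 + effective_gain)
overhead = P_cooled_gross - P_cooled_net
print(overhead)  # ~8.6 W attributed to the cooling system in this sketch
```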

  4. Study for applying microwave power saturation technique on fingernail/EPR dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byeong Ryong; Choi, Hoon; Nam, Hyun Ill; Lee, Byung Ill [Radiation Health Research Institute, Seoul (Korea, Republic of)

    2012-10-15

    There is growing recognition worldwide of the need to develop effective dosimetry methods to assess unexpected exposure to radiation in the event of a large-scale event. One physically based dosimetry method, electron paramagnetic resonance (EPR) spectroscopy, has been applied to perform retrospective radiation dosimetry using extracted samples of tooth enamel and nail (fingernail and toenail) following radiation accidents and exposures resulting from weapon use, testing, and production. Human fingernails are composed largely of keratin, which consists of α-helical peptide chains that are twisted into a left-handed coil and strengthened by disulphide cross-links. Ionizing radiation generates free radicals in the keratin matrix, and these radicals are stable over a relatively long period (days to weeks). Most importantly, the number of radicals is proportional to the magnitude of the dose over a wide dose range (0-30 Gy). Also, dose can be estimated at four different locations on the human body, providing information on the homogeneity of the radiation exposure, and the results from EPR nail dosimetry are immediately available. However, a relatively large background signal (BKS), arising from the mechanically induced signal (MIS) created when the fingernail is cut, normally overlaps with the radiation-induced signal (RIS) and makes it difficult to estimate the dose from an accidental exposure accurately. As a result, estimation methods based on a dose response curve have had difficulty ensuring reliability below 5 Gy. In this study, in order to overcome these disadvantages, we measured the responses of the RIS and BKS (MIS) as the microwave power level was varied, and investigated the applicability of the power saturation technique at low doses.

  5. Extrapolating cetacean densities to quantitatively assess human impacts on populations in the high seas.

    Science.gov (United States)

    Mannocci, Laura; Roberts, Jason J; Miller, David L; Halpin, Patrick N

    2017-06-01

    As human activities expand beyond national jurisdictions to the high seas, there is an increasing need to consider anthropogenic impacts to species inhabiting these waters. The current scarcity of scientific observations of cetaceans in the high seas impedes the assessment of population-level impacts of these activities. We developed plausible density estimates to facilitate a quantitative assessment of anthropogenic impacts on cetacean populations in these waters. Our study region extended from a well-surveyed region within the U.S. Exclusive Economic Zone into a large region of the western North Atlantic sparsely surveyed for cetaceans. We modeled densities of 15 cetacean taxa with available line transect survey data and habitat covariates and extrapolated predictions to sparsely surveyed regions. We formulated models to reduce the extent of extrapolation beyond covariate ranges, and constrained them to model simple and generalizable relationships. To evaluate confidence in the predictions, we mapped where predictions were made outside sampled covariate ranges, examined alternate models, and compared predicted densities with maps of sightings from sources that could not be integrated into our models. Confidence levels in model results depended on the taxon and geographic area and highlighted the need for additional surveying in environmentally distinct areas. With application of necessary caution, our density estimates can inform management needs in the high seas, such as the quantification of potential cetacean interactions with military training exercises, shipping, fisheries, and deep-sea mining and be used to delineate areas of special biological significance in international waters. Our approach is generally applicable to other marine taxa and geographic regions for which management will be implemented but data are sparse. © 2016 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.
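
    The safeguard of mapping where predictions fall outside sampled covariate ranges can be sketched as a simple per-covariate range check (covariate names and values below are hypothetical):

```python
# Flag prediction locations whose habitat covariates fall outside the
# ranges sampled by the surveys, i.e. where the model extrapolates.

def covariate_ranges(samples):
    keys = samples[0].keys()
    return {k: (min(s[k] for s in samples), max(s[k] for s in samples))
            for k in keys}

def outside_sampled_range(point, ranges):
    return [k for k, (lo, hi) in ranges.items()
            if not lo <= point[k] <= hi]

# two hypothetical survey records: sea-surface temperature (°C), depth (m)
surveyed = [{"sst": 12.0, "depth": 200.0}, {"sst": 24.0, "depth": 4500.0}]
ranges = covariate_ranges(surveyed)

print(outside_sampled_range({"sst": 18.0, "depth": 1000.0}, ranges))  # []
print(outside_sampled_range({"sst": 28.0, "depth": 5200.0}, ranges))  # ['sst', 'depth']
```

    Production tools (e.g. the density surface modelling literature) use more refined multivariate checks, but the per-covariate hull above captures the basic idea of mapping where predictions require extrapolation.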

  6. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators; Modelos de regresion en la determinacion de la dosis absorbida con camara de extrapolacion para aplicadores oftalmologicos

    Energy Technology Data Exchange (ETDEWEB)

    Alvarez R, J T; Morales P, R

    1992-06-15

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmologic applicators (90Sr/90Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode separation. When the slope of the extrapolation curve is estimated using a simple linear regression model, the dose values are underestimated by 17.7 to 20.4 percent relative to estimates obtained with a second-degree polynomial regression model; at the same time, the standard error improves by up to 50% for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. In conclusion, it can be inferred that in experimental arrangements where the source is in contact with the extrapolation chamber, the linear regression model should be replaced by the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
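
    The effect described above can be reproduced with synthetic numbers (not the paper's measurements): when the chamber's current-versus-gap curve has curvature, a straight-line fit biases the slope extrapolated to zero electrode separation, while a second-degree polynomial recovers it.

```python
# Extrapolation-chamber sketch: the dose is proportional to the slope
# of current vs electrode gap at zero gap. Synthetic noiseless data
# with curvature show the linear-fit bias the abstract describes.
import numpy as np

gaps = np.linspace(0.5, 3.0, 6)                    # mm, electrode separations
true_slope, curvature = 10.0, -1.2                 # hypothetical values
current = true_slope * gaps + curvature * gaps**2  # pA, exact quadratic

lin = np.polyfit(gaps, current, 1)   # straight-line fit: slope = lin[0]
quad = np.polyfit(gaps, current, 2)  # quadratic fit: slope at zero gap = quad[1]

print(lin[0], quad[1])  # linear fit is biased; quadratic recovers 10.0
```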

  7. Measurement of the viability of stored red cells by the single-isotope technique using 51Cr. Analysis of validity

    International Nuclear Information System (INIS)

    Beutler, E.; West, C.

    1984-01-01

    A single-isotope 51Cr method is often used to evaluate the viability of stored red cells. In this technique, the red cell mass is measured by back-extrapolation to time zero (t0) of the radioactivity of the blood between 5 and 20 minutes after infusion of the sample. If there is early destruction of stored cells, this method provides an overestimate of the red cell mass and, hence, of the viability of the stored cells. Freshly drawn red cells from normal donors were labeled with 99mTc, and cells from the same donor which had been stored in citrate-phosphate-dextrose-adenine-one (CPDA-1) for periods ranging from 7 to 49 days were labeled with 51Cr. A comparison of the "true red cell mass" as determined with 99mTc with the back-extrapolated red cell mass from stored 51Cr-labeled cells has made it possible to define the magnitude of error introduced by early loss of red cells. The overestimation of red cell mass and viability was diminished if only the 51Cr radioactivity between 5 and 15 minutes after infusion was used in back-extrapolating to t0. The degree of overestimation of red cell mass was greatest when the red cell viability had declined to very low levels. However, in the entire range of 10 to 80 percent viability, the overestimate of viability was usually less than 4 percent. The overestimate of viability proved to be quite similar for all samples and may be taken into account when using the single-isotope technique for measurement of red cell viability.
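
    The back-extrapolation step can be sketched as follows (hypothetical counts, not the study's data): the blood radioactivity sampled 5-20 minutes after infusion is fitted with a straight line and read off at t0, and the red cell volume follows from the injected activity divided by that intercept.

```python
# Back-extrapolation to time zero with an ordinary least-squares line;
# all counts and the injected activity below are hypothetical.

def linear_intercept(times, counts):
    n = len(times)
    mt = sum(times) / n
    mc = sum(counts) / n
    slope = (sum((t - mt) * (c - mc) for t, c in zip(times, counts))
             / sum((t - mt) ** 2 for t in times))
    return mc - slope * mt          # extrapolated activity at t = 0

times = [5, 10, 15, 20]             # minutes post-infusion
counts = [980, 960, 940, 920]       # cpm/mL, hypothetical samples

injected = 2_000_000                # cpm injected
a0 = linear_intercept(times, counts)
print(a0, injected / a0)            # t0 concentration and red cell volume (mL)
```

    The abstract's point is visible in this framing: if labeled cells are destroyed early, the measured concentrations fall faster than the linear trend, inflating the t0 intercept estimate's accuracy only apparently and overestimating the red cell mass.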

  8. FEEDBACK LINEARISATION APPLIED ON A HYDRAULIC

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Hansen, Michael Rygaard; Pedersen, Henrik C.

    2005-01-01

    Generally most hydraulic systems are intrinsically non-linear, which is why applying linear control techniques typically results in conservatively dimensioned controllers to obtain stable performance. Non-linear control techniques have the potential of overcoming these problems, and in this paper the focus is on developing and applying several different feedback linearisation (FL) controllers to the individual servo actuators in a hydraulically driven servo robot to evaluate and compare their possibilities and limitations. This is done based on both simulation and experimental results.
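
    The feedback linearisation idea can be sketched in its generic textbook form (a scalar toy system, not the paper's hydraulic servo model): for x' = f(x) + g(x)·u with g(x) != 0, choosing u = (v - f(x)) / g(x) cancels the nonlinearity so the closed loop behaves as the linear system x' = v.

```python
# Feedback linearisation on a hypothetical scalar nonlinear system,
# simulated with forward Euler; a simple proportional outer loop then
# drives the (now linear) closed loop to a reference.

def f(x):
    return -x ** 3          # hypothetical nonlinear drift

def g(x):
    return 2.0 + x ** 2     # hypothetical, everywhere-nonzero input gain

def fl_control(x, v):
    return (v - f(x)) / g(x)   # cancels f and g: closed loop is x' = v

x, x_ref, k, dt = 2.0, 0.5, 4.0, 0.001
for _ in range(5000):          # 5 s of simulated time
    v = -k * (x - x_ref)       # linear outer-loop law
    x += dt * (f(x) + g(x) * fl_control(x, v))

print(x)  # converges close to the reference 0.5
```

    In practice, as the abstract notes, FL performance hinges on how well f and g model the real actuator, which is why simulation and experiment are compared.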

  9. Applying value stream mapping techniques to eliminate non-value-added waste for the procurement of endovascular stents

    International Nuclear Information System (INIS)

    Teichgräber, Ulf K.; Bucourt, Maximilian de

    2012-01-01

    Objectives: To eliminate non-value-adding (NVA) waste for the procurement of endovascular stents in interventional radiology services by applying value stream mapping (VSM). Materials and methods: The Lean manufacturing technique was used to analyze the process of material and information flow currently required to direct endovascular stents from external suppliers to patients. Based on a decision point analysis for the procurement of stents in the hospital, a current state VSM was drawn. After assessment of the current state VSM and progressive elimination of unnecessary NVA waste, a future state VSM was drawn. Results: The current state VSM demonstrated that, of the 13 processes for the procurement of stents, only 2 were value-adding. Of the NVA processes, 5 were unnecessary activities that could be eliminated. The decision point analysis demonstrated that the procurement of stents was mainly a forecast-driven push system. The future state VSM applies a pull inventory control system that triggers the movement of a unit after withdrawal, by using a consignment stock. Conclusion: VSM is a visualization tool for the supply chain and value stream, based on the Toyota Production System, and greatly assists in successfully implementing a Lean system.

  10. Positron Plasma Control Techniques Applied to Studies of Cold Antihydrogen

    CERN Document Server

    Funakoshi, Ryo

    2003-01-01

    In the year 2002, two experiments at CERN succeeded in producing cold antihydrogen atoms, first ATHENA and subsequently ATRAP. Following on these results, it is now feasible to use antihydrogen to study the properties of antimatter. In the ATHENA experiment, the cold antihydrogen atoms are produced by mixing large amounts of antiprotons and positrons in a nested Penning trap. The complicated behaviors of the charged particles are controlled and monitored by plasma manipulation techniques. The antihydrogen events are studied using position sensitive detectors and the evidence of production of antihydrogen atoms is separated out with the help of analysis software. This thesis covers the first production of cold antihydrogen in the first section as well as the further studies of cold antihydrogen performed by using the plasma control techniques in the second section.

  11. Simulation-extrapolation method to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates, 1950-2003

    Energy Technology Data Exchange (ETDEWEB)

    Allodji, Rodrigue S.; Schwartz, Boris; Diallo, Ibrahima; Vathaire, Florent de [Gustave Roussy B2M, Radiation Epidemiology Group/CESP - Unit 1018 INSERM, Villejuif Cedex (France); Univ. Paris-Sud, Villejuif (France); Agbovon, Cesaire [Pierre and Vacances - Center Parcs Group, L' artois - Espace Pont de Flandre, Paris Cedex 19 (France); Laurier, Dominique [Institut de Radioprotection et de Surete Nucleaire (IRSN), DRPH, SRBE, Laboratoire d' epidemiologie, BP17, Fontenay-aux-Roses Cedex (France)

    2015-08-15

    Analyses of the Life Span Study (LSS) of Japanese atomic bomb survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that using the SIMEX method, the ERR/Gy is increased by about 29% for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR per 10⁴ person-years at 1 Gy (the linear term) is decreased by about 8%, while the corrected quadratic term (EAR per 10⁴ person-years per Gy²) is increased by about 65% for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences were probably due to the fact that with the RCAL method the dosimetric data were only partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and make the development of radiation protection standards more reliable. (orig.)
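
    The SIMEX procedure itself can be illustrated on a toy linear regression (not the LSS analysis): measurement error attenuates the fitted slope; adding extra simulated noise at levels lambda and extrapolating the fitted slopes back to lambda = -1 approximately undoes the attenuation.

```python
# Toy SIMEX: one simulation replicate per lambda (real SIMEX averages
# several); all parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n, beta, sig_u = 200_000, 2.0, 0.7
x = rng.normal(0.0, 1.0, n)                  # true exposures
w = x + rng.normal(0.0, sig_u, n)            # exposures with classical error
y = beta * x + rng.normal(0.0, 0.1, n)

def slope(pred, resp):
    return np.polyfit(pred, resp, 1)[0]

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [slope(w + rng.normal(0.0, np.sqrt(lam) * sig_u, n), y)
          for lam in lams]                   # total error var: (1 + lam) * sig_u**2

naive = slopes[0]
simex = np.polyval(np.polyfit(lams, slopes, 2), -1.0)  # extrapolate to lam = -1
print(naive, simex)   # SIMEX estimate lies closer to the true slope 2.0
```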

  12. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    CERN Document Server

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located in between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of helium inlet and outlet welds, that are submitted to the most stringent quality levels of imperfections according to standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by Ultrasonic Testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozens of helium inlet qualification samples. The extensive use of CT techniques allowed a significant progress in the ...

  13. Nuclear techniques applied to provenance and technological studies of Renaissance majolica roundels from Portuguese museums attributed to della Robbia Italian workshop

    International Nuclear Information System (INIS)

    Dias, M.I.; Prudencio, M.I.; Kasztovszky, Zsolt; Maroti, Boglarka; Harsanyi, Ildiko

    2017-01-01

    Artistic and historical examination of high-quality glazed terracotta sculptures displayed in various Portuguese museums points to their production in the della Robbia workshop of Florence (Italy). A multi-technique analytical approach is applied for the first time to these sculptures, aiming to confirm their origin. Materials were analyzed using Instrumental Neutron Activation Analysis, Prompt Gamma Activation Analysis and X-ray Diffraction. The compositional results are similar to other della Robbia sculptures, suggesting a common origin for the raw material, which was identified as a carbonate-rich marly clay of marine origin. The applied firing temperature was found to be around 900 °C. The differences found within each sculpture are explained by the production technique of assembling separate parts to produce these huge sculptures, and by the heterogeneity of the clay pit. (author)

  14. Geological-geophysical techniques applied to urban planning in karst hazardous areas. Case study of Zaragoza, NE Spain

    Science.gov (United States)

    Pueyo Anchuela, O.; Soriano, A.; Casas Sainz, A.; Pocoví Juan, A.

    2009-12-01

    Industrial and urban growth must, in some settings, deal with geological hazards. In the last 50 years, the city of Zaragoza (NE Spain) has expanded its urbanized area at a rate several orders of magnitude higher than expected from its population increase. This fast growth has affected several areas around the city that were not previously used for construction. Maps of the Zaragoza city area from the end of the XIXth century and the beginning of the XXth reveal the presence of karst hazards in several zones that can also be observed in more modern data, such as aerial photographs taken from 1927 to the present. Urban and industrial development has covered many of these hazardous zones, even though the potential risks were known. The origin of the karst problems is related to the dissolution of evaporites (mainly gypsum, glauberite and halite) that form the Miocene substratum of the Zaragoza area, underlying the Quaternary terraces and pediments related to the Ebro River and its tributaries. Historical data show the persistence of subsidence foci during long periods of time, while recently urbanized areas do not share this stability, with increases in activity and/or affection radius observed within short periods of time after building over. These problems can be related to two factors: (i) urban development over hazardous areas can increase karst activity, and (ii) the affection radius is not properly established with the commonly applied methods. One way to develop such detailed maps is through a geophysical approach. The applied geophysical routine, dependent on the characteristics of the surveyed area, is based on potential-field geophysical techniques (magnetometry and gravimetry) and others related to the application of induced fields (EM and GPR). The obtained results can be related to more straightforward criteria, such as the detection of cavities in the subsoil, and to indirect indicators related to the long-term activity of the subsidence areas.

  15. Applied predictive analytics principles and techniques for the professional data analyst

    CERN Document Server

    Abbott, Dean

    2014-01-01

    Learn the art and science of predictive analytics - techniques that get results Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive mode

  16. Applied techniques for high bandwidth data transfers across wide area networks

    International Nuclear Information System (INIS)

    Lee, J.; Gunter, D.; Tierney, B.; Allcock, B.; Bester, J.; Bresnahan, J.; Tuecke, S.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From their work developing a scalable distributed network cache, the authors have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). The authors discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. The authors describe results from the Supercomputing 2000 conference
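
    One of the standard software techniques for high-throughput WAN transfers is sizing socket buffers to the bandwidth-delay product (BDP), so enough data can be in flight to keep a long fat pipe full. The sketch below (with hypothetical link figures, not the paper's measurements) computes the product and requests matching buffers:

```python
# Bandwidth-delay product sizing for TCP socket buffers; the operating
# system may silently clamp the requested sizes to its configured limits.
import socket

def bdp_bytes(bandwidth_bps, rtt_s):
    """Bytes in flight needed to keep a link of this bandwidth busy."""
    return int(bandwidth_bps / 8 * rtt_s)

buf = bdp_bytes(1e9, 0.05)          # hypothetical 1 Gbit/s link, 50 ms RTT
print(buf)                          # 6_250_000 bytes (~6 MB)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, buf)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, buf)
s.close()
```

    Tools such as GridFTP combine this with parallel TCP streams when a single stream cannot reach the buffer sizes or per-flow fairness needed to fill the link.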

  17. Applying Data-mining techniques to study drought periods in Spain

    Science.gov (United States)

    Belda, F.; Penades, M. C.

    2010-09-01

    Data-mining is a technique that can be used to interact with large databases and to help discover relations between parameters by extracting information from massive and multiple data archives. Drought affects many economic and social sectors, from agriculture to transportation, urban water deficits and the development of modern industries. Given these problems and the geographical and temporal distribution of drought, it is difficult to find a single definition of drought. Improving the knowledge of climatic indices is necessary to reduce the impacts of drought and to facilitate quick decisions regarding this problem. The main objective is to analyze drought periods from 1950 to 2009 in Spain. We use several kinds of information, with different formats, sources and transmission modes. We use satellite-based vegetation indices and dryness indices for several temporal periods. We use daily and monthly precipitation and temperature data, and soil moisture data from a numerical weather model. We mainly calculate the Standardized Precipitation Index (SPI), which has been used widely in the literature. We use OLAP-Mining techniques to discover association rules between remote-sensing, numerical-weather-model and climatic indices. Time-series Data-Mining techniques organize data as a sequence of events, with each event having a time of recurrence, to cluster the data into groups of records with similar characteristics. A prior climatological classification is necessary if we want to study drought periods over all of Spain.
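
    A drought index in the spirit of the SPI can be sketched as follows, with one important simplification: the real SPI fits a gamma distribution to accumulated precipitation before mapping it to a standard normal, whereas this illustration only standardizes against a baseline. The monthly totals are hypothetical.

```python
# Simplified standardized precipitation anomaly (not the full SPI,
# which uses a fitted gamma distribution); values in mm per month.
import math

def standardized_index(baseline, value):
    n = len(baseline)
    mean = sum(baseline) / n
    var = sum((p - mean) ** 2 for p in baseline) / (n - 1)
    return (value - mean) / math.sqrt(var)

baseline = [42.0, 55.0, 38.0, 61.0, 47.0, 50.0, 44.0, 59.0]
print(standardized_index(baseline, 18.0))  # strongly negative: a dry month
```

    As with the SPI, values near zero are normal conditions and increasingly negative values indicate increasingly severe drought.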

  18. Functional reasoning, explanation and analysis: Part 1: a survey on theories, techniques and applied systems. Part 2: qualitative function formation technique

    International Nuclear Information System (INIS)

    Far, B.H.

    1992-01-01

    Functional Reasoning (FR) enables people to derive the purpose of objects and explain their functions. JAERI's 'Human Acts Simulation Program (HASP)', started in 1987, has the goal of developing the underlying technologies for intelligent robots by imitating the intelligent behavior of humans. FR is considered a useful reasoning method in HASP and is applied to understanding the function of tools and objects in the Toolbox Project. In this report, first, the results of diverse FR research within a variety of disciplines are reviewed and the common core and basic problems are identified. Then the qualitative function formation (QFF) technique is introduced. Some novel points are: extending common qualitative models to include interactions and timing of events by defining temporal and dependency constraints, and binding this with conventional qualitative simulation. Function concepts are defined as interpretations of either a persistence or an order in the sequence of states, using the trace of the qualitative state vector derived by qualitative simulation on the extended qualitative model. This offers solutions to some of the FR problems and leads to a method for generalization and comparison of functions of different objects. (author) 85 refs

  19. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin

    2015-05-05

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert-Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.
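
    The core of the causality fix — linear prediction weights fitted on damped and oscillating exponentials — can be illustrated generically. This is a simplified sketch with made-up pole locations, time step and window size, not the authors' scheme.

```python
import numpy as np

def train_extrapolator(n_past, dt, sigmas, omegas):
    """Fit weights c so that f(0) ~ c . [f(-dt), ..., f(-n_past*dt)],
    trained on exponentials exp(sigma*t)*{cos,sin}(omega*t) whose poles
    sigma + i*omega lie on the left half of the complex frequency plane."""
    t_past = -dt * np.arange(1, n_past + 1)
    rows, targets = [], []
    for s in sigmas:
        for w in omegas:
            for f in (np.cos, np.sin):
                rows.append(np.exp(s * t_past) * f(w * t_past))
                targets.append(f(0.0))            # value at the predicted time
    c, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return c

c = train_extrapolator(n_past=6, dt=0.05,
                       sigmas=np.linspace(-2.0, 0.0, 8),
                       omegas=np.linspace(0.0, 10.0, 8))

# predict the "future" sample of a decaying oscillation from its past samples
t_past = -0.05 * np.arange(1, 7)
pred = c @ (np.exp(-1.0 * t_past) * np.cos(3.0 * t_past))   # true value: f(0) = 1
```

    Because the training set spans decaying modes rather than, say, polynomials, the fitted weights stay well behaved when the signal being extrapolated is itself a sum of decaying and oscillating modes — the stabilizing observation the abstract describes.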

  20. A Stable Marching on-in-time Scheme for Solving the Time Domain Electric Field Volume Integral Equation on High-contrast Scatterers

    KAUST Repository

    Sayed, Sadeed Bin; Ulku, Huseyin; Bagci, Hakan

    2015-01-01

    A time domain electric field volume integral equation (TD-EFVIE) solver is proposed for characterizing transient electromagnetic wave interactions on high-contrast dielectric scatterers. The TD-EFVIE is discretized using the Schaubert-Wilton-Glisson (SWG) and approximate prolate spherical wave (APSW) functions in space and time, respectively. The resulting system of equations cannot be solved by a straightforward application of the marching on-in-time (MOT) scheme since the two-sided APSW interpolation functions require the knowledge of unknown “future” field samples during time marching. Causality of the MOT scheme is restored using an extrapolation technique that predicts the future samples from known “past” ones. Unlike the extrapolation techniques developed for MOT schemes that are used in solving time domain surface integral equations, this scheme trains the extrapolation coefficients using samples of exponentials with exponents on the complex frequency plane. This increases the stability of the MOT-TD-EFVIE solver significantly, since the temporal behavior of decaying and oscillating electromagnetic modes induced inside the scatterers is very accurately taken into account by this new extrapolation scheme. Numerical results demonstrate that the proposed MOT solver maintains its stability even when applied to analyzing wave interactions on high-contrast scatterers.

  1. Analysis of Arbitrary Reflector Antennas Applying the Geometrical Theory of Diffraction Together with the Master Points Technique

    Directory of Open Access Journals (Sweden)

    María Jesús Algar

    2013-01-01

    Full Text Available An efficient approach for the analysis of arbitrarily fed, surface-conformal reflector antennas is presented. The near field at a large number of sampling points in the aperture of the reflector is obtained by applying the Geometrical Theory of Diffraction (GTD). A new technique named Master Points has been developed to reduce the complexity of the ray-tracing computations. The combination of GTD and Master Points reduces the time requirements of this kind of analysis. To validate the new approach, several reflectors, and the effects on the radiation pattern caused by shifting the feed and introducing different obstacles, have been considered for both simple and complex geometries. The results of these analyses have been compared with Method of Moments (MoM) results.

  2. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  3. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  4. An integrated spectroscopic approach for the non-invasive study of modern art materials and techniques

    Science.gov (United States)

    Rosi, F.; Miliani, C.; Clementi, C.; Kahrim, K.; Presciutti, F.; Vagnini, M.; Manuali, V.; Daveri, A.; Cartechini, L.; Brunetti, B. G.; Sgamellotti, A.

    2010-09-01

    A non-invasive study has been carried out on 18 paintings by Alberto Burri (1915-1995), one of Italy’s most important contemporary painters. The study aims to demonstrate the appropriate and suitable use of portable non-invasive instrumentation for the characterization of materials and techniques found in works dating from 1948 to 1975 belonging to the Albizzini Collection. Sampling of any kind has been forbidden, in order to maintain the integrity of the paintings. Furthermore, the material heterogeneity of each single artwork could potentially result in a poorly representative sampling campaign. Therefore, a non-invasive and in situ analytical approach has been deemed mandatory, notwithstanding the complexity of modern materials and challenging data interpretation. It is the non-invasive nature of the study that has allowed for the acquisition of vast spectral data (a total of about 650 spectra including XRF, mid and near FTIR, micro-Raman and UV-vis absorption and emission spectroscopies). In order to better handle and to extrapolate the most meaningful information from these data, a statistical multivariate analysis, namely principal component analysis (PCA), has been applied to the spectral results. In particular, the possibility of combining elemental and molecular information has been explored by uniting XRF and infrared spectra in one PCA dataset. The combination of complementary spectroscopic techniques has allowed for the characterization of both inorganic and organic pigments, extenders, fillers, and binders employed by Alberto Burri.
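
    The data-fusion step described above — combining elemental and molecular spectra in one PCA dataset — can be sketched generically: concatenate the two blocks per measurement spot, autoscale so neither block dominates, and decompose by SVD. Shapes and data below are synthetic stand-ins, not the Burri measurements.

```python
import numpy as np

rng = np.random.default_rng(4)
n_spots = 40
xrf = rng.normal(size=(n_spots, 120))     # 120 XRF channels per spot (synthetic)
ftir = rng.normal(size=(n_spots, 300))    # 300 FTIR wavenumbers per spot (synthetic)

X = np.hstack([xrf, ftir])                # one fused row per measurement spot
X = (X - X.mean(0)) / X.std(0)            # autoscale: unit variance per variable
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * S                            # spot coordinates on the principal components
explained = S**2 / (S**2).sum()           # fraction of variance per component
```

    Plotting the first few columns of `scores` then groups spots with similar combined elemental/molecular signatures, which is how PCA helps tease pigments and binders apart in such datasets.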

  5. A reference data set for validating vapor pressure measurement techniques: homologous series of polyethylene glycols

    Science.gov (United States)

    Krieger, Ulrich K.; Siegrist, Franziska; Marcolli, Claudia; Emanuelsson, Eva U.; Gøbel, Freya M.; Bilde, Merete; Marsh, Aleksandra; Reid, Jonathan P.; Huisman, Andrew J.; Riipinen, Ilona; Hyttinen, Noora; Myllys, Nanna; Kurtén, Theo; Bannan, Thomas; Percival, Carl J.; Topping, David

    2018-01-01

    To predict atmospheric partitioning of organic compounds between gas and aerosol particle phase based on explicit models for gas phase chemistry, saturation vapor pressures of the compounds need to be estimated. Estimation methods based on functional group contributions require training sets of compounds with well-established saturation vapor pressures. However, vapor pressures of semivolatile and low-volatility organic molecules at atmospheric temperatures reported in the literature often differ by several orders of magnitude between measurement techniques. These discrepancies exceed the stated uncertainty of each technique which is generally reported to be smaller than a factor of 2. At present, there is no general reference technique for measuring saturation vapor pressures of atmospherically relevant compounds with low vapor pressures at atmospheric temperatures. To address this problem, we measured vapor pressures with different techniques over a wide temperature range for intercomparison and to establish a reliable training set. We determined saturation vapor pressures for the homologous series of polyethylene glycols (H-(O-CH2-CH2)n-OH) for n = 3 to n = 8 ranging in vapor pressure at 298 K from 10⁻⁷ to 5×10⁻² Pa and compare them with quantum chemistry calculations. Such a homologous series provides a reference set that covers several orders of magnitude in saturation vapor pressure, allowing a critical assessment of the lower limits of detection of vapor pressures for the different techniques as well as permitting the identification of potential sources of systematic error. Also, internal consistency within the series allows outlying data to be rejected more easily. Most of the measured vapor pressures agreed within the stated uncertainty range. Deviations mostly occurred for vapor pressure values approaching the lower detection limit of a technique. The good agreement between the measurement techniques (some of which are sensitive to the mass
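
    A standard way such measurements are brought to a common temperature is to fit the integrated Clausius–Clapeyron relation ln p = A − B/T over the measured range and extrapolate to 298 K. The sketch below uses synthetic numbers generated from that law, not the paper's data.

```python
import numpy as np

# synthetic measurements generated from ln p = A - B/T with A = 30, B = 12000 K
T = np.array([330.0, 340.0, 350.0, 360.0])             # measurement temperatures, K
p = np.exp(30.0 - 12000.0 / T)                         # vapor pressures, Pa

slope, intercept = np.polyfit(1.0 / T, np.log(p), 1)   # slope = -B = -dHvap / R
p_298 = np.exp(intercept + slope / 298.0)              # extrapolated to 298 K
```

    The fitted slope also yields the vaporization enthalpy (ΔH_vap = −slope·R ≈ 100 kJ/mol for these synthetic numbers), so consistency of the fitted enthalpy across techniques is itself a useful cross-check when intercomparing instruments.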

  6. Applying the sterile insect technique to the control of insect pests

    International Nuclear Information System (INIS)

    LaChance, L.E.; Klassen, W.

    1991-01-01

    The sterile insect technique involves the mass-rearing of insects, which are sterilized by gamma rays from a 60Co source before being released in a controlled fashion into nature. Matings between the sterile insects released and native insects produce no progeny, and so if enough of these matings occur the pest population can be controlled or even eradicated. A modification of the technique, especially suitable for the suppression of moths and butterflies, is called the F1, or inherited sterility, method. In this, lower radiation doses are used such that the released males are only partially sterile (30-60%) and the females are fully sterile. When released males mate with native females some progeny are produced, but they are completely sterile. Thus, full expression of the sterility is delayed by one generation. This article describes the use of the sterile insect technique in controlling the screwworm fly, the tsetse fly, the medfly, the pink bollworm and the melon fly, and of the F1 sterility method in the eradication of local gypsy moth infestations. 18 refs, 5 figs, 1 tab

  7. Enhanced nonlinear iterative techniques applied to a nonequilibrium plasma flow

    International Nuclear Information System (INIS)

    Knoll, D.A.

    1998-01-01

    The authors study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. They use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations, on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. They investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, mesh sequencing, and a pseudotransient continuation technique is used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method with incomplete lower-upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a mesh sequencing implementation provides significant CPU savings for fine grid calculations. Performance comparisons of modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented
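
    The matrix-free Newton–Krylov idea scales down to a few lines: the Krylov solver only needs Jacobian–vector products, which are approximated from finite differences of the residual, so the Jacobian is never formed. The sketch below solves a much simpler 1-D reaction–diffusion problem than the paper's plasma system, using SciPy's `newton_krylov` (assumed available).

```python
import numpy as np
from scipy.optimize import newton_krylov

# steady Bratu-type problem u'' + exp(u) = 0 on (0, 1), u(0) = u(1) = 0,
# discretized with second-order finite differences
n = 64
h = 1.0 / (n + 1)

def residual(u):
    up = np.concatenate(([0.0], u, [0.0]))    # Dirichlet boundary values
    return (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2 + np.exp(u)

# Jacobian-free: Jacobian-vector products come from finite differences of
# residual(); the inner linear solves use a Krylov method (LGMRES by default)
u = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
```

    For the stiff, anisotropic systems in the paper a preconditioner (e.g. an ILU factorization of an approximate Jacobian) is essential; `newton_krylov` accepts one through its `inner_M` argument.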

  8. AN ACCURACY ASSESSMENT OF GEOREFERENCED POINT CLOUDS PRODUCED VIA MULTI-VIEW STEREO TECHNIQUES APPLIED TO IMAGERY ACQUIRED VIA UNMANNED AERIAL VEHICLE

    Directory of Open Access Journals (Sweden)

    S. Harwin

    2012-08-01

    Full Text Available Low-cost Unmanned Aerial Vehicles (UAVs) are becoming viable environmental remote sensing tools. Sensor and battery technology is expanding the data capture opportunities. The UAV, as a close range remote sensing platform, can capture high resolution photography on-demand. This imagery can be used to produce dense point clouds using multi-view stereopsis (MVS) techniques combining computer vision and photogrammetry. This study examines point clouds produced using MVS techniques applied to UAV and terrestrial photography. A multi-rotor micro UAV acquired aerial imagery from an altitude of approximately 30–40 m. The point clouds produced are extremely dense (<1–3 cm point spacing) and provide a detailed record of the surface in the study area, a 70 m section of sheltered coastline in southeast Tasmania. Areas with little surface texture were not well captured; similarly, areas with complex geometry such as grass tussocks and woody scrub were not well mapped. The process fails to penetrate vegetation, but extracts very detailed terrain in unvegetated areas. Initially the point clouds are in an arbitrary coordinate system and need to be georeferenced. A Helmert transformation is applied based on matching ground control points (GCPs) identified in the point clouds to GCPs surveyed with differential GPS. These point clouds can be used, alongside laser scanning and more traditional techniques, to provide very detailed and precise representations of a range of landscapes at key moments. There are many potential applications for the UAV-MVS technique, including coastal erosion and accretion monitoring, mine surveying and other environmental monitoring applications. For the generated point clouds to be used in spatial applications they need to be converted to surface models that reduce dataset size without losing too much detail. Triangulated meshes are one option; another is Poisson Surface Reconstruction. This latter option makes use of point normal
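
    The georeferencing step — a Helmert (7-parameter similarity) transform estimated from matched GCPs — has a closed-form least-squares solution via SVD (the Umeyama/Kabsch construction). The sketch below uses synthetic control points, not the survey data.

```python
import numpy as np

def helmert_fit(src, dst):
    """Closed-form similarity transform dst ~ s * R @ src + t (Umeyama/Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                 # centred point sets
    U, S, Vt = np.linalg.svd(B.T @ A)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))      # guard against an improper rotation
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() # least-squares scale
    t = mu_d - s * R @ mu_s
    return s, R, t

rng = np.random.default_rng(1)
gcp_model = rng.uniform(0.0, 10.0, (6, 3))        # GCPs in the arbitrary model frame
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
gcp_world = 1.7 * gcp_model @ R_true.T + np.array([500.0, 200.0, 30.0])

s, R, t = helmert_fit(gcp_model, gcp_world)
rms = np.sqrt(((s * gcp_model @ R.T + t - gcp_world) ** 2).mean())
```

    With real GCPs the residual `rms` does not vanish, and it is the natural quality metric for the georeferencing.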

  9. Extrapolation of extreme response for different mooring line systems of floating wave energy converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Sterndorff, Martin; Sørensen, John Dalsgaard

    2014-01-01

    Mooring systems for floating wave energy converters (WECs) are a major cost driver. Failure of mooring systems often occurs due to extreme loads. This paper introduces an extrapolation method for extreme response which accounts for the control system of a WEC that controls the loads onto... measurements from the lab-scaled WEPTOS WEC are taken. Different catenary anchor leg mooring (CALM) systems as well as single anchor leg mooring (SALM) systems are implemented for a dynamic simulation with different numbers of mooring lines. Extreme tension loads with a return period of 50 years are assessed... for the hawser as well as at the different mooring lines. Furthermore, the extreme load impact given failure of one mooring line is assessed and compared with extreme loads given no system failure....

  10. Estimated UV clutter levels at 10-100 meter sensor pixel resolution extrapolated from recent Polar Bear measurements

    International Nuclear Information System (INIS)

    Wohlers, M.; Huguenin, R.; Weinberg, M.; Huffman, R.; Eastes, R.

    1989-01-01

    This paper describes the methodology and the results obtained at the 1304 Å wavelength from an analysis of the AFGL Polar Bear experiment. The basic measurement equipment provided data at a spatial resolution of 20 km over a large portion of the earth. The instrumentation also provided sampled outputs as the footprint scanned along the measurement track. The combination of fine scanning and large area coverage provided an opportunity for a spatial power spectral analysis, which in turn provided a means for extrapolation to finer spatial scales
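
    The extrapolation step rests on fitting a power law to the measured spatial power spectrum and extending it beyond the sampled band. The sketch below uses a synthetic along-track scan with a known spectral slope, not the Polar Bear data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dx = 1024, 20.0                            # samples along track, km per sample

# synthesize a scan with a known power-law spectrum P(k) ~ k^-2
k = np.fft.rfftfreq(n, d=dx)                  # spatial frequency, cycles per km
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0                       # amplitude ~ k^-1  ->  power ~ k^-2
phase = rng.uniform(0.0, 2.0 * np.pi, k.size)
phase[0] = phase[-1] = 0.0                    # keep DC/Nyquist bins real
scan = np.fft.irfft(amp * np.exp(1j * phase), n)

# fit a power law to the measured spectrum ...
psd = np.abs(np.fft.rfft(scan)) ** 2
beta, logA = np.polyfit(np.log(k[1:]), np.log(psd[1:]), 1)

# ... and extrapolate it one decade beyond the sampled band (10x finer pixels)
k_fine = 10.0 * k[-1]
p_fine = np.exp(logA) * k_fine ** beta
```

    Predicted clutter power at a finer pixel scale then follows by integrating the extrapolated spectrum over the unresolved band — the assumption being that the power law observed at 20 km resolution continues to hold at 10–100 m scales.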

  11. Applying value stream mapping techniques to eliminate non-value-added waste for the procurement of endovascular stents.

    Science.gov (United States)

    Teichgräber, Ulf K; de Bucourt, Maximilian

    2012-01-01

    OBJECTIVES: To eliminate non-value-adding (NVA) waste for the procurement of endovascular stents in interventional radiology services by applying value stream mapping (VSM). The Lean manufacturing technique was used to analyze the process of material and information flow currently required to direct endovascular stents from external suppliers to patients. Based on a decision point analysis for the procurement of stents in the hospital, a present state VSM was drawn. After assessment of the current state VSM and progressive elimination of unnecessary NVA waste, a future state VSM was drawn. The current state VSM demonstrated that out of 13 processes for the procurement of stents only 2 processes were value-adding. Of the NVA processes, 5 were unnecessary NVA activities, which could be eliminated. The decision point analysis demonstrated that the procurement of stents was mainly a forecast-driven push system. The future state VSM applies a pull inventory control system to trigger the movement of a unit after withdrawal by using a consignment stock. VSM is a visualization tool for the supply chain and value stream, based on the Toyota Production System, and greatly assists in successfully implementing a Lean system. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  12. Neoliberal Optimism: Applying Market Techniques to Global Health.

    Science.gov (United States)

    Mei, Yuyang

    2017-01-01

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  13. Extrapolation of the Dutch 1 MW tunable free electron maser to a 5 MW ECRH source

    International Nuclear Information System (INIS)

    Caplan, M.; Nelson, S.; Kamin, G.; Antonsen, T.; Levush, B.; Urbanus, W.; Tulupov, A.

    1995-01-01

    A Free Electron Maser (FEM) is now under construction at the FOM Institute (Rijnhuizen), Netherlands, with the goal of producing 1 MW long-pulse to CW microwave output in the range 130 GHz to 250 GHz with wall plug efficiencies of 50% (Verhoeven et al., EC-9 Conference). An extrapolated version of this device is proposed which, by scaling up the beam current, would produce microwave power levels of up to 5 MW CW in order to reduce the cost per watt and increase the power per module, thus providing the fusion community with a practical ECRH source

  14. Methods and procedures to apply Probabilistic Safety Assessment (PSA) techniques to the cobalt-therapy process. Cuban experience

    International Nuclear Information System (INIS)

    Vilaragut Llanes, J.J.; Ferro Fernandez, R.; Lozano Lima, B; De la Fuente Puch, A.; Dumenigo Gonzalez, C.; Troncoso Fleitas, M.; Perez Reyes, Y.

    2003-01-01

    This paper presents the results of the Probabilistic Safety Analysis (PSA) of the cobalt-therapy process, which was performed as part of the International Atomic Energy Agency's Coordinated Research Project (CRP) to Investigate Appropriate Methods and Procedures to Apply Probabilistic Safety Assessment (PSA) Techniques to Large Radiation Sources. The primary methodological tools used in the analysis were Failure Modes and Effects Analysis (FMEA), Event Trees and Fault Trees. These tools were used to evaluate occupational, public and medical exposures during cobalt-therapy treatment. The emphasis of the study was on the radiological protection of patients. During the course of the PSA, several findings were analysed concerning the cobalt treatment process. Concerning the probabilities of the undesired events, the lowest exposure probabilities correspond to public exposures during the treatment process (Z21), around 10⁻¹⁰ per year, with worker exposures (Z11) around 10⁻⁴ per year. Regarding the patient, the Z33 probabilities (undesired dose to normal tissue) and Z34 probabilities (unirradiated portion of the target volume) prevail. Patient accidental exposures are also classified in terms of the extent to which the error is likely to affect individual treatments, individual patients, or all the patients treated on a specific unit. Sensitivity analyses were performed to determine the influence of certain tasks or critical stages on the results. As a conclusion, the study establishes that PSA techniques may effectively and reasonably determine the risk associated with the cobalt-therapy treatment process, though there are some weaknesses in their methodological application for this kind of study requiring further research. These weaknesses are due to the fact that traditional PSA has mainly been applied to complex hardware systems designed to operate with a high automation level, whilst cobalt-therapy treatment is a relatively simple hardware system with a
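
    The fault-tree arithmetic behind such per-year probability figures is simple under the usual independence assumption. The gate structure and numbers below are illustrative stand-ins, not the study's cobalt-therapy model.

```python
def p_and(*ps):
    """AND gate: all (independent) basic events must occur."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def p_or(*ps):
    """OR gate: at least one occurs, i.e. 1 - prod(1 - p_i)."""
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# hypothetical top event: "wrong dose delivered" = (operator error AND
# interlock failure) OR an independent calibration fault
p_operator = 1e-2
p_interlock = 1e-3
p_calibration = 5e-5
p_top = p_or(p_and(p_operator, p_interlock), p_calibration)
print(f"{p_top:.2e}")   # prints 6.00e-05
```

    At such small probabilities the OR gate is nearly additive (the cross term is negligible), which is why rare-event PSA results are often quoted as simple sums of minimal cut set probabilities.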

  15. Nonlinear Force-free Field Extrapolation of a Coronal Magnetic Flux Rope Supporting a Large-scale Solar Filament from a Photospheric Vector Magnetogram

    Science.gov (United States)

    Jiang, Chaowei; Wu, S. T.; Feng, Xueshang; Hu, Qiang

    2014-05-01

    Solar filaments are commonly thought to be supported in magnetic dips, in particular, in those of magnetic flux ropes (FRs). In this Letter, based on the observed photospheric vector magnetogram, we implement a nonlinear force-free field (NLFFF) extrapolation of a coronal magnetic FR that supports a large-scale intermediate filament between an active region and a weak polarity region. This result is a first, in the sense that current NLFFF extrapolations including the presence of FRs are limited to relatively small-scale filaments that are close to sunspots and along main polarity inversion lines (PILs) with strong transverse field and magnetic shear, and the existence of an FR is usually predictable. In contrast, the present filament lies along a weak-field region, and the extrapolated FR matches the filament barbs very well, which strongly supports the FR-dip model for filaments. The filament is stably sustained because the FR is weakly twisted and strongly confined by the overlying closed arcades.

  16. Technique of uranium exploration in tropical rain forests as applied in Sumatra and other tropical areas

    International Nuclear Information System (INIS)

    Hahn, L.

    1983-01-01

    The technique of uranium prospecting in areas covered by tropical rain forest is discussed using a uranium exploration campaign conducted from 1976 to 1978 in Western Sumatra as an example. A regional reconnaissance survey using stream sediment samples combined with radiometric field measurements proved ideal for covering very large areas. A mobile field laboratory was used for the geochemical survey. Helicopter support in difficult terrain was found to be very efficient and economical. A field procedure for detecting low uranium concentrations in stream water samples is described. This method has been successfully applied in Sarawak. To distinguish meaningful uranium anomalies in water from those with no meaning for prospecting, the correlations between U content and conductivity of the water and between U content and Ca and HCO3 content must be considered. This method has been used successfully in a geochemical survey in Thailand. (author)

  17. Determination of hydrogen diffusivity and permeability in W near room temperature applying a tritium tracer technique

    International Nuclear Information System (INIS)

    Ikeda, T.; Otsuka, T.; Tanabe, T.

    2011-01-01

    Tungsten is a primary candidate plasma-facing material for ITER and beyond, owing to its good thermal properties and low erosion. However, hydrogen solubility and diffusivity near ITER operating temperatures (below 500 K) have scarcely been studied, mainly because tungsten's low hydrogen solubility and diffusivity at lower temperatures make the detection of hydrogen quite difficult. We have observed hydrogen plasma-driven permeation (PDP) through nickel and tungsten near room temperature by applying a tritium tracer technique, which is extremely sensitive to tritium diluted in hydrogen. The apparent diffusion coefficients for PDP were determined from permeation lag times for the first time; those for nickel and tungsten were similar to, or a few times larger than, those for gas-driven permeation (GDP). The permeation rates for PDP in nickel and tungsten, normalized to the same gas pressure, were about 20 and 5 times larger, respectively, than those for GDP.

  18. Applied nuclear γ-resonance as fingerprint technique in geochemistry and mineralogy

    International Nuclear Information System (INIS)

    Constantinescu, S.

    2003-01-01

    The aim of the present paper is to highlight new developments of one of the most refined techniques, nuclear γ-resonance (the well-known Moessbauer effect), in the field of mineralogical and geochemical investigation. There are many Moessbauer studies on minerals, but the improved performance of Moessbauer equipment and of computers calls for a more profound and thorough review of the information this non-destructive technique offers. This task has become more and more pressing because many minerals contain Moessbauer isotopes in high proportion. Generally, mineralogists, physicists and chemists hope to obtain, by techniques of this kind, more refined and complete information about the physical and chemical aspects of synthesis and solid-state transformation of natural and synthetic materials, as well as about structural aspects. Along this line, the authors briefly review the principal aspects of Moessbauer spectroscopy and underline the most important information one can obtain from spectra. Recent results obtained by the authors on minerals extracted from Romanian geological deposits are discussed in detail in the second part of this article. (authors)

  19. Conceptual design study and evaluation of an advanced treatment process applying a submerged combustion technique for spent solvents

    International Nuclear Information System (INIS)

    Uchiyama, Gunzo; Maeda, Mitsuru; Fijine, Sachio; Chida, Mitsuhisa; Kirishima, Kenji.

    1993-10-01

    An advanced treatment process based on a submerged combustion technique was proposed for spent solvents and distillation residues containing transuranium (TRU) nuclides. A conceptual design study and a preliminary cost estimation of a treatment facility applying the process were conducted. Based on the results of the study, the process was evaluated with respect to technical features such as safety, volume reduction of TRU waste, and economics. The key requirements for practical use were also summarized. It was shown that the process has the following features: the simplified treatment and solidification steps will not generate secondary aqueous wastes; the volume of TRU solid waste will be reduced to less than one tenth of that of a reference technique (pyrolysis process); and the facility construction cost is less than 1% of the total construction cost of a future large-scale reprocessing plant. As for the low-level calcium phosphate wastes, it was shown that further removal of β·γ nuclides together with TRU nuclides would be required for safety in interim storage and transportation and to reduce the shielding load. (author)

  20. People Recognition for Loja ECU911 applying artificial vision techniques

    Directory of Open Access Journals (Sweden)

    Diego Cale

    2016-05-01

    Full Text Available This article presents a technological proposal based on artificial vision which aims to search for people in an intelligent way using IP video cameras. Currently, the manual searching process is time- and resource-demanding in contrast to an automated one, which means that it could be replaced. In order to obtain optimal results, three different artificial vision techniques were analyzed (Eigenfaces, Fisherfaces, and Local Binary Patterns Histograms). The selection process considered factors like lighting changes, image quality and changes in the angle of focus of the camera. Besides, a literature review was conducted to evaluate several points of view regarding artificial vision techniques.
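
    The Local Binary Patterns Histograms descriptor among the compared techniques can be sketched without OpenCV: each pixel gets an 8-bit code from comparing its ring of neighbours against it, and the image (or a grid cell of it) is represented by the histogram of those codes. A minimal plain-NumPy version, not OpenCV's implementation:

```python
import numpy as np

def lbp_histogram(img):
    """Normalized 256-bin histogram of 3x3 Local Binary Pattern codes."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                           # centre pixels
    # 8 neighbours, clockwise from top-left (offsets into the padded image)
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (nb >= c).astype(np.uint8) << bit  # one bit per neighbour test
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(3)
face = rng.integers(0, 256, (32, 32))             # stand-in for a face crop
h = lbp_histogram(face)
```

    Because the codes depend only on sign comparisons against the centre pixel, the descriptor is invariant to monotonic brightness changes — the property that makes LBPH robust to the lighting variations the selection criteria emphasize.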