Karslı, Hakan
2006-08-01
Seismic data still lack sufficient temporal resolution because of the band-limited nature of the available data, even after deconvolution: the low- and high-frequency information is missing and cannot be recovered directly from the data. In this paper, a method originally applied by Honarvar et al. [Honarvar, F., Sheikhzadeh, H., Moles, M., Sinclair, A.N., 2004. Improving the time-resolution and signal-noise ratio of ultrasonic NDE signals. Ultrasonics 41, 755-763.], which combines the widely used Wiener deconvolution with autoregressive (AR) spectral extrapolation in the frequency domain, is briefly reviewed and applied to seismic data to further improve temporal resolution. The missing frequency information is optimally recovered by forward and backward extrapolation from a high signal-to-noise ratio (SNR) portion of the deconvolved signal spectrum. The combined method is first tested on a variety of synthetic examples and then applied to a stacked real trace. The selection of the necessary parameters in Wiener filtering and in the extrapolation is discussed in detail. Here an optimum frequency window between the 3 and 10 dB drops of the spectrum is chosen by comparing the results from these drops, whereas Honarvar et al. (2004) use a standard window between the 2.8 and 3.2 dB drops. The results show that the proposed signal processing technique considerably improves the temporal resolution of seismic data compared with the original data. Furthermore, the AR spectrally extrapolated data can almost be regarded as the reflectivity sequence of the layered medium. Consequently, the combination of Wiener deconvolution and AR spectral extrapolation can reveal details of seismic data that cannot be
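The AR spectral-extrapolation step can be sketched in a few lines. The sketch below is illustrative only, not the authors' processing chain: the spectrum of a two-spike reflectivity is a sum of two complex exponentials in the frequency index, so an AR(2) model fitted by least squares to a "high-SNR" band predicts the missing spectral bins.

```python
import numpy as np

def ar_fit(x, p):
    """Least-squares AR(p) fit: x[n] ~ a . (x[n-1], ..., x[n-p])."""
    rows = np.array([x[n - p:n][::-1] for n in range(p, len(x))])
    a, *_ = np.linalg.lstsq(rows, x[p:], rcond=None)
    return a

def ar_extrapolate(x, a, m):
    """Extend x by m samples with the fitted recursion."""
    y = list(x)
    for _ in range(m):
        y.append(np.dot(a, y[-1:-1 - len(a):-1]))
    return np.array(y)

# Spectrum of a two-spike reflectivity: a sum of two complex exponentials
# in the frequency index, hence exactly AR(2)-predictable.
n = 128
r = np.zeros(n)
r[20], r[50] = 1.0, 0.6
R = np.fft.fft(r)

band = R[5:40]                        # stand-in for the deconvolved band
a = ar_fit(band, 2)
R_ext = ar_extrapolate(band, a, 24)   # extrapolate bins 40..63 forward
err = np.max(np.abs(R_ext[35:] - R[40:64]))
print(err)  # near machine precision for this noise-free toy example
```

Backward extrapolation toward the low frequencies works the same way on the reversed band; real data require the noise handling and window selection discussed in the abstract.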
Caldwell, J.; Shakibi, B.; Moles, M.; Sinclair, A. N.
2013-01-01
Phased array inspection was conducted on a V-butt welded steel sample containing multiple shallow flaws of varying depths. The inspection measurements were processed using Wiener filtering and autoregressive spectral extrapolation (AS) to enhance the signals. Inspections were conducted using multiple phased array probes of varying nominal central frequencies (2.25, 4, 5 and 10 MHz). This paper describes the measured results, which show high accuracy, typically in the range of 0.1-0.2 mm. The results indicated that: 1. There was no statistical difference between the flaw depths calculated from phased array inspections at different flaw tip angles. 2. There was no statistical difference in flaw depths calculated using phased array data collected from either side of the weld. 3. Flaws with depths less than the estimated probe shear wavelength could not be sized. 4. Finally, there was no statistical difference between the flaw depths calculated using phased array probes with different sampling frequencies and the destructive measurements of the flaws.
One-step lowrank wave extrapolation
Sindi, Ghada Atif
2014-01-01
Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using the Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity-gradient term to add to the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can utilize larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.
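The Shanks transform invoked here is simple to state. A minimal numerical-series sketch (a generic illustration of the transform, not its wave-extrapolation use):

```python
import numpy as np

def shanks(a):
    """One Shanks transformation of a sequence:
    S(a)_n = (a_{n+1} a_{n-1} - a_n^2) / (a_{n+1} + a_{n-1} - 2 a_n)."""
    a = np.asarray(a, dtype=float)
    return (a[2:] * a[:-2] - a[1:-1] ** 2) / (a[2:] + a[:-2] - 2.0 * a[1:-1])

# Partial sums of ln 2 = 1 - 1/2 + 1/3 - ... converge slowly;
# two applications of the transform accelerate them dramatically.
n = np.arange(1, 12)
partial = np.cumsum((-1.0) ** (n + 1) / n)
accel = shanks(shanks(partial))

err_plain = abs(partial[-1] - np.log(2.0))
err_accel = abs(accel[-1] - np.log(2.0))
print(err_plain, err_accel)  # the accelerated error is orders of magnitude smaller
```

In the extrapolation papers listed here the same idea is applied to the partial sums of a series expansion of the phase operator, turning a truncated expansion into a rational (Padé-like) approximation.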
RECIPROCAL POLYNOMIAL EXTRAPOLATION
Institute of Scientific and Technical Information of China (English)
Sergio Amat; Sonia Busquier; Vicente F. Candela
2004-01-01
An alternative to the classical extrapolations is proposed, and its stability and accuracy are studied. The new extrapolation behaves better than the classical ones when there are stability problems. This technique will be useful in problems where the region of stability is very small, forcing one to work with excessively fine scales.
Residual extrapolation operators for efficient wavefield construction
Alkhalifah, Tariq Ali
2013-02-27
Solving the wave equation using finite-difference approximations allows for fast extrapolation of the wavefield for modelling, imaging and inversion in complex media. It suffers, however, from dispersion and stability-related limitations that might hamper its efficient or proper application to high frequencies. Spectral-based time extrapolation methods tend to mitigate these problems, but at an additional cost to the extrapolation. I investigate the prospect of using a residual formulation of the spectral approach, along with Shanks transform-based expansions that adhere to the residual requirements, to improve accuracy and reduce the cost. Utilizing the fact that spectral methods excel (time steps are allowed to be large) in homogeneous and smooth media, the residual implementation based on velocity perturbation optimizes the use of this feature. Most other implementations of the spectral approach focus on reducing cost by reducing the number of inverse Fourier transforms required in every step. The approach here instead improves the accuracy of each, potentially longer, time step.
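The reason spectral methods tolerate large time steps in homogeneous media is that the extrapolation step there is a pure phase shift, exact for any step size. A minimal 1D sketch (one-way propagation, constant velocity, periodic domain; all parameter values are illustrative):

```python
import numpy as np

# One-way spectral extrapolation in a constant-velocity 1D medium:
# p(x, t + dt) = IFFT[ FFT[p](k) * exp(-i k v dt) ].
n, dx, v, dt = 256, 1.0, 2.0, 3.0
x = np.arange(n) * dx
p = np.exp(-((x - 64.0) ** 2) / (2.0 * 8.0 ** 2))  # Gaussian pulse

k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
p_next = np.fft.ifft(np.fft.fft(p) * np.exp(-1j * k * v * dt)).real

# For constant velocity the step is exact for any dt: the pulse simply
# moves v*dt = 6 grid points to the right (circularly, on this grid).
diff = np.max(np.abs(p_next - np.roll(p, 6)))
print(diff)  # effectively zero
```

The residual formulation discussed in the abstract keeps this exact homogeneous step and treats velocity perturbations as a correction, rather than rebuilding the full mixed-domain operator.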
Ecotoxicological effects extrapolation models
Energy Technology Data Exchange (ETDEWEB)
Suter, G.W. II
1996-09-01
One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.
Cosmological extrapolation of MOND
Kiselev, V V
2011-01-01
The MOND regime, used in astronomy to describe island-type gravitating systems without postulating the existence of hypothetical dark matter, is generalized to the case of a homogeneous distribution of ordinary matter by introducing a linear dependence of the critical acceleration on the size of the region under consideration. We show that such an extrapolation of MOND to cosmology is consistent both with the observed dependence of brightness on redshift for type Ia supernovae and with the parameters of the large-scale structure of the Universe during its evolution, which is determined by the presence of a cosmological constant, the ordinary matter of baryons and electrons, and the photon and neutrino radiation, without any dark matter.
The optimized expansion method for wavefield extrapolation
Wu, Zedong
2013-01-01
Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because of their dispersion-free and artifact-free, as well as highly accurate, solutions of the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain operator. In this abstract, we propose an optimized expansion method that approximates this operator with a low-rank representation. The rank defines the number of inverse FFTs required per time extrapolation step, and thus a lower rank admits faster extrapolations. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its low-rank representation. Thus, we obtain more accurate wavefields using a lower-rank representation, and hence cheaper extrapolations. The optimization that defines the low-rank representation depends only on the velocity model; it is done only once and remains valid for a full reverse time migration (many shots) or one iteration of full waveform inversion. Applications to the BP model yielded superior results to those obtained using the decomposition approach. For transversely isotropic media, the solutions were free of shear-wave artifacts and do not require that eta > 0.
Wavefield extrapolation in pseudodepth domain
Ma, Xuxin
2013-02-01
Wavefields are commonly computed in the Cartesian coordinate frame, whose efficiency is inherently limited by spatial oversampling in deep layers, where the velocity is high and wavelengths are long. To alleviate this computational waste due to uneven wavelength sampling, we convert the vertical axis of the conventional domain from depth to vertical time, or pseudodepth. This creates a nonorthogonal Riemannian coordinate system. Isotropic and anisotropic wavefields can be extrapolated in the new coordinate frame with improved efficiency and good consistency with Cartesian-domain extrapolation results. Prestack depth migrations are also evaluated based on wavefield extrapolation in the pseudodepth domain. © 2013 Society of Exploration Geophysicists. All rights reserved.
Local theory of extrapolation methods
Kulikov, Gennady
2010-03-01
In this paper we discuss the theory of one-step extrapolation methods applied both to ordinary differential equations and to index-1 semi-explicit differential-algebraic systems. The theoretical background of this numerical technique is the asymptotic global error expansion of numerical solutions obtained from general one-step methods, discovered independently by Henrici, Gragg and Stetter in 1962, 1964 and 1965, respectively. This expansion is also used in most global error estimation strategies. However, the asymptotic expansion of the global error of one-step methods is difficult to observe in practice. We therefore give another substantiation of the extrapolation technique, based on the usual local error expansion in a Taylor series. We show that Richardson extrapolation can be utilized successfully to explain how extrapolation methods perform. Additionally, we prove that the Aitken-Neville algorithm works for any one-step method of arbitrary order s, under suitable smoothness assumptions.
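The Aitken-Neville tableau referred to above can be sketched directly. A minimal illustration under simplifying assumptions (a first-order forward-difference approximation with halved step sizes, not the paper's ODE/DAE setting):

```python
import numpy as np

def neville_to_zero(h, T0):
    """Aitken-Neville polynomial extrapolation of T(h) to h = 0.

    h  : decreasing step sizes; T0 : values T(h[i]) of a method whose
    error has an expansion in integer powers of h.
    """
    T = [list(T0)]
    for j in range(1, len(T0)):
        T.append([T[j - 1][i + 1]
                  + (T[j - 1][i + 1] - T[j - 1][i]) / (h[i] / h[i + j] - 1.0)
                  for i in range(len(T0) - j)])
    return T[-1][0]

# First-order forward-difference approximations of exp'(0) = 1,
# with error expansion T(h) = 1 + h/2 + h^2/6 + ...
h = 0.5 / 2.0 ** np.arange(5)
T0 = (np.exp(h) - 1.0) / h
best = neville_to_zero(h, T0)
print(abs(T0[-1] - 1.0), abs(best - 1.0))  # extrapolation gains several digits
```

Each tableau column eliminates one further power of h from the error expansion, which is exactly the mechanism the local-error analysis in the paper formalizes.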
On the extrapolation of band-limited signals
Chamzas, C. C.
1980-12-01
The determination of the Fourier transform of a band-limited signal from a finite segment is examined. Papoulis' extrapolation algorithm is extended to a broader class of signals, and its convergence is considerably improved by multiplication with an adaptive constant chosen to minimize the mean square error in the extrapolation interval. The discrete version of the iteration is examined and then modified so that it converges to the best linear mean-square estimator of the unknown signal when noise is added to the given data. The problem of determining the frequencies, amplitudes and phases of a sinusoidal signal from incomplete noisy data is considered, and the extrapolation algorithm is suitably modified to estimate these quantities. The resulting iteration is nonlinear and adaptively reduces the spectral components due to noise. The adaptive extrapolation technique is applied to the problem of image restoration for objects consisting of point or line sources, and to an ultrasonic problem.
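The non-adaptive Papoulis iteration that this paper extends alternates two projections: re-impose the band-limit in the frequency domain, then restore the known samples in the time domain. A minimal sketch of that baseline (the adaptive relaxation constant and noise handling of the paper are omitted; all signal parameters are made up):

```python
import numpy as np

n = 256
t = np.arange(n)
truth = np.cos(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 5 * t / n)

band = np.abs(np.fft.fftfreq(n, d=1.0 / n)) <= 5   # known band limit (11 bins)
known = np.ones(n, dtype=bool)
known[120:136] = False                             # a 16-sample gap to recover

x = np.where(known, truth, 0.0)
err0 = np.max(np.abs(x - truth)[~known])
for _ in range(1000):
    X = np.fft.fft(x)
    X[~band] = 0.0                                 # project onto the band
    x = np.fft.ifft(X).real
    x[known] = truth[known]                        # restore known samples

err = np.max(np.abs(x - truth)[~known])
print(err0, err)  # the gap error shrinks by orders of magnitude
```

Convergence of this plain iteration can be slow when the unknown region is large relative to the bandwidth, which is precisely what the adaptive constant in the paper is designed to accelerate.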
Infrared extrapolations for atomic nuclei
Furnstahl, R J; Papenbrock, T; Wendt, K A
2014-01-01
Harmonic oscillator model-space truncations introduce systematic errors to the calculation of binding energies and other observables. We identify the relevant infrared scaling variable and give values for this nucleus-dependent quantity. We consider isotopes of oxygen computed with the coupled-cluster method from chiral nucleon-nucleon interactions at next-to-next-to-leading order and show that the infrared component of the error is sufficiently understood to permit controlled extrapolations. By employing oscillator spaces with relatively large frequencies, well above the energy minimum, the ultraviolet corrections can be suppressed while infrared extrapolations over tens of MeVs are accurate for ground-state energies. However, robust uncertainty quantification for extrapolated quantities that fully accounts for systematic errors is not yet developed.
Assessment of Load Extrapolation Methods for Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2010-01-01
In the present paper methods for statistical load extrapolation of wind turbine response are studied using a stationary Gaussian process model which has approximately the same spectral properties as the response for the flap bending moment of a wind turbine blade. For a Gaussian process an approx...
Extrapolation methods theory and practice
Brezinski, C
1991-01-01
This volume is a self-contained, exhaustive exposition of extrapolation methods theory and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77), with instructions for their use, are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of extrapolation methods. Many numerical examples showing the effectiveness of the procedures, and a consequent chapter on applications, are also provided, including some never before published results and applications.
Bandlimited image extrapolation with faster convergence
Cahana, D.; Stark, H.
1981-08-01
Techniques for increasing the convergence rate of the extrapolation algorithm proposed by Gerchberg (1974) and Papoulis (1975) for image restoration are presented. The techniques modify the Gerchberg-Papoulis (GP) algorithm to include additional a priori data, such as the low-pass projection of the image, either by including the data at the start of the recursion to reduce the starting-point error, or by using the low-pass image in each iteration to correct twice in the frequency domain. The performance of the GP algorithm and the two modifications is compared on the restoration of a signal consisting of widely separated spectral components of equal magnitude and of a signal with spectral components grouped in passbands. While both modifications reduced the starting-point error, the convergence rate of the second technique was not substantially greater than that of the first, despite the additional iterative frequency-plane correction. A significant improvement in the starting-point errors and convergence rates of both modified algorithms is obtained, however, when they are combined with adaptive thresholding in the presence of low noise levels and a signal with relatively well-spaced impulse-type spectral components.
Effective wavefield extrapolation in anisotropic media: Accounting for resolvable anisotropy
Alkhalifah, Tariq Ali
2014-04-30
Spectral methods provide artefact-free and generally dispersion-free wavefield extrapolation in anisotropic media. Their apparent weakness is in accessing the medium-inhomogeneity information in an efficient manner. This is usually handled through a velocity-weighted summation (interpolation) of representative constant-velocity extrapolated wavefields, with the number of these extrapolations controlled by the effective rank of the original mixed-domain operator or, more specifically, by the complexity of the velocity model. Conversely, with pseudo-spectral methods, because only the space derivatives are handled in the wavenumber domain, we obtain relatively efficient access to the inhomogeneity in isotropic media, but we often resort to weak approximations to handle the anisotropy efficiently. Utilizing perturbation theory, I isolate the contribution of anisotropy to the wavefield extrapolation process. This allows us to factorize as much of the inhomogeneity in the anisotropic parameters as possible out of the spectral implementation, yielding effectively a pseudo-spectral formulation. This is particularly true if the inhomogeneity of the dimensionless anisotropic parameters is mild compared with that of the velocity (i.e., factorized anisotropic media). I improve on the accuracy by using the Shanks transformation to incorporate a denominator in the expansion that predicts the higher-order omitted terms; thus, we deal with fewer terms for a high level of accuracy. In fact, when we use this new separation-based implementation, the anisotropy correction to the extrapolation can be applied separately as a residual operation, which provides a tool for anisotropic parameter sensitivity analysis. The accuracy of the approximation is high, as demonstrated in a complex tilted transversely isotropic model. © 2014 European Association of Geoscientists & Engineers.
UFOs: Observations, Studies and Extrapolations
Baer, T; Barnes, M J; Bartmann, W; Bracco, C; Carlier, E; Cerutti, F; Dehning, B; Ducimetière, L; Ferrari, A; Ferro-Luzzi, M; Garrel, N; Gerardin, A; Goddard, B; Holzer, E B; Jackson, S; Jimenez, J M; Kain, V; Zimmermann, F; Lechner, A; Mertens, V; Misiowiec, M; Nebot Del Busto, E; Morón Ballester, R; Norderhaug Drosdal, L; Nordt, A; Papotti, G; Redaelli, S; Uythoven, J; Velghe, B; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zerlauth, M; Fuster Martinez, N
2012-01-01
UFOs ("Unidentified Falling Objects") could be one of the major performance limitations for nominal LHC operation. Therefore, in 2011, the diagnostics for UFO events were significantly improved, dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge is summarized and extrapolations for LHC operation in 2012 and beyond are presented. Mitigation strategies are proposed and related tests and measures for 2012 are specified.
Renyi extrapolation of Shannon entropy
Zyczkowski, K
2003-01-01
Relations between the Shannon entropy and Rényi entropies of integer order are discussed. For any N-point discrete probability distribution for which the Rényi entropies of order two and three are known, we provide a lower and an upper bound for the Shannon entropy. The average of both bounds provides an explicit extrapolation for this quantity. These results imply relations between the von Neumann entropy of a mixed quantum state, its linear entropy and traces.
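What makes order-2 and order-3 values informative about the Shannon (order-1) entropy is that Rényi entropies are nonincreasing in their order; the paper's explicit bounds are not reproduced here, but the monotonicity is easy to check numerically:

```python
import numpy as np

def renyi(p, q):
    """Rényi entropy of order q (natural log); q = 1 is the Shannon limit."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if q == 1:
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

rng = np.random.default_rng(0)
p = rng.random(8)
p /= p.sum()            # a random 8-point probability distribution

H1, H2, H3 = renyi(p, 1), renyi(p, 2), renyi(p, 3)
# Rényi entropies are nonincreasing in the order, so H3 <= H2 <= H1:
# H2 is already a lower bound on the Shannon entropy, and the gap H2 - H3
# indicates how much room a two-point extrapolation has to work with.
print(H3 <= H2 <= H1)  # True
```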
Lowrank seismic-wave extrapolation on a staggered grid
Fang, Gang
2014-05-01
© 2014 Society of Exploration Geophysicists. We evaluated a new spectral method and a new finite-difference (FD) method for seismic-wave extrapolation in time. Using staggered temporal and spatial grids, we derived a wave-extrapolation operator using a lowrank decomposition for a first-order system of wave equations and designed the corresponding FD scheme. The proposed methods extend previously proposed lowrank and lowrank FD wave extrapolation methods from the cases of constant density to those of variable density. Dispersion analysis demonstrated that the proposed methods have high accuracy for a wide wavenumber range and significantly reduce the numerical dispersion. The method of manufactured solutions coupled with mesh refinement was used to verify each method and to compare numerical errors. Tests on 2D synthetic examples demonstrated that the proposed method is highly accurate and stable. The proposed methods can be used for seismic modeling or reverse-time migration.
Extrapolation of acenocoumarol pharmacogenetic algorithms.
Jiménez-Varo, Enrique; Cañadas-Garre, Marisa; Garcés-Robles, Víctor; Gutiérrez-Pimentel, María José; Calleja-Hernández, Miguel Ángel
2015-11-01
Acenocoumarol (ACN) has a narrow therapeutic range that is especially difficult to control at the start of its administration. Various pharmacogenetic-guided dosing algorithms have been developed, but further work on their external validation is required. The aim of this study was to evaluate the extrapolation of pharmacogenetic algorithms for ACN as an alternative to developing a specific algorithm for a given population. The predictive performance, deviation, accuracy, and clinical significance of five pharmacogenetic algorithms (EU-PACT, Borobia, Rathore, Markatos, Krishna Kumar) were compared in 189 stable ACN patients representing all indications for anticoagulant treatment. The correlation between the dose predictions of the five pharmacogenetic models ranged from 7.7 to 70.6%, and the percentage of patients with a correct prediction (deviation ≤20% from the actual ACN dose) ranged from 5.9 to 40.7%. The EU-PACT and Borobia pharmacogenetic dosing algorithms were the most accurate in our setting and showed the best clinical performance. Among the five models studied, the EU-PACT and Borobia algorithms demonstrated the best potential for extrapolation. Copyright © 2015 Elsevier Inc. All rights reserved.
Extrapolating future Arctic ozone losses
Directory of Open Access Journals (Sweden)
B. M. Knudsen
2004-06-01
Future increases in the concentrations of greenhouse gases and water vapour are likely to cool the stratosphere further and to increase the amount of polar stratospheric clouds (PSCs). Future Arctic PSC areas have been extrapolated using the highly significant trends in the temperature record from 1958-2001. Using a tight correlation between PSC area and total vortex ozone depletion, and taking the decreasing amounts of ozone-depleting substances into account, we make empirical estimates of future ozone. The result is that Arctic ozone losses increase until 2010-2020 and decrease only slightly up to 2030. This approach is an alternative method of prediction to that based on the complex coupled chemistry-climate models (CCMs).
How accurate are infrared luminosities from monochromatic photometric extrapolation?
Lin, Zesen; Kong, Xu
2016-01-01
Template-based extrapolations from only one photometric band can be a cost-effective method to estimate the total infrared (IR) luminosities ($L_{\mathrm{IR}}$) of galaxies. By utilizing multi-wavelength data covering 0.35--500\,$\mathrm{\mu m}$ in the GOODS-North and GOODS-South fields, we investigate the accuracy of this monochromatic extrapolated $L_{\mathrm{IR}}$ based on three IR spectral energy distribution (SED) templates (\citealt[CE01]{Chary2001}; \citealt[DH02]{Dale2002}; \citealt[W08]{Wuyts2008a}) out to $z\sim 3.5$. We find that the CE01 template provides the best estimate of $L_{\mathrm{IR}}$ in {\it Herschel}/PACS bands, while the DH02 template performs best in {\it Herschel}/SPIRE bands. To estimate $L_{\mathrm{IR}}$, we suggest that extrapolations from the available longest-wavelength PACS band based on the CE01 template can be a good estimator. Moreover, if a PACS measurement is unavailable, extrapolations from SPIRE observations but based on the \cite{Dale2002} template can also provide ...
Assessment of Load Extrapolation Methods for Wind Turbines
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard; Veldkamp, Dick
2011-01-01
In the present paper, methods for statistical load extrapolation of wind-turbine response are studied using a stationary Gaussian process model, which has approximately the same spectral properties as the response for the out-of-plane bending moment of a wind-turbine blade. For a Gaussian process, an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper, three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima, and peak-over-threshold. By considering Gaussian processes for 12 mean wind speeds, the "fitting before aggregation" and "aggregation before fitting" approaches are studied. The results show that the fitting before aggregation approach gives the best results. [DOI: 10.1115/1.4003416]
Efficient Wavefield Extrapolation In Anisotropic Media
Alkhalifah, Tariq
2014-07-03
Various examples are provided for wavefield extrapolation in anisotropic media. In one example, among others, a method includes determining an effective isotropic velocity model and extrapolating an equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. The effective isotropic velocity model can be based upon a kinematic geometrical representation of an anisotropic, poroelastic or viscoelastic wavefield. Extrapolating the equivalent propagation can use isotropic, acoustic or elastic operators based upon the determined effective isotropic velocity model. In another example, a non-transitory computer-readable medium stores an application that, when executed by processing circuitry, causes the processing circuitry to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. In another example, a system includes processing circuitry and an application configured to cause the system to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield.
Builtin vs. auxiliary detection of extrapolation risk.
Energy Technology Data Exchange (ETDEWEB)
Munson, Miles Arthur; Kegelmeyer, W. Philip
2013-02-01
A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two builtin methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining builtin and auxiliary diagnostics.
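A toy auxiliary detector in the spirit described above (purely illustrative; the paper's actual models are not specified here): flag a query as an extrapolation when its nearest-neighbour distance to the training data exceeds nearly all of the training points' own leave-one-out nearest-neighbour distances.

```python
import numpy as np

rng = np.random.default_rng(1)
train = rng.normal(0.0, 1.0, size=(500, 2))   # hypothetical training set

# Leave-one-out nearest-neighbour distances of the training points.
d2 = ((train[:, None, :] - train[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(d2, np.inf)
threshold = np.sqrt(np.percentile(d2.min(axis=1), 99))

def is_extrapolation(x):
    """Flag x when it is farther from the training data than 99% of
    training points are from their own nearest neighbours."""
    dist = np.sqrt(((train - x) ** 2).sum(axis=1).min())
    return dist > threshold

near = is_extrapolation(np.array([0.1, -0.2]))  # in-distribution query
far = is_extrapolation(np.array([8.0, 8.0]))    # far outside the data
print(near, far)  # False True
```

This is the "auxiliary model" pattern in its simplest form: the risk detector is built from the training data alone and sits beside the classifier, rather than reusing the classifier's internals.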
Signal extrapolation based on wavelet representation
Xia, Xiang-Gen; Kuo, C.-C. Jay; Zhang, Zhen
1993-11-01
The Papoulis-Gerchberg (PG) algorithm is well known for band-limited signal extrapolation. We consider the generalization of the PG algorithm to signals in the wavelet subspaces in this research. The uniqueness of the extrapolation for continuous-time signals is examined, and sufficient conditions on signals and wavelet bases for the generalized PG (GPG) algorithm to converge are given. We also propose a discrete GPG algorithm for discrete-time signal extrapolation, and investigate its convergence. Numerical examples are given to illustrate the performance of the discrete GPG algorithm.
Extrapolation procedures in Mott electron polarimetry
Gay, T. J.; Khakoo, M. A.; Brand, J. A.; Furst, J. E.; Wijayaratna, W. M. K. P.; Meyer, W. V.; Dunning, F. B.
1992-01-01
In standard Mott electron polarimetry using thin gold film targets, extrapolation procedures must be used to reduce the experimentally measured asymmetries A to the values they would have for scattering from single atoms. These extrapolations involve the dependence of A on either the gold film thickness or the maximum detected electron energy loss in the target. A concentric cylindrical-electrode Mott polarimeter has been used to study and compare these two types of extrapolations over the electron energy range 20-100 keV. The potential systematic errors that can result from such procedures are analyzed in detail, particularly with regard to the use of various fitting functions in thickness extrapolations, and the failure of perfect energy-loss discrimination to yield accurate polarizations when thick foils are used.
Typical object velocity influences motion extrapolation.
Makin, Alexis D J; Stewart, Andrew J; Poliakoff, Ellen
2009-02-01
Previous work indicates that extrapolation of object motion during occlusion is affected by the velocity of the immediately preceding trial. Here we ask whether longer-term velocity representations can also influence motion extrapolation. Red, blue or green targets disappeared behind an occluder, and participants pressed a button when they thought the target had reached the other side. Red targets were slower (10-20 deg/s), blue targets moved at medium velocities (14-26 deg/s) and green targets were faster (20-30 deg/s). We compared responses on a subset of red and green trials that always travelled at 20 deg/s. Although the trial velocities were identical, participants responded as if the green targets moved faster (M = 22.64 deg/s) than the red targets (M = 19.72 deg/s). This indicates that motion extrapolation is affected by longer-term information about the typical velocity of different categories of stimuli.
Chiral extrapolation of nucleon magnetic form factors
Energy Technology Data Exchange (ETDEWEB)
P. Wang; D. Leinweber; A. W. Thomas; R. Young
2007-04-01
The extrapolation of nucleon magnetic form factors calculated within lattice QCD is investigated within a framework based upon heavy-baryon chiral effective-field theory. All one-loop graphs are considered at arbitrary momentum transfer and all octet and decuplet baryons are included in the intermediate states. Finite-range regularization is applied to improve the convergence in the quark-mass expansion. At each value of the momentum transfer (Q^2), a separate extrapolation to the physical pion mass is carried out as a function of m_pi alone. Because of the large values of Q^2 involved, the role of the pion form factor in the standard pion-loop integrals is also investigated. The resulting values of the form factors at the physical pion mass are compared with experimental data as a function of Q^2 and demonstrate the utility and accuracy of the chiral extrapolation methods presented herein.
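The generic shape of such an extrapolation is a fit in powers of m_pi^2 evaluated at the physical pion mass. The sketch below is purely illustrative: the paper uses heavy-baryon chiral EFT with finite-range regularization, not a polynomial, and every number here is hypothetical.

```python
import numpy as np

# Hypothetical lattice results at heavy pion masses (GeV^2), generated
# from a known quadratic so the extrapolation can be checked exactly.
mpi2 = np.array([0.30, 0.40, 0.55, 0.70])
obs = 3.2 - 1.4 * mpi2 + 0.5 * mpi2 ** 2      # synthetic "lattice" data

coeff = np.polyfit(mpi2, obs, 2)              # quadratic fit in m_pi^2
mpi2_phys = 0.1396 ** 2                       # physical m_pi ~ 139.6 MeV
obs_phys = np.polyval(coeff, mpi2_phys)
print(obs_phys)                               # extrapolated physical value
```

A polynomial in m_pi^2 misses the chiral-logarithm terms that the effective-field-theory extrapolation captures, which is precisely why the paper's framework is needed for quantitative work.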
Wavefield extrapolation in pseudo-depth domain
Ma, Xuxin
2012-01-01
Extrapolating seismic waves in Cartesian coordinates is prone to uneven spatial sampling, because the seismic wavelength tends to grow with depth as velocity increases. We transform the vertical depth axis to a pseudo one using a velocity-weighted mapping, which can effectively mitigate this wavelength variation. We derive acoustic wave equations in this new domain based on the direct transformation of the Laplacian derivatives, which admit solutions that are more accurate and stable than those derived from the kinematic transformation. The anisotropic versions of these equations allow us to isolate the vertical velocity influence and reduce its impact on modeling and imaging. The major benefit of extrapolating wavefields in pseudo-depth space is their near-uniform wavelength, as opposed to the normally dramatic change of wavelength with the conventional approach. Time wavefield extrapolation on a complex velocity model shows some of the features of this approach.
Hydrogen solubility in rare earth based hydrogen storage alloys
Energy Technology Data Exchange (ETDEWEB)
Uchida, Hirohisa [Tokai Univ., Kanagawa (Japan). School of Engineering]; Kuji, Toshiro [Mitsui Mining and Smelting Co. Ltd., Saitama (Japan)]
1999-09-01
This paper reviews significant results of recent studies on the hydrogen storage properties of rare earth based AB₅ (A: rare earth element, B: transition element) alloys. The hydrogen solubility and the hydride formation, as typically observed in pressure-composition isotherms (PCT), are strongly dependent upon alloy composition, structure, morphology and even alloy particle size. Typical experimental results are shown to describe how these factors affect the hydrogen solubility and storage properties.
Array aperture extrapolation using sparse reconstruction
Anitori, L.; Rossum, W.L. van; Huizing, A.G.
2015-01-01
In this paper we present some preliminary results on antenna array extrapolation for Direction Of Arrival (DOA) estimation using Sparse Reconstruction (SR). The objective of this study is to establish whether it is possible to achieve with an array of a given physical length the performance (in terms
Efficient and stable extrapolation of prestack wavefields
Wu, Zedong
2013-09-22
The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers and the image point, or in other words, prestack wavefields. Extrapolating such wavefields in time, nevertheless, is a big challenge because the radicand can be negative, reducing to a complex phase velocity, which makes the rank of the mixed-domain matrix very high. Using the vertical offset between the sources and receivers, we introduce a method for deriving the DSR formulation, which gives us the opportunity to derive approximations for the mixed-domain operator. The method extrapolates prestack wavefields by combining all data into one wave extrapolation procedure, allowing both upgoing and downgoing wavefields since the extrapolation is done in time, and avoiding the v(z) assumption along the offset axis of the media. Thus, the imaging condition is imposed by taking the zero-time and zero-offset slice from the multi-dimensional prestack wavefield. Unlike reverse time migration (RTM), no crosscorrelation is needed, and we also have access to the subsurface offset information, which is important for migration velocity analysis. Numerical examples show the capability of this approach in dealing with complex velocity models and that it can provide a better quality image than RTM, more efficiently.
The extrapolated successive overrelaxation (ESOR) method for consistently ordered matrices
Directory of Open Access Journals (Sweden)
N. M. Missirlis
1984-01-01
This paper develops the theory of the Extrapolated Successive Overrelaxation (ESOR) method, introduced by Sisler in [1], [2], [3], for the numerical solution of large sparse linear systems of the form Au=b, where A is a consistently ordered 2-cyclic matrix with non-vanishing diagonal elements and the Jacobi iteration matrix B possesses only real eigenvalues. The region of convergence for the ESOR method is described and the optimum values of the involved parameters are also determined. It is shown that if the minimum of the moduli of the eigenvalues of B, μ_min, does not vanish, then ESOR attains a faster rate of convergence than SOR when 1 − μ_min² < (1 − μ̄²)^(1/2), where μ̄ denotes the spectral radius of B.
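The two-parameter sweep can be sketched in a few lines of numpy. The splitting form below follows common presentations of extrapolated SOR (with τ = ω it reduces to plain SOR); the model problem and the parameter values are illustrative, not the optimal ones the paper derives:

```python
import numpy as np

def esor(A, b, omega, tau, iters=800):
    """Two-parameter ESOR iteration in matrix-splitting form (sketch)."""
    n = len(b)
    Dinv = np.diag(1.0 / np.diag(A))
    B = np.eye(n) - Dinv @ A                 # Jacobi iteration matrix
    L = np.tril(B, -1)                       # strictly lower part of B
    U = np.triu(B, 1)                        # strictly upper part of B
    I = np.eye(n)
    M = I - omega * L
    N = (1 - tau) * I + (tau - omega) * L + tau * U
    c = tau * (Dinv @ b)                     # fixed point satisfies A x = b
    x = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(M, N @ x + c)
    return x

# 1D Poisson model problem: tridiagonal, consistently ordered, real Jacobi spectrum
A = 2.0 * np.eye(10) - np.eye(10, k=1) - np.eye(10, k=-1)
b = np.ones(10)
x = esor(A, b, omega=1.2, tau=1.1)
```

Since M − N = τ D⁻¹A, any fixed point of the sweep solves the original system, and for this model problem the spectral radius of the iteration stays below one for mild parameter choices.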
Universality of Mixed Action Extrapolation Formulae
Chen, Jiunn-Wei; Walker-Loud, Andre
2009-01-01
Mixed action theories with chirally symmetric valence fermions exhibit very desirable features both at the level of the lattice calculations as well as in the construction and implementation of the low energy mixed action effective field theory. In this work we show that when the mixed action effective field theory is projected onto the valence sector, both the Lagrangian and the extrapolation formulae become universal in form through next to leading order, for all variants of discretization methods used for the sea fermions. This implies that for all sea quark methods which are in the same universality class as QCD, the numerical values of the physical coefficients in the various mixed action chiral Lagrangians will be the same up to perturbative lattice spacing dependent corrections. This allows us to construct a prescription to determine the mixed action extrapolation formulae for a large class of hadronic correlation functions computed in partially quenched chiral perturbation theory at the one-loop level...
Extrapolation Method for System Reliability Assessment
DEFF Research Database (Denmark)
Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro
2012-01-01
The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals. The scheme is extended so that it can be applied to cases where the asymptotic property may not be valid and/or the random variables are not normally distributed. The performance of the scheme is investigated by four principal series and parallel systems and some practical examples. The results indicate...
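The extrapolation idea can be illustrated on a one-dimensional toy limit state. The paper derives its functional form from asymptotic multinormal solutions; here a quadratic fit in log-probability stands in for it, and all numbers are invented for illustration:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
beta = 4.0                                  # toy limit state g(U) = beta - U
lams = np.array([0.4, 0.5, 0.6, 0.7, 0.8])  # scaled, easier-to-sample problems
N = 200_000
u = rng.standard_normal(N)

# crude MC estimates of p(lam) = P(U > lam * beta): cheap because failures are common
p_mc = np.array([(u > lam * beta).mean() for lam in lams])

# fit log p(lam) with a quadratic and extrapolate to the target problem lam = 1
coef = np.polyfit(lams, np.log(p_mc), 2)
p_extrap = float(np.exp(np.polyval(coef, 1.0)))
p_exact = 0.5 * erfc(beta / sqrt(2.0))      # exact Gaussian tail, for comparison
```

Direct MC at the target level would need on the order of 1/p ≈ 30,000 samples per failure; the scaled sequence reaches a usable estimate from the same modest sample.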
Seismic wave extrapolation using lowrank symbol approximation
Fomel, Sergey
2012-04-30
We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction combines Fourier transforms in space with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
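A minimal numpy sketch shows why the separated representation pays off: once the space-wavenumber phase operator is factored into a few rank-one terms, one extrapolation step costs only a handful of FFTs. The truncated SVD below stands in for the paper's selection of representative locations and wavenumbers, and the 1D velocity model is invented:

```python
import numpy as np

nx, dx, dt = 128, 10.0, 1e-3
v = 1500.0 + 10.0 * np.arange(nx)                 # invented variable velocity v(x)
k = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
W = np.exp(1j * np.outer(v, np.abs(k)) * dt)      # space-wavenumber phase operator

# separated (low-rank) form; the smooth symbol has tiny numerical rank
U, s, Vh = np.linalg.svd(W)
r = int(np.sum(s > 1e-8 * s[0]))

p = np.exp(-0.5 * ((np.arange(nx) - 64.0) / 4.0) ** 2)   # initial wavefield
ph = np.fft.fft(p)
# exact mixed-domain application: one inverse transform per spatial location
exact = np.array([np.fft.ifft(W[i] * ph)[i] for i in range(nx)])
# low-rank application: only r inverse FFTs, each weighted in space
approx = np.zeros(nx, dtype=complex)
for j in range(r):
    approx += U[:, j] * s[j] * np.fft.ifft(Vh[j] * ph)
```

For this smoothly varying symbol the numerical rank is far below nx, so the r-FFT application reproduces the exact one-step extrapolation almost to machine precision.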
Extrapolating spatial layout in scene representations.
Castelhano, Monica S; Pollatsek, Alexander
2010-12-01
Can the visual system extrapolate spatial layout of a scene to new viewpoints after a single view? In the present study, we examined this question by investigating the priming of spatial layout across depth rotations of the same scene (Sanocki & Epstein, 1997). Participants had to indicate which of two dots superimposed on objects in the target scene appeared closer to them in space. There was as much priming from a prime with a viewpoint that was 10° different from the test image as from a prime that was identical to the target; however, there was no reliable priming from larger differences in viewpoint. These results suggest that a scene's spatial layout can be extrapolated, but only to a limited extent.
Effective orthorhombic anisotropic models for wavefield extrapolation
Ibanez-Jacome, W.
2014-07-18
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborate formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth-order polynomial equation with the fastest solution corresponding to outgoing P-waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.
An Earth-Based Model of Microgravity Pulmonary Physiology
Hirschl, Ronald B.; Bull, Joseph L.; Grothberg, James B.
2004-01-01
There are currently only two practical methods of achieving micro G for experimentation: parabolic flight in an aircraft or space flight, both of which have limitations. As a result, there are many important aspects of pulmonary physiology that have not been investigated in micro G. We propose to develop an earth-based animal model of micro G by using liquid ventilation, which will allow us to fill the lungs with perfluorocarbon, and submersing the animal in water such that the density of the lungs is the same as the surrounding environment. By so doing, we will eliminate the effects of gravity on respiration. We will first validate the model by comparing measures of pulmonary physiology, including cardiac output, central venous pressures, lung volumes, and pulmonary mechanics, to previous space flight and parabolic flight measurements. After validating the model, we will investigate the impact of micro G on aspects of lung physiology that have not been previously measured. These will include pulmonary blood flow distribution, ventilation distribution, pulmonary capillary wedge pressure, ventilation-perfusion matching, and pleural pressures and flows. We expect that this earth-based model of micro G will enhance our knowledge and understanding of lung physiology in space which will increase in importance as space flights increase in time and distance.
The influence of an extrapolation chamber over the low energy X-ray beam radiation field
Energy Technology Data Exchange (ETDEWEB)
Tanuri de F, M. T.; Da Silva, T. A., E-mail: mttf@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Pampulha, Belo Horizonte, Minas Gerais (Brazil)]
2016-10-15
Extrapolation chambers are detectors whose sensitive volume can be modified by changing the distance between the electrodes; they have been widely used in primary measurement systems for beta particles. In this work, a Monte Carlo simulation of a PTW 23392 extrapolation chamber was performed by means of the MCNPX code. Although the sensitive volume of an extrapolation chamber can be reduced to a very small size, its packaging is large enough to modify the radiation field and change the measured absorbed dose values. Experiments were performed to calculate correction factors for this purpose. The validation of the Monte Carlo model was done by comparing the spectra obtained with a CdTe detector according to the ISO 4037 criteria. Differences smaller than 5% for half-value layers, 10% for spectral resolution and 1% for mean energy were found. It was verified that the correction factors are dependent on the X-ray beam quality. (Author)
Effect of Annealing on Rare Earth Based Hydrogen Storage Alloys
Institute of Scientific and Technical Information of China (English)
Li Jinhua
2004-01-01
Rare earth-based hydrogen storage alloys used as negative electrode materials for nickel-metal hydride (Ni-MH) batteries are commercially established. The effect of annealing treatments with different annealing temperatures and times on the MLNi3.68Co0.78Mn0.35Al0.27 and MMNi3.55Co0.75Mn0.40Al0.30 alloys was investigated. The crystal microstructure, pressure-composition isotherms (P-C-T) and electrochemical properties of the alloys were examined by X-ray diffraction (XRD), an automatic PCI monitoring system and electrical performance testing instruments. The optimum annealing treatment conditions for the two kinds of alloys were determined.
On extrapolation blowups in the Lp scale
Directory of Open Access Journals (Sweden)
Fiorenza Alberto
2006-01-01
Yano's extrapolation theorem, dating back to 1951, establishes boundedness properties of a subadditive operator acting continuously in Lp for p close to 1 and/or for large p, taking L(log L)^α into L¹ and/or Lp into L^∞, with norms blowing up at speed (p−1)^{−α} and/or p^α, α > 0. Here we give answers in terms of Zygmund, Lorentz-Zygmund and small Lebesgue spaces to what happens for other blowup speeds as p → 1 and/or p → ∞. The study has been motivated by current investigations of convolution maximal functions in stochastic analysis. We also touch the problem of comparison of results in various scales of spaces.
Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration
Alkhalifah, Tariq Ali
2014-10-08
Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.
Extrapolating Solar Dynamo Models Throughout the Heliosphere
Cox, B. T.; Miesch, M. S.; Augustson, K.; Featherstone, N. A.
2014-12-01
There are multiple theories that aim to explain the behavior of the solar dynamo, and their associated models have been fiercely contested. The two prevailing theories investigated in this project are the Convective Dynamo model that arises from directly solving the magnetohydrodynamic equations, and the Babcock-Leighton model that relies on sunspot dissipation and reconnection. Recently, the supercomputer simulations CASH and BASH have modeled the behavior of the Convective and Babcock-Leighton dynamos, respectively, in the convective zone of the sun. These simulations describe the behavior of the two models within the sun, while much less is known about their effects further away from the solar surface. The goal of this work is to investigate any fundamental differences between the Convective and Babcock-Leighton models of the solar dynamo outside of the sun and extending into the solar system, via potential field source surface extrapolations implemented in python code that operates on data from CASH and BASH. Real solar data are also used to visualize supergranular flow in the BASH model and to learn more about the behavior of the Babcock-Leighton dynamo. These extrapolations indicate that the Babcock-Leighton model, as represented by BASH, maintains complex magnetic fields much further into the heliosphere before reverting to a basic dipole field, and they provide 3D visualisations of the models far from the sun.
Schroedinger's radial equation - Solution by extrapolation
Goorvitch, D.; Galant, D. C.
1992-01-01
A high-accuracy numerical method for the solution of a 1D Schroedinger equation that is suitable for a diatomic molecule, obtained by combining a finite-difference method with iterative extrapolation to the limit, is presently shown to have several advantages over more conventional methods. Initial guesses for the term values are obviated, and implementation of the algorithm is straightforward. The method is both less sensitive to round-off error, and faster than conventional methods for equivalent accuracy. These advantages are illustrated through the solution of Schroedinger's equation for a Morse potential function suited for HCl and a numerically derived Rydberg-Klein-Rees potential function for the X 1Sigma(+) state of CO.
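The finite-difference-plus-extrapolation idea can be sketched directly. Below, a harmonic oscillator stands in for the Morse and RKR potentials of the paper because its ground level is known exactly (E = 1 in scaled units), and the grid sizes are chosen so the spacing halves exactly:

```python
import numpy as np

def ground_state(n):
    # second-order finite differences for -u'' + x^2 u = E u on [-8, 8];
    # Dirichlet ends are harmless since the ground state decays like exp(-x^2/2)
    x, h = np.linspace(-8.0, 8.0, n, retstep=True)
    T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.eigvalsh(T + np.diag(x**2))[0]

E_h = ground_state(201)                  # spacing h = 0.08
E_h2 = ground_state(401)                 # spacing h/2 = 0.04
E_extrap = (4.0 * E_h2 - E_h) / 3.0      # extrapolation to the limit: kills the O(h^2) term
```

Because the discretization error has a clean h² leading term, the single Richardson combination gains several digits over the finer grid alone.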
Universal properties of infrared oscillator basis extrapolations
More, S N; Furnstahl, R J; Hagen, G; Papenbrock, T
2013-01-01
Recent work has shown that a finite harmonic oscillator basis in nuclear many-body calculations effectively imposes a hard-wall boundary condition in coordinate space, motivating infrared extrapolation formulas for the energy and other observables. Here we further refine these formulas by studying two-body models and the deuteron. We accurately determine the box size as a function of the model space parameters, and compute scattering phase shifts in the harmonic oscillator basis. We show that the energy shift can be well approximated in terms of the asymptotic normalization coefficient and the bound-state momentum, discuss higher-order corrections for weakly bound systems, and illustrate this universal property using unitarily equivalent calculations of the deuteron.
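The infrared correction law described above, E(L) ≈ E∞ + A e^(−2kL), can be fitted and extrapolated in a few lines; the deuteron-like numbers below are illustrative stand-ins, and k is held fixed (in practice it may be fitted or fixed from the bound-state momentum):

```python
import numpy as np

# synthetic model-space energies obeying E(L) = E_inf + A * exp(-2*k*L);
# all three parameter values are invented placeholders
E_inf, A, kappa = -2.2246, 15.0, 0.2316
L = np.array([6.0, 8.0, 10.0, 12.0, 14.0])      # effective hard-wall box sizes
E = E_inf + A * np.exp(-2.0 * kappa * L)

# with kappa fixed, E_inf and A follow from linear least squares
X = np.column_stack([np.ones_like(L), np.exp(-2.0 * kappa * L)])
(E_inf_fit, A_fit), *_ = np.linalg.lstsq(X, E, rcond=None)
```

The extrapolated energy is just the fitted intercept E_inf, the L → ∞ limit of the correction law.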
Extrapolation methods for dynamic partial differential equations
Turkel, E.
1978-01-01
Several extrapolation procedures are presented for increasing the order of accuracy in time for evolutionary partial differential equations. These formulas are based on finite difference schemes in both the spatial and temporal directions. On practical grounds the methods are restricted to schemes that are fourth order in time and either second, fourth or sixth order in space. For hyperbolic problems the second order in space methods are not useful while the fourth order methods offer no advantage over the Kreiss-Oliger method unless very fine meshes are used. Advantages are first achieved using sixth order methods in space coupled with fourth order accuracy in time. Computational results are presented confirming the analytic discussions.
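The passage from second- to fourth-order accuracy in time can be demonstrated on a leapfrog scheme for periodic advection; the spatial derivative is computed spectrally so that the surviving error is purely temporal (a toy stand-in for the finite-difference space discretizations discussed above):

```python
import numpy as np

def leapfrog(nt, T=1.0, nx=64):
    # u_t = -u_x on a 2*pi-periodic domain; leapfrog in time, spectral in space
    dt = T / nt
    x = 2.0 * np.pi * np.arange(nx) / nx
    k = 2.0 * np.pi * np.fft.fftfreq(nx, d=2.0 * np.pi / nx)
    def dudx(u):
        return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    um, u = np.sin(x), np.sin(x - dt)          # exact data on the first two levels
    for _ in range(nt - 1):
        um, u = u, um - 2.0 * dt * dudx(u)
    return x, u

x, u_c = leapfrog(100)                         # time step dt
_, u_f = leapfrog(200)                         # time step dt/2
u_ext = (4.0 * u_f - u_c) / 3.0                # cancels the O(dt^2) phase error
err_c = np.max(np.abs(u_c - np.sin(x - 1.0)))
err_f = np.max(np.abs(u_f - np.sin(x - 1.0)))
err_e = np.max(np.abs(u_ext - np.sin(x - 1.0)))
```

Halving the step cuts the leapfrog error by about four, while the extrapolated solution is better by orders of magnitude, consistent with the even-power error expansion of the symmetric scheme.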
The optimized expansion based low-rank method for wavefield extrapolation
Wu, Zedong
2014-03-01
Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion and artifact free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of the shear wave artifacts, and the algorithm does not require that η > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.
Frequency extrapolation by nonconvex compressive sensing
Energy Technology Data Exchange (ETDEWEB)
Chartrand, Rick [Los Alamos National Laboratory]; Sidky, Emil Y [UNIV OF CHICAGO]; Pan, Xiaochaun [UNIV OF CHICAGO]
2010-12-03
Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue with the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples is available. We apply the compressive sensing principle to a somewhat related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.
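A one-dimensional sketch of the approach: given only low-frequency Fourier samples of a sparse object, an iteratively reweighted least-squares loop for the nonconvex l_p objective (p < 1) extrapolates the missing frequencies. The signal, sampling band and iteration schedule below are all invented for illustration:

```python
import numpy as np

n, m = 64, 12                                   # grid size, measured band |f| <= m
t = np.arange(n)
x_true = np.zeros(n)
x_true[[8, 30, 51]] = [1.0, -0.7, 1.3]          # invented sparse object
freqs = np.arange(-m, m + 1)
A = np.exp(-2j * np.pi * np.outer(freqs, t) / n) / n   # low-frequency DFT rows
b = A @ x_true                                  # the available FT samples

# iteratively reweighted least squares for the nonconvex l_p objective (p < 1),
# with every iterate matching the measured Fourier data exactly
p, eps = 0.5, 1.0
x = A.conj().T @ np.linalg.solve(A @ A.conj().T, b)    # minimum-l2 start
for _ in range(100):
    w = (np.abs(x) ** 2 + eps) ** (1.0 - p / 2.0)      # weights ~ |x|^(2-p)
    x = w * (A.conj().T @ np.linalg.solve((A * w) @ A.conj().T, b))
    eps = max(eps / 2.0, 1e-8)
x = np.real(x)                                  # the object is real by symmetry
```

Each update is the closed-form minimizer of the weighted l2 surrogate subject to Ax = b, so the data are honored throughout while the decreasing eps drives the solution toward a sparse, de-ringed reconstruction.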
Uncertainties of Euclidean Time Extrapolation in Lattice Effective Field Theory
Lähde, Timo A; Krebs, Hermann; Lee, Dean; Meißner, Ulf-G; Rupak, Gautam
2014-01-01
Extrapolations in Euclidean time form a central part of Nuclear Lattice Effective Field Theory (NLEFT) calculations using the Projection Monte Carlo method, as the sign problem in many cases prevents simulations at large Euclidean time. We review the next-to-next-to-leading order NLEFT results for the alpha nuclei up to $^{28}$Si, with emphasis on the Euclidean time extrapolations, their expected accuracy and potential pitfalls. We also discuss possible avenues for improving the reliability of Euclidean time extrapolations in NLEFT.
Analysis of extrapolation cascadic multigrid method (EXCMG)
Institute of Scientific and Technical Information of China (English)
2008-01-01
Based on an asymptotic expansion of the finite element solution, a new extrapolation formula and an extrapolation cascadic multigrid method (EXCMG) are proposed, in which the new extrapolation and quadratic interpolation are used to provide a better initial value on the refined grid. In the case of triple grids, the error of the new initial value is analyzed in detail. A larger scale computation is completed on a PC.
3D Hail Size Distribution Interpolation/Extrapolation Algorithm
Lane, John
2013-01-01
Radar data can usually detect hail; however, it is difficult for present day radar to accurately discriminate between hail and rain. Local ground-based hail sensors are much better at detecting hail against a rain background, and when incorporated with radar data, provide a much better local picture of a severe rain or hail event. The previous disdrometer interpolation/extrapolation algorithm described a method to interpolate horizontally between multiple ground sensors (a minimum of three) and extrapolate vertically. This work is a modification to that approach that generates a purely extrapolated 3D spatial distribution when using a single sensor.
Multi-State Extrapolation of UV/Vis Absorption Spectra with QM/QM Hybrid Methods
Ren, Sijin; Caricato, Marco
2017-06-01
In this work, we present a simple approach to obtain absorption spectra from hybrid QM/QM calculations. The goal is to obtain reliable spectra for compounds that are too large to be treated entirely at a high level of theory. The approach is based on the extrapolation of the entire absorption spectrum obtained by individual subcalculations. Our program locates the main spectral features in each subcalculation, e.g. band peaks and shoulders, and fits them to Gaussian functions. Each Gaussian is then extrapolated with a formula similar to that of ONIOM (Our own N-layered Integrated molecular Orbital molecular Mechanics). However, information about individual excitations is not necessary so that difficult state-matching across subcalculations is avoided. This multi-state extrapolation thus requires relatively low implementation effort while affording maximum flexibility in the choice of methods to be combined in the hybrid approach. The test calculations show the efficacy and robustness of this methodology in reproducing the spectrum computed for the entire molecule at a high level of theory.
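One reading of the band-based extrapolation can be sketched directly. All band parameters below are invented placeholders, and applying the ONIOM-like combination to the fitted Gaussian parameters themselves is an illustrative choice, not necessarily the paper's exact formula:

```python
import numpy as np

# Gaussian band parameters (center/eV, height, width/eV) located and fitted in
# three subcalculations -- all numbers are invented placeholders
high_model = np.array([3.10, 0.80, 0.15])   # high level, model (small) system
low_model  = np.array([3.25, 0.70, 0.15])   # low level, model system
low_real   = np.array([3.05, 0.95, 0.18])   # low level, real (full) system

# ONIOM-style combination: high(model) + low(real) - low(model),
# applied band by band, with no state matching required
band_pars = high_model + low_real - low_model

def gauss(e, c, h, w):
    return h * np.exp(-0.5 * ((e - c) / w) ** 2)

e = np.linspace(2.0, 4.0, 801)
spectrum = gauss(e, *band_pars)             # extrapolated absorption band
peak = e[np.argmax(spectrum)]
```

Because only whole bands are combined, the scheme never needs to decide which individual excited state in one subcalculation corresponds to which in another.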
The chemistry side of AOP: implications for toxicity extrapolation
An adverse outcome pathway (AOP) is a structured representation of the biological events that lead to adverse impacts following a molecular initiating event caused by chemical interaction with a macromolecule. AOPs have been proposed to facilitate toxicity extrapolation across s...
Multidimensional signal restoration and band-limited extrapolation, 2
Sanz, J. L. C.; Huang, T. S.
1982-12-01
This technical report consists of three parts. The central problem is the extrapolation of band-limited signals. In part 1, several existing algorithms for band-limited extrapolation are compared: two-step procedures appeared to give better reconstructions and require less computing time than iterative algorithms. In part 2, five basic procedures for iterative restoration are unified using a Hilbert space approach. In particular, all known iterative algorithms for extrapolation of band-limited signals are shown to be special cases of Bialy's iteration. The authors also obtained faster algorithms than that of Papoulis-Gerchberg. In part 3, the extrapolation problem is presented in a more general setting: continuation of certain analytic functions. Two-step procedures for finding the continuation of these functions are presented. Some new procedures for band-limited continuation are also discussed, as well as the case in which the signal is contaminated with noise.
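The Papoulis-Gerchberg iteration mentioned above is easy to state: alternately enforce the known band limit in the frequency domain and re-insert the known samples in the time domain. A small numpy demonstration, with a synthetic band-limited signal and gap/band sizes chosen so the iteration converges quickly:

```python
import numpy as np

n, bw = 256, 3                                   # signal length, band limit |f| <= 3
rng = np.random.default_rng(1)

# build a real band-limited test signal from a Hermitian random spectrum
spec = np.zeros(n, dtype=complex)
spec[0] = rng.standard_normal()
spec[1:bw + 1] = rng.standard_normal(bw) + 1j * rng.standard_normal(bw)
spec[-bw:] = np.conj(spec[1:bw + 1][::-1])
x = np.real(np.fft.ifft(spec))

known = np.ones(n, dtype=bool); known[:32] = False   # first 32 samples missing
band = np.zeros(n, dtype=bool); band[:bw + 1] = True; band[-bw:] = True

y = np.where(known, x, 0.0)
for _ in range(3000):
    Y = np.fft.fft(y)
    Y[~band] = 0.0                               # enforce the band limit
    y = np.real(np.fft.ifft(Y))
    y[known] = x[known]                          # re-insert the known samples

err = np.max(np.abs(y - x)) / np.max(np.abs(x))
```

The convergence rate is governed by how well band-limited signals can concentrate in the missing gap, which is why the slow convergence noted in the report motivates the faster two-step procedures.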
Extrapolating demography with climate, proximity and phylogeny: approach with caution.
Coutts, Shaun R; Salguero-Gómez, Roberto; Csergő, Anna M; Buckley, Yvonne M
2016-12-01
Plant population responses are key to understanding the effects of threats such as climate change and invasions. However, we lack demographic data for most species, and the data we have are often geographically aggregated. We determined to what extent existing data can be extrapolated to predict population performance across larger sets of species and spatial areas. We used 550 matrix models, across 210 species, sourced from the COMPADRE Plant Matrix Database, to model how climate, geographic proximity and phylogeny predicted population performance. Models including only geographic proximity and phylogeny explained 5-40% of the variation in four key metrics of population performance. However, there was poor extrapolation between species and extrapolation was limited to geographic scales smaller than those at which landscape scale threats typically occur. Thus, demographic information should only be extrapolated with caution. Capturing demography at scales relevant to landscape level threats will require more geographically extensive sampling. © 2016 John Wiley & Sons Ltd/CNRS.
Biosimilar monoclonal antibodies : The scientific basis for extrapolation
Schellekens, Huub; Lietzan, Erika; Faccin, Freddy; Venema, Jaap
2015-01-01
Introduction: Biosimilars are biologic products that receive authorization based on an abbreviated regulatory application containing comparative quality and nonclinical and clinical data that demonstrate similarity to a licensed biologic product. Extrapolation of safety and efficacy has emerged as a
Wildlife toxicity extrapolations: Allometry versus physiologically-based toxicokinetics
Energy Technology Data Exchange (ETDEWEB)
Fairbrother, A. [Ecological Planning and Toxicology Inc., Corvallis, OR (United States)]; Berg, M. van den [Univ. of Utrecht (Netherlands). Research Inst. of Toxicology]
1995-12-31
Ecotoxicological assessments must rely on the extrapolation of toxicity data from a few indicator species to many species of concern. Data are available from laboratory studies (e.g., quail, mallards, rainbow trout, fathead minnow) and some planned or serendipitous field studies of a broader, but by no means comprehensive, suite of species. Yet all ecological risk assessments begin with an estimate of risk based on information gleaned from the literature. The authors are then confronted with the necessity of extrapolating toxicity information from a limited number of indicator species to all organisms of interest. This is a particularly acute problem when trying to estimate hazards to wildlife in terrestrial systems as there is an extreme paucity of data for most chemicals in all but a handful of species. The question arises of how interspecific extrapolations should be made. Should extrapolations be limited to animals within the same class, order, family or genus? Alternatively, should extrapolations be made along trophic levels or physiologic similarities rather than by taxonomic classification? In other words, is an avian carnivore more like a mammalian carnivore or an avian granivore in its response to a toxic substance? Can general rules be set or does the type of extrapolation depend upon the class of chemical and its mode of uptake and toxicologic effect?
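The simplest allometric answer to the scaling question is a power law in body weight. The sketch below uses the common b = 0.75 metabolic-rate exponent as an assumed default, not a recommendation for any particular chemical class:

```python
def scale_dose(dose_a, bw_a, bw_b, b=0.75):
    """Scale a per-body-weight dose from species a to species b (sketch).

    Assumes the total tolerated dose scales as body_weight**b, so the dose
    per kg scales as body_weight**(b - 1). The exponent b = 0.75 is an
    assumed default; endpoint-specific values differ.
    """
    return dose_a * (bw_b / bw_a) ** (b - 1.0)

# e.g. carry a 10 (dose units per kg) result from a 20 g bird to a 1 kg bird
extrapolated = scale_dose(10.0, 0.02, 1.0)
```

A physiologically based toxicokinetic model would replace this single exponent with chemical- and species-specific absorption, distribution and clearance terms, which is exactly the trade-off the abstract poses.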
Implicit extrapolation methods for multilevel finite element computations
Energy Technology Data Exchange (ETDEWEB)
Jung, M.; Ruede, U. [Technische Universitaet Chemnitz-Zwickau (Germany)
1994-12-31
The finite element package FEMGP has been developed to solve elliptic and parabolic problems arising in the computation of magnetic and thermomechanical fields. FEMGP implements various methods for the construction of hierarchical finite element meshes, a variety of efficient multilevel solvers, including multigrid and preconditioned conjugate gradient iterations, as well as pre- and post-processing software. Within FEMGP, multigrid τ-extrapolation can be employed to improve the finite element solution iteratively to higher order. This algorithm is based on an implicit extrapolation, so that the algorithm differs from a regular multigrid algorithm only by a slightly modified computation of the residuals on the finest mesh. Another advantage of this technique is that, in contrast to explicit extrapolation methods, it does not rely on the existence of global error expansions, and therefore neither requires uniform meshes nor global regularity assumptions. In the paper the authors will analyse the τ-extrapolation algorithm and present experimental results in the context of the FEMGP package. Furthermore, the τ-extrapolation results will be compared to higher order finite element solutions.
Do common systems control eye movements and motion extrapolation?
Makin, Alexis D J; Poliakoff, Ellen
2011-07-01
People are able to judge the current position of occluded moving objects. This operation is known as motion extrapolation. It has previously been suggested that motion extrapolation is independent of the oculomotor system. Here we revisited this question by measuring eye position while participants completed two types of motion extrapolation task. In one task, a moving visual target travelled rightwards, disappeared, then reappeared further along its trajectory. Participants discriminated correct reappearance times from incorrect (too early or too late) with a two-alternative forced-choice button press. In the second task, the target travelled rightwards behind a visible, rectangular occluder, and participants pressed a button at the time when they judged it should reappear. In both tasks, performance was significantly different under fixation as compared to free eye movement conditions. When eye movements were permitted, eye movements during occlusion were related to participants' judgements. Finally, even when participants were required to fixate, small changes in eye position around fixation (<2°) were influenced by occluded target motion. These results all indicate that overlapping systems control eye movements and judgements on motion extrapolation tasks. This has implications for understanding the mechanism underlying motion extrapolation.
Escudero, Alberto; Becerro, Ana I.; Carrillo-Carrión, Carolina; Núñez, Nuria O.; Zyuzin, Mikhail V.; Laguna, Mariano; González-Mancebo, Daniel; Ocaña, Manuel; Parak, Wolfgang J.
2017-06-01
Rare earth based nanostructures constitute a type of functional materials widely used and studied in the recent literature. The purpose of this review is to provide a general and comprehensive overview of the current state of the art, with special focus on the commonly employed synthesis methods and functionalization strategies of rare earth based nanoparticles and on their different bioimaging and biosensing applications. The luminescent (including downconversion, upconversion and permanent luminescence) and magnetic properties of rare earth based nanoparticles, as well as their ability to absorb X-rays, will also be explained and connected with their luminescent, magnetic resonance and X-ray computed tomography bioimaging applications, respectively. This review is not restricted to nanoparticles: recent advances reported for other nanostructures containing rare earths, such as metal-organic frameworks and lanthanide complexes conjugated with biological structures, will also be commented on.
Chiral extrapolation beyond the power-counting regime
Hall, J M M; Leinweber, D B; Liu, K F; Mathur, N; Young, R D; Zhang, J B
2011-01-01
Chiral effective field theory can provide valuable insight into the chiral physics of hadrons when used in conjunction with non-perturbative schemes such as lattice QCD. In this discourse, the attention is focused on extrapolating the mass of the rho meson to the physical pion mass in quenched QCD (QQCD). With the absence of a known experimental value, this serves to demonstrate the ability of the extrapolation scheme to make predictions without prior bias. By using extended effective field theory developed previously, an extrapolation is performed using quenched lattice QCD data that extends outside the chiral power-counting regime (PCR). The method involves an analysis of the renormalization flow curves of the low energy coefficients in a finite-range regularized effective field theory. The analysis identifies an optimal regulator, which is embedded in the lattice QCD data themselves. This optimal regulator is the regulator value at which the renormalization of the low energy coefficients is approximately i...
Submarine Magnetic Field Extrapolation Based on Boundary Element Method
Institute of Scientific and Technical Information of China (English)
GAO Jun-ji; LIU Da-ming; YAO Qiong-hui; ZHOU Guo-hua; YAN Hui
2007-01-01
In order to master the magnetic field distribution of submarines in the air completely and exactly and to study the magnetic stealth performance of submarines, a mathematical model of submarine magnetic field extrapolation is built based on the boundary element method (BEM). An experiment is designed to measure three components of the magnetic field on an envelope surface surrounding a model submarine. Data at different heights above the model submarine are obtained using tri-axial magnetometers. Comparison of the measured and extrapolated data shows that the extrapolation model has good stability and high accuracy. Moreover, the model can reflect the submarine magnetic field distribution in the air exactly, and is valuable in practical engineering.
Rubio de Francia's extrapolation theory: estimates for the distribution function
Carro, María J; Torres, Rodolfo H
2010-01-01
Let $T$ be an arbitrary operator bounded from $L^{p_0}(w)$ into $L^{p_0,\infty}(w)$ for every weight $w$ in the Muckenhoupt class $A_{p_0}$. It is proved in this article that the distribution function of $Tf$ with respect to any weight $u$ can be essentially majorized by the distribution function of $Mf$ with respect to $u$ (plus an integral term easy to control). As a consequence, well-known extrapolation results, including results in a multilinear setting, can be obtained with very simple proofs. New applications in extrapolation for two-weight problems and estimates on rearrangement invariant spaces are established too.
Splitting extrapolation based on domain decomposition for finite element approximations
Institute of Scientific and Technical Information of China (English)
吕涛; 冯勇
1997-01-01
Splitting extrapolation based on domain decomposition for finite element approximations is a new technique for solving large scale scientific and engineering problems in parallel. By means of domain decomposition, a large scale multidimensional problem is turned into many discrete problems involving several grid parameters. The multivariate asymptotic expansions of finite element errors on independent grid parameters are proved for linear and nonlinear second order elliptic equations as well as eigenvalue problems. Therefore, after solving smaller problems of similar size in parallel, a global fine grid approximation with higher accuracy is computed by the splitting extrapolation method.
Functional differential equations with unbounded delay in extrapolation spaces
Directory of Open Access Journals (Sweden)
Mostafa Adimy
2014-08-01
We study the existence, regularity and stability of solutions for nonlinear partial neutral functional differential equations with unbounded delay and a Hille-Yosida operator on a Banach space X. We consider two nonlinear perturbations: the first one is a function taking its values in X and the second one is a function belonging to a space larger than X, an extrapolated space. We use the extrapolation techniques to prove the existence and regularity of solutions and we establish a linearization principle for the stability of the equilibria of our equation.
Extrapolation of scattering data to the negative-energy region
Blokhintsev, L D; Mukhamedzhanov, A M; Savin, D A
2016-01-01
Explicit analytic expressions are derived for the effective-range function for the case when the interaction is represented by a sum of the short-range square-well and long-range Coulomb potentials. These expressions are then transformed into forms convenient for extrapolating to the negative-energy region and obtaining the information about bound-state properties. Alternative ways of extrapolation are discussed. Analytic properties of separate terms entering these expressions for the effective-range function and the partial-wave scattering amplitude are investigated.
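For context, the textbook s-wave effective-range function that such analyses generalize (the paper's expressions additionally account for the long-range Coulomb tail; the form below is the standard short-range-only expansion, not the paper's derived result) is

```latex
K_0(k^2) \equiv k\cot\delta_0(k) = -\frac{1}{a} + \frac{1}{2}\, r_0 k^2 + O(k^4),
```

where $a$ is the scattering length and $r_0$ the effective range. Extrapolation to the negative-energy region means continuing $k^2 \to -\kappa^2$; a bound state of binding momentum $\kappa$ sits where the continued function satisfies $K_0(-\kappa^2) = -\kappa$, i.e. $-1/a - r_0\kappa^2/2 \approx -\kappa$ at this order.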
Weights, Extrapolation and the Theory of Rubio de Francia
Cruz-Uribe, David; Perez, Carlos
2011-01-01
This book provides a systematic development of the Rubio de Francia theory of extrapolation, its many generalizations and its applications to one and two-weight norm inequalities. The book is based upon a new and elementary proof of the classical extrapolation theorem that fully develops the power of the Rubio de Francia iteration algorithm. This technique allows us to give a unified presentation of the theory and to give important generalizations to Banach function spaces and to two-weight inequalities. We provide many applications to the classical operators of harmonic analysis to illustrate
Panel discussion on Chiral extrapolation of physical observables
Bernard, C; Leinweber, D B; Lepage, P; Pallante, E; Sharpe, S R; Wittig, H; Bernard, Claude; Hashimoto, Shoji; Leinweber, Derek B.; Lepage, Peter; Pallante, Elisabetta; Sharpe, Stephen R.; Wittig, Hartmut
2002-01-01
This is an approximate reconstruction of the panel discussion on chiral extrapolation of physical observables. The session consisted of brief presentations from panelists, followed by responses from the panel, and concluded with questions and comments from the floor with answers from panelists. In the following, the panelists have summarized their statements, and the ensuing discussion has been approximately reconstructed from notes.
Biosimilars and the extrapolation of indications for inflammatory conditions
Tesser, John RP; Furst, Daniel E; Jacobs, Ira
2017-01-01
Extrapolation is the approval of a biosimilar for use in an indication held by the originator biologic not directly studied in a comparative clinical trial with the biosimilar. Extrapolation is a scientific rationale that bridges all the data collected (i.e., the totality of the evidence) from one indication for the biosimilar product to all the indications originally approved for the originator. Regulatory approval and marketing authorization of biosimilars in inflammatory indications are made on a case-by-case and agency-by-agency basis after evaluating the totality of evidence from the entire development program. This totality of the evidence comprises extensive comparative analytical, functional, nonclinical, and clinical pharmacokinetic/pharmacodynamic, efficacy, safety, and immunogenicity studies used by regulators when evaluating whether a product can be considered a biosimilar. Extrapolation reduces or eliminates the need for duplicative clinical studies of the biosimilar but must be justified scientifically with appropriate data. Understanding the concept, application, and regulatory decisions based on the extrapolation of data is important since biosimilars have the potential to significantly impact patient care in inflammatory diseases. PMID:28255229
Panel discussion on chiral extrapolation of physical observables
Bernard, Claude; Hashimoto, Shoji; Leinweber, Derek B.; Lepage, Peter; Pallante, Elisabetta; Sharpe, Stephen R.; Wittig, Hartmut
2003-01-01
This is an approximate reconstruction of the panel discussion on chiral extrapolation of physical observables. The session consisted of brief presentations from panelists, followed by responses from the panel, and concluded with questions and comments from the floor with answers from panelists. In t
Genetic effects of radiation. [Extrapolation of mouse data to man
Energy Technology Data Exchange (ETDEWEB)
Selby, P.B.
1976-01-01
Data are reviewed from studies on the genetic effects of x radiation in mice and the extrapolation of the findings for estimating genetic hazards in man is discussed. Data are included on the frequency of mutation induction following acute or chronic irradiation of male or female mice at various doses and dose rates.
Extrapolations of nuclear binding energies from new linear mass relations
DEFF Research Database (Denmark)
Hove, D.; Jensen, A. S.; Riisager, K.
2013-01-01
We present a method to extrapolate nuclear binding energies from known values for neighboring nuclei. We select four specific mass relations constructed to eliminate smooth variation of the binding energy as a function of nucleon numbers. The fast odd-even variations are avoided by comparing nuclei...
Proposition of Improved Methodology in Creep Life Extrapolation
Energy Technology Data Exchange (ETDEWEB)
Kim, Woo Gon; Park, Jae Young; Jang, Jin Sung [KAERI, Daejeon (Korea, Republic of)
2016-05-15
To design SFRs for a 60-year operation, it is desirable to have experimental creep-rupture data for Gr. 91 steel close to 20 y, or at least rupture lives significantly higher than 10^5 h. This requirement arises from the fact that, for creep design, an extrapolation factor of 3 is considered appropriate. However, obtaining experimental data close to 20 y would be expensive and would also take considerable time. Therefore, reliable creep life extrapolation techniques become necessary for a safe design life of 60 y. In addition, it is appropriate to obtain experimental long-term creep-rupture data in the range 10^5 ∼ 2×10^5 h to improve the reliability of extrapolation. In the present investigation, a new function of hyperbolic sine ('sinh') form for the master curve in time-temperature parameter (TTP) methods was proposed to accurately extrapolate the long-term creep rupture stress of Gr. 91 steel. Constant values used for each parametric equation were optimized on the basis of the creep rupture data. Average stress values predicted for up to 60 y were evaluated and compared with those of the French nuclear design code RCC-MRx. The results showed that the master curve of the 'sinh' function gave wider acceptance, with good flexibility in the low stress ranges beyond the experimental data. It was clarified that the 'sinh' function was reasonable for creep life extrapolation compared with the polynomial forms that have been used conventionally until now.
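As a sketch of how a TTP master curve is used in practice, the snippet below evaluates a hypothetical sinh-form master curve against a Larson-Miller parameter and inverts it for rupture time at a service temperature. All constants (C = 20, a, b, c) are illustrative placeholders, not the paper's fitted Gr. 91 values.

```python
import numpy as np

C = 20.0  # Larson-Miller constant (a typical value for steels; assumed here)

def lmp(T, t_h):
    """Larson-Miller time-temperature parameter: P = T*(C + log10 t)."""
    return T * (C + np.log10(t_h))

# hypothetical sinh-form master curve P(sigma) = a + b*ln(sinh(c*sigma));
# a, b, c are invented for illustration only
a, b, c = 35000.0, -1500.0, 0.01

def P_of_sigma(sigma):
    return a + b * np.log(np.sinh(c * sigma))

def rupture_time_hours(sigma, T):
    """Invert P = T*(C + log10 t) to predict rupture life at stress sigma (MPa)
    and absolute temperature T (K)."""
    return 10 ** (P_of_sigma(sigma) / T - C)
```

With b < 0 the predicted life increases as stress decreases and shortens as temperature rises, which is the qualitative behavior a creep master curve must reproduce.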
Directory of Open Access Journals (Sweden)
Trevor G. Jones
2014-07-01
Information derived from high spatial resolution remotely sensed data is critical for the effective management of forested ecosystems. However, high spatial resolution data-sets are typically costly to acquire and process and usually provide limited geographic coverage. In contrast, moderate spatial resolution remotely sensed data, while not able to provide the spectral or spatial detail required for certain types of products and applications, offer inexpensive, comprehensive landscape-level coverage. This study assessed using an object-based approach to extrapolate detailed tree species heterogeneity beyond the extent of hyperspectral/LiDAR flightlines to the broader area covered by a Landsat scene. Using image segments, regression trees established ecologically decipherable relationships between tree species heterogeneity and the spectral properties of Landsat segments. The spectral properties of Landsat bands 4 (NIR: 0.76–0.90 µm), 5 (SWIR: 1.55–1.75 µm) and 7 (SWIR: 2.08–2.35 µm) were consistently selected as predictor variables, explaining approximately 50% of the variance in richness and diversity. The results have important ramifications for ongoing management initiatives in the study area and are applicable to a wide range of applications.
Image reconstruction: a unifying model for resolution enhancement and data extrapolation. Tutorial
Shieh, Hsin M.; Byrne, Charles L.; Fiddy, Michael A.
2006-02-01
In reconstructing an object function F(r) from finitely many noisy linear-functional values ∫F(r)G_n(r)dr, we face the problem that finite data, noisy or not, are insufficient to specify F(r) uniquely. Estimates based on the finite data may succeed in recovering broad features of F(r), but may fail to resolve important detail. Linear and nonlinear, model-based data extrapolation procedures can be used to improve resolution, but at the cost of sensitivity to noise. To estimate linear-functional values of F(r) that have not been measured from those that have been, we need to employ prior information about the object F(r), such as support information or, more generally, estimates of the overall profile of F(r). One way to do this is through minimum-weighted-norm (MWN) estimation, with the prior information used to determine the weights. The MWN approach extends the Gerchberg-Papoulis band-limited extrapolation method and is closely related to matched-filter linear detection, the approximation of the Wiener filter, and to iterative Shannon-entropy-maximization algorithms. Nonlinear versions of the MWN method extend the noniterative, Burg, maximum-entropy spectral-estimation procedure.
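The Gerchberg-Papoulis scheme that the MWN approach extends is simple enough to sketch directly: alternate between enforcing the band limit in the Fourier domain and re-imposing the measured samples in the signal domain. The toy below is our illustration (signal, masks, and iteration count are invented for the demo); it extrapolates a band-limited signal known only on the middle half of its support.

```python
import numpy as np

def gerchberg_papoulis(known, known_mask, band_mask, n_iter=300):
    """Band-limited extrapolation by alternating projections:
    project onto the band-limited subspace (zero out-of-band FFT bins),
    then onto the data-consistent set (restore measured samples)."""
    x = np.where(known_mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band_mask] = 0.0                  # band-limit constraint
        x = np.fft.ifft(X).real
        x[known_mask] = known[known_mask]    # data-consistency constraint
    return x

N = 256
t = np.arange(N)
# band-limited test signal (frequencies 3 and 5 cycles per record)
true = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)
freqs = np.fft.fftfreq(N, d=1.0 / N)
band_mask = np.abs(freqs) <= 8               # known band limit
known_mask = (t >= 64) & (t < 192)           # only the middle half is measured
est = gerchberg_papoulis(true, known_mask, band_mask)
```

Because both constraint sets are affine, the iteration is a projection-onto-convex-sets method and the error to the true band-limited signal is non-increasing.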
Orton, Glenn; Momary, Thomas; Bolton, Scott; Levin, Steven; Hansen, Candice; Janssen, Michael; Adriani, Alberto; Gladstone, G. Randall; Bagenal, Fran; Ingersoll, Andrew
2017-04-01
The Juno mission has promoted and coordinated a network of Earth-based observations, including both Earth-proximal and ground-based facilities, to extend and enhance observations made by the Juno mission. The spectral region and timeline of all of these observations are summarized in the web site: https://www.missionjuno.swri.edu/planned-observations. Among the earliest of these were observations of Jovian auroral phenomena at X-ray, ultraviolet and infrared wavelengths and measurements of Jovian synchrotron radiation from the Earth simultaneously with the measurement of properties of the upstream solar wind. Other observations of significance to the magnetosphere measured the mass loading from Io by tracking its observed volcanic activity and the opacity of its torus. Observations of Jupiter's neutral atmosphere included observations of reflected sunlight from the near-ultraviolet through the near-infrared and thermal emission from 5 μm through the radio region. The point of these measurements is to relate properties of the deep atmosphere that are the focus of Juno's mission to the state of the "weather layer" at much higher atmospheric levels. These observations cover spectral regions not included in Juno's instrumentation, provide spatial context for Juno's often spatially limited coverage of Jupiter, and describe the evolution of atmospheric features in time that are measured only once by Juno. We will summarize the results of measurements during the approach phase of the mission that characterized the state of the atmosphere, as well as observations made by Juno and the supporting campaign during Juno's perijoves 1 (2016 August 27), 3 (2016 December 11), 4 (2017 February 2) and possibly "early" results from 5 (2017 March 27). Besides a global network of professional astronomers, the Juno mission also benefited from the enlistment of a network of dedicated amateur astronomers who provided a quasi-continuous picture of the evolution of features observed by
Phase unwrapping using an extrapolation-projection algorithm
Marendic, Boris; Yang, Yongyi; Stark, Henry
2006-08-01
We explore an approach to the unwrapping of two-dimensional phase functions using a robust extrapolation-projection algorithm. Phase unwrapping is essential for imaging systems that construct the image from phase information. Unlike some existing methods where unwrapping is performed locally on a pixel-by-pixel basis, this work approaches the unwrapping problem from a global point of view. The unwrapping is done iteratively by a modification of the Gerchberg-Papoulis extrapolation algorithm, and the solution is refined by projecting onto the available global data at each iteration. Robustness of the algorithm is demonstrated through its performance in a noisy environment, and in comparison with a least-squares algorithm well-known in the literature.
Outlier robustness for wind turbine extrapolated extreme loads
DEFF Research Database (Denmark)
Natarajan, Anand; Verelst, David Robert
2012-01-01
Methods for extrapolating extreme loads to a 50 year probability of exceedance, which display robustness to the presence of outliers in the simulated loads data set, are described. Case studies of isolated high extreme out-of-plane loads are discussed to emphasize their underlying physical reasons. … Stochastic identification of numerical artifacts in simulated loads is demonstrated using the method of principal component analysis. The extrapolation methodology is made robust to outliers through a weighted loads approach, whereby the eigenvalues of the correlation matrix obtained using the loads with its … simulation is demonstrated and compared with published results. The effects of varying wind inflow angles and shear exponent are further brought out. Parametric fitting techniques that consider all extreme loads including ‘outliers’ are proposed, and the physical reasons that result in isolated high extreme loads…
Temperature extrapolation of multicomponent grand canonical free energy landscapes
Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.
2017-08-01
We derive a method for extrapolating the grand canonical free energy landscape of a multicomponent fluid system from one temperature to another. Previously, we introduced this statistical mechanical framework for the case where kinetic energy contributions to the classical partition function were neglected for simplicity [N. A. Mahynski et al., J. Chem. Phys. 146, 074101 (2017)]. Here, we generalize the derivation to admit these contributions in order to explicitly illustrate the differences that result. Specifically, we show how factoring out kinetic energy effects a priori, in order to consider only the configurational partition function, leads to simpler mathematical expressions that tend to produce more accurate extrapolations than when these effects are included. We demonstrate this by comparing and contrasting these two approaches for the simple cases of an ideal gas and a non-ideal, square-well fluid.
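The underlying idea of temperature extrapolation via measured fluctuations can be seen in a minimal canonical example (a two-level system, not the grand canonical multicomponent formalism of the paper): the β-derivative of a thermal average is an energy covariance, so moments recorded at β₁ can be Taylor-extrapolated to β₂.

```python
import numpy as np

E = np.array([0.0, 1.0])  # two-level system energies

def exact_u(beta):
    """Exact canonical average energy <U>(beta)."""
    w = np.exp(-beta * E)
    return (w * E).sum() / w.sum()

beta1, beta2 = 1.0, 1.2
# "measured" moments at beta1
w = np.exp(-beta1 * E)
p = w / w.sum()
u1 = (p * E).sum()
var_u = (p * E**2).sum() - u1**2
# first-order extrapolation: d<U>/dbeta = -Var(U)
u_extrap = u1 - (beta2 - beta1) * var_u
```

The extrapolated value lands much closer to the exact average at β₂ than the unextrapolated one, which is the same mechanism, applied to the full free energy landscape, that the paper exploits.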
A regularization method for extrapolation of solar potential magnetic fields
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
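A one-dimensional caricature shows why smoothing the Cauchy data matters. The sketch below (our illustration: plain Fourier continuation with a Gaussian low-pass standing in for the paper's Tikhonov regularizer) continues a harmonic field away from its sources, which damps each mode by e^(-|k|Δz), and then back toward them, which re-amplifies the modes and blows up measurement noise unless high wavenumbers are suppressed.

```python
import numpy as np

def continue_field(b, dx, dz, sigma=None):
    """Fourier continuation of a harmonic (potential) field sampled on a line.
    dz > 0 continues away from the sources (stable); dz < 0 continues toward
    them (ill-posed). A Gaussian low-pass of width sigma smooths the data."""
    k = 2 * np.pi * np.fft.fftfreq(b.size, d=dx)
    F = np.fft.fft(b) * np.exp(-np.abs(k) * dz)
    if sigma is not None:
        F *= np.exp(-0.5 * (k * sigma) ** 2)   # regularizing smoothing
    return np.fft.ifft(F).real

N, dx, h = 256, 1.0, 10.0
x = np.arange(N)
b0 = np.exp(-((x - N / 2) / 10.0) ** 2)        # field on the boundary
rng = np.random.default_rng(1)
# continue up by h, then add small measurement noise
b_up = continue_field(b0, dx, h) + 1e-6 * rng.normal(size=N)
b_back_raw = continue_field(b_up, dx, -h)              # unregularized: diverges
b_back_reg = continue_field(b_up, dx, -h, sigma=3.0)   # regularized: stable
```

Even 10⁻⁶-level noise is amplified by e^(|k|h) in the raw downward continuation, while the smoothed version trades a small bias for a bounded error, mirroring the error bound derived in the paper.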
Interpolation and Extrapolation of Precipitation Quantities in Serbia
Directory of Open Access Journals (Sweden)
Rastislav Stojsavljević
2013-01-01
The aim of this paper is to indicate the problems with filling the missing data in a precipitation database using interpolation and extrapolation methods. The investigated periods were from 1981 to 2010 for Northern Serbia (Autonomous Province of Vojvodina and Proper Serbia) and from 1971 to 2000 for Southern Serbia (Autonomous Province of Kosovo and Metohia). The database included time series from 78 meteorological stations that had less than 20% missing data. Interpolation was performed if a station had missing data for five consecutive months or less; if a station had missing data for six consecutive months or more, extrapolation was performed. For every station with missing data, correlation with at least three surrounding stations was computed. The lowest acceptable value of the correlation coefficient for precipitation was set at 0.300
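The neighbour-correlation filling strategy described above can be sketched as follows. This is an illustrative regression-based scheme with synthetic data; the paper's exact rules (≤5-month interpolation vs ≥6-month extrapolation, the three-station minimum) are not reproduced.

```python
import numpy as np

def fill_missing(target, neighbors):
    """Fill gaps (NaNs) in a monthly series via linear regression on the
    best-correlated neighbouring station (illustrative scheme only)."""
    best, best_r = None, -1.0
    for nb in neighbors:
        ok = ~np.isnan(target) & ~np.isnan(nb)
        r = np.corrcoef(target[ok], nb[ok])[0, 1]
        if r > best_r:
            best, best_r = nb, r
    ok = ~np.isnan(target) & ~np.isnan(best)
    slope, intercept = np.polyfit(best[ok], target[ok], 1)
    filled = target.copy()
    gaps = np.isnan(target)
    filled[gaps] = slope * best[gaps] + intercept
    return filled, best_r

rng = np.random.default_rng(0)
base = rng.normal(size=120)                        # shared regional signal
truth = 50.0 + 10.0 * base                         # station to be gap-filled
neighbor = 48.0 + 9.5 * base + rng.normal(scale=1.0, size=120)
target = truth.copy()
target[[5, 17, 40]] = np.nan                       # three missing months
filled, r = fill_missing(target, [neighbor])
```

A correlation threshold like the paper's 0.300 would simply be a guard clause on `best_r` before the regression is trusted.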
An efficient extrapolation to the (T)/CBS limit
Ranasinghe, Duminda S.; Barnes, Ericka C.
2014-05-01
We extrapolate to the perturbative triples (T)/complete basis set (CBS) limit using double ζ basis sets without polarization functions (Wesleyan-1-Triples-2ζ or "Wes1T-2Z") and triple ζ basis sets with a single level of polarization functions (Wesleyan-1-Triples-3ζ or "Wes1T-3Z"). These basis sets were optimized for 102 species representing the first two rows of the Periodic Table. The species include the entire set of neutral atoms, positive and negative atomic ions, as well as several homonuclear diatomic molecules, hydrides, rare gas dimers, polar molecules, such as oxides and fluorides, and a few transition states. The extrapolated Wes1T-(2,3)Z triples energies agree with (T)/CBS benchmarks to within ±0.65 mEh, while the rms deviations of comparable model chemistries W1, CBS-APNO, and CBS-QB3 for the same test set are ±0.23 mEh, ±2.37 mEh, and ±5.80 mEh, respectively. The Wes1T-(2,3)Z triples calculation time for the largest hydrocarbon in the G2/97 test set, C6H5Me+, is reduced by a factor of 25 when compared to W1. The cost-effectiveness of the Wes1T-(2,3)Z extrapolation validates the usefulness of the Wes1T-2Z and Wes1T-3Z basis sets which are now available for a more efficient extrapolation of the (T) component of any composite model chemistry.
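The abstract does not state the extrapolation formula used; a common two-point inverse-cube scheme (our assumption for illustration, not necessarily the authors' exact form) recovers the CBS limit exactly whenever the correlation energy converges as E(n) = E_CBS + A·n⁻³:

```python
def cbs_extrapolate(e2, e3, n2=2, n3=3):
    """Two-point CBS extrapolation assuming E(n) = E_cbs + A / n**3."""
    return (n3**3 * e3 - n2**3 * e2) / (n3**3 - n2**3)

# synthetic check: energies constructed to follow the assumed convergence law
E_CBS, A = -1.0, 0.05
e2 = E_CBS + A / 2**3   # double-zeta value
e3 = E_CBS + A / 3**3   # triple-zeta value
```

With ζ = 2 and 3 the combination reduces to (27·E₃ − 8·E₂)/19, which is why a cheap double/triple-ζ pair can stand in for far larger basis sets.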
Revisiting Chiral Extrapolation by Studying a Lattice Quark Propagator
Institute of Scientific and Technical Information of China (English)
ZHANG Yan-Bin; SUN Wei-Min; L(U) Xiao-Fu; ZONG Hong-Shi
2009-01-01
The quark propagator in the Landau gauge is studied on the lattice, including the quenched and the unquenched results. No obvious unquenched effects are found by comparing the quenched quark propagator with the dynamical one. For the quenched and unquenched configurations, the results with different quark masses have been computed. For the quark mass function, a nonlinear chiral extrapolating behavior is found in the infrared region for both the quenched and dynamical results.
Effective Orthorhombic Anisotropic Models for Wave field Extrapolation
Ibanez Jacome, Wilson
2013-05-01
Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborate formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such a representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, I generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, I develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic one, is represented by a sixth order polynomial equation that includes the fastest solution corresponding to outgoing P-waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, which is done by explicitly solving the isotropic eikonal equation for the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. I extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the
Line-of-sight extrapolation noise in dust polarization
Energy Technology Data Exchange (ETDEWEB)
Poh, Jason; Dodelson, Scott
2017-05-19
The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
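The decorrelation mechanism is easy to reproduce numerically. The sketch below is amplitude-only (polarization angles are ignored), and the emissivity index β = 1.6 and the cloud temperatures are illustrative choices, not Planck-fitted values: two clouds with equal 353 GHz amplitude but different temperatures are extrapolated to 150 GHz using a single assumed temperature, and the mismatch is measured.

```python
import numpy as np

H_OVER_K = 6.62607015e-34 / 1.380649e-23   # h / k_B in s*K

def dust_intensity(nu_ghz, T, beta=1.6):
    """Modified blackbody: I ∝ nu^(3+beta) / (exp(h nu / k T) - 1)."""
    nu = nu_ghz * 1e9
    return nu ** (3 + beta) / np.expm1(H_OVER_K * nu / T)

def scale_factor(nu_from, nu_to, T):
    """Factor that extrapolates a dust amplitude between frequencies."""
    return dust_intensity(nu_to, T) / dust_intensity(nu_from, T)

# two clouds, equal 353 GHz amplitude, temperatures 18 K and 22 K
true_150 = scale_factor(353.0, 150.0, 18.0) + scale_factor(353.0, 150.0, 22.0)
# single-temperature extrapolation assuming T = 20 K along the line of sight
pred_150 = 2.0 * scale_factor(353.0, 150.0, 20.0)
mismatch = abs(pred_150 - true_150) / true_150
```

The mismatch is at the sub-percent level for this temperature spread, consistent with the paper's conclusion that the effect is small today but relevant once r ~ 0.001 sensitivities are targeted.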
Biosimilars in Inflammatory Bowel Disease: Facts and Fears of Extrapolation.
Ben-Horin, Shomron; Vande Casteele, Niels; Schreiber, Stefan; Lakatos, Peter Laszlo
2016-12-01
Biologic drugs such as infliximab and other anti-tumor necrosis factor monoclonal antibodies have transformed the treatment of immune-mediated inflammatory conditions such as Crohn's disease and ulcerative colitis (collectively known as inflammatory bowel disease [IBD]). However, the complex manufacturing processes involved in producing these drugs mean their use in clinical practice is expensive. Recent or impending expiration of patents for several biologics has led to development of biosimilar versions of these drugs, with the aim of providing substantial cost savings and increased accessibility to treatment. Biosimilars undergo an expedited regulatory process. This involves proving structural, functional, and biological biosimilarity to the reference product (RP). It is also expected that clinical equivalency/comparability will be demonstrated in a clinical trial in one (or more) sensitive population. Once these requirements are fulfilled, extrapolation of biosimilar approval to other indications for which the RP is approved is permitted without the need for further clinical trials, as long as this is scientifically justifiable. However, such justification requires that the mechanism(s) of action of the RP in question should be similar across indications and also comparable between the RP and the biosimilar in the clinically tested population(s). Likewise, the pharmacokinetics, immunogenicity, and safety of the RP should be similar across indications and comparable between the RP and biosimilar in the clinically tested population(s). To date, most anti-tumor necrosis factor biosimilars have been tested in trials recruiting patients with rheumatoid arthritis. Concerns have been raised regarding extrapolation of clinical data obtained in rheumatologic populations to IBD indications. In this review, we discuss the issues surrounding indication extrapolation, with a focus on extrapolation to IBD.
Efficient extrapolation methods for electro- and magnetoquasistatic field simulations
Directory of Open Access Journals (Sweden)
M. Clemens
2003-01-01
In magneto- and electroquasistatic time-domain simulations with implicit time-stepping schemes, the iterative solvers applied to the large sparse (non-linear) systems of equations are observed to converge faster if more accurate start solutions are available. Different extrapolation techniques for such new time-step solutions are compared in combination with the preconditioned conjugate gradient algorithm. Simple extrapolation schemes based on Taylor series expansion are used, as well as schemes derived especially for multi-stage implicit Runge-Kutta time-stepping methods. With several initial guesses available, a new subspace projection extrapolation technique is proven to produce an optimal initial value vector. Numerical tests show the resulting improvements in terms of computational efficiency for several test problems.
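The subspace projection idea can be sketched in a few lines: choose the combination of previous time-step solutions whose image under the system matrix best matches the new right-hand side, which reduces to a small least-squares problem. This is an illustrative dense-matrix sketch with synthetic data; the paper's setting is large, sparse, and possibly nonlinear.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)  # symmetric positive definite system matrix

# Solutions from three "previous time steps" (slowly varying right-hand sides).
b0 = rng.standard_normal(n)
b1 = b0 + 0.1 * rng.standard_normal(n)
b2 = b1 + 0.1 * rng.standard_normal(n)
V = np.column_stack([np.linalg.solve(A, b) for b in (b0, b1, b2)])

# New right-hand side: again a small perturbation of the last one.
b_new = b2 + 0.1 * rng.standard_normal(n)

# Subspace projection extrapolation: find the combination of previous
# solutions whose image under A best matches b_new (small least squares),
# and use it as the start vector for the iterative solver.
c, *_ = np.linalg.lstsq(A @ V, b_new, rcond=None)
x0 = V @ c
```

By construction the projected start vector can never have a larger residual than simply reusing the last time-step solution, since that choice lies in the search space.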
Statistically extrapolated nowcasting of summertime precipitation over the Eastern Alps
Chen, Min; Bica, Benedikt; Tüchler, Lukas; Kann, Alexander; Wang, Yong
2017-07-01
This paper presents a new multiple linear regression (MLR) approach to updating the hourly, extrapolated precipitation forecasts generated by the INCA (Integrated Nowcasting through Comprehensive Analysis) system for the Eastern Alps. The generalized form of the model approximates the updated precipitation forecast as a linear response to combinations of predictors selected through a backward elimination algorithm from a pool of predictors. The predictors comprise the raw output of the extrapolated precipitation forecast, the latest radar observations, the convective analysis, and the precipitation analysis. For every MLR model, bias and distribution correction procedures are designed to further correct the systematic regression errors. Applications of the MLR models to a verification dataset containing two months of qualified samples, and to one-month gridded data, are performed and evaluated. Generally, MLR yields slight, but definite, improvements in the intensity accuracy of forecasts during the late evening to morning period, and significantly improves the forecasts for large thresholds. The structure-amplitude-location scores, used to evaluate the performance of the MLR approach, based on its simulation of morphological features, indicate that MLR typically reduces the overestimation of amplitudes and generates similar horizontal structures in precipitation patterns and slightly degraded location forecasts, when compared with the extrapolated nowcasting.
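The backward-elimination step can be illustrated with ordinary least squares: start from the full predictor pool and greedily drop the predictor whose removal hurts the fit least. The RSS-ratio stopping rule and the synthetic data below are stand-ins for whatever significance criterion the INCA post-processing actually uses.

```python
import numpy as np

def backward_elimination(X, y, max_rss_ratio=1.05):
    """Greedy backward elimination for an ordinary least-squares model.

    Repeatedly drops the predictor whose removal increases the residual
    sum of squares (RSS) the least, stopping once any removal would
    inflate the RSS by more than `max_rss_ratio`.
    Returns the indices of the retained predictor columns.
    """
    def rss(cols):
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        r = y - X[:, cols] @ beta
        return float(r @ r)

    kept = list(range(X.shape[1]))
    while len(kept) > 1:
        base = rss(kept)
        best_rss, drop = min((rss([c for c in kept if c != j]), j) for j in kept)
        if best_rss > max_rss_ratio * base:
            break
        kept.remove(drop)
    return kept

# Synthetic pool of four predictors; only columns 0 and 2 carry signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 4))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.1 * rng.standard_normal(200)
kept = backward_elimination(X, y)
```

On this data the two noise predictors are eliminated and the informative ones survive, mirroring how the MLR models select from a pool containing the raw extrapolation, radar, and analysis predictors.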
Effective ellipsoidal models for wavefield extrapolation in tilted orthorhombic media
Waheed, Umair Bin
2016-04-22
Wavefield computations using the ellipsoidally anisotropic extrapolation operator offer significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate wavefield representation or imaging for media of orthorhombic symmetry. Therefore, we propose the use of ‘effective ellipsoidally anisotropic’ models that correctly capture the kinematic behaviour of wavefields for tilted orthorhombic (TOR) media. We compute effective velocities for the ellipsoidally anisotropic medium using kinematic high-frequency representation of the TOR wavefield, obtained by solving the TOR eikonal equation. The effective model allows us to use the cheaper ellipsoidally anisotropic wave extrapolation operators. Although the effective models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including frequency dependency and caustics, if present, with reasonable accuracy. The proposed methodology offers a much better cost versus accuracy trade-off for wavefield computations in TOR media, particularly for media of low to moderate anisotropic strength. Furthermore, the computed wavefield solution is free from shear-wave artefacts as opposed to the conventional finite-difference based TOR wave extrapolation scheme. We demonstrate applicability and usefulness of our formulation through numerical tests on synthetic TOR models. © 2016 Institute of Geophysics of the ASCR, v.v.i.
An efficient wave extrapolation method for anisotropic media with tilt
Waheed, Umair bin
2015-03-23
Wavefield extrapolation operators for elliptically anisotropic media offer significant cost reduction compared with that for the transversely isotropic case, particularly when the axis of symmetry exhibits tilt (from the vertical). However, elliptical anisotropy does not provide accurate wavefield representation or imaging for transversely isotropic media. Therefore, we propose effective elliptically anisotropic models that correctly capture the kinematic behaviour of wavefields for transversely isotropic media. Specifically, we compute source-dependent effective velocities for the elliptic medium using kinematic high-frequency representation of the transversely isotropic wavefield. The effective model allows us to use cheaper elliptic wave extrapolation operators. Despite the fact that the effective models are obtained by matching kinematics using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy trade-off for wavefield computations in transversely isotropic media, particularly for media of low to moderate complexity. In addition, the wavefield solution is free from shear-wave artefacts as opposed to the conventional finite-difference-based transversely isotropic wave extrapolation scheme. We demonstrate these assertions through numerical tests on synthetic tilted transversely isotropic models.
Efficient anisotropic wavefield extrapolation using effective isotropic models
Alkhalifah, Tariq Ali
2013-06-10
Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented in the high-frequency asymptotic approximation by the eikonal equation, to develop effective isotropic models, which are used to efficiently and approximately extrapolate anisotropic wavefields using the isotropic, relatively cheaper, operators. These effective velocity models are source dependent and tend to embed the anisotropy in the inhomogeneity. Though this isotropically generated wavefield theoretically shares the same kinematic behavior as that of the first arrival anisotropic wavefield, it also has the ability to include all the arrivals resulting from a complex wavefield propagation. In fact, the effective models reduce to the original isotropic model in the limit of isotropy, and thus, the difference between the effective model and, for example, the vertical velocity depends on the strength of anisotropy. For reverse time migration (RTM), effective models are developed for the source and receiver fields by computing the traveltime for a plane wave source stretching along our source and receiver lines in a delayed shot migration implementation. Applications to the BP TTI model demonstrate the effectiveness of the approach.
Directory of Open Access Journals (Sweden)
Rui CARDOSO
2015-12-01
The Alto Douro Wine Region, located in the northeast of Portugal and a UNESCO World Heritage Site, presents a relevant tabique building stock. Tabique is a traditional vernacular building technology based on a timber-framed structure filled with a composite earth-based material. Meanwhile, previous research works have revealed that, principally in rural areas, this Portuguese heritage is highly deteriorated and damaged because of the rareness of conservation and strengthening works, which is partly related to the non-engineered character of this technology and to the growing phenomenon of rural-to-urban migration. Those aspects, together with the lack of scientific studies related to this technology, motivated the writing of this paper, whose main purpose is the physical and chemical characterization of the earth-based material applied in the tabique buildings of that region. Consequently, an experimental work was conducted, and the results obtained allowed, among other things, the proposal of a particle size distribution envelope for this material. This information will provide the means to assess the suitability of a given earth-based material in regard to this technology. The knowledge from this study could be very useful for the development of future normative documents and as a reference for architects and engineers who work with earth, guiding and regulating future conservation, rehabilitation or construction processes and helping to preserve this fabulous legacy.
Shen, Jie; Wang, Li-Lian
2011-01-01
Along with finite differences and finite elements, spectral methods are one of the three main methodologies for solving partial differential equations on computers. This book provides a detailed presentation of basic spectral algorithms, as well as a systematical presentation of basic convergence theory and error analysis for spectral methods. Readers of this book will be exposed to a unified framework for designing and analyzing spectral algorithms for a variety of problems, including in particular high-order differential equations and problems in unbounded domains. The book contains a large
Clark, R. N.; Mccord, T. B.
1982-01-01
A description is presented of new earth-based reflectance spectra of the Martian north residual polar cap. The spectra indicate that the composition is at least mostly water ice plus another component with a 'gray' reflectance. The other minerals in the ice cap appear to be hydrated. The data were obtained with a cooled circular variable filter spectrometer on February 20, 1978, using the 2.2-m telescope on Mauna Kea, Hawaii. It is pointed out that the identification of water ice in the north polar cap alone does not indicate that water makes up all or even most of the bulk of the cap. Kieffer (1970) has shown that a small amount of water will mask the spectral features of CO2.
Smooth extrapolation of unknown anatomy via statistical shape models
Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.
2015-03-01
Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles, separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible, respectively, over the baseline approach.
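The displacement-driven merge can be illustrated in one dimension: measure the estimate's error where the true surface is known, extend those displacements smoothly into the unknown region, and subtract them, so the merged curve passes through the known samples exactly. This 1-D sketch with linear displacement interpolation is only a stand-in for the paper's Thin Plate Spline on 3-D meshes; the curves are synthetic.

```python
import numpy as np

# 1-D stand-in for the surface-merging problem: the "patient surface" is
# known on the left half of the domain, and a shape-model estimate with
# systematic error covers the whole domain.
x = np.linspace(0.0, 1.0, 101)
truth = np.sin(2.0 * np.pi * x)
estimate = truth + 0.2 + 0.1 * x
known = x <= 0.5

# Measure estimate-vs-truth displacements where the truth is known, extend
# them smoothly into the unknown region (np.interp holds the last value
# constant beyond x = 0.5), and subtract them from the estimate.
disp = np.interp(x, x[known], (estimate - truth)[known])
merged = estimate - disp
```

Unlike feathering, this correction leaves the known vertices untouched, which is the property the abstract highlights for the Thin Plate Spline variant.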
Extrapolation of vertical target motion through a brief visual occlusion.
Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco
2010-03-01
It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.
Singularity-preserving image interpolation using wavelet transform extrema extrapolation
Zhai, Guangtao; Zhang, Yang; Zheng, Xiaoshi
2003-09-01
One common task of image interpolation is to enhance the resolution of the image, which means to magnify the image without loss in its clarity. Traditional methods often assume that the original images are smooth enough to possess continuous derivatives, which tends to blur the edges of the interpolated image. A novel fast image interpolation algorithm based on wavelet transform and multi-resolution analysis is proposed in this paper. It uses polynomial interpolation and extrapolation to estimate the higher-resolution information of the image and generate a new sub-band of wavelet transform coefficients, producing a processed image with sharper edges and preserved singularities.
Novel Extrapolation Method in the Monte Carlo Shell Model
Shimizu, Noritaka; Mizusaki, Takahiro; Otsuka, Takaharu; Abe, Takashi; Honma, Michio
2010-01-01
We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model in order to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full $pf$-shell calculation of $^{56}$Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the $pf$+$g_{9/2}$-shell calculation of $^{64}$Ge.
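The extrapolation itself is simple once the (variance, energy) pairs are in hand: for a truncated wave function the energy approaches the exact eigenvalue as the energy variance goes to zero, so one fits a low-order polynomial in the variance and reads off the value at zero. The numbers below are synthetic, constructed only to show the fit; they are not shell-model results.

```python
import numpy as np

# Synthetic (variance, energy) pairs standing in for a sequence of
# truncated Monte Carlo shell-model calculations: as the basis grows,
# the energy variance shrinks and the energy approaches the eigenvalue.
exact = -205.0                         # assumed exact energy (illustrative)
dE2 = np.array([4.0, 2.0, 1.0, 0.5])   # energy variances <H^2> - <H>^2
E = exact + 0.8 * dE2 + 0.05 * dE2**2  # synthetic smooth trend in the variance

coef = np.polyfit(dE2, E, 2)       # fit E as a low-order polynomial in dE2
E_extrap = np.polyval(coef, 0.0)   # extrapolate to zero variance
```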
Mass extrapolation of quarks and leptons to higher generations
Energy Technology Data Exchange (ETDEWEB)
Barik, N. (Utkal Univ., Bhubaneswar (India). Dept. of Physics)
1981-05-01
An empirical mass formula is tested for the basic fermion sequences of charged quarks and leptons. This relation is a generalization of Barut's mass formula for the lepton sequence (e, μ, τ, ...). It is found that successful mass extrapolation to the third and possibly to other higher generations (N > 2) can be obtained with the first and second generation masses as inputs, which predicts the top quark mass m_t to be around 20 GeV. This also leads to the mass ratios between members of two different sequences (i) and (i') corresponding to the same higher generations (N > 2).
QCD thermodynamics with continuum extrapolated dynamical overlap fermions
Borsanyi, Sz; Lippert, T; Nogradi, D; Pittler, F; Szabo, K K; Toth, B C
2015-01-01
We study the finite temperature transition in QCD with two flavors of dynamical fermions at a pseudoscalar pion mass of about 350 MeV. We use lattices with temporal extent of $N_t$=8, 10 and 12. For the first time in the literature a continuum limit is carried out for several observables with dynamical overlap fermions. These findings are compared with results obtained within the staggered fermion formalism at the same pion masses and extrapolated to the continuum limit. The presented results correspond to fixed topology and its effect is studied in the staggered case. Nice agreement is found between the overlap and staggered results.
Ketcheson, David I.
2014-06-13
We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.
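The idea of casting extrapolation as a fixed-order Runge–Kutta method can be seen in miniature: one Richardson step built from explicit Euler with one and two substeps is itself a two-stage, second-order one-step method. This is a minimal sketch; the ODEX/GBS codes in the comparison use the midpoint rule and much higher extrapolation orders.

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler with n substeps on [t0, t1]."""
    h = (t1 - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y, t = y + h * f(t, y), t + h
    return y

def richardson_step(f, y0, t0, t1):
    """One extrapolated step: Euler with 1 and 2 substeps, combined.

    Euler's error expansion starts at O(h), so 2*T2 - T1 cancels the
    leading term, giving a second-order one-step method that can be
    written as a fixed two-stage Runge-Kutta scheme.
    """
    T1 = euler(f, y0, t0, t1, 1)
    T2 = euler(f, y0, t0, t1, 2)
    return 2.0 * T2 - T1

# Integrate y' = y, y(0) = 1 to t = 1 (exact answer: e) with both methods.
f = lambda t, y: y
N = 100
y_plain = euler(f, 1.0, 0.0, 1.0, N)
y_extrap = 1.0
for i in range(N):
    y_extrap = richardson_step(f, y_extrap, i / N, (i + 1) / N)
```

With 100 steps the extrapolated method is already orders of magnitude more accurate than plain Euler, which is the cost-versus-accuracy lever the paper's higher-order variants exploit.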
Space-based and Earth-based Prospects for Measuring the Moment of Inertia of Venus
Margot, Jean-Luc; Campbell, Donald B.; Ghigo, Frank D.
2016-10-01
The moment of inertia is an essential integral constraint on models of planetary interiors. Our ignorance about Venus's moment of inertia prevents us from obtaining definite answers to key questions related to the size of the core, the thermal evolution history of the planet, the absence of a global magnetic field, and the evolution of the spin state. The technical challenge and cost of Venus landers make a direct measurement of the core size with seismology unlikely in the near future. For the same reasons, lander-based measurements of the spin precession rate, which yields the moment of inertia, are improbable in the near term. Tracking of the spin axis orientation with spacecraft or Earth-based radar over a decade or more offers more promising avenues. We use a precession model and the characteristics of existing data sets to quantify measurement prospects. The best Magellan estimates of the pole orientation have uncertainties of ~15 arcseconds (Konopliv et al., 1999) and an epoch that corresponds to the mid-point of the observations (~Oct. 1993). We describe achievable measurement uncertainties for a variety of scenarios including an additional spacecraft data point (e.g., at epoch 2023) with comparable or better precision than that of Magellan. Our 14 existing Earth-based radar observations obtained in 2006-2014 are sufficient to improve upon the best Magellan values and to unambiguously detect Venus's spin precession. We describe these results and quantify the uncertainties achievable on spin precession rate and moment of inertia with additional observations in the 2016-2023 interval. The Earth-based radar technique yielded a measurement of the spin axis orientation of Mercury with <5 arcsecond precision (Margot et al., 2012) that was later validated to <1 arcsecond level agreement with an independent, MESSENGER-based estimate (Stark et al., 2015).
Navigating the Return Trip from the Moon Using Earth-Based Ground Tracking and GPS
Berry, Kevin; Carpenter, Russell; Moreau, Michael C.; Lee, Taesul; Holt, Gregg N.
2009-01-01
NASA's Constellation Program is planning a human return to the Moon late in the next decade. From a navigation perspective, one of the most critical phases of a lunar mission is the series of burns performed to leave lunar orbit, insert onto a trans-Earth trajectory, and target a precise re-entry corridor in the Earth's atmosphere. A study was conducted to examine the sensitivity of the navigation performance during this phase of the mission to the type and availability of tracking data from Earth-based ground stations, and the sensitivity to key error sources. This study also investigated whether GPS measurements could be used to augment Earth-based tracking data, and how far from the Earth GPS measurements would be useful. The ability to track and utilize weak GPS signals transmitted across the limb of the Earth is highly dependent on the configuration and sensitivity of the GPS receiver being used. For this study three GPS configurations were considered: a "standard" GPS receiver with zero dB antenna gain, a "weak signal" GPS receiver with zero dB antenna gain, and a "weak signal" GPS receiver with an Earth-pointing directional antenna (providing 10 dB additional gain). The analysis indicates that with proper selection and configuration of the GPS receiver on the Orion spacecraft, GPS can potentially improve navigation performance during the critical final phases of flight prior to Earth atmospheric entry interface, and may reduce reliance on two-way range tracking from Earth-based ground stations.
Evidence for risk extrapolation in decision making by tadpoles
Crane, Adam L.; Ferrari, Maud C. O.
2017-01-01
Through time, the activity patterns, morphology, and development of both predators and prey change, which in turn alter the relative vulnerability of prey to their coexisting predators. Recognizing these changes can thus allow prey to make optimal decisions by projecting risk trends into the future. We used tadpoles (Lithobates sylvaticus) to test the hypothesis that tadpoles can extrapolate information about predation risk from past information. We exposed tadpoles to an odour that represented either a temporally consistent risk or an increasing risk. When tested for their response to the odour, the initial antipredator behaviour of tadpoles did not differ, appearing to approach the limit of their maximum response, but exposure to increasing risk induced longer retention of these responses. When repeating the experiment using lower risk levels, heightened responses occurred for tadpoles exposed to increasing risk, and the strongest responses were exhibited by those that received an abrupt increase compared to a steady increase. Our results indicate that tadpoles can assess risk trends through time and adjust their antipredator responses in a way consistent with an extrapolated trend. This is a sophisticated method for prey to avoid threats that are becoming more (or less) dangerous over part of their lifespan. PMID:28230097
Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media
Waheed, Umair bin
2014-05-01
Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy trade-off for wavefield computations in TTI media, considering the cost prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.
Calculating excitation energies by extrapolation along adiabatic connections
Rebolini, Elisa; Teale, Andrew M; Helgaker, Trygve; Savin, Andreas
2015-01-01
In this paper, an alternative method to range-separated linear-response time-dependent density-functional theory and perturbation theory is proposed to improve the estimation of the energies of a physical system from the energies of a partially interacting system. Starting from the analysis of the Taylor expansion of the energies of the partially interacting system around the physical system, we use an extrapolation scheme to improve the estimation of the energies of the physical system at an intermediate point of the range-separated or linear adiabatic connection, where either the electron-electron interaction is scaled or only the long-range part of the Coulomb interaction is included. The extrapolation scheme is first applied to the range-separated energies of the helium and beryllium atoms and of the hydrogen molecule at its equilibrium and stretched geometries. It improves significantly the convergence rate of the energies toward their exact limit with respect to the range-separation parameter. The range...
Cecconi, Jaures
2011-01-01
G. Bottaro: Some results of spectral analysis for differential operators with constant coefficients on unbounded domains.- L. Garding: Eigenfunction expansions.- C. Goulaouic: Eigenvalues of irregular boundary-value problems: applications.- G. Grubb: Essential spectra of elliptic systems on compact manifolds.- J.Cl. Guillot: Some recent results in scattering.- N. Schechter: Theory of perturbations of partial differential operators.- C.H. Wilcox: Spectral analysis of the Laplacian with a discontinuous coefficient.
Earth-Base: A Free And Open Source, RESTful Earth Sciences Platform
Kishor, P.; Heim, N. A.; Peters, S. E.; McClennen, M.
2012-12-01
This presentation describes the motivation, concept, and architecture behind Earth-Base, a web-based, RESTful data-management, analysis and visualization platform for earth sciences data. Traditionally, web applications have been built by directly accessing data from a database using a scripting language. While such applications are great at bringing results to a wide audience, they are limited in scope to the imagination and capabilities of the application developer. Earth-Base decouples the data store from the web application by introducing an intermediate "data application" tier. The data application's job is to query the data store using self-documented, RESTful URIs, and send the results back formatted as JavaScript Object Notation (JSON). Decoupling the data store from the application allows virtually limitless flexibility in developing applications, whether web-based for human consumption or programmatic for machine consumption. It also allows outside developers to use the data in their own applications, potentially creating applications that the original data creator and app developer may not have even thought of. Standardized specifications for URI-based querying and JSON-formatted results make querying and developing applications easy. URI-based querying also allows utilizing distributed datasets easily. Companion mechanisms for querying data snapshots (aka time-travel), usage tracking and license management, and verification of semantic equivalence of data are also described. The latter promotes the "What You Expect Is What You Get" (WYEIWYG) principle, which can aid in data citation and verification.
The solution of coupled Schroedinger equations using an extrapolation method
Goorvitch, D.; Galant, D. C.
1992-01-01
In this paper, extrapolation to the limit in a finite-difference method is applied to solve a system of coupled Schroedinger equations. This combination results in a method that only requires knowledge of the potential energy functions for the system. This numerical procedure has several distinct advantages over the more conventional methods. Namely, initial guesses for the term values are not needed; no assumptions need be made about the behavior of the wavefunctions, such as the slope or magnitude in the nonclassical region; and the algorithm is easy to implement, has a firm mathematical foundation, and provides error estimates. Moreover, the method is less sensitive to round-off error than other methods since a small number of mesh points is used, and it can be implemented on small computers. A comparison of the method with another numerical method shows results agreeing within 1 part in 10^4.
Nuclear Lattice Simulations using Symmetry-Sign Extrapolation
Lähde, Timo A; Lee, Dean; Meißner, Ulf-G; Epelbaum, Evgeny; Krebs, Hermann; Rupak, Gautam
2015-01-01
Projection Monte Carlo calculations of lattice Chiral Effective Field Theory suffer from sign oscillations to a varying degree dependent on the number of protons and neutrons. Hence, such studies have hitherto been concentrated on nuclei with equal numbers of protons and neutrons, and especially on the alpha nuclei where the sign oscillations are smallest. We now introduce the technique of "symmetry-sign extrapolation" which allows us to use the approximate Wigner SU(4) symmetry of the nuclear interaction to control the sign oscillations without introducing unknown systematic errors. We benchmark this method by calculating the ground-state energies of the $^{12}$C, $^6$He and $^6$Be nuclei, and discuss its potential for studies of neutron-rich halo nuclei and asymmetric nuclear matter.
Nuclear lattice simulations using symmetry-sign extrapolation
Energy Technology Data Exchange (ETDEWEB)
Laehde, Timo A.; Luu, Thomas [Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik, and Juelich Center for Hadron Physics, Juelich (Germany); Lee, Dean [North Carolina State University, Department of Physics, Raleigh, NC (United States); Meissner, Ulf G. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik, and Juelich Center for Hadron Physics, Juelich (Germany); Forschungszentrum Juelich, JARA - High Performance Computing, Juelich (Germany); Epelbaum, Evgeny; Krebs, Hermann [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Bochum (Germany); Rupak, Gautam [Mississippi State University, Department of Physics and Astronomy, Mississippi State, MS (United States)
2015-07-15
Projection Monte Carlo calculations of lattice Chiral Effective Field Theory suffer from sign oscillations to a varying degree dependent on the number of protons and neutrons. Hence, such studies have hitherto been concentrated on nuclei with equal numbers of protons and neutrons, and especially on the alpha nuclei where the sign oscillations are smallest. Here, we introduce the "symmetry-sign extrapolation" method, which allows us to use the approximate Wigner SU(4) symmetry of the nuclear interaction to systematically extend the Projection Monte Carlo calculations to nuclear systems where the sign problem is severe. We benchmark this method by calculating the ground-state energies of the $^{12}$C, $^6$He and $^6$Be nuclei, and discuss its potential for studies of neutron-rich halo nuclei and asymmetric nuclear matter. (orig.)
UFOs in the LHC: Observations, studies and extrapolations
Baer, T; Cerutti, F; Ferrari, A; Garrel, N; Goddard, B; Holzer, EB; Jackson, S; Lechner, A; Mertens, V; Misiowiec, M; Nebot del Busto, E; Nordt, A; Uythoven, J; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zimmermann, F; Fuster, N
2012-01-01
Unidentified falling objects (UFOs) are potentially a major luminosity limitation for nominal LHC operation. They are presumably micrometer-sized dust particles which lead to fast beam losses when they interact with the beam. With large-scale increases and optimizations of the beam loss monitor (BLM) thresholds, their impact on LHC availability was mitigated from mid-2011 onwards. For higher beam energy and lower magnet quench limits, however, the problem is expected to be considerably worse. In 2011/12, the diagnostics for UFO events were significantly improved: dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge, extrapolations for nominal LHC operation, and mitigation strategies are presented.
Spatial extrapolation of lysimeter results using thermal infrared imaging
Voortman, B. R.; Bosveld, F. C.; Bartholomeus, R. P.; Witte, J. P. M.
2016-12-01
Measuring evaporation (E) with lysimeters is costly and prone to numerous errors. By comparing the energy balance and the remotely sensed surface temperature of lysimeters with those of the undisturbed surroundings, we were able to assess the representativeness of lysimeter measurements and to quantify differences in evaporation caused by spatial variations in soil moisture content. We used an algorithm (the so-called 3T model) to spatially extrapolate the measured E of a reference lysimeter based on differences in surface temperature, net radiation and soil heat flux. We tested the performance of the 3T model on measurements with multiple lysimeters (47.5 cm inner diameter) and micro-lysimeters (19.2 cm inner diameter) installed in bare sand, moss and natural dry grass. We developed different scaling procedures using in situ measurements and remotely sensed surface temperatures to derive spatially distributed estimates of Rn and G, and explored the physical soundness of the 3T model. Scaling of Rn and G considerably improved the performance of the 3T model for the bare sand and moss experiments (Nash-Sutcliffe efficiency (NSE) increasing from 0.45 to 0.89 and from 0.81 to 0.94, respectively). For the grass surface, the scaling procedures resulted in a poorer performance of the 3T model (NSE decreasing from 0.74 to 0.70), which was attributed to effects of shading and the difficulty of correcting for differences in emissivity between dead and living biomass. The 3T model is physically unsound if the field-scale average air temperature, measured at an arbitrarily chosen reference height, is used as input to the model. The proposed measurement system is relatively cheap, since it uses a zero-tension (freely draining) lysimeter whose results are extrapolated by the 3T model to the unaffected surroundings. The system is promising for bridging the gap between ground observations and satellite-based estimates of E.
Vigna, Sebastiano
2009-01-01
This note attempts a sketch of the history of spectral ranking, a general umbrella name for techniques that apply the theory of linear maps (in particular, eigenvalues and eigenvectors) to matrices that do not represent geometric transformations, but rather some kind of relationship between entities. Although recently made famous by the ample press coverage of Google's PageRank algorithm, spectral ranking was devised more than fifty years ago, almost exactly in the same terms, and has been studied in psychology and the social sciences. I will try to describe it in precise and modern mathematical terms, highlighting along the way the contributions made by previous scholars.
Investigating the stratigraphy of Mare Imbrium flow emplacement with Earth-based radar
Morgan, G. A.; Campbell, B. A.; Campbell, D. B.; Hawke, B. R.
2016-08-01
The lunar maria are the product of extensive basaltic volcanism that flooded widespread portions of the Moon's surface. Constraining mare volcanic history therefore provides a window into the endogenic processes responsible for shaping the Moon. Due to the low magma viscosity and the associated thin nature of lava units, the majority of mare surface structures are masked and subdued by impact regolith. Subtle individual mare flow morphologies, coupled with spatial limitations in the use of crater size distributions to distinguish surface units close in age, restrict our understanding of mare stratigraphy. Earth-based 70 cm wavelength (P band) radar can reveal features beneath the regolith and highlight very subtle changes in the ilmenite content of the flows, providing a unique means to map mare units. Here we map volcanic units in Mare Imbrium using high-resolution (200 m/pixel), Earth-based P band data. Situated within the heat-producing potassium, rare earth element, and phosphorus terrane, Mare Imbrium experienced some of the most long-lived (and recent) lunar volcanism, and its surface exhibits a significant diversity of basaltic chemistry. Our investigation identifies at least four distinct stages of volcanic activity, originating from multiple sources within Imbrium. The most recent of these stages comprises extensive, yet relatively thin volcanic flow units that left remnant kipukas of older mare material distributed across much of the basin. From a future mission perspective, it may be possible to collect samples expressing a wide range in age from small areas of Mare Imbrium. Our map also places important constraints on the interpretation of the Chang'e-3 Lunar Penetrating Radar measurements.
Energy Technology Data Exchange (ETDEWEB)
Smartt, Heidi A. [Sandia National Laboratories (United States)
2003-05-01
This research examines the feasibility of spectral tagging, which involves modifying the spectral signature of a target, e.g. by mixing an additive with the target's paint. The target is unchanged to the human eye, but the tag is revealed when viewed with a spectrometer. This project investigates a layer of security that is not obvious, and therefore easy to conceal. The result is a tagging mechanism that is difficult to counterfeit. Uniquely tagging an item is an area of need in safeguards, security and non-proliferation. The powdered forms of the minerals lapis lazuli and olivine were selected as the initial test tags due to their availability and uniqueness in the visible to near-infrared spectral region. They were mixed with paints and applied to steel. In order to verify the presence of the tags quantitatively, the data from the spectrometer were input into unmixing models and signal detection algorithms. The mixture with the best results was blue paint mixed with lapis lazuli and olivine. The tag had a 0% probability of false alarm and a 100% probability of detection. The research proved that spectral tagging is feasible, although certain tag/paint mixtures are more detectable than others.
Border extrapolation using fractal attributes in remote sensing images
Cipolletti, M. P.; Delrieux, C. A.; Perillo, G. M. E.; Piccolo, M. C.
2014-01-01
In the management, monitoring and rational use of natural resources, precise and updated information is essential. Satellite images have become an attractive option for quantitative data extraction and morphologic studies, assuring wide coverage without exerting negative environmental influence on the study area. However, the precision of such practice is limited by the spatial resolution of the sensors and the additional processing algorithms. The use of high-resolution imagery (i.e., Ikonos) is very expensive for studies involving large geographic areas or requiring long-term monitoring, while the use of less expensive or freely available imagery limits the geographic accuracy and physical precision that may be obtained. We developed a methodology for accurate border estimation that can be used for establishing high-quality measurements with low-resolution imagery. The method is based on the original theory by Richardson, taking advantage of the fractal nature of geographic features. The area of interest is downsampled at different scales and, at each scale, the border is segmented and measured. Finally, a regression of the dependence of the measured length on scale is computed, which then allows for a precise extrapolation of the expected length at scales much finer than those originally available. The method is tested with both synthetic and satellite imagery, producing accurate results in both cases.
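The Richardson-style regression described in this abstract can be sketched numerically. This is our own minimal illustration, not the authors' code: for a fractal border of dimension D the measured length follows L(s) = c·s^(1-D), so a linear fit of log L against log s at coarse scales lets one extrapolate the length at a scale finer than any available image.

```python
# Fit log(L) = a + b*log(s) to border lengths measured at coarse scales,
# then extrapolate the expected length at a much finer scale.
import math

def fit_loglog(scales, lengths):
    """Least-squares fit of log(L) = a + b*log(s); returns (a, b)."""
    xs = [math.log(s) for s in scales]
    ys = [math.log(L) for L in lengths]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def extrapolate_length(scales, lengths, target_scale):
    a, b = fit_loglog(scales, lengths)
    return math.exp(a + b * math.log(target_scale))

# Synthetic border with fractal dimension D = 1.25, so L ~ s**(1 - D).
D, c = 1.25, 100.0
scales = [8.0, 4.0, 2.0, 1.0]                      # coarse measurement scales
lengths = [c * s ** (1 - D) for s in scales]       # Richardson power law
L_fine = extrapolate_length(scales, lengths, 0.25)  # finer than the data
```

The fitted slope b estimates 1 − D, so the same regression also recovers the fractal dimension of the feature being measured.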
Full waveform inversion with extrapolated low frequency data
Li, Yunyue Elita
2016-01-01
The availability of low frequency data is an important factor in the success of full waveform inversion (FWI) in the acoustic regime. The low frequencies help determine the kinematically relevant, low-wavenumber components of the velocity model, which are in turn needed to avoid convergence of FWI to spurious local minima. However, acquiring data below 2 or 3 Hz from the field is a challenging and expensive task. In this paper we explore the possibility of synthesizing the low frequencies computationally from high-frequency data, and use the resulting prediction of the missing data to seed the frequency sweep of FWI. As a signal processing problem, bandwidth extension is a very nonlinear and delicate operation. It requires a high-level interpretation of bandlimited seismic records into individual events, each of which is extrapolable to a lower (or higher) frequency band based on the non-dispersive nature of the wave propagation model. We propose to use the phase tracking method for the event separation task. The...
Delayed inhibition of an anticipatory action during motion extrapolation
Directory of Open Access Journals (Sweden)
Riek Stephan
2010-04-01
Abstract Background Continuous visual information is important for movement initiation in a variety of motor tasks. However, even in the absence of visual information people are able to initiate their responses by using motion extrapolation processes. Initiation of actions based on these cognitive processes, however, can demand more attentional resources than that required in situations in which visual information is uninterrupted. In the experiment reported we sought to determine whether the absence of visual information would affect the latency to inhibit an anticipatory action. Methods The participants performed an anticipatory timing task where they were instructed to move in synchrony with the arrival of a moving object at a determined contact point. On 50% of the trials, a stop sign appeared on the screen and served as a signal for the participants to halt their movements. They performed the anticipatory task under two different viewing conditions: Full-View (uninterrupted) and Occluded-View (occlusion of the last 500 ms prior to the arrival at the contact point). Results The results indicated that the absence of visual information prolonged the latency to suppress the anticipatory movement. Conclusion We suggest that the absence of visual information requires additional cortical processing that creates competing demand for neural resources. Reduced neural resources potentially cause increased reaction time to the inhibitory input or increased time-estimation variability, which in combination would account for the prolonged latency.
Institute of Scientific and Technical Information of China (English)
Qiumei Huang; Yidu Yang
2008-01-01
In this paper, we introduce a new extrapolation formula by combining the Richardson extrapolation and Sloan iteration algorithms. Using this extrapolation formula, we obtain asymptotic expansions of the Galerkin finite element method for semi-simple eigenvalue problems of Fredholm integral equations of the second kind, and improve the accuracy of the numerical approximations of the corresponding eigenvalues. Some numerical experiments are carried out to demonstrate the effectiveness of our new method and to confirm our theoretical results.
Energy Technology Data Exchange (ETDEWEB)
Ibarria, L; Lindstrom, P; Rossignac, J
2006-11-17
Many scientific, imaging, and geospatial applications produce large high-precision scalar fields sampled on a regular grid. Lossless compression of such data is commonly done using predictive coding, in which weighted combinations of previously coded samples known to both encoder and decoder are used to predict subsequent nearby samples. In hierarchical, incremental, or selective transmission, the spatial pattern of the known neighbors is often irregular and varies from one sample to the next, which precludes prediction based on a single stencil and fixed set of weights. To handle such situations and make the best use of available neighboring samples, we propose a local spectral predictor that offers optimal prediction by tailoring the weights to each configuration of known nearby samples. These weights may be precomputed and stored in a small lookup table. We show that predictive coding using our spectral predictor improves compression for various sources of high-precision data.
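The idea of per-configuration prediction weights cached in a lookup table can be sketched as follows. This is a toy 1-D illustration under our own assumptions, not the authors' spectral construction: for each mask of known neighbors we solve a small least-squares problem on training data and store the resulting weights keyed by the mask, mirroring the paper's precomputed weight table.

```python
# Per-neighbor-configuration prediction weights, cached in a lookup table
# keyed by the mask of known neighbors (toy 1-D version).
import numpy as np

OFFSETS = [-2, -1, 1, 2]  # candidate neighbor offsets in this sketch

def train_weights(signal, mask):
    """Least-squares weights for predicting s[i] from the masked neighbors."""
    cols = [off for off, m in zip(OFFSETS, mask) if m]
    rows, targets = [], []
    for i in range(2, len(signal) - 2):
        rows.append([signal[i + off] for off in cols])
        targets.append(signal[i])
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return w

def predict(signal, i, mask, table):
    """Predict s[i] using the cached weights for this neighbor mask."""
    cols = [off for off, m in zip(OFFSETS, mask) if m]
    return float(np.dot(table[mask], [signal[i + off] for off in cols]))

# Build the lookup table on a smooth training signal.
train = np.sin(np.linspace(0.0, 3.0, 200))
table = {}
mask = (False, True, True, False)        # both immediate neighbors known
table[mask] = train_weights(train, mask)

# On a linear ramp, the symmetric two-neighbor predictor is near-exact,
# so the residual handed to the entropy coder is tiny.
ramp = np.arange(20, dtype=float)
pred = predict(ramp, 10, mask, table)
```

In a real codec one table entry would exist per mask actually encountered, and the (tiny) residuals between predictions and true samples are what gets entropy-coded.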
Measurement of fatty acid oxidation: validation of isotopic equilibrium extrapolation
Energy Technology Data Exchange (ETDEWEB)
Robin, A.P.; Jeevanandam, M.; Elwyn, D.H.; Askanazi, J.; Kinney, J.M.
1989-01-01
Measurement of whole-body substrate oxidation requires prolonged isotope infusion to attain plateau specific activity (SA) of expired CO$_2$. We have investigated in 13 hospitalized patients a technique whereby plateau $^{14}$CO$_2$ SA is extrapolated using computer curve fitting based upon the early exponential rise. A primed-constant infusion of albumin-bound 1-$^{14}$C-palmitate was continued for 260 minutes, with isotope priming of the secondary bicarbonate pool at 70 minutes. Plasma free fatty acid (FFA) SA reached steady state by 40 minutes and was 91% +/- 4% (SE) of values obtained at 190 to 260 minutes. At 70 minutes, $^{14}$CO$_2$ SA reached only 44% +/- 1% of the 190 to 260 minute values, which were consistently at plateau. The predicted steady-state $^{14}$CO$_2$ SA from the 40 to 70 minute curves, and the FFA oxidation rates calculated from those values, were 94% +/- 2% and 102% +/- 4%, respectively, of values measured at steady state (190 to 260 minutes). The relationship between predicted and measured values approximated the line of identity for $^{14}$CO$_2$ SA (y = 0.90x + 0.14, r = .98, P < .001) and FFA oxidation (y = 1.02x, r = .98, P < .001). The results suggest that FFA oxidation can be accurately calculated using a short infusion of labeled FFA without bicarbonate pool priming, thus avoiding overpriming or underpriming and possibly allowing multiple studies and diminished radioisotope exposure.
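The plateau-extrapolation step can be sketched numerically. This is a simplified illustration of the general curve-fitting idea, not the authors' software: the early rise is modeled as SA(t) = A·(1 − exp(−k·t)), and fitting A and k to the first minutes of data predicts the plateau A without waiting for isotopic equilibrium. The model form and the synthetic numbers below are our assumptions.

```python
# Fit SA(t) = A * (1 - exp(-k*t)) to the early exponential rise and
# read off the plateau A. Grid-search k; for each k the best-fit A
# has a closed form from linear least squares.
import math

def fit_plateau(times, sa):
    """Return (plateau A, rate k) minimizing the sum of squared errors."""
    best = None
    for i in range(1, 2001):
        k = i * 0.001  # candidate rate constants, 0.001..2.0 per minute
        f = [1.0 - math.exp(-k * t) for t in times]
        A = sum(fi * yi for fi, yi in zip(f, sa)) / sum(fi * fi for fi in f)
        sse = sum((yi - A * fi) ** 2 for fi, yi in zip(f, sa))
        if best is None or sse < best[0]:
            best = (sse, A, k)
    return best[1], best[2]

# Synthetic "40-70 minute" rise with true plateau A = 10.0 and k = 0.02.
times = list(range(40, 71, 5))
sa = [10.0 * (1.0 - math.exp(-0.02 * t)) for t in times]
plateau, rate = fit_plateau(times, sa)
```

Once the plateau SA is in hand, the oxidation rate follows from the usual tracer-dilution arithmetic, which is where the paper's 102% +/- 4% agreement with measured steady-state values comes from.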
Ternary rare-earth based alternative gate-dielectrics for future integration in MOSFETs
Energy Technology Data Exchange (ETDEWEB)
Schubert, Juergen; Lopes, Joao Marcelo; Durgun Oezben, Eylem; Luptak, Roman; Lenk, Steffi; Zander, Willi; Roeckerath, Martin [IBN 1-IT, Forschungszentrum Juelich, 52425 Juelich (Germany)
2009-07-01
The dielectric SiO2 has been the key to the tremendous improvements in Si-based metal-oxide-semiconductor (MOS) device performance over the past four decades. It has, however, reached its limit in terms of scaling, since it exhibits a leakage current density higher than 1 A/cm2 and does not retain its intrinsic physical properties at thicknesses below 1.5 nm. In order to overcome these problems and keep Moore's law going, the use of higher dielectric constant (k) gate oxides has been suggested. These high-k materials must satisfy numerous requirements, such as a high k, low leakage currents, and a suitable band gap and band offsets to silicon. Rare-earth based dielectrics are promising materials which fulfill these needs. We will review the properties of REScO3 (RE = La, Dy, Gd, Sm, Tb) and LaLuO3 thin films, grown by pulsed laser deposition, e-gun evaporation or molecular beam deposition, and integrated in capacitors and transistors. A k > 20 for REScO3 (RE = Dy, Gd) and around 30 for REScO3 (RE = La, Sm, Tb) and LaLuO3 are obtained. Transistors prepared on SOI and sSOI show mobility values up to 380 cm2/Vs on sSOI, comparable to those prepared with HfO2.
In vivo effects of rare-earth based nanoparticles on oxidative balance in rats
Directory of Open Access Journals (Sweden)
V. K. Klochkov
2016-12-01
The purpose of the research was to find the influence of rare-earth based nanoparticles (CeO2, GdVO2:Eu3+) on the oxidative balance in rats. We analyzed biochemical markers of oxidative stress (lipid peroxidation level, nitric oxide metabolites, sulfhydryl group content) and enzyme activities (superoxide dismutase, catalase) in tissues of rats. It has been found that administration of both types of nanoparticles increased nitric oxide metabolites and products of lipid peroxidation in liver and spleen within 5 days. At injections of GdVO2:Eu3+, lipid peroxidation products and nitric oxide metabolites in serum at 5, 10 and 15 days of the experiment were also increased, whereas the level of sulfhydryl groups decreased compared to the intact state and the control. In contrast, under the influence of CeO2 nanoparticles the level of diene conjugates was not significantly changed, and the level of nitric oxide metabolites within 15 days even decreased. During this period, under the influence of both types of nanoparticles the activity of superoxide dismutase was increased, while catalase activity was not changed. The oxidative stress coefficient showed a less pronounced CeO2 prooxidant effect (2.04) in comparison to GdVO2:Eu3+ (6.89). However, the after-effect of both types of nanoparticles showed complete restoration of oxidative balance values.
Comparison of reusable insulation systems for cryogenically-tanked earth-based space vehicles
Sumner, I. E.; Barber, J. R.
1978-01-01
Three reusable insulation system concepts have been developed for use with cryogenic tanks of earth-based space vehicles. Two concepts utilized double-goldized Kapton (DGK) or double-aluminized Mylar (DAM) multilayer insulation (MLI), while the third utilized a hollow-glass-microsphere, load-bearing insulation (LBI). All three insulation systems have recently undergone experimental testing and evaluation under NASA-sponsored programs. Thermal performance measurements were made under space-hold (vacuum) conditions for insulation warm boundary temperatures of approximately 291 K. The resulting effective thermal conductivity was approximately 0.00008 W/m-K for the MLI systems (liquid hydrogen test results) and 0.00054 W/m-K for the LBI system (liquid nitrogen test results corrected to liquid hydrogen temperature). The DGK MLI system experienced a maximum thermal degradation of 38 percent, the DAM MLI system 14 percent, and the LBI system 6.7 percent due to repeated thermal cycling representing typical space flight conditions. Repeated exposure of the DAM MLI system to a high-humidity environment for periods as long as 8 weeks produced a maximum degradation of only 24 percent.
Extrapolating human judgments from skip-gram vector representations of word meaning.
Hollis, Geoff; Westbury, Chris; Lefsrud, Lianne
2017-08-01
There is a growing body of research in psychology that attempts to extrapolate human lexical judgments from computational models of semantics. This research can be used to help develop comprehensive norm sets for experimental research; it has applications to large-scale statistical modelling of lexical access and has broad value within natural language processing and sentiment analysis. However, the value of extrapolated human judgments has recently been questioned within psychological research. Of primary concern is the fact that extrapolated judgments may not share the same pattern of statistical relationship with lexical and semantic variables as do actual human judgments; often the error component in extrapolated judgments is not psychologically inert, making such judgments problematic to use for psychological research. We present a new methodology for extrapolating human judgments that partially addresses prior concerns of validity. We use this methodology to extrapolate human judgments of valence, arousal, dominance, and concreteness for 78,286 words. We also provide resources for users to extrapolate these human judgments for three million English words and short phrases. Applications for large sets of extrapolated human judgments are demonstrated and discussed.
Load extrapolations based on measurements from an offshore wind turbine at alpha ventus
Lott, Sarah; Cheng, Po Wen
2016-09-01
Statistical extrapolation of loads can be used to estimate the extreme loads that are expected to occur on average once in a given return period. Load extrapolations of extreme loads recorded over a period of three years at different measurement positions of an offshore wind turbine at the alpha ventus offshore test field have been performed. The difficulties that arise when using measured instead of simulated extreme loads to determine 50-year return loads are discussed in detail. The main challenge is outliers in the databases, which have a significant influence on the extrapolated extreme loads. Results of the short- and long-term extreme load extrapolations are presented, comprising different methods for the extreme load extraction, the choice of the statistical distribution function and the fitting method. Generally, load extrapolation with measurement data is possible, but care should be taken in the selection of the database and the choice of the distribution function and fitting method.
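The return-period arithmetic behind such extrapolations can be sketched with one common choice of distribution and fit. The Gumbel distribution with a method-of-moments fit is our illustrative assumption (the paper compares several distributions and fitting methods), and the load values below are invented.

```python
# 50-year return load from a sample of annual extreme loads, using a
# Gumbel distribution fitted by the method of moments.
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_return_level(annual_maxima, return_period_years):
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # Gumbel scale parameter
    mu = mean - EULER_GAMMA * beta          # Gumbel location parameter
    p = 1.0 - 1.0 / return_period_years     # annual non-exceedance prob.
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual extreme blade-root moments (kNm), illustration only.
maxima = [5210.0, 4890.0, 5470.0, 5030.0, 5320.0, 4760.0, 5150.0]
load_50yr = gumbel_return_level(maxima, 50.0)
```

The extrapolated 50-year level necessarily lies above every observed maximum, which is exactly why a single outlier in the database can drag the fitted tail, the difficulty the abstract highlights.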
Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?
Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.
2016-02-01
It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C0 lies between the extrapolated Ae3 (γ/α) and Acm (γ/θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren extrapolation, even if the average austenite composition is outside.
Strong, James Asa; Elliott, Michael
2017-03-15
The reporting of ecological phenomena and environmental status routinely requires point observations, collected with traditional sampling approaches, to be extrapolated to larger reporting scales. This process encompasses difficulties that can quickly entrain significant errors. Remote sensing techniques offer insights and exceptional spatial coverage for observing the marine environment. This review provides guidance on (i) the structures and discontinuities inherent within the extrapolative process, (ii) how to extrapolate effectively across multiple spatial scales, and (iii) remote sensing techniques and data sets that can facilitate this process. This evaluation illustrates that remote sensing techniques are a critical component in extrapolation and are likely to underpin the production of high-quality assessments of ecological phenomena and the regional reporting of environmental status. Ultimately, it is hoped that this guidance will aid the production of robust and consistent extrapolations that also make full use of the techniques and data sets that expedite this process.
Fang, Jun; Song, Haifeng; Wang, Han
2016-01-01
Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn-Sham density functional theory. Going against the intuition that a higher extrapolation order possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or stricter SCF convergence criteria. Using example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choice...
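The kind of polynomial extrapolation at issue can be shown in miniature. This is our own sketch, not the paper's implementation: the initial guess for step n+1 is a signed-binomial combination of the previous m+1 steps, which is exact whenever the extrapolated quantity is a polynomial of degree at most m in the step index (noise and higher derivatives are what eventually make large m counterproductive).

```python
# Order-m polynomial extrapolation of the next step from the last m+1
# steps: x_{n+1} = sum_{j=1}^{m+1} (-1)^{j+1} C(m+1, j) x_{n+1-j}.
from math import comb

def extrapolate(history, order):
    """Predict the next value from the last order+1 values in history."""
    m = order
    return sum((-1) ** (j + 1) * comb(m + 1, j) * history[-j]
               for j in range(1, m + 2))

# A smooth "trajectory": quadratic in the step index.
traj = [0.5 * t * t - 2.0 * t + 3.0 for t in range(6)]
pred2 = extrapolate(traj[:5], order=2)   # quadratic extrapolation: exact here
pred1 = extrapolate(traj[:5], order=1)   # linear extrapolation: biased here
```

For order 1 this reduces to the familiar 2x_n − x_{n−1}; the paper's point is that on real, noisy SCF data the error of such predictors is non-monotonic in m, so an optimal finite order exists.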
Google Earth-Based Grand Tours of the World's Ocean Basins and Marine Sediments
St John, K. K.; De Paor, D. G.; Suranovic, B.; Robinson, C.; Firth, J. V.; Rand, C.
2016-12-01
The GEODE project has produced a collection of Google Earth-based marine geology teaching resources that offer grand tours of the world's ocean basins and marine sediments. We use a map of oceanic crustal ages from Müller et al (2008; doi:10.1029/2007GC001743), and a set of emergent COLLADA models of IODP drill core data as a basis for a Google Earth tour introducing students to the world's ocean basins. Most students are familiar with basic seafloor spreading patterns but teaching experience suggests that few students have an appreciation of the number of abandoned ocean basins on Earth. Students also lack a valid visualization of the west Pacific, where the oldest crust forms an isolated triangular patch and the ocean floor becomes younger towards the subduction zones. Our tour links geographic locations to mechanical models of rifting, seafloor spreading, subduction, and transform faulting. Google Earth's built-in earthquake and volcano data are related to ocean floor patterns. Marine sediments are explored in a Google Earth tour that draws on exemplary IODP core samples of a range of sediment types (e.g., turbidites, diatom ooze). Information and links are used to connect location to sediment type. This tour complements a physical core kit of core catcher sections that can be employed for classroom instruction (geode.net/marine-core-kit/). At a larger scale, we use data from IMLGS to explore the distribution of marine sediment types in the modern global ocean. More than 2,500 sites are plotted with access to the original data. Students are guided to compare modern "type sections" of primary marine sediment lithologies, as well as examine site transects to address questions of bathymetric setting, ocean circulation, chemistry (e.g., CCD), and bioproductivity as influences on modern seafloor sedimentation. KMZ files, student exercises, and tips for instructors are available at geode.net/exploring-marine-sediments-using-google-earth.
Integration of an Earth-Based Science Team During Human Exploration of Mars
Chappell, Steven P.; Beaton, Kara H.; Newton, Carolyn; Graff, Trevor G.; Young, Kelsey E.; Coan, David; Abercromby, Andrew F. J.; Gernhardt, Michael L.
2017-01-01
NASA Extreme Environment Mission Operations (NEEMO) is an underwater spaceflight analog that allows a true mission-like operational environment and uses buoyancy effects and added weight to simulate different gravity levels. A mission was undertaken in 2016, NEEMO 21, at the Aquarius undersea research habitat. During the mission, the effects of varied operations concepts with representative communication latencies associated with Mars missions were studied. Six subjects were weighed out to simulate partial gravity and evaluated different operations concepts for the integration and management of a simulated Earth-based science team (ST) who provided input and direction during exploration activities. Exploration traverses were planned in advance based on precursor data collected. Subjects completed science-related tasks, including presampling surveys and marine-science-based sampling, during saturation dives up to 4 hours in duration that simulated extravehicular activity (EVA) on Mars. A communication latency of 15 minutes in each direction between space and ground was simulated throughout the EVAs. Objective data included task completion times, total EVA time, crew idle time, translation time, and ST assimilation time (defined as the time available for the science team to discuss, review and act upon data/imagery after they have been collected and transmitted to the ground). Subjective data included acceptability, simulation quality, capability assessment ratings, and comments. In addition, comments from both the crew and the ST were captured during the post-mission debrief. Here, we focus on the acceptability of the operations concepts studied and the capabilities most enhancing or enabling in the operations concept. The importance and challenges of designing EVA timelines to account for the length of the task, the level of interaction with the ground that is required/desired, and the communication latency are discussed.
Chen, Yuan; Liu, Liling; Nguyen, Khanh; Fretland, Adrian J
2011-03-01
Reaction phenotyping using recombinant human cytochromes P450 (P450) has great utility in early discovery. However, to fully realize the advantages of using recombinant expressed P450s, the extrapolation of data from recombinant systems to human liver microsomes (HLM) is required. In this study, intersystem extrapolation factors (ISEFs) were established for CYP1A2, CYP2C8, CYP2C9, CYP2C19, CYP2D6, and CYP3A4 using 11 probe substrates, based on substrate depletion and/or metabolite formation kinetics. The ISEF values for CYP2C9, CYP2D6, and CYP3A4 determined using multiple substrates were similar across substrates. When enzyme kinetics of metabolite formation for CYP1A2, 2C9, 2D6, and 3A4 were used, the ISEFs determined were generally within 2-fold of those determined on the basis of substrate depletion. Validation of the ISEFs was conducted using 10 marketed drugs by comparing the extrapolated data with published data. The major isoforms responsible for the metabolism were identified, and the contribution of the predominant P450s was similar to that of previously reported data. In addition, phenotyping data from internal compounds, extrapolated using the rhP450-ISEF method, were comparable to those obtained using an HLM-based inhibition assay approach. Moreover, the intrinsic clearance (CL(int)) calculated from extrapolated rhP450 data correlated well with measured HLM CL(int). The ISEF method established in our laboratory provides a convenient tool for early reaction phenotyping in situations where the HLM-based inhibition approach is limited by low turnover and/or unavailable metabolite formation. Furthermore, this method allows quantitative extrapolation of HLM intrinsic clearance from rhP450 phenotyping data while simultaneously identifying the participating metabolizing enzymes.
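The ISEF-based scaling described in the abstract is conventionally written as CLint,HLM = Σi ISEFi × CLint,rhP450,i × abundancei. The sketch below illustrates that arithmetic only; the function name, ISEF values, and abundances are made-up placeholders, not values from the study:

```python
def hlm_clint_from_rhp450(cl_rh, isef, abundance):
    """Scale recombinant-P450 intrinsic clearances (uL/min/pmol P450)
    to an HLM CLint (uL/min/mg protein):
        CLint_HLM = sum_i ISEF_i * CLint_rh_i * abundance_i
    where abundance_i is in pmol P450 per mg microsomal protein."""
    per_enzyme = {e: isef[e] * cl_rh[e] * abundance[e] for e in cl_rh}
    total = sum(per_enzyme.values())
    # Fraction of metabolism attributed to each enzyme (fm):
    fm = {e: v / total for e, v in per_enzyme.items()}
    return total, fm

# Illustrative (hypothetical) values for a compound cleared by 3A4 and 2D6:
cl_rh = {"CYP3A4": 2.0, "CYP2D6": 1.0}          # uL/min/pmol
isef = {"CYP3A4": 0.3, "CYP2D6": 0.5}           # hypothetical ISEFs
abundance = {"CYP3A4": 100.0, "CYP2D6": 10.0}   # pmol/mg HLM protein
total, fm = hlm_clint_from_rhp450(cl_rh, isef, abundance)
print(total, round(fm["CYP3A4"], 3))  # → 65.0 0.923
```

The same per-enzyme terms give both the scaled HLM clearance and the fraction metabolized, which is why the method identifies the participating enzymes "for free".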
An extrapolation scheme for solid-state NMR chemical shift calculations
Nakajima, Takahito
2017-06-01
Conventional quantum chemical and solid-state physical approaches include several problems to accurately calculate solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches but reduces their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values have only a small dependence on the low-level density functional theory calculation with the extrapolation scheme. Thus, our approach is efficient because the rough calculation can be performed in the extrapolation scheme.
Chiral extrapolation of nucleon axial charge gA in effective field theory
Li, Hong-na; Wang, P.
2016-12-01
The extrapolation of nucleon axial charge gA is investigated within the framework of heavy baryon chiral effective field theory. The intermediate octet and decuplet baryons are included in the one loop calculation. Finite range regularization is applied to improve the convergence in the quark-mass expansion. The lattice data from three different groups are used for the extrapolation. At physical pion mass, the extrapolated gA are all smaller than the experimental value. Supported by National Natural Science Foundation of China (11475186) and Sino-German CRC 110 (NSFC 11621131001)
Frequency Extrapolation by Floating Genetic Algorithm Based on GTD Model for Radar Cross Section
Institute of Scientific and Technical Information of China (English)
YANG Zhenglong; FANG Dagang; SHENG Weixing; LIU Tiejun; ZHUANG Jing
2001-01-01
A frequency extrapolation scheme is developed to effectively predict radar cross section using a floating genetic algorithm based on the GTD (geometrical theory of diffraction) model. The parameterized model is used to extrapolate the frequency response to a higher (or lower) frequency band, and some practical targets are calculated to test the effectiveness of the method. The influence of extrapolation on the range profile is studied. Furthermore, the relationship between fitting precision and extrapolation ability is considered. Different extrapolation procedures are discussed.
Yurkin, Maxim A; Hoekstra, Alfons G
2006-01-01
We propose an extrapolation technique that allows accuracy improvement of discrete dipole approximation computations. The performance of this technique was studied empirically, based on extensive simulations for 5 test cases using many different discretizations. The quality of the extrapolation improves with refining discretization, reaching extraordinary performance especially for cubically shaped particles. A two-order-of-magnitude decrease in error was demonstrated. We also propose estimates of the extrapolation error, which were proven to be reliable. Finally, we propose a simple method to directly separate shape and discretization errors and illustrate it for one test case.
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Naess, Arvid; Saha, Nilanjan;
2011-01-01
The paper explores a recently developed method for statistical response load (load effect) extrapolation for application to extreme response of wind turbines during operation. The extrapolation method is based on average conditional exceedance rates and is in the present implementation restricted … out-of-plane bending moment and the tower mudline bending moment of a pitch-controlled wind turbine. In general, the results show that the method based on average conditional exceedance rates predicts the extrapolated characteristic response loads at the individual mean wind speeds well and results in more consistent …
Extrapolation from A∞, vector-valued inequalities and applications in the Schrödinger settings
Tang, Lin
2014-04-01
In this paper, we generalize the A∞ extrapolation theorem (Cruz-Uribe, Martell, Pérez, Extrapolation from A∞ weights and applications, J. Funct. Anal. 213 (2004), 412-439) and the Ap extrapolation theorem of Rubio de Francia to the Schrödinger setting. In addition, we establish weighted vector-valued inequalities for Schrödinger-type maximal operators by using weights belonging to a class that includes Ap. As applications, we establish weighted vector-valued inequalities for some Schrödinger-type operators.
Earth-based and Cassini-spacecraft Observations of Irregular Moons of Jupiter and Saturn
Denk, Tilmann; Mottola, S.; Roatsch, T.; Rosenberg, H.; Neukum, G.
2010-10-01
We observed irregular satellites of Jupiter and Saturn with the ISS camera of the Cassini spacecraft [1] and with the 1.23-m telescope of the Calar Alto observatory in Spain [2]. Scientific goals are the determination of rotation periods, rotation-axis orientations, spin directions, size parameters, color properties, phase curves, and searches for binaries. Himalia (J6), the largest of the irregular jovian moons, has been imaged by Cassini on 18 Dec 2000; a body size of 120±5 km x 150±10 km and an albedo of 0.05±0.01 have been measured [3,4]. Earth-based observations revealed that Himalia's rotation period is probably 9.3 h, which is in agreement with the 9.2 to 9.8 h suggested by [5], although periods of 7.8 or 11.7 h cannot be ruled out yet. In the saturnian system, 10 irregular moons were scheduled for Cassini ISS observations over time spans >9 hrs until end-of-August, 2010. Observation distances vary between 5.6 and 22 million km, corresponding to ISS pixel scales of 34 to 130 km. For the objects measured so far, the rotation periods vary significantly. For instance, Siarnaq (S/2000 S3; size 40 km) and Ymir (S/2000 S1; 18 km) exhibit rotation periods of 6.7 h and 7.3 h, respectively, while Kiviuq (S/2000 S5; 16 km) might take about 22 h for one rotation. First results from the observation campaigns will be presented at the meeting. References: [1] Porco, C.C., et al. (2004), Space Sci. Rev. 115, 363; [2] http://www.caha.es/CAHA/Telescopes/1.2m.html; [3] Denk, T. et al. (2001), Conference on Jupiter (Planet, Satellites & Magnetosphere), Boulder, CO, 25-30 June 2001, abstracts book p. 30-31; [4] Porco, C.C., et al. (2003), Science 299, 1541; [5] Degewij, J., et al. (1980), Icarus 44, 520. We gratefully acknowledge funding by the German Space Agency (DLR) Bonn through grant no. 50 OH 0305.
Cross-species extrapolation of toxicity data from limited surrogate test organisms to all wildlife with potential of chemical exposure remains a key challenge in ecological risk assessment. A number of factors affect extrapolation, including the chemical exposure, pharmacokinetic...
NLT and extrapolated DLT:3-D cinematography alternatives for enlarging the volume of calibration.
Hinrichs, R N; McLean, S P
1995-10-01
This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that when possible one should use the DLT with a control object, sufficiently large as to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
Melting of "non-magic" argon clusters and extrapolation to the bulk limit
Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke
2014-01-01
The melting of argon clusters ArN is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K in good agreement with the previous value of 88.9 K using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, "Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations," Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that for the extrapolation to the bulk one does not have to restrict to magic number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially for the systematic selection of suitable cluster sizes.
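The extrapolation to the bulk discussed above is commonly done with the standard cluster-size scaling Tm(N) ≈ Tm(∞) − c·N^(−1/3), a linear fit in N^(−1/3) whose intercept is the bulk melting temperature. The sketch below assumes that scaling with synthetic data; it is not the paper's actual fitting procedure or data:

```python
import numpy as np

def bulk_melting_temperature(sizes, temps):
    """Extrapolate cluster melting temperatures to the bulk limit,
    assuming the common scaling T_m(N) = T_m(inf) - c * N**(-1/3)."""
    x = np.asarray(sizes, float) ** (-1.0 / 3.0)
    # Linear least-squares fit T = a*x + b; the intercept b is the
    # bulk value, reached as x = N**(-1/3) -> 0 (N -> infinity).
    a, b = np.polyfit(x, np.asarray(temps, float), 1)
    return b

# Synthetic cluster data generated from T_m(inf) = 85.9 K, c = 60 K:
sizes = np.array([55, 147, 309])
temps = 85.9 - 60.0 * sizes ** (-1.0 / 3.0)
print(round(bulk_melting_temperature(sizes, temps), 1))  # → 85.9
```

Because the fit is linear in N^(−1/3), non-magic cluster sizes are as usable as magic ones, which is the point the abstract makes.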
[Effects of spatial heterogeneity on spatial extrapolation of sampling plot data].
Liang, Yu; He, Hong-Shi; Hu, Yuan-Man; Bu, Ren-Cang
2012-01-01
By using model combination method, this paper simulated the changes of response variable (tree species distribution area at landscape level under climate change) under three scenarios of environmental spatial heterogeneous level, analyzed the differentiation of simulated results under different scenarios, and discussed the effects of environmental spatial heterogeneity on the larger spatial extrapolation of the tree species responses to climate change observed in sampling plots. For most tree species, spatial heterogeneity had little effects on the extrapolation from plot scale to class scale; for the tree species insensitive to climate warming and the azonal species, spatial heterogeneity also had little effects on the extrapolation from plot-scale to zonal scale. By contrast, for the tree species sensitive to climate warming, spatial heterogeneity had effects on the extrapolation from plot scale to zonal scale, and the effects could be varied under different scenarios.
The extrapolation of creep rupture data by PD6605 - An independent case study
Energy Technology Data Exchange (ETDEWEB)
Bolton, J., E-mail: john.bolton@uwclub.net [65 Fisher Avenue, Rugby, Warks CV22 5HW (United Kingdom)
2011-04-15
The worked example presented in BSI document PD6605-1:1998, to illustrate the selection, validation and extrapolation of a creep rupture model using statistical analysis, was independently examined. Alternative rupture models were formulated and analysed by the same statistical methods, and were shown to represent the test data more accurately than the original model. Median rupture lives extrapolated from the original and alternative models were found to diverge widely under some conditions of practical interest. The tests prescribed in PD6605 and employed to validate the original model were applied to the better of the alternative models. But the tests were unable to discriminate between the two, demonstrating that these tests fail to ensure reliability in extrapolation. The difficulties of determining when a model is sufficiently reliable for use in extrapolation are discussed and some proposals are made.
Optimal channels of the Garvey-Kelson mass relations in extrapolation
Bao, Man; He, Zeng; Cheng, YiYuan; Zhao, YuMin; Arima, Akito
2017-02-01
Garvey-Kelson mass relations connect nuclear masses of neighboring nuclei within high accuracy, and provide us with convenient tools in predicting unknown masses by extrapolations from existent experimental data. In this paper we investigate optimal "channels" of the Garvey-Kelson relations in extrapolation to the unknown regions, and tabulate our predicted masses by using these optimized channels of the Garvey-Kelson relations.
Wadsworth, Ian; Jaki, Thomas; Sills, Graeme J; Appleton, Richard; Cross, J Helen; Marson, Anthony G; Martland, Tim; McLellan, Ailsa; Smith, Philip E. M.; Pellock, John M; Hampson, Lisa V.
2016-01-01
Data from clinical trials in adults, extrapolated to predict benefits in paediatric patients, could result in fewer or smaller trials being required to obtain a new drug licence for paediatrics. This article outlines the place of such extrapolation in the development of drugs for use in paediatric epilepsies. Based on consensus expert opinion, a proposal is presented for a new paradigm for the clinical development of drugs for focal epilepsies. Phase I data should continue to be collected in ...
A spectral invariant representation of spectral reflectance
Ibrahim, Abdelhameed; Tominaga, Shoji; Horiuchi, Takahiko
2011-03-01
Spectral image acquisition, like color image acquisition, is affected by several illumination factors such as shading, gloss, and specular highlights. Spectral invariant representations for these factors have been proposed for the standard dichromatic reflection model of inhomogeneous dielectric materials. However, these representations are inadequate for other materials such as metals. This paper proposes a more general spectral invariant representation for obtaining reliable spectral reflectance images. Our invariant representation is derived from the standard dichromatic reflection model for dielectric materials and the extended dichromatic reflection model for metals. We prove that the invariant formulas for spectral images of natural objects preserve spectral information and are invariant to highlights, shading, surface geometry, and illumination intensity. It is also proved that the conventional spectral invariant technique can be applied to metals in addition to dielectric objects. Experimental results show that the proposed spectral invariant representation is effective for image segmentation.
In situ LTE exposure of the general public: Characterization and extrapolation.
Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc
2012-09-01
In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields.
The LCROSS Ejecta Plume Revealed: First Characterization from Earth-based Imaging
Miller, C.; Chanover, N.; Hermalyn, B.; Strycker, P. D.; Hamilton, R. T.; Suggs, R. M.
2012-12-01
… re-extracted the synthetic plume brightness profiles using the identical PCA filtering algorithm used to detect the LCROSS plume and compared results. With this method, we found that the LCROSS plume reached a peak brightness as viewed from Earth of approximately 9.8 magnitudes/arcsec^2. By varying initial particle ejection angles and velocities in our synthetic plume simulations, we were able to create a family of possible brightness profiles to compare to the detected plume. This comparison yielded constraints on the maximum initial plume particle velocities and ejection angles. We present the results of our LCROSS plume detection and discuss the range of constraints on plume initial conditions implied by our model simulations. These ground-based observations provide a unique and complementary view of the LCROSS impact ejecta compared to that provided by LCROSS S/SC and LRO, which observed the plume from above. This Earth-based data set provides a cross-sectional view and therefore provides unique information necessary to constrain initial conditions of the LCROSS ejecta and, by inference, properties of the lunar regolith on the floor of Cabeus crater.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
The optimized gradient method for full waveform inversion and its spectral implementation
Wu, Zedong
2016-03-28
At the heart of the full waveform inversion (FWI) implementation is wavefield extrapolation, and specifically its accuracy and cost. To obtain accurate, dispersion free wavefields, the extrapolation for modelling is often expensive. Combining an efficient extrapolation with a novel gradient preconditioning can render an FWI implementation that efficiently converges to an accurate model. We, specifically, recast the extrapolation part of the inversion in terms of its spectral components for both data and gradient calculation. This admits dispersion free wavefields even at large extrapolation time steps, which improves the efficiency of the inversion. An alternative spectral representation of the depth axis in terms of sine functions allows us to impose a free surface boundary condition, which reflects our medium boundaries more accurately. Using a newly derived perfectly matched layer formulation for this spectral implementation, we can define a finite model with absorbing boundaries. In order to reduce the nonlinearity in FWI, we propose a multiscale conditioning of the objective function through combining the different directional components of the gradient to optimally update the velocity. Through solving a simple optimization problem, it specifically admits the smoothest approximate update while guaranteeing its ascending direction. An application to the Marmousi model demonstrates the capability of the proposed approach and justifies our assertions with respect to cost and convergence.
Institute of Scientific and Technical Information of China (English)
Ying Taokai(应桃开); Gao Xueping(高学平); Hu Weikang(胡伟康); Noréus Dag
2004-01-01
Rare earth-based AB5-type hydrogen storage alloys as catalysts of hydrogen-diffusion electrodes for hydrogen absorption and oxidation reactions in alkaline fuel cells were investigated. It is demonstrated that the metal-hydride hydrogen-diffusion electrodes can be charged with hydrogen gas and electrochemically discharged at the same time, retaining a stable oxidation potential for a long period. The catalytic activity and stability are almost comparable with those of a Pt catalyst on activated carbon. Further improvement in performance is expected via reduction of the catalyst size to the nanometer scale.
Mueller, David S.
2013-04-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
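The power velocity-distribution law that extrap applies to the unmeasured top and bottom of the profile can be sketched as follows. The fitting function and synthetic profile below are illustrative only, not the USGS extrap code:

```python
import numpy as np

def fit_power_exponent(z_norm, v_norm):
    """Fit b in v = a * z**b (the power velocity-distribution law)
    by least squares in log space. z_norm is height above the bed
    normalized by total depth; v_norm is normalized velocity."""
    b, ln_a = np.polyfit(np.log(z_norm), np.log(v_norm), 1)
    return b

# Synthetic measured profile following the classic 1/6-power law:
z = np.linspace(0.1, 0.9, 9)
v = z ** (1.0 / 6.0)
b = fit_power_exponent(z, v)
print(round(b, 3))  # → 0.167

# Fraction of unit-width discharge below the lowest measured bin
# (z < 0.1), from integrating the fitted law: z1**(b + 1).
print(round(0.1 ** (b + 1.0), 4))  # → 0.0681
```

Integrating the fitted law analytically over the unmeasured ranges is what makes a cross-section-wide exponent, rather than a per-ensemble one, attractive: the exponent is estimated once from all normalized data and reused for both edges of the profile.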
Ardekani, Mohammad Ali; Nafisi, Vahid Reza; Farhani, Foad
2012-10-01
A hot-wire spirometer is a kind of constant temperature anemometer (CTA). The working principle of a CTA, used for the measurement of fluid velocity and flow turbulence, is based on convective heat transfer from a hot-wire sensor to the fluid being measured. The calibration curve of a CTA is nonlinear and cannot be easily extrapolated beyond its calibration range. Therefore, a method for extrapolation of the CTA calibration curve is of great practical value. In this paper, a novel approach based on a conventional neural network and the self-organizing map (SOM) method is proposed to extrapolate the CTA calibration curve for measurement of velocity in the range 0.7-30 m/s. Results show that, using this approach to extrapolate the CTA calibration curve beyond its upper limit, the standard deviation is about -0.5%, which is acceptable in most cases. Moreover, this approach for extrapolation of the CTA calibration curve below its lower limit produces a standard deviation of about 4.5%, which is acceptable in spirometry applications. Finally, the standard deviation over the whole measurement range (0.7-30 m/s) is about 1.5%.
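CTA calibration curves of the kind being extrapolated here are conventionally modeled by King's law, E² = A + B·Uⁿ. The sketch below fits that form to synthetic voltages by scanning n and solving linearly for A and B; the grid-search fit is an illustrative choice, not the paper's neural-network/SOM method:

```python
import numpy as np

def fit_kings_law(U, E):
    """Fit A, B, n in King's law E**2 = A + B*U**n (the usual CTA
    calibration form) by scanning n and solving linearly for A, B."""
    best = None
    for n in np.linspace(0.3, 0.7, 401):
        M = np.column_stack([np.ones_like(U), U ** n])
        sol, *_ = np.linalg.lstsq(M, E ** 2, rcond=None)
        r = np.sum((M @ sol - E ** 2) ** 2)   # residual at this n
        if best is None or r < best[0]:
            best = (r, sol[0], sol[1], n)
    return best[1:]                            # A, B, n

U = np.linspace(1.0, 10.0, 20)                 # calibration range, m/s
E = np.sqrt(1.2 + 0.8 * U ** 0.45)             # synthetic sensor voltages
A, B, n = fit_kings_law(U, E)
print(round(n, 2))  # → 0.45
```

Inverting the fitted law beyond the calibration range is exactly the step that is unreliable for a CTA, which is why the abstract resorts to a learned extrapolation instead of King's law alone.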
Choice of order and extrapolation method in Aarseth-type N-body algorithms
Press, William H.; Spergel, David N.
1988-02-01
The force-versus-time history of a typical particle in a 50-body King model is taken as input data, and its 'extrapolatability' is measured. Extrapolatability means how far the force can be extrapolated, measured in units of a locally defined rate-of-change time scale, and still be within a specified fractional accuracy of the true values. Greater extrapolatability means larger step size, hence greater efficiency, in an Aarseth-type N-body code. Extrapolatability is found to depend systematically on the order of the extrapolation method, but it goes to a finite limit in the limit of large order. A formula for choosing the optimal (most efficient) order for any desired accuracy is given; higher orders than are presently in use are indicated. Neither rational function extrapolation nor a somewhat vector-regularized polynomial method is found to be systematically better than component-wise polynomial extrapolation, indicating that extrapolatability can be viewed as an intrinsic property of the underlying N-body forces, independent of the extrapolation method.
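The notion of extrapolatability above can be made concrete as the number of steps a polynomial fit stays within a specified tolerance of the true values beyond its fitting window. The function below is an illustrative reconstruction on a smooth synthetic "force history", not Press & Spergel's code:

```python
import numpy as np

def extrapolatability(f, t_fit, order, eps, dt=0.01, t_max=10.0):
    """Count steps beyond the fit window until the polynomial
    extrapolation of f deviates from the true values by more
    than eps (relative error)."""
    coef = np.polyfit(t_fit, f(t_fit), order)
    t = t_fit[-1]
    steps = 0
    while t + dt < t_max:
        t += dt
        err = abs(np.polyval(coef, t) - f(t)) / abs(f(t))
        if err > eps:
            break
        steps += 1
    return steps

f = lambda t: np.cos(t) + 2.0            # a smooth synthetic force history
t_fit = np.linspace(0.0, 1.0, 8)
low, high = (extrapolatability(f, t_fit, p, 1e-3) for p in (2, 5))
print(low < high)  # higher order extrapolates further → True
```

On smooth data the higher-order fit extrapolates further at fixed accuracy, consistent with the abstract's recommendation of higher orders than were then in use, while the gain saturates as the order grows.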
Hamhalter, Jan; Turilova, Ekaterina
2017-02-01
Quantum symmetries of spectral lattices are studied. Basic properties of the spectral order on AW*-algebras are summarized. The connection between projection and spectral automorphisms is clarified by showing that, under mild conditions, any spectral automorphism is a composition of a function calculus with a Jordan *-automorphism. Quantum spectral symmetries on type I and type II AW*-factors are completely described.
DEFF Research Database (Denmark)
Ambühl, Simon; Sterndorff, Martin; Sørensen, John Dalsgaard
2014-01-01
Mooring systems for floating wave energy converters (WECs) are a major cost driver. Failure of mooring systems often occurs due to extreme loads. This paper introduces an extrapolation method for extreme response which accounts for the control system of a WEC that controls the loads onto the structure and the harvested power of the device, as well as the fact that extreme loads may occur during operation and not at extreme wave states when the device is in storm protection mode. The extrapolation method is based on short-term load time series and applied to a case study where up-scaled surge load …
An extrapolation approach for aeroengine’s transient control law design
Institute of Scientific and Technical Information of China (English)
Kong Xiangxing; Wang Xi; Tan Daoliang; He Ai; Liu Yue
2013-01-01
A transient control law ensures that the aeroengine transits to the commanded operating state rapidly and reliably. Most of the existing approaches for transient control law design involve complicated principles and arithmetic, and as a result are not convenient in application. This paper proposes an extrapolation approach based on set-point parameters to construct the transient control law, which is highly practical. In this approach, the transient main-fuel control law for the acceleration and deceleration processes is designed based on the main fuel flow at steady operating states. In order to analyze the design features of the extrapolation approach, the simulation results of several different transient control laws designed by the same approach are compared. The analysis indicates that the aeroengine performs well in the transient process and that the design features of the extrapolation approach conform to the elements of the turbofan aeroengine.
Jaffrin, M Y; Maasrani, M; Le Gourrier, A; Boudailliez, B
1997-05-01
A method is presented for monitoring the relative variation of extracellular and intracellular fluid volumes using a multifrequency impedance meter and the Cole-Cole extrapolation technique. It is found that this extrapolation is necessary to obtain reliable data for the resistance of the intracellular fluid. The extracellular and intracellular resistances can be approximated using frequencies of 5 kHz and 1000 kHz, respectively, but the use of 100 kHz leads to unacceptable errors. In the conventional treatment, the overall relative variation of intracellular resistance is found to be relatively small.
An Extrapolation Method of Vector Magnetic Field via Surface Integral Technique
Institute of Scientific and Technical Information of China (English)
YAN Hui; XIAO Chang-han; ZHOU Guo-hua
2009-01-01
According to the integral relationship between the vector magnetic flux density at a spatial point and that over a closed surface around the magnetic sources, a technique for the extrapolation of the vector magnetic field of a ferromagnetic object is given without computing the scalar potential and its gradient. The vector magnetic flux density at a remote spatial point can be extrapolated by surface integration from the vector values over a closed measurement surface around the ferromagnetic object. The correctness of the technique is verified by a specific example and by simulation. The experimental results show that its accuracy is satisfactory and the execution time is less than 1 second.
Zhao, Yi-Gong; Corsini, G.; Dalle Mese, E.
The method of extrapolation of frequency data based on the finite size property of the Gerchberg-Papoulis algorithm is used to address the problem of radar image enhancement. The rate of convergence of the algorithm and the behavior of noise-affected data are discussed. Simulation results show that the convergence rate can be very slow, depending on the ratio of the amount of extrapolated data to that of observed data. This behavior is due to the eigenvalues of the system matrix close to 1.
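The Gerchberg-Papoulis iteration referenced above alternates between enforcing the known band limit in the frequency domain and restoring the observed samples in the signal domain. A minimal sketch on a synthetic band-limited sequence (the sizes, band, and iteration count are illustrative):

```python
import numpy as np

def gerchberg_papoulis(known, mask, band, n_iter=300):
    """Extrapolate a band-limited sequence from the samples selected
    by `mask`, alternating band-limiting (FFT domain) with
    re-insertion of the observed data (signal domain)."""
    x = np.where(mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0                 # enforce the known band limit
        x = np.fft.ifft(X).real
        x[mask] = known[mask]          # restore the observed samples
    return x

n = 64
t = np.arange(n)
true = np.cos(2 * np.pi * 2 * t / n)   # band-limited: DFT bins 2 and 62
mask = np.zeros(n, bool); mask[:56] = True           # observe 56 samples
band = np.zeros(n, bool); band[:3] = True; band[-2:] = True  # |k| <= 2
rec = gerchberg_papoulis(true, mask, band)
print(np.max(np.abs(rec - true)) < 0.05)  # → True
```

As the abstract notes, the convergence rate degrades as the ratio of extrapolated to observed samples grows; shrinking `mask` here slows convergence markedly.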
Extrapolation of Extreme Response for Wind Turbines based on FieldMeasurements
DEFF Research Database (Denmark)
Toft, Henrik Stensgaard; Sørensen, John Dalsgaard
2009-01-01
The characteristic loads on wind turbines during operation are, among other things, dependent on the mean wind speed, the turbulence intensity, and the type and settings of the control system. These parameters must be taken into account in the assessment of the characteristic load. The characteristic load … extrapolation are presented. The first method is based on the same assumptions as the existing method, but the statistical extrapolation is only performed for a limited number of mean wind speeds where the extreme load is likely to occur. For the second method the mean wind speeds are divided into storms which …
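Statistical load extrapolation of the kind discussed above is typically done by fitting an extreme-value distribution to measured block maxima and reading off a long-return-period quantile. A minimal Gumbel sketch with illustrative (made-up) numbers and a simple method-of-moments fit, not the specific procedure of the paper:

```python
import math

def gumbel_extrapolate(maxima, n_target):
    """Fit a Gumbel distribution to block maxima (method of moments)
    and return the load exceeded once in n_target blocks on average."""
    m = sum(maxima) / len(maxima)
    var = sum((x - m) ** 2 for x in maxima) / (len(maxima) - 1)
    beta = math.sqrt(6.0 * var) / math.pi       # Gumbel scale
    mu = m - 0.5772156649 * beta                # Gumbel location
    # Quantile with exceedance probability 1/n_target:
    return mu - beta * math.log(-math.log(1.0 - 1.0 / n_target))

# 10-min maxima (kNm, illustrative); 50 years of 10-min blocks:
maxima = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9, 4.6, 4.2, 4.4, 4.0]
n_50yr = 50 * 365.25 * 24 * 6
print(round(gumbel_extrapolate(maxima, n_50yr), 1))  # → 7.1
```

The huge leverage of the fit (ten observations extrapolated to millions of blocks) is exactly why the choice of mean wind speeds and storm segmentation studied in the paper matters.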
Extrapolation of neutron-rich isotope cross-sections from projectile fragmentation
Mocko, M; Sun, Z Y; Andronenko, L; Andronenko, M; Delaunay, F; Famiano, M; Friedman, W A; Henzl, V; Henzlova, D; Hui, H; Liu, X D; Lukyanov, S; Lynch, W G; Rogers, A M; Wallace, M S
2007-01-01
Using the measured fragmentation cross sections produced from the 48Ca and 64Ni beams at 140 MeV per nucleon on 9Be and 181Ta targets, we find that the cross sections of unmeasured neutron-rich nuclei can be extrapolated using a systematic trend involving the average binding energy. The extrapolated cross sections will be very useful in planning experiments with neutron-rich isotopes produced from projectile fragmentation. The proposed method is general and could be applied to other fragmentation systems, including those used in other radioactive ion beam facilities.
The Spectral Shift Function and Spectral Flow
Azamov, N. A.; Carey, A. L.; Sukochev, F. A.
2007-11-01
At the 1974 International Congress, I. M. Singer proposed that eta invariants, and hence spectral flow, should be thought of as the integral of a one-form. In the intervening years this idea has led to many interesting developments in the study of both eta invariants and spectral flow. Using ideas of [24], Singer's proposal was brought to an advanced level in [16], where a very general formula for spectral flow as the integral of a one-form was produced in the framework of noncommutative geometry. This formula can be used for computing spectral flow in a general semifinite von Neumann algebra, as described and reviewed in [5]. In the present paper we take the analytic approach to spectral flow much further by giving a large family of formulae for spectral flow between a pair of unbounded self-adjoint operators D and D + V, with D having compact resolvent belonging to a general semifinite von Neumann algebra 𝒩 and the perturbation V in 𝒩. In noncommutative geometry terms, we remove summability hypotheses. This level of generality is made possible by introducing a new idea from [3]. There it was observed that M. G. Krein's spectral shift function (in certain restricted cases with V trace class) computes spectral flow. The present paper extends Krein's theory to the setting of semifinite spectral triples, where D has compact resolvent belonging to 𝒩 and V is any bounded self-adjoint operator in 𝒩. We give a definition of the spectral shift function under these hypotheses and show that it computes spectral flow. This is made possible by the understanding, discovered in the present paper, of the interplay between spectral shift function theory and the analytic theory of spectral flow. It is this interplay that enables us to take Singer's idea much further to create a large class of one-forms whose integrals calculate spectral flow. These advances depend critically on a new approach to the calculus of functions of non…
A least square extrapolation method for improving solution accuracy of PDE computations
Garbey, M
2003-01-01
Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis, such as quadrature or time integration of ODEs. In theory, RE can also be used on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on the different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of automatically finding the order of a method as the solution of a least-squares minimization problem on the residual. We introduce two-level and three-level least-squares extrapolation methods that work on nonmatching embedded grid solutions via spline interpolation. Our least-squares extrapolation method is a post-processing of data produced by existing PDE codes that is easy to implement and can be a better tool than RE for code v...
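The idea the abstract generalizes can be sketched in a few lines: given solutions on three nested grids, recover the observed convergence order from successive differences and extrapolate to the grid-free limit. This is a minimal single-term illustration with hypothetical scalar values; the paper's method instead fits the order by least squares over the whole residual, on nonmatching grids.

```python
import numpy as np

def richardson_order_and_limit(u_h, u_h2, u_h4):
    """Estimate the observed convergence order p and the extrapolated limit
    from solutions on grids of spacing h, h/2, h/4 (hypothetical scalars).

    Assumes the single-term error model u_h = u* + C*h**p, which implies
    (u_h - u_h2) / (u_h2 - u_h4) = 2**p.
    """
    p = np.log2(abs((u_h - u_h2) / (u_h2 - u_h4)))
    # Eliminate the leading error term using the two finest solutions.
    u_star = u_h4 + (u_h4 - u_h2) / (2.0**p - 1.0)
    return p, u_star
```

When the observed p varies over the domain, this pointwise recipe becomes unreliable, which is precisely the motivation for replacing it with a least-squares fit.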
Uncertainty in vertical extrapolation of wind statistics: shear-exponent and WAsP/EWA methods
DEFF Research Database (Denmark)
Kelly, Mark C.
for uncertainties inherent in determination of (wind) shear exponents, and subsequent vertical extrapolation of wind speeds. The report further outlines application of the theory and results of Kelly & Troen (2014-6) for gauging the uncertainty inherent in use of the European Wind Atlas (EWA) / WAsP method...
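The shear-exponent method mentioned above rests on the power-law profile u(z) = u_ref (z/z_ref)^alpha; uncertainty in the inferred alpha propagates directly into the extrapolated speed. A minimal sketch of the two steps, with illustrative heights and speeds (function names are ours, not from the report):

```python
import math

def shear_exponent(u1, z1, u2, z2):
    """Infer the power-law shear exponent alpha from wind speeds u1, u2
    measured at heights z1, z2 (illustrative inputs)."""
    return math.log(u2 / u1) / math.log(z2 / z1)

def extrapolate_wind(u_ref, z_ref, z_target, alpha):
    """Power-law vertical extrapolation: u(z) = u_ref * (z / z_ref)**alpha."""
    return u_ref * (z_target / z_ref) ** alpha
```

Because alpha enters through an exponent, a small measurement error at the two reference heights grows with the extrapolation distance z_target/z_ref, which is the uncertainty the report quantifies.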
Photon neutrino-production in a chiral EFT for nuclei and extrapolation to $E_{\
Zhang, Xilin
2013-01-01
We carry out a series of studies on pion and photon production in neutrino/electron/photon-nucleus scattering. The low energy region is investigated by using a chiral effective field theory for nuclei. The results for the neutral current induced photon production ($\gamma$-NCP) are then extrapolated to neutrino energy $E_{\
Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations
Energy Technology Data Exchange (ETDEWEB)
Liu, Xiang-Yang [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Andersson, Anders David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-11-07
In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties are functions of temperature (e.g. fission gas release), the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such an analysis.
Groeneveld, C.N.; Hakkert, B.C.; Bos, P.M.J.; Heer, C.de
2004-01-01
For human risk assessment, experimental data often have to be extrapolated for exposure duration, which is generally done by means of default values. The purpose of the present study was twofold. First, to derive a statistical distribution for differences in exposure duration that can be used in a p
Wu, G.; Skidmore, A.K.; Leeuw, de J.; Liu, X.; Prins, H.H.T.
2010-01-01
Measurements of photosynthetically active radiation (PAR), which are indispensable for simulating plant growth and productivity, are generally very scarce. This study aimed to compare two extrapolation methods and one interpolation method for estimating daily PAR reaching the earth surface within the Poyan
Senjean, Bruno; Alam, Md Mehboob; Knecht, Stefan; Fromager, Emmanuel
2015-01-01
The combination of a recently proposed linear interpolation method (LIM) [Senjean et al., Phys. Rev. A 92, 012518 (2015)], which enables the calculation of weight-independent excitation energies in range-separated ensemble density-functional approximations, with the extrapolation scheme of Savin [J. Chem. Phys. 140, 18A509 (2014)] is presented in this work. It is shown that LIM excitation energies vary quadratically with the inverse of the range-separation parameter $\mu$ when the latter is large. As a result, the extrapolation scheme, which is usually applied to long-range interacting energies, can be adapted straightforwardly to LIM. This extrapolated LIM (ELIM) has been tested on a small test set consisting of He, Be, H2 and HeH+. Relatively accurate results have been obtained for the first singlet excitation energies with the typical $\mu = 0.4$ value. The improvement of LIM after extrapolation is remarkable, in particular for the doubly-excited $2^1\Sigma_g^+$ state in the stretched H2 molecule. Three-state ensemble ...
Scaling and chiral extrapolation of pion mass and decay constant with maximally twisted mass QCD
Dimopoulos, P; Herdoiza, G; Jansen, K; Michael, C; Urbach, C
2008-01-01
We present an update of the results for pion mass and pion decay constant as obtained by the ETM collaboration in large scale simulations with maximally twisted mass fermions and two mass degenerate flavours of light quarks. We discuss the continuum, chiral and infinite volume extrapolation of these quantities as well as the extraction of low energy constants, and investigate possible systematic uncertainties.
Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian
2014-07-01
Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species' evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals ("MammalDIET"). Diet information was digitized from two global and clade-wide data sources, and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family), and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continent-wide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external
Inference of Surface Chemical and Physical Properties Using Mid-Infrared (MIR) Spectral Observations
Roush, Ted L.
2016-01-01
Reflected or emitted energy from solid surfaces in the solar system can provide insight into thermo-physical and chemical properties of the surface materials. Measurements have been obtained from instruments located on Earth-based telescopes and carried on several space missions. The characteristic spectral features commonly observed in Mid-Infrared (MIR) spectra of minerals will be reviewed, along with methods used for compositional interpretations of MIR emission spectra. The influence of surface grain size, and space weathering processes on MIR emissivity spectra will also be discussed. Methods used for estimating surface temperature, emissivity, and thermal inertias from MIR spectral observations will be reviewed.
Levy, Aharon; Cohen, Giora; Gilat, Eran; Kapon, Joseph; Dachir, Shlomit; Abraham, Shlomo; Herskovitz, Miriam; Teitelbaum, Zvi; Raveh, Lily
2007-05-01
The extrapolation from animal data to therapeutic effects in humans, a basic pharmacological issue, is especially critical in studies aimed at estimating the protective efficacy of drugs against nerve agent poisoning. Such efficacy can only be predicted by extrapolation of data from animal studies to humans. In pretreatment therapy against nerve agents, careful dose determination is even more crucial than in antidotal therapy, since excessive doses may lead to adverse effects or performance decrements. The common method of comparing dose per body weight, still used in some studies, may lead to erroneous extrapolation. A different approach is based on the comparison of plasma concentrations at steady state required to obtain a given pharmacodynamic endpoint. In the present study, this approach was applied to predict the prophylactic efficacy of the anticholinergic drug caramiphen in combination with pyridostigmine in man based on animal data. In two species of large animals, dogs and monkeys, similar plasma concentrations of caramiphen (in the range of 60-100 ng/ml) conferred adequate protection against exposure to a lethal dose of sarin (1.6-1.8 LD(50)). Pharmacokinetic studies at steady state were required to achieve the correlation between caramiphen plasma concentrations and therapeutic effects. Evaluation of total plasma clearance values was instrumental in establishing desirable plasma concentrations and minimizing the number of animals used in the study. Previous data in the literature for plasma levels of caramiphen that do not lead to overt side effects in humans (70-100 ng/ml) enabled extrapolation to expected human protection. The method can be applied to other drugs and other clinical situations, in which human studies are impossible due to ethical considerations. When similar dose response curves are obtained in at least two animal models, the extrapolation to expected therapeutic effects in humans might be considered more reliable.
Directory of Open Access Journals (Sweden)
S. A. Banin
2016-01-01
Forecasting methods, extrapolation ones in particular, are used in health care for medical, biological and clinical research. The author, searching the accessible internet space, found no publications devoted to extrapolation of financial parameters of health care activities. This determined the relevance of the material presented in the article: based on health care financing dynamics in Russia in 2000–2010, the author examined the applicability of the basic prospective extrapolation methods: moving average, exponential smoothing and least squares. It is hypothesized that all three methods can equally forecast actual public expenditures on health care in the medium term under Russia's current financial and economic conditions. The study result was evaluated over two time periods: within the studied interval and over a five-year period beyond it. It was found that within the study period all methods have an average relative extrapolation error of 3–5%, which means high precision of the forecast. The study showed a specific feature of the least squares method: it gradually accumulates results, so their economic interpretation became possible only at the end of the studied period. That is why the extrapolation results obtained by the least squares method are not applicable within the study period itself and rather have theoretical value. Beyond the study period, however, this method was found to correspond most closely to the real situation. It was the least squares method that proved to be the most appropriate for economic interpretation of the forecast results of actual public expenditures on health care. The hypothesis was not confirmed: the author obtained three differently directed results, each method having independent significance, with its application depending on the evaluation study objectives and the real social, economic and financial situation in the Russian health care system.
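The three prospective extrapolation methods compared in the study can be sketched for a one-step-ahead forecast. This is a simplified illustration; the study's actual windows and smoothing constants are not stated in the abstract, so the parameter values below are assumptions.

```python
import numpy as np

def moving_average_forecast(y, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return float(np.mean(y[-window:]))

def exponential_smoothing_forecast(y, alpha=0.5):
    """Simple exponential smoothing; the final smoothed level is the forecast."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return float(level)

def least_squares_forecast(y, steps_ahead=1):
    """Fit a linear trend by least squares and extrapolate it forward."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)
    return float(intercept + slope * (len(y) - 1 + steps_ahead))
```

The contrast the author reports falls out of these definitions: the first two methods track recent levels, while the least-squares trend accumulates information from the whole series before its extrapolation stabilizes.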
How to Appropriately Extrapolate Costs and Utilities in Cost-Effectiveness Analysis.
Bojke, Laura; Manca, Andrea; Asaria, Miqdad; Mahon, Ronan; Ren, Shijie; Palmer, Stephen
2017-05-03
Costs and utilities are key inputs into any cost-effectiveness analysis. Their estimates are typically derived from individual patient-level data collected as part of clinical studies the follow-up duration of which is often too short to allow a robust quantification of the likely costs and benefits a technology will yield over the patient's entire lifetime. In the absence of long-term data, some form of temporal extrapolation-to project short-term evidence over a longer time horizon-is required. Temporal extrapolation inevitably involves assumptions regarding the behaviour of the quantities of interest beyond the time horizon supported by the clinical evidence. Unfortunately, the implications for decisions made on the basis of evidence derived following this practice and the degree of uncertainty surrounding the validity of any assumptions made are often not fully appreciated. The issue is compounded by the absence of methodological guidance concerning the extrapolation of non-time-to-event outcomes such as costs and utilities. This paper considers current approaches to predict long-term costs and utilities, highlights some of the challenges with the existing methods, and provides recommendations for future applications. It finds that, typically, economic evaluation models employ a simplistic approach to temporal extrapolation of costs and utilities. For instance, their parameters (e.g. mean) are typically assumed to be homogeneous with respect to both time and patients' characteristics. Furthermore, costs and utilities have often been modelled to follow the dynamics of the associated time-to-event outcomes. However, cost and utility estimates may be more nuanced, and it is important to ensure extrapolation is carried out appropriately for these parameters.
Bližňák, Vojtěch; Sokol, Zbyněk; Zacharov, Petr
2017-02-01
An evaluation of convective cloud forecasts performed with the numerical weather prediction (NWP) model COSMO and with extrapolation of cloud fields is presented, using observed data derived from the geostationary satellite Meteosat Second Generation (MSG). The present study focuses on the nowcasting range (1-5 h) for five severe convective storms in their developing stage that occurred during the warm season in the years 2012-2013. Radar reflectivity and extrapolated radar reflectivity data were assimilated for at least 6 h, depending on the time of occurrence of convection. Synthetic satellite imageries were calculated using the radiative transfer model RTTOV v10.2, which was implemented into the COSMO model. NWP model simulations of IR10.8 μm and WV06.2 μm brightness temperatures (BTs) with a horizontal resolution of 2.8 km were interpolated into the satellite projection and objectively verified against observations using Root Mean Square Error (RMSE), correlation coefficient (CORR) and Fractions Skill Score (FSS) values. Naturally, the extrapolation of cloud fields yielded an approximately 25% lower RMSE, 20% higher CORR and 15% higher FSS at the beginning of the second forecasted hour compared to the NWP model forecasts. On the other hand, comparable scores were observed for the third hour, whereas the NWP forecasts outperformed the extrapolation by 10% for RMSE, 15% for CORR and up to 15% for FSS during the fourth forecasted hour and 15% for RMSE, 27% for CORR and up to 15% for FSS during the fifth forecasted hour. The analysis was completed by a verification of the precipitation forecasts, yielding approximately 8% higher RMSE, 15% higher CORR and up to 45% higher FSS when the NWP model simulation is used compared to the extrapolation for the first hour. Both methods yielded an unsatisfactory level of precipitation forecast accuracy from the fourth forecasted hour onward.
SU-D-204-02: BED Consistent Extrapolation of Mean Dose Tolerances
Energy Technology Data Exchange (ETDEWEB)
Perko, Z; Bortfeld, T; Hong, T; Wolfgang, J; Unkelbach, J [Massachusetts General Hospital, Boston, MA (United States)
2016-06-15
Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot
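For reference, the standard BED relation the abstract warns about is BED = D(1 + d/(α/β)) with dose per fraction d = D/n; the paper's contribution is a modified equation carrying a dose shape factor, which is not reproduced here. A sketch of the conventional shape-unaware conversion, with an illustrative α/β value (the clinical α/β for liver is a modeling choice, not taken from this abstract):

```python
import math

def bed(total_dose, n_fractions, alpha_beta):
    """Biologically Effective Dose for uniform fractionation:
    BED = D * (1 + d / (alpha/beta)), with dose per fraction d = D / n."""
    d = total_dose / n_fractions
    return total_dose * (1.0 + d / alpha_beta)

def equivalent_total_dose(bed_target, n_fractions, alpha_beta):
    """Total dose D in n fractions matching a target BED.
    Solving n*d*(1 + d/ab) = BED for d gives d**2/ab + d - BED/n = 0."""
    a = 1.0 / alpha_beta
    d = (-1.0 + math.sqrt(1.0 + 4.0 * a * bed_target / n_fractions)) / (2.0 * a)
    return n_fractions * d
```

Applying this conversion to the mean dose of a heterogeneous distribution is exactly the step the authors show can overestimate tolerances by 10-15% under hypofractionation.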
Energy Technology Data Exchange (ETDEWEB)
Christenson, T.R.; Garino, T.J.; Venturini, E.L.
1999-01-27
Precision high-aspect-ratio micro molds constructed by deep x-ray lithography have been used to batch-fabricate accurately shaped bonded rare-earth-based permanent magnets with features as small as 5 microns and thicknesses up to 500 microns. Maximum energy products of up to 8 MGOe have been achieved with a 20 vol.% epoxy-bonded melt-spun isotropic Nd2Fe14B powder composite. Using individually processed sub-millimeter permanent magnet sections, multipole rotors have been assembled. Despite the fact that these permanent magnet structures are small, their magnetic field producing capability remains the same as at any scale. Combining permanent magnet structures with soft magnetic materials and micro-coils makes possible new and more efficient magnetic microdevices.
Imai, Masafumi; Kurth, William S.; Hospodarsky, George B.; Bolton, Scott J.; Connerney, John E. P.; Levin, Steven M.; Clarke, Tracy E.; Higgins, Charles A.
2017-04-01
Jupiter is the dominant auroral radio source in our solar system, producing decameter (DAM) radiation (from a few to 40 MHz) with a flux density of up to 10^-19 W/(m^2 Hz). Jovian DAM non-thermal radiation above 10 MHz is readily observed by Earth-based radio telescopes that are limited at lower frequencies by terrestrial ionospheric conditions and radio frequency interference. In contrast, frequencies observed by spacecraft depend upon receiver capability and the ambient solar wind plasma frequency. Observations of DAM from widely separated observers can be used to investigate the geometrical properties of the beam and learn about the generation mechanism. The first multi-observer observations of Jovian DAM emission were made using the Voyager spacecraft and ground-based radio telescopes in early 1979, but, due to geometrical constraints and limited flyby duration, a full understanding of the latitudinal beaming of Jovian DAM radiation remains elusive. This understanding is sorely needed to confirm DAM generation by the electron cyclotron maser instability, the widely assumed generation mechanism. Juno first detected Jovian DAM emissions on May 5, 2016, on approach to the Jovian system, initiating a new opportunity to perform observations of Jovian DAM radiation with Juno, Cassini, WIND, STEREO A, and Earth-based radio observatories (Long Wavelength Array Station One (LWA1) in New Mexico, USA, and Nançay Decameter Array (NDA) in France). These observers are widely distributed throughout our solar system and span a broad frequency range of 3.5 to 40.5 MHz. Juno resides in orbit at Jupiter, Cassini at Saturn, WIND around Earth, STEREO A in 1 AU orbit, and LWA1 and NDA at Earth. Juno's unique polar trajectory is expected to facilitate extraordinary stereoscopic observations of Jovian DAM, leading to a much improved understanding of the latitudinal beaming of Jovian DAM.
Directory of Open Access Journals (Sweden)
Mang Tia
2010-11-01
The research discussed in this paper is a subset of a bigger, NSF-funded research project that is directed at investigating the use of sustainable building materials. The deployment context for the research is the hot and humid climate, using selected cases from the East African region. The overarching goal for the research is advancing the structural use of earth-based technologies. Significant strides can be made through developing strategies for countering the adverse factors that affect the structural performance of the resulting wall, especially ones related to moisture dynamics. The research was executed in two phases. The first phase was a two-day NSF-supported workshop which was held in Tanzania in July 2009. It provided a forum for sharing best practices in earth-based building technologies and developing a research and development roadmap. The priority research areas were broadly classified as optimizing the physio-mechanical properties of earth as a building material and managing socio-cultural impediments. In the second phase of the research, the authors collaborated with researchers from East Africa to conduct experimental work on the optimization of physio-mechanical properties. The specific research issues that have been addressed are: (1) characterizing the chemical reactions that can be linked to deterioration triggered by hygrothermal loads based on the hot and humid context, and (2) developing a prototype for a simpler, portable, affordable and viable compressed brick production machine. The paper discusses the results from the characterization work that ultimately will be used to design bricks that have specific properties based on an understanding of how different stabilizers affect the hydration process. It also describes a cheaper, portable and more efficient prototype machine that has been developed as part of the follow-up research activities.
Sun, Shuyu
2013-06-01
This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages to a range of temperatures and densities different from the original conditions where a single simulation is conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only the extrapolation to a new temperature or density, but also the double extrapolation to both a new temperature and density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-phase gas region. Extrapolation behaviors as functions of extrapolation ranges were studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochors where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point where the simulation was originally conducted.
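The temperature reweighting underlying such extrapolation follows the standard Boltzmann identity <A>_beta' = sum_i A_i exp(-(beta'-beta)E_i) / sum_i exp(-(beta'-beta)E_i). A minimal histogram-free sketch of that identity (not the authors' exact reconstruction scheme, and ignoring density reweighting):

```python
import numpy as np

def reweight_average(energies, observable, beta_old, beta_new):
    """Extrapolate a canonical-ensemble average of `observable` from inverse
    temperature beta_old to beta_new by reweighting configurations sampled
    at beta_old: <A>_new = sum_i A_i * w_i / sum_i w_i,
    with w_i = exp(-(beta_new - beta_old) * E_i)."""
    log_w = -(beta_new - beta_old) * np.asarray(energies)
    log_w -= log_w.max()          # stabilize the exponentials against overflow
    w = np.exp(log_w)
    return float(np.sum(w * np.asarray(observable)) / np.sum(w))
```

The abstract's observation about limited extrapolation range is visible here: as |beta_new - beta_old| grows, the weights concentrate on a few configurations and the estimate degrades, which is why reweighting alone works best along isochors.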
Source‐receiver two‐way wave extrapolation for prestack exploding‐reflector modeling and migration
Alkhalifah, Tariq Ali
2010-10-17
While most of the modern seismic imaging methods perform imaging by separating input data into parts (shot gathers), we develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield backward in time. While computationally intensive, this approach has the potential of generating accurate images, free of artifacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver-time domain. The fourth-order nature of the extrapolation in time yields four solutions, two of which correspond to the ingoing and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. Using asymptotic approximations, we develop an approach to extrapolating the full prestack wavefield forward or backward in time.
Variational procedure for nuclear shell-model calculations and energy-variance extrapolation
Shimizu, Noritaka; Mizusaki, Takahiro; Honma, Michio; Tsunoda, Yusuke; Otsuka, Takaharu
2012-01-01
We discuss a variational calculation for nuclear shell-model calculations and propose a new procedure for the energy-variance extrapolation (EVE) method using a sequence of the approximated wave functions obtained by the variational calculation. The wave functions are described as linear combinations of the parity, angular-momentum projected Slater determinants, the energy of which is minimized by the conjugate gradient method obeying the variational principle. The EVE generally works well using the wave functions, but we found some difficult cases where the EVE gives a poor estimation. We discuss the origin of the poor estimation concerning shape coexistence. We found that the appropriate reordering of the Slater determinants allows us to overcome this difficulty and to reduce the uncertainty of the extrapolation.
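The EVE idea is to plot the energies of a sequence of approximated wave functions against their energy variances <H^2> - <H>^2 and extrapolate to zero variance, where the exact eigenstate lies. A minimal sketch using a polynomial fit (the fit degree and the data are illustrative choices, not the paper's prescription):

```python
import numpy as np

def eve_extrapolate(variances, energies, degree=2):
    """Energy-variance extrapolation (EVE): fit energies of approximated
    wave functions as a polynomial in the energy variance and evaluate the
    fit at zero variance, where an exact eigenstate has <H^2> - <H>^2 = 0."""
    coeffs = np.polyfit(variances, energies, degree)
    return float(np.polyval(coeffs, 0.0))
```

The pathology the authors describe with shape coexistence corresponds to the fitted points not lying on one smooth curve; their reordering of the Slater determinants restores a single branch so the fit to zero variance is well behaved.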
Extrapolation of Nystrom solution for two dimensional nonlinear Fredholm integral equations
Guoqiang, Han; Jiong, Wang
2001-09-01
In this paper, we analyze the existence of an asymptotic error expansion of the Nystrom solution for two-dimensional nonlinear Fredholm integral equations of the second kind. We show that the Nystrom solution admits an error expansion in powers of the step-sizes h and k. For a special choice of the numerical quadrature, the leading terms in the error expansion for the Nystrom solution contain only even powers of h and k, beginning with the terms h^{2p} and k^{2q}. These expansions are useful for the application of Richardson extrapolation and for obtaining sharper error bounds. Numerical examples show how Richardson extrapolation gives a remarkable increase of precision, in addition to faster convergence.
¹³¹I-CRTX internal dosimetry: animal model and human extrapolation
Energy Technology Data Exchange (ETDEWEB)
Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br
2009-07-01
Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. ¹²⁵I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, ¹²⁵I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumor) were treated by the MIRD formalism to perform internal dosimetry studies. Doses from ¹³¹I-Crtx were determined in several organs of mice, as well as in the implanted tumor. Dose results obtained for the animal model were extrapolated to humans, assuming a similar concentration ratio among various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from ¹³¹I in the tissue were considered in the dose calculations. (author)
Energy Technology Data Exchange (ETDEWEB)
Latychevskaia, Tatiana; Fink, Hans-Werner [Physics Department, University of Zurich, Winterthurerstrasse 190, 8057 Zurich (Switzerland)
2015-01-12
Previously reported crystalline structures obtained by an iterative phase retrieval reconstruction of their diffraction patterns seem to be free from displaying any irregularities or defects in the lattice, which appears to be unrealistic. We demonstrate here that the structure of a nanocrystal, including its atomic defects, can unambiguously be recovered from its diffraction pattern alone by applying a direct phase retrieval procedure not relying on prior information of the object shape. Individual point defects in the atomic lattice are clearly apparent. Conventional phase retrieval routines assume isotropic scattering. We show that when dealing with electrons, the quantitatively correct transmission function of the sample cannot be retrieved due to the anisotropic, strong forward scattering specific to electrons. We summarize the conditions for this phase retrieval method and show that the diffraction pattern can be extrapolated beyond the original record to reveal formerly invisible Bragg peaks. Such an extrapolated wave field pattern leads to enhanced spatial resolution in the reconstruction.
Usage of the Empirical-Statical-Dynamical (ESD) method for data extrapolation in Tunnel Construction
Directory of Open Access Journals (Sweden)
Zafirovski Zlatko
2016-01-01
This article describes a methodology that shows how it is possible to integrate all these approaches in the problem of extrapolating the parameters for hydrotechnical tunnels. During the design process for tunnels in hydrotechnics, one of the main problems is how to extrapolate the deformability and shear strength rock mass parameters from the zone of testing to the whole area (volume) of interest for interaction analyses between the structure and the natural environment. The development of computers in recent decades has contributed to the development of numerical calculation methods in rock mechanics, which enabled new and wider possibilities for stress and deformation calculation. This has significantly stimulated the development of rock mechanics and tunneling as scientific and technical disciplines, as well as the wider application of research results in practice.
The immunogenicity of biosimilar infliximab: can we extrapolate the data across indications?
Ben-Horin, Shomron; Heap, Graham A; Ahmad, Tariq; Kim, HoUng; Kwon, TaekSang; Chowers, Yehuda
2015-01-01
Biopharmaceuticals or 'biologics' have revolutionized the treatment of many diseases. However, some patients generate an immune response to such drugs, potentially limiting clinical efficacy and safety. Infliximab (Remicade(®)) is a monoclonal antibody used to treat several immune-mediated inflammatory disorders. A biosimilar of infliximab, CT-P13 (Remsima(®), Inflectra(®)), has recently been approved in Europe for all indications in which infliximab is approved. Approval of CT-P13 was based in part on extrapolation of clinical trial data from two indications (rheumatoid arthritis and ankylosing spondylitis) to all other indications, including inflammatory bowel disease. This review discusses the validity of extrapolating immunogenicity data across indications - a process adopted by the EMA as part of their biosimilar approval process - with a focus on CT-P13.
{sup 131}I-SPGP internal dosimetry: animal model and human extrapolation
Energy Technology Data Exchange (ETDEWEB)
Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br; Figueiredo, Suely Gomes de [Universidade Federal do Espirito Santo, (UFES), Vitoria, ES (Brazil). Dept. de Ciencias Fisiologicas. Lab. de Quimica de Proteinas
2009-07-01
Scorpaena plumieri is commonly called moreia-ati or manganga and is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP - an isolated protein from S. plumieri - possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the development (design) of antitumoral drugs. In the present work, Soprani's {sup 125}I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to the {sup 131}I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans assuming a similar ratio for the various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I were considered. (author)
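The animal-to-human extrapolation step described above is often implemented as a simple mass-ratio scaling of organ uptake. The sketch below is a minimal illustration of that general idea only, with hypothetical organ masses; it is not the authors' actual MIRD computation.

```python
def human_uptake_fraction(mouse_frac, organ_mouse_kg, mouse_kg,
                          organ_human_kg, human_kg):
    """Scale an organ uptake fraction from mouse to human assuming the
    relative (organ mass / body mass) concentration ratio is preserved."""
    return mouse_frac * (organ_human_kg / human_kg) / (organ_mouse_kg / mouse_kg)

# Hypothetical numbers: 30% of injected activity in a 1.5 g liver of a 25 g
# mouse, extrapolated to a 1.8 kg liver in a 70 kg human
frac_human = human_uptake_fraction(0.30, 0.0015, 0.025, 1.8, 70.0)
```

The scaled fraction would then feed into the MIRD dose equations together with the relevant S-values.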
Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization
More, Sushant N
2016-01-01
We investigate two distinct sources of uncertainty in low-energy nuclear physics calculations and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition. We accurately determine the position of the hard-wall as a function of oscillator space parameters, derive extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the ren...
Extrapolation modeling of aerosol deposition in human and laboratory rat lungs
Energy Technology Data Exchange (ETDEWEB)
Martonen, T.B.; Zhang, Z.; Yang, Y.
1992-01-01
Laboratory test animals are often used as surrogates in exposure studies to assess the potential threat to human health following inhalation of airborne contaminants. To aid in the interpretation and extrapolation of data to man, dosimetric considerations need to be addressed. Therefore, a mathematical model describing the behavior and fate of inhaled particulate matter within the respiratory tracts of man and rats has been developed. In the computer simulations, the CO2 concentrations of inhalation exposure chamber atmospheres are controlled to produce desired breathing patterns in the rat which mimic human breathing patterns as functions of physical activity levels. Herein, deposition patterns in human and rat lung airways are specifically examined as functions of respiratory intensities and particle parameters. The model provides a basis for the re-evaluation of data from past experiments, and, perhaps most importantly, permits new inhalation exposure tests to be designed and conducted in a sound scientific manner regarding this endpoint: the extrapolation of results to human conditions.
Agarwal, Amit B; McBride, Ali
2016-08-01
The World Health Organization defines a biosimilar as "a biotherapeutic product which is similar in terms of quality, safety and efficacy to an already licensed reference biotherapeutic product." Biosimilars are biologic medical products that are very distinct from small-molecule generics, as their active substance is a biological agent derived from a living organism. Approval processes are highly regulated, with guidance issued by the European Medicines Agency and US Food and Drug Administration. Approval requires a comparability exercise consisting of extensive analytical and preclinical in vitro and in vivo studies, and confirmatory clinical studies. Extrapolation of biosimilars from their original indication to another is a feasible but highly stringent process reliant on rigorous scientific justification. This review focuses on the processes involved in gaining biosimilar approval and extrapolation and details the comparability exercise undertaken in the European Union between originator erythropoietin-stimulating agent, Eprex(®), and biosimilar, Retacrit™.
New allometric scaling relationships and applications for dose and toxicity extrapolation.
Cao, Qiming; Yu, Jimmy; Connell, Des
2014-01-01
Allometric scaling between metabolic rate, size, body temperature, and other biological traits has found broad applications in ecology, physiology, and particularly in toxicology and pharmacology. Basal metabolic rate (BMR) was observed to scale with body size and temperature. However, it has been increasingly debated whether the mass scaling exponent should be 2/3, 3/4, or neither, and scaling with body temperature has also attracted recent attention. Based on thermodynamic principles, this work reports 2 new scaling relationships between BMR, size, temperature, and biological time. Good correlations were found with the new scaling relationships, and no universal scaling exponent could be obtained. The new scaling relationships were successfully validated with external toxicological and pharmacological studies. Results also demonstrated that individual extrapolation models can be built to obtain a scaling exponent specific to the group of interest, which can be practically applied for dose and toxicity extrapolations.
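A commonly used consequence of metabolic allometry is that if total metabolic demand scales as M^b, a per-kilogram dose scales as M^(b-1). A minimal sketch, assuming the classical exponent b = 3/4 (which, as the abstract notes, is itself debated):

```python
def scaled_dose(dose_per_kg, m_animal_kg, m_human_kg, b=0.75):
    """Convert a per-kg dose across species, assuming total dose scales as
    M**b, so the per-kg dose scales as M**(b - 1)."""
    return dose_per_kg * (m_human_kg / m_animal_kg) ** (b - 1.0)

# 10 mg/kg in a 20 g mouse maps to roughly 1.3 mg/kg in a 70 kg human
human_dose = scaled_dose(10.0, 0.02, 70.0)
```

Changing `b` to 2/3 or to a group-specific fitted exponent, as the paper advocates, changes the conversion accordingly.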
Infrared length scale and extrapolations for the no-core shell model
Wendt, K A; Papenbrock, T; Sääf, D
2015-01-01
We precisely determine the infrared (IR) length scale of the no-core shell model (NCSM). In the NCSM, the $A$-body Hilbert space is truncated by the total energy, and the IR length can be determined by equating the intrinsic kinetic energy of $A$ nucleons in the NCSM space to that of $A$ nucleons in a $3(A-1)$-dimensional hyper-radial well with a Dirichlet boundary condition for the hyper radius. We demonstrate that this procedure indeed yields a very precise IR length by performing large-scale NCSM calculations for $^{6}$Li. We apply our result and perform accurate IR extrapolations for bound states of $^{4}$He, $^{6}$He, $^{6}$Li, $^{7}$Li. We also attempt to extrapolate NCSM results for $^{10}$B and $^{16}$O with bare interactions from chiral effective field theory over tens of MeV.
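IR extrapolations of this kind typically assume an exponential approach of the energy to its infinite-space limit, E(L) ≈ E_inf + a·exp(-2kL). Under that assumed form, three energies at equally spaced effective lengths determine the limit algebraically (a Shanks-type transform). A sketch with made-up numbers, not NCSM output:

```python
def extrapolate_energy(e1, e2, e3):
    """Eliminate E_inf from E(L) = E_inf + a*exp(-2*k*L) sampled at three
    equally spaced L values (a Shanks-type transform)."""
    return (e1 * e3 - e2 * e2) / (e1 + e3 - 2.0 * e2)

# Synthetic check: E_inf = -28.30 (MeV), geometric decay factor 0.5 per step
energies = [-28.30 + 5.0 * 0.5 ** i for i in range(3)]
e_inf = extrapolate_energy(*energies)
```

In practice one fits the exponential to many (L, E) pairs rather than solving from exactly three, but the algebra above captures the assumed asymptotic form.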
Hsieh, T C; Chao, Anne
2017-01-01
Measures of phylogenetic diversity are basic tools in many studies of systematic biology. Faith’s PD (sum of branch lengths of a phylogenetic tree connecting all focal species) is the most widely used phylogenetic measure. Like species richness, Faith’s PD based on sampling data is highly dependent on sample size and sample completeness. The sample-size- and sample-coverage-based integration of rarefaction and extrapolation of Faith’s PD was recently developed to make fair comparison across multiple assemblages. However, species abundances are not considered in Faith’s PD. Based on the framework of Hill numbers, Faith’s PD was generalized to a class of phylogenetic diversity measures that incorporates species abundances. In this article, we develop both theoretical formulae and analytic estimators for seamless rarefaction and extrapolation for this class of abundance-sensitive phylogenetic measures, which includes simple transformations of phylogenetic entropy and of quadratic entropy. This work generalizes the previous rarefaction/extrapolation model of Faith’s PD to incorporate species abundance, and also extends the previous rarefaction/extrapolation model of Hill numbers to include phylogenetic differences among species. Thus a unified approach to assessing and comparing species/taxonomic diversity and phylogenetic diversity can be established. A bootstrap method is suggested for constructing confidence intervals around the phylogenetic diversity, facilitating the comparison of multiple assemblages. Our formulation and estimators can be extended to incidence data collected from multiple sampling units. We also illustrate the formulae and estimators using bacterial sequence data from the human distal esophagus and phyllostomid bat data from three habitats.
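The abundance-sensitive PD estimators developed in this work build on the same asymptotic ideas as classical species-richness extrapolation. As a much simpler illustration of that ingredient (not the authors' PD estimator), the Chao1 estimator extrapolates richness from the counts of singletons and doubletons:

```python
def chao1(abundances):
    """Chao1 asymptotic species-richness estimator from abundance counts."""
    s_obs = sum(1 for a in abundances if a > 0)
    f1 = sum(1 for a in abundances if a == 1)   # singletons
    f2 = sum(1 for a in abundances if a == 2)   # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0      # bias-corrected form
    return s_obs + f1 * f1 / (2.0 * f2)
```

For example, `chao1([1, 1, 1, 2, 2, 5, 10])` estimates 9.25 species from 7 observed, reflecting the undetected rare species implied by the three singletons.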
On the problem of discrete extrapolation of a band-limited signal
Vincenti, Graziano; Volpi, Aldo
1992-01-01
We consider the linear system equivalent to the problem of discrete extrapolation of a band-limited signal. We show that the iteration matrix of the Gerchberg-Papoulis method, an iterative method applied to this system, is a convergent matrix. We also verify that the convergence of this method is so slow as to make the method practically unusable.
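The Gerchberg-Papoulis iteration analyzed above alternates between enforcing the known band limit in the frequency domain and re-imposing the known samples in the time domain. A minimal numpy sketch (the slow convergence noted in the abstract is visible if one tracks the error across iterations):

```python
import numpy as np

def gerchberg_papoulis(observed, known, band, n_iter=300):
    """Alternate projections: zero the out-of-band DFT bins, then re-impose
    the known time samples. `known` and `band` are boolean masks."""
    x = np.where(known, observed, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0                 # project onto the band limit
        x = np.fft.ifft(X).real
        x[known] = observed[known]     # project onto the known data
    return x

n = 64
t = np.arange(n)
sig = np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.sin(2 * np.pi * 3 * t / n)
known = t < 40                               # last 24 samples are missing
band = np.abs(np.fft.fftfreq(n)) <= 4.0 / n  # band limit covering the signal
rec = gerchberg_papoulis(sig, known, band)
err_zero = np.linalg.norm(sig[~known])           # error of naive zero-fill
err_rec = np.linalg.norm((rec - sig)[~known])    # error after extrapolation
```

As a projection-onto-convex-sets scheme, each sweep can only shrink the error on the missing samples, which is consistent with the convergence (and its slowness) proved in the paper.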
Precise Numerical Results of IR-vertex and box integration with Extrapolation Method
Yuasa, F; Fujimoro, J; Hamaguchi, N; Ishikawa, T; Shimizu, Y
2007-01-01
We present a new approach for obtaining very precise integration results for infrared vertex and box diagrams, where the integration is carried out directly without performing any analytic integration over Feynman parameters. Using an appropriate numerical integration routine with an extrapolation method, together with a multi-precision library, we have obtained integration results which agree with the analytic results to 10 digits, even for a photon mass as small as $10^{-150}$ GeV in the infrared vertex diagram.
Directory of Open Access Journals (Sweden)
Lee HyunYoung
2010-01-01
Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely, symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of the discontinuous Galerkin approximations in both the spatial and temporal directions.
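An "extrapolated Crank-Nicolson" step avoids solving a nonlinear system by replacing the unknown new-time value inside the nonlinearity with the linear extrapolation 2u^n - u^{n-1}. A minimal ODE sketch of that idea (not the authors' discontinuous Galerkin scheme), which retains second-order accuracy:

```python
import math

def ecn(f, u0, u1, dt, nsteps):
    """Extrapolated Crank-Nicolson for u' = f(u):
    u^{n+1} = u^n + dt/2 * (f(u^n) + f(2*u^n - u^{n-1})).
    u1 must be a sufficiently accurate first-step value."""
    um, u = u0, u1
    for _ in range(nsteps - 1):
        um, u = u, u + 0.5 * dt * (f(u) + f(2.0 * u - um))
    return u

f = lambda u: -u                    # test problem u' = -u, u(0) = 1

def error_at_t1(dt):
    n = round(1.0 / dt)
    return abs(ecn(f, 1.0, math.exp(-dt), dt, n) - math.exp(-1.0))

ratio = error_at_t1(0.1) / error_at_t1(0.05)  # near 4 for a 2nd-order scheme
```

Halving the step size reduces the error by roughly a factor of four, confirming that the extrapolation does not degrade the Crank-Nicolson order.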
A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems
Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong
2017-09-01
In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (the current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain the numerical solution with the desired accuracy conveniently. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with both constant and variable coefficients, an H3-regular problem, and an anisotropic problem, are reported to show that the proposed method is much more efficient than the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
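Richardson extrapolation, a key ingredient of the method above, combines two approximations of known order to cancel the leading error term: for a second-order quantity, u ≈ (4·u_{h/2} - u_h)/3. A generic sketch on a central-difference derivative (not the authors' multigrid code):

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def richardson(f, x, h):
    """Cancel the O(h^2) error term of two central differences,
    (4*D(h/2) - D(h)) / 3, yielding an O(h^4) estimate."""
    return (4.0 * central_diff(f, x, h / 2.0) - central_diff(f, x, h)) / 3.0

h = 0.1
err_fine = abs(central_diff(math.exp, 1.0, h / 2.0) - math.e)
err_rich = abs(richardson(math.exp, 1.0, h) - math.e)
```

The extrapolated value is orders of magnitude more accurate than either finite difference alone, which is exactly why combining two-grid solutions yields such a good initial guess for the finer grid.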
Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora
Directory of Open Access Journals (Sweden)
Ryosuke Takahira
2016-10-01
Full Text Available One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubt regarding a correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg’s hypothesis.
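A hedged sketch of the kind of ansatz-based extrapolation described: assume a stretched-exponential form h(n) = h_inf·exp(A·n^(β-1)) with β < 1, so the finite-sample estimate decays toward h_inf as the text length n grows. The constants below are illustrative, not the paper's fitted values:

```python
import math

def entropy_estimate(n, h_inf, amp, beta):
    """Assumed stretched-exponential ansatz h(n) = h_inf*exp(amp*n**(beta-1)).
    For beta < 1 the exponent vanishes as n grows, so h(n) -> h_inf."""
    return h_inf * math.exp(amp * n ** (beta - 1.0))

H_INF, AMP, BETA = 1.0, 0.5, 0.9   # illustrative constants, not fitted values
estimates = [entropy_estimate(10 ** k, H_INF, AMP, BETA) for k in range(3, 10)]
```

The finite-corpus estimates overshoot the limit and decrease toward it, mirroring the paper's finding that extrapolated entropy rates are roughly 20% below the raw estimates.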
A model for the data extrapolation of greenhouse gas emissions in the Brazilian hydroelectric system
Pinguelli Rosa, Luiz; Aurélio dos Santos, Marco; Gesteira, Claudio; Elias Xavier, Adilson
2016-06-01
Hydropower reservoirs are artificial water systems and comprise a small proportion of the Earth’s continental territory. However, they play an important role in aquatic biogeochemistry and may affect the environment negatively. Since the 1990s, as a result of research on organic matter decay in man-made flooded areas, some reports have associated greenhouse gas emissions with dam construction. Pioneering work carried out in the early period challenged the view that hydroelectric plants generate completely clean energy. Those estimates suggested that GHG emissions into the atmosphere from some hydroelectric dams may be significant when measured per unit of energy generated and should be compared to GHG emissions from fossil fuels used for power generation. The contribution to global warming of greenhouse gases emitted by hydropower reservoirs is currently the subject of various international discussions and debates. One of the most controversial issues is the extrapolation of data from different sites. In this study, the extrapolation from a site sample where measurements were made to the complete set of 251 reservoirs in Brazil, comprising a total flooded area of 32 485 square kilometers, was derived from the theory of self-organized criticality. We employed a power law for its statistical representation. The present article reviews the data generated at that time in order to demonstrate how, with the help of mathematical tools, we can extrapolate values from one reservoir to another without compromising the reliability of the results.
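A power-law representation of the kind used for this extrapolation can be fitted by ordinary least squares in log-log space. A self-contained sketch with synthetic data (the exponent, prefactor, and variable names are made up for illustration, not taken from the study):

```python
import math

def fit_power_law(xs, ys):
    """Least-squares fit of log y = log c + a*log x, returning (c, a)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(xs)
    mx, my = sum(lx) / n, sum(ly) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    c = math.exp(my - a * mx)
    return c, a

# Synthetic "reservoir" data following an exact power law (illustrative only)
areas = [1.0, 5.0, 20.0, 100.0, 500.0]       # flooded area, km^2
emissions = [2.5 * s ** 0.8 for s in areas]  # made-up GHG flux values
c_fit, a_fit = fit_power_law(areas, emissions)
```

Once the prefactor and exponent are fitted on the measured sites, the same relation is evaluated at the flooded areas of the unmeasured reservoirs.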
Parallel difference schemes with interface extrapolation terms for quasi-linear parabolic systems
Institute of Scientific and Technical Information of China (English)
Guang-wei YUAN; Xu-deng HANG; Zhi-qiang SHENG
2007-01-01
In this paper some new parallel difference schemes with interface extrapolation terms for a quasi-linear parabolic system of equations are constructed. Two types of time extrapolation are proposed to give the interface values on the interface of sub-domains or the values adjacent to the interface points, so that unconditionally stable parallel schemes with second-order accuracy are formed. Without assuming heuristically that the original boundary value problem has a unique smooth vector solution, the existence and uniqueness of the discrete vector solutions of the constructed parallel difference schemes are proved. Moreover, the unconditional stability of the parallel difference schemes is justified in the sense of the continuous dependence of the discrete vector solution of the schemes on the discrete known data of the original problems in the discrete $W_2^{(2,1)}(Q_\Delta)$ norms. Finally, the convergence of the discrete vector solutions of the parallel difference schemes with interface extrapolation terms to the unique generalized solution of the original quasi-linear parabolic problem is proved. Numerical results are presented to show the good performance of the parallel schemes, including the unconditional stability, second-order accuracy and high parallelism.
Chaouche, L Yelles; Pillet, V Martínez; Moreno-Insertis, F
2012-01-01
The 3D structure of an active region (AR) filament is studied using nonlinear force-free field (NLFFF) extrapolations based on simultaneous observations at a photospheric and a chromospheric height. To that end, we used the Si I 10827 Å line and the He I 10830 Å triplet obtained with the Tenerife Infrared Polarimeter (TIP) at the VTT (Tenerife). The two extrapolations have been carried out independently of each other, and their respective spatial domains overlap in a considerable height range. This opens up new possibilities for diagnostics in addition to the usual ones obtained through a single extrapolation from, typically, a photospheric layer. Among those possibilities, this method allows the determination of an average formation height of the He I 10830 Å signal of approximately 2 Mm above the surface of the Sun. It also allows cross-checking of the obtained 3D magnetic structures in view of verifying a possible deviation from the force-free condition, especially at the photosphere. The extrapolati...
Directory of Open Access Journals (Sweden)
Bressler B
2015-06-01
Full Text Available Brian Bressler,1 Theo Dingermann2 1St Paul’s Hospital, University of British Columbia, Vancouver, BC, Canada; 2Institute of Pharmaceutical Biology, Frankfurt, Germany Abstract: Despite their enormous value for our health care system, biopharmaceuticals have become a serious threat to the system itself due to their high cost. Costs may be warranted if the medicine is new and innovative; however, it is no longer an innovation when its patent protection expires. As patents and exclusivities expire on biological drugs, biosimilar products defined as highly similar to reference biologics are being marketed. The goal of biosimilar development is to establish a high degree of biosimilarity, not to reestablish clinical efficacy and safety. Current sophisticated analytical methods allow the detection of even small changes in quality attributes and can therefore enable sensitive monitoring of the batch-to-batch consistency and variability of the manufacturing process. The European Medicines Agency (EMA), US Food and Drug Administration (FDA), and Health Canada have determined that a reduced number of nonclinical and clinical comparative studies can be sufficient for approval, with clinical data from the most sensitive indication extrapolated to other indications. Extrapolation of data is a scientifically based principle, guided by specific criteria, and, if approved by the EMA, FDA, and/or Health Canada, is appropriate. Enablement of extrapolation of data is a core principle of biosimilar development, based on principles of comparability and necessary to fully realize cost savings for these drugs. Keywords: biosimilars, Inflectra, infliximab, pharmacoeconomics, Canada, Europe
Ilieva, T.; Iliev, I.; Pashov, A.
2016-12-01
In the traditional description of electronic states of diatomic molecules by means of molecular constants or Dunham coefficients, one of the important fitting parameters is the value of the zero-point energy - the minimum of the potential curve or the energy of the lowest vibrational-rotational level, $E_{00}$. Their values are almost always the result of an extrapolation, and it may be difficult to estimate their uncertainties, because they are connected not only with the uncertainty of the experimental data, but also with the distribution of experimentally observed energy levels and the particular realization of the set of Dunham coefficients. This paper presents a comprehensive analysis based on Monte Carlo simulations, which aims to demonstrate the influence of all these factors on the uncertainty of the extrapolated minimum of the potential energy curve $U(R_e)$ and the value of $E_{00}$. The very good extrapolation properties of the Dunham coefficients are quantitatively confirmed, and it is shown that for a proper estimate of the uncertainties, the ambiguity in the composition of the Dunham coefficients should be taken into account.
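The central point, that extrapolated quantities carry larger uncertainties than interpolated ones and that Monte Carlo simulation can quantify this, can be illustrated with a toy linear fit (the molecular-constants fit in the paper is, of course, far more elaborate):

```python
import random
import statistics

def linfit(xs, ys):
    """Ordinary least-squares line y = a + b*x, returning (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

random.seed(1)
xs = [5.0, 6.0, 7.0, 8.0, 9.0, 10.0]      # data observed only far from x = 0
pred_extrap, pred_interp = [], []
for _ in range(2000):                     # Monte Carlo over noise realizations
    ys = [2.0 + 3.0 * x + random.gauss(0.0, 0.1) for x in xs]
    a, b = linfit(xs, ys)
    pred_extrap.append(a)                 # extrapolated value at x = 0
    pred_interp.append(a + 7.5 * b)       # interpolated value at x = 7.5
```

The Monte Carlo spread of the extrapolated value is several times that of the interpolated one, which is the mechanism by which the distribution of observed levels inflates the uncertainty of $E_{00}$.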
Limitations of force-free magnetic field extrapolations: revisiting basic assumptions
Peter, H; Chitta, L P; Cameron, R H
2015-01-01
Force-free extrapolations are widely used to study the magnetic field in the solar corona based on surface measurements. The extrapolations assume that the ratio of internal energy of the plasma to magnetic energy, the plasma beta, is negligible. Despite the widespread use of this assumption, observations, models, and theoretical considerations show that beta is of the order of a few percent to more than 10%, and thus not small. We investigate what consequences this has for the reliability of extrapolation results. We use basic concepts, starting with the force and the energy balance, to infer relations between plasma beta and free magnetic energy, to study the direction of currents in the corona with respect to the magnetic field, and to estimate the errors in the free magnetic energy caused by neglecting effects of the plasma (beta<<1). A comparison with a 3D MHD model supports our basic considerations. If plasma beta is of the order of the relative free energy (the ratio of the free magnetic energy to the total...
Compressive Spectral Renormalization Method
Bayindir, Cihan
2016-01-01
In this paper a novel numerical scheme for finding the sparse self-localized states of a nonlinear system of equations with missing spectral data is introduced. As in Petviashvili's method and the spectral renormalization method, the governing equation is transformed into the Fourier domain, but the iterations are performed for a far smaller number of spectral components (M) than in the classical versions of these methods, which use a higher number of spectral components (N). After the convergence criterion is achieved for the M components, the N-component signal is reconstructed from the M components by using the l1 minimization technique of compressive sampling. This method can be named the compressive spectral renormalization method (CSRM). The main advantage of the CSRM is that it is capable of finding the sparse self-localized states of the evolution equation(s) with much of the spectral data missing.
Gato-Rivera, Beatriz; Gato-Rivera, Beatriz; Rosado, Jose Ignacio
1995-01-01
Recently we showed that the spectral flow acting on the N=2 twisted topological theories gives rise to a topological algebra automorphism. Here we point out that the untwisting of that automorphism leads to a spectral flow on the untwisted N=2 superconformal algebra which is different from the usual one. This "other" spectral flow does not interpolate between the chiral ring and the antichiral ring. In particular, it maps the chiral ring into the chiral ring and the antichiral ring into the antichiral ring. We discuss the similarities and differences between both spectral flows. We also analyze their action on null states.
A rare earth-based metal-organic framework for moisture removal and control in confined spaces
Eddaoudi, Mohamed
2017-04-13
A method for preparing a metal-organic framework (MOF) comprising contacting one or more of a rare earth metal ion component with one or more of a tetratopic ligand component, sufficient to form a rare earth-based MOF for controlling moisture in an environment. A method of moisture control in an environment comprising adsorbing and/or desorbing water vapor in an environment using a MOF, the MOF including one or more of a rare earth metal ion component and one or more of a tetratopic ligand component. A method of controlling moisture in an environment comprising sensing the relative humidity in the environment comprising a MOF; and adsorbing water vapor on the MOF if the relative humidity is above a first level, sufficient to control moisture in an environment. The examples relate to a MOF created from 1,2,4,5-tetrakis(4-carboxyphenyl)benzene (BTEB) as the tetratopic ligand, 2-fluorobenzoic acid, and Y(NO3)3, Tb(NO3)3 and Yb(NO3)3 as rare earth metals.
Campbell, Bruce A.; Carter, Lynn M.; Hawke, B. Ray; Campbell, Donald B.; Ghent, Rebecca R.
2008-02-01
Lunar pyroclastic deposits reflect an explosive stage of the basaltic volcanism that filled impact basins across the nearside. These fine-grained mantling layers are of interest for their association with early mare volcanic processes, and as possible sources of volatiles and other species for lunar outposts. We present Earth-based radar images, at 12.6 and 70 cm wavelengths, of the pyroclastic deposit that blankets the Aristarchus Plateau. The 70 cm data reveal the outlines of a lava-flow complex that covers a significant portion of the plateau and appears to have formed by spillover of magma from the large sinuous rille Vallis Schröteri. The pyroclastics mantling these flows are heavily contaminated with rocks 10 cm and larger in diameter. The 12.6 cm data confirm that other areas are mantled by 20 m or less of material, and that there are numerous patches of 2 cm and larger rocks associated with ejecta from Aristarchus crater. Some of the radar-detected rocky debris is within the mantling material and is not evident in visible-wavelength images. The radar data identify thick, rock-poor areas of the pyroclastic deposit best suited for resource exploitation.
On Longitudinal Spectral Coherence
DEFF Research Database (Denmark)
Kristensen, Leif
1979-01-01
It is demonstrated that the longitudinal spectral coherence differs significantly from the transversal spectral coherence in its dependence on displacement and frequency. An expression for the longitudinal coherence is derived and it is shown how the scale of turbulence, the displacement between...
Spectral geometry of spacetime
Kopf, T
2000-01-01
Spacetime, understood as a globally hyperbolic manifold, may be characterized by spectral data using a 3+1 splitting into space and time, a description of space by spectral triples and by employing causal relationships, as proposed earlier. Here, it is proposed to use the Hadamard condition of quantum field theory as a smoothness principle.
SRD 115 Hydrocarbon Spectral Database (Web, free access)
All of the rotational spectral lines observed and reported in the open literature for 91 hydrocarbon molecules have been tabulated. The isotopic molecular species, assigned quantum numbers, observed frequency, estimated measurement uncertainty and reference are given for each transition reported.
Spectral Geometry and Causality
Kopf, T
1996-01-01
For a physical interpretation of a theory of quantum gravity, it is necessary to recover classical spacetime, at least approximately. However, quantum gravity may eventually provide classical spacetimes by giving spectral data similar to those appearing in noncommutative geometry, rather than by giving directly a spacetime manifold. It is shown that a globally hyperbolic Lorentzian manifold can be given by spectral data. A new phenomenon in the context of spectral geometry is observed: causal relationships. The employment of the causal relationships of spectral data is shown to lead to a highly efficient description of Lorentzian manifolds, indicating the possible usefulness of this approach. Connections to free quantum field theory are discussed for both motivation and physical interpretation. It is conjectured that the necessary spectral data can be generically obtained from an effective field theory having the fundamental structures of generalized quantum mechanics: a decoherence functional and a choice of...
Snapshot spectral imaging system
Arnold, Thomas; De Biasio, Martin; McGunnigle, Gerald; Leitner, Raimund
2010-02-01
Spectral imaging is the combination of spectroscopy and imaging. These fields are well developed and are used intensively in many application fields including industry and the life sciences. The classical approach to acquire hyper-spectral data is to sequentially scan a sample in space or wavelength. These acquisition methods are time consuming because only two spatial dimensions, or one spatial and the spectral dimension, can be acquired simultaneously. With a computed tomography imaging spectrometer (CTIS) it is possible to acquire two spatial dimensions and a spectral dimension during a single integration time, without scanning either spatial or spectral dimensions. This makes it possible to acquire dynamic image scenes without spatial registration of the hyperspectral data. This is advantageous compared to tunable filter based systems which need sophisticated image registration techniques. While tunable filters provide full spatial and spectral resolution, for CTIS systems there is always a tradeoff between spatial and spectral resolution as the spatial and spectral information corresponding to an image cube is squeezed onto a 2D image. The presented CTIS system uses a spectral-dispersion element to project the spectral and spatial image information onto a 2D CCD camera array. The system presented in this paper is designed for a microscopy application for the analysis of fixed specimens in pathology and cytogenetics, cell imaging and material analysis. However, the CTIS approach is not limited to microscopy applications, thus it would be possible to implement it in a hand-held device for e.g. real-time, intra-surgery tissue classification.
Mirus, Benjamin B.; Halford, Keith; Sweetkind, Don; Fenelon, Joe
2016-08-01
The suitability of geologic frameworks for extrapolating hydraulic conductivity ( K) to length scales commensurate with hydraulic data is difficult to assess. A novel method is presented for evaluating assumed relations between K and geologic interpretations for regional-scale groundwater modeling. The approach relies on simultaneous interpretation of multiple aquifer tests using alternative geologic frameworks of variable complexity, where each framework is incorporated as prior information that assumes homogeneous K within each model unit. This approach is tested at Pahute Mesa within the Nevada National Security Site (USA), where observed drawdowns from eight aquifer tests in complex, highly faulted volcanic rocks provide the necessary hydraulic constraints. The investigated volume encompasses 40 mi3 (167 km3) where drawdowns traversed major fault structures and were detected more than 2 mi (3.2 km) from pumping wells. Complexity of the five frameworks assessed ranges from an undifferentiated mass of rock with a single unit to 14 distinct geologic units. Results show that only four geologic units can be justified as hydraulically unique for this location. The approach qualitatively evaluates the consistency of hydraulic property estimates within extents of investigation and effects of geologic frameworks on extrapolation. Distributions of transmissivity are similar within the investigated extents irrespective of the geologic framework. In contrast, the extrapolation of hydraulic properties beyond the volume investigated with interfering aquifer tests is strongly affected by the complexity of a given framework. Testing at Pahute Mesa illustrates how this method can be employed to determine the appropriate level of geologic complexity for large-scale groundwater modeling.
Mirus, Benjamin B.; Halford, Keith J.; Sweetkind, Donald; Fenelon, Joseph M.
2016-01-01
A Cryogenic Radiometry Based Spectral Responsivity Scale at the National Metrology Centre
Xu, Gan; Huang, Xuebo
This paper describes the spectral responsivity scale established at the National Metrology Centre (NMC) based on cryogenic radiometry. A primary standard, a mechanically pumped cryogenic radiometer together with a set of intensity-stabilised lasers, provides traceability for optical power measurement with an uncertainty on the order of 10^-4 at 14 discrete wavelengths in the spectral range from 350 nm to 800 nm. A silicon trap detector, with its absolute responsivity calibrated against the cryogenic radiometer, is used as a transfer standard for the calibration of other detectors using a specially built spectral comparator. The relative spectral responsivity of a detector at other wavelengths can be determined through the use of a cavity pyroelectric detector and the extrapolation technique. With this scale, NMC is capable of calibrating the spectral responsivity of different types of photodetectors from 250 nm to 1640 nm with uncertainties ranging from 3.7% to 0.3%.
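The transfer step described above, tying a relative spectral responsivity curve to an absolute calibration point, can be sketched as follows. All numbers below are hypothetical illustrations, not NMC data:

```python
# Tie a relative spectral responsivity curve to one absolute calibration point.
# Wavelengths, relative values, and the anchor point are invented for illustration.
wavelengths = [400, 500, 633, 800]          # nm
relative = [0.42, 0.61, 0.82, 1.00]         # relative responsivity (arbitrary units)

abs_wavelength, abs_resp = 633, 0.35        # absolute point (A/W) from a trap detector

# Scale the whole relative curve so it matches the absolute point
scale = abs_resp / relative[wavelengths.index(abs_wavelength)]
absolute = [scale * r for r in relative]    # responsivity in A/W at each wavelength
print(absolute)
```

The same scaling applies whether the absolute anchor comes from a trap detector or from the cryogenic radiometer directly; only the anchor wavelength and value change.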
Scotcher, Daniel; Jones, Christopher; Posada, Maria; Galetin, Aleksandra; Rostami-Hodjegan, Amin
2016-09-01
It is envisaged that application of mechanistic models will improve prediction of changes in renal disposition due to drug-drug interactions, genetic polymorphism in enzymes and transporters, and/or renal impairment. However, developing and validating mechanistic kidney models is challenging due to the number of processes that may occur (filtration, secretion, reabsorption and metabolism) in this complex organ. Prediction of human renal drug disposition from preclinical species may be hampered by species differences in the expression and activity of drug metabolising enzymes and transporters. A proposed solution is bottom-up prediction of pharmacokinetic parameters based on in vitro-in vivo extrapolation (IVIVE), mediated by recent advances in in vitro experimental techniques and the application of relevant scaling factors. This review is a follow-up to Part I of the report from the 2015 AAPS Annual Meeting and Exhibition (Orlando, FL; 25th-29th October 2015) and focuses on IVIVE and mechanistic prediction of renal drug disposition. It describes the various mechanistic kidney models that may be used to investigate renal drug disposition. Particular attention is given to efforts that have attempted to incorporate elements of IVIVE. In addition, the use of mechanistic models in the prediction of renal drug-drug interactions and their potential for determining suitable dose adjustments in kidney disease are discussed. The need for suitable clinical pharmacokinetic data for the purposes of delineating mechanistic aspects of kidney models in various scenarios is highlighted.
Linear extrapolation for prediction of tensile creep compliance of polyvinyl chloride
Institute of Scientific and Technical Information of China (English)
XIE Gang
2005-01-01
The universal creep equation successfully relates the creep strain (ε) to the aging time (te), the coefficient of retardation time (β), and the intrinsic time (t0). This relation was used to treat creep experimental data for polyvinyl chloride (PVC) specimens at a given stress and different aging times. The values of β were found by the "polynomial fitting" method in this work, instead of the "middle-point" method reported in the literature. A unified master line was constructed from the treated data and curves according to the universal equation. The master line can be used to predict long-term creep behavior and lifetime by extrapolation.
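The extrapolation step can be illustrated with a minimal sketch: fit a straight line to log-compliance versus log-time and extend it beyond the measured window. The data below are hypothetical and the universal creep equation itself is not reproduced here; this shows only the master-line extrapolation idea.

```python
import numpy as np

# Hypothetical short-term creep compliance data, for illustration only
t = np.array([1e2, 1e3, 1e4, 1e5])          # time, s
J = np.array([0.45, 0.52, 0.60, 0.69])      # creep compliance, 1/GPa

# Fit a straight "master line" in log-log coordinates
slope, intercept = np.polyfit(np.log10(t), np.log10(J), 1)

# Extrapolate far beyond the measured window for a long-term prediction
t_long = 1e8                                 # s
J_long = 10 ** (intercept + slope * np.log10(t_long))
print(f"predicted compliance at {t_long:.0e} s: {J_long:.3f} 1/GPa")
```

A real analysis would first superpose curves measured at different aging times onto one master line before extrapolating; the linear fit here stands in for that treated data.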
Sur l'Extrapolation des Signaux d'Énergie Finie à Bande Limitée
Charbonniaud, A. L.; Crouzet, J-F.; Gay, R.
1996-01-01
We show that Papoulis' method and Aizenberg's method for extrapolating finite-energy, band-limited signals are related to each other, provided that the same setting is used to describe both methods. We study such a setting and give some commented examples.
Challenges for In Vitro to In Vivo Extrapolation of Nanomaterial Dosimetry for Human Risk Assessment
Energy Technology Data Exchange (ETDEWEB)
Smith, Jordan N.
2013-11-01
The proliferation in types and uses of nanomaterials in consumer products has led to rapid application of conventional in vitro approaches for hazard identification. Unfortunately, assumptions pertaining to experimental design and interpretation for studies with chemicals are not generally appropriate for nanomaterials. The fate of nanomaterials in cell culture media, cellular dose to nanomaterials, cellular dose to nanomaterial byproducts, and intracellular fate of nanomaterials at the target site of toxicity all must be considered in order to accurately extrapolate in vitro results to reliable predictions of human risk.
Making the most of what we have: application of extrapolation approaches in wildlife transfer models
Energy Technology Data Exchange (ETDEWEB)
Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway); Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria); Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)
2014-07-01
Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate if phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted {sup 137}Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. The application of allometric, or mass
Institute of Scientific and Technical Information of China (English)
秦开怀; 范刚; et al.
1994-01-01
New algorithms for finding intersections of B-spline or Bézier curves and surfaces using recursive subdivision techniques are presented. The algorithms use an extrapolation acceleration technique and have convergence precision of order 2. A matrix method is used to subdivide the curves or surfaces, which makes the subdivision more concise and intuitive. Dividing depths of Bézier curves and surfaces are used to subdivide the curves or surfaces adaptively. The convergence precision and the computing efficiency of finding the intersections of curves and surfaces are thereby improved by the proposed methods.
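The abstract does not detail its extrapolation acceleration, but a classical example of the same idea is Aitken's Δ² method, which accelerates a linearly convergent sequence. The sketch below is illustrative and is not the paper's subdivision algorithm:

```python
import math

def aitken(seq):
    """Aitken delta-squared acceleration of a convergent sequence."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        # Guard against a zero second difference (already converged)
        out.append(x2 - (x2 - x1) ** 2 / denom if denom != 0 else x2)
    return out

# Linearly convergent fixed-point iteration x_{n+1} = cos(x_n)
xs = [1.0]
for _ in range(8):
    xs.append(math.cos(xs[-1]))

accel = aitken(xs)
root = 0.7390851332151607  # fixed point of cos(x)
print(abs(xs[-1] - root), abs(accel[-1] - root))
```

The accelerated tail is much closer to the limit than the raw iterates, which is the benefit extrapolation brings to slowly converging subdivision estimates.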
Study of an extrapolation chamber in a standard diagnostic radiology beam by Monte Carlo simulation
Energy Technology Data Exchange (ETDEWEB)
Vedovato, Uly Pita; Silva, Rayre Janaina Vieira; Neves, Lucio Pereira; Santos, William S.; Perini, Ana Paula, E-mail: anapaula.perini@ufu.br [Universidade Federal de Uberlandia (INFIS/UFU), MG (Brazil). Instituto de Fisica; Caldas, Linda V.E. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Belinato, Walmir [Instituto Federal de Educacao, Ciencia e Tecnologia da Bahia (IFBA), Vitoria da Conquista, BA (Brazil)
2016-07-01
In this work, we studied the influence of the components of an extrapolation ionization chamber on its response. This study was undertaken using the MCNP-5 Monte Carlo code and the standard diagnostic radiology quality for direct beams (RQR5). Using tally F6 and 2.1 x 10{sup 9} simulated histories, the results showed that the chamber design and materials do not significantly alter the energy deposited in its sensitive volume. The collecting electrode and the support board were the components with the greatest influence on the chamber response. (author)
Alessandria, F; Ardito, R; Arnaboldi, C; Avignone, F T; Balata, M; Bandac, I; Banks, T I; Bari, G; Beeman, J W; Bellini, F; Bersani, A; Biassoni, M; Bloxham, T; Brofferio, C; Bryant, A; Bucci, C; Cai, X Z; Canonica, L; Capelli, S; Carbone, L; Cardani, L; Carrettoni, M; Chott, N; Clemenza, M; Cosmelli, C; Cremonesi, O; Creswick, R J; Dafinei, I; Dally, A; De Biasi, A; Decowski, M P; Deninno, M M; de Waard, A; Di Domizio, S; Ejzak, L; Faccini, R; Fang, D Q; Farach, H; Ferri, E; Ferroni, F; Fiorini, E; Foggetta, L; Freedman, S; Frossati, G; Giachero, A; Gironi, L; Giuliani, A; Gorla, P; Gotti, C; Guardincerri, E; Gutierrez, T D; Haller, E E; Han, K; Heeger, K M; Huang, H Z; Ichimura, K; Kadel, R; Kazkaz, K; Keppel, G; Kogler, L; Kolomensky, Y G; Kraft, S; Lenz, D; Li, Y L; Liu, X; Longo, E; Ma, Y G; Maiano, C; Maier, G; Martinez, C; Martinez, M; Maruyama, R H; Moggi, N; Morganti, S; Newman, S; Nisi, S; Nones, C; Norman, E B; Nucciotti, A; Orio, F; Orlandi, D; Ouellet, J; Pallavicini, M; Palmieri, V; Pattavina, L; Pavan, M; Pedretti, M; Pessina, G; Pirro, S; Previtali, E; Rampazzo, V; Rimondi, F; Rosenfeld, C; Rusconi, C; Salvioni, C; Sangiorgio, S; Schaeffer, D; Scielzo, N D; Sisti, M; Smith, A R; Stivanello, F; Taffarello, L; Terenziani, G; Tian, W D; Tomei, C; Trentalange, S; Ventura, G; Vignati, M; Wang, B; Wang, H W; Whitten, C A; Wise, T; Woodcraft, A; Xu, N; Zanotti, L; Zarra, C; Zhu, B X; Zucchelli, S
2011-01-01
The CUORE Crystal Validation Runs (CCVRs) have been carried out since the end of 2008 at the Gran Sasso National Laboratories, in order to test the performance and radiopurity of the TeO$_2$ crystals produced at SICCAS (Shanghai Institute of Ceramics, Chinese Academy of Sciences) for the CUORE experiment. In this work the results of the first 5 validation runs are presented. Results have been obtained for bulk and surface contaminations from several nuclides. An extrapolation to the CUORE background has been performed.
Model of a realistic InP surface quantum dot extrapolated from atomic force microscopy results.
Barettin, Daniele; De Angelis, Roberta; Prosposito, Paolo; Auf der Maur, Matthias; Casalboni, Mauro; Pecchia, Alessandro
2014-05-16
We report on numerical simulations of a zincblende InP surface quantum dot (QD) on an In₀.₄₈Ga₀.₅₂ buffer. Our model is strictly based on experimental structures, since we extrapolated a three-dimensional dot directly from atomic force microscopy results. Continuum electromechanical, k·p bandstructure and optical calculations are presented for this realistic structure, together with benchmark calculations for a lens-shaped QD with the same radius and height as the extrapolated dot. Interesting similarities and differences emerge when comparing the results obtained with the two structures, leading to the conclusion that the use of a more realistic structure can provide significant improvements in the modeling of QDs. In fact, the remarkable splitting of the electron p-like levels of the extrapolated dot seems to prove that a realistic experimental structure can reproduce the right symmetry and a correct splitting, usually given only by atomistic calculations, even within the multiband k·p approach. Moreover, the energy levels and the symmetry of the holes are strongly dependent on the shape of the dot. In particular, as far as we know, their wave function symmetries do not seem to resemble any results previously obtained with simulations of idealized zincblende structures, such as lenses or truncated pyramids. The magnitude of the oscillator strengths is also strongly dependent on the shape of the dot, showing a lower intensity for the extrapolated dot, especially for the transition between the electron and hole ground states, as a result of a relevant reduction of the wave function overlap. We also compare an experimental photoluminescence spectrum measured on a homogeneous sample containing about 60 dots with a numerical ensemble average derived from single-dot calculations. The broader energy range of the numerical spectrum motivated us to perform further verifications, which have clarified some aspects of the experimental
DEFF Research Database (Denmark)
Storhaug, Gaute; Andersen, Ingrid Marie Vincent
2015-01-01
Whipping can contribute to increased fatigue and extreme loading of container ships, and guidelines have been made available by the leading class societies. Reports concerning the hogging collapse of MSC Napoli and MOL Comfort suggest that whipping contributed. The accidents happened in moderate to small storms. Model tests of three container ships have been carried out in different sea states under realistic assumptions. Preliminary extrapolation of the measured data suggested that moderate storms are dimensioning when whipping is included, due to higher maximum speed in moderate storms...
Kaltenboeck, Rudolf; Kerschbaum, Markus; Hennermann, Karin; Mayer, Stefan
2013-04-01
Nowcasting of precipitation events, especially thunderstorms or winter storms, has a high impact on flight safety and efficiency for air traffic management. Future strategic planning by air traffic control will result in circumnavigation of potentially hazardous areas, reduction of load around efficiency hot spots by offering alternatives, increased handling capacity, anticipation of avoidance manoeuvres, and increased awareness before dangerous areas are entered by aircraft. To facilitate this, rapid-update forecasts of the location, intensity, size, movement and development of local storms are necessary. Weather radar data deliver precipitation analyses of high temporal and spatial resolution close to real time by using clever scanning strategies. These data are the basis for generating rapid-update forecasts in a time frame of up to 2 hours and more for applications in aviation meteorological service provision, such as optimizing safety and economic impact in the context of sub-scale phenomena. Movement vectors between successive weather radar images are calculated by correlation-based tracking of radar echoes. For every new radar image, a set of ensemble precipitation fields is collected using different parameter sets (e.g. pattern-match size, time steps, filter methods), together with a history of tracking vectors and plausibility checks. This method accounts for the uncertainty in rain-field displacement and for different scales in time and space. By manually validating a set of case studies, the best verification method and skill score are defined and implemented in an online verification scheme, which calculates optimized forecasts for different time steps and areas using different extrapolation ensemble members. To obtain information about the quality and reliability of the extrapolation process, additional data-quality information (e.g. shielding in Alpine areas) is extrapolated and combined with an extrapolation
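The core of such nowcasting, correlation-based tracking of echoes between successive images, can be sketched with toy arrays and an exhaustive shift search. This is an illustration of the principle only, not any operational implementation:

```python
import numpy as np

def best_shift(prev, curr, max_shift=3):
    """Find the (dy, dx) shift of `prev` that best matches `curr` by correlation."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * curr)       # correlation at this lag
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Synthetic "radar" field advected by (1, 2) pixels per time step
rng = np.random.default_rng(0)
frame0 = rng.random((32, 32))
frame1 = np.roll(np.roll(frame0, 1, axis=0), 2, axis=1)

dy, dx = best_shift(frame0, frame1)              # recovers the imposed shift
nowcast = np.roll(np.roll(frame1, dy, axis=0), dx, axis=1)  # extrapolate one step
print(dy, dx)
```

Varying the match window, time step, and filtering, as the abstract describes, would turn this single displacement estimate into an ensemble of extrapolations.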
3D Drop Size Distribution Extrapolation Algorithm Using a Single Disdrometer
Lane, John
2012-01-01
Determining the Z-R relationship (where Z is the radar reflectivity factor and R is rainfall rate) from disdrometer data has been and is a common goal of cloud physicists and radar meteorology researchers. The usefulness of this quantity has traditionally been limited since radar represents a volume measurement, while a disdrometer corresponds to a point measurement. To solve that problem, a 3D-DSD (drop-size distribution) method of determining an equivalent 3D Z-R was developed at the University of Central Florida and tested at the Kennedy Space Center, FL. Unfortunately, that method required a minimum of three disdrometers clustered together within a microscale network (0.1-km separation). Since most commercial disdrometers used by the radar meteorology/cloud physics community are high-cost instruments, three disdrometers located within a microscale area is generally not a practical strategy given the limitations of typical research budgets. A relatively simple modification to the 3D-DSD algorithm provides an estimate of the 3D-DSD, and therefore a 3D Z-R measurement, using a single disdrometer. The basis of the horizontal extrapolation is mass conservation of a drop size increment, employing the mass conservation equation. For vertical extrapolation, convolution of a drop size increment using raindrop terminal velocity is used. Together, these two independent extrapolation techniques provide a complete 3D-DSD estimate in a volume around and above a single disdrometer. The estimation error is lowest along a vertical plane intersecting the disdrometer position in the direction of wind advection. This work demonstrates that multiple sensors are not required for successful implementation of the 3D interpolation/extrapolation algorithm. This is a great benefit since it is seldom that multiple sensors in the required spatial arrangement are available for this type of analysis. The original software (developed at the University of Central Florida, 1998-2000) has
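The Z and R that anchor a Z-R relationship both follow from a disdrometer DSD by moment sums. The sketch below uses a hypothetical Marshall-Palmer-like DSD and an assumed empirical terminal-velocity power law, not the paper's calibrated values:

```python
import numpy as np

# Hypothetical disdrometer DSD: bin-center diameters (mm) and
# number concentrations N(D) in m^-3 mm^-1
D = np.linspace(0.5, 5.0, 10)
N = 8000.0 * np.exp(-2.0 * D)            # Marshall-Palmer-like exponential DSD
dD = D[1] - D[0]                          # bin width, mm

v = 3.78 * D ** 0.67                      # assumed terminal-velocity law, m/s

# Reflectivity factor: 6th moment of the DSD (mm^6 m^-3)
Z = np.sum(N * D ** 6 * dD)

# Rainfall rate: water-volume flux converted to mm/h
# (pi/6 drop volume, 1e-9 mm^3->m^3, 3.6e6 m/s->mm/h collapse to 6e-4*pi)
R = 6e-4 * np.pi * np.sum(N * v * D ** 3 * dD)

print(f"Z = {10 * np.np.log10(Z):.1f} dBZ, R = {R:.1f} mm/h" if False else
      f"Z = {10 * np.log10(Z):.1f} dBZ, R = {R:.1f} mm/h")
```

The vertical extrapolation described above amounts to propagating each drop-size increment upward in time using v(D), i.e. reusing exactly these per-bin quantities.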
Mercury's Pyroclastic Deposits and their spectral variability
Besse, Sebastien; Doressoundiram, Alain
2016-10-01
Observations of the MESSENGER spacecraft in orbit around Mercury have shown that volcanism is a very important process that has shaped the surface of the planet, particularly in its early history. In this study, we use the full range of the MASCS spectrometer (300-1400 nm) to characterize the spectral properties of the pyroclastic deposits. Analysis of deposits within the Caloris Basin, and at other locations on Mercury's surface (e.g., Hesiod, Rachmaninoff), shows two main results: 1) spectral variability is significant in the UV and VIS range between the deposits themselves, and also with respect to the rest of the planet and other features like hollows; 2) deposits exhibit a radial variability similar to that found in the lunar pyroclastic deposits of floor-fractured craters. These results are put in context with the latest analysis from other instruments of the MESSENGER spacecraft, in particular the visible observations from the MDIS imager and the elemental composition given by the X-Ray spectrometer. Although the results taken together do not allow pointing to compositional variability of the deposits with certainty, information on the formation mechanisms, weathering and formation age can be extrapolated from the radial variability and the elemental composition.
Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...
Application of Two-Parameter Extrapolation for Solution of Boundary-Value Problem on Semi-Axis
Zhidkov, E P
2000-01-01
A method for refining approximate eigenvalues and eigenfunctions for a boundary-value problem on a half-axis is suggested. To solve the problem numerically, one has to solve a problem on a finite segment [0,R] instead of the original problem on the interval [0,∞). This replacement introduces errors in the eigenvalues and eigenfunctions. Choosing R beforehand to obtain the required accuracy is often impossible. Thus, one has to re-solve the problem on [0,R] with a larger R. If there are two eigenvalues or two eigenfunctions corresponding to different segments, the suggested method allows one to improve the accuracy of the eigenvalue and the eigenfunction for the original problem by means of extrapolation along the segment. This approach is similar to Richardson's method. Moreover, a two-parameter extrapolation is described: it is a combination of the extrapolation along the segment and Richardson's extrapolation along a discretization step.
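Richardson's idea, which the segment extrapolation mirrors, can be sketched for the discretization-step case. The example below (illustrative only, not the paper's boundary-value solver) cancels the leading error term of a central-difference derivative:

```python
import math

def richardson(a_h, a_h2, p):
    """Combine estimates with step h and h/2, assuming error ~ C*h^p,
    to cancel the leading error term."""
    return a_h2 + (a_h2 - a_h) / (2 ** p - 1)

# Central-difference derivative of sin at x = 1 (exact value: cos(1));
# its leading error scales as h^2, so p = 2
f, x = math.sin, 1.0
d = lambda h: (f(x + h) - f(x - h)) / (2 * h)

crude, finer = d(0.1), d(0.05)
improved = richardson(crude, finer, p=2)
print(abs(finer - math.cos(1)), abs(improved - math.cos(1)))
```

The two-parameter variant of the abstract applies the same cancellation twice: once along the truncation radius R and once along the mesh step.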
Direct activity determination of Mn-54 and Zn-65 by a non-extrapolation liquid scintillation method
CSIR Research Space (South Africa)
Simpson, BRS
2004-02-01
The measurement of Mn-54 and Zn-65 by liquid scintillation coincidence counting results in low detection efficiencies. The activity obtained from the extrapolation of efficiency data can therefore become problematic if curvature is present...
Spackman, Peter R.; Karton, Amir
2015-05-01
Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L^α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol⁻¹. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality, and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules, including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol⁻¹.
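The A + B/L^α two-point formula can be solved in closed form from two basis-set energies. The sketch below uses hypothetical CCSD correlation energies, not values from the W4-11 database:

```python
def cbs_two_point(e_small, e_large, l_small, l_large, alpha=3.0):
    """Solve E(L) = E_CBS + B / L**alpha from two (L, E) points; return E_CBS."""
    num = e_large * l_large ** alpha - e_small * l_small ** alpha
    return num / (l_large ** alpha - l_small ** alpha)

# Hypothetical CCSD correlation energies (hartree) with DZ (L=2) and TZ (L=3)
e_dz, e_tz = -0.2105, -0.2274
e_cbs = cbs_two_point(e_dz, e_tz, 2, 3, alpha=3.0)
print(f"CBS estimate: {e_cbs:.4f} Eh")
```

A "global" extrapolation fixes alpha for all systems, while the system-dependent scheme of the abstract would substitute an alpha fitted from cheaper MP2 energies for each molecule.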
Energy Technology Data Exchange (ETDEWEB)
Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)
2015-05-15
Yamamoto, Tetsuya
2007-06-01
A novel test fixture operating at a millimeter-wave band using an extrapolation range measurement technique was developed at the National Metrology Institute of Japan (NMIJ). Here I describe the measurement system using a Q-band test fixture. I measured the relative insertion loss as a function of antenna separation distance and observed the effects of multiple reflections between the antennas. I also evaluated the antenna gain at 33 GHz using the extrapolation technique.
Amir, Sahar Z.
2013-05-01
We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized canonical NVT ensemble averages, such as pressure and energy, for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chain points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up MCMC thermodynamic computations to an efficiency comparable with conventional equation-of-state approaches. In particular, this makes it applicable to large-scale optimization of L
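The reweighting idea resembles single-histogram reweighting of canonical averages to a nearby temperature. The sketch below uses toy Gaussian "energies" rather than L-J simulation data, and is not the paper's scheme:

```python
import numpy as np

def reweight(energies, obs, beta_sim, beta_new):
    """Reweight canonical-ensemble samples from beta_sim to a nearby beta_new.
    Subtracting the mean energy keeps the exponentials well conditioned."""
    w = np.exp(-(beta_new - beta_sim) * (energies - energies.mean()))
    return np.sum(obs * w) / np.sum(w)

rng = np.random.default_rng(1)
# Toy "Markov chain" output: sampled energies and a correlated observable
E = rng.normal(-500.0, 10.0, size=20000)
A = 0.01 * E + rng.normal(0.0, 0.1, size=20000)

a_sim = A.mean()                              # average at the simulated beta
a_new = reweight(E, A, beta_sim=1.0, beta_new=1.02)  # average at a nearby beta
print(a_sim, a_new)
```

Raising beta weights the low-energy samples more heavily, so an observable positively correlated with energy shifts downward; interpolating between several neighboring chains, as the abstract does, reduces the extrapolation distance and hence this reweighting error.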
Stevanovic, Dragan
2015-01-01
Spectral Radius of Graphs provides a thorough overview of important results on the spectral radius of adjacency matrix of graphs that have appeared in the literature in the preceding ten years, most of them with proofs, and including some previously unpublished results of the author. The primer begins with a brief classical review, in order to provide the reader with a foundation for the subsequent chapters. Topics covered include spectral decomposition, the Perron-Frobenius theorem, the Rayleigh quotient, the Weyl inequalities, and the Interlacing theorem. From this introduction, the
Cho, M.A.; Skidmore, A.K.
2006-01-01
There is increasing interest in using hyperspectral data for quantitative characterization of vegetation in spatial and temporal scopes. Many spectral indices are being developed to improve vegetation sensitivity by minimizing the background influence. The chlorophyll absorption continuum index (CAC
The role of strange sea quarks in chiral extrapolations on the lattice
Descotes-Genon, S
2004-01-01
Since the strange quark has a light mass of order Lambda_QCD, fluctuations of sea s-s bar pairs may play a special role in the low-energy dynamics of QCD by inducing significantly different patterns of chiral symmetry breaking in the chiral limits N_f=2 (m_u=m_d=0, m_s physical) and N_f=3 (m_u=m_d=m_s=0). This effect of vacuum fluctuations of s-s bar pairs is related to the violation of the Zweig rule in the scalar sector, described through the two O(p^4) low-energy constants L_4 and L_6 of the three-flavour strong chiral Lagrangian. In the case of significant vacuum fluctuations, three-flavour chiral expansions might exhibit a numerical competition between leading- and next-to-leading-order terms according to the chiral counting, and chiral extrapolations should be handled with special care. We investigate the impact of the fluctuations of s-s bar pairs on chiral extrapolations in the case of lattice simulations with three dynamical flavours in the isospin limit. Information on the size of the vacuum fluct...
$
Abbasi, R U
2016-01-01
Recent measurements at the LHC of the p-p total cross section have reduced the uncertainty in simulations of cosmic ray air showers, in particular in the depth of shower maximum, called $X_{max}$. However, uncertainties in other important parameters, in particular the multiplicity and elasticity of high energy interactions, have not improved, and there is a remaining uncertainty due to the total cross section. Uncertainties due to extrapolations from accelerator data, at a maximum energy of $\sim$ one TeV in the p-p center of mass, to 250 TeV ($3\times10^{19}$ eV in a cosmic ray proton's lab frame) introduce significant uncertainties in predictions of $\langle X_{max}\rangle$. In this paper we estimate a lower limit on these uncertainties. The result is that the uncertainty in $\langle X_{max}\rangle$ is larger than the difference among the modern models being used in the field. At the full energy of the LHC, which is equivalent to $\sim 1\times10^{17}$ eV in the cosmic ray lab frame, the extrapolation is not as extreme, and the uncertainty is approxim...
Waheed, Umair bin
2014-08-01
The wavefield extrapolation operator for ellipsoidally anisotropic (EA) media offers significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate focusing for media of orthorhombic anisotropy. Therefore, we develop effective EA models that correctly capture the kinematic behavior of the wavefield for tilted orthorhombic (TOR) media. Specifically, we compute effective source-dependent velocities for the EA model using a kinematic high-frequency representation of the TOR wavefield. The effective model allows us to use the cheaper EA wavefield extrapolation operator to obtain approximate wavefield solutions for a TOR model. Despite the fact that the effective EA models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost-versus-accuracy tradeoff for wavefield computations in TOR media, particularly for media of low to moderate complexity. We demonstrate the applicability of the proposed approach on a layered TOR model.
Counter-extrapolation method for conjugate interfaces in computational heat and mass transfer.
Le, Guigao; Oulaid, Othmane; Zhang, Junfeng
2015-03-01
In this paper a conjugate interface method is developed by performing extrapolations along the normal direction. Compared to other existing conjugate models, our method has several technical advantages, including the simple and straightforward algorithm, accurate representation of the interface geometry, applicability to any interface-lattice relative orientation, and availability of the normal gradient. The model is validated by simulating the steady and unsteady convection-diffusion system with a flat interface and the steady diffusion system with a circular interface, and good agreement is observed when comparing the lattice Boltzmann results with respective analytical solutions. A more general system with unsteady convection-diffusion process and a curved interface, i.e., the cooling process of a hot cylinder in a cold flow, is also simulated as an example to illustrate the practical usefulness of our model, and the effects of the cylinder heat capacity and thermal diffusivity on the cooling process are examined. Results show that the cylinder with a larger heat capacity can release more heat energy into the fluid and the cylinder temperature cools down slower, while the enhanced heat conduction inside the cylinder can facilitate the cooling process of the system. Although these findings appear obvious from physical principles, these confirming results demonstrate the application potential of our method in more complex systems. In addition, the basic idea and algorithm of the counter-extrapolation procedure presented here can be readily extended to other lattice Boltzmann models and even other computational technologies for heat and mass transfer systems.
Electric form factors of the octet baryons from lattice QCD and chiral extrapolation
Energy Technology Data Exchange (ETDEWEB)
Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M. [Adelaide Univ., SA (Australia). ARC Centre of Excellence in Particle Physics at the Terascale and CSSM; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe, Hyogo (Japan); Pleiter, D. [Forschungszentrum Juelich (Germany). JSC; Regensburg Univ. (Germany). Inst. fuer Theoretische Physik; Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Div.; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Stueben, H. [Hamburg Univ. (Germany). Regionales Rechenzentrum; Collaboration: CSSM and QCDSF/UKQCD Collaborations
2014-03-15
We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q^2 in the range 0.2-1.3 GeV^2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μ_p G_E^p / G_M^p. This quantity decreases with Q^2 in a way qualitatively consistent with recent experimental results.
The Impacts of Atmospheric Stability on the Accuracy of Wind Speed Extrapolation Methods
Directory of Open Access Journals (Sweden)
Jennifer F. Newman
2014-01-01
Full Text Available The building of utility-scale wind farms requires knowledge of the wind speed climatology at hub height (typically 80–100 m). As most wind speed measurements are taken at 10 m above ground level, efforts are being made to relate 10-m measurements to approximate hub-height wind speeds. One common extrapolation method is the power law, which uses a shear parameter to estimate the wind shear between a reference height and hub height. The shear parameter is dependent on atmospheric stability and should ideally be determined independently for different atmospheric stability regimes. In this paper, data from the Oklahoma Mesonet are used to classify atmospheric stability and to develop stability-dependent power law fits for a nearby tall tower. Shear exponents developed from one month of data are applied to data from different seasons to determine the robustness of the power law method. In addition, similarity theory-based methods are investigated as possible alternatives to the power law. Results indicate that the power law method performs better than similarity theory methods, particularly under stable conditions, and can easily be applied to wind speed data from different seasons. In addition, the importance of using co-located near-surface and hub-height wind speed measurements to develop extrapolation fits is highlighted.
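The power law method described in the abstract is compact enough to sketch directly; the numbers below are illustrative, not Mesonet data:

```python
import math

def power_law_wind(u_ref, z_ref, z_hub, alpha):
    """Extrapolate wind speed from a reference height z_ref to hub height
    z_hub using the power law with shear exponent alpha."""
    return u_ref * (z_hub / z_ref) ** alpha

def shear_exponent(u1, z1, u2, z2):
    """Fit the shear exponent from co-located measurements at two heights."""
    return math.log(u2 / u1) / math.log(z2 / z1)

# 6 m/s measured at 10 m, extrapolated to an 80 m hub.  alpha ~ 1/7 is a
# common neutral-stability default; stable regimes call for larger exponents,
# which is why stability-dependent fits matter.
u80 = power_law_wind(6.0, 10.0, 80.0, 1.0 / 7.0)
print(round(u80, 2))  # 8.08
```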
Energy Technology Data Exchange (ETDEWEB)
Kim, B.H.; Velas, J.P.; Lee, K.Y [Pennsylvania State Univ., University Park, PA (United States). Dept. of Electrical Engineering
2006-07-01
This paper presented a mathematical method that power plant operators can use to estimate rotational mass unbalance, which is the most common source of vibration in turbine generators. An unbalanced rotor or driveshaft causes vibration and stress in the rotating part and in its supporting structure. As such, balancing the rotating part is important to minimize structural stress, minimize operator annoyance and fatigue, increase bearing life, and minimize power loss. The newly proposed method for estimating vibration on a turbine generator uses mass unbalance extrapolation based on a modified system-type neural network architecture, notably semigroup theory, which is used to study differential equations, partial differential equations, and their combinations. Rather than relying on inaccurate vibration measurements, this method extrapolates a set of reliable mass unbalance readings from a common source of vibration. Given a set of empirical data with no analytic expression, the authors first developed an analytic description and then extended that model along a single axis. The algebraic decomposition used to obtain the analytic description of the empirical data in semigroup form involved the product of a coefficient vector and a basis set of vectors. The proposed approach was simulated on empirical data. The concept can also be tested in many other engineering and non-engineering problems. 23 refs., 11 figs.
Testing magnetofrictional extrapolation with the Titov-Démoulin model of solar active regions
Valori, G; Török, T; Titov, V S
2010-01-01
We examine the nonlinear magnetofrictional extrapolation scheme using the solar active region model by Titov and Démoulin as test field. This model consists of an arched, line-tied current channel held in force-free equilibrium by the potential field of a bipolar flux distribution in the bottom boundary. A modified version, having a parabolic current density profile, is employed here. We find that the equilibrium is reconstructed with very high accuracy in a representative range of parameter space, using only the vector field in the bottom boundary as input. Structural features formed at the interface between the flux rope and the surrounding arcade (the "hyperbolic flux tube" and the "bald patch separatrix surface") are reliably reproduced, as are the flux rope twist and the energy and helicity of the configuration. This demonstrates that force-free fields containing these basic structural elements of solar active regions can be obtained by extrapolation. The influence of the chosen initial condition on the accuracy...
Classification of future 5 MW turbines by extrapolation of current trends
Energy Technology Data Exchange (ETDEWEB)
Thakoer, R.; Van Kuik, G.A.M.; Van Leeuwen, H.L.
1999-09-01
This report is part of the STABTOOL project. The goals of the STABTOOL project can be summarised as follows: (1) establish the elastic configuration of the present megawatt-scale wind turbines, and make an inventory of the present design trends and trends for future wind turbine developments with respect to changes in the elastic configuration; (2) make an inventory of the different types of instabilities which can occur for the present and next generation of wind turbines, for both onshore and offshore applications; (3) make an inventory of analysis and design methods, and the development or adjustment of calculation methods. The final objective of the STABTOOL project is to create STABility TOOLs: a simple set of calculation models and methods for specific forms of aeroelastic instabilities and vibration problems which are applicable to both present and future large wind turbines. This report concerns the upscaling of the selected elastic configurations described in ST-NW-1-004: 2-blade, (active) pitch controlled, fixed speed (Kvaerner WTS 80M); 3-blade, (active) stall controlled, fixed speed (Nedwind 62); 3-blade, pitch controlled, variable speed (Lagerwey 50/1000). Based on scaling rules and extrapolation of trend figures, the characteristics of the future 5 MW class of wind turbines are estimated. The Nedwind-based extrapolation is considered to be an onshore turbine, whereas the others are offshore. 5 refs.
On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology.
Directory of Open Access Journals (Sweden)
Paul B Conn
Full Text Available Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models).
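The gIVH generalizes Cook's IVH by way of prediction variance; a minimal sketch of the original linear-regression IVH check conveys the idea (hypothetical data, not the seal survey): a point lies inside the hull when its leverage does not exceed the maximum leverage of the observed design points.

```python
import numpy as np

def in_ivh(X_obs, X_pred):
    """Flag which prediction points fall inside Cook's independent variable
    hull: x is inside when x (X'X)^-1 x' <= max leverage of observed rows."""
    XtX_inv = np.linalg.inv(X_obs.T @ X_obs)
    h_obs = np.einsum('ij,jk,ik->i', X_obs, XtX_inv, X_obs)   # observed leverages
    h_pred = np.einsum('ij,jk,ik->i', X_pred, XtX_inv, X_pred)
    return h_pred <= h_obs.max()

rng = np.random.default_rng(1)
# Design matrix: intercept plus one covariate observed on [0, 1].
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])
# A covariate value of 5 is far outside the observed range, so predicting
# there is an extrapolation the IVH flags for skepticism.
X_new = np.array([[1.0, 0.5], [1.0, 5.0]])
print(in_ivh(X, X_new))  # [ True False]
```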
Determination of the true null electrode spacing of an extrapolation chamber for X-ray dosimetry
Energy Technology Data Exchange (ETDEWEB)
Figueiredo, M.T.T.; Bastos, F.M.; Silva, T.A. da, E-mail: mttf@cdtn.br, E-mail: fmb@cdtn.br, E-mail: silvata@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil). Pos-Graduacao em Ciencia e Tecnologia da Radiacao, Minerais e Materiais
2015-07-01
An accurate determination of the actual null distance is critical for the establishment of a primary measurement method for absorbed dose in tissue, since the concept of the true null electrode spacing is used to define the sensitive volume of an extrapolation chamber. In this paper, a critical analysis of two methodologies for determining the true null electrode spacing of an extrapolation chamber was done. Firstly, the ionization current as a function of electrode spacing was measured in ISO 4037 low-energy X-ray beams. In the second procedure, an LC bridge was used to measure the capacitance between the electrodes of a PTW model 23392 Böhm ionization chamber, and a reliable relationship between capacitance and relative distance was established. Results showed that the true null spacing values varied from 0.0015 to 0.38 mm. Since capacitance meters with high resolution are not always available in calibration laboratories, the second method showed values with large uncertainties. The first method proved to be highly sensitive to the quality of the X-ray beams used. (author)
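A sketch of the capacitance-based procedure, assuming an ideal parallel-plate chamber with C = eps0*A/(d + d0), so that 1/C is linear in the nominal spacing d and its intercept-to-slope ratio recovers the spacing offset d0 (synthetic numbers, not the CDTN measurements):

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def true_null_offset(nominal_mm, capacitance_pF):
    """Estimate the spacing offset d0 (mm) from a linear fit of 1/C versus
    nominal spacing: C = eps0*A/(d + d0) implies 1/C = s*d + s*d0."""
    d = np.asarray(nominal_mm, dtype=float)
    inv_c = 1.0 / np.asarray(capacitance_pF, dtype=float)
    slope, intercept = np.polyfit(d, inv_c, 1)
    return intercept / slope

# Synthetic data: 1 cm^2 electrode, 0.05 mm offset hidden in the readings.
area = 1e-4   # m^2
d0 = 0.05     # mm
d_nominal = np.array([0.5, 1.0, 2.0, 3.0])                      # mm
c_pF = EPS0 * area / ((d_nominal + d0) * 1e-3) * 1e12           # pF
print(round(true_null_offset(d_nominal, c_pF), 4))  # 0.05
```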
Unmixing of spectrally similar minerals
CSIR Research Space (South Africa)
Debba, Pravesh
2009-01-01
Full Text Available -bearing oxide/hydroxide/sulfate minerals in complex mixtures be obtained using hyperspectral data? Method of spectral unmixing. Old method (problem): Linear Spectral Mixture Analysis (LSMA...
Vowel Inherent Spectral Change
Assmann, Peter
2013-01-01
It has been traditional in phonetic research to characterize monophthongs using a set of static formant frequencies, i.e., formant frequencies taken from a single time-point in the vowel or averaged over the time-course of the vowel. However, over the last twenty years a growing body of research has demonstrated that, at least for a number of dialects of North American English, vowels which are traditionally described as monophthongs often have substantial spectral change. Vowel Inherent Spectral Change has been observed in speakers’ productions, and has also been found to have a substantial effect on listeners’ perception. In terms of acoustics, the traditional categorical distinction between monophthongs and diphthongs can be replaced by a gradient description of dynamic spectral patterns. This book includes chapters addressing various aspects of vowel inherent spectral change (VISC), including theoretical and experimental studies of the perceptually relevant aspects of VISC, the relationship between ar...
Temporal Lorentzian spectral triples
Franco, Nicolas
2014-09-01
We present the notion of temporal Lorentzian spectral triple which is an extension of the notion of pseudo-Riemannian spectral triple with a way to ensure that the signature of the metric is Lorentzian. A temporal Lorentzian spectral triple corresponds to a specific 3 + 1 decomposition of a possibly noncommutative Lorentzian space. This structure introduces a notion of global time in noncommutative geometry. As an example, we construct a temporal Lorentzian spectral triple over a Moyal-Minkowski spacetime. We show that, when time is commutative, the algebra can be extended to unbounded elements. Using such an extension, it is possible to define a Lorentzian distance formula between pure states with a well-defined noncommutative formulation.
Spectral recognition of graphs
Directory of Open Access Journals (Sweden)
Cvetković Dragoš
2012-01-01
Full Text Available At some time, in the childhood of spectral graph theory, it was conjectured that non-isomorphic graphs have different spectra, i.e. that graphs are characterized by their spectra. Very quickly this conjecture was refuted and numerous examples and families of non-isomorphic graphs with the same spectrum (cospectral graphs) were found. Still some graphs are characterized by their spectra and several mathematical papers are devoted to this topic. In applications to computer sciences, spectral graph theory is considered as very strong. The benefit of using graph spectra in treating graphs is that eigenvalues and eigenvectors of several graph matrices can be quickly computed. Spectral graph parameters contain a lot of information on the graph structure (both global and local), including some information on graph parameters that, in general, are computed by exponential algorithms. Moreover, in some applications in data mining, graph spectra are used to encode graphs themselves. The Euclidean distance between the eigenvalue sequences of two graphs on the same number of vertices is called the spectral distance of graphs. Some other spectral distances (also based on various graph matrices) have been considered as well. Two graphs are considered as similar if their spectral distance is small. If two graphs are at zero distance, they are cospectral. In this sense, cospectral graphs are similar. Other spectrally based measures of similarity between networks (not necessarily having the same number of vertices) have been used in Internet topology analysis, and in other areas. The notion of spectral distance enables the design of various meta-heuristic (e.g., tabu search, variable neighbourhood search) algorithms for constructing graphs with a given spectrum (spectral graph reconstruction). Several spectrally based pattern recognition problems appear in many areas (e.g., image segmentation in computer vision, alignment of protein-protein interaction networks in bio
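The spectral distance defined in the abstract is easy to state in code; a minimal sketch for adjacency spectra of two graphs on the same number of vertices:

```python
import numpy as np

def spectral_distance(A, B):
    """Euclidean distance between the sorted adjacency eigenvalue
    sequences of two graphs on the same number of vertices."""
    ev_a = np.sort(np.linalg.eigvalsh(np.asarray(A, dtype=float)))
    ev_b = np.sort(np.linalg.eigvalsh(np.asarray(B, dtype=float)))
    return np.linalg.norm(ev_a - ev_b)

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # path on 3 vertices
K3 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])   # triangle
# Cospectral graphs (and trivially, a graph with itself) are at distance zero.
print(spectral_distance(P3, P3))           # 0.0
print(round(spectral_distance(P3, K3), 3))  # 1.231
```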
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-08-01
Spectrally selective glazing is window glass that permits some portions of the solar spectrum to enter a building while blocking others. This high-performance glazing admits as much daylight as possible while preventing transmission of as much solar heat as possible. By controlling solar heat gains in summer, preventing loss of interior heat in winter, and allowing occupants to reduce electric lighting use by making maximum use of daylight, spectrally selective glazing significantly reduces building energy consumption and peak demand. Because new spectrally selective glazings can have a virtually clear appearance, they admit more daylight and permit much brighter, more open views to the outside while still providing the solar control of the dark, reflective energy-efficient glass of the past. This Federal Technology Alert provides detailed information and procedures for Federal energy managers to consider spectrally selective glazings. The principle of spectrally selective glazings is explained. Benefits related to energy efficiency and other architectural criteria are delineated. Guidelines are provided for appropriate application of spectrally selective glazing, and step-by-step instructions are given for estimating energy savings. Case studies are also presented to illustrate actual costs and energy savings. Current manufacturers, technology users, and references for further reading are included for users who have questions not fully addressed here.
Thermophotovoltaic Spectral Control
Energy Technology Data Exchange (ETDEWEB)
DM DePoy; PM Fourspring; PF Baldasaro; JF Beausang; EJ Brown; MW Dashiel; KD Rahner; TD Rahmlow; JE Lazo-Wasem; EJ Gratrix; B Wemsman
2004-06-09
Spectral control is a key technology for thermophotovoltaic (TPV) direct energy conversion systems because only a fraction (typically less than 25%) of the incident thermal radiation has energy exceeding the diode bandgap energy, E_g, and can thus be converted to electricity. The goal for TPV spectral control in most applications is twofold: (1) maximize TPV efficiency by minimizing transfer of low-energy, below-bandgap photons from the radiator to the TPV diode; (2) maximize TPV surface power density by maximizing transfer of high-energy, above-bandgap photons from the radiator to the TPV diode. TPV spectral control options include: front surface filters (e.g. interference filters, plasma filters, interference/plasma tandem filters, and frequency selective surfaces), back surface reflectors, and wavelength selective radiators. System analysis shows that spectral performance dominates diode performance in any practical TPV system, and that low bandgap diodes enable both higher efficiency and power density when spectral control limitations are considered. Lockheed Martin has focused its efforts on front surface tandem filters, which have achieved spectral efficiencies of ~83% for E_g = 0.52 eV and ~76% for E_g = 0.60 eV at a 950 C radiator temperature.
Rapid spectral analysis for spectral imaging.
Jacques, Steven L; Samatham, Ravikant; Choudhury, Niloy
2010-07-15
Spectral imaging requires rapid analysis of the spectra associated with each pixel. A rapid algorithm has been developed that uses iterative matrix inversions to solve for the absorption spectra of a tissue, using a lookup table for photon pathlength based on numerical simulations. The algorithm uses tissue water content as an internal standard to specify the strength of optical scattering. An experimental example is presented on the spectroscopy of port-wine stain lesions. When implemented in MATLAB, the method is ~100-fold faster than using fminsearch().
Poppe, L. J.; Eliason, A. E.; Hastings, M. E.
2004-05-01
Methods that describe and summarize grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Therefore, to facilitate reduction of sedimentologic data, we have written a computer program (GSSTAT) to generate grain-size statistics and extrapolate particle distributions. Our program is written in Microsoft Visual Basic 6.0, runs on Windows 95/98/ME/NT/2000/XP computers, provides a window to facilitate execution, and allows users to select options with mouse-click events or through interactive dialogue boxes. The program permits users to select output in either inclusive graphics or moment statistics, to extrapolate distributions to the colloidal-clay boundary by three methods, and to convert between frequency and cumulative frequency percentages. Detailed documentation is available within the program. Input files to the program must be comma-delimited ASCII text and have 20 fields that include: sample identifier, latitude, longitude, and the frequency or cumulative frequency percentages of the whole-phi fractions from 11 phi through -5 phi. Individual fields may be left blank, but the sum of the phi fractions must total 100% (+/- 0.2%). The program expects the first line of the input file to be a header showing attribute names; no embedded commas are allowed in any of the fields. Error messages warn the user of potential problems. The program generates an output file in the requested destination directory and allows the user to view results in a display window to determine the occurrence of errors. The output file has a header for its first line, but now has 34 fields; the original descriptor fields plus percentages of gravel, sand, silt and clay, statistics, classification, verbal descriptions, frequency or cumulative frequency percentages of the whole-phi fractions from 13 phi through -5 phi, and a field for error messages. If the user has selected extrapolation, the two additional phi
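GSSTAT's exact conventions are not reproduced here, but method-of-moments grain-size statistics from whole-phi frequency percentages can be sketched as follows (hypothetical class data; the 100% +/- 0.2% total mirrors the program's input check):

```python
import numpy as np

def moment_statistics(phi_midpoints, freq_pct):
    """Method-of-moments mean, sorting (std. dev.), and skewness from
    phi-class midpoints and frequency percentages summing to ~100."""
    f = np.asarray(freq_pct, dtype=float)
    m = np.asarray(phi_midpoints, dtype=float)
    assert abs(f.sum() - 100.0) < 0.2, "frequencies must total 100%"
    mean = (f * m).sum() / 100.0
    sorting = np.sqrt((f * (m - mean) ** 2).sum() / 100.0)
    skew = (f * (m - mean) ** 3).sum() / (100.0 * sorting ** 3)
    return mean, sorting, skew

# A symmetric distribution centred on 2 phi (fine sand):
mids = [1.0, 2.0, 3.0]
freqs = [25.0, 50.0, 25.0]
mean, sorting, skew = moment_statistics(mids, freqs)
print(float(mean), round(float(sorting), 3), float(skew))  # 2.0 0.707 0.0
```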
Mackie, Iain D.; DiLabio, Gino A.
2011-10-01
The first-principles calculation of non-covalent (particularly dispersion) interactions between molecules is a considerable challenge. In this work we studied the binding energies for ten small non-covalently bonded dimers with several combinations of correlation methods (MP2, coupled-cluster singles and doubles (CCSD), and coupled-cluster singles and doubles with perturbative triples (CCSD(T))), correlation-consistent basis sets (aug-cc-pVXZ, X = D, T, Q), two-point complete basis set energy extrapolations, and counterpoise corrections. For this work, complete basis set results were estimated from averaged counterpoise and non-counterpoise-corrected CCSD(T) binding energies obtained from extrapolations with aug-cc-pVQZ and aug-cc-pVTZ basis sets. It is demonstrated that, in almost all cases, binding energies converge more rapidly to the basis set limit by averaging the counterpoise and non-counterpoise corrected values than by using either counterpoise or non-counterpoise methods alone. Examination of the effect of basis set size and electron correlation shows that the triples contribution to the CCSD(T) binding energies is fairly constant with the basis set size, with a slight underestimation with CCSD(T)/aug-cc-pVDZ compared to the value at the (estimated) complete basis set limit, and that contributions to the binding energies obtained by MP2 generally overestimate the analogous CCSD(T) contributions. Taking these factors together, we conclude that the binding energies for non-covalently bonded systems can be accurately determined using a composite method that combines CCSD(T)/aug-cc-pVDZ with energy corrections obtained using basis set extrapolated MP2 (utilizing aug-cc-pVQZ and aug-cc-pVTZ basis sets), if all of the components are obtained by averaging the counterpoise and non-counterpoise energies. With such an approach, binding energies for the set of ten dimers are predicted with a mean absolute deviation of 0.02 kcal/mol, a maximum absolute deviation of 0.05 kcal/mol, and a mean percent
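The two-point complete-basis-set extrapolation and the counterpoise averaging used above can be sketched as follows; the inverse-cube (Helgaker-type) formula is a common choice for correlation energies at consecutive cardinal numbers, though the paper's exact variant may differ, and the numbers are illustrative:

```python
def cbs_two_point(e_small, x_small, e_large, x_large):
    """Two-point inverse-cube CBS extrapolation: assumes the correlation
    energy behaves as E(X) = E_CBS + a / X^3 for cardinal number X."""
    a, b = x_small ** 3, x_large ** 3
    return (b * e_large - a * e_small) / (b - a)

def averaged_binding(e_cp, e_noncp):
    """Average counterpoise- and non-counterpoise-corrected binding
    energies, which the abstract reports converges faster than either."""
    return 0.5 * (e_cp + e_noncp)

# Hypothetical aug-cc-pVTZ (X=3) / aug-cc-pVQZ (X=4) correlation energies (hartree):
print(round(cbs_two_point(-0.250, 3, -0.260, 4), 4))  # -0.2673
```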
Sato, A.; Yomogida, K.
2014-12-01
The early warning system operated by the Japan Meteorological Agency (JMA) has been available to the public since October 2007. The present system is still not effective in cases where we cannot assume a nearly circular wavefront expanding from a source. We propose a new approach based on the extrapolation of the early observed wavefield alone, without estimating its epicenter. The idea is similar to the migration method in exploration seismology, but we use not only the information of the wavefield at an early stage (i.e., at time T2 in the figure) but also its normal derivatives (the difference between T1 and T2); that is, we utilize the apparent velocity and direction of early-stage wave propagation to predict the wavefield later (at T3 in the figure). For the extrapolation of the wavefield, we need a reliable Green's function from the observed point to a target point at which the wave arrives later. Since the complete 3-D wave propagation is extremely complex, particularly in and around Japan with its highly heterogeneous structures, we consider a phenomenological 2-D Green's function; that is, a wavefront propagates on the surface with a certain apparent velocity and direction of P wave. This apparent velocity and direction may vary significantly depending on, for example, event depth and the area of propagation, so we examined those of P waves propagating in Japan in various situations. For example, the velocity of shallow events in Hokkaido is 7.1 km/s while that in Nagano prefecture is about 5.5 km/s. In addition, the apparent velocity depends on event depth: 7.1 km/s for a depth of 10 km and 8.9 km/s for 100 km in Hokkaido. We also conducted f-k array analyses of adjacent five or six stations where we can accurately estimate the apparent velocity and direction of P wave. For deep events with relatively simple waveforms, they are easily obtained, but we may need site corrections to enhance correlations of waveforms among stations for shallow ones. In the above extrapolation scheme, we can
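The plane-wavefront advance described above can be sketched as follows (hypothetical station geometry; the 7.1 km/s value is the shallow-Hokkaido example from the abstract, and a locally planar wavefront is assumed):

```python
import math

def predicted_arrival(t_obs, x_obs, y_obs, x_tgt, y_tgt, v_app, azimuth_deg):
    """Predict the P-wave arrival time at a target point by advancing the
    observed wavefront along its apparent propagation direction (azimuth,
    degrees clockwise from north) at apparent velocity v_app (km/s)."""
    az = math.radians(azimuth_deg)
    # Distance component along the propagation direction (km); points
    # behind the wavefront get negative values (already passed).
    along = (x_tgt - x_obs) * math.sin(az) + (y_tgt - y_obs) * math.cos(az)
    return t_obs + along / v_app

# A wavefront moving due north at 7.1 km/s, observed at the origin at t = 10 s,
# reaches a station 14.2 km to the north two seconds later:
print(round(predicted_arrival(10.0, 0.0, 0.0, 0.0, 14.2, 7.1, 0.0), 2))  # 12.0
```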
Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc
2013-06-01
An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.
Removal of lipid artifacts in 1H spectroscopic imaging by data extrapolation.
Haupt, C I; Schuff, N; Weiner, M W; Maudsley, A A
1996-05-01
Proton MR spectroscopic imaging (MRSI) of human cerebral cortex is complicated by the presence of an intense signal from subcutaneous lipids, which, if not suppressed before Fourier reconstruction, causes ringing and signal contamination throughout the metabolite images as a result of limited k-space sampling. In this article, an improved reconstruction of the lipid region is obtained using the Papoulis-Gerchberg algorithm. This procedure makes use of the narrow-band-limited nature of the subcutaneous lipid signal to extrapolate to higher k-space values without alteration of the metabolite signal region. Using computer simulations and in vivo experimental studies, the implementation and performance of this algorithm were examined. This method was found to permit MRSI brain spectra to be obtained without applying any lipid suppression during data acquisition, at echo times of 50 ms and longer. When applied together with optimized acquisition methods, this provides an effective procedure for imaging metabolite distributions in cerebral cortical surface regions.
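The Papoulis-Gerchberg algorithm named above alternates between two constraints: the known spatial support of the band-limited signal and the measured k-space samples. A 1-D sketch (the paper works on 2-D MRSI data; the support mask and sampling pattern below are assumptions for the demo):

```python
import numpy as np

def papoulis_gerchberg(measured_k, k_mask, support_mask, n_iter=200):
    """1-D Papoulis-Gerchberg extrapolation sketch.

    measured_k  : k-space data, valid only where k_mask is True
    k_mask      : True at measured k-space positions
    support_mask: True where the object is known to be nonzero
    """
    k = np.where(k_mask, measured_k, 0.0)
    for _ in range(n_iter):
        img = np.fft.ifft(k)
        img = img * support_mask              # enforce known spatial support
        k = np.fft.fft(img)
        k = np.where(k_mask, measured_k, k)   # restore measured samples exactly
    return k
```

Because the object (here, the subcutaneous lipid layer) occupies a narrow spatial region, the iteration extrapolates to unmeasured high k-space values without touching the measured data.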
Energy Technology Data Exchange (ETDEWEB)
Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)
2005-04-01
Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Grum, M.; Rasmussen, Michael R.;
2011-01-01
Forecasting of flows, overflow volumes, water levels, etc. in drainage systems can be applied in real time control of drainage systems in the future climate in order to fully utilize system capacity and thus save possible construction costs. An online system for forecasting flows and water levels in a small urban catchment has been developed. The forecast is based on application of radar rainfall data, which by a correlation based technique, is extrapolated with a lead time up to two hours. The runoff forecast in the drainage system is based on a fully distributed MOUSE model which is auto-calibrated on flow measurements in order to produce the best possible forecast for the drainage system at all times. The system shows great potential for the implementation of real time control in drainage systems and forecasting flows and water levels.
Polanco, Carlos; Buhse, Thomas; Vizcaíno, Gloria; Picciotto, Jacobo Levy
2017-01-01
This paper addresses the polar profile of ancient proteins using a comparative study of amino acids found in 25 000 000-year-old shells described in Abelson's work. We simulated the polar profile with a computer platform that represented an evolutionary computational toy model that mimicked the generation of small proteins starting from a pool of monomeric amino acids and that included several dynamic properties, such as self-replication and fragmentation-recombination of the proteins. The simulations were taken up to 15 generations and produced a considerable number of proteins of 25 amino acids in length. The computational model included the amino acids found in the ancient shells, the thermal degradation factor, and the relative abundance of the amino acids observed in the Miller-Urey experimental simulation of the prebiotic amino acid formation. We found that the amino acid polar profiles of the ancient shells and those simulated and extrapolated from the Miller-Urey abundances are coincident.
Suppression of MRI Truncation Artifacts Using Total Variation Constrained Data Extrapolation
Directory of Open Access Journals (Sweden)
Kai Tobias Block
2008-01-01
The finite sampling of k-space in MRI causes spurious image artifacts, known as Gibbs ringing, which result from signal truncation at the border of k-space. The effect is especially visible for acquisitions at low resolution and commonly reduced by filtering at the expense of image blurring. The present work demonstrates that the simple assumption of a piecewise-constant object can be exploited to extrapolate the data in k-space beyond the measured part. The method allows for a significant reduction of truncation artifacts without compromising resolution. The assumption translates into a total variation minimization problem, which can be solved with a nonlinear optimization algorithm. In the presence of substantial noise, a modified approach offers edge-preserving denoising by allowing for slight deviations from the measured data in addition to supplementing data. The effectiveness of these methods is demonstrated with simulations as well as experimental data for a phantom and human brain in vivo.
Continuum extrapolation of finite temperature meson correlation functions in quenched lattice QCD
Francis, Anthony
2010-01-01
We explore the continuum limit $a\rightarrow 0$ of meson correlation functions at finite temperature. In detail we analyze finite volume and lattice cut-off effects in view of possible consequences for continuum physics. We perform calculations on quenched gauge configurations using the clover improved Wilson fermion action. We present and discuss simulations on isotropic $N_\sigma^3\times 16$ lattices with $N_\sigma=32,48,64,128$ and $128^3 \times N_\tau$ lattices with $N_\tau=16,24,32,48$, corresponding to lattice spacings in the range $0.01\,\mathrm{fm} \lesssim a \lesssim 0.031\,\mathrm{fm}$ at $T\simeq 1.45\,T_c$. Continuum limit extrapolations of vector meson and pseudoscalar correlators are performed, and their large distance expansion in terms of thermal moments is introduced. We discuss consequences of this analysis for the calculation of the electrical conductivity of the QGP at this temperature.
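A continuum extrapolation of this kind is, at its simplest, a linear fit in $a^2$ (the leading lattice artifact for an improved Wilson action) with the intercept taken as the continuum value. A sketch with invented numbers, not data from the paper:

```python
import numpy as np

# Illustrative lattice spacings (fm) and a made-up correlator ratio
# that carries O(a^2) cut-off effects.
a = np.array([0.031, 0.021, 0.015, 0.010])
G = np.array([1.1201, 1.0551, 1.0281, 1.0125])

# Fit G(a) = c0 + c1 * a^2 and extrapolate to a -> 0.
c1, c0 = np.polyfit(a ** 2, G, 1)
continuum_value = c0
```

With more than two spacings, the fit also exposes whether the data are actually in the $O(a^2)$ scaling window, which is the point of simulating several lattices.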
Extrapolation of lattice QCD results beyond the power-counting regime
Leinweber, D B; Young, R D
2005-01-01
Resummation of the chiral expansion is necessary to make accurate contact with current lattice simulation results of full QCD. Resummation techniques including relativistic formulations of chiral effective field theory and finite-range regularization (FRR) techniques are reviewed, with an emphasis on using lattice simulation results to constrain the parameters of the chiral expansion. We illustrate how the chiral extrapolation problem has been solved and use FRR techniques to identify the power-counting regime (PCR) of chiral perturbation theory. To fourth-order in the expansion at the 1% tolerance level, we find $0 \\le m_\\pi \\le 0.18$ GeV for the PCR, extending only a small distance beyond the physical pion mass.
Variance reduction technique in a beta radiation beam using an extrapolation chamber.
Polo, Ivón Oramas; Souza Santos, William; de Lara Antonio, Patrícia; Caldas, Linda V E
2017-10-01
This paper aims to show how the variance reduction technique "Geometry splitting/Russian roulette" improves the statistical error and reduces uncertainties in the determination of the absorbed dose rate in tissue using an extrapolation chamber for beta radiation. The results show that the use of this technique can increase the number of events in the chamber cavity, bringing the simulation results into closer agreement with the physical problem. There was good agreement among the experimental measurements, the manufacturer's certificate, and the simulated absorbed dose rate values and uncertainties. The absorbed dose rate variation coefficient using the variance reduction technique "Geometry splitting/Russian roulette" was 2.85%.
Prediction of long-term creep behaviour and lifetime of polystyrene by linear extrapolation
Institute of Scientific and Technical Information of China (English)
胡立江; 赵树山
2002-01-01
The universal creep function derived from the kinetic equations is successful in relating the creep (ε) to the aging time (ta), the coefficient of retardation time (β), and the intrinsic time (t0). The relation was used to treat the creep experimental data for polystyrene (PS) specimens which were aged at a given temperature for different (short-term) times and tested at a certain temperature under different stress levels. Unified master lines were then constructed from the treated data and curves according to the universal equation. The master lines can be used to predict the long-term creep behaviour and lifetime by extrapolating to a required ultimate strain. Verification of the results obtained with this method is shown as well.
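The lifetime prediction step above amounts to fitting the master line and solving for the time at which it reaches the ultimate strain. A minimal sketch assuming a master line that is linear in log time; all numbers are invented, not from the paper:

```python
import numpy as np

# Hypothetical short-term master-line data: strain (%) vs log10(time in hours).
log_t = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
strain = np.array([1.00, 1.21, 1.39, 1.61, 1.80])

# Fit the master line epsilon = a + b * log10(t).
b, a = np.polyfit(log_t, strain, 1)

# Extrapolate to an assumed required ultimate strain to get a lifetime.
eps_ult = 3.0                         # %, assumed failure criterion
lifetime_hours = 10 ** ((eps_ult - a) / b)
```

The short-term data end at 100 hours, yet the extrapolated lifetime is on the order of 10^5 hours, which is exactly the leverage (and the risk) of master-curve extrapolation.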
Prediction of long-term creep behavior and lifetime of PPC pipe materials by linear extrapolation
Institute of Scientific and Technical Information of China (English)
[No author listed]
2002-01-01
The universal creep equation relates creep behavior (ε/ε0) to aging time (ta), coefficient of retardation time (β), and intrinsic time (t0). The relation was used to treat the creep experimental data for pipe specimens of polypropylene block copolymer (PPC), which were aged for different days (short-term) and tested under different stress levels at a certain temperature. Then unified master lines were constructed with the treated data and curves according to the universal equation. The master straight lines can be used for extrapolation to predict the long-term creep behavior and lifetime of the pipe materials of PPC in the same way as for plate materials.
Top Background Extrapolation for $H \\to WW$ Searches at the LHC
Kauer, N
2004-01-01
A leading order (LO) analysis is presented that demonstrates that key top backgrounds to $H \to W^+W^- \to \ell^\pm\ell^\mp\,\slashed{p}_T$ decays in weak boson fusion (WBF) and gluon fusion (GF) at the CERN Large Hadron Collider can be extrapolated from experimental data with an accuracy of order 5% to 10%. If LO scale variation is accepted as a proxy for the theoretical error, parton level results indicate that the $t\bar{t}j$ background to the $H \to WW$ search in WBF can be determined with a theoretical error of about 5%, while the $t\bar{t}$ background to the $H \to WW$ search in GF can be determined with a theoretical error of better than 1%. Uncertainties in the parton distribution functions contribute an estimated 3% to 10% to the total error.
DEFF Research Database (Denmark)
Kissling, W. Daniel; Dalby, Lars; Fløjgaard, Camilla
2014-01-01
…the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals ("MammalDIET"). Diet information was digitized from two global… The success rate of correctly digitizing data was 94%, indicating that the consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet… We then grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results.
Bipolar spectral associative memories.
Spencer, R G
2001-01-01
Nonlinear spectral associative memories are proposed as quantized frequency domain formulations of nonlinear, recurrent associative memories in which volatile network attractors are instantiated by attractor waves. In contrast to conventional associative memories, attractors encoded in the frequency domain by convolution may be viewed as volatile online inputs, rather than nonvolatile, off-line parameters. Spectral memories hold several advantages over conventional associative memories, including decoder/attractor separability and linear scalability, which make them especially well suited for digital communications. Bit patterns may be transmitted over a noisy channel in a spectral attractor and recovered at the receiver by recurrent, spectral decoding. Massive nonlocal connectivity is realized virtually, maintaining high symbol-to-bit ratios while scaling linearly with pattern dimension. For n-bit patterns, autoassociative memories achieve the highest noise immunity, whereas heteroassociative memories offer the added flexibility of achieving various code rates, or degrees of extrinsic redundancy. Due to linear scalability, high noise immunity and use of conventional building blocks, spectral associative memories hold much promise for achieving robust communication systems. Simulations are provided showing bit error rates for various degrees of decoding time, computational oversampling, and signal-to-noise ratio.
Teutsch, J
2007-01-01
It is possible to enumerate all computer programs. In particular, for every partial computable function, there is a shortest program which computes that function. f-MIN is the set of indices for shortest programs. In 1972, Meyer showed that f-MIN is Turing equivalent to 0'', the halting set with halting set oracle. This paper generalizes the notion of shortest programs, and we use various measures from computability theory to describe the complexity of the resulting "spectral sets." We show that under certain Gödel numberings, the spectral sets are exactly the canonical sets 0', 0'', 0''', ... up to Turing equivalence. This is probably not true in general; however, we show that spectral sets always contain some useful information. We show that immunity, or "thinness," is a useful characteristic for distinguishing between spectral sets. In the final chapter, we construct a set which neither contains nor is disjoint from any infinite arithmetic set, yet it is 0-majorized and contains a natural spectral set. Thus ...
Parametric Explosion Spectral Model
Energy Technology Data Exchange (ETDEWEB)
Ford, S R; Walter, W R
2012-01-19
Small underground nuclear explosions need to be confidently detected, identified, and characterized in regions of the world where they have never before occurred. We develop a parametric model of the nuclear explosion seismic source spectrum, derived from regional phases, that is compatible with earthquake-based geometrical spreading and attenuation. Earthquake spectra are fit with a generalized version of the Brune spectrum, a three-parameter model that describes the long-period level, corner frequency, and spectral slope at high frequencies. Explosion spectra can be fit with similar spectral models whose parameters are then correlated with near-source geology and containment conditions. We observe a correlation of high gas porosity (low strength) with increased spectral slope. The relationship between the parametric equations and the geologic and containment conditions will assist in our physical understanding of the nuclear explosion source.
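The three-parameter model described above can be written down and fit directly. The sketch below evaluates a generalized Brune spectrum and recovers the corner frequency and high-frequency slope from a synthetic spectrum by grid search in log amplitude; the parameter values and the fitting procedure are assumptions for illustration, not the paper's method:

```python
import numpy as np

def brune_spectrum(f, omega0, fc, n):
    """Generalized Brune model: long-period level omega0, corner
    frequency fc, and high-frequency fall-off exponent n."""
    return omega0 / (1.0 + (f / fc) ** n)

# Synthetic "observed" spectrum with assumed parameters.
f = np.logspace(-1, 1.5, 200)
obs = brune_spectrum(f, omega0=2.0, fc=1.5, n=2.2)

# Grid search over (fc, n) in log-amplitude space; the long-period
# level is taken as known here for brevity.
best = min(((fc, n)
            for fc in np.linspace(0.5, 3.0, 26)
            for n in np.linspace(1.5, 3.0, 16)),
           key=lambda p: np.sum((np.log(obs) -
                                 np.log(brune_spectrum(f, 2.0, *p))) ** 2))
```

In practice the fit would run on instrument-corrected, attenuation-corrected regional-phase spectra, but the parameterization is the same.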
Photovoltaic spectral responsivity measurements
Energy Technology Data Exchange (ETDEWEB)
Emery, K.; Dunlavy, D.; Field, H.; Moriarty, T. [National Renewable Energy Lab., Golden, CO (United States)
1998-09-01
This paper discusses the various elemental random and nonrandom error sources in typical spectral responsivity measurement systems. The authors focus specifically on the filter and grating monochromator-based spectral responsivity measurement systems used by the Photovoltaic (PV) performance characterization team at NREL. A variety of subtle measurement errors can occur that arise from a finite photo-current response time, bandwidth of the monochromatic light, waveform of the monochromatic light, and spatial uniformity of the monochromatic and bias lights; the errors depend on the light source, PV technology, and measurement system. The quantum efficiency can be a function of the voltage bias, light bias level, and, for some structures, the spectral content of the bias light or location on the PV device. This paper compares the advantages and problems associated with semiconductor-detector-based calibrations and pyroelectric-detector-based calibrations. Different current-to-voltage conversion and ac photo-current detection strategies employed at NREL are compared and contrasted.
Gaiotto, Davide; Neitzke, Andrew
2012-01-01
We apply and illustrate the techniques of spectral networks in a large collection of A_{K-1} theories of class S, which we call "lifted A_1 theories." Our construction makes contact with Fock and Goncharov's work on higher Teichmuller theory. In particular we show that the Darboux coordinates on moduli spaces of flat connections which come from certain special spectral networks coincide with the Fock-Goncharov coordinates. We show, moreover, how these techniques can be used to study the BPS spectra of lifted A_1 theories. In particular, we determine the spectrum generators for all the lifts of a simple superconformal field theory.
Spectral library searching in proteomics.
Griss, Johannes
2016-03-01
Spectral library searching has become a mature method to identify tandem mass spectra in proteomics data analysis. This review provides a comprehensive overview of available spectral library search engines and highlights their distinct features. Additionally, resources providing spectral libraries are summarized and tools presented that extend experimental spectral libraries by simulating spectra. Finally, spectrum clustering algorithms are discussed that utilize the same spectrum-to-spectrum matching algorithms as spectral library search engines and allow novel methods to analyse proteomics data.
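The spectrum-to-spectrum matching step shared by library search engines and clustering algorithms is, at its core, a normalized dot product between a query spectrum and each library spectrum. A simplified sketch (real engines add peak-intensity weighting, m/z binning, and score calibration, none of which is modeled here):

```python
import numpy as np

def best_library_match(query, library):
    """Rank library spectra against a query by cosine similarity.

    query   : 1-D intensity vector (binned spectrum)
    library : 2-D array, one binned library spectrum per row
    Returns the index of the best match and all scores.
    """
    q = query / np.linalg.norm(query)
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    scores = L @ q                      # cosine score per library entry
    return int(np.argmax(scores)), scores
```

Because the same scoring function drives both search and clustering, improvements to it propagate to both applications, which is the point the review makes.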
Antonio, Patrícia L.; Xavier, Marcos; Caldas, Linda V. E.
2014-11-01
The Calibration Laboratory (LCI) at the Instituto de Pesquisas Energéticas e Nucleares (IPEN) is going to establish a Böhm extrapolation chamber as a primary standard system for the dosimetry and calibration of beta radiation sources and detectors. This chamber was already tested in beta radiation beams with an aluminized Mylar entrance window, and it has now been characterized with an original Hostaphan entrance window. A comparison between the results of the extrapolation chamber with the two entrance windows was performed. The results showed that this extrapolation chamber is equally effective in beta radiation fields as a primary standard system with both entrance windows, showing that either of them may be utilized.
Energy Technology Data Exchange (ETDEWEB)
Rothe, R.E.
1997-12-01
Sixty-nine critical configurations of up to 186 kg of uranium are reported from very early experiments (1960s) performed at the Rocky Flats Critical Mass Laboratory near Denver, Colorado. Enriched (93%) uranium metal spherical and hemispherical configurations were studied. All were thick-walled shells except for two solid hemispheres. Experiments were essentially unreflected, or they included central and/or external regions of mild steel. No liquids were involved. Critical parameters are derived from extrapolations beyond subcritical data. Extrapolations, rather than more precise interpolations between slightly supercritical and slightly subcritical configurations, were necessary because experiments involved manually assembled configurations. Many extrapolations were quite long, but the general lack of curvature in the subcritical region lends credibility to their validity. In addition to delayed critical parameters, a procedure is offered which might permit the determination of prompt critical parameters as well for the same cases. This conjectured procedure is not based on any strong physical arguments.
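Extrapolation beyond subcritical data is classically done with the inverse-multiplication method: the measured neutron multiplication M grows without bound at critical, so 1/M falls roughly linearly to zero, and the critical mass is read off from the intercept of a straight-line fit. A sketch with invented approach-to-critical data (not the report's measurements):

```python
import numpy as np

# Hypothetical subcritical approach data: fissile mass vs measured
# neutron multiplication M for manually assembled configurations.
mass_kg = np.array([120.0, 135.0, 150.0, 165.0])
M = np.array([2.0, 2.6, 3.7, 6.5])

# 1/M extrapolates linearly to zero at the critical mass.
inv_M = 1.0 / M
slope, intercept = np.polyfit(mass_kg, inv_M, 1)
critical_mass_kg = -intercept / slope     # where the fit crosses 1/M = 0
```

The "lack of curvature" the report cites is exactly what makes such long extrapolations of the 1/M line defensible.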
Tang, Lin
2011-01-01
In this paper, we generalize the $A_\infty$ extrapolation theorem in \cite{cmp} and the $A_p$ extrapolation theorem of Rubio de Francia to Schrödinger settings. In addition, we also establish the weighted vector-valued inequalities for Schrödinger-type maximal operators by using weights belonging to $A_p^{\rho,\theta}$, which includes $A_p$. As applications, we establish the weighted vector-valued inequalities for some Schrödinger-type operators and pseudo-differential operators.
Schunck, Franz E
2008-01-01
We reconsider the nonlinear second order Abel equation of Stewart and Lyth, which follows from a nonlinear second order slow-roll approximation. We find a new eigenvalue spectrum in the blue regime. Some of the discrete values of the spectral index $n_s$ have consistent fits to the cumulative COBE data as well as to recent ground-based CMB experiments.
Large Spectral Library Problem
Energy Technology Data Exchange (ETDEWEB)
Chilton, Lawrence K.; Walsh, Stephen J.
2008-10-03
Hyperspectral imaging produces a spectrum or vector at each image pixel. These spectra can be used to identify materials present in the image. In some cases, spectral libraries representing atmospheric chemicals or ground materials are available. The challenge is to determine if any of the library chemicals or materials exist in the hyperspectral image. The number of spectra in these libraries can be very large, far exceeding the number of spectral channels collected in the field. Suppose an image pixel contains a mixture of p spectra from the library. Is it possible to uniquely identify these p spectra? We address this question in this paper and refer to it as the Large Spectral Library (LSL) problem. We show how to determine if unique identification is possible for any given library. We also show that if p is small compared to the number of spectral channels, it is very likely that unique identification is possible. We show that unique identification becomes less likely as p increases.
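One standard linear-algebra criterion for this kind of uniqueness (not necessarily the paper's exact test) is that every set of 2p library columns must be linearly independent: if two different p-sparse mixtures produced the same pixel spectrum, their difference would be a dependence among at most 2p columns. A brute-force sketch:

```python
import numpy as np
from itertools import combinations

def unique_p_identifiable(library, p):
    """Check whether every mixture of p library spectra is uniquely
    identifiable, by verifying that all subsets of 2p columns of the
    channels-by-spectra library matrix are linearly independent.
    Exponential in library size; a sketch, not a practical test."""
    n_chan, n_spec = library.shape
    if 2 * p > n_chan:
        return False                      # rank can never reach 2p
    size = min(2 * p, n_spec)
    for cols in combinations(range(n_spec), size):
        if np.linalg.matrix_rank(library[:, cols]) < size:
            return False
    return True
```

This also makes the paper's two qualitative claims concrete: generic libraries with many channels pass easily for small p, and the condition gets harder to satisfy as p grows.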
Energy Technology Data Exchange (ETDEWEB)
Mocsy, Agnes [Department of Mathematics and Science, Pratt Institute, Brooklyn, NY 11205 (United States)
2009-11-01
In this talk I summarize the progress achieved in recent years on the understanding of quarkonium properties at finite temperature. Theoretical studies from potential models, lattice QCD, and effective field theories are discussed. I also highlight a bridge from spectral functions to experiment.
Spectral representation of fingerprints
Xu, Haiyun; Bazen, Asker M.; Veldhuis, Raymond N.J.; Kevenaar, Tom A.M.; Akkermans, Anton H.M.
2007-01-01
Most fingerprint recognition systems are based on the use of a minutiae set, which is an unordered collection of minutiae locations and directions suffering from various deformations such as translation, rotation and scaling. The spectral minutiae representation introduced in this paper is a novel m
Improving Predictions with Reliable Extrapolation Schemes and Better Understanding of Factorization
More, Sushant N.
New insights into the inter-nucleon interactions, developments in many-body technology, and the surge in computational capabilities have led to phenomenal progress in low-energy nuclear physics in the past few years. Nonetheless, many calculations still lack a robust uncertainty quantification which is essential for making reliable predictions. In this work we investigate two distinct sources of uncertainty and develop ways to account for them. Harmonic oscillator basis expansions are widely used in ab-initio nuclear structure calculations. Finite computational resources usually require that the basis be truncated before observables are fully converged, necessitating reliable extrapolation schemes. It has been demonstrated recently that errors introduced from basis truncation can be taken into account by focusing on the infrared and ultraviolet cutoffs induced by a truncated basis. We show that a finite oscillator basis effectively imposes a hard-wall boundary condition in coordinate space. We accurately determine the position of the hard-wall as a function of oscillator space parameters, derive infrared extrapolation formulas for the energy and other observables, and discuss the extension of this approach to higher angular momentum and to other localized bases. We exploit the duality of the harmonic oscillator to account for the errors introduced by a finite ultraviolet cutoff. Nucleon knockout reactions have been widely used to study and understand nuclear properties. Such an analysis implicitly assumes that the effects of the probe can be separated from the physics of the target nucleus. This factorization between nuclear structure and reaction components depends on the renormalization scale and scheme, and has not been well understood. But it is potentially critical for interpreting experiments and for extracting process-independent nuclear properties. We use a class of unitary transformations called the similarity renormalization group (SRG) transformations to
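Infrared extrapolation formulas of the kind derived above typically take the form E(L) = E_inf + A exp(-2 k L), where L is the effective hard-wall box size induced by the truncated oscillator basis. For three energies at equally spaced L, the limit can be eliminated algebraically (a Shanks-type transformation); the parameter values below are invented for the demo:

```python
import numpy as np

def ir_extrapolate(E1, E2, E3):
    """Eliminate E_inf from E(L) = E_inf + A * exp(-2*k*L) sampled at
    three equally spaced box sizes L (exact for that model)."""
    return (E1 * E3 - E2 ** 2) / (E1 + E3 - 2.0 * E2)

# Synthetic convergence data with assumed parameters (not from the thesis).
E_inf, A, k = -8.482, 25.0, 0.45          # MeV, MeV, fm^-1 (illustrative)
L = np.array([8.0, 9.0, 10.0])            # fm, equally spaced box sizes
E = E_inf + A * np.exp(-2.0 * k * L)
```

In practice one fits E_inf, A, and k to many truncated-basis calculations rather than solving three points exactly, but the algebra shows why three converging energies already pin down the limit for this functional form.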
Spectral-collocation variational integrators
Li, Yiqun; Wu, Boying; Leok, Melvin
2017-03-01
Spectral methods are a popular choice for constructing numerical approximations for smooth problems, as they can achieve geometric rates of convergence and have a relatively small memory footprint. In this paper, we introduce a general framework to convert a spectral-collocation method into a shooting-based variational integrator for Hamiltonian systems. We also compare the proposed spectral-collocation variational integrators to spectral-collocation methods and Galerkin spectral variational integrators in terms of their ability to reproduce accurate trajectories in configuration and phase space, their ability to conserve momentum and energy, as well as the relative computational efficiency of these methods when applied to some classical Hamiltonian systems. In particular, we note that spectrally-accurate variational integrators, such as the Galerkin spectral variational integrators and the spectral-collocation variational integrators, combine the computational efficiency of spectral methods together with the geometric structure-preserving and long-time structural stability properties of symplectic integrators.
Wavelength conversion based spectral imaging
DEFF Research Database (Denmark)
Dam, Jeppe Seidelin
There has been a strong, application driven development of Si-based cameras and spectrometers for imaging and spectral analysis of light in the visible and near infrared spectral range. This has resulted in very efficient devices, with high quantum efficiency, good signal to noise ratio and high resolution for this spectral region. Today, an increasing number of applications exist outside the spectral region covered by Si-based devices, e.g. within cleantech, medical or food imaging. We present a technology based on wavelength conversion which will extend the spectral coverage of state of the art visible or near infrared cameras and spectrometers to include other spectral regions of interest.
Spatial extrapolation of light use efficiency model parameters to predict gross primary production
Directory of Open Access Journals (Sweden)
Karsten Schulz
2011-12-01
To capture the spatial and temporal variability of the gross primary production as a key component of the global carbon cycle, the light use efficiency modeling approach in combination with remote sensing data has shown to be well suited. Typically, the model parameters, such as the maximum light use efficiency, are either set to a universal constant or to land class dependent values stored in look-up tables. In this study, we employ the machine learning technique support vector regression to explicitly relate the model parameters of a light use efficiency model calibrated at several FLUXNET sites to site-specific characteristics obtained by meteorological measurements, ecological estimations and remote sensing data. A feature selection algorithm extracts the relevant site characteristics in a cross-validation, and leads to an individual set of characteristic attributes for each parameter. With this set of attributes, the model parameters can be estimated at sites where a parameter calibration is not possible due to the absence of eddy covariance flux measurement data. This will finally allow a spatially continuous model application. The performance of the spatial extrapolation scheme is evaluated with a cross-validation approach, which shows the methodology to be well suited to recapture the variability of gross primary production across the study sites.
Mass, Measurement, Materials, and Mathematical Modeling: The Nuts and Bolts of Extrapolation
Directory of Open Access Journals (Sweden)
Scott A Sinex
2011-12-01
A simple activity is described which is appropriate for any class dealing with measurement. It introduces students to the important scientific processes of mathematical modeling and online collaboration. Students, working in groups, determine the mass of a bolt indirectly by extrapolation: they mass the bolt with one to five nuts on it and determine the equation of the line, the y-intercept being the mass of the bolt. Students gain experience with using a balance, graphing data, and analyzing results using algebraic skills. They calculate percent error after measuring the bolt's mass directly and can compare this with the error limits from the least squares fit. Groups enter data into a web-based form and the data is examined by the class using Google Docs in a collaborative manner. After entering data in Google Docs, the students use an interactive Excel spreadsheet to compare their results to the best-fit line obtained by linear regression (pre-built into the spreadsheet for novices). In the spreadsheet, they further explore the model to gain an understanding and examine the influence of scatter (error in the data) and material density.
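The activity's extrapolation reduces to a least-squares line through (number of nuts, total mass), with the intercept giving the bolt's mass and the slope the mass of one nut. A sketch with made-up balance readings:

```python
import numpy as np

# Hypothetical balance readings: total mass of one bolt plus n nuts.
n_nuts = np.array([1, 2, 3, 4, 5])
total_g = np.array([15.1, 19.0, 23.2, 27.1, 31.0])   # grams (illustrative)

# Fit total = nut_mass * n + bolt_mass; extrapolate to zero nuts.
slope, intercept = np.polyfit(n_nuts, total_g, 1)
bolt_mass_g = intercept
nut_mass_g = slope
```

Comparing `bolt_mass_g` with a direct measurement of the bolt gives the percent error the students compute, and the fit residuals give the error limits mentioned above.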
Octet baryon masses and sigma terms from an SU(3) chiral extrapolation
Energy Technology Data Exchange (ETDEWEB)
Young, Ross; Thomas, Anthony
2009-01-01
We analyze the consequences of the remarkable new results for octet baryon masses calculated in 2+1-flavour lattice QCD using a low-order expansion about the SU(3) chiral limit. We demonstrate that, even though the simulation results are clearly beyond the power-counting regime, the description of the lattice results by a low-order expansion can be significantly improved by allowing the regularisation scale of the effective field theory to be determined by the lattice data itself. The model dependence of our analysis is demonstrated to be small compared with the present statistical precision. In addition to the extrapolation of the absolute values of the baryon masses, this analysis provides a method to solve the difficult problem of fine-tuning the strange-quark mass. We also report a determination of the sigma terms for all of the octet baryons, including an accurate value of the pion-nucleon sigma term and the first determination of the strangeness sigma term based on 2+1-flavour l
The risk of extrapolation in neuroanatomy: the case of the mammalian vomeronasal system
Directory of Open Access Journals (Sweden)
Ignacio Salazar
2009-10-01
Full Text Available The sense of smell plays a crucial role in mammalian social and sexual behaviour, identification of food, and detection of predators. Nevertheless, mammals vary in their olfactory ability. One reason for this concerns the degree of development of their pars basalis rhinencephali, an anatomical feature that has been considered in classifying this group of animals as macrosmatic, microsmatic or anosmatic. In mammals, different structures are involved in detecting odours: the main olfactory system, the vomeronasal system (VNS), and two subsystems, namely the ganglion of Grüneberg and the septal organ. Here, we review and summarise some aspects of the comparative anatomy of the VNS and its putative relationship to other olfactory structures. Even in the macrosmatic group, morphological diversity is an important characteristic of the VNS, specifically of the vomeronasal organ and the accessory olfactory bulb. We conclude that it is a big mistake to extrapolate anatomical data of the VNS from species to species, even in the case of relatively close evolutionary proximity between them. We propose to study in depth the VNS of mammals other than rodents as a way to clarify its exact role in olfaction. Our experience in this field leads us to hypothesise that the VNS, considered for all mammalian species, could be a system undergoing involution or regression, and could serve as one more integrated olfactory subsystem.
Evidence for Solar Tether-cutting Magnetic Reconnection from Coronal Field Extrapolations
Liu, Chang; Lee, Jeongwoo; Wiegelmann, Thomas; Moore, Ronald L; Wang, Haimin
2013-01-01
Magnetic reconnection is one of the primary mechanisms for triggering solar eruptive events, but direct observation of its rapid process has remained a challenge. In this Letter we present, using a nonlinear force-free field (NLFFF) extrapolation technique, a visualization of field line connectivity changes resulting from tether-cutting reconnection over about 30 minutes during the 2011 February 13 M6.6 flare in NOAA AR 11158. Evidence for the tether-cutting reconnection was first collected through multiwavelength observations and then by the analysis of the field lines traced from positions of four conspicuous flare 1700 Å footpoints observed at the event onset. Right before the flare, the four footpoints are located very close to the regions of local maxima of magnetic twist index. Especially, the field lines from the inner two footpoints form two strongly twisted flux bundles (up to ~1.2 turns), which shear past each other and reach out close to the outer two footpoints, respectively. Immediately after the fl...
Yang, X; Zhou, Y-F; Yu, Y; Zhao, D-H; Shi, W; Fang, B-H; Liu, Y-H
2015-02-01
A multi-compartment physiologically based pharmacokinetic (PBPK) model to describe the disposition of cyadox (CYX) and its metabolite quinoxaline-2-carboxylic acid (QCA) after a single oral administration was developed in rats (200 mg/kg b.w. of CYX). Considering interspecies differences in physiology and physiochemistry, the model efficiency was validated against a pharmacokinetic data set in swine. The model included six compartments that were blood, muscle, liver, kidney, adipose, and a combined compartment for the rest of tissues. The model was parameterized using rat plasma and tissue concentration data that were generated from this study. Model simulations were achieved using a commercially available software program (acslX Libero, version 3.0.2.1). Results supported the validity of the model with simulated tissue concentrations within the range of the observations. The correlation coefficients of the predicted and experimentally determined values for plasma, liver, kidney, adipose, and muscles in rats were 0.98, 0.98, 0.98, 0.99, and 0.95, respectively. The rat model parameters were then extrapolated to pigs to estimate QCA disposition in tissues and validated by tissue concentration of QCA in swine. The correlation coefficients between the predicted and observed values were over 0.90. This model could provide a foundation for developing more reliable pig models once more data are available.
Cui, Jie; Li, Zhiying; Krems, Roman V
2015-10-21
We consider the problem of extrapolating the collision properties of a large polyatomic molecule A-H to make predictions of the dynamical properties for another molecule related to A-H by the substitution of the H atom with a small molecular group X, without explicitly computing the potential energy surface for A-X. We assume that the effect of the -H → -X substitution is embodied in a multidimensional function with unknown parameters characterizing the change of the potential energy surface. We propose to apply the Gaussian Process model to determine the dependence of the dynamical observables on the unknown parameters. This can be used to produce an interval of the observable values which corresponds to physical variations of the potential parameters. We show that the Gaussian Process model combined with classical trajectory calculations can be used to obtain the dependence of the cross sections for collisions of C6H5CN with He on the unknown parameters describing the interaction of the He atom with the CN fragment of the molecule. The unknown parameters are then varied within physically reasonable ranges to produce a prediction uncertainty of the cross sections. The results are normalized to the cross sections for He - C6H6 collisions obtained from quantum scattering calculations in order to provide a prediction interval of the thermally averaged cross sections for collisions of C6H5CN with He.
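A minimal sketch of the Gaussian Process idea described above, with an RBF kernel coded by hand and invented cross-section values standing in for classical-trajectory results; the kernel choice, hyperparameters, and all numbers are illustrative assumptions, not the authors' setup.

```python
import numpy as np

def rbf(a, b, length=0.2, sigma=2.0):
    """Squared-exponential (RBF) kernel matrix between 1-D inputs a and b."""
    d = a[:, None] - b[None, :]
    return sigma**2 * np.exp(-0.5 * (d / length) ** 2)

# Hypothetical training data: cross sections "computed" at a few values of
# an unknown potential-scaling parameter.
x_train = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
y_train = np.array([102.0, 99.5, 98.0, 97.2, 97.0])

noise = 1e-8
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))

# GP posterior mean and standard deviation over a grid of parameter values.
x_test = np.linspace(0.75, 1.25, 11)
K_s = rbf(x_test, x_train)
y0 = y_train.mean()                       # centre the data before regression
alpha = np.linalg.solve(K, y_train - y0)
mean = y0 + K_s @ alpha
cov = rbf(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

# Sweeping the parameter over its physically reasonable range then yields a
# prediction band, e.g. [mean - 2*std, mean + 2*std], for the observable.
print(mean.round(2))
```

At a training input the posterior mean reproduces the computed value and the uncertainty collapses; between and beyond the training inputs the band widens, which is exactly the prediction-interval behaviour the abstract exploits.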
Montiel, Ariadna; Sendra, Irene; Escamilla-Rivera, Celia; Salzano, Vincenzo
2014-01-01
In this work we present a nonparametric approach, which works on minimal assumptions, to reconstruct the cosmic expansion of the Universe. We propose to combine a locally weighted scatterplot smoothing method and a simulation-extrapolation method. The first one (Loess) is a nonparametric approach that allows one to obtain smoothed curves without prior knowledge of the functional relationship between variables or of the cosmological quantities. The second one (Simex) takes into account the effect of measurement errors on a variable via a simulation process. For the reconstructions we use as raw data the Union2.1 Type Ia Supernovae compilation, as well as recent Hubble parameter measurements. This work aims to illustrate the approach, which turns out to be a self-sufficient technique in the sense that we do not have to choose anything by hand. We examine the details of the method, among them the amount of observational data needed to perform the locally weighted fit which will define the robustness of our reconstructio...
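The locally weighted (Loess) half of the approach can be sketched as follows; the tricube weighting and neighbourhood-fraction parameter are conventional Loess choices, and the data are invented, not the Union2.1 compilation.

```python
import numpy as np

def loess(x, y, span=0.5):
    """Minimal locally weighted linear regression (Loess) sketch."""
    n = len(x)
    k = max(3, int(np.ceil(span * n)))          # neighbourhood size
    smoothed = np.empty(n)
    for i, x0 in enumerate(x):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]                 # k nearest neighbours of x0
        w = (1.0 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        A = np.stack([np.ones(k), x[idx]], axis=1)
        # Weighted least-squares line through the neighbourhood.
        beta = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y[idx]))
        smoothed[i] = beta[0] + beta[1] * x0
    return smoothed

# Hypothetical noisy observations on a smooth underlying trend.
x = np.linspace(0.0, 1.0, 40)
rng = np.random.default_rng(1)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0.0, 0.02, x.size)
print(loess(x, y).round(2))
```

Because each point is fitted only against its neighbours, no global functional form is ever assumed, which is the "minimal assumptions" property the abstract emphasises; Simex would then be layered on top to propagate the measurement errors.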
Extrapolation of Galactic Dust Emission at 100 Microns to CMBR Frequencies Using FIRAS
Finkbeiner, D; Schlegel, D J; Finkbeiner, Douglas P.; Davis, Marc; Schlegel, David J.
1999-01-01
We present predicted full-sky maps of submillimeter and microwave emission from the diffuse interstellar dust in the Galaxy. These maps are extrapolated from the 100 micron emission and 100/240 micron flux ratio maps that Schlegel, Finkbeiner, & Davis (1998; SFD98) generated from IRAS and COBE/DIRBE data. Results are presented for a number of physically plausible emissivity models. We find that no power law emissivity function fits the FIRAS data from 200 - 2100 GHz. In this paper we provide a formalism for a multi-component model for the dust emission. A two-component model with a mixture of silicate and carbon-dominated grains (motivated by Pollack et al., 1994) provides a fit to an accuracy of about 15% to all the FIRAS data over the entire high-latitude sky. Small systematic differences are found between the atomic and molecular phases of the ISM. Our predictions for the thermal (vibrational) emission from Galactic dust are made at the DIRBE resolution of 40' or at the higher resolution of 6.1 arcmin ...
Caution warranted in extrapolating from Boston Naming Test item gradation construct.
Beattey, Robert A; Murphy, Hilary; Cornwell, Melinda; Braun, Thomas; Stein, Victoria; Goldstein, Martin; Bender, Heidi Allison
2017-01-01
The Boston Naming Test (BNT) was designed to present items in order of difficulty based on word frequency. Changes in word frequencies over time, however, would frustrate extrapolation in clinical and research settings based on the theoretical construct because performance on the BNT might reflect changes in ecological frequency of the test items, rather than performance across items of increasing difficulty. This study identifies the ecological frequency of BNT items at the time of publication using the American Heritage Word Frequency Book and determines changes in frequency over time based on the frequency distribution of BNT items across a current corpus, the Corpus of Contemporary American English. Findings reveal an uneven distribution of BNT items across 2 corpora and instances of negligible differentiation in relative word frequency across test items. As BNT items are not presented in order from least to most frequent, clinicians and researchers should exercise caution in relying on the BNT as presenting items in increasing order of difficulty. A method is proposed for distributing confrontation-naming items to be explicitly measured against test items that are normally distributed across the corpus of a given language.
Chenglin, L.; Charpentier, R.R.
2010-01-01
The U.S. Geological Survey procedure for the estimation of the general form of the parent distribution requires that the parameters of the log-geometric distribution be calculated and analyzed for the sensitivity of these parameters to different conditions. In this study, we derive the shape factor of a log-geometric distribution from the ratio of frequencies between adjacent bins. The shape factor has a log straight-line relationship with the ratio of frequencies. Additionally, the calculation equations of a ratio of the mean size to the lower size-class boundary are deduced. For a specific log-geometric distribution, we find that the ratio of the mean size to the lower size-class boundary is the same. We apply our analysis to simulations based on oil and gas pool distributions from four petroleum systems of Alberta, Canada and four generated distributions. Each petroleum system in Alberta has a different shape factor. Generally, the shape factors in the four petroleum systems stabilize with the increase of discovered pool numbers. For a log-geometric distribution, the shape factor becomes stable when discovered pool numbers exceed 50 and the shape factor is influenced by the exploration efficiency when the exploration efficiency is less than 1. The simulation results show that calculated shape factors increase with those of the parent distributions, and undiscovered oil and gas resources estimated through the log-geometric distribution extrapolation are smaller than the actual values. © 2010 International Association for Mathematical Geology.
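The adjacent-bin relationship underlying the shape factor can be illustrated numerically; the counts are hypothetical, and defining the shape factor simply as the log of the frequency ratio is an illustrative assumption, not the USGS formula.

```python
import math

# Hypothetical counts of pools in successive (doubling) size classes.
freq = [64, 32, 16, 8, 4]

# For a log-geometric distribution the ratio of adjacent bin frequencies
# is constant; estimate it, and take its log as an illustrative shape factor
# (the abstract's "log straight-line relationship").
ratios = [b / a for a, b in zip(freq, freq[1:])]
r = sum(ratios) / len(ratios)
shape = math.log(r)
print(r, shape)
```

With real discovery data the ratios fluctuate bin to bin, which is why the abstract reports that the estimated shape factor only stabilizes once the number of discovered pools exceeds about 50.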
Comparison of Coronal Extrapolation Methods for Cycle 24 Using HMI Data
Arden, William M; Sun, Xudong; Zhao, Xuepu
2016-01-01
Two extrapolation models of the solar coronal magnetic field are compared using magnetogram data from the SDO/HMI instrument. The two models, a horizontal current-current sheet-source surface (HCCSSS) model and a potential field-source surface (PFSS) model, differ in their treatment of coronal currents. Each model has its own critical variable, respectively the radius of a cusp surface and a source surface, and it is found that adjusting these heights over the period studied allows a better fit between the models and the solar open flux at 1 AU as calculated from the Interplanetary Magnetic Field (IMF). The HCCSSS model provides the better fit for the overall period from 2010 November to 2015 May as well as for two subsets of the period - the minimum/rising part of the solar cycle, and the recently-identified peak in the IMF from mid-2014 to mid-2015 just after solar maximum. It is found that a HCCSSS cusp surface height of 1.7 Rsun provides the best fit to the IMF for the overall period, while 1.7 & 1.9 Rsu...
Latychevskaia, Tatiana
2015-01-01
In coherent diffractive imaging (CDI) the resolution with which the reconstructed object can be obtained is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by post-extrapolation of coherent diffraction images, such as diffraction patterns or holograms. We prove that a diffraction pattern can unambiguously be extrapolated from just a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. While there could in principle be other methods to achieve extrapolation, we devote our discussion to employing phase retrieval methods and demonstrate their limits. We present two numerical studies, namely the extrapolation of diffraction patterns of non-binary objects and of phase objects, together with a discussion of the optimal extrapolation procedure.
Macsween, A
2001-09-01
While the accepted measure of aerobic power remains the VO2max, this test is extremely demanding even for athletes. There are serious practical and ethical concerns in attempting such testing in non-athletic or patient populations. An alternative method of measuring aerobic power in such populations is required. A limited body of work exists evaluating the accuracy of the Astrand-Ryhming nomogram and linear extrapolation of the heart rate/oxygen uptake plot. Issues exist in terms of both equipment employed and sample numbers. Twenty-five normal subjects (mean age 28.6, range 22-50) completed 52 trials (Bruce treadmill protocol) meeting stringent criteria for VO2max performance. Respiratory gases were measured with a portable gas analyser on a five-sec sample period. The data were analysed to allow comparison of the reliability and validity of linear extrapolations to three estimates of heart rate maximum with the Astrand nomogram prediction. Extrapolation was preferable, yielding an intraclass correlation coefficient (ICC) of 0.9433, comparable to the 0.9443 of the observed VO2max, and a bias of -1.1 ml·min⁻¹·kg⁻¹, representing a 2.19 percent underestimate. This study provides empirical evidence that extrapolation of submaximal data can be employed with confidence for both clinical monitoring and research purposes. With the use of portable equipment and submaximal testing, the scope for future research in numerous populations and non-laboratory environments is considerably increased.
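The linear-extrapolation alternative to the nomogram can be sketched as follows; the submaximal stage data and the 220 − age estimate of maximum heart rate are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical submaximal stages: heart rate (bpm) vs oxygen uptake (ml/min/kg).
hr  = np.array([110.0, 125.0, 140.0, 155.0])
vo2 = np.array([18.0, 24.0, 30.0, 36.0])

# Fit the heart rate / oxygen uptake line.
slope, intercept = np.polyfit(hr, vo2, 1)

# Extrapolate the line to an estimate of maximum heart rate
# (here the common 220 - age formula; age 30 → 190 bpm).
hr_max = 220 - 30
vo2max_est = slope * hr_max + intercept
print(f"estimated VO2max ≈ {vo2max_est:.1f} ml/min/kg")
```

The study's point is that this extrapolated estimate tracks the directly observed VO2max closely (ICC 0.9433 vs 0.9443), so the maximal test can be avoided in vulnerable populations.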
Mueller, David S.
2013-01-01
Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity
Campbell, Bruce A.; Hawke, B. Ray; Morgan, Gareth A.; Carter, Lynn M.; Campbell, Donald B.; Nolan, Michael
2014-01-01
Radar images at 70 cm wavelength show 4-5 dB variations in backscatter strength within regions of relatively uniform spectral reflectance properties in central and northern Mare Serenitatis, delineating features suggesting lava flow margins, channels, and superposition relationships. These backscatter differences are much less pronounced at 12.6 cm wavelength, consistent with a large component of the 70 cm echo arising from the rough or blocky transition zone between the mare regolith and the intact bedrock. Such deep probing is possible because the ilmenite content, which modulates microwave losses, of central Mare Serenitatis is generally low (2-3% by weight). Modeling of the radar returns from a buried interface shows that an average regolith thickness of 10 m could lead to the observed shifts in 70 cm echo power with a change in TiO2 content from 2% to 3%. This thickness is consistent with estimates of regolith depth (10-15 m) based on the smallest diameter for which fresh craters have obvious blocky ejecta. The 70 cm backscatter differences provide a view of mare flow-unit boundaries, channels, and lobes unseen by other remote sensing methods. A localized pyroclastic deposit associated with Rima Calippus is identified based on its low radar echo strength. Radar mapping also improves delineation of units for crater age dating and highlights a 250 km long, east-west trending feature in northern Mare Serenitatis that we suggest is a large graben flooded by late-stage mare flows.
Context Dependent Spectral Unmixing
2014-08-01
The approach adapts the unmixing process to different regions of the spectral space by identifying multiple sets of endmembers, addressing a challenge shared by most unmixing methods; results are reported for the SME, SMAE, and AME variants on the Usgs1C2M3 data across 25 runs and at all noise levels.
Nel, P.; Lynch, P. A.; Laird, J. S.; Casey, H. M.; Goodall, L. J.; Ryan, C. G.; Sloggett, R. J.
2010-07-01
Artwork and precious artefacts demand non-destructive analytical methodologies for art authentication, attribution and provenance assessment. However, structural and chemical characterisation represents a challenging problem with existing analytical techniques. A recent authentication case based on an Australian Aboriginal artwork indicates that there is substantial benefit in the ability of particle induced X-ray emission (PIXE), coupled with dynamic analysis (DA), to characterise pigments through trace element analysis. However, this information alone is insufficient for characterising the mineralogical residence of trace elements. For this reason a combined methodology based on PIXE and X-ray diffraction (XRD) has been performed to explore the benefits of a more comprehensive data set. Many Aboriginal paintings and artefacts are predominantly earth pigment based. This makes these cultural heritage materials an ideal case study for testing the above combined methodological approach on earth-based pigments. Samples of synthetic and naturally occurring earth-based pigments were obtained from a range of sources, which include Indigenous communities within Australia's Kimberley region. PIXE analyses using a 3 MeV focussed proton beam at the CSIRO nuclear microprobe, as well as laboratory-based XRD, were carried out on the above samples. Elemental signature spectra as well as mineralogical data were used to assess issues regarding synthetic and naturally occurring earth pigments with the ultimate aim of establishing provenance.
[Visible-NIR spectral feature of citrus greening disease].
Li, Xiu-hua; Li, Min-zan; Won Suk, Lee; Reza, Ehsani; Ashish, Ratn Mishra
2014-06-01
Citrus greening (Huanglongbing, or HLB) is a devastating disease caused by Candidatus Liberibacter, which is vectored by psyllids. There is currently no cure, and the disease poses a huge threat to the citrus industry around the world. In order to diagnose, assess and further control this disease, it is of great importance to first find a quick and effective way to detect it. A spectroscopy method, widely considered fast and nondestructive, was adopted here to conduct a preliminary exploration of disease characteristics. In order to explore the spectral differences between the healthy and HLB-infected leaves and canopies, this study measured the visible-NIR spectral reflectance of leaves and canopies under lab and field conditions, respectively. The original spectral data were first preprocessed with smoothing (or moving average) and cluster average procedures, and then the first derivatives were calculated to determine the red edge position (REP). In order to solve the multi-peak phenomenon problem, two interpolation methods (three-point Lagrangian interpolation and four-point linear extrapolation) were adopted to calculate the REP for each sample. The results showed that there were obvious differences in the visible-NIR spectral reflectance between the healthy and HLB-infected classes. Compared with the healthy reflectance, the HLB reflectance was higher in the visible bands because of the yellowish symptoms on the infected leaves, and lower in the NIR bands because the disease blocks water transport to the leaves. But the feature in the NIR bands was easily affected by environmental factors such as light, background, etc. The REP was also a potential indicator to distinguish the two classes. The average REP shifted slowly toward the red bands as the infection level increased. The gap between the average REPs of the healthy and HLB classes reached a maximum of 20 nm. Even in the dataset with relatively lower variation, the classification
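A linear-extrapolation REP estimate of the kind mentioned above fits one line to the far-red flank and one to the NIR flank of the first-derivative spectrum and takes the wavelength of their intersection; the band positions and derivative values below are hypothetical, chosen only to land the REP in the usual 700-740 nm red-edge range.

```python
def line(p1, p2):
    """Slope and intercept of the line through two (wavelength, value) points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    return m, y1 - m * x1

# Illustrative first-derivative reflectance values (per nm) at two far-red
# and two NIR wavelengths (nm) straddling the red edge.
m1, b1 = line((680.0, 0.002), (694.0, 0.010))   # far-red flank line
m2, b2 = line((732.0, 0.012), (760.0, 0.004))   # NIR flank line

# The two lines, extrapolated toward each other, intersect at the REP.
rep = (b2 - b1) / (m1 - m2)
print(f"REP ≈ {rep:.1f} nm")
```

Using the intersection of two extrapolated lines rather than the raw derivative maximum is what sidesteps the multi-peak problem: the estimate no longer jumps between competing local maxima.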
Mangrove litter fall: Extrapolation from traps to a large tropical macrotidal harbour
Metcalfe, Kristin N.; Franklin, Donald C.; McGuinness, Keith A.
2011-11-01
Mangrove litter is a major source of organic matter for detrital food chains in many tropical coastal ecosystems, but scant attention has been paid to the substantial challenges in sampling and extrapolation of rates of litter fall. The challenges arise due to within-stand heterogeneity including incomplete canopy cover, and canopy that is below the high tide mark. We sampled litter monthly for three years at 35 sites across eight mapped communities in the macrotidal Darwin Harbour, northern Australia. Totals were adjusted for mean community canopy cover and the occurrence of canopy below the high tide mark. The mangroves of Darwin Harbour generate an estimated average of 5.0 t ha⁻¹ yr⁻¹ of litter. This amount would have been overestimated by 32% had we not corrected for limited canopy cover and underestimated by 11% had we not corrected for foliage that is below the high tide mark. Had we made neither correction, we would have overestimated litter fall by 17%. Among communities, rates varied 2.6-fold per unit area of canopy and 3.9-fold per unit area of community. Seaward fringe mangroves were the most productive per unit of canopy area but the canopy was relatively open; Tidal creek forest was the most productive per unit area of community. Litter fall varied 1.1-fold among years and 2.0-fold among months, though communities exhibited a range of seasonalities. Our study may be the most extensively stratified and sampled evaluation of mangrove litter fall in a tropical estuary. We believe our study is also the first such assessment to explicitly deal with canopy discontinuities and demonstrates that failure to do so can result in considerable overestimation of mangrove productivity.
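The two corrections can be illustrated with hypothetical numbers patterned on the reported totals; none of the factors below are taken from the paper, they are chosen only so the corrected figure lands near the reported 5.0 t ha⁻¹ yr⁻¹.

```python
# Trap-based litter fall per unit of canopy area (t/ha/yr) - hypothetical.
per_canopy = 6.6

# Correction 1: only a fraction of the community area is under canopy,
# so uncorrected trap rates overestimate the community-level total.
canopy_cover = 0.68          # hypothetical mean canopy-cover fraction

# Correction 2: canopy below the high-tide mark is missed by the traps,
# so the cover-corrected total must be scaled back up.
below_tide_factor = 1.11     # hypothetical scale-up (~11%)

corrected = per_canopy * canopy_cover * below_tide_factor
print(f"community litter fall ≈ {corrected:.2f} t/ha/yr")
```

The two corrections pull in opposite directions, which is why skipping both gave only a 17% net overestimate even though skipping the cover correction alone inflated the total by 32%.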
Measurement of absorbed dose with a bone-equivalent extrapolation chamber.
DeBlois, François; Abdel-Rahman, Wamied; Seuntjens, Jan P; Podgorsak, Ervin B
2002-03-01
A hybrid phantom-embedded extrapolation chamber (PEEC) made of Solid Water and bone-equivalent material was used for determining absorbed dose in a bone-equivalent phantom irradiated with clinical radiation beams (cobalt-60 gamma rays; 6 and 18 MV x rays; and 9 and 15 MeV electrons). The dose was determined with the Spencer-Attix cavity theory, using ionization gradient measurements and an indirect determination of the chamber air-mass through measurements of chamber capacitance. The collected charge was corrected for ionic recombination and diffusion in the chamber air volume following the standard two-voltage technique. Due to the hybrid chamber design, correction factors accounting for scatter deficit and electrode composition were determined and applied in the dose equation to obtain absorbed dose in bone for the equivalent homogeneous bone phantom. Correction factors for graphite electrodes were calculated with Monte Carlo techniques and the calculated results were verified through relative air cavity dose measurements for three different polarizing electrode materials: graphite, steel, and brass in conjunction with a graphite collecting electrode. Scatter deficit, due mainly to loss of lateral scatter in the hybrid chamber, reduces the dose to the air cavity in the hybrid PEEC in comparison with full bone PEEC by 0.7% to approximately 2% depending on beam quality and energy. In megavoltage photon and electron beams, graphite electrodes do not affect the dose measurement in the Solid Water PEEC but decrease the cavity dose by up to 5% in the bone-equivalent PEEC even for very thin graphite electrodes (<0.0025 cm). In conjunction with appropriate correction factors determined with Monte Carlo techniques, the uncalibrated hybrid PEEC can be used for measuring absorbed dose in bone material to within 2% for high-energy photon and electron beams.
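The standard two-voltage recombination correction mentioned above can be sketched for continuous beams (the AAPM TG-51 form of the correction); the readings are hypothetical.

```python
def p_ion_continuous(v1, v2, m1, m2):
    """Two-voltage ion-recombination correction for continuous beams.

    v1, v2: polarizing voltages (v1 > v2, typically v1 = 2 * v2);
    m1, m2: corresponding collected-charge readings.
    """
    r = v1 / v2
    return (r**2 - 1.0) / (r**2 - m1 / m2)

# Example: readings at 300 V and 150 V (hypothetical charges, nC).
p = p_ion_continuous(300.0, 150.0, 20.00, 19.90)
print(round(p, 4))
```

The measured charge is multiplied by this factor (here about 1.002) to recover the charge that would have been collected with complete ion collection.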
Birgand, F.; Etheridge, J. R.; Burchell, M. R.
2013-12-01
Tidal marshes are among the most dynamic aquatic systems in the world. While astronomical and wind driven tides are the major driver to displace water volumes, rainfall events and evapotranspiration move the overall balance towards water export or import, respectively. Until now, only glimpses of the associated biogeochemical functioning could be obtained, usually at the scale of one or several tidal cycles, because there was no obvious method to obtain long term water quality data at a high temporal frequency. We have successfully managed, using UV-Vis spectrophotometers in the field, to obtain water quality and flow data on a 15-min frequency for over 20 months in a restored brackish marsh in North Carolina. This marsh was designed to intercept water generated by subsurface drainage of adjacent agricultural land before discharge to the nearby estuary. It is particularly tempting in tidal systems, where tides may look very similar from one to the next, to extrapolate results obtained possibly over several days or weeks to a 'seasonal biogeochemical functioning'. The lessons learned from high frequency data at the tidal scale are fascinating, but in the longer term, we have learned that a few, inherently rare rainfall events drove the overall nutrient balance in the marsh. Continuous water quality monitoring is thus essential for two reasons: 1) to observe the short term dynamics, as they are the key to unveil possibly misunderstood biogeochemical processes, and 2) to capture the rare yet essential events which drive the system's response. However, continuous water quality monitoring on a long term basis in harsh coastal environments is not without challenges.
The cerebellum and visual perceptual learning: evidence from a motion extrapolation task.
Deluca, Cristina; Golzar, Ashkan; Santandrea, Elisa; Lo Gerfo, Emanuele; Eštočinová, Jana; Moretto, Giuseppe; Fiaschi, Antonio; Panzeri, Marta; Mariotti, Caterina; Tinazzi, Michele; Chelazzi, Leonardo
2014-09-01
Visual perceptual learning is widely assumed to reflect plastic changes occurring along the cerebro-cortical visual pathways, including at the earliest stages of processing, though increasing evidence indicates that higher-level brain areas are also involved. Here we addressed the possibility that the cerebellum plays an important role in visual perceptual learning. Within the realm of motor control, the cerebellum supports learning of new skills and recalibration of motor commands when movement execution is consistently perturbed (adaptation). Growing evidence indicates that the cerebellum is also involved in cognition and mediates forms of cognitive learning. Therefore, the obvious question arises whether the cerebellum might play a similar role in learning and adaptation within the perceptual domain. We explored a possible deficit in visual perceptual learning (and adaptation) in patients with cerebellar damage using variants of a novel motion-extrapolation psychophysical paradigm. Compared to their age- and gender-matched controls, patients with focal damage to the posterior (but not the anterior) cerebellum showed strongly diminished learning, in terms of both rate and amount of improvement over time. Consistent with a double-dissociation pattern, patients with focal damage to the anterior cerebellum instead showed more severe clinical motor deficits, indicative of a distinct role of the anterior cerebellum in the motor domain. The collected evidence demonstrates that a pure form of slow-incremental visual perceptual learning is crucially dependent on the intact cerebellum, bearing out the notion that the human cerebellum acts as a learning device for motor, cognitive and perceptual functions. We interpret the deficit in terms of an inability to fine-tune predictive models of the incoming flow of visual perceptual input over time. Moreover, our results suggest a strong dissociation between the role of different portions of the cerebellum in motor versus
Song, Yang; Hamtaei, Ehsan; Sethi, Sean K; Yang, Guang; Xie, Haibin; Mark Haacke, E
2017-09-01
To introduce a new approach to reconstruct high definition vascular images using COnstrained Data Extrapolation (CODE) and evaluate its capability in estimating vessel area and stenosis. CODE is based on the constraint that the full width at half maximum of a vessel can be accurately estimated and, since it represents the best estimate for the width of the object, higher k-space data can be generated from this information. To demonstrate the potential of extracting high definition vessel edges using low resolution data, both simulated and human data were analyzed to better visualize the vessels and to quantify both area and stenosis measurements. The results from CODE using one-fourth of the fully sampled k-space data were compared with a compressed sensing (CS) reconstruction approach using the same total amount of data but spread out between the center of k-space and the outer portions of the original k-space to accelerate data acquisition by a factor of four. For a sufficiently high signal-to-noise ratio (SNR) such as 16 (8), we found that objects as small as 3 voxels in the 25% under-sampled data (6 voxels when zero-filled) could be used for CODE and CS to provide an estimate of area, with CODE running 200 (30) times faster than CS in the simulated (human) data. CODE was capable of producing sharp sub-voxel edges and accurately estimating stenosis to within 5% for clinically relevant studies of vessels with a width of at least 3 pixels in the low resolution images.
Spectral implementation of full waveform inversion based on reflections
Wu, Zedong
2014-01-01
Using the reflection imaging process as a source to model reflections for full waveform inversion (FWI), referred to as reflection FWI (RFWI), allows us to update the background component of the model and avoid using the relatively costly migration velocity analysis (MVA), which usually relies on extended images. However, RFWI requires a good image to represent the current reflectivity, as well as some effort to obtain good smooth gradients. We develop a spectral implementation of RFWI in which the wavefield extrapolations and gradient evaluation are performed in the wavenumber domain, obtaining clean, dispersion-free and fast extrapolations. The gradient, in this case, yields three terms, two of which provide us with each side of the rabbit-ear kernel, and the third, often ignored, provides a normalization of the reflectivity within the kernel, which can be used to obtain a reflectivity-free background update. Since the image is imperfect (it is an adjoint, not an inverse), an optimization process for the third term's scaling is implemented to achieve the smoothest gradient update. A rare application of RFWI on the reflectivity-infested Marmousi model shows some of the potential of the approach.
Spectral signatures of chirality
DEFF Research Database (Denmark)
Pedersen, Jesper Goor; Mortensen, Asger
2009-01-01
We present a new way of measuring chirality, via the spectral shift of photonic band gaps in one-dimensional structures. We derive an explicit mapping of the problem of oblique incidence of circularly polarized light on a chiral one-dimensional photonic crystal with negligible index contrast...... to the formally equivalent problem of linearly polarized light incident on-axis on a non-chiral structure with index contrast. We derive analytical expressions for the first-order shifts of the band gaps for negligible index contrast. These are modified to give good approximations to the band gap shifts also...
Indian Academy of Sciences (India)
Minfeng Gu; Y. L. Ai
2011-03-01
The optical variability of 29 flat spectrum radio quasars (FSRQs) in the SDSS Stripe 82 region is investigated using DR7 released multi-epoch data. All FSRQs show variations, with overall amplitudes ranging from 0.24 mag to 3.46 mag in different sources. About half of the FSRQs show a bluer-when-brighter trend, which is commonly observed in blazars. However, only one source shows a redder-when-brighter trend, which implies that this behaviour is rare in FSRQs. In this source, thermal emission may be responsible for the spectral behaviour.
Spectrally encoded confocal microscopy
Energy Technology Data Exchange (ETDEWEB)
Tearney, G.J.; Webb, R.H.; Bouma, B.E. [Wellman Laboratories of Photomedicine, Massachusetts General Hospital, 50 Blossom Street, BAR 703, Boston, Massachusetts 02114 (United States)
1998-08-01
An endoscope-compatible, submicrometer-resolution scanning confocal microscopy imaging system is presented. This approach, spectrally encoded confocal microscopy (SECM), uses a quasi-monochromatic light source and a transmission diffraction grating to detect the reflectivity simultaneously at multiple points along a transverse line within the sample. Since this method does not require fast spatial scanning within the probe, the equipment can be miniaturized and incorporated into a catheter or endoscope. Confocal images of an electron microscope grid were acquired with SECM to demonstrate the feasibility of this technique. © 1998 Optical Society of America
Iterative reconstruction of images from incomplete spectral data
Rhebergen, Jan B.; van den Berg, Peter M.; Habashy, Tarek M.
1997-06-01
In various branches of engineering and science, one is confronted with measurements resulting in incomplete spectral data. The problem of the reconstruction of an image from such a data set can be formulated in terms of an integral equation of the first kind. Consequently, this equation can be converted into an equivalent integral equation of the second kind which can be solved by a Neumann-type iterative method. It is shown that this Neumann expansion is an error-reducing method and that it is equivalent to the Papoulis-Gerchberg algorithm for band-limited signal extrapolation. The integral equation can also be solved by employing a conjugate gradient iterative scheme. Again, convergence of this scheme is demonstrated. Finally, a number of illustrative numerical examples are presented and discussed.
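The Papoulis-Gerchberg equivalence noted above can be made concrete: the algorithm alternates between projecting onto the set of band-limited signals in the frequency domain and restoring the known samples in the signal domain. A minimal sketch under hypothetical choices of signal, gap, and band limit (not the authors' implementation):

```python
import numpy as np

def papoulis_gerchberg(known, mask, band, n_iter=500):
    """Band-limited extrapolation: alternately zero the out-of-band
    spectrum and restore the measured samples (Papoulis-Gerchberg)."""
    x = np.where(mask, known, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0                 # enforce the band limit
        x = np.fft.ifft(X).real
        x[mask] = known[mask]          # restore the measured samples
    return x

# demo: fill a short gap in an exactly band-limited signal
N = 256
t = np.arange(N)
true = np.cos(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)
mask = np.ones(N, dtype=bool)
mask[120:136] = False                           # 16 missing samples
band = np.abs(np.fft.fftfreq(N)) <= 6.0 / N     # assumed known band limit
rec = papoulis_gerchberg(true, mask, band)
gap_error = np.max(np.abs(rec[~mask] - true[~mask]))
```

Because each iteration ends by restoring the measured samples, the iterate always honors the data while the spectral projection gradually fills the gap; this is the error-reducing behavior the abstract refers to.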
Energy Technology Data Exchange (ETDEWEB)
Silva, Eric A.B. da; Caldas, Linda V.E., E-mail: ebrito@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-10-26
The extrapolation chamber is an ionization chamber used for detecting low-energy radiation and can be used as a standard instrument for beta radiation beams. The main characteristic of this type of ionization chamber is its variable sensitive volume. This paper presents a characterization study of a commercial PTW extrapolation chamber in the energy interval of the conventional radiodiagnostic qualities.
Martín-Jiménez, Tomás; Baynes, Ronald E; Craigmill, Arthur; Riviere, Jim E
2002-08-01
The extralabel use of drugs can be defined as the use of drugs in a manner inconsistent with their FDA-approved labeling. The passage of the Animal Medicinal Drug Use Clarification Act (AMDUCA) in 1994 and its implementation by the FDA Center for Veterinary Medicine in 1996 have allowed food animal veterinarians to use drugs legally in an extralabel manner, as long as an appropriate withdrawal period is established. The present study introduces and validates, with simulated and experimental data, the Extrapolated Withdrawal-Period Estimator (EWE) Algorithm, a procedure aimed at predicting extralabel withdrawal intervals (WDIs) based on the label and pharmacokinetic literature data contained in the Food Animal Residue Avoidance Databank (FARAD). This is the first attempt at consistently obtaining WDI estimates that encompass a reasonable degree of statistical soundness. Data on the determination of withdrawal times after the extralabel use of the antibiotic oxytetracycline were obtained both with simulated disposition data and from the literature. A withdrawal interval was computed using the EWE Algorithm for an extralabel dose of 25 mg/kg (simulation study) and for a dose of 40 mg/kg (literature data). These estimates were compared with the withdrawal times computed with the simulated data and with the literature data, respectively. The EWE estimate of the WDI for a simulated extralabel dose of 25 mg/kg was 39 days; the withdrawal time (WDT) obtained for this dose in a tissue depletion study was also 39 days. The EWE estimate of the WDI for an extralabel intramuscular dose of 40 mg/kg in cattle, based on the kinetic data contained in the FARAD database, was 48 days; the WDT experimentally obtained for similar use of this drug was 49 days. The EWE Algorithm can obtain WDI estimates that encompass the same degree of statistical soundness as the WDT estimates, provided that the assumptions of the approved dosage regimen hold for the extralabel dosage regimen.
DEFF Research Database (Denmark)
Thorndahl, Søren Liedtke; Rasmussen, Michael R.
2013-01-01
Model based short-term forecasting of urban storm water runoff can be applied in realtime control of drainage systems in order to optimize system capacity during rain and minimize combined sewer overflows, improve wastewater treatment or activate alarms if local flooding is impending. A novel...... online system, which forecasts flows and water levels in real-time with inputs from extrapolated radar rainfall data, has been developed. The fully distributed urban drainage model includes auto-calibration using online in-sewer measurements which is seen to improve forecast skills significantly....... The radar rainfall extrapolation (nowcast) limits the lead time of the system to two hours. In this paper, the model set-up is tested on a small urban catchment for a period of 1.5 years. The 50 largest events are presented....
Monte Carlo based approach to the LS–NaI 4πβ–γ anticoincidence extrapolation and uncertainty.
Fitzgerald, R
2016-03-01
The 4πβ–γ anticoincidence method is used for the primary standardization of β−, β+, electron capture (EC), α, and mixed-mode radionuclides. Efficiency extrapolation using one or more γ ray coincidence gates is typically carried out by a low-order polynomial fit. The approach presented here is to use a Geant4-based Monte Carlo simulation of the detector system to analyze the efficiency extrapolation. New code was developed to account for detector resolution, direct γ ray interaction with the PMT, and implementation of experimental β-decay shape factors. The simulation was tuned to 57Co and 60Co data, then tested with 99mTc data, and used in measurements of 18F, 129I, and 124I. The analysis method described here offers a more realistic activity value and uncertainty than those indicated from a least-squares fit alone.
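The conventional low-order polynomial efficiency extrapolation that the Monte Carlo analysis refines can be sketched in a few lines: the observed counting rate is fit as a polynomial in an inefficiency parameter and extrapolated to 100% beta efficiency. The counting rates, response shape, and efficiency range below are hypothetical:

```python
import numpy as np

# hypothetical 4πβ-γ anticoincidence data: beta counting rate versus the
# inefficiency parameter x = (1 - eps)/eps, varied via the gamma gate
A_true = 1000.0                      # activity in s^-1 (what we want back)
x = np.linspace(0.05, 0.4, 8)        # experimentally accessible range
rate = A_true * (1.0 - 0.30 * x + 0.05 * x**2)   # assumed smooth response

# low-order polynomial fit, extrapolated to x = 0 (100% beta efficiency)
coef = np.polyfit(x, rate, 2)
A_est = np.polyval(coef, 0.0)
```

With real data the response is not exactly polynomial; that model dependence is precisely what the Geant4-based simulation described in the abstract is meant to capture.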
Lee, Jung-Won; Choi, Jeung-Yoon; Kang, Hong-Goo
2012-02-01
Knowledge-based speech recognition systems extract acoustic cues from the signal to identify speech characteristics. For channel-deteriorated telephone speech, acoustic cues, especially those for stop consonant place, are expected to be degraded or absent. To investigate the use of knowledge-based methods in degraded environments, feature extrapolation of acoustic-phonetic features based on Gaussian mixture models is examined. This process is applied to a stop place detection module that uses burst release and vowel onset cues for consonant-vowel tokens of English. Results show that classification performance is enhanced in telephone channel-degraded speech, with extrapolated acoustic-phonetic features reaching or exceeding performance using estimated Mel-frequency cepstral coefficients (MFCCs). Results also show acoustic-phonetic features may be combined with MFCCs for best performance, suggesting these features provide information complementary to MFCCs.
Exl, Lukas; Mauser, Norbert J.; Schrefl, Thomas; Suess, Dieter
2017-10-01
A practical and efficient scheme for the higher order integration of the Landau-Lifschitz-Gilbert (LLG) equation is presented. The method is based on extrapolation of the two-step explicit midpoint rule and incorporates adaptive time step and order selection. We make use of a piecewise time-linear stray field approximation to reduce the necessary work per time step. The approximation to the interpolated operator is embedded into the extrapolation process to keep in step with the hierarchic order structure of the scheme. We verify the approach by means of numerical experiments on a standardized NIST problem and compare with a higher order embedded Runge-Kutta formula. The efficiency of the presented approach increases when the stray field computation takes a larger portion of the costs for the effective field evaluation.
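The backbone of such a scheme, the explicit midpoint rule with Gragg smoothing (whose error expands in even powers of the step size) combined with Aitken-Neville extrapolation in h², can be illustrated on a scalar ODE. This is only a schematic sketch for u' = f(t, u), omitting the stray-field machinery and the adaptive step/order control of the paper:

```python
import math

def midpoint_steps(f, t0, y0, h, n):
    """n sub-steps of the explicit midpoint rule with Gragg smoothing,
    giving an approximation whose error expands in even powers of h."""
    ym1 = y0
    y = y0 + h * f(t0, y0)                           # Euler starter
    for i in range(1, n):
        ym1, y = y, ym1 + 2.0 * h * f(t0 + i * h, y)
    return 0.5 * (y + ym1 + h * f(t0 + n * h, y))    # smoothing step

def extrapolated_step(f, t0, y0, H, subdivisions=(2, 4, 8)):
    """One macro step of size H: midpoint solutions with increasing sub-step
    counts, then Aitken-Neville extrapolation in h^2 toward h -> 0."""
    T = [midpoint_steps(f, t0, y0, H / n, n) for n in subdivisions]
    for k in range(1, len(subdivisions)):
        for j in range(len(subdivisions) - 1, k - 1, -1):
            r = (subdivisions[j] / subdivisions[j - k]) ** 2
            T[j] = T[j] + (T[j] - T[j - 1]) / (r - 1.0)
    return T[-1]

f = lambda t, y: -y                               # test equation u' = -u
y_extrap = extrapolated_step(f, 0.0, 1.0, 1.0)    # approximates exp(-1)
y_raw = midpoint_steps(f, 0.0, 1.0, 1.0 / 8, 8)   # unextrapolated, n = 8
```

Each extrapolation stage cancels the next even power of h, which is why a few cheap midpoint passes yield a high-order result.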
Wang, Z.; Kwok, KWH; Lui, GCS; Zhou, G; Lee, JS; Lam, MHW; Leung, KMY
2015-01-01
Due to a lack of saltwater toxicity data in tropical regions, toxicity data generated from temperate or cold water species endemic to North America and Europe are often adopted to derive water quality guidelines (WQG) for protecting tropical marine ecosystems. Given the differences in species composition and environmental attributes between tropical and temperate saltwater ecosystems, there are conceivable uncertainties in such ‘temperate-to-tropic’ extrapolations. This ...
Directory of Open Access Journals (Sweden)
Hyun Young Lee
2010-01-01
We analyze discontinuous Galerkin methods with penalty terms, namely, symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L2) error estimates of discontinuous Galerkin approximations in both the spatial and temporal directions.
Xia, Hong; Luo, Zhendong
2017-01-01
In this study, we establish a stabilized mixed finite element (MFE) reduced-order extrapolation (SMFEROE) model with very few unknowns for the two-dimensional (2D) unsteady conduction-convection problem via the proper orthogonal decomposition (POD) technique, analyze the existence, uniqueness, stability, and convergence of the SMFEROE solutions, and validate the correctness and dependability of the SMFEROE model by means of numerical simulations.
Fallou, Hélène; Cimetière, Nicolas; Giraudet, Sylvain; Wolbert, Dominique; Le Cloirec, Pierre
2016-01-15
Activated carbon fiber cloths (ACFC) have shown promising results when applied to water treatment, especially for removing organic micropollutants such as pharmaceutical compounds. Nevertheless, further investigations are required, especially at the trace concentrations found in current water treatment. Until now, most studies have been carried out at relatively high concentrations (mg L(-1)), since the experimental and analytical methodologies are more difficult and more expensive when dealing with lower concentrations (ng L(-1)). Therefore, the objective of this study was to validate an extrapolation procedure from high to low concentrations for four compounds (Carbamazepine, Diclofenac, Caffeine and Acetaminophen). For this purpose, the reliability of the usual adsorption isotherm models, when extrapolated from high (mg L(-1)) to low (ng L(-1)) concentrations, was assessed, as well as the influence of numerous error functions. Some isotherm models (Freundlich, Toth) and error functions (RSS, ARE) show weaknesses when used for adsorption isotherms at low concentrations. From these results, however, the pairing of the Langmuir-Freundlich isotherm model with Marquardt's percent standard of deviation emerged as the best combination, enabling the extrapolation of adsorption capacities over orders of magnitude.
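As a concrete illustration of the winning combination, the Langmuir-Freundlich (Sips) isotherm can be fit by minimizing Marquardt's relative-error criterion, which keeps low-concentration points from being swamped by high-concentration ones. The synthetic data and parameter values below are hypothetical, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import minimize

def langmuir_freundlich(Ce, qm, b, n):
    """Langmuir-Freundlich (Sips) isotherm: qe = qm*(b*Ce)^n / (1 + (b*Ce)^n)."""
    return qm * (b * Ce) ** n / (1.0 + (b * Ce) ** n)

def mpsd_objective(log_params, Ce, qe):
    """Sum of squared *relative* residuals, the core of Marquardt's percent
    standard of deviation; the log-parametrization keeps qm, b, n positive."""
    qm, b, n = np.exp(log_params)
    pred = langmuir_freundlich(Ce, qm, b, n)
    return np.sum(((qe - pred) / qe) ** 2)

# synthetic high-concentration (mg/L) isotherm data; parameters hypothetical
rng = np.random.default_rng(0)
Ce = np.logspace(0, 2, 12)                       # 1 to 100 mg/L
qe = langmuir_freundlich(Ce, 200.0, 0.05, 0.8)
qe *= 1.0 + 0.02 * rng.standard_normal(Ce.size)  # ~2% measurement noise

res = minimize(mpsd_objective, x0=np.log([150.0, 0.1, 1.0]),
               args=(Ce, qe), method="Nelder-Mead")
qm_fit, b_fit, n_fit = np.exp(res.x)
# extrapolate the fitted model toward trace (~ng/L, i.e. 1e-6 mg/L) levels
q_trace = langmuir_freundlich(1e-6, qm_fit, b_fit, n_fit)
```

The relative-error objective weights every decade of concentration equally, which is why this pairing extrapolates to trace levels better than absolute least squares.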
Energy Technology Data Exchange (ETDEWEB)
Reynaldo, S. R. [Development Centre of Nuclear Technology, Posgraduate Course in Science and Technology of Radiations, Minerals and Materials / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Benavente C, J. A.; Da Silva, T. A., E-mail: sirr@cdtn.br [Development Centre of Nuclear Technology / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2015-10-15
The Beta Secondary Standard 2 (BSS 2) provides beta radiation fields with certified values of absorbed dose to tissue and the derived operational radiation protection quantities. As part of quality assurance, metrology laboratories are required to verify the reliability of the BSS 2 system by performing additional verification measurements. In the CDTN Calibration Laboratory, the absorbed dose rates and their angular variation in the 90Sr/90Y and 85Kr beta radiation fields were studied. Measurements were done with a PTW model 23392 extrapolation chamber and with Gafchromic radiochromic films on a PMMA slab phantom. In comparison to the certificate values provided with the BSS 2, absorbed dose rates measured with the extrapolation chamber differed by -1.4 to 2.9% for the 90Sr/90Y and -0.3% for the 85Kr fields; their angular variation showed differences lower than 2% for incidence angles up to 40 degrees, reaching 11% for higher angles when compared to ISO values. Measurements with the radiochromic film showed an asymmetry of the radiation field caused by a misalignment. Differences between the angular variations of absorbed dose rates determined by the two dosimetry systems suggested that some correction factors for the extrapolation chamber, not considered here, should be determined. (Author)
Jiang, Chaowei
2015-01-01
In the solar corona, the magnetic flux rope is believed to be a fundamental structure that accounts for magnetic free energy storage and solar eruptions. Up to the present, the extrapolation of the magnetic field from boundary data has been the primary way to obtain fully three-dimensional magnetic information about the corona. As a result, the ability to reliably recover the coronal magnetic flux rope is important for coronal field extrapolation. In this paper, our coronal field extrapolation code (CESE-MHD-NLFFF, Jiang & Feng 2012) is examined with an analytical magnetic flux rope model proposed by Titov & Demoulin (1999), which consists of a bipolar magnetic configuration holding a semi-circular line-tied flux rope in force-free equilibrium. Using only the vector field at the bottom boundary as input, we test our code with the model in a representative range of parameter space and find that the model field is reconstructed with high accuracy. In particular, the magnetic topological interfaces formed between the flux rop...
Scott, Bradley J; Klein, Agnes V; Wang, Jian
2015-03-01
Monoclonal antibodies have become mainstays of treatment for many diseases. After more than a decade on the Canadian market, a number of authorized monoclonal antibody products are facing patent expiry. Given their success, most notably in the areas of oncology and autoimmune disease, pharmaceutical and biotechnology companies are eager to produce their own biosimilar versions and have begun manufacturing and testing for a variety of monoclonal antibody products. In October of 2013, the first biosimilar monoclonal antibody products were approved by the European Medicines Agency (Remsima™ and Inflectra™). These products were authorized by Health Canada shortly after; however, while the EMA allowed for extrapolation to all of the indications held by the reference product, Health Canada limited extrapolation to a subset of the indications held by the reference product, Remicade®. The purpose of this review is to discuss the Canadian regulatory framework for the authorization of biosimilar mAbs with specific discussion around the clinical requirements for establishing (bio)-similarity and to present the principles that are used in the clinical assessment of New Drug Submissions for intended biosimilar monoclonal antibodies. Health Canada's current views regarding indication extrapolation, product interchangeability, and post-market surveillance are discussed as well.
Ground state energy of the δ-Bose and Fermi gas at weak coupling from double extrapolation
Prolhac, Sylvain
2017-04-01
We consider the ground state energy of the Lieb-Liniger gas with δ interaction in the weak coupling regime γ → 0. For bosons with repulsive interaction, previous studies gave the expansion e_B(γ) ≃ γ − 4γ^(3/2)/(3π) + (1/6 − 1/π²)γ². Using a numerical solution of the Lieb-Liniger integral equation discretized with M points and finite strength γ of the interaction, we obtain very accurate numerics for the next orders after extrapolation on M and γ. The coefficient of γ^(5/2) in the expansion is found to be approximately equal to −0.001 587 699 865 505 944 989 29, accurate within all digits shown. This value is supported by a numerical solution of the Bethe equations with N particles, followed by extrapolation on N and γ. It was identified as (3ζ(3)/8 − 1/2)/π³ by G. Lang. The next two coefficients are also guessed from the numerics. For balanced spin-1/2 fermions with attractive interaction, the best result so far for the ground state energy has been e_F(γ) ≃ π²/12 − γ/2 + γ²/6. An analogous double extrapolation scheme leads to the value −ζ(3)/π⁴ for the coefficient of γ³.
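The double extrapolation used here, first removing the discretization (M or N) dependence and then the finite-coupling dependence, can be sketched on a toy quantity with a known limit. The functional form and the values below are hypothetical stand-ins, not the Lieb-Liniger data:

```python
import numpy as np

def extrapolate_to_zero(x, y, deg):
    """Fit y(x) by a low-order polynomial and evaluate the fit at x = 0,
    removing the leading finite-x corrections (Richardson-style)."""
    return np.polyval(np.polyfit(x, y, deg), 0.0)

# toy observable measured at finite discretization M and coupling g, whose
# exact M -> infinity, g -> 0 limit is 1/6 (functional form is hypothetical)
def measured(M, g):
    return 1.0 / 6.0 + 0.3 * g + 0.7 / M - 0.2 * g / M + 0.05 / M**2

gs = np.array([0.4, 0.2, 0.1, 0.05])
Ms = np.array([10.0, 20.0, 40.0, 80.0])

# step 1: extrapolate M -> infinity (polynomial in 1/M) at each coupling
at_inf = np.array([extrapolate_to_zero(1.0 / Ms, measured(Ms, g), 2)
                   for g in gs])
# step 2: extrapolate the M -> infinity values in the coupling, g -> 0
limit = extrapolate_to_zero(gs, at_inf, 2)
```

Doing the two limits in sequence, rather than jointly, keeps each fit one-dimensional and lets the polynomial degree be matched to the known form of the corrections in each variable.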
Eliav, Ephraim; Vilkas, Marius J; Ishikawa, Yasuyuki; Kaldor, Uzi
2005-06-08
The intermediate Hamiltonian (IH) coupled-cluster method makes possible the use of very large model spaces in coupled-cluster calculations without running into intruder states. This is achieved at the cost of approximating some of the IH matrix elements, which are not taken at their rigorous effective Hamiltonian (EH) value. The extrapolated intermediate Hamiltonian (XIH) approach proposed here uses a parametrized IH and extrapolates it to the full EH, with model spaces larger by several orders of magnitude than those possible in EH coupled-cluster methods. The flexibility and resistance to intruders of the IH approach are thus combined with the accuracy of full EH. Various extrapolation schemes are described. A pilot application to the electron affinities (EAs) of alkali atoms is presented, where converged EH results are obtained by XIH for model spaces of approximately 20,000 determinants; direct EH calculations converge only for a one-dimensional model space. Including quantum electrodynamic effects, the average XIH error for the EAs is 0.6 meV and the largest error is 1.6 meV. A new reference estimate for the EA of Fr is proposed at 486+/-2 meV.
DEFF Research Database (Denmark)
Basith, M. A.; Islam, M. A.; Ahmmad, Bashir
2017-01-01
A simple route to prepare Gd0.7Sr0.3MnO3 nanoparticles by ultrasonication of their bulk powder materials is presented in this article. For comparison, Gd0.7Sr0.3MnO3 nanoparticles are also prepared by ball milling. The prepared samples are characterized by X-ray diffraction (XRD), field emission...... of crystalline and amorphous phases. FESEM images demonstrate the formation of nanoparticles with average particle size in the range of 50–100 nm for both ultrasonication and 4 h (h) of ball milling. The bulk materials and nanoparticles synthesized by both ultrasonication and 4 h ball milling exhibit...... of the nanoparticles due to ball milling particularly for milling time exceeding 8 h. This investigation demonstrates the potential of ultrasonication as a simple route to prepare high crystalline rare-earth based manganite nanoparticles with improved control compared to the traditional ball milling technique....
Rectangular spectral collocation
Driscoll, Tobin A.
2015-02-06
Boundary conditions in spectral collocation methods are typically imposed by removing some rows of the discretized differential operator and replacing them with others that enforce the required conditions at the boundary. A new approach based upon resampling differentiated polynomials into a lower-degree subspace makes differentiation matrices, and operators built from them, rectangular without any row deletions. Then, boundary and interface conditions can be adjoined to yield a square system. The resulting method is both flexible and robust, and avoids ambiguities that arise when applying the classical row deletion method outside of two-point scalar boundary-value problems. The new method is the basis for ordinary differential equation solutions in Chebfun software, and is demonstrated for a variety of boundary-value, eigenvalue and time-dependent problems.
Spectral disentangling with Spectangular
Sablowski, Daniel P.; Weber, Michael
2017-01-01
The paper introduces the software Spectangular for spectral disentangling via singular value decomposition with global optimisation of the orbital parameters of the stellar system or radial velocities of the individual observations. We will describe the procedure and the different options implemented in our program. Furthermore, we will demonstrate the performance and the applicability using tests on artificial data. Additionally, we use high-resolution spectra of Capella to demonstrate the performance of our code on real-world data. The novelty of this package is the implemented global optimisation algorithm and the graphical user interface (GUI) for ease of use. We have implemented the code to tackle SB1 and SB2 systems with the option of also dealing with telluric (static) lines. Based in part on data obtained with the STELLA robotic telescope in Tenerife, an AIP facility jointly operated by AIP and IAC.
Spectral Classification Beyond M
Leggett, S K; Burgasser, A J; Jones, H R A; Marley, M S; Tsuji, T
2004-01-01
Significant populations of field L and T dwarfs are now known, and we anticipate the discovery of even cooler dwarfs by Spitzer and ground-based infrared surveys. However, as the number of known L and T dwarfs increases so does the range in their observational properties, and difficulties have arisen in interpreting the observations. Although modellers have made significant advances, the complexity of the very low temperature, high pressure, photospheres means that problems remain such as the treatment of grain condensation as well as incomplete and non-equilibrium molecular chemistry. Also, there are several parameters which control the observed spectral energy distribution - effective temperature, grain sedimentation efficiency, metallicity and gravity - and their effects are not well understood. In this paper, based on a splinter session, we discuss classification schemes for L and T dwarfs, their dependency on wavelength, and the effects of the parameters T_eff, f_sed, [m/H] and log g on optical and infra...
Spectral Animation Compression
Institute of Scientific and Technical Information of China (English)
Chao Wang; Yang Liu; Xiaohu Guo; Zichun Zhong; Binh Le; Zhigang Deng
2015-01-01
This paper presents a spectral approach to compress dynamic animation consisting of a sequence of homeomorphic manifold meshes. Our new approach directly compresses the field of deformation gradient defined on the surface mesh, by decomposing it into rigid-body motion (rotation) and non-rigid-body deformation (stretching) through polar decomposition. It is known that the rotation group has the algebraic topology of a 3D ring, which is different from other operations like stretching. Thus we compress these two groups separately, using the Manifold Harmonics Transform to drop their high-frequency details. Our experimental results show that the proposed method achieves a good balance between reconstruction quality and compression ratio. We compare our results quantitatively with other existing approaches to animation compression, using standard measurement criteria.
Spectral proper orthogonal decomposition
Sieber, Moritz; Paschereit, Christian Oliver
2015-01-01
The identification of coherent structures from experimental or numerical data is an essential task when conducting research in fluid dynamics. This typically involves the construction of an empirical mode base that appropriately captures the dominant flow structures. The most prominent candidates are the energy-ranked proper orthogonal decomposition (POD) and the frequency ranked Fourier decomposition and dynamic mode decomposition (DMD). However, these methods fail when the relevant coherent structures occur at low energies or at multiple frequencies, which is often the case. To overcome the deficit of these "rigid" approaches, we propose a new method termed Spectral Proper Orthogonal Decomposition (SPOD). It is based on classical POD and it can be applied to spatially and temporally resolved data. The new method involves an additional temporal constraint that enables a clear separation of phenomena that occur at multiple frequencies and energies. SPOD allows for a continuous shifting from the energetically ...
SPECTRAL ANALYSIS OF RADIOXENON
Energy Technology Data Exchange (ETDEWEB)
Cooper, Matthew W.; Bowyer, Ted W.; Hayes, James C.; Heimbigner, Tom R.; Hubbard, Charles W.; McIntyre, Justin I.; Schrom, Brian T.
2008-09-23
Monitoring changes in atmospheric radioxenon concentrations is a major tool in the detection of an underground nuclear explosion. Ground-based systems like the Automated Radioxenon Sampler/Analyzer (ARSA), the Swedish Unattended Noble gas Analyzer (SAUNA) and the Automatic portable radiometer of isotopes Xe (ARIX) can collect and detect several radioxenon isotopes by processing and transferring samples into a high-efficiency beta-gamma coincidence detector. The high-efficiency beta-gamma coincidence detector makes these systems highly sensitive to the radioxenon isotopes 133Xe, 131mXe, 133mXe and 135Xe. The standard analysis uses regions of interest (ROI) to determine the amount of a particular radioxenon isotope present. The ROI method relies on the peaks of interest falling within the energy limits of the ROI. Some potential problems inherent in this method are the reliance on stable detector gains and a fixed resolution for each energy peak. In addition, when a high-activity sample is measured there will be more interference among the ROIs, in particular within the 133Xe, 133mXe, and 131mXe regions. A solution to some of these problems can be obtained through spectral fitting of the data: fitting the peaks with known functions to determine the number of peaks and their relative positions and widths. With this information it is possible to determine which isotopes are present, and the area under each peak can then be used to determine an overall concentration for each isotope. Using the peak areas, several key detector characteristics can be determined: efficiency, energy calibration, energy resolution and ratios between interfering isotopes (radon daughters).
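The contrast between ROI sums and spectral fitting can be illustrated with two overlapping peaks: a fixed energy window both clips the tails of its own peak and picks up counts from its neighbour, while a peak-shape fit separates them. The peak shapes, positions, and amplitudes below are hypothetical, not ARSA/SAUNA data:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_peaks(x, a1, mu1, s1, a2, mu2, s2):
    """Two Gaussian peaks; the area of each is a_i * s_i * sqrt(2*pi)."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((x - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2)

# synthetic spectrum with two overlapping peaks plus measurement noise
x = np.linspace(0.0, 100.0, 501)
rng = np.random.default_rng(1)
y = two_peaks(x, 50.0, 30.0, 4.0, 20.0, 45.0, 5.0)
y += rng.normal(0.0, 0.5, x.size)

# spectral fitting: recover amplitudes, positions, widths, then peak areas
popt, _ = curve_fit(two_peaks, x, y, p0=[40, 28, 3, 15, 47, 4])
area1 = popt[0] * popt[2] * np.sqrt(2.0 * np.pi)

# fixed ROI sum around the first peak: clips its tails, catches neighbour
roi = (x >= 25) & (x <= 35)
roi_area = np.sum(y[roi]) * (x[1] - x[0])
```

Unlike the fixed window, the fit tolerates gain drift and resolution changes because the peak positions and widths are themselves free parameters.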
Patient-bounded extrapolation using low-dose priors for volume-of-interest imaging in C-arm CT
Energy Technology Data Exchange (ETDEWEB)
Xia, Y.; Maier, A.; Berger, M.; Hornegger, J. [Pattern Recognition Lab, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen 91058 (Germany); Bauer, S. [Siemens AG, Healthcare Sector, Forchheim 91301 (Germany)
2015-04-15
Purpose: Three-dimensional (3D) volume-of-interest (VOI) imaging with C-arm systems provides anatomical information in a predefined 3D target region at a considerably low x-ray dose. However, VOI imaging involves laterally truncated projections from which conventional reconstruction algorithms generally yield images with severe truncation artifacts. Heuristic-based extrapolation methods, e.g., water cylinder extrapolation, typically rely on techniques that complete the truncated data by means of a continuity assumption and thus appear to be ad-hoc. It is our goal to improve the image quality of VOI imaging by exploiting existing patient-specific prior information in the workflow. Methods: A necessary initial step prior to a 3D acquisition is to isocenter the patient with respect to the target to be scanned. To this end, low-dose fluoroscopic x-ray acquisitions are usually applied from anterior–posterior (AP) and medio-lateral (ML) views. Based on this, the patient is isocentered by repositioning the table. In this work, we present a patient-bounded extrapolation method that makes use of these noncollimated fluoroscopic images to improve image quality in 3D VOI reconstruction. The algorithm first extracts the 2D patient contours from the noncollimated AP and ML fluoroscopic images. These 2D contours are then combined to estimate a volumetric model of the patient. Forward-projecting the shape of the model at the eventually acquired C-arm rotation views gives the patient boundary information in the projection domain. In this manner, we are in a position to substantially improve image quality by enforcing the extrapolated line profiles to end at the known patient boundaries, derived from the 3D shape model estimate. Results: The proposed method was evaluated on eight clinical datasets with different degrees of truncation. The proposed algorithm achieved a relative root mean square error (rRMSE) of about 1.0% with respect to the reference reconstruction on
Energy Technology Data Exchange (ETDEWEB)
Croom, Edward L.; Shafer, Timothy J.; Evans, Marina V.; Mundy, William R.; Eklund, Chris R.; Johnstone, Andrew F.M.; Mack, Cina M.; Pegram, Rex A., E-mail: pegram.rex@epa.gov
2015-02-15
Approaches for extrapolating in vitro toxicity testing results for prediction of human in vivo outcomes are needed. The purpose of this case study was to employ in vitro toxicokinetics and PBPK modeling to perform in vitro to in vivo extrapolation (IVIVE) of lindane neurotoxicity. Lindane cell and media concentrations in vitro, together with in vitro concentration-response data for lindane effects on neuronal network firing rates, were compared to in vivo data and model simulations as an exercise in extrapolation for chemical-induced neurotoxicity in rodents and humans. Time- and concentration-dependent lindane dosimetry was determined in primary cultures of rat cortical neurons in vitro using “faux” (without electrodes) microelectrode arrays (MEAs). In vivo data were derived from literature values, and physiologically based pharmacokinetic (PBPK) modeling was used to extrapolate from rat to human. The previously determined EC50 for increased firing rates in primary cultures of cortical neurons was 0.6 μg/ml. Media and cell lindane concentrations at the EC50 were 0.4 μg/ml and 7.1 μg/ml, respectively, and cellular lindane accumulation was time- and concentration-dependent. Rat blood and brain lindane levels during seizures were 1.7–1.9 μg/ml and 5–11 μg/ml, respectively. Brain lindane levels associated with seizures in rats and those predicted for humans (average = 7 μg/ml) by PBPK modeling were very similar to in vitro concentrations detected in cortical cells at the EC50 dose. PBPK model predictions matched literature data and timing. These findings indicate that in vitro MEA results are predictive of in vivo responses to lindane and demonstrate a successful modeling approach for IVIVE of rat and human neurotoxicity. - Highlights: • In vitro to in vivo extrapolation for lindane neurotoxicity was performed. • Dosimetry of lindane in a micro-electrode array (MEA) test system was assessed. • Cell concentrations at the MEA EC
Full-disk nonlinear force-free field extrapolation of SDO/HMI and SOLIS/VSM magnetograms
Tadesse, T.; Wiegelmann, T.; Inhester, B.; MacNeice, P.; Pevtsov, A.; Sun, X.
2013-02-01
Context. The magnetic field configuration is essential for understanding solar explosive phenomena, such as flares and coronal mass ejections. To overcome the unavailability of coronal magnetic field measurements, photospheric magnetic field vector data can be used to reconstruct the coronal field. Two complications of this approach are that the measured photospheric magnetic field is not force-free and that one has to apply a preprocessing routine to achieve boundary conditions suitable for the force-free modeling. Furthermore, the nonlinear force-free extrapolation code should take into account uncertainties in the photospheric field data, which occur due to noise, incomplete inversions, or azimuth ambiguity-removal techniques. Aims: Extrapolation codes in Cartesian geometry for modeling the magnetic field in the corona do not take the curvature of the Sun's surface into account and can only be applied to relatively small areas, e.g., a single active region. Here we apply a method for nonlinear force-free coronal magnetic field modeling and preprocessing of photospheric vector magnetograms in spherical geometry, using the optimization procedure, to full-disk vector magnetograms. We compare the analysis of the photospheric magnetic field and subsequent force-free modeling based on full-disk vector maps from the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO) and the Vector Spectromagnetograph (VSM) of the Synoptic Optical Long-term Investigations of the Sun (SOLIS). Methods: We used HMI and VSM photospheric magnetic field measurements to model the force-free coronal field above multiple solar active regions, assuming magnetic forces to dominate. We solved the nonlinear force-free field equations by minimizing a functional in spherical coordinates over a full disk and excluding the poles. After searching for the optimum modeling parameters for the particular data sets, we compared the resulting nonlinear force-free model fields. We compared
Spectral unmixing: estimating partial abundances
CSIR Research Space (South Africa)
Debba, Pravesh
2009-01-01
Full Text Available Presentation outline: background and research question; what spectral unmixing is; end-member spectra and synthetic mixtures; results; conclusions. The talk motivates the research question with an analogy: given only a finished chocolate cake, can you guess the ingredients, and in what quantities? Spectral unmixing poses the same problem for a measured pixel spectrum, estimating the partial abundances of the end-member materials that mix to produce it.
Towards a measurement of the spectral runnings
Muñoz, Julian B.; Kovetz, Ely D.; Raccanelli, Alvise; Kamionkowski, Marc; Silk, Joseph
2017-05-01
Single-field slow-roll inflation predicts a nearly scale-free power spectrum of perturbations, as observed at the scales accessible to current cosmological experiments. This spectrum is slightly red, showing a tilt (1-ns) ~ 0.04. A direct consequence of this tilt is the nonvanishing runnings αs = dns/dlog k and βs = dαs/dlog k, which in the minimal inflationary scenario should reach absolute values of 10⁻³ and 10⁻⁵, respectively. In this work we calculate how well future surveys can measure these two runnings. We consider a Stage-4 (S4) CMB experiment and show that it will be able to detect significant deviations from the inflationary prediction for αs, although not for βs. Adding to the S4 CMB experiment the information from a WFIRST-like or a DESI-like survey improves the sensitivity to the runnings by ~ 20% and 30%, respectively. A spectroscopic survey with a billion objects, such as the SKA, will add enough information to the S4 measurements to allow a detection of αs = 10⁻³, required to probe the single-field slow-roll inflationary paradigm. We show that only a very futuristic interferometer targeting the dark ages will be capable of measuring the minimal inflationary prediction for βs. The results of other probes, such as a stochastic background of gravitational waves observable by LIGO, the Ly-α forest, and spectral distortions, are shown for comparison. Finally, we study the claims that large values of βs, if extrapolated to the smallest scales, can produce primordial black holes of tens of solar masses, which we show to be easily testable by the S4 CMB experiment.
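The runnings enter the standard parameterization of the primordial spectrum as P(k) = A_s (k/k_*)^{(n_s−1) + ½ α_s ln(k/k_*) + ⅙ β_s ln²(k/k_*)}. A minimal numerical sketch follows; the fiducial values of A_s, n_s and the pivot k_* are assumptions chosen near Planck-like numbers, not taken from this paper:

```python
import numpy as np

def primordial_power(k, A_s=2.1e-9, n_s=0.96, alpha_s=-1e-3, beta_s=1e-5, k_star=0.05):
    """Primordial scalar power spectrum with runnings (standard parameterization).

    k and k_star in 1/Mpc; alpha_s = d n_s / d ln k, beta_s = d alpha_s / d ln k.
    """
    lnk = np.log(k / k_star)
    exponent = (n_s - 1.0) + 0.5 * alpha_s * lnk + (1.0 / 6.0) * beta_s * lnk**2
    return A_s * (k / k_star) ** exponent

def effective_tilt(k, **kw):
    """Local spectral index n_s(k) = 1 + d ln P / d ln k, by central differences."""
    eps = 1e-4
    lo = primordial_power(k * (1 - eps), **kw)
    hi = primordial_power(k * (1 + eps), **kw)
    return 1.0 + (np.log(hi) - np.log(lo)) / (np.log(1 + eps) - np.log(1 - eps))
```

At the pivot scale the effective tilt reduces to n_s; away from the pivot the α_s and β_s terms slowly bend the spectrum, which is what small-scale probes try to detect.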
Directory of Open Access Journals (Sweden)
J. J. Vélez
2009-02-01
Full Text Available A Regional Water Resources study was performed at basins within and draining to the Basque Country Region (N of Spain), with a total area of approximately 8500 km². The objective was to obtain daily and monthly long-term discharges at 567 points, most of them ungauged, with basin areas ranging from 0.25 to 1850 km². In order to extrapolate the calibrations at gauged points to the ungauged ones, a distributed and conceptually based model called TETIS was used. In TETIS the runoff production is modelled using five linked tanks at each cell with different outflow relationships at each tank, which represent the main hydrological processes: snowmelt, evapotranspiration, overland flow, interflow and base flow. The routing along the channel network couples its geomorphologic characteristics with the kinematic wave approach. The parameter estimation methodology distinguishes between the effective parameter used in the model at the cell scale and the watershed characteristic estimated from the available information, seeking the best estimate without losing its physical meaning. The relationship between them can be considered a correction function or, in its simplest form, a correction factor. The correction factor can take into account the model input errors, the temporal and spatial scale effects and the watershed characteristics. Therefore, it is reasonable to assume the correction factor is the same for each parameter over all cells within the watershed. This approach drastically reduces the number of parameters to be calibrated, because only the common correction factors are calibrated instead of parameter maps (the number of parameters times the number of cells). In this way, the calibration can be performed using automatic methodologies. In this work, the Shuffled Complex Evolution – University of Arizona (SCE-UA) algorithm was used. Data from recent years were used to calibrate the model in 20 of
[Review of digital ground object spectral library].
Zhou, Xiao-Hu; Zhou, Ding-Wu
2009-06-01
A higher spectral resolution is the main direction of development in remote sensing technology, and it is quite important to set up digital ground object reflectance spectral database libraries, one of the fundamental research fields in remote sensing application. Remote sensing applications have been relying increasingly on ground object spectral characteristics, and quantitative analysis has developed to a new stage. The present article summarizes and systematically introduces the research status quo and development trends of digital ground object reflectance spectral libraries domestically and internationally in recent years. The spectral libraries that have been established are introduced, including desertification, plant, geological, soil, mineral, cloud, snow, atmosphere, rock, water, meteorite, moon rock, man-made material, mixture, volatile compound, and liquid spectral database libraries. In the process of establishing spectral database libraries there have been some problems, such as the lack of a uniform national spectral database standard and of uniform standards for ground object features, as well as limited comparability between different databases. In addition, data-sharing mechanisms have not been implemented, etc. This article also puts forward some suggestions on those problems.
Bais, A F
1997-07-20
A methodology for the absolute calibration of spectral measurements of direct solar ultraviolet radiation, performed with a Brewer spectrophotometer, is presented. The method uses absolute measurements of global and diffuse solar irradiance obtained practically simultaneously at each wavelength with the direct-Sun component. On the basis of this calibration, direct-Sun spectra, measured over a wide range of solar zenith angles at a high-altitude site, were used to determine the extraterrestrial solar spectrum by applying the Langley extrapolation method. Finally, this spectrum is compared with a solar spectrum derived from the ATLAS 3 Space Shuttle mission, showing agreement of better than +/-3%.
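The Langley extrapolation step rests on the Beer–Lambert relation I(m) = I0 · exp(−τ·m): plotting ln I against airmass m gives a straight line whose intercept at m = 0 is the extraterrestrial irradiance I0 and whose slope is the optical depth τ. A minimal sketch on synthetic data (the numbers are illustrative, not from this paper):

```python
import numpy as np

def langley_extrapolate(airmass, irradiance):
    """Fit ln(I) = ln(I0) - tau*m and return (I0, tau).

    Extrapolating the straight-line fit back to airmass m = 0 yields the
    extraterrestrial irradiance I0; the negative slope gives the total
    optical depth tau along the vertical path.
    """
    slope, intercept = np.polyfit(airmass, np.log(irradiance), 1)
    return np.exp(intercept), -slope

# Synthetic Langley plot: assume I0 = 1.2 (arbitrary units) and tau = 0.3
m = np.linspace(1.0, 5.0, 20)
I = 1.2 * np.exp(-0.3 * m)
I0_est, tau_est = langley_extrapolate(m, I)
```

In practice the method requires a stable atmosphere over the measurement period, which is why high-altitude sites and wide solar-zenith-angle ranges are used.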
Spectral Analysis of Markov Chains
2007-01-01
The paper deals with the problem of statistical analysis of Markov chains in connection with the spectral density. We present expressions for the spectral density function; these expressions may be used to estimate the parameter of the Markov chain.
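For the simplest case, a stationary two-state chain, the spectral density has a closed form: the indicator process has autocorrelation λ^|k| with λ = 1 − p − q, giving an AR(1)-type density. A sketch (the transition probabilities p, q below are illustrative; the paper's general expressions are not reproduced here):

```python
import numpy as np

def two_state_spectral_density(omega, p, q):
    """Spectral density of a stationary two-state Markov chain on {0, 1}.

    With P(0->1) = p and P(1->0) = q, the lag-k autocorrelation of the state
    indicator is lam**k, lam = 1 - p - q, so
    f(w) = (var / 2pi) * (1 - lam^2) / (1 - 2*lam*cos(w) + lam^2).
    """
    lam = 1.0 - p - q
    pi1 = p / (p + q)            # stationary probability of state 1
    var = pi1 * (1.0 - pi1)      # variance of the binary indicator
    return var / (2 * np.pi) * (1 - lam**2) / (1 - 2 * lam * np.cos(omega) + lam**2)

def density_from_autocovariance(omega, p, q, kmax=200):
    """Same density from the defining sum f(w) = (1/2pi) * sum_k gamma(k) cos(w k)."""
    lam = 1.0 - p - q
    pi1 = p / (p + q)
    var = pi1 * (1.0 - pi1)
    k = np.arange(1, kmax + 1)
    return (var + 2 * np.sum(var * lam**k * np.cos(omega * k))) / (2 * np.pi)
```

Matching the two computations is a quick consistency check; fitting the closed form to an empirical periodogram is one way to estimate λ, and hence the chain's parameters.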
SPECTRAL ANALYSIS OF EXCHANGE RATES
Directory of Open Access Journals (Sweden)
ALEŠA LOTRIČ DOLINAR
2013-06-01
Full Text Available Using spectral analysis is very common in technical areas but rather unusual in economics and finance, where ARIMA and GARCH modeling are much more in use. To show that spectral analysis can be useful in determining hidden periodic components for high-frequency finance data as well, we use the example of foreign exchange rates
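The basic tool for finding hidden periodic components in a return series is the periodogram: squared FFT magnitudes at the Fourier frequencies. A self-contained sketch on synthetic "exchange-rate" returns with a planted period-5 (weekly) cycle; all data here are simulated, not from the study:

```python
import numpy as np

def periodogram(x):
    """Raw periodogram: squared FFT magnitudes at the Fourier frequencies."""
    x = np.asarray(x, float)
    n = len(x)
    fx = np.fft.rfft(x - x.mean())           # remove the mean first
    freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per observation
    power = (np.abs(fx) ** 2) / n
    return freqs, power

# Synthetic daily log-returns with a hidden weekly (period-5) component
rng = np.random.default_rng(0)
n = 1000
t = np.arange(n)
returns = 0.05 * np.sin(2 * np.pi * t / 5) + 0.02 * rng.standard_normal(n)

freqs, power = periodogram(returns)
dominant = freqs[np.argmax(power[1:]) + 1]   # skip the zero frequency
```

A sharp peak at frequency 0.2 cycles/day (period 5) stands out against the flat noise floor, which is exactly the kind of structure ARIMA/GARCH summaries do not surface directly.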
Miniature spectrally selective dosimeter
Energy Technology Data Exchange (ETDEWEB)
Adams, R.R.; Macconochie, I.O.; Poole, B.D.
1983-02-08
The present invention discloses a miniature spectrally selective dosimeter capable of measuring selected bandwidths of radiation exposure on small mobile areas. This is achieved by the combination of photovoltaic detectors, electrochemical integrators (e-cells) and filters in a small compact case which can be easily attached in close proximity to, and substantially parallel to, the surface being measured. In one embodiment, two photovoltaic detectors, two e-cells and three filters are packaged in a small case whose attaching means is a safety pin. In another embodiment, two detectors, one e-cell and three filters are packaged in a small case whose attaching means is a clip that clips over a side piece of an eyeglass frame. In a further embodiment, the electro-optic elements are packaged in a wristwatch case, the attaching means being a watchband. The filters in all embodiments allow only selected wavelengths of radiation to be detected by the photovoltaic detectors and then integrated by the e-cells.
Spectral numbers in Floer theories
Usher, Michael
2007-01-01
The chain complexes underlying Floer homology theories typically carry a real-valued filtration, allowing one to associate to each Floer homology class a spectral number defined as the infimum of the filtration levels of chains representing that class. These spectral numbers have been studied extensively in the case of Hamiltonian Floer homology by Oh, Schwarz, and others. We prove that the spectral number associated to any nonzero Floer homology class is always finite, and that the infimum in the definition of the spectral number is always attained. In the Hamiltonian case, this implies that what is known as the "nondegenerate spectrality" axiom holds on all closed symplectic manifolds. Our proofs are entirely algebraic and rather elementary, and apply to any Floer-type theory (including Novikov homology) satisfying certain standard formal properties provided that one works with coefficients in a Novikov ring whose degree-zero part \\Lambda_0 is a field. The key ingredient is a theorem about linear transforma...
Richmond, Orien M W; McEntee, Jay P; Hijmans, Robert J; Brashares, Justin S
2010-09-22
Species distribution models (SDMs) are increasingly used for extrapolation, or predicting suitable regions for species under new geographic or temporal scenarios. However, SDM predictions may be prone to errors if species are not at equilibrium with climatic conditions in the current range and if training samples are not representative. Here the controversial "Pleistocene rewilding" proposal was used as a novel example to address some of the challenges of extrapolating modeled species-climate relationships outside of current ranges. Climatic suitability for three proposed proxy species (Asian elephant, African cheetah and African lion) was extrapolated to the American southwest and Great Plains using Maxent, a machine-learning species distribution model. Similar models were fit for Oryx gazella, a species native to Africa that has naturalized in North America, to test model predictions. To overcome biases introduced by contracted modern ranges and limited occurrence data, random pseudo-presence points generated from modern and historical ranges were used for model training. For all species except the oryx, models of climatic suitability fit to training data from historical ranges produced larger areas of predicted suitability in North America than models fit to training data from modern ranges. Four naturalized oryx populations in the American southwest were correctly predicted with a generous model threshold, but none of these locations were predicted with a more stringent threshold. In general, the northern Great Plains had low climatic suitability for all focal species and scenarios considered, while portions of the southern Great Plains and American southwest had low to intermediate suitability for some species in some scenarios. The results suggest that the use of historical, in addition to modern, range information and randomly sampled pseudo-presence points may improve model accuracy. This has implications for modeling range shifts of organisms in response
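Maxent fits a constrained maximum-entropy distribution and is more involved than can be shown briefly; as a much simpler stand-in that still illustrates fitting climatic suitability to (pseudo-)presence points and extrapolating it to a new region, here is a rectilinear climate-envelope sketch. All data, variables, and percentile choices below are synthetic illustrations, not the study's models:

```python
import numpy as np

def envelope_fit(train_climate, lo=5, hi=95):
    """Fit a rectilinear climate envelope: per-variable percentile bounds."""
    lower = np.percentile(train_climate, lo, axis=0)
    upper = np.percentile(train_climate, hi, axis=0)
    return lower, upper

def envelope_predict(climate, bounds):
    """1 where every climate variable falls inside the fitted envelope, else 0."""
    lower, upper = bounds
    return np.all((climate >= lower) & (climate <= upper), axis=1).astype(int)

# Pseudo-presence training points: (mean temperature in C, annual rainfall in mm)
rng = np.random.default_rng(1)
train = np.column_stack([rng.normal(24, 2, 500), rng.normal(600, 100, 500)])
bounds = envelope_fit(train)

# Extrapolation to a new region: one climatically similar cell, one much colder/wetter cell
new_cells = np.array([[23.0, 650.0], [5.0, 1200.0]])
suitable = envelope_predict(new_cells, bounds)
```

The study's point about training data carries over directly: if the training sample (here, the pseudo-presence points) covers only a contracted modern range, the fitted envelope is too narrow and under-predicts suitability elsewhere.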
Energy Technology Data Exchange (ETDEWEB)
Miyazawa, J., E-mail: miyazawa@LHD.nifs.ac.jp [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan); Goto, T.; Morisaki, T.; Goto, M.; Sakamoto, R.; Motojima, G.; Peterson, B.J.; Suzuki, C.; Ida, K.; Yamada, H.; Sagara, A. [National Institute for Fusion Science, 322-6 Oroshi, Toki, Gifu 509-5292 (Japan)
2011-12-15
Highlights: • The DPE method predicts temperature and density profiles in a fusion reactor. • This method is based on the gyro-Bohm type parameter dependence. • The size of the fusion reactor is determined to fulfill the power balance. • The reactor size is proportional to a factor and to the −4/3 power of the magnetic field. • This factor can be a measure of plasma performance like the fusion triple product. - Abstract: A new method named direct profile extrapolation (DPE) has been developed to estimate the radial profiles of temperature and density in a fusion reactor. This method directly extrapolates the radial profiles observed in present experiments to the fusion reactor condition assuming gyro-Bohm type parameter dependence. The magnetohydrodynamic equilibrium that fits the experimental profile data is used to determine the plasma volume. Four enhancement factors for the magnetic field strength, the density, the plasma beta, and the energy confinement are assumed. Then, the plasma size is determined so as to fulfill the power balance in the reactor plasma. The plasma performance can be measured by an index, C_exp, introduced in the DPE method. The minimum magnetic stored energy of the fusion reactor to achieve self-ignition is shown to be proportional to the cube of C_exp and inversely proportional to the square of the magnetic field strength. Using this method, the design window of a self-ignited fusion reactor that can be extrapolated from recent experimental results in the Large Helical Device (LHD) is considered. Also discussed is how large an enhancement is needed for the LHD experiment to ensure the helical reactor design of FFHR2m2.
Barman, Stephen L; Jean, Gary W; Dinsfriend, William M; Gerber, David E
2016-02-01
The treatment of adults who present with rare pediatric tumors is not well characterized in the literature. We report the case of a 40-year-old African American woman with a diagnosis of choroid plexus carcinoma admitted to the intensive care unit for severe sepsis seven days after receiving chemotherapy consisting of carboplatin 350 mg/m² on Days 1 and 2 plus etoposide 100 mg/m² on Days 1-5. Her laboratory results were significant for an absolute neutrophil count of 0/µL and blood cultures positive for Capnocytophaga species. She was supported with broad-spectrum antibiotics and myeloid growth factors. She eventually recovered and was discharged in stable condition. The management of adults with malignancies most commonly seen in pediatric populations presents substantial challenges. There are multiple age-specific differences in renal and hepatic function that explain the need for higher dosing in pediatric patients without increasing the risk of toxicity. Furthermore, differences in pharmacokinetic parameters such as absorption, distribution, and clearance are present but are less likely to affect patients. The pediatric population is expected to have more bone marrow reserve and is, therefore, less susceptible to myelosuppression. The extrapolation of pediatric dosing to an adult presents a problematic situation in treating adults with malignancies that primarily affect pediatric patients. We recommend extrapolating from adult treatment regimens with similar agents rather than from pediatric treatment regimens, to reduce the risk of toxicity. We also recommend considering the addition of myeloid growth factors. If the treatment is tolerated without significant toxicity, dose escalation can be considered.
Directory of Open Access Journals (Sweden)
Ravichandran R
2009-01-01
Full Text Available The objective of the present study is to establish radiation standards for absorbed doses for clinical high-energy linear accelerator beams. In the nonavailability of a cobalt-60 beam for arriving at ND,water values for thimble chambers, we investigated the efficacy of a Perspex-mounted extrapolation chamber (EC) used earlier for low-energy x-ray and beta dosimetry. An extrapolation chamber with the facility for achieving variable electrode separations from 10.5 mm to 0.5 mm using a micrometer screw was used for calibrations. Photon beams of 6 MV and 15 MV and electron beams of 6 MeV and 15 MeV from Varian Clinac linacs were calibrated. Absorbed dose estimates to Perspex were converted into dose to solid water for comparison with FC 65 ionisation chamber measurements in water. Measurements made during the period December 2006 to June 2008 are considered for evaluation. Uncorrected ionization readings of the EC for all the radiation beams over the entire period were within 2%, showing the consistency of measurements. Absorbed doses estimated by the EC were in good agreement with in-water calibrations within 2% for photon and electron beams. The present results suggest that extrapolation chambers can be considered an independent measuring system for absorbed dose in addition to Farmer-type ion chambers. In the absence of standard beam quality (Co-60) radiation as reference quality for ND,water, the possibility of keeping the EC as a primary standard for absorbed dose calibrations in high-energy radiation beams from linacs should be explored. As there are neither standards laboratories nor an SSDL available in our country, we look forward to keeping the EC as a local standard for hospital chamber calibrations. We are also participating in the IAEA mailed TLD intercomparison programme for quality audit of the existing status of radiation dosimetry in high-energy linac beams. The performance of the EC has to be confirmed with cobalt-60 beams in a separate study, as linacs are susceptible to minor
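The defining idea of an extrapolation chamber is to measure the ionization current at several electrode separations and take the slope of current versus gap extrapolated to zero separation, which is proportional to the dose rate. A minimal sketch of that fit-and-extrapolate step; the currents, gaps, and the lumped geometry constant below are illustrative assumptions, not the study's calibration chain:

```python
import numpy as np

# Ionization current (pA) measured at several electrode separations (mm).
# Ideally the current grows linearly with the gap; small perturbations are
# why the fit-and-extrapolate step is used in practice.
gap_mm = np.array([0.5, 1.5, 3.0, 5.0, 7.5, 10.5])
current_pa = np.array([0.52, 1.49, 3.05, 4.98, 7.52, 10.47])

# Slope dI/dd of current vs gap, i.e. the limit as the separation -> 0
slope, intercept = np.polyfit(gap_mm, current_pa, 1)

# Dose rate is proportional to dI/dd: (W/e) converts charge to energy in air;
# air density and collecting area fold into a constant that is purely
# hypothetical here.
W_OVER_E = 33.97            # J/C for dry air
GEOMETRY_CONSTANT = 1.0e6   # illustrative lumped constant for this geometry
dose_rate = GEOMETRY_CONSTANT * W_OVER_E * slope
```

The same principle underlies both the low-energy beta work the chamber came from and its proposed use as a local standard for linac beams.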
Directory of Open Access Journals (Sweden)
Ezekiel Uba Nwose
2010-04-01
Full Text Available Background: There are many different methods for the assessment of whole blood viscosity, but not every pathology unit has equipment for any of the methods. However, a validated arithmetic method exists whereby whole blood viscosity can be extrapolated from haematocrit and total serum proteins. Aims: The objective of this work is to develop an algorithm in the form of a chart by which clinicians can easily extrapolate whole blood viscosity values in their consulting rooms or on the ward. Another objective is to suggest normal, subnormal and critical reference ranges applicable to this method. Materials and Methods: Whole blood viscosity at high shear stress was determined from various possible pairs of haematocrit and total proteins. A chart was formulated so that whole blood viscosity can be extrapolated. After determination of two standard deviations from the mean and ascertainment of symmetric distribution, normal and abnormal reference ranges were defined. Results: The clinicians' user-friendly chart is presented. Considering presumptive lower and upper limits, the continuum of ≤14.28, 14.29 – 15.00, 15.01 – 19.01, 19.02 – 19.39 and ≥19.40 (at 208 s⁻¹) is obtained as the reference ranges for critically low, subnormal low, normal, subnormal high and critically high whole blood viscosity levels, respectively. Conclusion: This article advances a validated method to provide a user-friendly chart that would enable clinicians to assess whole blood viscosity for any patient who has results for full blood count and total proteins. It would make the assessment of whole blood viscosity costless and the neglect of a known cardiovascular risk factor less excusable.
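The validated arithmetic method referred to is commonly quoted in this literature as WBV(208 s⁻¹) = 0.12·h + 0.17·p − 2.07, with h the haematocrit (%) and p the total proteins (g/L); those regression coefficients are taken from the cited literature, not stated in this abstract, and should be verified against the original source before any use. The classification bands, by contrast, are exactly the abstract's proposed reference continuum:

```python
def whole_blood_viscosity(haematocrit_pct, total_protein_g_per_l):
    """Extrapolated whole blood viscosity at high shear (208 s^-1).

    Coefficients are the published regression this arithmetic method is
    based on (assumed here from the literature; verify before use).
    """
    return 0.12 * haematocrit_pct + 0.17 * total_protein_g_per_l - 2.07

def classify_wbv(wbv):
    """Reference bands proposed in the abstract, at 208 s^-1."""
    if wbv <= 14.28:
        return "critically low"
    if wbv <= 15.00:
        return "subnormal low"
    if wbv <= 19.01:
        return "normal"
    if wbv <= 19.39:
        return "subnormal high"
    return "critically high"
```

For example, a haematocrit of 45% with total proteins of 72 g/L extrapolates to a value inside the proposed normal band, which is the kind of bedside lookup the chart is designed to replace with a single glance.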
A novel approach to modeling spacecraft spectral reflectance
Willison, Alexander; Bédard, Donald
2016-10-01
Simulated spectrometric observations of unresolved resident space objects are required for the interpretation of quantities measured by optical telescopes. This allows for their characterization as part of regular space surveillance activity. A peer-reviewed spacecraft reflectance model is necessary to help improve the understanding of characterization measurements. With this objective in mind, a novel approach to model spacecraft spectral reflectance as an overall spectral bidirectional reflectance distribution function (sBRDF) is presented. A spacecraft's overall sBRDF is determined using its triangular-faceted computer-aided design (CAD) model and the empirical sBRDF of its homogeneous materials. The CAD model is used to determine the proportional contribution of each homogeneous material to the overall reflectance. Each empirical sBRDF is contained in look-up tables developed from measurements made over a range of illumination and reflection geometries using simple interpolation and extrapolation techniques. A demonstration of the spacecraft reflectance model is provided through simulation of an optical ground truth characterization using the Canadian Advanced Nanospace eXperiment-1 Engineering Model nanosatellite as the subject. Validation of the reflectance model is achieved through a qualitative comparison of simulated and measured quantities.
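Two ingredients of the described model lend themselves to a short sketch: interpolating an empirical sBRDF look-up table between measured geometries/wavelengths, and combining homogeneous materials in proportion to their contribution. The grid layout, clamping behaviour, and area weighting below are simplifying assumptions for illustration; the actual model also accounts for facet visibility and illumination/reflection geometry:

```python
import numpy as np

def interp_sbrdf(table, angles_deg, wavelengths_nm, query_angle, query_wl):
    """Bilinearly interpolate an empirical sBRDF look-up table.

    table[i, j] holds the measured BRDF at geometry angles_deg[i] and
    wavelength wavelengths_nm[j]; queries outside the grid are clamped
    (a simple nearest-edge extrapolation).
    """
    a = np.clip(np.interp(query_angle, angles_deg, np.arange(len(angles_deg))),
                0, len(angles_deg) - 1)
    w = np.clip(np.interp(query_wl, wavelengths_nm, np.arange(len(wavelengths_nm))),
                0, len(wavelengths_nm) - 1)
    i0, j0 = int(a), int(w)
    i1 = min(i0 + 1, len(angles_deg) - 1)
    j1 = min(j0 + 1, len(wavelengths_nm) - 1)
    fa, fw = a - i0, w - j0
    top = table[i0, j0] * (1 - fw) + table[i0, j1] * fw
    bot = table[i1, j0] * (1 - fw) + table[i1, j1] * fw
    return top * (1 - fa) + bot * fa

def overall_sbrdf(material_brdfs, facet_areas, facet_materials):
    """Area-weighted combination of per-material sBRDF values.

    Each CAD facet contributes its material's sBRDF in proportion to its
    area -- the proportional-contribution idea described in the abstract.
    """
    areas = np.asarray(facet_areas, float)
    vals = np.array([material_brdfs[m] for m in facet_materials])
    return np.sum(areas * vals) / np.sum(areas)
```

With tables measured over a range of geometries, this kind of interpolation lets the simulator query the sBRDF at arbitrary observation conditions between (and, with clamping, slightly beyond) the measured points.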
Balabin, Roman M; Smirnov, Sergey V
2012-04-07
Modern analytical chemistry of industrial products is in need of rapid, robust, and cheap analytical methods to continuously monitor product quality parameters. For this reason, spectroscopic methods are often used to control the quality of industrial products in an on-line/in-line regime. Vibrational spectroscopy, including mid-infrared (MIR), Raman, and near-infrared (NIR), is one of the best ways to obtain information about the chemical structures and the quality coefficients of multicomponent mixtures. Together with chemometric algorithms and multivariate data analysis (MDA) methods, which were especially created for the analysis of complicated, noisy, and overlapping signals, NIR spectroscopy shows great results in terms of its accuracy, including classical prediction error, RMSEP. However, it is unclear whether the combined NIR + MDA methods are capable of dealing with much more complex interpolation or extrapolation problems that are inevitably present in real-world applications. In the current study, we try to make a rather general comparison of linear, such as partial least squares or projection to latent structures (PLS); "quasi-nonlinear", such as the polynomial version of PLS (Poly-PLS); and intrinsically non-linear, such as artificial neural networks (ANNs), support vector regression (SVR), and least-squares support vector machines (LS-SVM/LSSVM), regression methods in terms of their robustness. As a measure of robustness, we will try to estimate their accuracy when solving interpolation and extrapolation problems. Petroleum and biofuel (biodiesel) systems were chosen as representative examples of real-world samples. Six very different chemical systems that differed in complexity, composition, structure, and properties were studied; these systems were gasoline, ethanol-gasoline biofuel, diesel fuel, aromatic solutions of petroleum macromolecules, petroleum resins in benzene, and biodiesel. Eighteen different sample sets were used in total. General
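The interpolation-versus-extrapolation distinction the study probes can be demonstrated with any regression: train inside the calibrated concentration range versus train on one end and predict the other. The sketch below uses ordinary least squares as a linear stand-in for PLS on a synthetic, mildly nonlinear property; the data and the quadratic "truth" are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Latent concentration c and a slightly curved property y (synthetic)
c = np.sort(rng.uniform(0.0, 1.0, 200))
y = 1.0 + 2.0 * c + 0.8 * c**2

def fit_linear(ctr, ytr):
    """Ordinary least squares y ~ a + b*c (a linear stand-in for PLS)."""
    A = np.column_stack([np.ones_like(ctr), ctr])
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return coef

def rmsep(coef, cte, yte):
    """Root mean square error of prediction on held-out samples."""
    pred = coef[0] + coef[1] * cte
    return float(np.sqrt(np.mean((pred - yte) ** 2)))

# Interpolation: a random split inside the calibrated range
mask = rng.random(200) < 0.7
err_interp = rmsep(fit_linear(c[mask], y[mask]), c[~mask], y[~mask])

# Extrapolation: train on c <= 0.7, predict the unseen high-concentration end
lo = c <= 0.7
err_extrap = rmsep(fit_linear(c[lo], y[lo]), c[~lo], y[~lo])
```

Even with noiseless data, the linear model's extrapolation error exceeds its interpolation error because the curvature it never saw dominates outside the training range; this is the robustness gap the study quantifies for PLS, Poly-PLS, ANN, SVR, and LS-SVM on real fuel systems.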
Energy Technology Data Exchange (ETDEWEB)
Sussmann, R.; Homburg, F.; Freudenthaler, V.; Jaeger, H. [Frauenhofer Inst. fuer Atmosphaerische Umweltforschung, Garmisch-Partenkirchen (Germany)
1997-12-31
The CCD image of a persistent contrail and the coincident LIDAR measurement are presented. To extrapolate the LIDAR-derived optical thickness to the video field of view, an anisotropy correction and calibration have to be performed. Observed bright halo components result from highly regularly oriented hexagonal crystals with sizes of 200 μm – 2 mm. This is explained by measured ambient humidities below the formation threshold of natural cirrus. Optical thickness from LIDAR shows significant discrepancies with the result from coincident NOAA-14 data. Errors result from the anisotropy correction and the parameterized relations between AVHRR channels and optical properties. (author) 28 refs.
Cruz Uribe, David; Pérez Moreno, Carlos
2000-01-01
We give several extrapolation theorems for pairs of weights of the form (w, Mkw) and (w, (Mw/w)r w), where w is any non-negative function, r>1, and Mk is the kth iterate of the Hardy–Littlewood maximal operator. As an application we show that our results can be used to extend and sharpen results for square functions and singular integral operators by Chang et al. (1985, Comment. Math. Helv.60, 217–246), Chanillo and Wheeden (1987, Indiana Univ. Math. J.36, 277–294), Wilson (1987, Duke Math. J...
Reynaldo, S R; Benavente, J A; Da Silva, T A
2016-11-01
Beta Secondary Standard 2 (BSS 2) provides beta radiation fields with certified values of absorbed dose to tissue and the derived operational radiation protection quantities. As part of the quality assurance, the reliability of the CDTN BSS 2 system was verified through measurements in the ⁹⁰Sr/⁹⁰Y and ⁸⁵Kr beta radiation fields. Absorbed dose rates and their angular variation were measured with a PTW model 23392 extrapolation chamber and with Gafchromic radiochromic films on a PMMA slab phantom. The feasibility of using both methods was analyzed.
Rong, Lu; Latychevskaia, Tatiana; Wang, Dayong; Zhou, Xun; Huang, Haochong; Li, Zeyu; Wang, Yunxin
2014-07-14
We report here on terahertz (THz) digital holography on a biological specimen. A continuous-wave (CW) THz in-line holographic setup was built based on a 2.52 THz CO₂-pumped THz laser and a pyroelectric array detector. We introduce a novel statistical method of obtaining true intensity values for the pyroelectric array detector's pixels. Absorption and phase-shifting images of a dragonfly's hindwing were reconstructed simultaneously from a single in-line hologram. Furthermore, we applied phase retrieval routines to eliminate the twin image and enhanced the resolution of the reconstructions by hologram extrapolation beyond the detector area. The finest observed features are cross veins 35 μm in width.
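Extrapolating a hologram beyond the detector area belongs to the family of iterative band-limited extrapolation schemes (Gerchberg–Papoulis): alternately enforce a known spectral support and re-impose the measured samples. A one-dimensional sketch of that principle on a synthetic signal; the signal, the band limit, and the "detector" window are all illustrative assumptions, not the paper's hologram pipeline:

```python
import numpy as np

def bandlimited_extrapolate(measured, window, band, n_iter=800):
    """Gerchberg-Papoulis-style extrapolation of a band-limited signal.

    measured: samples, valid only where window is True (the "detector").
    band: boolean mask of allowed FFT bins. Iterating between enforcing
    the band limit and re-imposing the measured samples recovers values
    outside the detector -- the same principle as extrapolating a hologram
    beyond the sensor to sharpen the reconstruction.
    """
    x = np.where(window, measured, 0.0)
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[~band] = 0.0                      # enforce the known band limit
        x = np.fft.ifft(X).real
        x[window] = measured[window]        # re-impose the measured data
    return x

n = 256
t = np.arange(n)
true = np.cos(2 * np.pi * 2 * t / n) + 0.5 * np.sin(2 * np.pi * 3 * t / n)
window = (t >= 12) & (t < 244)              # central "detector" region only
freqs = np.fft.fftfreq(n, d=1.0 / n)        # integer cycle counts
band = np.abs(freqs) <= 4                   # assumed band limit

recovered = bandlimited_extrapolate(true, window, band)
err_outside = float(np.max(np.abs(recovered[~window] - true[~window])))
```

Convergence slows as the unknown region grows relative to the band limit, which is why hologram extrapolation works best as a modest enlargement of the recorded area combined with phase-retrieval constraints.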
Energy Technology Data Exchange (ETDEWEB)
Hudson, James G.
2009-02-27
Detailed aircraft measurements were made of cloud condensation nuclei (CCN) spectra associated with extensive cloud systems off the central California coast in the July 2005 MASE project. These measurements include the wide supersaturation (S) range (2-0.01%) that is important for these polluted stratus clouds. Concentrations were usually characteristic of continental/anthropogenic air masses. The most notable feature was the consistently higher concentrations above the clouds than below. CCN measurements are so important because they provide a link between atmospheric chemistry and cloud-climate effects, which are the largest climate uncertainty. Extensive comparisons throughout the eleven flights between two CCN spectrometers operated at different but overlapping S ranges displayed the precision and accuracy of these difficult spectral determinations. There are enough channels of resolution in these instruments to provide differential spectra, which produce more rigorous and precise comparisons than traditional cumulative presentations of CCN concentrations. Differential spectra are also more revealing than cumulative spectra. Only one of the eleven flights exhibited typical maritime concentrations. Average below-cloud concentrations over the two hours furthest from the coast for the 8 flights with low polluted stratus were 614±233 cm⁻³ at 1% S, 149±60 cm⁻³ at 0.1% S and 57±33 cm⁻³ at 0.04% S. Immediately above cloud, average concentrations were respectively 74%, 55%, and 18% higher. Concentration variability among those 8 flights was a factor of two. Variability within each flight excluding distances close to the coast ranged from 15-56% at 1% S. However, CN and probably CCN concentrations sometimes varied by less than 1% over distances of more than a km. Volatility and size-critical S measurements indicated that the air masses were very polluted throughout MASE. The aerosol above the clouds was more polluted than the below-cloud aerosol. These high CCN concentrations from
Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene
2017-03-01
In Part 2 of these articles, an extensive analysis of pinning-force curves and raw scaling data was used to derive the Extrapolative Scaling Expression (ESE). This is a parameterization of the Unified Scaling Law (USL) that has the extrapolation capability of fundamental unified scaling, coupled with the application ease of a simple fitting equation. Here in Part 3, the accuracy of the ESE relation to interpolate and extrapolate limited critical-current data to obtain complete I_c(B,T,ε) datasets is evaluated and compared with present fitting equations. Accuracy is analyzed in terms of root mean square (RMS) error and fractional deviation statistics. Highlights from 92 test cases are condensed and summarized, covering most fitting protocols and proposed parameterizations of the USL. The results show that ESE reliably extrapolates critical currents at fields B, temperatures T, and strains ε that are remarkably different from the fitted minimum dataset. Depending on whether the conductor is moderate-J_c or high-J_c, effective RMS extrapolation errors for ESE are in the range 2–5 A at 12 T, which approaches the I_c measurement error (1–2%). The minimum dataset for extrapolating full I_c(B,T,ε) characteristics is also determined from raw scaling data. It consists of one set of I_c(B,ε) data at a fixed temperature (e.g., liquid helium temperature), and one set of I_c(B,T) data at a fixed strain (e.g., zero applied strain). Error analysis of extrapolations from the minimum dataset with different fitting equations shows that ESE reduces the percentage extrapolation errors at individual data points at high fields, temperatures, and compressive strains down to 1/10th to 1/40th the size of those for extrapolations with present fitting equations. Depending on the conductor, percentage fitting errors for interpolations are also reduced to as little as 1/15th the size. The extrapolation accuracy of the ESE relation offers the prospect of straightforward implementation
spectral-cube: Read and analyze astrophysical spectral data cubes
Robitaille, Thomas; Ginsburg, Adam; Beaumont, Chris; Leroy, Adam; Rosolowsky, Erik
2016-09-01
Spectral-cube provides an easy way to read, manipulate, analyze, and write data cubes with two positional dimensions and one spectral dimension, optionally with Stokes parameters. It is a versatile data container for building custom analysis routines. It provides a uniform interface to spectral cubes, robust to the wide range of conventions of axis order, spatial projections, and spectral units that exist in the wild, and allows easy extraction of cube sub-regions using physical coordinates. It has the ability to create, combine, and apply masks to datasets, is designed to work with datasets too large to load into memory, and provides basic summary-statistic methods such as moments and array aggregates.
Abrashkevich, A. G.; Abrashkevich, D. G.
1994-09-01
A FORTRAN-77 program is presented which solves the Sturm-Liouville problem for a system of coupled second-order differential equations by the finite difference method of the second order using the iterative Richardson extrapolation of the difference eigensolutions on a sequence of doubly condensed meshes. The same extrapolation procedure and error estimates are applied to the eigenvalues and eigenfunctions. Zero-value (Dirichlet) or zero-gradient (Neumann) boundary conditions are considered.
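The mesh-doubling Richardson extrapolation described above can be illustrated on a scalar model problem. The following is a numpy sketch, not the FORTRAN-77 program itself: it solves −u″ = λu on [0, π] with Dirichlet ends (exact smallest eigenvalue 1) by second-order finite differences on a mesh and on its doubly condensed refinement, then combines the two eigenvalues.

```python
import numpy as np

def smallest_eigenvalue(n):
    # Second-order central differences for -u'' = lambda * u on [0, pi]
    # with Dirichlet ends; n is the number of interior mesh points.
    h = np.pi / (n + 1)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.eigvalsh(A)[0]

lam_h = smallest_eigenvalue(50)     # mesh width h = pi/51
lam_h2 = smallest_eigenvalue(101)   # doubly condensed mesh: width h/2

# Richardson extrapolation for a second-order scheme:
# the O(h^2) error terms cancel, leaving an O(h^4) estimate.
lam_extrap = (4.0 * lam_h2 - lam_h) / 3.0
```

The extrapolated eigenvalue is accurate to roughly h⁴, several orders of magnitude better than either raw mesh value.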
Timescale Analysis of Spectral Lags
Institute of Scientific and Technical Information of China (English)
Ti-Pei Li; Jin-Lu Qu; Hua Feng; Li-Ming Song; Guo-Qiang Ding; Li Chen
2004-01-01
A technique for timescale analysis of spectral lags performed directly in the time domain is developed. Simulation studies are made to compare the time domain technique with the Fourier frequency analysis for spectral time lags. The time domain technique is applied to studying rapid variabilities of X-ray binaries and γ-ray bursts. The results indicate that in comparison with the Fourier analysis the timescale analysis technique is more powerful for the study of spectral lags in rapid variabilities on short time scales and short duration flaring phenomena.
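As a toy illustration of a time-domain lag estimate (not the authors' timescale-analysis technique, which is more elaborate), one can cross-correlate two synthetic energy-band light curves and read the lag off the correlation peak. The light curves and the 0.25 s lag below are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                      # time bin width in seconds
t = np.arange(0.0, 20.0, dt)
lag_true = 0.25                # the hard band lags the soft band by 0.25 s

# Synthetic flare in two energy bands, plus a little noise.
soft = np.exp(-0.5 * ((t - 10.0) / 0.5) ** 2)
hard = np.exp(-0.5 * ((t - 10.0 - lag_true) / 0.5) ** 2)
soft += 0.01 * rng.standard_normal(t.size)
hard += 0.01 * rng.standard_normal(t.size)

# Cross-correlate directly in the time domain; the peak location is the lag.
cc = np.correlate(hard - hard.mean(), soft - soft.mean(), mode="full")
lags = (np.arange(cc.size) - (t.size - 1)) * dt
lag_est = lags[np.argmax(cc)]
```

A positive `lag_est` here means the hard band trails the soft band, matching the sign convention of `np.correlate(a, v)` with `a` as the delayed series.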
Liu, Ning; Chen, Xiaohong; Yang, Chao
2016-11-01
During the reconstruction of a digital hologram, the reconstructed image is usually degraded by speckle noise, which makes it hard to observe the original object pattern. In this paper, a new reconstructed image enhancement method is proposed, which first reduces the speckle noise using an adaptive Gaussian filter, then calculates the high frequencies that belong to the object pattern based on a frequency extrapolation strategy. The proposed frequency extrapolation first calculates the frequency spectrum of the Fourier-filtered image, which is originally reconstructed from the +1 order of the hologram, and then gives the initial parameters for an iterative solution. The analytic iteration is implemented by continuous gradient threshold convergence to estimate the image level and vertical gradient information. The predicted spectrum is acquired through the analytical iteration of the original spectrum and gradient spectrum analysis. Finally, the reconstructed spectrum of the restoration image is acquired from the synthetic correction of the original spectrum using the predicted gradient spectrum. We conducted our experiment very close to the diffraction limit and used low-quality equipment to prove the feasibility of our method. Detailed analysis and figure demonstrations are presented in the paper.
Liu, Ning; Li, Weiliang; Zhao, Dongxue
2016-06-01
During the reconstruction of a digital hologram, the reconstructed image is usually degraded by speckle noise, which makes it hard to observe the original object pattern. In this paper, a new reconstructed image enhancement method is proposed, which first reduces the speckle noise using an adaptive Gaussian filter, then calculates the high frequencies that belong to the object pattern based on a frequency extrapolation strategy. The proposed frequency extrapolation first calculates the frequency spectrum of the Fourier-filtered image, which is originally reconstructed from the +1 order of the hologram, and then gives the initial parameters for an iterative solution. The analytic iteration is implemented by continuous gradient threshold convergence to estimate the image level and vertical gradient information. The predicted spectrum is acquired through the analytical iteration of the original spectrum and gradient spectrum analysis. Finally, the reconstructed spectrum of the restoration image is acquired from the synthetic correction of the original spectrum using the predicted gradient spectrum. We conducted our experiment very close to the diffraction limit and used low-quality equipment to prove the feasibility of our method. Detailed analysis and figure demonstrations are presented in the paper.
Shida, Satomi; Utoh, Masahiro; Murayama, Norie; Shimizu, Makiko; Uno, Yasuhiro; Yamazaki, Hiroshi
2015-01-01
1. Cynomolgus monkeys are widely used in preclinical studies as non-human primate species. Pharmacokinetics of human cytochrome P450 probes determined in cynomolgus monkeys after single oral or intravenous administrations were extrapolated to give human plasma concentrations. 2. Plasma concentrations of slowly eliminated caffeine and R-/S-warfarin and rapidly eliminated omeprazole and midazolam previously observed in cynomolgus monkeys were scaled to human oral biomonitoring equivalents using known species allometric scaling factors and in vitro metabolic clearance data with a simple physiologically based pharmacokinetic (PBPK) model. Results of the simplified human PBPK models were consistent with reported experimental PK data in humans or with values simulated by a fully constructed population-based simulator (Simcyp). 3. Oral administrations of metoprolol and dextromethorphan (human P450 2D probes) in monkeys reportedly yielded plasma concentrations similar to their quantitative detection limits. Consequently, ratios of in vitro hepatic intrinsic clearances of metoprolol and dextromethorphan determined in monkeys and humans were used with simplified PBPK models to extrapolate intravenous PK in monkeys to oral PK in humans. 4. These results suggest that cynomolgus monkeys, despite their rapid clearance of some human P450 substrates, could be a suitable model for humans, especially when used in conjunction with simple PBPK models.
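Interspecies scaling of clearance of the kind mentioned above is conventionally done with an allometric power law. The sketch below uses the textbook exponent 0.75 and generic body weights (5 kg monkey, 70 kg human); these are illustrative defaults, not values taken from this study, and the monkey clearance is hypothetical.

```python
def scale_clearance(cl_monkey_ml_min, bw_monkey_kg=5.0, bw_human_kg=70.0,
                    exponent=0.75):
    """Single-species allometric scaling of clearance, CL ~ BW**0.75.

    The 0.75 exponent and the body weights are conventional defaults;
    real applications calibrate them (and add in vitro clearance data,
    as this study does within a PBPK model).
    """
    return cl_monkey_ml_min * (bw_human_kg / bw_monkey_kg) ** exponent

# Hypothetical monkey clearance of 100 mL/min scaled to a 70 kg human.
cl_human = scale_clearance(100.0)
```

This captures only the allometric step of the workflow; the abstract's simplified PBPK model layers compartmental kinetics on top of such scaled parameters.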
Ekin, Jack W; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David
2016-01-01
A scaling study of several thousand Nb3Sn critical-current $(I_c)$ measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of I c on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, and offers significant savings in sample characterization time and a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb3Sn conductors. The study also identifies the scaling parameters that are conductor specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large I c(B,T,ε) datasets (each with nearly a thousand I c measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strains from –...
Directory of Open Access Journals (Sweden)
Jin Wang
2017-05-01
The reconstruction for limited-view scanning, though often the case in practice, has remained a difficult issue for photoacoustic imaging (PAI). The incompleteness of sampling data will cause serious artifacts and fuzziness in the missing views and will heavily affect the quality of the image. To solve the problem of limited-view PAI, a compensation method based on Gerchberg–Papoulis (GP) extrapolation is applied to PAI. Based on the known data, missing detector elements are estimated and the image in the missing views is then compensated using the Fast Fourier Transform (FFT). To accelerate the convergence speed of the algorithm, a total variation (TV)-based iterative algorithm is incorporated into the GP extrapolation-based, FFT-utilized compensation method (TV-GPEF). An effective variable-splitting and Barzilai–Borwein based method is adopted to solve the optimization problem. Simulations and in vitro experiments for both limited-angle circular scanning and straight-line scanning are conducted to validate the proposed algorithm. Results show that the proposed algorithm can greatly suppress the artifacts caused by the missing views and enhance the edges and the details of the image. It can be concluded that the proposed TV-GPEF algorithm is efficient for limited-view PAI.
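The core Gerchberg–Papoulis idea, alternating between re-imposing the measured samples and enforcing a known band limit, can be sketched in one dimension. This is a minimal illustration with an invented band-limited signal and a missing block of samples, not the paper's TV-accelerated 2-D photoacoustic reconstruction.

```python
import numpy as np

n, band = 256, 8
gap = slice(120, 136)                    # a block of samples is missing
rng = np.random.default_rng(1)

# Band-limited "truth": random spectrum on 0 < |k| <= band,
# with Hermitian symmetry so the signal is real.
k = np.arange(1, band + 1)
spec = np.zeros(n, complex)
vals = rng.standard_normal(band) + 1j * rng.standard_normal(band)
spec[k] = vals
spec[n - k] = np.conj(vals)
truth = np.fft.ifft(spec).real

known = np.ones(n, bool)
known[gap] = False

inband = np.zeros(n, bool)
inband[0] = True
inband[k] = True
inband[n - k] = True

x = np.zeros(n)
for _ in range(200):
    x[known] = truth[known]              # project onto the measured data
    X = np.fft.fft(x)
    X[~inband] = 0.0                     # project onto the band limit
    x = np.fft.ifft(X).real

err = np.linalg.norm(x - truth) / np.linalg.norm(truth)
```

Because the two constraint sets intersect only at the true signal, the alternating projections fill in the gap; convergence slows as the missing region grows, which is why the paper adds TV regularization.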
Spectral shifts and helium configurations in 4HeN-tetracene clusters
Energy Technology Data Exchange (ETDEWEB)
Whitley, H D; DuBois, J L; Whaley, K B
2009-05-20
Spectral shifts of electronic transitions of tetracene in helium droplets are investigated in a theoretical study of ⁴He_N-tetracene clusters with 1 ≤ N ≤ 150. Utilizing a pair-wise interaction for the S₀ state of tetracene with helium, extended by semi-empirical terms to construct a potential for the S₁ state, the spectral shift is calculated from path integral Monte Carlo calculations of the helium equilibrium properties with tetracene in the S₀ and S₁ states at T = 0 and at T = 0.625 K. The calculated spectral shifts are in quantitative agreement with available experimental measurements for small values of N (≤ 8) at T ≈ 0.4 K and show qualitative agreement for larger N (10-20). The extrapolated value of the spectral shift in large droplets (N ≈ 10⁴) is ≈ 90% of the experimentally measured value. We find no evidence of multiple configurations of helium for any cluster size, for either the S₀ or S₁ state of tetracene. These results suggest that the observed spectral splitting of electronic transitions of tetracene in large helium droplets is not due to co-existence of static meta-stable helium densities, unlike the situation previously analyzed for the phthalocyanine molecule.
Wang, Yu; Deng, Renren; Xie, Xiaoji; Huang, Ling; Liu, Xiaogang
2016-03-28
Optical tuning of lanthanide-doped upconversion nanoparticles has attracted considerable attention over the past decade because this development allows the advance of new frontiers in energy conversion, materials science, and biological imaging. Here we present a rational approach to manipulating the spectral profile and lifetime of lanthanide emission in upconversion nanoparticles by tailoring their nonlinear optical properties. We demonstrate that the incorporation of energy distributors, such as surface defects or an extra amount of dopants, into a rare-earth-based host lattice alters the decay behavior of excited sensitizers, thus markedly improving the emitters' sensitivity to excitation power. This work provides insight into mechanistic understanding of upconversion phenomena in nanoparticles and also enables exciting new opportunities of using these nanomaterials for photonic applications.
Broadband Advanced Spectral System Project
National Aeronautics and Space Administration — NovaSol proposes to develop an advanced hyperspectral imaging system for earth science missions named BRASS (Broadband Advanced Spectral System). BRASS combines...
Matched Spectral Filter Imager Project
National Aeronautics and Space Administration — OPTRA proposes the development of an imaging spectrometer for greenhouse gas and volcanic gas imaging based on matched spectral filtering and compressive imaging....
Spectral Methods for Numerical Relativity
Grandclément, Philippe
2007-01-01
Equations arising in General Relativity are usually too complicated to be solved analytically, and one has to rely on numerical methods to solve sets of coupled partial differential equations. Amongst the possible choices, this paper focuses on a class called spectral methods where, typically, the various functions are expanded onto sets of orthogonal polynomials or functions. A theoretical introduction to spectral expansion is first given, and particular emphasis is placed on the fast convergence of the spectral approximation. We then present different approaches to solving partial differential equations, first limiting ourselves to the one-dimensional case, with one or several domains. Generalization to more dimensions is then discussed. In particular, the case of time evolutions is carefully studied and the stability of such evolutions investigated. One then turns to results obtained by various groups in the field of General Relativity by means of spectral methods. First, works which do not involve explicit t...
Substitution dynamical systems spectral analysis
Queffélec, Martine
2010-01-01
This volume mainly deals with the dynamics of finitely valued sequences, and more specifically, of sequences generated by substitutions and automata. Those sequences demonstrate fairly simple combinatorial and arithmetical properties and naturally appear in various domains. As the title suggests, the aim of the initial version of this book was the spectral study of the associated dynamical systems: the first chapters consisted of a detailed introduction to the mathematical notions involved, and the description of the spectral invariants followed in the closing chapters. This approach, combined with new material added to the new edition, results in a nearly self-contained book on the subject. New tools - which have also proven helpful in other contexts - had to be developed for this study. Moreover, its findings can be concretely applied, the method providing an algorithm to exhibit the spectral measures and the spectral multiplicity, as is demonstrated in several examples. Beyond this advanced analysis, many...
Spectral Theory and Mirror Symmetry
Marino, Marcos
2015-01-01
Recent developments in string theory have revealed a surprising connection between spectral theory and local mirror symmetry: it has been found that the quantization of mirror curves to toric Calabi-Yau threefolds leads to trace class operators, whose spectral properties are conjecturally encoded in the enumerative geometry of the Calabi-Yau. This leads to a new, infinite family of solvable spectral problems: the Fredholm determinants of these operators can be found explicitly in terms of Gromov-Witten invariants and their refinements; their spectrum is encoded in exact quantization conditions, and turns out to be determined by the vanishing of a quantum theta function. Conversely, the spectral theory of these operators provides a non-perturbative definition of topological string theory on toric Calabi-Yau threefolds. In particular, their integral kernels lead to matrix integral representations of the topological string partition function, which explain some number-theoretic properties of the periods. In this...
Nanocatalytic resonance scattering spectral analysis
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
The resonance scattering spectral technique has been established using the synchronous scanning technique on spectrofluorometry. Because of its advantages of simplicity, rapidity and sensitivity, it has been widely applied to analyses of proteins, nucleic acids and inorganic ions. This paper summarizes the application of the immunonanogold and aptamer-modified nanogold (AptAu) catalytic resonance scattering spectral technique in combination with the work of our group, citing 53 references.
Spectral Conditions for Positive Maps
Chruściński, Dariusz; Kossakowski, Andrzej
2009-09-01
We provide partial classification of positive linear maps in matrix algebras which is based on a family of spectral conditions. This construction generalizes the celebrated Choi example of a map which is positive but not completely positive. It is shown how the spectral conditions enable one to construct linear maps on tensor products of matrix algebras which are positive but only on a convex subset of separable elements. Such maps provide basic tools to study quantum entanglement in multipartite systems.
Prym varieties of spectral covers
Hausel, Tamás
2010-01-01
Given a possibly reducible and non-reduced spectral cover X over a smooth projective complex curve C we determine the group of connected components of the Prym variety Prym(X/C). We also describe the sublocus of characteristics a for which the Prym variety Prym(X_a/C) is connected. These results extend special cases of work of Ngô, who considered integral spectral curves.
Fenyvesi, A
2015-01-01
The spectral yield of p+Be neutrons emitted by a thick (stopping) beryllium target bombarded by 16 MeV protons was estimated via extrapolation of literature data. The spectrum was validated via the multi-foil activation method and irradiation of 2N2222 transistors. The hardness parameter (NIEL scaling factor) for displacement damage in bulk silicon was calculated and measured, and κ = 1.26 ± 0.1 was obtained.
Ordinary Chondrite Spectral Signatures in the 243 Ida Asteroid System
Granahan, J. C.
2012-12-01
The NASA Galileo spacecraft observed asteroid 243 Ida and its satellite Dactyl on August 28, 1993, with the Near Infrared Mapping Spectrometer (NIMS) at wavelengths ranging from 0.7 to 5.2 micrometers [Carlson et al., 1994]. Work is being conducted to produce radiance-calibrated spectral images of 243 Ida consisting of 17-channel, 299 meters per pixel files and 102-channel, 3.2 kilometer per pixel NIMS observations of 243 Ida for the NASA Planetary Data System (PDS). These data are currently archived in PDS as uncalibrated data number counts. Radiometrically calibrated 17-channel and 102-channel NIMS spectral data files of Dactyl and light-curve observations of 243 Ida are also being prepared. Analysis of these infrared asteroid data has confirmed that both 243 Ida and Dactyl are S-type asteroids and found that their olivine and pyroxene mineral abundances are consistent with those of ordinary chondrite meteorites. Tholen [1989] identified 243 Ida and Chapman et al. [1995] identified Dactyl as S-type asteroids on the basis of spectral data ranging from 0.4 to 1.0 micrometers. S-type asteroids are described [Tholen, 1989] as asteroids with moderate albedos, a moderate to strong absorption feature shortward of 0.7 micrometers, and moderate to nonexistent absorption features longward of 0.7 micrometers. DeMeo et al. [2009] found 243 Ida to be a Sw asteroid based on Earth-based spectral observations 0.4 to 2.5 micrometers in range. Sw is a subclass of S-type asteroids that has a space weathering spectral component [DeMeo et al., 2009]. The NIMS data for 243 Ida and Dactyl processed in this study exhibit signatures consistent with the Sw designation of DeMeo et al. [2009]. Measurements of olivine and pyroxene spectral bands were also conducted for the NIMS radiance data of 243 Ida and Dactyl. Band depth and band center measurements have been used to compare S-type asteroids with meteorites [Dunn et al., 2010; Gaffey et al., 1993]. The 243 Ida spectra were found to be consistent
LNG pool fire spectral data and calculation of emissive power.
Raj, Phani K
2007-04-11
Spectral description of thermal emission from fires provides a fundamental basis on which fire thermal radiation hazard assessment models can be developed. Several field experiments were conducted during the 1970s and 1980s to measure the thermal radiation field surrounding LNG fires. Most of these tests involved the measurement of fire thermal radiation to objects outside the fire envelope using either narrow-angle or wide-angle radiometers. Extrapolating the wide-angle radiometer data without understanding the nature of fire emission is prone to errors. Spectral emissions from LNG fires have been recorded in four test series conducted with LNG fires on different substrates and of different diameters. These include the AGA test series of LNG fires on land of diameters 1.8 and 6 m, a 35 m diameter fire on an insulated concrete dike in the Montoir tests conducted by Gaz de France, a 1976 test with a 13 m diameter fire, and the 1980 tests with a 10 m diameter LNG fire on water carried out at China Lake, CA. The spectral data from the Montoir test series have not been published in technical journals; only recently have some data from this series become available. This paper presents the details of the LNG fire spectral data from, primarily, the China Lake test series, their analysis and results. Available data from other test series are also discussed. China Lake data indicate that the thermal radiation emission from the 13 m diameter LNG fire is made up of band emissions from water vapor (about 50% of the energy) and carbon dioxide (about 25%), with the remainder constituting continuum emission by luminous soot. The emissions from the H2O and CO2 bands are completely absorbed by the intervening atmosphere in less than about 200 m from the fire, even in the relatively dry desert air. The effective soot radiation constitutes only about 23% during the burning period of methane and increases slightly when other higher hydrocarbon species (ethane, propane, etc.) are
Gaspar, Leticia; López-Vicente, Manuel; Palazón, Leticia; Quijano, Laura; Navas, Ana
2015-04-01
Fallout radionuclides, particularly 137Cs, have been successfully used in soil erosion investigations over a range of different landscapes. This technique provides mean annual values of spatially distributed soil erosion and deposition rates for the last 40-50 years. However, upscaling the data provided by fallout radionuclides to catchment level is required to understand soil redistribution processes, to support catchment management strategies, and to assess the main soil erosion factors like vegetation cover or topography. In recent years, extrapolating field-scale soil erosion rates estimated from 137Cs data to catchment scale has been addressed using geostatistical interpolation and Geographical Information Systems (GIS). This study aims to assess soil redistribution in an agroforestry catchment characterized by abrupt topography and an intricate mosaic of land uses using 137Cs data and GIS. A new methodological approach using GIS is presented as an alternative to interpolation tools for extrapolating soil redistribution rates in complex landscapes. This approach divides the catchment into Homogeneous Physiographic Units (HPUs) based on unique land use, hydrological network and slope value. A total of 54 HPUs presenting specific land use, Strahler order and slope combinations were identified within the study area (2.5 km2) located in the north of Spain. Using 58 soil erosion and deposition rates estimated from 137Cs data, we were able to characterize the predominant redistribution processes in 16 HPUs, which represent 78% of the study area surface. Erosion processes predominated in 6 HPUs (23%), which correspond to cultivated units in which slope and Strahler order are moderate or high, and to scrubland units with high slope. Deposition was predominant in 3 HPUs (6%), mainly in riparian areas, and to a lesser extent in forest and scrubland units with low slope and low and moderate Strahler order. Redistribution processes, both erosion and
Basith, M. A.; Islam, M. A.; Ahmmad, Bashir; Sarowar Hossain, M. D.; Mølhave, K.
2017-07-01
A simple route to prepare Gd0.7Sr0.3MnO3 nanoparticles by ultrasonication of their bulk powder materials is presented in this article. For comparison, Gd0.7Sr0.3MnO3 nanoparticles are also prepared by ball milling. The prepared samples are characterized by x-ray diffraction (XRD), field emission scanning electron microscopy (FESEM), energy dispersive x-ray (EDX) analysis, x-ray photoelectron spectroscopy (XPS), and a superconducting quantum interference device (SQUID) magnetometer. XRD Rietveld analysis is carried out extensively for the determination of crystallographic parameters and the amount of crystalline and amorphous phases. FESEM images demonstrate the formation of nanoparticles with average particle size in the range of 50-100 nm for both ultrasonication and 4 hours (h) of ball milling. The bulk materials and nanoparticles synthesized by both ultrasonication and 4 h ball milling exhibit a paramagnetic to spin-glass transition. However, nanoparticles synthesized by 8 h and 12 h ball milling do not reveal any phase transition, but rather show an upturn of magnetization at low temperature. The degradation of the magnetic properties in ball-milled nanoparticles may be associated with amorphization of the nanoparticles due to ball milling, particularly for milling times exceeding 8 h. This investigation demonstrates the potential of ultrasonication as a simple route to prepare highly crystalline rare-earth-based manganite nanoparticles with improved control compared to the traditional ball milling technique.
Energy Technology Data Exchange (ETDEWEB)
Scott, B.R.; Muggenburg, B.A.; Welsh, C.A.; Angerstein, D.A.
1994-11-01
The alpha emitter plutonium-238 (²³⁸Pu), which is produced in uranium-fueled, light-water reactors, is used as a thermoelectric power source for space applications. Inhalation of a mixed oxide form of Pu is the most likely mode of exposure of workers and the general public. Occupational exposures to ²³⁸PuO₂ have occurred in association with the fabrication of radioisotope thermoelectric generators. Organs and tissue at risk for deterministic and stochastic effects of ²³⁸Pu-alpha irradiation include the lung, liver, skeleton, and lymphatic tissue. Little has been reported about the effects of inhaled ²³⁸PuO₂ on peripheral blood cell counts in humans. The purpose of this study was to investigate hematological responses after a single inhalation exposure of Beagle dogs to alpha-emitting ²³⁸PuO₂ particles and to extrapolate results to humans.
Ketcheson, David I.
2014-04-11
In practical computation with Runge–Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
Fernandes, Ryan I
2012-01-01
An alternating direction implicit (ADI) orthogonal spline collocation (OSC) method is described for the approximate solution of a class of nonlinear reaction-diffusion systems. Its efficacy is demonstrated on the solution of well-known examples of such systems, specifically the Brusselator, Gray-Scott, Gierer-Meinhardt and Schnakenberg models, and comparisons are made with other numerical techniques considered in the literature. The new ADI method is based on an extrapolated Crank-Nicolson OSC method and is algebraically linear. It is efficient, requiring at each time level only O(N) operations, where N is the number of unknowns. Moreover, it is shown to produce approximations which are of optimal global accuracy in various norms, and to possess superconvergence properties.
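The "extrapolated Crank-Nicolson" device that keeps each time step algebraically linear can be sketched on a scalar 1-D reaction-diffusion equation. The paper's method is an ADI orthogonal spline collocation scheme for systems; this finite-difference toy only shows the linearization trick: the nonlinear term is evaluated at the linearly extrapolated value 1.5u^n − 0.5u^(n−1), so each step is one linear solve with a constant matrix.

```python
import numpy as np

# u_t = d*u_xx + u - u^3 on [0, 1] with zero-flux (Neumann) ends.
m, d, dt, steps = 64, 1e-3, 0.01, 2000
h = 1.0 / (m - 1)

# 1-D Laplacian with homogeneous Neumann boundaries (ghost-point closure).
L = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2
L[0, 1] = 2.0 / h**2
L[-1, -2] = 2.0 / h**2

A = np.eye(m) - 0.5 * dt * d * L       # constant Crank-Nicolson matrix

def f(v):
    return v - v**3                    # nonlinear reaction term

x = np.linspace(0.0, 1.0, m)
u_prev = u = 0.5 + 0.1 * np.cos(np.pi * x)
for _ in range(steps):
    u_ext = 1.5 * u - 0.5 * u_prev     # extrapolate to the half step
    rhs = u + 0.5 * dt * d * (L @ u) + dt * f(u_ext)
    u_prev, u = u, np.linalg.solve(A, rhs)
```

Because the reaction term is evaluated at the extrapolated state, no Newton iteration is needed; positive initial data relax to the stable uniform state u ≡ 1.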
Rong, Lu; Wang, Dayong; Zhou, Xun; Huang, Haochong; Li, Zeyu; Wang, Yunxin
2014-01-01
We report here on terahertz (THz) digital holography on a biological specimen. A continuous-wave (CW) THz in-line holographic setup was built based on a 2.52 THz CO2-pumped THz laser and a pyroelectric array detector. We introduced a novel statistical method of obtaining true intensity values for the pyroelectric array detector's pixels. Absorption and phase-shifting images of a dragonfly's hind wing were reconstructed simultaneously from a single in-line hologram. Furthermore, we applied phase retrieval routines to eliminate the twin image and enhanced the resolution of the reconstructions by hologram extrapolation beyond the detector area. The finest observed features are 35 µm wide cross veins.
Wiegelmann, T; Inhester, B; Tadesse, T; Sun, X; Hoeksema, J T
2012-01-01
The SDO/HMI instruments provide photospheric vector magnetograms with a high spatial and temporal resolution. Our intention is to model the coronal magnetic field above active regions with the help of a nonlinear force-free extrapolation code. Our code is based on an optimization principle and has been tested extensively with semi-analytic and numeric equilibria and been applied before to vector magnetograms from Hinode and ground-based observations. Recently we implemented a new version which takes measurement errors in photospheric vector magnetograms into account. Owing to measurement errors and finite nonmagnetic forces, photospheric field measurements are often inconsistent as a boundary for a force-free field in the corona. In order to deal with these uncertainties, we developed two improvements: 1.) Preprocessing of the surface measurements in order to make them compatible with a force-free field 2.) The new code keeps a balance between the force-free constraint and deviation from the photospheric field m...
Energy Technology Data Exchange (ETDEWEB)
Schwahofer, Andrea [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiation Oncology; Clinical Center Vivantes, Neukoelln (Germany). Dept. of Radiotherapy and Oncology; Baer, Esther [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiation Oncology; Kuchenbecker, Stefan; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiology; Grossmann, J. Guenter [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiation Oncology; Ortenau Klinikum Offenburg-Gengenbach (Germany). Dept. of Radiooncology; Sterzing, Florian [Heidelberg Univ. (Germany). Dept. of Radiation Oncology; German Cancer Research Center, Heidelberg (Germany). Dept. of Radiotherapy
2015-07-01
Metal artifacts in computed tomography (CT) images are one of the main problems in radiation oncology, as they introduce uncertainties into target and organ-at-risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual-energy CT (DECT) dataset. In a phantom study, the CT artifacts caused by metals with different densities were investigated: aluminum (ρ_Al = 2.7 g/cm³), titanium (ρ_Ti = 4.5 g/cm³), steel (ρ_steel = 7.9 g/cm³) and tungsten (ρ_W = 19.3 g/cm³). Data were collected using a clinical dual-source dual-energy CT (DECT) scanner (Siemens Sector Healthcare, Forchheim, Germany) with tube voltages of 100 kV and 140 kV (Sn). For each tube voltage, the dataset in a given volume was reconstructed. Based on these two datasets, a voxel-by-voxel linear combination was performed to obtain the monoenergetic datasets. The results were evaluated with regard to the optical properties of the images, the CT values (HU), and the dosimetric consequences in computed treatment plans. A dataset without a metal substitute served as the reference. In addition, a head-and-neck patient with dental fillings (amalgam, ρ = 10 g/cm³) was scanned with a single-energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the density, the more pronounced the artifacts. For metals with higher densities, such as steel or tungsten, no artifact reduction was achieved. Likewise, no improvement in the CT values from the monoenergetic extrapolation could be detected. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 ke
Furillo, F. T.; Purushothaman, S.; Tien, J. K.
1977-01-01
The Larson-Miller (L-M) method of extrapolating stress rupture and creep results is based on the contention that the absolute temperature-compensated time function should have a unique value for a given material. This value should depend only on the applied stress level. The L-M method has been found satisfactory in the case of many steels and superalloys. The derivation of the L-M relation is discussed, taking into account a power law creep relationship considered by Dorn (1965) and Barrett et al. (1964), a correlation expression reported by Garofalo et al. (1961), and relations concerning the constant C. Attention is given to a verification of the validity of the considered derivation with the aid of suitable materials.
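The temperature-compensated time function described above can be sketched numerically. This is a minimal illustration, not code from the paper; the base-10 form LMP = T(C + log10 t_r) and the default C = 20 are conventional choices for steels, and the abstract notes that C is in fact material-dependent:

```python
import numpy as np

def larson_miller(T_kelvin, t_rupture_hours, C=20.0):
    """Larson-Miller parameter LMP = T * (C + log10(t_r)).

    C ~ 20 is a conventional value for many steels (an assumption here);
    the parameter should depend only on the applied stress level.
    """
    return T_kelvin * (C + np.log10(t_rupture_hours))

def extrapolate_rupture_life(lmp, T_kelvin, C=20.0):
    """Invert the LMP at a new temperature to predict rupture life (hours),
    assuming the same applied stress (and hence the same LMP)."""
    return 10.0 ** (lmp / T_kelvin - C)

# A rupture test at 900 K lasting 1000 h at some stress level...
lmp = larson_miller(900.0, 1000.0)              # 900 * (20 + 3) = 20700
# ...extrapolated to the expected life at 850 K, same stress:
life_850 = extrapolate_rupture_life(lmp, 850.0)
```

This is the sense in which the L-M relation "extrapolates" stress rupture results: a short test at high temperature fixes the LMP for that stress, and inverting at the service temperature yields the (much longer) predicted life.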
Schwahofer, Andrea; Bär, Esther; Kuchenbecker, Stefan; Grossmann, J Günter; Kachelrieß, Marc; Sterzing, Florian
2015-12-01
Metal artifacts in computed tomography (CT) images are one of the main problems in radiation oncology, as they introduce uncertainties into target and organ-at-risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual-energy CT (DECT) dataset. In a phantom study, the CT artifacts caused by metals with different densities were investigated: aluminum (ρ_Al = 2.7 g/cm³), titanium (ρ_Ti = 4.5 g/cm³), steel (ρ_steel = 7.9 g/cm³) and tungsten (ρ_W = 19.3 g/cm³). Data were collected using a clinical dual-source dual-energy CT (DECT) scanner (Siemens Sector Healthcare, Forchheim, Germany) with tube voltages of 100 kV and 140 kV (Sn). For each tube voltage, the dataset in a given volume was reconstructed. Based on these two datasets, a voxel-by-voxel linear combination was performed to obtain the monoenergetic datasets. The results were evaluated with regard to the optical properties of the images, the CT values (HU), and the dosimetric consequences in computed treatment plans. A dataset without a metal substitute served as the reference. In addition, a head-and-neck patient with dental fillings (amalgam, ρ = 10 g/cm³) was scanned with a single-energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the density, the more pronounced the artifacts. For metals with higher densities, such as steel or tungsten, no artifact reduction was achieved. Likewise, no improvement in the CT values from the monoenergetic extrapolation could be detected. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 keV. However, the dose uncertainty remains of the order of 10
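The voxel-by-voxel linear combination of the two tube-voltage reconstructions can be sketched as below. This is a minimal illustration, not the vendor's algorithm: the weighting factor `alpha` and its calibration against a target monoenergy (e.g. 105 keV) are assumptions, since the abstract does not give the scanner's formula.

```python
import numpy as np

def monoenergetic_image(img_low_kv, img_high_kv, alpha):
    """Form a pseudo-monoenergetic image as a voxel-wise linear
    combination of the low-kV (100 kV) and high-kV (140 kV Sn)
    reconstructions:  mono = alpha * low + (1 - alpha) * high.

    alpha is energy-dependent; mapping alpha to a monoenergy such as
    105 keV requires a calibration that is assumed, not shown, here.
    """
    low = np.asarray(img_low_kv, dtype=float)
    high = np.asarray(img_high_kv, dtype=float)
    return alpha * low + (1.0 - alpha) * high
```

For example, `monoenergetic_image(low_slice, high_slice, 0.3)` blends 30% of the 100 kV image with 70% of the 140 kV (Sn) image, voxel by voxel.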
Wang, Zhen; Leung, Kenneth M Y
2015-10-01
Unionised ammonia (NH3) is highly toxic to freshwater organisms. Yet most of the available toxicity data on NH3 were generated in temperate regions, while toxicity data derived from tropical species are limited. To address this issue, we first conducted standard acute toxicity tests on NH3 using ten tropical freshwater species. Subsequently, we constructed a tropical species sensitivity distribution (SSD) using these newly generated toxicity data and the available tropical toxicity data on NH3, which was then compared with the corresponding temperate SSD constructed from documented temperate acute toxicity data. Our results showed that tropical species were generally more sensitive to NH3 than their temperate counterparts. Based on the ratio between the temperate and tropical hazardous concentration 10% values, we recommend an extrapolation factor of four to be applied when surrogate temperate toxicity data or temperate water quality guidelines for NH3 are used to protect tropical freshwater ecosystems.
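The SSD-based hazardous concentration underlying the recommended factor can be sketched as follows. This is an illustration under stated assumptions: a log-normal SSD is a common parametric choice but is not named in the abstract, and the LC50 values below are invented, not the study's data.

```python
import numpy as np
from statistics import NormalDist

def hc10(lc50_values):
    """HC10: the concentration hazardous to 10% of species, i.e. the
    10th percentile of a species sensitivity distribution (SSD).

    A log-normal SSD fitted by moments in log10 space is assumed here;
    the abstract does not state which distribution the authors used.
    """
    logs = np.log10(np.asarray(lc50_values, dtype=float))
    mu, sigma = logs.mean(), logs.std(ddof=1)
    return 10.0 ** NormalDist(mu, sigma).inv_cdf(0.10)

# Hypothetical temperate acute LC50s (mg/L); applying the abstract's
# recommended extrapolation factor of four to protect tropical systems:
temperate_hc10 = hc10([1.2, 3.4, 0.8, 2.2, 5.1, 1.9])
tropical_estimate = temperate_hc10 / 4.0
```

The factor of four plays the role of the temperate-to-tropical HC10 ratio reported in the study: dividing a temperate-derived threshold by it gives a more protective value for tropical ecosystems.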
Stadnicka-Michalak, Julita; Tanneberger, Katrin; Schirmer, Kristin; Ashauer, Roman
2014-01-01
Effect concentrations in the toxicity assessment of chemicals with fish and fish cells are generally based on external exposure concentrations. External concentrations as dose metrics may, however, hamper interpretation and extrapolation of toxicological effects, because it is the internal concentration that gives rise to the biologically effective dose. Thus, we need to understand the relationship between the external and internal concentrations of chemicals. The objectives of this study were to: (i) elucidate the time-course of the concentration of chemicals with a wide range of physicochemical properties in the compartments of an in vitro test system, (ii) derive a predictive model for toxicokinetics in the in vitro test system, (iii) test the hypothesis that internal effect concentrations in fish (in vivo) and fish cell lines (in vitro) correlate, and (iv) develop a quantitative in vitro to in vivo toxicity extrapolation method for fish acute toxicity. To achieve these goals, time-dependent amounts of organic chemicals were measured in medium, cells (RTgill-W1) and the plastic of exposure wells. Then, the relation between uptake, elimination rate constants, and log KOW was investigated for cells in order to develop a toxicokinetic model. This model was used to predict internal effect concentrations in cells, which were compared with internal effect concentrations in fish gills predicted by a physiologically based toxicokinetic model. Our model could predict concentrations of non-volatile organic chemicals with log KOW between 0.5 and 7 in cells. The correlation of the log ratio of internal effect concentrations in fish gills and the fish gill cell line with the log KOW was significant (r > 0.85, p = 0.0008, F-test). This ratio can be predicted from the log KOW of the chemical (77% of variance explained), comprising a promising model to predict lethal effects on fish based on in vitro data.
Spectral Estimation of NMR Relaxation
Naugler, David G.; Cushley, Robert J.
2000-08-01
In this paper, spectral estimation of NMR relaxation is constructed as an extension of Fourier Transform (FT) theory as it is practiced in NMR or MRI, where multidimensional FT theory is used. nD NMR strives to separate overlapping resonances, so the treatment given here deals primarily with monoexponential decay. In the domain of real error, it is shown how optimal estimation based on prior knowledge can be derived. Assuming small Gaussian error, the estimation variance and bias are derived. Minimum bias and minimum variance are shown to be contradictory experimental design objectives. The analytical continuation of spectral estimation is constructed in an optimal manner. An important property of spectral estimation is that it is phase invariant. Hence, hypercomplex data storage is unnecessary. It is shown that, under reasonable assumptions, spectral estimation is unbiased in the context of complex error and its variance is reduced because the modulus of the whole signal is used. Because of phase invariance, the labor of phasing and any error due to imperfect phase can be avoided. A comparison of spectral estimation with nonlinear least squares (NLS) estimation is made analytically and with numerical examples. Compared to conventional sampling for NLS estimation, spectral estimation would typically provide estimation values of comparable precision in one-quarter to one-tenth of the spectrometer time when S/N is high. When S/N is low, the time saved can be used for signal averaging at the sampled points to give better precision. NLS typically provides one estimate at a time, whereas spectral estimation is inherently parallel. The frequency dimensions of conventional nD FT NMR may be denoted D1, D2, etc. As an extension of nD FT NMR, one can view spectral estimation of NMR relaxation as an extension into the zeroth dimension. In nD NMR, the information content of a spectrum can be extracted as a set of n-tuples (ω1, … ωn), corresponding to the peak maxima
Speech recognition from spectral dynamics
Indian Academy of Sciences (India)
Hynek Hermansky
2011-10-01
Information is carried in changes of a signal. The paper starts with revisiting Dudley's concept of the carrier nature of speech. It points to its close connection to modulation spectra of speech and argues against short-term spectral envelopes as dominant carriers of the linguistic information in speech. The history of spectral representations of speech is briefly discussed. Some of the history of gradual infusion of the modulation spectrum concept into automatic recognition of speech (ASR) comes next, pointing to the relationship of modulation spectrum processing to well-accepted ASR techniques such as dynamic speech features or RelAtive SpecTrAl (RASTA) filtering. Next, the frequency domain perceptual linear prediction technique for deriving autoregressive models of temporal trajectories of spectral power in individual frequency bands is reviewed. Finally, posterior-based features, which allow for straightforward application of modulation frequency domain information, are described. The paper is tutorial in nature, aims at a historical global overview of attempts for using spectral dynamics in machine recognition of speech, and does not always provide enough detail of the described techniques. However, extensive references to earlier work are provided to compensate for the lack of detail in the paper.
New approach to spectral features modeling
Brug, H. van; Scalia, P.S.
2012-01-01
The origin of spectral features, speckle effects, is explained, followed by a discussion of many aspects of spectral feature generation. The next part gives an overview of means to limit the amplitude of the spectral features. This paper discusses all means to reduce the spectral features.
Spectral element simulation of ultrafiltration
DEFF Research Database (Denmark)
Hansen, M.; Barker, Vincent A.; Hassager, Ole
1998-01-01
A spectral element method for simulating stationary 2-D ultrafiltration is presented. The mathematical model comprises the Navier-Stokes equations for the velocity field of the fluid and a transport equation for the concentration of the solute. In addition to the presence of the velocity vector in the transport equation, the system is coupled by the dependency of the fluid viscosity on the solute concentration and by a concentration-dependent boundary condition for the Navier-Stokes equations at the membrane surface. The spectral element discretization yields a nonlinear algebraic system. The performance of the spectral element code when applied to several ultrafiltration problems is reported. (C) 1998 Elsevier Science Ltd. All rights reserved.
Spectral Tensor-Train Decomposition
DEFF Research Database (Denmark)
Bigoni, Daniele; Engsig-Karup, Allan Peter; Marzouk, Youssef M.
2016-01-01
The accurate approximation of high-dimensional functions is an essential task in uncertainty quantification and many other fields. We propose a new function approximation scheme based on a spectral extension of the tensor-train (TT) decomposition. We first define a functional version of the TT decomposition ... (i.e., the "cores") comprising the functional TT decomposition. This result motivates an approximation scheme employing polynomial approximations of the cores. For functions with appropriate regularity, the resulting spectral tensor-train decomposition combines the favorable dimension-scaling of the TT decomposition with the spectral convergence rate of polynomial approximations, yielding efficient and accurate surrogates for high-dimensional functions. To construct these decompositions, we use the sampling algorithm TT-DMRG-cross to obtain the TT decomposition of tensors resulting from suitable...
Optical Spectral Variability of Blazars
Indian Academy of Sciences (India)
Haritma Gaur
2014-09-01
It is well established that blazars show flux variations across the complete electromagnetic (EM) spectrum on all possible time scales, ranging from a few tens of minutes to several years. Here, we review the optical flux and spectral variability properties of different classes of blazars on IDV and STV time-scales. Our analysis shows that HSPs are less variable in optical bands than LSPs. We also investigated the spectral slope variability and found that the average spectral slopes of LSPs agree well with the synchrotron self-Compton loss-dominated model. However, spectra of the HSPs and FSRQs have significant additional emission components. In general, spectra of BL Lacs become flatter when they brighten, while for FSRQs the opposite trend appears to hold.
Spectral analysis by correlation
Energy Technology Data Exchange (ETDEWEB)
Fauque, J.M.; Berthier, D.; Max, J.; Bonnet, G. [Commissariat a l' Energie Atomique, Grenoble (France). Centre d' Etudes Nucleaires
1969-07-01
The spectral density of a signal, which represents its power distribution along the frequency axis, is a function of great importance, finding many uses in all fields concerned with the processing of the signal (process identification, vibrational analysis, etc.). Amongst all the possible methods for calculating this function, the correlation method (correlation function calculation + Fourier transformation) is the most promising, mainly because of its simplicity and of the results it yields. The study carried out here will lead to the construction of an apparatus which, coupled with a correlator, will constitute a set of equipment for spectral analysis in real time covering the frequency range 0 to 5 MHz. (author)
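The correlation method the abstract describes (autocorrelation followed by a Fourier transformation, i.e. the Wiener-Khinchin route to the spectral density) can be sketched digitally as below. This is an illustrative offline sketch, not the real-time analog apparatus the authors built; the FFT-based autocorrelation shortcut is a modern convenience.

```python
import numpy as np

def psd_via_correlation(x, fs):
    """Estimate the power spectral density by the correlation method:
    compute the autocorrelation function, then Fourier-transform it
    (Wiener-Khinchin theorem).

    Returns (frequencies in Hz, PSD estimate)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimate, computed via FFT with
    # zero-padding to avoid circular wrap-around.
    X = np.fft.rfft(x, 2 * n)
    acf = np.fft.irfft(X * np.conj(X))[:n] / n
    # Fourier transform of the autocorrelation gives the PSD.
    psd = np.abs(np.fft.rfft(acf)) / fs
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    return freqs, psd
```

For a 10 Hz sine sampled at 100 Hz, the PSD estimate peaks at 10 Hz, matching the direct-periodogram route; the correlation detour is what made the method attractive for analog real-time hardware.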
Multi-spectral camera development
CSIR Research Space (South Africa)
Holloway, M
2012-10-01
Multi-Spectral Camera Development, 4th Biennial Conference, presented by Mark Holloway, 10 October 2012. [Slide content: fused image from red, green, blue and near-infrared (IR) bands; applications of the multi-spectral camera, CSIR 2012.]
Stingray: Spectral-timing software
Huppenkothen, Daniela; Bachetti, Matteo; Stevens, Abigail L.; Migliari, Simone; Balm, Paul
2016-08-01
Stingray is a spectral-timing software package for astrophysical X-ray (and more) data. The package merges existing efforts for a (spectral-)timing package in Python and is composed of a library of time series methods (including power spectra, cross spectra, covariance spectra, and lags); scripts to load FITS data files from different missions; a simulator of light curves and event lists that includes different kinds of variability and more complicated phenomena based on the impulse response of given physical events (e.g. reverberation); and a GUI to ease the learning curve for new users.
Spectral Analysis of Nonstationary Spacecraft Vibration Data
1965-11-01
the instantaneous power spectral density function for the process y(t). This spectral function can take on negative values in certain cases ... the power spectral density function is not directly measurable in the frequency domain. An experimental estimate of the function can be obtained only by ... called the generalized power spectral density function for the process y(t). This spectral description of nonstationary data is of great value for
Institute of Scientific and Technical Information of China (English)
WU Guofeng; Jan de Leeuw; Andrew K. Skidmore; LIU Yaolin; Herbert H. T. Prins
2010-01-01
Measurements of photosynthetically active radiation (PAR), which are indispensable for simulating plant growth and productivity, are generally very scarce. This study aimed to compare two extrapolation methods and one interpolation method for estimating daily PAR reaching the earth surface within the Poyang Lake national nature reserve, China. The daily global solar radiation records at Nanchang meteorological station and daily sunshine duration measurements at nine meteorological stations around Poyang Lake were obtained to achieve the objective. Two extrapolation methods for PAR, using recorded and estimated global solar radiation at Nanchang station and at three stations (Yongxiu, Xingzi and Duchang) near the nature reserve, were carried out respectively, and a spatial interpolation method combining a triangulated irregular network (TIN) and inverse distance weighting (IDW) was implemented to estimate daily PAR. The performance evaluation of the three methods using the PAR measured at Dahuchi Conservation Station (number of measurement days = 105) revealed that: (1) the spatial interpolation method achieved the best PAR estimation (R² = 0.89, s.e. = 0.99, F = 830.02, P < 0.001); (2) the extrapolation method from Nanchang station obtained an unbiased result (R² = 0.88, s.e. = 0.99, F = 745.29, P < 0.001); however, (3) the extrapolation methods from Yongxiu, Xingzi and Duchang stations were not suitable for this specific site because of their biased estimations. Considering the assumptions and principles supporting the extrapolation and interpolation methods, the authors conclude that the spatial interpolation method produces more reliable results than the extrapolation methods and holds the greatest potential of all tested methods, and that more PAR measurements should be recorded to evaluate the seasonal, yearly and spatial stabilities of these models for their application to the whole nature reserve of Poyang Lake.
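The IDW component of the interpolation scheme above can be sketched as follows. This is a minimal illustration of inverse distance weighting only; the TIN step of the combined TIN+IDW method, and the station coordinates, are not reproduced here.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: each query point gets a weighted
    average of station values, with weights 1/d^power.

    xy_known: (n, 2) station coordinates; values: (n,) measurements;
    xy_query: (m, 2) points to estimate. The power of 2 is the common
    default, an assumption here."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = []
    for q in np.atleast_2d(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d < 1e-12):
            # Query coincides with a station: return its value exactly.
            out.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * values) / np.sum(w))
    return np.array(out)
```

A point midway between two stations receives the mean of their values, and estimates always stay within the range of the surrounding station measurements, which is one reason interpolation behaved more reliably here than extrapolation.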
Rayleigh imaging in spectral mammography
Berggren, Karl; Danielsson, Mats; Fredenberg, Erik
2016-03-01
Spectral imaging is the acquisition of multiple images of an object at different energy spectra. In mammography, dual-energy imaging (spectral imaging with two energy levels) has been investigated for several applications, in particular material decomposition, which allows for quantitative analysis of breast composition and quantitative contrast-enhanced imaging. Material decomposition with dual-energy imaging is based on the assumption that there are two dominant photon interaction effects that determine linear attenuation: the photoelectric effect and Compton scattering. This assumption limits the number of basis materials, i.e. the number of materials that are possible to differentiate between, to two. However, Rayleigh scattering may account for more than 10% of the linear attenuation in the mammography energy range. In this work, we show that a modified version of a scanning multi-slit spectral photon-counting mammography system is able to acquire three images at different spectra and can be used for triple-energy imaging. We further show that triple-energy imaging in combination with the efficient scatter rejection of the system enables measurement of Rayleigh scattering, which adds an additional energy dependency to the linear attenuation and enables material decomposition with three basis materials. Three available basis materials have the potential to improve virtually all applications of spectral imaging.
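The three-basis-material decomposition enabled by triple-energy imaging can be sketched as a linear solve: with three spectral measurements, the log-attenuations form three equations in three unknown basis-material thicknesses. This is a simplified linear model for illustration; the coefficient values below are assumptions, not data from the system described.

```python
import numpy as np

def decompose_three_materials(A, log_attenuations):
    """Solve A @ t = m for basis-material thicknesses t.

    A[i, j]: effective attenuation coefficient of basis material j
    under spectrum i (three spectra, three materials); m: measured
    log-attenuations for one detector element. Real spectra are
    polychromatic, so this linearization is a simplification."""
    A = np.asarray(A, dtype=float)
    m = np.asarray(log_attenuations, dtype=float)
    return np.linalg.solve(A, m)
```

With only two spectra the system is underdetermined for three materials, which is exactly why the photoelectric/Compton assumption limits dual-energy imaging to two basis materials; the Rayleigh term supplies the third independent energy dependence.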
Spectral Methods for Numerical Relativity
Directory of Open Access Journals (Sweden)
Grandclément Philippe
2009-01-01
Equations arising in general relativity are usually too complicated to be solved analytically and one must rely on numerical methods to solve sets of coupled partial differential equations. Among the possible choices, this paper focuses on a class called spectral methods in which, typically, the various functions are expanded in sets of orthogonal polynomials or functions. First, a theoretical introduction of spectral expansion is given with a particular emphasis on the fast convergence of the spectral approximation. We then present different approaches to solving partial differential equations, first limiting ourselves to the one-dimensional case, with one or more domains. Generalization to more dimensions is then discussed. In particular, the case of time evolutions is carefully studied and the stability of such evolutions investigated. We then present results obtained by various groups in the field of general relativity by means of spectral methods. Work, which does not involve explicit time-evolutions, is discussed, going from rapidly-rotating strange stars to the computation of black-hole–binary initial data. Finally, the evolution of various systems of astrophysical interest are presented, from supernovae core collapse to black-hole–binary mergers.
Polynomial J-spectral factorization
Kwakernaak, Huibert; Sebek, Michael
1994-01-01
Several algorithms are presented for the J-spectral factorization of a para-Hermitian polynomial matrix. The four algorithms that are discussed are based on diagonalization, successive factor extraction, interpolation, and the solution of an algebraic Riccati equation, respectively. The paper includ
Asymptotics of thermal spectral functions
Caron-Huot, S
2009-01-01
We use operator product expansion (OPE) techniques to study the spectral functions of currents at finite temperature, in the high-energy time-like region $\\omega\\gg T$. The leading corrections to the spectral function of currents and stress tensors are proportional to $\\sim T^4$ expectation values in general, and the leading corrections $\\sim g^2T^4$ are calculated at weak coupling, up to one undetermined coefficient in the shear viscosity channel. Spectral functions in the asymptotic regime are shown to be infrared safe up to order $g^8T^4$. The convergence of sum rules in the shear and bulk viscosity channels is established in QCD to all orders in perturbation theory, though numerically significant tails $\\sim T^4/(\\log\\omega)^3$ are shown to exist in the bulk viscosity channel and to have an impact on sum rules recently proposed by Kharzeev and Tuchin. We argue that the spectral functions of currents and stress tensors in strongly coupled $\\mathcal{N}=4$ super Yang-Mills do not receive any medium-dependent...
Spectral representation of Gaussian semimartingales
DEFF Research Database (Denmark)
Basse-O'Connor, Andreas
2009-01-01
The aim of the present paper is to characterize the spectral representation of Gaussian semimartingales. That is, we provide necessary and sufficient conditions on the kernel K for X t =∫ K t (s) dN s to be a semimartingale. Here, N denotes an independently scattered Gaussian random measure...
Spectral problems for operator matrices
Bátkai, A.; Binding, P.; Dijksma, A.; Hryniv, R.; Langer, H.
2005-01-01
We study spectral properties of 2 × 2 block operator matrices whose entries are unbounded operators between Banach spaces and with domains consisting of vectors satisfying certain relations between their components. We investigate closability in the product space, essential spectra and generation of
Goldhirsh, J.
1982-01-01
The first absolute rain fade distribution method described establishes absolute fade statistics at a given site by means of a sampled radar data base. The second method extrapolates absolute fade statistics from one location to another, given simultaneously measured fade and rain rate statistics at the former. Both methods employ similar conditional fade statistic concepts and long term rain rate distributions. Probability deviations in the 2-19% range, with an 11% average, were obtained upon comparison of measured and predicted levels at given attenuations. The extrapolation of fade distributions to other locations at 28 GHz showed very good agreement with measured data at three sites located in the continental temperate region.
Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene; Bordini, Bernardo; Richter, David
2016-12-01
A scaling study of several thousand Nb3Sn critical-current (I_c) measurements is used to derive the Extrapolative Scaling Expression (ESE), a relation that can quickly and accurately extrapolate limited datasets to obtain full three-dimensional dependences of I_c on magnetic field (B), temperature (T), and mechanical strain (ε). The relation has the advantage of being easy to implement, offers significant savings in sample characterization time, and provides a useful tool for magnet design. Thorough data-based analysis of the general parameterization of the Unified Scaling Law (USL) shows the existence of three universal scaling constants for practical Nb3Sn conductors. The study also identifies the scaling parameters that are conductor-specific and need to be fitted to each conductor. This investigation includes two new, rare, and very large I_c(B,T,ε) datasets (each with nearly a thousand I_c measurements spanning magnetic fields from 1 to 16 T, temperatures from ~2.26 to 14 K, and intrinsic strains from -1.1% to +0.3%). The results are summarized in terms of the general USL parameters given in table 3 of Part 1 (Ekin J W 2010 Supercond. Sci. Technol. 23 083001) of this series of articles. The scaling constants determined for practical Nb3Sn conductors are: the upper-critical-field temperature parameter v = 1.50 ± 0.04, the cross-link parameter w = 3.0 ± 0.3, and the strain curvature parameter u = 1.7 ± 0.1 (from equation (29) for b_c2(ε) in Part 1). These constants and the required fitting parameters result in the ESE relation, given by

I_c(B,T,ε) B = C [b_c2(ε)]^s (1 − t^1.5)^(η−μ) (1 − t^2)^μ b^p (1 − b)^q,

with reduced magnetic field b ≡ B/B_c2*(T,ε) and reduced temperature t ≡ T/T_c*(ε), where

B_c2*(T,ε) = B_c2*(0,0) (1 − t^1.5) b_c2(ε),
T_c*(ε) = T_c*(0) [b_c2(ε)]^(1/3),

and fitting parameters: C, B_c2*(0,0), T_c*(0), s, either η or μ (but not both), plus the parameters in the strain function b_c2(ε).
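The ESE relation quoted above can be evaluated directly once the fitted parameters are known. The sketch below mirrors the printed formulas; the strain function b_c2(ε) is passed in as a precomputed value (its functional form lives in Part 1 of the series), and the numerical parameter values in the usage example are illustrative placeholders, not fits from the paper.

```python
def ic_ese(B, T, bc2_eps, C, Bc2_00, Tc0, s, p, q, eta, mu):
    """Evaluate the Extrapolative Scaling Expression (ESE):

        Ic(B,T,eps) * B = C * bc2(eps)^s * (1 - t^1.5)^(eta - mu)
                            * (1 - t^2)^mu * b^p * (1 - b)^q

    with t = T / Tc*(eps),  b = B / Bc2*(T,eps),
         Tc*(eps)    = Tc*(0) * bc2(eps)^(1/3),
         Bc2*(T,eps) = Bc2*(0,0) * (1 - t^1.5) * bc2(eps).

    bc2_eps is the value of the strain function b_c2(eps); C, Bc2_00,
    Tc0, s, p, q and eta/mu are conductor-specific fitting parameters.
    Valid for 0 < t < 1 and 0 < b < 1."""
    Tc = Tc0 * bc2_eps ** (1.0 / 3.0)
    t = T / Tc
    Bc2 = Bc2_00 * (1.0 - t ** 1.5) * bc2_eps
    b = B / Bc2
    ic_times_B = (C * bc2_eps ** s * (1.0 - t ** 1.5) ** (eta - mu)
                  * (1.0 - t ** 2) ** mu * b ** p * (1.0 - b) ** q)
    return ic_times_B / B

# Illustrative (hypothetical) parameters at zero intrinsic strain:
ic = ic_ese(B=5.0, T=4.2, bc2_eps=1.0, C=1.0e4, Bc2_00=30.0, Tc0=18.0,
            s=1.0, p=0.5, q=2.0, eta=2.5, mu=2.0)
```

The pinning-force shape b^p (1 − b)^q guarantees I_c vanishes at b = 1, and the (1 − t^1.5) factors carry the temperature scaling, which is what lets a limited dataset be extrapolated across the full (B, T, ε) space.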
Directory of Open Access Journals (Sweden)
Luigi Margiotta-Casaluci
Fish are an important model for the pharmacological and toxicological characterization of human pharmaceuticals in drug discovery, drug safety assessment and environmental toxicology. However, do fish respond to pharmaceuticals as humans do? To address this question, we provide a novel quantitative cross-species extrapolation approach (qCSE) based on the hypothesis that similar plasma concentrations of pharmaceuticals cause comparable target-mediated effects in both humans and fish at similar levels of biological organization (Read-Across Hypothesis). To validate this hypothesis, the behavioural effects of the anti-depressant drug fluoxetine on the fish model fathead minnow (Pimephales promelas) were used as a test case. Fish were exposed for 28 days to a range of measured water concentrations of fluoxetine (0.1, 1.0, 8.0, 16, 32, 64 µg/L) to produce plasma concentrations below, equal to and above the range of human therapeutic plasma concentrations (HTPCs). Fluoxetine and its metabolite, norfluoxetine, were quantified in the plasma of individual fish and linked to behavioural anxiety-related endpoints. The minimum drug plasma concentrations that elicited anxiolytic responses in fish were above the upper value of the HTPC range, whereas no effects were observed at plasma concentrations below the HTPCs. In vivo metabolism of fluoxetine in humans and fish was similar, and displayed bi-phasic concentration-dependent kinetics driven by the auto-inhibitory dynamics and saturation of the enzymes that convert fluoxetine into norfluoxetine. The sensitivity of fish to fluoxetine was not so dissimilar from that of patients affected by general anxiety disorders. These results represent the first direct evidence of a measured internal dose-response effect of a pharmaceutical in fish, hence validating the Read-Across Hypothesis applied to fluoxetine. Overall, this study demonstrates that the qCSE approach, anchored to internal drug concentrations, is a powerful tool
J-85 jet engine noise measured in the ONERA S1 wind tunnel and extrapolated to far field
Soderman, Paul T.; Julienne, Alain; Atencio, Adolph, Jr.
1991-01-01
Noise from a J-85 turbojet with a conical, convergent nozzle was measured in simulated flight in the ONERA S1 Wind Tunnel. Data are presented for several flight speeds up to 130 m/sec and for radiation angles of 40 to 160 degrees relative to the upstream direction. The jet was operated with subsonic and sonic exhaust speeds. A moving microphone on a 2 m sideline was used to survey the radiated sound field in the acoustically treated, closed test section. The data were extrapolated to a 122 m sideline by means of a multiple-sideline source-location method, which was used to identify the acoustic source regions, directivity patterns, and near-field effects. The source-location method is described along with its advantages and disadvantages. Results indicate that the effects of simulated flight on J-85 noise are significant. At the maximum forward speed of 130 m/sec, the peak overall sound levels in the aft quadrant were attenuated approximately 10 dB relative to sound levels of the engine operated statically. As expected, the simulated flight and static data tended to merge in the forward quadrant as the radiation angle approached 40 degrees. There is evidence that internal engine or shock noise was important in the forward quadrant. The data are compared with published predictions for flight effects on pure jet noise and internal engine noise. A new empirical prediction is presented that relates the variation of internally generated engine noise or broadband shock noise to forward speed. Measured near-field noise extrapolated to far field agrees reasonably well with data from similar engines tested statically outdoors, in flyover, in a wind tunnel, and on the Bertin Aerotrain. Anomalies in the results for the forward quadrant and for angles above 140 degrees are discussed. The multiple-sideline method proved to be cumbersome in this application, and it did not resolve all of the uncertainties associated with measurements of jet noise close to the jet. The
Energy Technology Data Exchange (ETDEWEB)
Manwaring, John, E-mail: manwaring.jd@pg.com [Procter & Gamble Inc., Mason Business Center, Mason, OH 45040 (United States); Rothe, Helga [Procter & Gamble Service GmbH, Sulzbacher Str. 40, 65823 Schwalbach am Taunus (Germany); Obringer, Cindy; Foltz, David J.; Baker, Timothy R.; Troutman, John A. [Procter & Gamble Inc., Mason Business Center, Mason, OH 45040 (United States); Hewitt, Nicola J. [SWS, Erzhausen (Germany); Goebel, Carsten [Procter & Gamble Service GmbH, Sulzbacher Str. 40, 65823 Schwalbach am Taunus (Germany)
2015-09-01
Approaches to assess the role of absorption, metabolism and excretion of cosmetic ingredients that are based on the integration of different in vitro data are important for their safety assessment, specifically as they offer an opportunity to refine that safety assessment. In order to estimate systemic exposure (AUC) to aromatic amine hair dyes following typical product application conditions, skin penetration and epidermal and systemic metabolic conversion of the parent compound was assessed in human skin explants and human keratinocyte (HaCaT) and hepatocyte cultures. To estimate the amount of the aromatic amine that can reach the general circulation unchanged after passage through the skin the following toxicokinetically relevant parameters were applied: a) Michaelis–Menten kinetics to quantify the epidermal metabolism; b) the estimated keratinocyte cell abundance in the viable epidermis; c) the skin penetration rate; d) the calculated Mean Residence Time in the viable epidermis; e) the viable epidermis thickness and f) the skin permeability coefficient. In the next step, in vitro hepatocyte K{sub m} and V{sub max} values and whole liver mass and cell abundance were used to calculate the scaled intrinsic clearance, which was combined with liver blood flow and fraction of compound unbound in the blood to give hepatic clearance. The systemic exposure in the general circulation (AUC) was extrapolated using internal dose and hepatic clearance, and C{sub max} was extrapolated (conservative overestimation) using internal dose and volume of distribution, indicating that appropriate toxicokinetic information can be generated based solely on in vitro data. For the hair dye, p-phenylenediamine, these data were found to be in the same order of magnitude as those published for human volunteers. - Highlights: • An entirely in silico/in vitro approach to predict in vivo exposure to dermally applied hair dyes • Skin penetration and epidermal conversion assessed in human
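The scaling-and-clearance arithmetic described above can be sketched with a well-stirred liver model. This is a sketch under stated assumptions: all numerical values below are hypothetical placeholders (not data from the study), and the well-stirred model is one common choice for hepatic clearance rather than necessarily the one used by the authors.

```python
# Illustrative, hypothetical inputs — not values from the study
vmax = 120.0                # pmol/min per 1e6 hepatocytes
km = 15.0                   # uM
hepatocellularity = 120e6   # cells per g liver (literature-typical)
liver_mass = 1500.0         # g, adult human
q_h = 1500.0                # hepatic blood flow, mL/min
fu = 0.4                    # fraction unbound in blood

# In vitro intrinsic clearance at low substrate concentration: CLint = Vmax/Km
clint_invitro = vmax / km                                                       # uL/min per 1e6 cells
# Scale to the whole liver using cell abundance and liver mass
clint_liver = clint_invitro * (hepatocellularity / 1e6) * liver_mass / 1000.0   # mL/min
# Well-stirred liver model: CLh = Qh * fu * CLint / (Qh + fu * CLint)
cl_h = q_h * fu * clint_liver / (q_h + fu * clint_liver)                        # mL/min
# Systemic exposure (AUC) from an internal (absorbed) dose
dose_internal = 1.0e6                                                           # ng
auc = dose_internal / cl_h                                                      # ng*min/mL
```

Note how the model caps hepatic clearance at liver blood flow: however large the scaled intrinsic clearance, `cl_h` stays below `q_h`.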
Diagnostics of Ellerman bombs with high-resolution spectral data
Li, Zhen; Fang, Cheng; Guo, Yang; Chen, Peng-Fei; Xu, Zhi; Cao, Wen-Da
2015-09-01
Ellerman bombs (EBs) are tiny brightenings often observed near sunspots. The most impressive characteristic of EB spectra is the two emission bumps in both wings of the Hα and Ca II 8542 Å lines. High-resolution spectral data of three small EBs were obtained on 2013 June 6 with the largest solar telescope, the 1.6 m New Solar Telescope at the Big Bear Solar Observatory. The characteristics of these EBs are analyzed. The sizes of the EBs are in the range of 0.3″-0.8″ and their durations are only 3-5 min. Our semi-empirical atmospheric models indicate that the heating occurs around the temperature minimum region with a temperature increase of 2700-3000 K, which is surprisingly higher than previously thought. The radiative and kinetic energies are estimated to be as high as 5 × 10^25 - 3.0 × 10^26 erg despite the small size of these EBs. Observations of the magnetic field show that the EBs appeared in a parasitic region with mixed polarities and were accompanied by mass motions. Nonlinear force-free field extrapolation reveals that the three EBs are connected with a series of magnetic field lines associated with bald patches, which strongly implies that these EBs were produced by magnetic reconnection in the solar lower atmosphere. According to the lightcurves and the estimated magnetic reconnection rate, we propose a three-phase process in EBs: pre-heating, flaring, and cooling.
Characterizing source confusion in HI spectral line stacking experiments
Baker, Andrew J.; Elson, Edward C.; Blyth, Sarah
2017-01-01
Forthcoming studies like the Looking At the Distant Universe with the MeerKAT Array (LADUMA) deep HI survey will rely in part on stacking experiments to detect the mean level of HI emission from populations of galaxies that are too faint to be detected individually. Preparations for such experiments benefit from the use of synthetic data cubes built from mock galaxy catalogs and containing model galaxies with realistic spatial and spectral HI distributions over large cosmological volumes. I will present a new set of such synthetic data cubes and show the results of stacking experiments with them. Because the stacked spectra can be accurately decomposed into contributions from target and non-target galaxies, it is possible to characterize the large fractions of contaminant mass that are included in stacked totals due to source confusion. Consistent with estimates extrapolated from z = 0 observational data, we find that the amount of confused mass in a stacked spectrum grows almost linearly with the size of the observational beam, suggesting potential overestimates of the cosmic neutral gas density by some recent HI stacking experiments.
Van der Kallen, Wilberd
2015-01-01
Let R be a noetherian ring of dimension d and let n be an integer so that n≤d≤2n-3. Let (a
Hybrid Image Fusion for Sharpness Enhancement of Multi-Spectral Lunar Images
Awumah, Anna; Mahanti, Prasun; Robinson, Mark
2016-10-01
Image fusion enhances the sharpness of a multi-spectral (MS) image by incorporating spatial details from a higher-resolution panchromatic (Pan) image [1,2]. Known applications of image fusion for planetary images are rare, although image fusion is well-known for its applications to Earth-based remote sensing. In a recent work [3], six different image fusion algorithms were implemented and their performances were verified with images from the Lunar Reconnaissance Orbiter (LRO) Camera. The image fusion procedure obtained a high-resolution multi-spectral (HRMS) product from the LRO Narrow Angle Camera (used as Pan) and LRO Wide Angle Camera (used as MS) images. The results showed that the Intensity-Hue-Saturation (IHS) algorithm results in a high-spatial quality product while the Wavelet-based image fusion algorithm best preserves spectral quality among all the algorithms. In this work we show the results of a hybrid IHS-Wavelet image fusion algorithm when applied to LROC MS images. The hybrid method provides the best HRMS product - both in terms of spatial resolution and preservation of spectral details. Results from hybrid image fusion can enable new science and increase the science return from existing LROC images. [1] Pohl, C., and John L. Van Genderen. "Review article multisensor image fusion in remote sensing: concepts, methods and applications." International journal of remote sensing 19.5 (1998): 823-854. [2] Zhang, Yun. "Understanding image fusion." Photogramm. Eng. Remote Sens 70.6 (2004): 657-661. [3] Mahanti, Prasun et al. "Enhancement of spatial resolution of the LROC Wide Angle Camera images." Archives, XXIII ISPRS Congress Archives (2016).
Energy Technology Data Exchange (ETDEWEB)
Maingi, R [PPPL
2014-07-01
Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. The two baseline strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R & D. In addition, recent progress in ELM-free regimes, namely Quiescent H-mode, I-mode, and Enhanced Pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.
Kwok, Kevin W H; Leung, Kenneth M Y; Lui, Gilbert S G; Chu, S Vincent K H; Lam, Paul K S; Morritt, David; Maltby, Lorraine; Brock, Theo C M; Van den Brink, Paul J; Warne, Michael St J; Crane, Mark
2007-01-01
Toxicity data for tropical species are often lacking for ecological risk assessment. Consequently, tropical and subtropical countries use water quality criteria (WQC) derived from temperate species (e.g., United States, Canada, or Europe) to assess ecological risks in their aquatic systems, leaving an unknown margin of uncertainty. To address this issue, we use species sensitivity distributions of freshwater animal species to determine whether temperate datasets are adequately protective of tropical species assemblages for 18 chemical substances. The results indicate that the relative sensitivities of tropical and temperate species are noticeably different for some of these chemicals. For most metals, temperate species tend to be more sensitive than their tropical counterparts. However, for un-ionized ammonia, phenol, and some pesticides (e.g., chlorpyrifos), tropical species are probably more sensitive. On the basis of the results from objective comparisons of the ratio between temperate and tropical hazardous concentration values for 10% of species, or the 90% protection level, we recommend that an extrapolation factor of 10 should be applied when such surrogate temperate WQCs are used for tropical or subtropical regions and a priori knowledge on the sensitivity of tropical species is very limited or not available.
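The temperate-to-tropical comparison above rests on ratios of hazardous concentrations (e.g. HC10, the 90% protection level) from species sensitivity distributions. A minimal sketch, assuming a log-normal SSD (a common but not the only choice) and entirely made-up toxicity values:

```python
import math
from statistics import NormalDist

def hc_p(tox_values, p=0.10):
    """Hazardous concentration for fraction p of species, from a
    log-normal species sensitivity distribution fitted by moments."""
    logs = [math.log10(v) for v in tox_values]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / (n - 1))
    return 10 ** NormalDist(mu, sigma).inv_cdf(p)

# Hypothetical LC50-style sensitivity data (ug/L) — not from the paper;
# here the temperate assemblage is the more sensitive one (as for metals)
temperate = [12, 30, 55, 80, 150, 300, 520, 900]
tropical = [25, 60, 110, 180, 320, 600, 1100, 2000]

ratio = hc_p(temperate) / hc_p(tropical)
# Applying the paper's recommended extrapolation factor of 10 to a
# temperate-derived criterion gives a conservative tropical estimate
tropical_safe_estimate = hc_p(temperate) / 10.0
```

When the ratio exceeds 1 for a chemical (tropical species more sensitive, as for ammonia or chlorpyrifos), the unadjusted temperate criterion would under-protect, which motivates the factor of 10.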
Extrapolation of IAPWS-IF97 data: The saturation pressure of H2O in the critical region
Ustyuzhanin, E. E.; Ochkov, V. F.; Shishakov, V. V.; Rykov, A. V.
2015-11-01
Some literature sources and web sites are analyzed in this report. These sources contain information about thermophysical properties of H2O, including the vapor pressure Ps. (Ps,T)-data have the form of the international standard tables known as "IAPWS-IF97 data". Our analysis shows that traditional databases represent (Ps,T)-data at t > 0.002, where t = (Tc - T)/Tc is the reduced temperature. It is an interesting task to extrapolate the IAPWS-IF97 data into the critical region and to obtain (Ps,T)-data at t < 0.002 consistent with the laws of the scaling theory (ST). A combined model (CM) is chosen as a form, F(t,D,B), to express the function ln(Ps/Pc) in the critical region; the laws of ST are taken into account to elaborate F(t,D,B). Adjustable coefficients (B) are determined by fitting the CM to input (Ps,T)-points that belong to the IAPWS-IF97 data. Application results are obtained with the help of the CM in the critical region, including values of the first and second derivatives of Ps(T). Some models Ps(T) are compared with the CM.
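The fitting step can be illustrated as a linear least-squares determination of the adjustable coefficients B. The basis below (a scaling-style expansion in t with an assumed heat-capacity exponent) and the synthetic input points are illustrative stand-ins for the actual combined model and the IAPWS-IF97 tables:

```python
import numpy as np

Tc, Pc = 647.096, 22.064e6   # critical point of H2O (K, Pa)
alpha = 0.11                 # assumed scaling-theory heat-capacity exponent

def basis(t):
    # scaling-style expansion terms for ln(Ps/Pc)
    return np.column_stack([t, t ** (2 - alpha), t ** 2])

# Synthetic stand-in for tabulated (Ps, T) points at t > 0.002
t_fit = np.linspace(0.002, 0.05, 40)
true_B = np.array([-7.8, 1.2, 2.0])   # made-up "true" coefficients
lnPs = basis(t_fit) @ true_B

# Determine the adjustable coefficients B by linear least squares
B, *_ = np.linalg.lstsq(basis(t_fit), lnPs, rcond=None)

# Extrapolate the fitted model into the critical region (t < 0.002)
t_ext = np.array([1e-4, 5e-4, 1e-3])
Ps_ext = Pc * np.exp(basis(t_ext) @ B)
```

As t → 0 the extrapolated Ps approaches Pc from below, which is the qualitative behavior any such model must reproduce.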
Ducasse, Q; Mathieu, L; Marini, P; Morillon, B; Aiche, M; Tsekhanovich, I
2015-01-01
The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In [Nucl. Instrum. Meth. A 700, 59 (2013)] we developed the Extrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the 238U(d,p)239U and 238U(3He,d)239Np reactions. We have performed Hauser-Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of 239Np below the neutron separation energy allowed us to validate the EXEM.
Sprecher, D; Beyer, M; Merkt, F
2013-01-01
Recent experiments are reviewed which have led to the determination of the ionization and dissociation energies of molecular hydrogen with a precision of 0.0007 cm^-1 (8 mJ/mol or 20 MHz) using a procedure based on high-resolution spectroscopic measurements of high Rydberg states and the extrapolation of the Rydberg series to the ionization thresholds. Molecular hydrogen, with only two protons and two electrons, is the simplest molecule with which all aspects of a chemical bond, including electron correlation effects, can be studied. Highly precise values of its ionization and dissociation energies provide stringent tests of the precision of molecular quantum mechanics and of quantum-electrodynamics calculations in molecules. The comparison of experimental and theoretical values for these quantities enables one to quantify the contributions to a chemical bond that are neglected when making the Born-Oppenheimer approximation, i.e. adiabatic, nonadiabatic, relativistic, and radiative corrections. Ionization energies of a broad range of molecules can now be determined experimentally with high accuracy (i.e. about 0.01 cm^-1). Calculations at similar accuracies are extremely challenging for systems containing more than two electrons. The combination of precision measurements of molecular ionization energies with highly accurate ab initio calculations has the potential to provide, in future, fully reliable sets of thermochemical quantities for gas-phase reactions.
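Extrapolating a Rydberg series to its ionization threshold can be sketched with the Rydberg formula E_n = IP − R/(n − δ)². The term values, quantum defect, and the simple grid search below are illustrative, not the authors' multichannel analysis:

```python
import numpy as np

R = 109737.3157  # Rydberg constant, cm^-1 (infinite-mass value; illustrative)

# Synthetic term values E_n = IP - R/(n - delta)^2 for n = 20..60,
# with an assumed ionization energy and a constant quantum defect
IP_true, delta_true = 124417.491, 0.05
n = np.arange(20, 61)
E = IP_true - R / (n - delta_true) ** 2

# Fit: for a trial defect d, every level implies a series limit
# E + R/(n-d)^2; the right d makes those implied limits agree,
# i.e. it minimizes their variance across the series.
deltas = np.linspace(0.0, 0.1, 2001)
best = min(deltas, key=lambda d: np.var(E + R / (n - d) ** 2))
IP_fit = np.mean(E + R / (n - best) ** 2)
```

The series limit (ionization energy) is then the common value the implied limits converge to; with noiseless synthetic levels the fit recovers it essentially exactly.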
Tassis, Konstantinos
2014-01-01
Recent Planck results have shown that the path to isolating an inflationary B-mode signal in microwave polarization passes through understanding and modeling the interstellar dust polarized emission foreground, even in regions of the sky with the lowest level of dust emission. One of the most commonly used ways to remove the dust foreground is to extrapolate the polarized dust emission signal from frequencies where it dominates (e.g., 350 GHz) to frequencies commonly targeted by cosmic microwave background experiments (e.g., 150 GHz). We show, using a simple 2-cloud model, that if more than one cloud is present along the line-of-sight, with even mildly different temperature and dust column density, but severely misaligned magnetic field, then the 350 GHz polarized sky map is not predictive of that at 150 GHz. This problem is intrinsic to all microwave experiments and is due to information loss due to line-of-sight integration. However, it can be alleviated through interstellar medium tomography: a reconstruct...
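The 2-cloud effect can be demonstrated numerically: with two modified-blackbody clouds of mildly different temperature and column density but severely misaligned polarization angles, the net polarization angle of the summed emission differs between 350 and 150 GHz, so the 350 GHz map mispredicts the 150 GHz one. All cloud parameters below are hypothetical, and the dust emissivity index is an assumed typical value.

```python
import numpy as np

h, k = 6.626e-34, 1.381e-23  # SI

def planck(nu, T):
    # Planck spectral shape (constant prefactors dropped; they cancel in angles)
    return nu ** 3 / (np.exp(h * nu / (k * T)) - 1.0)

def pol_angle(nu, clouds, beta=1.6, p=0.1):
    """Net polarization angle of summed modified-blackbody emission
    from clouds along one line of sight; clouds = (column, T, psi)."""
    Q = sum(N * nu ** beta * planck(nu, T) * p * np.cos(2 * psi)
            for N, T, psi in clouds)
    U = sum(N * nu ** beta * planck(nu, T) * p * np.sin(2 * psi)
            for N, T, psi in clouds)
    return 0.5 * np.arctan2(U, Q)

# Two clouds: mildly different temperature/column, severely misaligned field
clouds = [(1.0, 20.0, 0.0), (0.7, 15.0, np.radians(70.0))]
ang350 = np.degrees(pol_angle(350e9, clouds))
ang150 = np.degrees(pol_angle(150e9, clouds))
```

The angle shifts with frequency because the two clouds' relative brightness changes with frequency (different temperatures), re-weighting the misaligned Stokes contributions.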
Institute of Scientific and Technical Information of China (English)
郭茂林; 孟庆元; 王彪
2003-01-01
A new extrapolation approach was proposed to calculate the strain energy release rates of complex cracks. The point-by-point crack-closure method was used to calculate the closure energy, avoiding the self-inconsistency present in some published work: when the number of closed nodes along the radial direction is more than two, displacements of nodes behind the crack tip are multiplied by nodal forces whose closure energy has already been counted and whose crack surfaces have already been closed, so that the closure energy of the middle points is counted twice. A DCB (double cantilever beam) specimen was calculated and compared with other theoretical results, and good agreement was obtained. The same results were also obtained for compact tension, three-point bend, and single-edge cracked specimens. In comparison with theoretical results, the error can be kept within 1 percent. This method can be extended to analyze the fracture of composite laminates with various delamination cracks.
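For the DCB benchmark, the beam-theory reference value that finite-element results are checked against follows from the compliance derivative, G = (P²/2B) dC/da with C = 2a³/(3EI). The sketch below (all dimensions illustrative, not from the paper) verifies the closed form numerically:

```python
# Beam-theory energy release rate for a DCB specimen — illustrative values
P = 100.0   # applied load per arm, N
B = 0.025   # specimen width, m
h = 0.003   # arm thickness, m
E = 70e9    # Young's modulus, Pa
a = 0.05    # crack length, m

I = B * h ** 3 / 12.0                       # second moment of one arm
C = lambda a_: 2.0 * a_ ** 3 / (3.0 * E * I)  # compliance of the two arms

# Numerical compliance derivative (central difference) -> G = P^2/(2B) dC/da
da = 1e-6
dCda = (C(a + da) - C(a - da)) / (2 * da)
G_num = P ** 2 / (2 * B) * dCda             # J/m^2

# Closed form obtained by differentiating C analytically
G_closed = 12 * P ** 2 * a ** 2 / (E * B ** 2 * h ** 3)
```

The two values agree to numerical precision, which is the same kind of consistency check the paper performs against its finite-element closure energies.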
Crater, Horace; Yang, Dujiu
1991-09-01
A semirelativistic expansion in powers of 1/c^2 is canonically matched through order 1/c^4 of the two-particle total Hamiltonian of Wheeler-Feynman vector and scalar electrodynamics to a similar expansion of the center of momentum (c.m.) total energy of two interacting particles obtained from covariant generalized mass shell constraints derived with the use of the classical Todorov equation and Dirac's Hamiltonian constraint mechanics. This determines through order 1/c^4 the direct interaction used in the covariant Todorov constraint equation. We show that these interactions are momentum independent in spite of the extensive and complicated momentum dependence of the potential energy terms in the Wheeler-Feynman Hamiltonian. The invariant expressions for the relativistic reduced mass and energy of the fictitious particle of relative motion used in the Todorov equation are also dynamically determined through this order by this same procedure. The resultant covariant Todorov equation then not only reproduces the noncovariant Wheeler-Feynman dynamics through order 1/c^4 but also implicitly provides a rather simple covariant extrapolation of it to all orders of 1/c^2.
Trapa, Patrick E; Beaumont, Kevin; Atkinson, Karen; Eng, Heather; King-Ahmad, Amanda; Scott, Dennis O; Maurer, Tristan S; Di, Li
2017-03-01
Prediction of intestinal availability (FaFg) of carboxylesterase (CES) substrates is of critical importance in designing oral prodrugs with optimal properties, projecting human pharmacokinetics and dose, and estimating drug-drug interaction potentials. A set of ester prodrugs were evaluated using in vitro permeability (parallel artificial membrane permeability assay and Madin-Darby canine kidney cell line-low efflux) and intestinal stability (intestine S9) assays, as well as in vivo portal vein-cannulated cynomolgus monkey. In vitro-in vivo extrapolation (IVIVE) of FaFg was developed with a number of modeling approaches, including a full physiologically based pharmacokinetic (PBPK) model as well as a simplified competitive-rate analytical solution. Both methods converged as in the PBPK simulations enterocyte blood flow behaved as a sink, a key assumption in the competitive-rate analysis. For this specific compound set, the straightforward analytical solution therefore can be used to generate in vivo predictions. Strong IVIVE of FaFg was observed for cynomolgus monkey with R^2 of 0.71-0.93. The results suggested in vitro assays can be used to predict in vivo FaFg for CES substrates with high confidence. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
Bomble, Yannick J.; Vázquez, Juana; Kállay, Mihály; Michauk, Christine; Szalay, Péter G.; Császár, Attila G.; Gauss, Jürgen; Stanton, John F.
2006-08-01
The recently developed high-accuracy extrapolated ab initio thermochemistry method for theoretical thermochemistry, which is intimately related to other high-precision protocols such as the Weizmann-3 and focal-point approaches, is revisited. Some minor improvements in theoretical rigor are introduced which do not lead to any significant additional computational overhead, but are shown to have a negligible overall effect on the accuracy. In addition, the method is extended to completely treat electron correlation effects up to pentuple excitations. The use of an approximate treatment of quadruple and pentuple excitations is suggested; the former as a pragmatic approximation for standard cases and the latter when extremely high accuracy is required. For a test suite of molecules that have rather precisely known enthalpies of formation {as taken from the active thermochemical tables of Ruscic and co-workers [Lecture Notes in Computer Science, edited by M. Parashar (Springer, Berlin, 2002), Vol. 2536, pp. 25-38; J. Phys. Chem. A 108, 9979 (2004)]}, the largest deviations between theory and experiment are 0.52, -0.70, and 0.51 kJ mol^-1 for the latter three methods, respectively. Some perspective is provided on this level of accuracy, and sources of remaining systematic deficiencies in the approaches are discussed.
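One standard ingredient of such composite extrapolation protocols is a two-point X⁻³ extrapolation of correlation energies to the complete-basis-set limit (the exact formulas used in the HEAT protocol may differ; this is the generic Helgaker-style form, with hypothetical energies):

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point X^-3 extrapolation of correlation energies to the
    complete-basis-set limit: E(X) = E_CBS + A / X^3."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)

# Hypothetical correlation energies (hartree) for cardinal numbers X = 3, 4
e_tz, e_qz = -0.27500, -0.28400
e_cbs = cbs_two_point(e_qz, e_tz, 4, 3)
```

The extrapolated value lies below the largest-basis result, as it must for a monotone X⁻³ convergence pattern.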
Maingi, R.
2014-11-01
Large edge localized modes (ELMs) typically accompany good H-mode confinement in fusion devices, but can present problems for plasma facing components because of high transient heat loads. Here the range of techniques for ELM control deployed in fusion devices is reviewed. Two strategies in the ITER baseline design are emphasized: rapid ELM triggering and peak heat flux control via pellet injection, and the use of magnetic perturbations to suppress or mitigate ELMs. While both of these techniques are moderately well developed, with reasonable physical bases for projecting to ITER, differing observations between multiple devices are also discussed to highlight the needed community R&D. In addition, recent progress in ELM-free regimes, namely quiescent H-mode, I-mode, and enhanced pedestal H-mode is reviewed, and open questions for extrapolability are discussed. Finally progress and outstanding issues in alternate ELM control techniques are reviewed: supersonic molecular beam injection, edge electron cyclotron heating, lower hybrid heating and/or current drive, controlled periodic jogs of the vertical centroid position, ELM pace-making via periodic magnetic perturbations, ELM elimination with lithium wall conditioning, and naturally occurring small ELM regimes.
Powers, Jennifer S; Corre, Marife D; Twine, Tracy E; Veldkamp, Edzo
2011-04-12
Accurately quantifying changes in soil carbon (C) stocks with land-use change is important for estimating the anthropogenic fluxes of greenhouse gases to the atmosphere and for implementing policies such as REDD (Reducing Emissions from Deforestation and Degradation) that provide financial incentives to reduce carbon dioxide fluxes from deforestation and land degradation. Despite hundreds of field studies and at least a dozen literature reviews, there is still considerable disagreement on the direction and magnitude of changes in soil C stocks with land-use change. We conducted a meta-analysis of studies that quantified changes in soil C stocks with land use in the tropics. Conversion from one land use to another caused significant increases or decreases in soil C stocks for 8 of the 14 transitions examined. For the three land-use transitions with sufficient observations, both the direction and magnitude of the change in soil C pools depended strongly on biophysical factors of mean annual precipitation and dominant soil clay mineralogy. When we compared the distribution of biophysical conditions of the field observations to the area-weighted distribution of those factors in the tropics as a whole or the tropical lands that have undergone conversion, we found that field observations are highly unrepresentative of most tropical landscapes. Because of this geographic bias we strongly caution against extrapolating average values of land-cover change effects on soil C stocks, such as those generated through meta-analysis and literature reviews, to regions that differ in biophysical conditions.
Energy Technology Data Exchange (ETDEWEB)
Ducasse, Q. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); CEA-Cadarache, DEN/DER/SPRC/LEPh, 13108 Saint Paul lez Durance (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Mathieu, L.; Marini, P. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Morillon, B. [CEA DAM DIF, 91297 Arpajon (France); Aiche, M.; Tsekhanovich, I. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France)
2016-08-01
The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the {sup 238}U(d,p){sup 239}U and {sup 238}U({sup 3}He,d){sup 239}Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of {sup 239}Np below the neutron separation energy allowed us to validate the EXEM.
cDNA Cloning of Fathead minnow (Pimephales promelas) Estrogen and Androgen Receptors for Use in Steroid Receptor Extrapolation Studies for Endocrine Disrupting Chemicals. Wilson, V.S.1,, Korte, J.2, Hartig P. 1, Ankley, G.T.2, Gray, L.E., Jr 1, , and Welch, J.E.1. 1U.S...
Spectral computations for bounded operators
Ahues, Mario; Limaye, Balmohan
2001-01-01
Exact eigenvalues, eigenvectors, and principal vectors of operators with infinite dimensional ranges can rarely be found. Therefore, one must approximate such operators by finite rank operators, then solve the original eigenvalue problem approximately. Serving as both an outstanding text for graduate students and as a source of current results for research scientists, Spectral Computations for Bounded Operators addresses the issue of solving eigenvalue problems for operators on infinite dimensional spaces. From a review of classical spectral theory through concrete approximation techniques to finite dimensional situations that can be implemented on a computer, this volume illustrates the marriage of pure and applied mathematics. It contains a variety of recent developments, including a new type of approximation that encompasses a variety of approximation methods but is simple to verify in practice. It also suggests a new stopping criterion for the QR Method and outlines advances in both the iterative refineme...
Spectral diagonal ensemble Kalman filters
Kasanický, Ivan; Vejmelka, Martin
2015-01-01
A new type of ensemble Kalman filter is developed, which is based on replacing the sample covariance in the analysis step by its diagonal in a spectral basis. It is proved that this technique improves the approximation of the covariance when the covariance itself is diagonal in the spectral basis, as is the case, e.g., for a second-order stationary random field and the Fourier basis. The method is extended by wavelets to the case when the state variables are random fields, which are not spatially homogeneous. Efficient implementations by the fast Fourier transform (FFT) and discrete wavelet transform (DWT) are presented for several types of observations, including high-dimensional data given on a part of the domain, such as radar and satellite images. Computational experiments confirm that the method performs well on the Lorenz 96 problem and the shallow water equations with very small ensembles and over multiple analysis cycles.
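The core idea for the stationary/Fourier case can be sketched as follows: estimate only the diagonal of the covariance in the spectral basis from a small ensemble, and compare against the full sample covariance. The field, covariance model, and ensemble size below are illustrative, not taken from the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_ens = 64, 5

# Second-order stationary field on a circle: circulant covariance,
# which is exactly diagonal in the Fourier basis
lag = np.minimum(np.arange(n), n - np.arange(n))
row = np.exp(-lag / 5.0)
C_true = np.array([np.roll(row, k) for k in range(n)])

# Draw a small ensemble from N(0, C_true)
L = np.linalg.cholesky(C_true + 1e-8 * np.eye(n))
ens = L @ rng.standard_normal((n, n_ens))

# Full sample covariance: rank-deficient and noisy for n_ens << n
C_sample = np.cov(ens)

# Spectral-diagonal estimate: keep only the Fourier-basis diagonal
d = np.mean(np.abs(np.fft.fft(ens, axis=0)) ** 2, axis=1) / n
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
C_diag = np.real(F.conj().T @ np.diag(d) @ F)

err_sample = np.linalg.norm(C_sample - C_true)
err_diag = np.linalg.norm(C_diag - C_true)
```

Because the true covariance really is diagonal in the Fourier basis here, zeroing the off-diagonal spectral entries removes pure sampling noise, and the diagonal estimate beats the sample covariance by a wide margin even with five members.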
Spectral Synthesis of SDSS Galaxies
Sodre, J; Mateus, A; Stasinska, G; Gomes, J M
2005-01-01
We investigate the power of spectral synthesis as a means of estimating physical properties of galaxies. Spectral synthesis is nothing more than the decomposition of an observed spectrum in terms of a superposition of a base of simple stellar populations of various ages and metallicities (here from Bruzual & Charlot 2003), producing as output the star-formation and chemical histories of a galaxy, its extinction and velocity dispersion. We discuss the reliability of this approach and apply it to a volume limited sample of 50362 galaxies from the SDSS Data Release 2, producing a catalog of stellar population properties. A comparison with recent estimates of both observed and physical properties of these galaxies obtained by other groups shows good qualitative and quantitative agreement, despite substantial differences in the method of analysis. The confidence in the method is further strengthened by several empirical and astrophysically reasonable correlations between synthesis results and independent quantiti...
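The decomposition itself is a linear inverse problem: fit the observed spectrum as a superposition of base spectra. The toy base below stands in for the Bruzual & Charlot models, and a plain least-squares solve replaces the full synthesis machinery (which additionally fits extinction and velocity dispersion and enforces non-negative population weights):

```python
import numpy as np

wav = np.linspace(3800.0, 9000.0, 500)          # wavelength grid (Angstrom)
s = (wav - wav.mean()) / (wav.max() - wav.min())

def ssp(slope, bump):
    # Toy stand-in for a simple-stellar-population spectrum:
    # a tilted continuum plus a feature near Hbeta (4861 A)
    return 1.0 + slope * s + bump * np.exp(-0.5 * ((wav - 4861.0) / 50.0) ** 2)

# Base of three "populations" with different continuum shapes/features
base = np.column_stack([ssp(-0.8, 0.5), ssp(0.0, 0.1), ssp(0.9, 0.0)])

# "Observed" spectrum: a known non-negative mix (the population vector)
x_true = np.array([0.2, 0.5, 0.3])
obs = base @ x_true

# Spectral synthesis step: decompose obs onto the base
x_fit, *_ = np.linalg.lstsq(base, obs, rcond=None)
```

With a noiseless spectrum and a full-rank base the population vector is recovered exactly; real codes add noise weighting and non-negativity constraints on top of this linear core.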
Spectral Clustering with Imbalanced Data
Qian, Jing; Saligrama, Venkatesh
2013-01-01
Spectral clustering is sensitive to how graphs are constructed from data particularly when proximal and imbalanced clusters are present. We show that Ratio-Cut (RCut) or normalized cut (NCut) objectives are not tailored to imbalanced data since they tend to emphasize cut sizes over cut values. We propose a graph partitioning problem that seeks minimum cut partitions under minimum size constraints on partitions to deal with imbalanced data. Our approach parameterizes a family of graphs, by ada...
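The claim that RCut and NCut "emphasize cut sizes over cut values" can be seen on a small hand-built graph: the balance factor makes a larger-value but size-balanced cut score better than the true imbalanced one. The weights below are contrived purely for illustration.

```python
import numpy as np

n = 10
W = np.zeros((n, n))

def link(i, j, w):
    W[i, j] = W[j, i] = w

link(0, 1, 1.0)                                  # small cluster {0,1}
for clique in ({2, 3, 4, 5}, {6, 7, 8, 9}):      # two halves of the big cluster
    for i in clique:
        for j in clique:
            if i < j:
                link(i, j, 1.0)
for i in (2, 3, 4, 5):
    link(i, i + 4, 0.15)                         # moderate ties between halves
link(1, 2, 0.5)                                  # bridge: small -> big cluster

def cut(A):
    B = set(range(n)) - A
    return sum(W[i, j] for i in A for j in B)

def rcut(A):
    # Ratio-Cut objective for the 2-way partition (A, complement of A)
    return cut(A) * (1.0 / len(A) + 1.0 / (n - len(A)))

natural = {0, 1}               # true imbalanced split: smallest cut value
balanced = {0, 1, 2, 3, 4, 5}  # balanced split cutting through the big cluster
```

The natural partition has the smaller cut value, yet RCut prefers the balanced one, illustrating exactly the failure mode with proximal, imbalanced clusters.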
Remote application for spectral collection
Cone, Shelli R.; Steele, R. J.; Tzeng, Nigel H.; Firpi, Alexander H.; Rodriguez, Benjamin M.
2016-05-01
In the area of collecting field spectral data using a spectrometer, it is common to have the instrument over the material of interest. In certain instances it is beneficial to have the ability to remotely control the spectrometer. While several systems have the ability to use a form of connectivity to capture the measurement it is essential to have the ability to control the settings. Additionally, capturing reference information (metadata) about the setup, system configuration, collection, location, atmospheric conditions, and sample information is necessary for future analysis leading towards material discrimination and identification. This has the potential to lead to cumbersome field collection and a lack of necessary information for post processing and analysis. The method presented in this paper describes a capability to merge all parts of spectral collection from logging reference information to initial analysis as well as importing information into a web-hosted spectral database. This allows the simplification of collecting, processing, analyzing and storing field spectra for future analysis and comparisons. This concept is developed for field collection of thermal data using the Designs and Prototypes (D&P) Hand Portable FT-IR Spectrometer (Model 102). The remote control of the spectrometer is done with a customized Android application allowing the ability to capture reference information, process the collected data from radiance to emissivity using a temperature emissivity separation algorithm and store the data into a custom web-based service. The presented system of systems allows field collected spectra to be used for various applications by spectral analysts in the future.
Chebyshev and Fourier spectral methods
Boyd, John P
2001-01-01
Completely revised text focuses on use of spectral methods to solve boundary value, eigenvalue, and time-dependent problems, but also covers Hermite, Laguerre, rational Chebyshev, sinc, and spherical harmonic functions, as well as cardinal functions, linear eigenvalue problems, matrix-solving methods, coordinate transformations, methods for unbounded intervals, spherical and cylindrical geometry, and much more. 7 Appendices. Glossary. Bibliography. Index. Over 160 text figures.
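A canonical building block of the methods this text covers is the Chebyshev differentiation matrix on Gauss-Lobatto points (the standard construction popularized in this literature); a short sketch with a smooth test function:

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto points x
    on [-1, 1] (standard construction): (D @ u) approximates u'."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # diagonal: rows sum to 0
    return D, x

D, x = cheb(24)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
err = np.max(np.abs(D @ u - du_exact))
```

For this entire (analytic everywhere) test function the error at N = 24 is already near machine precision, the spectral accuracy that motivates these methods.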
The JCMT Spectral Legacy Survey
Plume, R; Helmich, F; Van der Tak, F F S; Roberts, H; Bowey, J; Buckle, J; Butner, H; Caux, E; Ceccarelli, C; Van Dishoeck, E F; Friberg, P; Gibb, A G; Hatchell, J; Hogerheijde, M R; Matthews, H; Millar, T; Mitchell, G; Moore, T J T; Ossenkopf, V; Rawlings, J; Richer, J; Roellig, M; Schilke, P; Spaans, M; Tielens, A G G M; Thompson, M A; Viti, S; Weferling, B; White, G J; Wouterloot, J; Yates, J; Zhu, M; White, Glenn J.
2006-01-01
Stars form in the densest, coldest, most quiescent regions of molecular clouds. Molecules provide the only probes which can reveal the dynamics, physics, chemistry and evolution of these regions, but our understanding of the molecular inventory of sources and how this is related to their physical state and evolution is rudimentary and incomplete. The Spectral Legacy Survey (SLS) is one of seven surveys recently approved by the JCMT Board. Starting in 2007, the SLS will produce a spectral imaging survey of the content and distribution of all the molecules detected in the 345 GHz atmospheric window (between 332 GHz and 373 GHz) towards a sample of 5 sources. Our intended targets are: a low mass core (NGC1333 IRAS4), 3 high mass cores spanning a range of star forming environments and evolutionary states (W49, AFGL2591, and IRAS20126), and a PDR (the Orion Bar). The SLS will use the unique spectral imaging capabilities of HARP-B/ACSIS to study the molecular inventory and the physical structure of these objects, w...
On the concept of spectral singularities
Indian Academy of Sciences (India)
Gusein Sh Guseinov
2009-09-01
In this paper, we discuss the concept of spectral singularities for non-Hermitian Hamiltonians. We exhibit spectral singularities of some well-known concrete Hamiltonians with complex-valued coefficients.
Global and local aspects of spectral actions
Iochum, Bruno; Vassilevich, Dmitri
2012-01-01
The principal object in noncommutative geometry is the spectral triple, consisting of an algebra A, a Hilbert space H, and a Dirac operator D. Field theories are incorporated in this approach by the spectral action principle, which sets the field theory action to Tr f(D^2/Λ^2), where f is a real function such that the trace exists and Λ is a cutoff scale. In the low-energy (weak-field) limit the spectral action reproduces reasonably well the known physics, including the standard model. However, not much is known about the spectral action beyond the low-energy approximation. In this paper, after an extensive introduction to spectral triples and spectral actions, we study various expansions of the spectral action (exemplified by the heat kernel). We derive the convergence criteria. For a commutative spectral triple, we compute the heat kernel on the torus up to second order in the gauge connection and consider limiting cases.
Spectral efficiency analysis of OCDMA systems
Institute of Scientific and Technical Information of China (English)
Hui Yan; Kun Qiu; Yun Ling
2009-01-01
We discuss several kinds of code schemes and analyze their spectral efficiency, code-utilizing efficiency, and maximal spectral efficiency. Error-correction coding is used to increase the spectral efficiency, and it can avoid the decrease in spectral efficiency as the code length increases. The extended prime code (EPC) has the highest spectral efficiency among unipolar code systems. The bipolar code system has higher spectral efficiency than the unipolar code system, but lower code-utilizing efficiency and maximal spectral efficiency. From the numerical results, we can see that the spectral efficiency increases by 0.025 (b/s)/Hz when the bit error rate (BER) increases from 10^{-9} to 10^{-7}.
Calibration with near-continuous spectral measurements
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Rasmussen, Michael; Madsen, Henrik
2001-01-01
In chemometrics, traditional calibration with spectral measurements expresses a quantity of interest (e.g. a concentration) as a linear combination of the spectral measurements at a number of wavelengths. Often the spectral measurements are performed at a large number of wavelengths and in thi...... by an example in which the octane number of gasoline is related to near-infrared spectral measurements. The performance is found to be much better than for the traditional calibration methods....
USGS Spectral Library Version 7
Kokaly, Raymond F.; Clark, Roger N.; Swayze, Gregg A.; Livo, K. Eric; Hoefen, Todd M.; Pearson, Neil C.; Wise, Richard A.; Benzel, William M.; Lowers, Heather A.; Driscoll, Rhonda L.; Klein, Anna J.
2017-04-10
We have assembled a library of spectra measured with laboratory, field, and airborne spectrometers. The instruments used cover wavelengths from the ultraviolet to the far infrared (0.2 to 200 microns [μm]). Laboratory samples of specific minerals, plants, chemical compounds, and manmade materials were measured. In many cases, samples were purified, so that unique spectral features of a material can be related to its chemical structure. These spectro-chemical links are important for interpreting remotely sensed data collected in the field or from an aircraft or spacecraft. This library also contains physically constructed as well as mathematically computed mixtures. Four different spectrometer types were used to measure spectra in the library: (1) Beckman™ 5270 covering the spectral range 0.2 to 3 µm, (2) standard, high resolution (hi-res), and high-resolution Next Generation (hi-resNG) models of Analytical Spectral Devices (ASD) field portable spectrometers covering the range from 0.35 to 2.5 µm, (3) Nicolet™ Fourier Transform Infra-Red (FTIR) interferometer spectrometers covering the range from about 1.12 to 216 µm, and (4) the NASA Airborne Visible/Infra-Red Imaging Spectrometer AVIRIS, covering the range 0.37 to 2.5 µm. Measurements of rocks, soils, and natural mixtures of minerals were made in laboratory and field settings. Spectra of plant components and vegetation plots, comprising many plant types and species with varying backgrounds, are also in this library. Measurements by airborne spectrometers are included for forested vegetation plots, in which the trees are too tall for measurement by a field spectrometer. This report describes the instruments used, the organization of materials into chapters, metadata descriptions of spectra and samples, and possible artifacts in the spectral measurements. To facilitate greater application of the spectra, the library has also been convolved to selected spectrometer and imaging spectrometers sampling and
Planck 2013 results. IX. HFI spectral response
DEFF Research Database (Denmark)
Planck Collaboration,; Ade, P. A. R.; Aghanim, N.;
2013-01-01
The Planck HFI spectral response was determined through a series of ground based tests conducted with the HFI focal plane in a cryogenic environment prior to launch. The main goal of the spectral transmission tests is to measure the relative spectral response (including the level of out-of-band s...
Spectral averaging techniques for Jacobi matrices
del Rio, Rafael; Schulz-Baldes, Hermann
2008-01-01
Spectral averaging techniques for one-dimensional discrete Schroedinger operators are revisited and extended. In particular, simultaneous averaging over several parameters is discussed. Special focus is put on proving lower bounds on the density of the averaged spectral measures. These Wegner type estimates are used to analyze stability properties for the spectral types of Jacobi matrices under local perturbations.
Yi, Grace Y; He, Wenqing
2012-05-01
It is well known that ignoring measurement error may result in substantially biased estimates in many contexts, including linear and nonlinear regression. For survival data with measurement error in covariates, there has been extensive discussion in the literature, with the focus on proportional hazards (PH) models. Recently, research interest has extended to accelerated failure time (AFT) and additive hazards (AH) models. However, the impact of measurement error on other models, such as the proportional odds model, has received relatively little attention, although these models are important alternatives when PH, AFT, or AH models are not appropriate to fit the data. In this paper, we investigate this important problem and study the bias induced by the naive approach of ignoring covariate measurement error. To adjust for the induced bias, we describe the simulation-extrapolation method. The proposed method enjoys a number of appealing features. Its implementation is straightforward and can be accomplished with minor modifications of existing software. More importantly, the proposed method does not require modeling the covariate process, which is quite attractive in practice. As the precise values of error-prone covariates are often not observable, any modeling assumption on such covariates carries the risk of model misspecification, yielding invalid inferences if this happens. The proposed method is carefully assessed both theoretically and empirically. Theoretically, we establish the asymptotic normality of the resulting estimators. Numerically, simulation studies are carried out to evaluate the performance of the estimators as well as the impact of ignoring measurement error, along with an application to a data set arising from the Busselton Health Study. Sensitivity of the proposed method to misspecification of the error model is studied as well.
Directory of Open Access Journals (Sweden)
B. Deutsch
2010-04-01
Full Text Available Rates of denitrification in sediments were measured with the isotope pairing technique at different sites in the southern and central Baltic Sea. They varied between 0.5 μmol m^{−2} h^{−1} in sands and 28.7 μmol m^{−2} h^{−1} in muddy sediments and showed a good correlation with the organic carbon content of the surface sediments. N-removal rates via sedimentary denitrification were estimated for the entire Baltic Sea by calculating sediment-specific denitrification rates and interpolating them to the whole Baltic Sea area. Another approach used the relationship between the organic carbon content and the rate of denitrification. For the entire Baltic Sea, the N-removal by denitrification in sediments varied between 426–652 kt N a^{−1}, which is around 48–73% of the external N inputs delivered via rivers, coastal point sources and atmospheric deposition. Moreover, an expansion of the anoxic bottom areas was considered under the assumption of a rising oxycline from 100 to 80 m water depth. This leads to an increase of the area with anoxic conditions and an overall decrease in sedimentary denitrification of 14%. Overall, we show that this type of data extrapolation is a powerful tool for estimating the nitrogen losses of a whole coastal sea and may be applicable to other coastal regions and enclosed seas as well.
Li, Zhaojun; Yang, Hua; Li, Yupeng; Long, Jian; Liang, Yongchao
2014-01-01
There has been increasing concern in recent years regarding lead (Pb) transfer in the soil-plant system. In this study, the transfer of Pb (added as exogenous salts) from a wide range of Chinese soils to corn grain (Zhengdan 958) was investigated. Prediction models were developed by combining the Pb bioconcentration factor (BCF) of Zhengdan 958 with soil pH, organic matter (OM) content, and cation exchange capacity (CEC) through multiple stepwise regression. These prediction models from Zhengdan 958 were then applied to other non-model corn species through a cross-species extrapolation approach. The results showed that soil pH and OM were the major factors controlling Pb transfer from soil to corn grain; lower pH and OM increased the bioaccumulation of Pb in corn grain. No significant differences were found between the two prediction models derived from the different exogenous Pb contents. When the prediction models were applied to other non-model corn species, the ratios between predicted and measured BCF values fell within a 2-fold interval, close to the 1:1 line. Moreover, the prediction model log[BCF] = -0.098 pH - 0.150 log[OM] - 1.894, derived at the high-Pb treatment, effectively reduced the measured intra-species BCF variability for all non-model corn species. This suggests that the model derived from the high Pb content is better suited for application to other non-model corn species to predict Pb bioconcentration in corn grain and to assess the ecological risk of Pb in different agricultural soils.
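The fitted high-Pb regression quoted in this abstract can be evaluated directly. A minimal sketch follows; the coefficients are taken from the abstract, while the assumption that OM enters the model in the same units used in the study is ours:

```python
import math

def predict_bcf(ph, om):
    """Predicted Pb bioconcentration factor (BCF) for corn grain under the
    high-Pb treatment model reported in the abstract:
        log10(BCF) = -0.098*pH - 0.150*log10(OM) - 1.894
    ph: soil pH; om: organic matter content (units assumed to match the study).
    """
    log_bcf = -0.098 * ph - 0.150 * math.log10(om) - 1.894
    return 10 ** log_bcf
```

Consistent with the abstract, the model predicts greater Pb transfer at lower pH and lower organic matter content.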
Teeguarden, Justin G; Barton, Hugh A
2004-06-01
One measure of the potency of compounds that produce their effects through ligand-dependent gene transcription is the relative affinity for the critical receptor. Endocrine-active compounds presumed to act principally through binding to the estrogen receptor (e.g., estradiol, genistein, bisphenol A, and octylphenol) comprise one class of such compounds. For simple comparisons, receptor-binding affinity has been equated with in vivo potency, which consequently defines the dose-response characteristics for the compound. Direct extrapolation of in vitro estimated affinities to the corresponding in vivo system and to specific species or life stages (e.g., neonatal, pregnancy) can be misleading. Accurate comparison of the potency of endocrine-active compounds requires characterization of the biochemical and pharmacokinetic factors that affect their free concentration. Quantitative in vitro and in vivo models were developed for integrating pharmacokinetic factors (e.g., serum protein and receptor-binding affinities, clearance) that affect potency. Data for parameterizing these models for several estrogenic compounds were evaluated and the models exercised. While simulations of adult human or rat sera were generally successful, difficulties in describing early life stages were identified. Exogenous compounds were predicted to be largely ineffective at competing estradiol off serum-binding proteins, suggesting this is unlikely to be physiologically significant. Discrepancies were identified between relative potencies based upon modeling in vitro receptor-binding activity versus in vivo activity in the presence of clearance and serum-binding proteins. The examples illustrate the utility of this approach for integrating available experimental data from in vitro and in vivo studies to estimate the relative potency of these compounds.
Kobayashi, Hiroyuki
2012-01-01
Single-molecule study of phenylenevinylene oligomers revealed distinct spectral forms due to different conjugation lengths which are determined by torsional defects. Large spectral jumps between different spectral forms were ascribed to torsional flips of a single phenylene ring. These spectral changes reflect the dynamic nature of electron delocalization in oligophenylenevinylenes and enable estimation of the phenylene torsional barriers. © 2012 The Owner Societies.
Sharp Upper and Lower Bounds for the Laplacian Spectral Radius and the Spectral Radius of Graphs
Institute of Scientific and Technical Information of China (English)
Ji-ming Guo
2008-01-01
In this paper, sharp upper bounds for the Laplacian spectral radius and the spectral radius of graphs are given. We show that some known bounds can be obtained from our bounds. For a bipartite graph G, we also present sharp lower bounds for the Laplacian spectral radius and the spectral radius, respectively.
Numerical and experimental results on the spectral wave transfer in finite depth
Benassai, Guido
2016-04-01
Determination of the form of the one-dimensional surface gravity wave spectrum in water of finite depth is important for many scientific and engineering applications. Spectral parameters of deep-water and intermediate-depth waves serve as input data for the design of all coastal structures and for the description of many coastal processes. Moreover, wave spectra are given as input for response and seakeeping calculations of high-speed vessels in extreme sea conditions and for reliable calculations of the amount of energy to be extracted by wave energy converters (WECs). Available data on finite-depth spectral form are generally extrapolated from parametric forms applicable in deep water (e.g., JONSWAP) [Hasselmann et al., 1973; Mitsuyasu et al., 1980; Kahma, 1981; Donelan et al., 1992; Zakharov, 2005]. The present paper contributes to this field through validation of the transfer of offshore energy spectra from given spectral forms, using measurements of inshore wave heights and spectra. The deep-water wave spectra were recorded offshore Ponza by the Wave Measurement Network (Piscopia et al., 2002). Field regressions of the spectral parameters (fp and the nondimensional energy) against fetch length were evaluated for fetch-limited sea conditions; these regressions gave the values of the spectral parameters for the site of interest. The offshore wave spectra were transferred from the measurement station offshore Ponza to a site offshore the Gulf of Salerno, and the resulting local offshore spectra were transferred to the coastline with the TMA model (Bouws et al., 1985). Finally, the numerical results, in terms of significant wave heights, were compared with the wave data recorded by a meteo-oceanographic station owned by the Naples Hydrographic Office on the coastline of Salerno at 9 m depth. Some considerations about the wave energy potentially extractable by wave energy converters were also made and the results discussed.
Planar-waveguide integrated spectral comparator.
Mossberg, T W; Iazikov, D; Greiner, C
2004-06-01
A cost-effective yet robust and versatile dual-channel spectral comparator is presented. The silica-on-silicon planar-waveguide integrated device includes two holographic Bragg-grating reflectors (HBRs) with complementary spectral transfer functions. Output comprises projections of input signal spectra onto the complementary spectral channels. Spectral comparators may be useful in optical code-division multiplexing, optical packet decoding, spectral target recognition, and the identification of molecular spectra. HBRs may be considered to be mode-specific photonic crystals.
Spectral clustering for TRUS images
Directory of Open Access Journals (Sweden)
Salama Magdy MA
2007-03-01
Full Text Available Abstract Background Identifying the location and the volume of the prostate is important for ultrasound-guided prostate brachytherapy. Prostate volume is also important for prostate cancer diagnosis. Manual outlining of the prostate border can determine the prostate volume accurately; however, it is time consuming and tedious. Therefore, a number of investigations have been devoted to designing algorithms suitable for segmenting the prostate boundary in ultrasound images. The most popular method is the deformable model (snakes), a method that involves designing an energy function and then optimizing this function. The snakes algorithm usually requires either an initial contour or some points on the prostate boundary to be estimated close enough to the original boundary, which is considered a drawback of this powerful method. Methods The proposed spectral clustering segmentation algorithm is built on a totally different foundation that doesn't involve any function design or optimization. It also doesn't need any contour or any points on the boundary to be estimated. The proposed algorithm depends mainly on graph theory techniques. Results Spectral clustering is used in this paper for both prostate gland segmentation from the background and internal gland segmentation. The obtained segmented images were compared with the images segmented by the expert radiologist. The proposed algorithm obtained excellent gland segmentation results with 93% average overlap areas. It is also able to internally segment the gland, where the segmentation showed consistency with the cancerous regions identified by the expert radiologist. Conclusion The proposed spectral clustering segmentation algorithm obtained fast, excellent estimates that can give rough prostate volume and location as well as internal gland segmentation without any user interaction.
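As a graph-theoretic illustration of the idea (a generic sketch, not the paper's TRUS pipeline), a minimal two-way spectral clustering can be built by forming a Gaussian affinity graph over the points, taking the normalized graph Laplacian, and splitting by the sign of its second eigenvector (the Fiedler vector):

```python
import numpy as np

def spectral_bipartition(points, sigma=1.0):
    """Minimal two-way spectral clustering sketch: Gaussian affinity ->
    symmetric normalized Laplacian -> sign of the Fiedler vector assigns
    each point to one of two clusters."""
    X = np.asarray(points, dtype=float)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.exp(-d2 / (2 * sigma ** 2))                    # Gaussian affinity matrix
    np.fill_diagonal(W, 0.0)
    d = W.sum(1)
    L = np.diag(d) - W                                    # unnormalized graph Laplacian
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    Lsym = Dinv @ L @ Dinv                                # D^{-1/2} L D^{-1/2}
    vals, vecs = np.linalg.eigh(Lsym)                     # ascending eigenvalues
    fiedler = vecs[:, 1]                                  # second-smallest eigenvector
    return (fiedler > 0).astype(int)
```

For image segmentation, the same construction is applied with pixels as graph nodes and affinities built from intensity and proximity; the sketch above shows only the core spectral step.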
Spectral Methods for Magnetic Anomalies
Parker, R. L.; Gee, J. S.
2013-12-01
Spectral methods, that is, those based on the Fourier transform, have long been employed in the analysis of magnetic anomalies. For example, Schouten and McCamy's Earth filter is used extensively to map patterns to the pole, and Parker's Fourier transform series facilitates forward modeling and provides an efficient algorithm for inversion of profiles and surveys. From a different, and perhaps less familiar, perspective, magnetic anomalies can be represented as the realization of a stationary stochastic process, and statistical theory can then be brought to bear. It is vital to incorporate the full 2-D power spectrum, even when discussing profile data. For example, early analysis of long profiles failed to discover the small-wavenumber peak in the power spectrum predicted by one-dimensional theory. The long-wavelength excess is the result of spatial aliasing, when energy leaks into the along-track spectrum from the cross-track components of the 2-D spectrum. Spectral techniques may be used to improve interpolation and downward continuation of survey data. They can also evaluate the reliability of sub-track magnetization models both across and along strike. Along-strike profiles turn out to be surprisingly good indicators of the magnetization directly under them; there is high coherence between the magnetic anomaly and the magnetization over a wide band. In contrast, coherence is weak at long wavelengths on across-strike lines, which is naturally the favored orientation for most studies. When vector (or multiple-level) measurements are available, cross-spectral analysis can reveal the wavenumber interval where the geophysical signal resides and where noise dominates. One powerful diagnostic is that the phase spectrum between the vertical and along-path components of the field must be a constant 90 degrees. To illustrate, it was found that on some very long Project Magnet lines, only the lowest 10% of the wavenumber band contain useful geophysical signal. In this
Numerical relativity and spectral methods
Grandclement, P.
2016-12-01
The term numerical relativity denotes the various techniques that aim at solving Einstein's equations using computers. Those computations can be divided into two families: temporal evolutions on the one hand and stationary or periodic solutions on the other. After a brief presentation of those two classes of problems, I will introduce a numerical tool designed to solve Einstein's equations: the KADATH library. It is based on the use of spectral methods, which can reach high accuracy with moderate computational resources. I will present some applications to quasicircular orbits of black holes and boson star configurations.
Spectral analysis of bedform dynamics
DEFF Research Database (Denmark)
Winter, Christian; Ernstsen, Verner Brandbyge; Noormets, Riko
. An assessment of bedform migration was achieved, as the growth and displacement of every single constituent can be distinguished. It can be shown that the changes in amplitude remain small for all harmonic constituents, whereas the phase shifts differ significantly. Thus the harmonics can be classified....... The proposed method overcomes the above mentioned problems of common descriptive analysis as it is an objective and straightforward mathematical process. The spectral decomposition of superimposed dunes allows a detailed description and analysis of dune patterns and migration....
Spectral Properties of Schwarzschild Instantons
Jante, Rogelio
2016-01-01
We study spectral properties of the Dirac and scalar Laplace operator on the Euclidean Schwarzschild space, both twisted by a family of abelian connections with anti-self-dual curvature. We show that the zero-modes of the gauged Dirac operator, first studied by Pope, take a particularly simple form in terms of the radius of the Euclidean time orbits, and interpret them in the context of geometric models of matter. For the gauged Laplace operator, we study the spectrum of bound states numerically and observe that it can be approximated with remarkable accuracy by that of the exactly solvable gauged Laplace operator on the Euclidean Taub-NUT space.
Spectral Methods in Spatial Statistics
Directory of Open Access Journals (Sweden)
Kun Chen
2014-01-01
Full Text Available When the spatial domain becomes extremely large, it is very difficult, if not impossible, to evaluate the covariance matrix determined by the set of location distances, even for a gridded stationary Gaussian process. To alleviate the numerical challenges, we construct a nonparametric estimator, a spatial version of the periodogram, to represent the sample properties in the frequency domain, because the periodogram requires fewer computational operations via the fast Fourier transform algorithm. Under some regularity conditions on the process, we investigate the asymptotic unbiasedness of the periodogram as an estimator of the spectral density function and establish the convergence rate.
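A minimal sketch of the gridded-field periodogram described above (our own illustration, not the authors' code): demean the field, apply the 2-D FFT, and normalize by the number of grid points so that the periodogram values average to the sample variance:

```python
import numpy as np

def periodogram_2d(field):
    """Sample periodogram of a gridded (stationary) field:
    I(w) = |DFT(X - mean(X))|^2 / N, computed with the FFT.
    With this normalization, I.mean() equals the sample variance of X."""
    X = np.asarray(field, dtype=float)
    n = X.size
    F = np.fft.fft2(X - X.mean())   # demean so the DC bin vanishes
    return (np.abs(F) ** 2) / n
```

The FFT makes this O(N log N) in the number of grid points, versus the O(N^2) (or worse) cost of working with the full covariance matrix, which is the computational advantage the abstract refers to.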
Science with CMB spectral distortions
Chluba, Jens
2014-01-01
The measurements of COBE/FIRAS have shown that the CMB spectrum is extremely close to a perfect blackbody. There are, however, a number of processes in the early Universe that should create spectral distortions at a level which is within reach of present day technology. In this talk, I will give a brief overview of recent theoretical and experimental developments, explaining why future measurements of the CMB spectrum will open up an unexplored window to early-universe and particle physics with possible non-standard surprises but also several guaranteed signals awaiting us.
[Spectral emissivity of thin films].
Zhong, D
2001-02-01
In this paper, the contribution of multiple reflections within a thin film to the spectral emissivity of thin films of low absorption is discussed. The expression for the emissivity of thin films derived here is related to the film thickness d and the optical constants n(lambda) and k(lambda). It is shown that in the special case d → ∞ the emissivity of a thin film is equivalent to that of the bulk material. Realistic numerical and more precise general numerical results for the dependence of the emissivity on d, n(lambda) and k(lambda) are given.
Subnanosecond spectral diffusion measurement using photon correlation
Sallen, Gregory; Aichele, Thomas; André, Régis; Besombes, Lucien; Bougerol, Catherine; Richard, Maxime; Tatarenko, Serge; Kheng, Kuntheak; Poizat, Jean-Philippe; 10.1038/nphoton.2010.174
2012-01-01
Spectral diffusion is the random spectral jumping of a narrow line caused by a fluctuating environment. It is an important issue in spectroscopy, because the observed spectral broadening prevents access to the intrinsic line properties. However, its characteristic parameters provide local information on the environment of a light emitter embedded in a solid matrix, or moving within a fluid, leading to numerous applications in physics and biology. We present a new experimental technique for measuring spectral diffusion based on photon correlations within a spectral line. Autocorrelation on half of the line and cross-correlation between the two halves give a quantitative value of the spectral diffusion time, with a resolution limited only by the correlation set-up. We have measured the spectral diffusion of the photoluminescence of a single light emitter with a time resolution of 90 ps, exceeding the best resolution reported to date by four orders of magnitude.
Language identification using spectral and prosodic features
Rao, K Sreenivasa; Maity, Sudhamay
2015-01-01
This book discusses the impact of spectral features extracted from frame level, glottal closure regions, and pitch-synchronous analysis on the performance of language identification systems. In addition to spectral features, the authors explore prosodic features such as intonation, rhythm, and stress for discriminating between languages. They present how the proposed spectral and prosodic features capture language-specific information from two complementary aspects, showing how the development of a language identification (LID) system using the combination of spectral and prosodic features will enhance the accuracy of identification as well as improve the robustness of the system. This book provides methods to extract the spectral and prosodic features at various levels, and also suggests the appropriate models for developing robust LID systems according to specific spectral and prosodic features. Finally, the book discusses various combinations of spectral and prosodic features, and the desire...
Planck 2013 results. IX. HFI spectral response
Ade, P A R; Armitage-Caplan, C; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bobin, J; Bock, J J; Bond, J R; Borrill, J; Bouchet, F R; Boulanger, F; Bridges, M; Bucher, M; Burigana, C; Cardoso, J -F; Catalano, A; Challinor, A; Chamballu, A; Chary, R -R; Chen, X; Chiang, L -Y; Chiang, H C; Christensen, P R; Church, S; Clements, D L; Colombi, S; Colombo, L P L; Combet, C; Comis, B; Couchot, F; Coulais, A; Crill, B P; Curto, A; Cuttaia, F; Danese, L; Davies, R D; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Delouis, J -M; Désert, F -X; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Falgarone, E; Finelli, F; Forni, O; Frailis, M; Franceschi, E; Galeotta, S; Ganga, K; Giard, M; Giraud-Héraud, Y; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Hansen, F K; Hanson, D; Harrison, D; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hovest, W; Huffenberger, K M; Hurier, G; Jaffe, T R; Jaffe, A H; Jones, W C; Juvela, M; Keihänen, E; Keskitalo, R; Kisner, T S; Kneissl, R; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Leahy, J P; Leonardi, R; Leroy, C; Lesgourgues, J; Liguori, M; Lilje, P B; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maffei, B; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Mazzotta, P; McGehee, P; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Miville-Deschênes, M -A; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Munshi, D; Murphy, J A; Naselsky, P; Nati, F; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; North, C; Noviello, F; Novikov, D; Novikov, I; 
Osborne, S; Oxborrow, C A; Paci, F; Pagano, L; Pajot, F; Paoletti, D; Pasian, F; Patanchon, G; Perdereau, O; Perotto, L; Perrotta, F; Piacentini, F; Piat, M; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Reinecke, M; Remazeilles, M; Renault, C; Ricciardi, S; Riller, T; Ristorcelli, I; Rocha, G; Rosset, C; Roudier, G; Rusholme, B; Santos, D; Savini, G; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sudiwala, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Terenzi, L; Tomasi, M; Tristram, M; Tucci, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Yvon, D; Zacchei, A; Zonca, A
2014-01-01
The Planck High Frequency Instrument (HFI) spectral response was determined through a series of ground-based tests conducted with the HFI focal plane in a cryogenic environment prior to launch. The main goal of the spectral transmission tests was to measure the relative spectral response (including out-of-band signal rejection) of all HFI detectors. This was determined by measuring the output of a continuously scanned Fourier transform spectrometer coupled with all HFI detectors. As there is no on-board spectrometer within HFI, the ground-based spectral response experiments provide the definitive data set for the relative spectral calibration of the HFI. The spectral response of the HFI is used in Planck data analysis and component separation; this includes extraction of CO emission observed within Planck bands, dust emission, Sunyaev-Zeldovich sources, and intensity-to-polarization leakage. The HFI spectral response data have also been used to provide unit conversion and colour correction analysis tools. Ver...
Amore, Paolo; Fernandez, Francisco M; Rösler, Boris
2015-01-01
We apply second-order finite differences to calculate the lowest eigenvalues of the Helmholtz equation for complicated non-tensor domains in the plane, using different grids which sample the border of the domain exactly. We show that applying Richardson and Padé-Richardson extrapolation to a set of finite-difference eigenvalues corresponding to different grids allows one to obtain extremely precise values. Where possible, we have assessed the precision of our extrapolations by comparing them with the highly precise results obtained using the method of particular solutions. Our empirical findings suggest an asymptotic nature of the FD series. In all the cases studied, we are able to report numerical results which are more precise than those available in the literature.
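The extrapolation idea described in this abstract can be illustrated with a minimal sketch (not the authors' code): assuming a finite-difference eigenvalue estimate behaves as E(h) ≈ E0 + c·h² + d·h⁴ for grid spacing h, successive Richardson steps cancel the leading error terms. The function name, model coefficients, and grid spacings below are invented for illustration.

```python
import numpy as np

def richardson(values, hs, order=2):
    """One Richardson extrapolation step for estimates whose error ~ c*h**order."""
    out = []
    for (v1, h1), (v2, h2) in zip(zip(values, hs), zip(values[1:], hs[1:])):
        r = (h1 / h2) ** order          # ratio of leading error terms
        out.append((r * v2 - v1) / (r - 1))
    return out

# Toy model of FD eigenvalue estimates: E(h) = E0 + c*h^2 + d*h^4 (exact E0 chosen arbitrarily)
E0, c, d = 9.8696, 1.3, 0.7
hs = [0.1, 0.05, 0.025]                 # successively halved grid spacings
vals = [E0 + c * h**2 + d * h**4 for h in hs]

once = richardson(vals, hs, order=2)    # cancels the h^2 error term
twice = richardson(once, hs[1:], order=4)  # cancels the remaining h^4 term
```

With the toy error model the second step recovers E0 to machine precision; on real FD eigenvalues the gain is limited by the asymptotic character of the series, which is the point the abstract raises.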
On extrapolation blowups in the
2006-01-01
Yano's extrapolation theorem, dating back to 1951, establishes boundedness properties of a subadditive operator acting continuously in for close to and/or taking into as and/or with norms blowing up at speed and/or , . Here we give answers, in terms of Zygmund, Lorentz-Zygmund and small Lebesgue spaces, to what happens if as . The study has been motivated by current investigations of convolution maximal functions in stochastic analysis, where the problem occurs for . We also touch the ...
Institute of Scientific and Technical Information of China (English)
Shu-hua Zhang; Tao Lin; Yan-ping Lin; Ming Rao
2001-01-01
In this paper we will show that Richardson extrapolation can be used to enhance the numerical solution generated by a Petrov-Galerkin finite element method for the initial-value problem for a nonlinear Volterra integro-differential equation. As by-products, we will also show that these enhanced approximations can be used to form a class of a posteriori estimators for this Petrov-Galerkin finite element method. Numerical examples are supplied to illustrate the theoretical results.
Methodological Analysis of Extrapolating Input-Output Tables of China
Institute of Scientific and Technical Information of China (English)
马向前; 任若恩
2004-01-01
This paper compares the estimation precision and applicability of extrapolating China's input-output table series based on the Kuroda and RAS approaches, respectively. The statistical results showed that the Kuroda approach slightly outperformed the RAS method, and that both estimates had large errors when the time periods were longer than five years, which is ascribed to significant continued changes in China's industrial structure. However, the modified Kuroda approach is applicable for updating the input-output tables of China.
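The RAS approach mentioned here is a biproportional balancing scheme: rows and columns of a base-year transactions matrix are alternately rescaled until they match known target margins. A minimal sketch follows, with an invented 2x2 matrix and made-up target totals purely for illustration; it is not the paper's implementation.

```python
import numpy as np

def ras_update(Z0, row_targets, col_targets, iters=200, tol=1e-10):
    """Biproportional (RAS) balancing: alternately scale rows and columns of a
    base transactions matrix Z0 until its margins match the target totals."""
    Z = Z0.astype(float).copy()
    for _ in range(iters):
        Z *= (row_targets / Z.sum(axis=1))[:, None]   # row scaling (the "R" step)
        Z *= (col_targets / Z.sum(axis=0))[None, :]   # column scaling (the "S" step)
        if np.allclose(Z.sum(axis=1), row_targets, atol=tol):
            break
    return Z

Z0 = np.array([[30.0, 20.0], [10.0, 40.0]])  # hypothetical base-year flows
u = np.array([60.0, 55.0])                   # target row sums (new year)
v = np.array([45.0, 70.0])                   # target column sums; sum(u) == sum(v)
Z = ras_update(Z0, u, v)
```

The requirement that the target row and column totals share the same grand total is what makes the alternating scaling converge; violating it leaves the loop oscillating.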
Directory of Open Access Journals (Sweden)
Hidehiko Okamoto
2012-05-01
Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.
Ciambella, J; Paolone, A; Vidoli, S
2014-09-01
We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials which are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior to higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are then rated according to the accuracy of the identification and their robustness to extrapolation. It is shown that all kernels whose decay rate is too fast lead to poor fitting and high errors when the material behavior is extrapolated to broader frequency ranges.
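As a rough illustration of the fractional-model family named in this abstract, the Havriliak-Negami form (which reduces to the exponential Debye kernel when both shape exponents equal 1) can be evaluated in the frequency domain as a complex modulus. All parameter values below are invented for illustration, not taken from the paper's fits.

```python
import numpy as np

def havriliak_negami(omega, g_inf, g0, tau, alpha, beta):
    """Complex modulus of a Havriliak-Negami relaxation (standard form):
    tends to g0 as omega -> 0 and to g_inf as omega -> infinity.
    alpha = beta = 1 recovers the single-relaxation (Debye) kernel."""
    return g_inf + (g0 - g_inf) / (1.0 + (1j * omega * tau) ** alpha) ** beta

# Evaluate over the 0.1-10 Hz band discussed in the abstract (parameters hypothetical)
omega = 2 * np.pi * np.logspace(-1, 1, 50)
G = havriliak_negami(omega, g_inf=5.0, g0=1.0, tau=0.2, alpha=0.7, beta=0.5)
storage, loss = G.real, G.imag   # storage and loss moduli vs. frequency
```

The broad, slowly decaying memory of such fractional kernels is what makes them more robust than exponential kernels when extrapolated beyond the tested band, which is the paper's central comparison.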
Mossetti, Stefano; de Bartolo, Daniela; Veronese, Ivan; Cantone, Marie Claire; Cosenza, Cristina; Nava, Elisa
2016-12-01
International and national organizations have formulated guidelines establishing limits for occupational and residential electromagnetic field (EMF) exposure at high-frequency fields. Italian legislation fixed 20 V/m as a limit for public protection from exposure to EMFs in the frequency range 0.1 MHz-3 GHz and 6 V/m as a reference level. Recently, the law was changed and the reference level must now be evaluated as the 24-hour average value, instead of the previous highest 6 minutes in a day. The law refers to a technical guide (CEI 211-7/E, published in 2013) for the extrapolation techniques that public authorities have to use when assessing exposure for compliance with limits. In this work, we present measurements carried out with a vectorial spectrum analyzer to identify critical technical aspects of these extrapolation techniques when applied to UMTS and LTE signals. We also focused on finding a good balance between statistically significant values and the logistical management of control activities, since the in situ signal trend is not known. Measurements were repeated several times over several months and for different mobile companies. The outcome presented in this article allowed us to evaluate the reliability of the extrapolation results obtained and provides a starting point for defining operating procedures.
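The two compliance criteria contrasted in this abstract, the worst 6-minute average versus the 24-hour average, can be sketched on synthetic field samples. The sampling rate, noise model, and numbers below are assumptions for illustration only, not measurement data from the study.

```python
import numpy as np

def worst_six_minutes(e_field, sample_s=60):
    """Highest 6-minute running mean of a field-strength series
    (the pre-2013 Italian compliance criterion)."""
    win = int(6 * 60 / sample_s)             # samples per 6-minute window
    kernel = np.ones(win) / win
    return np.convolve(e_field, kernel, mode="valid").max()

# One synthetic day of E-field samples (V/m), 1 sample per minute.
rng = np.random.default_rng(0)
e = 3.0 + rng.rayleigh(1.0, 24 * 60)

daily_avg = e.mean()                         # current criterion: 24 h average
peak6 = worst_six_minutes(e)                 # old criterion: worst 6 min
```

By construction the worst short-window mean can never fall below the daily mean, so the 24-hour criterion is the less conservative of the two for a fluctuating signal; that gap is what makes the extrapolation procedure in CEI 211-7/E sensitive to the unknown in-situ signal trend.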
Spectral compression of single photons
Lavoie, Jonathan; Wright, Logan G; Fedrizzi, Alessandro; Resch, Kevin J
2013-01-01
Photons are critical to quantum technologies since they can be used for virtually all quantum information tasks: in quantum metrology, as the information carrier in photonic quantum computation, as a mediator in hybrid systems, and to establish long-distance networks. The physical characteristics of photons in these applications differ drastically; spectral bandwidths span 12 orders of magnitude, from 50 THz for quantum-optical coherence tomography to 50 Hz for certain quantum memories. Combining these technologies requires coherent interfaces that reversibly map centre frequencies and bandwidths of photons to avoid excessive loss. Here we demonstrate bandwidth compression of single photons by a factor of 40, and tunability over a range 70 times that bandwidth, via sum-frequency generation with chirped laser pulses. This constitutes a time-to-frequency interface for light capable of converting time-bin to colour entanglement and enables ultrafast timing measurements. It is a step toward arbitrary waveform generatio...
A Spectral Canonical Electrostatic Algorithm
Webb, Stephen D
2015-01-01
Studying single-particle dynamics over many periods of oscillation is a well-understood problem solved using symplectic integration. Such integration schemes derive their update sequence from an approximate Hamiltonian, guaranteeing that the geometric structure of the underlying problem is preserved. Simulating a self-consistent system over many oscillations can introduce numerical artifacts such as grid heating. This unphysical heating stems from using non-symplectic methods on Hamiltonian systems. With this guidance, we derive an electrostatic algorithm using a discrete form of Hamilton's Principle. The resulting algorithm, a gridless spectral electrostatic macroparticle model, does not exhibit the unphysical heating typical of most particle-in-cell methods. We present results for a two-body problem as an example of the algorithm's energy- and momentum-conserving properties.
Active spectral imaging and mapping
Steinvall, Ove
2014-04-01
Active imaging and mapping using lasers as illumination sources have been of increasing interest during the last decades. Applications include defense and security, remote sensing, medicine, robotics, and others. So far, these laser systems have mostly been based on a fixed-wavelength laser. Recent advances in lasers enable emission of tunable, multiline, or broadband radiation, which together with the development of array detectors will extend the capabilities of active imaging and mapping. This paper will review some of the recent work on active imaging, mainly for defense and security and remote sensing applications. A short survey of basic lidar relations and present fixed-wavelength laser systems is followed by a review of the benefits of adding the spectral dimension to active and/or passive electro-optical systems.
Spectral emissivity of cirrus clouds
Beck, Gordon H.; Davis, John M.; Cox, Stephen K.
1993-01-01
The inference of cirrus cloud properties has many important applications, including global climate studies, radiation budget determination, remote sensing techniques and oceanic studies from satellites. Data taken at the Parsons, Kansas site during the FIRE II project are used for this study. On November 26 there were initially clear sky conditions, gradually giving way to a progressively thickening cirrus shield over a period of a few hours. Interferometer, radiosonde and lidar data were taken throughout this event. Two techniques are used to infer the downward spectral emittance of the observed cirrus layer. One uses only measurements and the other involves measurements and FASCODE III calculations. FASCODE III is a line-by-line radiance/transmittance model developed at the Air Force Geophysics Laboratory.
Spectral Selectivity Applied To Hybrid Concentration Systems
Hamdy, M. A.; Luttmann, F.; Osborn, D. E.; Jacobson, M. R.; MacLeod, H. A.
1985-12-01
The efficiency of conversion of concentrated solar energy can be improved by separating the solar spectrum into portions matched to specific photoquantum processes, with the balance used for photothermal conversion. The basic approaches to spectrally selective beam splitting are presented. A detailed simulation analysis using TRNSYS is developed for a spectrally selective hybrid photovoltaic/photothermal concentrating system. The analysis shows definite benefits to a spectrally selective approach.
Spectral mapping theorems a bluffer's guide
Harte, Robin
2014-01-01
Written by an author who was at the forefront of developments in multi-variable spectral theory during the seventies and the eighties, this guide sets out to describe in detail the spectral mapping theorem in one, several and many variables. The basic algebraic systems – semigroups, rings and linear algebras – are summarised, and then topological-algebraic systems, including Banach algebras, to set up the basic language of algebra and analysis. Spectral Mapping Theorems is written in an easy-to-read and engaging manner and will be useful for both the beginner and expert. It will be of great importance to researchers and postgraduates studying spectral theory.
Spectral Lag Evolution among γ-Ray Burst Pulses
Indian Academy of Sciences (India)
Lan-Wei Jia; Yun-Feng Liang; En-Wei Liang
2014-09-01
We analyse the spectral lag evolution of γ-ray burst (GRB) pulses observed by CGRO/BATSE. No universal spectral lag evolution feature or pulse luminosity-lag relation within a GRB is observed. Our results suggest that the spectral lag is due to the radiation physics and dynamics of a given emission episode, possibly due to longer-lasting emission in a lower energy band, and that the spectral lag may not be an intrinsic parameter for discriminating between long and short GRBs.