WorldWideScience

Sample records for linear extrapolation method

  1. Technique of Critical Current Density Measurement of Bulk Superconductor with Linear Extrapolation Method

    International Nuclear Information System (INIS)

    Adi, Wisnu Ari; Sukirman, Engkir; Winatapura, Didin S.

    2000-01-01

A technique for measuring the critical current density (Jc) of HTc bulk ceramic superconductors has been developed using linear extrapolation with the four-point-probe method. Conventional measurement of the critical current density of HTc bulk ceramic superconductors usually damages the contact resistance. To reduce this damage, we introduce an extrapolation method. The extrapolated data show that the critical current densities Jc for YBCO (123) and BSCCO (2212) at 77 K are 10.85(6) A.cm^-2 and 14.46(6) A.cm^-2, respectively. This technique is easier and simpler, and the applied current is low, so it does not damage the contact resistance of the sample. We expect that the method can provide a better solution for bulk superconductor applications. Key words: superconductor, critical temperature, critical current density
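The record does not give the fitting details, but the underlying idea of extracting Jc by linear extrapolation of a four-point-probe V-I curve can be sketched as follows. This is a minimal illustration with synthetic data; the function name, the flux-flow model, and the choice of extrapolating the linear branch back to zero voltage are assumptions, not the authors' exact procedure:

```python
import numpy as np

def jc_linear_extrapolation(current, voltage, area_cm2, fit_mask):
    """Estimate the critical current by fitting the linear (flux-flow) part
    of a V-I curve and extrapolating it back to V = 0; Jc = Ic / area."""
    slope, intercept = np.polyfit(current[fit_mask], voltage[fit_mask], 1)
    ic = -intercept / slope          # current where the fitted line crosses V = 0
    return ic / area_cm2             # critical current density in A/cm^2

# Synthetic V-I data: zero voltage below Ic = 2.0 A, linear rise above it.
i = np.linspace(0.0, 4.0, 41)
v = np.where(i > 2.0, 0.5 * (i - 2.0), 0.0)
jc = jc_linear_extrapolation(i, v, area_cm2=0.2, fit_mask=i > 2.5)
print(round(jc, 2))   # Ic = 2.0 A over 0.2 cm^2 -> Jc = 10.0 A/cm^2
```

Because only the linear branch is fitted and the sample never carries current far above Ic, the low-current advantage claimed in the abstract is preserved in this sketch.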

  2. Linear extrapolation distance for a black cylindrical control rod with the pulsed neutron method

    International Nuclear Information System (INIS)

    Loewenhielm, G.

    1978-03-01

The objective of this experiment was to measure the linear extrapolation distance for a central black cylindrical control rod in a cylindrical water moderator. The radii of both the control rod and the moderator were varied. The pulsed neutron technique was used and the decay constant was measured for both a homogeneous and a heterogeneous system. From the difference in the decay constants the extrapolation distance could be calculated. The conclusion is that within experimental error it is safe to use the approximate formula given by Pellaud or the more exact one given by Kavenoky. We can also conclude that linear anisotropic scattering is accounted for correctly in the approximate formulas given by Pellaud and by Prinja and Williams.

  3. A regularization method for extrapolation of solar potential magnetic fields

    Science.gov (United States)

    Gary, G. A.; Musielak, Z. E.

    1992-01-01

    The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
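The regularization idea in this record, damping the exponentially growing Fourier modes of the ill-posed Cauchy potential problem before inverting the transform, can be sketched in one dimension. The Gaussian filter standing in for the Tikhonov smoothing, the function name, and the test field are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def regularized_upward_extrapolation(b0, dx, z, sigma):
    """Extrapolate boundary data b0(x) upward to height z for a potential
    (Laplace) field. The growing Fourier modes exp(|k| z) make the Cauchy
    problem ill-posed; a Gaussian (Tikhonov-style) filter exp(-(sigma k)^2/2)
    damps the noise-amplifying high wavenumbers before inversion."""
    k = 2.0 * np.pi * np.fft.fftfreq(b0.size, d=dx)
    bk = np.fft.fft(b0)
    bk *= np.exp(np.abs(k) * z) * np.exp(-0.5 * (sigma * k) ** 2)
    return np.fft.ifft(bk).real

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
b0 = np.cos(x) + 1e-6 * np.random.default_rng(0).standard_normal(x.size)
bz = regularized_upward_extrapolation(b0, dx=x[1] - x[0], z=1.0, sigma=0.5)
# The k = 1 signal mode grows like exp(z), while the measurement noise at
# high wavenumbers is suppressed by the filter instead of blowing up.
print(round(bz.max(), 2))   # ~ exp(1 - 0.125) = 2.40
```

Without the filter term the 1e-6 noise at the grid's highest wavenumbers would be amplified by exp(|k| z) into garbage; with it, the upper error bound mentioned in the abstract becomes controllable by sigma.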

  4. Extrapolation methods theory and practice

    CERN Document Server

    Brezinski, C

    1991-01-01

This volume is a self-contained, exhaustive exposition of extrapolation-method theory and of the various algorithms and procedures for accelerating the convergence of scalar and vector sequences. Many subroutines (written in FORTRAN 77) with instructions for their use are provided on a floppy disk in order to demonstrate to those working with sequences the advantages of extrapolation methods. Many numerical examples showing the effectiveness of the procedures and a subsequent chapter on applications are also provided, including some never-before-published results and applications.

  5. Standardization of electron-capture and complex beta-gamma radionuclides by the efficiency extrapolation method

    International Nuclear Information System (INIS)

    Grigorescu, L.

    1976-07-01

The efficiency extrapolation method was improved by establishing ''linearity conditions'' for the discrimination on the gamma channel of the coincidence equipment. These conditions were proved to eliminate the systematic error of the method. A control procedure for the fulfilment of the linearity conditions and for estimation of the residual systematic error is given. For low-energy gamma transitions an ''equivalent scheme principle'' was established, which allows for a correct application of the method. Solutions of Cs-134, Co-57, Ba-133 and Zn-65 were standardized with an ''effective standard deviation'' of 0.3-0.7 per cent. For Zn-65 ''special linearity conditions'' were applied. (author)
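The core of the efficiency extrapolation method (here in its common 4π(beta)-gamma coincidence form) is a linear fit extrapolated to zero inefficiency. A minimal sketch on idealized synthetic data; the function name and the exactly linear bias model are assumptions for illustration, not this record's procedure:

```python
import numpy as np

def activity_by_efficiency_extrapolation(n_beta, n_gamma, n_coinc):
    """Coincidence counting: plot N_b * N_g / N_c against (1 - eff) / eff,
    where eff = N_c / N_g, and extrapolate linearly to zero inefficiency.
    The intercept estimates the source activity N0."""
    eff = n_coinc / n_gamma
    x = (1.0 - eff) / eff
    y = n_beta * n_gamma / n_coinc
    slope, intercept = np.polyfit(x, y, 1)
    return intercept

# Synthetic data: true activity N0 = 1000 /s with a linear efficiency bias.
n0, slope_true = 1000.0, 50.0
eff = np.array([0.70, 0.75, 0.80, 0.85, 0.90])
n_gamma = np.full_like(eff, 400.0)
n_coinc = eff * n_gamma
n_beta = (n0 + slope_true * (1 - eff) / eff) * n_coinc / n_gamma
act = activity_by_efficiency_extrapolation(n_beta, n_gamma, n_coinc)
print(round(act))   # recovers the true N0 of 1000
```

The ''linearity conditions'' discussed in the abstract are exactly what justifies fitting a straight line here: if the bias is not linear in (1 - eff)/eff, the intercept carries a systematic error.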

  6. Multiparameter extrapolation and deflation methods for solving equation systems

    Directory of Open Access Journals (Sweden)

    A. J. Hughes Hallett

    1984-01-01

Most models in economics and the applied sciences are solved by first-order iterative techniques, usually those based on the Gauss-Seidel algorithm. This paper examines the convergence of multiparameter extrapolations (accelerations) of first-order iterations, as an improved approximation to the Newton method for solving arbitrary nonlinear equation systems. It generalises my earlier results on single-parameter extrapolations. Richardson's generalised method and the deflation method for detecting successive solutions in nonlinear equation systems are also presented as multiparameter extrapolations of first-order iterations. New convergence results are obtained for these methods.
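The multiparameter accelerations discussed here generalize the single-parameter case, which is easy to sketch: take one Gauss-Seidel sweep and extrapolate along the update direction with a relaxation parameter alpha. The function names and the 3x3 test system are illustrative assumptions, not from the paper:

```python
import numpy as np

def gauss_seidel_step(A, b, x):
    """One Gauss-Seidel sweep for A x = b."""
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def extrapolated_gs(A, b, alpha, iters):
    """Single-parameter extrapolation of a first-order iteration:
    x_new = x + alpha * (GS(x) - x).  alpha = 1 recovers plain Gauss-Seidel;
    alpha != 1 accelerates (or damps) the fixed-point map."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x + alpha * (gauss_seidel_step(A, b, x) - x)
    return x

A = np.array([[4.0, 1.0, 1.0], [1.0, 4.0, 1.0], [1.0, 1.0, 4.0]])
b = np.array([6.0, 6.0, 6.0])
x = extrapolated_gs(A, b, alpha=1.1, iters=30)
print(np.round(x, 6))   # exact solution is [1, 1, 1]
```

The paper's multiparameter version replaces the scalar alpha by several parameters chosen to better approximate a Newton step; the scalar sketch above is the base case it generalizes.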

  7. The optimized expansion method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2013-01-01

Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they provide dispersion-free, artifact-free, and highly accurate solutions of the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain operator. In this abstract, we propose an optimized expansion method that approximates this operator with a low-rank representation. The rank defines the number of inverse FFTs required per time-extrapolation step, and thus a lower rank admits faster extrapolations. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its low-rank representation. Thus, we obtain more accurate wavefields using a lower-rank representation, and hence cheaper extrapolations. The optimization that defines the low-rank representation depends only on the velocity model; it is done only once, and remains valid for a full reverse time migration (many shots) or one iteration of full waveform inversion. Applications on the BP model yielded superior results to those obtained using the decomposition approach. For transversely isotropic media, the solutions were free of shear-wave artifacts and do not require that η > 0.

  8. The effects of different expansions of the exit distribution on the extrapolation length for linearly anisotropic scattering

    International Nuclear Information System (INIS)

    Bulut, S.; Guelecyuez, M.C.; Kaskas, A.; Tezcan, C.

    2007-01-01

The H_N and singular eigenfunction methods are used to determine the neutron distribution everywhere in a source-free half-space with zero incident flux for a linearly anisotropic scattering kernel. The singular eigenfunction expansion of the method of elementary solutions is used. The orthogonality relations of the discrete and continuous eigenfunctions for linearly anisotropic scattering provide the determination of the expansion coefficients. Different expansions of the exit distribution are used: the expansion in powers of μ, the expansion in terms of Legendre polynomials, and the expansion in powers of 1/(1+μ). The results are compared to each other. In the second part of our work, the transport equation and the infinite-medium Green function are used. The numerical results of the extrapolation length obtained for the different expansions are discussed. (orig.)

  9. Extrapolations of nuclear binding energies from new linear mass relations

    DEFF Research Database (Denmark)

    Hove, D.; Jensen, A. S.; Riisager, K.

    2013-01-01

We present a method to extrapolate nuclear binding energies from known values for neighboring nuclei. We select four specific mass relations constructed to eliminate the smooth variation of the binding energy as a function of nucleon numbers. The fast odd-even variations are avoided by comparing nuclei...

  10. Application of the largest Lyapunov exponent and non-linear fractal extrapolation algorithm to short-term load forecasting

    International Nuclear Information System (INIS)

    Wang Jianzhou; Jia Ruiling; Zhao Weigang; Wu Jie; Dong Yao

    2012-01-01

Highlights: ► The maximal predictive step size is determined by the largest Lyapunov exponent. ► A proper forecasting step size is applied to load demand forecasting. ► The improved approach is validated by actual load demand data. ► The non-linear fractal extrapolation method is compared with three forecasting models. ► Performance of the models is evaluated by three different error measures. - Abstract: Precise short-term load forecasting (STLF) plays a key role in unit commitment, maintenance and economic dispatch problems. Employing a subjective and arbitrary predictive step size is one of the most important factors causing low forecasting accuracy. To solve this problem, the largest Lyapunov exponent is adopted to estimate the maximal predictive step size, so that the step size used in forecasting is no larger than this maximum. In addition, this paper considers a seldom-used forecasting model based on the non-linear fractal extrapolation (NLFE) algorithm to improve prediction accuracy. The suitability and superiority of the two solutions are illustrated through an application to real load forecasting using New South Wales electricity load data from the Australian National Electricity Market. Meanwhile, three forecasting models that are widely used in STLF: the gray model, the seasonal autoregressive integrated moving average approach and the support vector machine method, are selected for comparison with the NLFE algorithm. Comparison results also show that the NLFE model is outstanding, effective, practical and feasible.
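The predictability-horizon idea used here, that forecasts are only reliable up to roughly 1/λ steps, where λ is the largest Lyapunov exponent, can be sketched on a toy chaotic system. The logistic map stands in for the load series (an assumption for illustration; the paper estimates λ from measured load data, not from a known map):

```python
import numpy as np

def largest_lyapunov_logistic(x0, n):
    """Largest Lyapunov exponent of the logistic map x -> 4x(1-x),
    estimated as the orbit average of log|f'(x)| with f'(x) = 4 - 8x."""
    x, total = x0, 0.0
    for _ in range(n):
        total += np.log(abs(4.0 - 8.0 * x))
        x = 4.0 * x * (1.0 - x)
    return total / n

lam = largest_lyapunov_logistic(0.1234, 100000)
max_steps = 1.0 / lam   # rough maximal predictive step size, in map iterations
print(round(lam, 2))    # theory for this map: ln 2 ~ 0.69
```

A positive λ means initial errors grow like exp(λ t), so choosing a forecasting step size beyond about 1/λ (here roughly 1.4 iterations) cannot beat the error growth, which is the criterion the abstract applies to load demand.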

  11. The optimized expansion based low-rank method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong

    2014-03-01

Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media, because they tend to be dispersion- and artifact-free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower-rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotropic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of the shear wave artifacts, and the algorithm does not require that η > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.

  12. The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation

    Directory of Open Access Journals (Sweden)

    Bing-Yuan Pu

    2013-01-01

An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
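The "periodically combine an extrapolation step with a basic iteration" pattern described here can be sketched with plain power iteration accelerated by a component-wise Aitken delta-squared step. This is a simple stand-in for the paper's vector extrapolation and multilevel aggregation (the function name, the guard logic, and the tiny link graph are assumptions for illustration):

```python
import numpy as np

def pagerank_aitken(P, d=0.85, tol=1e-10, accel_every=5):
    """Power iteration for PageRank on a column-stochastic matrix P,
    periodically accelerated with a component-wise Aitken delta-squared
    extrapolation built from the last three iterates."""
    n = P.shape[0]
    G = d * P + (1.0 - d) / n          # dense Google matrix (teleportation)
    v_prev2, v_prev1 = None, None
    v = np.full(n, 1.0 / n)
    for it in range(1000):
        v_prev2, v_prev1 = v_prev1, v
        v = G @ v
        v /= v.sum()
        if it % accel_every == accel_every - 1 and v_prev2 is not None:
            denom = v - 2.0 * v_prev1 + v_prev2
            safe = np.abs(denom) > 1e-14       # avoid 0/0 on converged entries
            v_acc = v.copy()
            v_acc[safe] -= (v - v_prev1)[safe] ** 2 / denom[safe]
            if (v_acc > 0).all():              # keep it a probability vector
                v = v_acc / v_acc.sum()
        if np.abs(v - v_prev1).max() < tol:
            break
    return v

# Tiny 3-page link graph, columns sum to 1.
P = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0]])
v = pagerank_aitken(P)
print(np.round(v, 3))
```

The extrapolation step is applied only every few sweeps, as in the abstract, because Aitken acceleration needs a few plain iterates between applications to be well defined.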

  13. NLT and extrapolated DLT: 3-D cinematography alternatives for enlarging the volume of calibration.

    Science.gov (United States)

    Hinrichs, R N; McLean, S P

    1995-10-01

This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that, when possible, one should use the DLT with a control object sufficiently large to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.

  14. A high precision extrapolation method in multiphase-field model for simulating dendrite growth

    Science.gov (United States)

    Yang, Cong; Xu, Qingyan; Liu, Baicheng

    2018-05-01

The phase-field method coupled with thermodynamic data has become a trend for predicting microstructure formation in technical alloys. Nevertheless, the frequent access to the thermodynamic database and the calculation of local equilibrium conditions can be time intensive. Extrapolation methods, which are derived from Taylor expansion, can provide approximate results with high computational efficiency, and have proven successful in applications. This paper presents a high-precision second-order extrapolation method for calculating the driving force of phase transformation. To obtain the phase compositions, different methods of solving the quasi-equilibrium condition are tested, and the M-slope approach is chosen for its superior accuracy. The developed second-order extrapolation method, along with the M-slope approach and the first-order extrapolation method, is applied to simulate dendrite growth in a Ni-Al-Cr ternary alloy. The results of the extrapolation methods are compared with the exact solution with respect to the composition profile and dendrite tip position, demonstrating the high precision and efficiency of the newly developed algorithm. To accelerate the phase-field and extrapolation computation, a graphics processing unit (GPU) based parallel computing scheme is developed. The application to large-scale simulation of multi-dendrite growth in an isothermal cross-section demonstrates the ability of the developed GPU-accelerated second-order extrapolation approach for the multiphase-field model.

  15. Assessment of load extrapolation methods for wind turbines

    DEFF Research Database (Denmark)

    Toft, H.S.; Sørensen, John Dalsgaard; Veldkamp, D.

    2010-01-01

    an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima and the peak over...

  16. Assessment of Load Extrapolation Methods for Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard; Veldkamp, Dick

    2011-01-01

    , an approximate analytical solution for the distribution of the peaks is given by Rice. In the present paper, three different methods for statistical load extrapolation are compared with the analytical solution for one mean wind speed. The methods considered are global maxima, block maxima, and the peak over...

  17. Novel method of interpolation and extrapolation of functions by a linear initial value problem

    CSIR Research Space (South Africa)

    Shatalov, M

    2008-09-01

A novel method of function approximation using an initial value, linear, ordinary differential equation (ODE) is presented. The main advantage of this method is to obtain the approximation expressions in a closed form. This technique can be taught...

  18. π π scattering by pole extrapolation methods

    International Nuclear Information System (INIS)

    Lott, F.W. III.

    1978-01-01

A 25-inch hydrogen bubble chamber was used at the Lawrence Berkeley Laboratory Bevatron to produce 300,000 pictures of π+p interactions at an incident π+ momentum of 2.67 GeV/c. The 2-prong events were processed using the FSD and the FOG-CLOUDY-FAIR data reduction system. Events of the type π+p → π+pπ0 and π+p → π+π+n with momentum transfer to the proton -t ≤ 0.238 GeV² were selected. These events were used to extrapolate to the pion pole (t = m_π²) in order to investigate the ππ interaction with isospins of both T = 1 and T = 2. Two methods were used for the extrapolation: the original Chew-Low method developed in 1959 and the Durr-Pilkuhn method developed in 1965, which takes into account centrifugal-barrier penetration factors. At first it seemed that, while the Durr-Pilkuhn method gave better values for the total ππ cross section, the Chew-Low method gave better values for the angular distribution. Further analysis, however, showed that, if the requirement of total OPE (one-pion exchange) was dropped, then the Durr-Pilkuhn method gave more reasonable values for the angular distribution as well as for the total ππ cross section.

  19. π π scattering by pole extrapolation methods

    International Nuclear Information System (INIS)

    Lott, F.W. III.

    1977-01-01

A 25-inch hydrogen bubble chamber was used at the Lawrence Berkeley Laboratory Bevatron to produce 300,000 pictures of π+p interactions at an incident π+ momentum of 2.67 GeV/c. The 2-prong events were processed using the FSD and the FOG-CLOUDY-FAIR data reduction system. Events of the type π+p → π+pπ0 and π+p → π+π+n with momentum transfer to the proton -t ≤ 0.238 GeV² were selected. These events were used to extrapolate to the pion pole (t = m_π²) in order to investigate the ππ interaction with isospins of both T = 1 and T = 2. Two methods were used for the extrapolation: the original Chew-Low method developed in 1959 and the Durr-Pilkuhn method developed in 1965, which takes into account centrifugal-barrier penetration factors. At first it seemed that, while the Durr-Pilkuhn method gave better values for the total ππ cross section, the Chew-Low method gave better values for the angular distribution. Further analysis, however, showed that if the requirement of total OPE (one-pion exchange) were dropped, then the Durr-Pilkuhn method gave more reasonable values for the angular distribution as well as for the total ππ cross section.

  20. Extrapolated stabilized explicit Runge-Kutta methods

    Science.gov (United States)

    Martín-Vaquero, J.; Kleefeld, B.

    2016-12-01

Extrapolated Stabilized Explicit Runge-Kutta methods (ESERK) are proposed to solve multi-dimensional nonlinear partial differential equations (PDEs). In such methods the function must be evaluated n_t times per step, but the stability region is O(n_t²). Hence, the computational cost is O(n_t) times lower than for a traditional explicit algorithm. In this way, stiff problems for which implicit methods usually had to be used can be integrated with simple explicit evaluations. Therefore, they are especially well-suited for method of lines (MOL) discretizations of parabolic nonlinear multi-dimensional PDEs. In this work, first, s-stage first-order methods with extended stability along the negative real axis are obtained. They have slightly shorter stability regions than other traditional first-order stabilized explicit Runge-Kutta algorithms (also called Runge-Kutta-Chebyshev codes). Later, they are used to derive n_t-stage second- and fourth-order schemes using Richardson extrapolation. The stability regions of these fourth-order codes include the interval [-0.01 n_t², 0] (n_t being the total number of function evaluations), which is shorter than the stability regions of ROCK4 methods, for example. However, the new algorithms suffer neither from propagation of errors (as other Runge-Kutta-Chebyshev codes such as ROCK4 or DUMKA) nor from internal instabilities. Additionally, many other types of higher-order (and also lower-order) methods can be obtained easily in a similar way. These methods also allow adapting the step length at no extra cost. Hence, the stability domain is adapted precisely to the spectrum of the problem at the current time of integration in an optimal way, i.e., with a minimal number of additional stages. We compare the new techniques with other well-known algorithms, with good results in very stiff diffusion or reaction-diffusion multi-dimensional nonlinear equations.
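The Richardson extrapolation used above to raise the order of the stabilized schemes can be sketched in its simplest setting: combine an explicit Euler solution with step h and one with step h/2 so the leading O(h) error term cancels. The function names and the test problem y' = -y are illustrative assumptions, not the ESERK construction itself:

```python
import numpy as np

def euler(f, y0, t1, n):
    """Explicit Euler with n steps on [0, t1]."""
    h, y = t1 / n, y0
    for _ in range(n):
        y = y + h * f(y)
    return y

def euler_richardson(f, y0, t1, n):
    """Richardson extrapolation: for a first-order method the combination
    2 * y_{h/2} - y_h cancels the leading error term, giving second order."""
    return 2.0 * euler(f, y0, t1, 2 * n) - euler(f, y0, t1, n)

f = lambda y: -y
exact = np.exp(-1.0)
err_plain = abs(euler(f, 1.0, 1.0, 64) - exact)
err_extr = abs(euler_richardson(f, 1.0, 1.0, 64) - exact)
print(err_extr < err_plain)   # extrapolated result is far more accurate
```

ESERK applies the same cancellation trick to s-stage stabilized first-order schemes instead of plain Euler, which is how the second- and fourth-order codes in the abstract are derived.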

  1. Extrapolation method in the Monte Carlo Shell Model and its applications

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2011-01-01

We demonstrate how the energy-variance extrapolation method works using the sequence of approximated wave functions obtained by the Monte Carlo Shell Model (MCSM), taking 56Ni in the pf shell as an example. The extrapolation method is shown to work well even in cases where the MCSM shows slow convergence, such as 72Ge in the f5pg9 shell. The structure of 72Se is also studied, including a discussion of the shape-coexistence phenomenon.

  2. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    Science.gov (United States)

    Mueller, David S.

    2013-04-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity profiles from the entire cross section and multiple transects to determine a mean profile for the measurement. The use of an exponent derived from normalized data from the entire cross section is shown to be valid for application of the power velocity distribution law in the computation of the unmeasured discharge in a cross section. Selected statistics are combined with empirically derived criteria to automatically select the appropriate extrapolation methods. A graphical user interface (GUI) provides the user tools to visually evaluate the automatically selected extrapolation methods and manually change them, as necessary. The sensitivity of the total discharge to available extrapolation methods is presented in the GUI. Use of extrap by field hydrographers has demonstrated that extrap is a more accurate and efficient method of determining the appropriate extrapolation methods compared with tools currently (2012) provided in the ADCP manufacturers' software.
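The power velocity distribution law mentioned in this record can be sketched directly: fit v = a z^b to the measured part of the profile and integrate the fitted law over the unmeasured top and bottom zones. This is an illustrative re-implementation sketch, not the USGS extrap software itself; the function name, the synthetic 1/6-power profile, and the rectangular-section geometry are assumptions:

```python
import numpy as np

def powerlaw_unmeasured_discharge(z, v, depth, width):
    """Fit v = a * z**b to a measured velocity profile (z = height above
    the bed, in m) via a log-log linear fit, then integrate the fitted law
    over the unmeasured bottom (0..z_min) and top (z_max..depth) zones of a
    rectangular section of the given width."""
    b, log_a = np.polyfit(np.log(z), np.log(v), 1)
    a = np.exp(log_a)
    q = lambda z1, z2: width * a / (b + 1.0) * (z2 ** (b + 1) - z1 ** (b + 1))
    return q(0.0, z.min()) + q(z.max(), depth)

# Synthetic 1/6-power profile measured between 0.5 m and 4.5 m of a 5 m depth.
z = np.linspace(0.5, 4.5, 20)
v = 1.2 * z ** (1.0 / 6.0)
q_unmeas = powerlaw_unmeasured_discharge(z, v, depth=5.0, width=10.0)
print(round(q_unmeas, 2))   # unmeasured top + bottom discharge in m^3/s
```

Deriving the exponent b from the whole normalized cross-section, as the abstract describes, is what makes the extrapolated top/bottom discharge robust to noise in any single vertical.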

  3. Dead time corrections using the backward extrapolation method

    Energy Technology Data Exchange (ETDEWEB)

    Gilad, E., E-mail: gilade@bgu.ac.il [The Unit of Nuclear Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105 (Israel); Dubi, C. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel); Geslot, B.; Blaise, P. [DEN/CAD/DER/SPEx/LPE, CEA Cadarache, Saint-Paul-les-Durance 13108 (France); Kolin, A. [Department of Physics, Nuclear Research Center NEGEV (NRCN), Beer-Sheva 84190 (Israel)

    2017-05-11

Dead time losses in neutron detection, caused by both the detector and the electronics dead time, are a highly nonlinear effect, known to create a large bias in physical experiments as the power grows over a certain threshold, up to total saturation of the detector system. Analytic modeling of the dead time losses is a highly complicated task due to the different nature of the dead time in the different components of the monitoring system (e.g., paralyzing vs. non-paralyzing) and the stochastic nature of the fission chains. In the present study, a new technique is introduced for dead time corrections on the sampled count per second (CPS), based on backward extrapolation of the losses, created by increasingly large artificially imposed dead times on the data, back to zero. The method has been implemented on actual neutron noise measurements carried out in the MINERVE zero power reactor, demonstrating high accuracy (of 1-2%) in restoring the corrected count rate. - Highlights: • A new method for dead time corrections is introduced and experimentally validated. • The method does not depend on any prior calibration nor assumes any specific model. • Different dead times are imposed on the signal and the losses are extrapolated to zero. • The method is implemented and validated using neutron measurements from the MINERVE. • Results show very good correspondence to empirical results.
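The backward-extrapolation idea can be sketched on a simulated event train: impose several artificial non-paralyzing dead times tau on the recorded timestamps, measure the surviving rate m(tau), and extrapolate the losses back to tau = 0. For a non-paralyzing system 1/m(tau) = 1/n + tau, so the extrapolation reduces to a linear fit (this simple model, the function name, and the Poisson simulation are illustrative assumptions; the paper's method is model-free):

```python
import numpy as np

def true_rate_by_backward_extrapolation(event_times, taus):
    """Impose increasingly large artificial (non-paralyzing) dead times on a
    measured event train, then extrapolate the losses back to zero dead time.
    1/m(tau) is linear in tau, so the fit intercept estimates 1/n."""
    inv_rates = []
    t_total = event_times[-1] - event_times[0]
    for tau in taus:
        kept, t_last = 0, -np.inf
        for t in event_times:
            if t - t_last >= tau:   # event accepted; dead time restarts
                kept += 1
                t_last = t
        inv_rates.append(t_total / kept)          # 1/m(tau)
    slope, intercept = np.polyfit(taus, inv_rates, 1)
    return 1.0 / intercept                        # extrapolated true rate

rng = np.random.default_rng(1)
n_true = 5000.0                                   # true count rate [1/s]
arrivals = np.cumsum(rng.exponential(1.0 / n_true, 200000))
taus = np.linspace(2e-5, 1e-4, 5)                 # imposed dead times [s]
rate = true_rate_by_backward_extrapolation(arrivals, taus)
print(round(rate))
```

The recovered rate lands within a fraction of a percent of the true 5000 counts per second, illustrating the 1-2% accuracy regime quoted in the abstract.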

  4. Standardization of I-125 solution by extrapolation of an efficiency curve obtained by the coincidence X-(X-γ) counting method

    International Nuclear Information System (INIS)

    Iwahara, A.

    1989-01-01

The activity concentration of 125 I was determined by the X-(X-γ) coincidence counting method and an efficiency extrapolation curve. The measurement system consists of two thin NaI(Tl) scintillation detectors which are horizontally movable on a track. The efficiency curve is obtained by symmetrically changing the distance between the source and the detectors, and the activity is determined by applying a linear efficiency extrapolation. All sum-coincidence events are included in a 10-100 keV counting window, and the main source of uncertainty comes from poor counting statistics around zero efficiency. The consistency of the results with other methods shows that this technique can be applied to photon cascade emitters that are not discriminated by the detectors. The 35.5 keV gamma-ray emission probability of 125 I was also determined using a Gamma-X type high-purity germanium detector. (author) [pt

  5. Statistical modeling and extrapolation of carcinogenesis data

    International Nuclear Information System (INIS)

    Krewski, D.; Murdoch, D.; Dewanji, A.

    1986-01-01

    Mathematical models of carcinogenesis are reviewed, including pharmacokinetic models for metabolic activation of carcinogenic substances. Maximum likelihood procedures for fitting these models to epidemiological data are discussed, including situations where the time to tumor occurrence is unobservable. The plausibility of different possible shapes of the dose response curve at low doses is examined, and a robust method for linear extrapolation to low doses is proposed and applied to epidemiological data on radiation carcinogenesis

  6. A generalized sound extrapolation method for turbulent flows

    Science.gov (United States)

    Zhong, Siyang; Zhang, Xin

    2018-02-01

Sound extrapolation methods are often used to compute acoustic far-field directivities from near-field flow data in aeroacoustics applications. The results may be erroneous if the volume integrals are neglected (to save computational cost) while non-acoustic fluctuations are collected on the integration surfaces. In this work, we develop a new sound extrapolation method based on an acoustic analogy using Taylor's hypothesis (Taylor 1938 Proc. R. Soc. Lond. A 164, 476-490. (doi:10.1098/rspa.1938.0032)). Typically, a convection operator is used to filter out the acoustically inefficient components in the turbulent flows, and an acoustics-dominant indirect variable D_c p′ is solved. The sound pressure p′ at the far field is computed from D_c p′ based on the asymptotic properties of the Green's function. Validation results for benchmark problems with well-defined sources match well with the exact solutions. For aeroacoustics applications: the sound predictions for aerofoil-gust interaction are close to those of an earlier method specially developed to remove the effect of vortical fluctuations (Zhong & Zhang 2017 J. Fluid Mech. 820, 424-450. (doi:10.1017/jfm.2017.219)); for the case of vortex-shedding noise from a cylinder, the off-body predictions by the proposed method match well with the on-body Ffowcs Williams and Hawkings result; and different integration surfaces yield close predictions (of both spectra and far-field directivities) for a co-flowing jet case using an established direct numerical simulation database. The results suggest that the method is a potential candidate for sound projection in aeroacoustics applications.

  7. On Richardson extrapolation for low-dissipation low-dispersion diagonally implicit Runge-Kutta schemes

    Science.gov (United States)

    Havasi, Ágnes; Kazemi, Ehsan

    2018-04-01

In the modeling of wave propagation phenomena it is necessary to use time integration methods which are not only sufficiently accurate, but also properly describe the amplitude and phase of the propagating waves. It is not clear whether amending the developed schemes by extrapolation methods to obtain a high order of accuracy preserves the qualitative properties of these schemes from the perspective of dissipation, dispersion and stability analysis. It is illustrated that the combination of various optimized schemes with Richardson extrapolation is not optimal for minimal dissipation and dispersion errors. Optimized third-order and fourth-order methods are obtained, and it is shown that the proposed methods combined with Richardson extrapolation result in fourth and fifth orders of accuracy, respectively, while preserving optimality and stability. The numerical applications include the linear wave equation, a stiff system of reaction-diffusion equations and the nonlinear Euler equations with oscillatory initial conditions. It is demonstrated that the extrapolated third-order scheme outperforms the recently developed fourth-order diagonally implicit Runge-Kutta scheme in terms of accuracy and stability.

  8. Extrapolation Method for System Reliability Assessment

    DEFF Research Database (Denmark)

    Qin, Jianjun; Nishijima, Kazuyoshi; Faber, Michael Havbro

    2012-01-01

The present paper presents a new scheme for probability integral solution for system reliability analysis, which takes basis in the approaches by Naess et al. (2009) and Bucher (2009). The idea is to evaluate the probability integral by extrapolation, based on a sequence of MC approximations of integrals with scaled domains. The performance of this class of approximation depends on the approach applied for the scaling and the functional form utilized for the extrapolation. A scheme for this task is derived here taking basis in the theory of asymptotic solutions to multinormal probability integrals... It is shown that the proposed scheme is efficient and adds to generality for this class of approximations for probability integrals.

  9. Evaluation of extrapolation methods for actual state expenditures on health care in Russian Federation

    Directory of Open Access Journals (Sweden)

    S. A. Banin

    2016-01-01

    Forecasting methods, extrapolation methods in particular, are used in health care for medical, biological and clinical research. The author, searching the publicly accessible internet, has not found a single publication devoted to extrapolation of the financial parameters of health care activity, which determines the relevance of the material presented in this article. Based on the dynamics of health care financing in Russia in 2000–2010, the author examined the applicability of the basic extrapolation forecasting methods: moving average, exponential smoothing and least squares. It was hypothesized that all three methods can equally well forecast actual public expenditures on health care in the medium term under Russia's current financial and economic conditions. The results were evaluated over two time frames: within the studied interval and over a five-year forecast period. Within the studied interval, all methods have an average relative extrapolation error of 3–5 %, which indicates high forecast precision. The study revealed a specific feature of the least squares method: its results accumulate gradually, so their economic interpretation only becomes possible at the end of the studied period. The extrapolation results obtained by the least squares method are therefore not applicable within the study period itself and have rather theoretical value. Beyond the study period, however, this method was found to correspond best to the real situation, and it proved the most appropriate for economic interpretation of the forecast of actual public expenditures on health care. The hypothesis was not confirmed: the author obtained three differently directed results, each method having independent significance, with its applicability depending on the objectives of the evaluation study and the actual social, economic and financial situation in the Russian health care system.
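
    The three techniques compared in the article can be sketched minimally as follows (the spending series below is invented for illustration, not the article's data):

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    return sum(series[-window:]) / window

def exponential_smoothing_forecast(series, alpha=0.5):
    """Simple exponential smoothing; the final smoothed level is the forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def least_squares_forecast(series):
    """Fit y = a + b*t by ordinary least squares and extrapolate one step ahead."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return a + b * n

spending = [100, 112, 121, 135, 148, 160]   # hypothetical annual figures
ma = moving_average_forecast(spending)
es = exponential_smoothing_forecast(spending)
ls = least_squares_forecast(spending)
```

    On a trending series the least squares forecast continues the trend, while the moving average and exponential smoothing lag behind it, which mirrors the differently directed results the author reports.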

  10. Correction method for critical extrapolation of control-rods-rising during physical start-up of reactor

    International Nuclear Information System (INIS)

    Zhang Fan; Chen Wenzhen; Yu Lei

    2008-01-01

    During the physical start-up of a nuclear reactor, the curve obtained by lifting the control rods and extrapolating to the critical state often has a protruding (convex) shape, which can lead to unexpected supercriticality. In this paper, the reason why the curve protrudes is analyzed. A correction method is introduced, and calculations are carried out with practical data from a nuclear power plant. The results show that the correction method reverses the protruding shape of the extrapolation curve, and that the risk of reactor supercriticality during physical start-up can be reduced by using the extrapolated curve obtained with the correction method. (authors)
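
    The conventional technique the correction addresses is the inverse-multiplication (1/M) plot: the reciprocal count rate is extrapolated linearly to zero to predict the critical rod position. A minimal sketch of the uncorrected linear version, with synthetic data:

```python
def extrapolate_critical_position(positions, count_rates):
    """Inverse-multiplication (1/M) extrapolation: fit a straight line through
    the last two points of 1/count-rate vs. rod position and return the rod
    position where the line crosses zero (the predicted critical position)."""
    inv = [1.0 / c for c in count_rates]
    (x0, y0), (x1, y1) = (positions[-2], inv[-2]), (positions[-1], inv[-1])
    slope = (y1 - y0) / (x1 - x0)
    return x1 - y1 / slope

# synthetic subcritical measurements: count rate ~ C0 / (critical_pos - pos),
# so 1/M is exactly linear and vanishes at the critical position (100 here)
positions = [40.0, 60.0, 80.0]
counts = [1000.0 / (100.0 - p) for p in positions]
predicted = extrapolate_critical_position(positions, counts)
```

    When the real 1/M curve protrudes above this straight line, the linear prediction overestimates how far the rods can be withdrawn, which is the risk the correction method mitigates.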

  11. A Method for Extrapolation of Atmospheric Soundings

    Science.gov (United States)

    2014-05-01

    ...case are not shown here. We also briefly examined data for the Anchorage, AK (PANC), radiosonde site for the case of the inversion height equal to... or greater than the extrapolation depth (i.e., h_inv ≥ h_ext). PANC lies at the end of a broad inlet extending northward from the Gulf of Alaska at... type of terrain can affect the model and in turn affect the extrapolation. We examined a sounding from PANC (61.16 N and –150.01 W, elevation of 40...

  12. Novel extrapolation method in the Monte Carlo shell model

    International Nuclear Information System (INIS)

    Shimizu, Noritaka; Abe, Takashi; Utsuno, Yutaka; Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio

    2010-01-01

    We propose an extrapolation method utilizing energy variance in the Monte Carlo shell model to estimate the energy eigenvalue and observables accurately. We derive a formula for the energy variance with deformed Slater determinants, which enables us to calculate the energy variance efficiently. The feasibility of the method is demonstrated for the full pf-shell calculation of 56Ni, and the applicability of the method to a system beyond the current limit of exact diagonalization is shown for the pf+g9/2-shell calculation of 64Ge.
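
    The underlying idea can be sketched generically: for a sequence of improving approximate wave functions the energy is, to first order, linear in the energy variance, so a straight-line fit extrapolated to zero variance estimates the exact eigenvalue. The numbers below are invented, not the 56Ni results:

```python
def extrapolate_energy(variances, energies):
    """Least-squares line E(v) = E0 + c*v through (variance, energy) points;
    the intercept E0 at zero variance estimates the exact eigenvalue."""
    n = len(variances)
    vm = sum(variances) / n
    em = sum(energies) / n
    c = sum((v - vm) * (e - em) for v, e in zip(variances, energies)) / \
        sum((v - vm) ** 2 for v in variances)
    return em - c * vm

# synthetic data lying on E = -205.0 + 0.5*v (units and values invented)
variances = [2.0, 1.0, 0.5]
energies = [-205.0 + 0.5 * v for v in variances]
e0 = extrapolate_energy(variances, energies)
```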

  13. A method of creep rupture data extrapolation based on physical processes

    International Nuclear Information System (INIS)

    Leinster, M.G.

    2008-01-01

    There is a need for a reliable method to extrapolate generic creep rupture data to failure times in excess of the currently published times. A method based on well-understood and mathematically described physical processes is likely to be stable and reliable. Creep process descriptions have been developed based on accepted theory, to the extent that good fits with published data have been obtained. Methods have been developed to apply these descriptions to extrapolate creep rupture data to stresses below the published values. The relationship creep life parameter = f(ln(sinh(stress))) has been shown to be justifiable over the stress ranges of most interest, and gives realistic results at high temperatures and long times to failure. In the interests of continuity with past and present practice, the suggested method is intended to extend existing polynomial descriptions of life parameters at low stress. Where no polynomials exist, the method can be used to describe the behaviour of life parameters throughout the full range of a particular failure mode in the published data.
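
    The quoted functional form, life parameter = f(ln(sinh(stress))), can be sketched with a linear f standing in for the paper's polynomial descriptions (all numbers and units below are invented):

```python
import math

def fit_life_parameter(stresses, life_params):
    """Fit life parameter = a + b*ln(sinh(stress)) by least squares and return
    a predictor usable below the fitted stress range (linear f as a stand-in
    for the polynomial descriptions in the source)."""
    xs = [math.log(math.sinh(s)) for s in stresses]
    n = len(xs)
    xm = sum(xs) / n
    ym = sum(life_params) / n
    b = sum((x - xm) * (y - ym) for x, y in zip(xs, life_params)) / \
        sum((x - xm) ** 2 for x in xs)
    a = ym - b * xm
    return lambda stress: a + b * math.log(math.sinh(stress))

# synthetic data generated from a = 20, b = -3 (invented values)
stresses = [1.0, 1.5, 2.0]
data = [20.0 - 3.0 * math.log(math.sinh(s)) for s in stresses]
predict = fit_life_parameter(stresses, data)
```

    Because ln(sinh(x)) behaves like ln(x/ ... ) at low stress and like x at high stress, the same fitted curve extrapolates sensibly toward low stresses, which is the property the paper exploits.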

  14. Linearization Method and Linear Complexity

    Science.gov (United States)

    Tanaka, Hidema

    We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods work on the output sequence, the initial value of the PRNG influences the resulting linear complexity, which is therefore generally given only as an estimate. A linearization method, calculating directly from the PRNG algorithm, can determine the lower bound of the linear complexity.
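
    For contrast with the linearization method, the Berlekamp-Massey algorithm referenced above computes linear complexity from an observed output sequence. A compact GF(2) version:

```python
def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): returns the linear complexity of a binary
    sequence, i.e. the length of the shortest LFSR that generates it."""
    n = len(bits)
    c = [0] * n          # current connection polynomial
    b = [0] * n          # previous connection polynomial
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # discrepancy between the observed bit and the LFSR prediction
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(i - m, n):
                c[j] ^= b[j - (i - m)]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

    Because this runs on the output sequence, its answer depends on which (and how many) output bits are observed, which is exactly the estimate-only limitation the abstract attributes to sequence-based methods.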

  15. EXTRAPOLATION METHOD FOR MAXIMAL AND 24-H AVERAGE LTE TDD EXPOSURE ESTIMATION.

    Science.gov (United States)

    Franci, D; Grillo, E; Pavoncello, S; Coltellacci, S; Buccella, C; Aureli, T

    2018-01-01

    The Long-Term Evolution (LTE) system represents the evolution of the Universal Mobile Telecommunication System technology. This technology introduces two duplex modes: Frequency Division Duplex and Time Division Duplex (TDD). Although LTE TDD has experienced only limited expansion in European countries since the debut of LTE technology, renewed commercial interest in it has recently been shown. The development of extrapolation procedures optimised for TDD systems therefore becomes crucial, especially for the regulatory authorities. This article presents an extrapolation method for assessing exposure to LTE TDD sources, based on detection of the Cell-Specific Reference Signal power level. The method introduces a βTDD parameter intended to quantify the fraction of the LTE TDD frame duration reserved for downlink transmission. The method has been validated by experimental measurements performed on signals generated by both a vector signal generator and a test Base Transceiver Station installed at the Linkem S.p.A. facility in Rome.

  16. Parametric methods of describing and extrapolating the characteristics of long-term strength of refractory materials

    International Nuclear Information System (INIS)

    Tsvilyuk, I.S.; Avramenko, D.S.

    1986-01-01

    This paper carries out a comparative analysis of the suitability of parametric methods for describing and extrapolating the results of long-term tests on refractory materials. Diagrams are presented of the long-term strength of niobium-based alloys tested in a vacuum of 1.3 × 10^-3 Pa. The predicted values and the variance of the estimated endurance of refractory alloys are presented as parametric dependences. The long-term strength characteristics are described most adequately by the Manson-Sakkop and Sherby-Dorn methods. Several methods must be used to ensure reliable extrapolation of the long-term strength characteristics to time periods an order of magnitude longer than the experimental data. The most suitable method cannot always be selected on the basis of the correlation ratio.

  17. Evaluation of functioning of an extrapolation chamber using Monte Carlo method

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Alfonso Laguardia, R.

    2015-01-01

    The extrapolation chamber is a parallel-plate, variable-volume chamber based on Bragg-Gray theory. It determines, in absolute mode and with high accuracy, the absorbed dose by extrapolating the ionization current to a null distance between the electrodes. This chamber is used for dosimetry of external beta rays in radiation protection. This paper presents a simulation evaluating the functioning of a PTW extrapolation chamber type 23392, using the MCNPX Monte Carlo code. In the simulation, the fluence in the air collecting cavity of the chamber was obtained. The influence of the materials composing the chamber on its response to a beta radiation beam was also analysed, and the contributions of primary and secondary radiation were compared. The energy deposition in the air collecting cavity was calculated for different depths; the component with the highest energy deposition is the polymethyl methacrylate block. The energy deposition in the air collecting cavity is greatest at a chamber depth of 2500 μm, with a value of 9.708E-07 MeV. The fluence in the air collecting cavity decreases with depth; its value is 1.758E-04 1/cm^2 at a chamber depth of 500 μm. The values reported are for individual electron and photon histories. Graphics of the simulated parameters are presented in the paper. (Author)
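
    The measurement principle described here, extrapolating the ionization current to a null electrode distance, amounts to estimating the limiting slope dI/dd of current versus gap. A minimal sketch with invented readings (units and calibration factors are illustrative assumptions):

```python
def slope_at_zero_gap(gaps_mm, currents_pA):
    """Fit I(d) = I0 + k*d by least squares; the slope k approximates the
    limiting dI/dd as the electrode gap d -> 0, which in Bragg-Gray cavity
    theory is proportional to the absorbed dose rate."""
    n = len(gaps_mm)
    gm = sum(gaps_mm) / n
    cm = sum(currents_pA) / n
    return sum((g - gm) * (c - cm) for g, c in zip(gaps_mm, currents_pA)) / \
           sum((g - gm) ** 2 for g in gaps_mm)

# invented readings: current grows ~2 pA per mm of gap plus a small offset
gaps = [0.5, 1.0, 2.0, 4.0]
currents = [2.0 * g + 0.1 for g in gaps]
k = slope_at_zero_gap(gaps, currents)
```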

  18. One-step lowrank wave extrapolation

    KAUST Repository

    Sindi, Ghada Atif

    2014-01-01

    Wavefield extrapolation is at the heart of modeling, imaging, and full waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a modified one-step lowrank wave extrapolation scheme using the Shanks transform in isotropic and anisotropic media. Specifically, we utilize a velocity gradient term to improve the accuracy of the phase approximation function in the spectral implementation. With the higher accuracy, we can utilize larger time steps and make the extrapolation more efficient. Applications to models with strong inhomogeneity and considerable anisotropy demonstrate the utility of the approach.
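
    The Shanks transform invoked here is a generic sequence-acceleration device; a sketch on the alternating series for ln 2 (a standard demonstration, not the wave-extrapolation setting itself):

```python
import math

def shanks(seq):
    """One pass of the Shanks transform,
    S(A_n) = (A_{n+1}*A_{n-1} - A_n^2) / (A_{n+1} + A_{n-1} - 2*A_n),
    which accelerates linearly converging sequences."""
    return [(seq[i + 1] * seq[i - 1] - seq[i] ** 2) /
            (seq[i + 1] + seq[i - 1] - 2 * seq[i])
            for i in range(1, len(seq) - 1)]

# partial sums of 1 - 1/2 + 1/3 - ... -> ln 2
partial, s = [], 0.0
for n in range(1, 12):
    s += (-1) ** (n + 1) / n
    partial.append(s)
accelerated = shanks(shanks(partial))
```

    Two passes of the transform on eleven terms already beat the raw partial sums by several orders of magnitude, which is the kind of gain that permits the larger time steps mentioned in the abstract.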

  19. A comparison of high-order explicit Runge–Kutta, extrapolation, and deferred correction methods in serial and parallel

    KAUST Repository

    Ketcheson, David I.

    2014-06-13

    We compare the three main types of high-order one-step initial value solvers: extrapolation, spectral deferred correction, and embedded Runge–Kutta pairs. We consider orders four through twelve, including both serial and parallel implementations. We cast extrapolation and deferred correction methods as fixed-order Runge–Kutta methods, providing a natural framework for the comparison. The stability and accuracy properties of the methods are analyzed by theoretical measures, and these are compared with the results of numerical tests. In serial, the eighth-order pair of Prince and Dormand (DOP8) is most efficient. But other high-order methods can be more efficient than DOP8 when implemented in parallel. This is demonstrated by comparing a parallelized version of the well-known ODEX code with the (serial) DOP853 code. For an N-body problem with N = 400, the experimental extrapolation code is as fast as the tuned Runge–Kutta pair at loose tolerances, and is up to two times as fast at tight tolerances.

  20. Free magnetic energy and relative magnetic helicity diagnostics for the quality of NLFF field extrapolations

    Science.gov (United States)

    Moraitis, Kostas; Archontis, Vasilis; Tziotziou, Konstantinos; Georgoulis, Manolis K.

    We calculate the instantaneous free magnetic energy and relative magnetic helicity of solar active regions using two independent approaches: a) a non-linear force-free (NLFF) method that requires only a single photospheric vector magnetogram, and b) well-known semi-analytical formulas that require the full three-dimensional (3D) magnetic field structure. The 3D field is obtained either from MHD simulations, or from observed magnetograms via respective NLFF field extrapolations. We find qualitative agreement between the two methods and, quantitatively, a discrepancy not exceeding a factor of 4. The comparison of the two methods reveals, as a byproduct, two independent tests for the quality of a given force-free field extrapolation. We find that not all extrapolations manage to achieve the force-free condition in a valid, divergence-free, magnetic configuration.

  1. Solving the linearized forward-speed radiation problem using a high-order finite difference method on overlapping grids

    DEFF Research Database (Denmark)

    Amini Afshar, Mostafa; Bingham, Harry B.

    2017-01-01

    The linearized potential flow approximation for the forward speed radiation problem is solved in the time domain using a high-order finite difference method. The finite-difference discretization is developed on overlapping, curvilinear body-fitted grids. To ensure numerical stability ... frequency-domain results are then obtained from a Fourier transform of the force and motion signals. In order to make a robust Fourier transform, and capture the response around the critical frequency, the tail of the force signal is asymptotically extrapolated assuming a linear decay rate. Fourth...

  2. Efficient Wavefield Extrapolation In Anisotropic Media

    KAUST Repository

    Alkhalifah, Tariq; Ma, Xuxin; Waheed, Umair bin; Zuberi, Mohammad Akbar Hosain

    2014-01-01

    Various examples are provided for wavefield extrapolation in anisotropic media. In one example, among others, a method includes determining an effective isotropic velocity model and extrapolating an equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. The effective isotropic velocity model can be based upon a kinematic geometrical representation of an anisotropic, poroelastic or viscoelastic wavefield. Extrapolating the equivalent propagation can use isotopic, acoustic or elastic operators based upon the determined effective isotropic velocity model. In another example, non-transitory computer readable medium stores an application that, when executed by processing circuitry, causes the processing circuitry to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield. In another example, a system includes processing circuitry and an application configured to cause the system to determine the effective isotropic velocity model and extrapolate the equivalent propagation of an anisotropic, poroelastic or viscoelastic wavefield.

  4. Builtin vs. auxiliary detection of extrapolation risk.

    Energy Technology Data Exchange (ETDEWEB)

    Munson, Miles Arthur; Kegelmeyer, W. Philip

    2013-02-01

    A key assumption in supervised machine learning is that future data will be similar to historical data. This assumption is often false in real-world applications, and as a result, prediction models often return predictions that are extrapolations. We compare four approaches to estimating extrapolation risk for machine learning predictions. Two built-in methods use information available from the classification model to decide if the model would be extrapolating for an input data point. The other two build auxiliary models to supplement the classification model and explicitly model extrapolation risk. Experiments with synthetic and real data sets show that the auxiliary models are more reliable risk detectors. To best safeguard against extrapolating predictions, however, we recommend combining built-in and auxiliary diagnostics.
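
    A toy auxiliary extrapolation-risk detector of the kind compared in the report can be sketched as follows (the nearest-neighbour rule and the threshold choice are illustrative assumptions, not the report's models):

```python
import math

def make_extrapolation_detector(train_points, quantile=0.95):
    """Auxiliary risk model: flag a query as extrapolation when its distance
    to the nearest training point exceeds a threshold calibrated from the
    nearest-neighbour distances within the training set itself."""
    def nn_dist(p, pts):
        return min(math.dist(p, q) for q in pts if q is not p)
    dists = sorted(nn_dist(p, train_points) for p in train_points)
    threshold = dists[int(quantile * (len(dists) - 1))]
    def is_extrapolating(query):
        return min(math.dist(query, q) for q in train_points) > threshold
    return is_extrapolating

# training data on a regular grid covering [0, 1] x [0, 1]
train = [(i / 10, j / 10) for i in range(11) for j in range(11)]
detector = make_extrapolation_detector(train)
```

    A query far outside the training region is flagged, while one inside it is not; a built-in diagnostic would instead reuse quantities already computed by the classifier.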

  5. Functional differential equations with unbounded delay in extrapolation spaces

    Directory of Open Access Journals (Sweden)

    Mostafa Adimy

    2014-08-01

    We study the existence, regularity and stability of solutions for nonlinear partial neutral functional differential equations with unbounded delay and a Hille-Yosida operator on a Banach space X. We consider two nonlinear perturbations: the first is a function taking its values in X and the second is a function belonging to a space larger than X, an extrapolated space. We use extrapolation techniques to prove the existence and regularity of solutions, and we establish a linearization principle for the stability of the equilibria of our equation.

  6. A comparison between progressive extension method (PEM) and iterative method (IM) for magnetic field extrapolations in the solar atmosphere

    Science.gov (United States)

    Wu, S. T.; Sun, M. T.; Sakurai, Takashi

    1990-01-01

    This paper presents a comparison between two numerical methods for the extrapolation of nonlinear force-free magnetic fields, viz. the Iterative Method (IM) and the Progressive Extension Method (PEM). The advantages and disadvantages of these two methods are summarized, and the accuracy and numerical instability are discussed. On the basis of this investigation, it is claimed that the two methods do resemble each other qualitatively.

  7. Medical extrapolation chamber dosimeter model XW6012A

    International Nuclear Information System (INIS)

    Jin Tao; Wang Mi; Wu Jinzheng; Guo Qi

    1992-01-01

    An extrapolation chamber dosimeter has been developed for clinical dosimetry of electron beams and X-rays from medical linear accelerators. It consists of a new type of extrapolation chamber, a water phantom and an intelligent portable instrument. With a thin entrance window and a φ20 mm collecting electrode made of polystyrene, the electrode spacing can be varied from 0.2 to 6 mm. The dosimeter performs dose measurements automatically, and has error self-diagnosis and dose self-recording functions. The applicable energy range is 0.5-20 MeV, and the dose-rate range 0.02-40 Gy/min. The total uncertainty is 2.7%.

  8. An Efficient Method of Reweighting and Reconstructing Monte Carlo Molecular Simulation Data for Extrapolation to Different Temperature and Density Conditions

    KAUST Repository

    Sun, Shuyu; Kadoura, Ahmad Salim; Salama, Amgad

    2013-06-01

    This paper introduces an efficient technique to generate new molecular simulation Markov chains for different temperature and density conditions, which allows for rapid extrapolation of canonical ensemble averages over a range of temperatures and densities different from the original conditions at which a single simulation is conducted. Information obtained from the original simulation is reweighted, and even reconstructed, in order to extrapolate our knowledge to the new conditions. Our technique allows not only extrapolation to a new temperature or density, but also double extrapolation to both a new temperature and a new density. The method was implemented for a Lennard-Jones fluid with structureless particles in the single-gas-phase region. Extrapolation behavior as a function of extrapolation range was studied. The limits of the extrapolation ranges showed a remarkable capability, especially along isochores, where only reweighting is required. Various factors that could affect the limits of the extrapolation ranges were investigated and compared. In particular, these limits were shown to be sensitive to the number of particles used and to the starting point at which the simulation was originally conducted.
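
    The reweighting step at the heart of this technique can be sketched for a toy two-level system (the Lennard-Jones specifics are omitted): samples drawn at inverse temperature beta_old are reweighted with Boltzmann factors to estimate an average at beta_new.

```python
import math
import random

def reweight_average(energies, values, beta_old, beta_new):
    """Estimate <A> at beta_new from canonical samples drawn at beta_old:
    <A>_new = sum_i A_i w_i / sum_i w_i, w_i = exp(-(beta_new - beta_old)*U_i).
    A reference energy is subtracted to keep the exponentials well scaled."""
    dbeta = beta_new - beta_old
    u0 = min(energies)
    ws = [math.exp(-dbeta * (u - u0)) for u in energies]
    return sum(a * w for a, w in zip(values, ws)) / sum(ws)

# toy two-level system with energies 0 and 1, sampled at beta_old = 1
rng = random.Random(0)
beta_old, beta_new = 1.0, 2.0
p1_old = math.exp(-beta_old) / (1.0 + math.exp(-beta_old))
samples = [1.0 if rng.random() < p1_old else 0.0 for _ in range(20000)]
u_new = reweight_average(samples, samples, beta_old, beta_new)  # <U> at beta_new
exact = math.exp(-beta_new) / (1.0 + math.exp(-beta_new))
```

    As in the paper, the scheme degrades when the target conditions are far from the sampled ones: the weights then concentrate on few samples, shrinking the effective sample size.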

  10. Simulation-extrapolation method to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates, 1950-2003

    Energy Technology Data Exchange (ETDEWEB)

    Allodji, Rodrigue S.; Schwartz, Boris; Diallo, Ibrahima; Vathaire, Florent de [Gustave Roussy B2M, Radiation Epidemiology Group/CESP - Unit 1018 INSERM, Villejuif Cedex (France); Univ. Paris-Sud, Villejuif (France); Agbovon, Cesaire [Pierre and Vacances - Center Parcs Group, L'artois - Espace Pont de Flandre, Paris Cedex 19 (France); Laurier, Dominique [Institut de Radioprotection et de Surete Nucleaire (IRSN), DRPH, SRBE, Laboratoire d'epidemiologie, BP17, Fontenay-aux-Roses Cedex (France)

    2015-08-15

    Analyses of the Life Span Study (LSS) of Japanese atomic bombing survivors have routinely incorporated corrections for additive classical measurement errors using regression calibration. Recently, several studies reported that the simulation-extrapolation method (SIMEX) is slightly more accurate than the simple regression calibration method (RCAL). In the present paper, the SIMEX and RCAL methods have been used to address errors in atomic bomb survivor dosimetry on solid cancer and leukaemia mortality risk estimates. For instance, it is shown that with the SIMEX method the ERR/Gy is increased by about 29 % for all solid cancer deaths using a linear model compared to the RCAL method, and the corrected EAR 10^-4 person-years at 1 Gy (the linear term) is decreased by about 8 %, while the corrected quadratic term (EAR 10^-4 person-years/Gy^2) is increased by about 65 % for leukaemia deaths based on a linear-quadratic model. The results with the SIMEX method are slightly higher than published values. The observed differences are probably due to the fact that with the RCAL method the dosimetric data were only partially corrected, while all doses were considered with the SIMEX method. Therefore, one should be careful when comparing the estimated risks, and it may be useful to use several correction techniques in order to obtain a range of corrected estimates, rather than to rely on a single technique. This work will help improve the risk estimates derived from LSS data and make the development of radiation protection standards more reliable. (orig.)
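
    The SIMEX idea can be sketched on a toy errors-in-variables regression (not the LSS dosimetry itself): extra measurement error is added at increasing levels λ, the naive estimates are fitted with a quadratic in λ, and the fit is extrapolated back to λ = -1, the error-free case.

```python
import random

def ols_slope(xs, ys):
    n = len(xs)
    xm, ym = sum(xs) / n, sum(ys) / n
    return sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / \
           sum((x - xm) ** 2 for x in xs)

def quad_extrapolate(xs, ys, x0):
    """Least-squares quadratic y = a0 + a1*x + a2*x^2 evaluated at x0
    (normal equations solved via Cramer's rule)."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    M = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(M)
    a = []
    for col in range(3):
        Mc = [row[:] for row in M]
        for r in range(3):
            Mc[r][col] = T[r]
        a.append(det3(Mc) / D)
    return a[0] + a[1] * x0 + a[2] * x0 ** 2

def simex_slope(w, y, sigma_u, lambdas=(0.5, 1.0, 1.5, 2.0), b_sim=30, seed=0):
    """SIMEX: simulate additional error at levels lambda, then extrapolate
    the trend of the naive slope estimates back to lambda = -1."""
    rng = random.Random(seed)
    lam_pts, est = [0.0], [ols_slope(w, y)]
    for lam in lambdas:
        acc = 0.0
        for _ in range(b_sim):
            w_star = [wi + (lam ** 0.5) * sigma_u * rng.gauss(0.0, 1.0) for wi in w]
            acc += ols_slope(w_star, y)
        lam_pts.append(lam)
        est.append(acc / b_sim)
    return quad_extrapolate(lam_pts, est, -1.0)

# toy data: true slope 2, covariate observed with noise of known sigma_u
rng = random.Random(42)
x_true = [rng.gauss(0.0, 1.0) for _ in range(2000)]
sigma_u = 0.5
w = [x + sigma_u * rng.gauss(0.0, 1.0) for x in x_true]
y = [2.0 * x for x in x_true]
naive = ols_slope(w, y)
corrected = simex_slope(w, y, sigma_u)
```

    The naive slope is attenuated toward zero by the measurement error; the SIMEX extrapolation recovers most of the attenuation, illustrating why it can correct risk estimates more fully than partial calibration.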

  11. Lowrank seismic-wave extrapolation on a staggered grid

    KAUST Repository

    Fang, Gang

    2014-05-01

    © 2014 Society of Exploration Geophysicists. We evaluated a new spectral method and a new finite-difference (FD) method for seismic-wave extrapolation in time. Using staggered temporal and spatial grids, we derived a wave-extrapolation operator using a lowrank decomposition for a first-order system of wave equations and designed the corresponding FD scheme. The proposed methods extend previously proposed lowrank and lowrank FD wave extrapolation methods from the cases of constant density to those of variable density. Dispersion analysis demonstrated that the proposed methods have high accuracy for a wide wavenumber range and significantly reduce the numerical dispersion. The method of manufactured solutions coupled with mesh refinement was used to verify each method and to compare numerical errors. Tests on 2D synthetic examples demonstrated that the proposed method is highly accurate and stable. The proposed methods can be used for seismic modeling or reverse-time migration.

  13. Comparison among creep rupture strength extrapolation methods with application to data for AISI 316 SS from Italy, France, U.K. and F.R.G

    International Nuclear Information System (INIS)

    Brunori, G.; Cappellato, S.; Vacchiano, S.; Guglielmi, F.

    1982-01-01

    Within Activity 3, ''Materials'', of the WGCS, the member states UK and FRG carried out work on extrapolation methods for creep data, comparing the extrapolation methods in use in their countries by applying them to creep rupture strength data on AISI 316 SS obtained in the UK and FRG. This work was issued in April 1978, and the Community distributed it to all Activity 3 members. Italy, represented by NIRA S.p.A., has received a contract from the European Community to extend the work to Italian and French data, using the extrapolation methods currently in use in Italy. The work covers the following points: collection of Italian experimental data; chemical analysis of Italian specimens; comparison of the Italian experimental data with the French, FRG and UK data; description of the extrapolation methods in use in Italy; application of these extrapolation methods to the Italian, French, British and German data; and preparation of a final report.

  14. Load Extrapolation During Operation for Wind Turbines

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2008-01-01

    In recent years load extrapolation for wind turbines has been widely considered in the wind turbine industry. Loads on wind turbines during operation normally depend on the mean wind speed, the turbulence intensity and the type and settings of the control system. All these parameters must be taken into account when characteristic load effects during operation are determined. In the wind turbine standard IEC 61400-1 a method for load extrapolation using the peak-over-threshold method is recommended. In this paper this method is considered and some of its assumptions are examined.
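
    The peak-over-threshold step can be sketched generically (an exponential excess model is assumed here for simplicity; the standard and the paper use more refined tail fits, and the data below are synthetic):

```python
import math
import random

def pot_quantile(loads, threshold, target_prob):
    """Peak-over-threshold with an exponential excess model:
    P(L > x) = p_u * exp(-(x - threshold)/scale) for x > threshold,
    inverted at the target exceedance probability."""
    excess = [x - threshold for x in loads if x > threshold]
    p_u = len(excess) / len(loads)
    scale = sum(excess) / len(excess)   # MLE of the exponential scale
    return threshold + scale * math.log(p_u / target_prob)

# synthetic load maxima with a known unit-exponential tail
rng = random.Random(1)
loads = [rng.expovariate(1.0) for _ in range(50000)]
x_est = pot_quantile(loads, threshold=1.0, target_prob=1e-4)
x_true = math.log(1e4)   # exact quantile of the unit exponential
```

    The characteristic load is obtained by extrapolating the fitted tail far beyond the observed exceedance probabilities, which is why the distributional assumptions examined in the paper matter.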

  15. An efficient wave extrapolation method for anisotropic media with tilt

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali

    2015-03-23

    Wavefield extrapolation operators for elliptically anisotropic media offer significant cost reduction compared with that for the transversely isotropic case, particularly when the axis of symmetry exhibits tilt (from the vertical). However, elliptical anisotropy does not provide accurate wavefield representation or imaging for transversely isotropic media. Therefore, we propose effective elliptically anisotropic models that correctly capture the kinematic behaviour of wavefields for transversely isotropic media. Specifically, we compute source-dependent effective velocities for the elliptic medium using kinematic high-frequency representation of the transversely isotropic wavefield. The effective model allows us to use cheaper elliptic wave extrapolation operators. Despite the fact that the effective models are obtained by matching kinematics using high-frequency asymptotic, the resulting wavefield contains most of the critical wavefield components, including frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy trade-off for wavefield computations in transversely isotropic media, particularly for media of low to moderate complexity. In addition, the wavefield solution is free from shear-wave artefacts as opposed to the conventional finite-difference-based transversely isotropic wave extrapolation scheme. We demonstrate these assertions through numerical tests on synthetic tilted transversely isotropic models.

  17. Statistically extrapolated nowcasting of summertime precipitation over the Eastern Alps

    Science.gov (United States)

    Chen, Min; Bica, Benedikt; Tüchler, Lukas; Kann, Alexander; Wang, Yong

    2017-07-01

    This paper presents a new multiple linear regression (MLR) approach to updating the hourly, extrapolated precipitation forecasts generated by the INCA (Integrated Nowcasting through Comprehensive Analysis) system for the Eastern Alps. The generalized form of the model approximates the updated precipitation forecast as a linear response to combinations of predictors selected through a backward elimination algorithm from a pool of predictors. The predictors comprise the raw output of the extrapolated precipitation forecast, the latest radar observations, the convective analysis, and the precipitation analysis. For every MLR model, bias and distribution correction procedures are designed to further correct the systematic regression errors. Applications of the MLR models to a verification dataset containing two months of qualified samples, and to one-month gridded data, are performed and evaluated. Generally, MLR yields slight, but definite, improvements in the intensity accuracy of forecasts during the late evening to morning period, and significantly improves the forecasts for large thresholds. The structure-amplitude-location scores, used to evaluate the performance of the MLR approach, based on its simulation of morphological features, indicate that MLR typically reduces the overestimation of amplitudes and generates similar horizontal structures in precipitation patterns and slightly degraded location forecasts, when compared with the extrapolated nowcasting.
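As a rough sketch of the regression step (not the INCA implementation; the predictor set and the bias-correction constant are hypothetical), an MLR update model can be fitted by ordinary least squares:

```python
import numpy as np

def fit_mlr(X, y):
    """Least-squares fit of an update model y ≈ b0 + X·b, where the
    columns of X might be, e.g., the raw extrapolated forecast, the
    latest radar observation and a convective index (hypothetical set)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def predict_bias_corrected(beta, X, train_residual_mean=0.0):
    """Apply the fitted model, then subtract the mean training residual
    as a crude stand-in for the paper's bias-correction step."""
    A = np.column_stack([np.ones(len(X)), X])
    return A @ beta - train_residual_mean
```

Backward elimination would then repeatedly refit after dropping the least significant predictor until all remaining ones pass a significance test.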

  18. Irradiated food: validity of extrapolating wholesomeness data

    International Nuclear Information System (INIS)

    Taub, I.A.; Angelini, P.; Merritt, C. Jr.

    1976-01-01

    Criteria are considered for validly extrapolating conclusions on the wholesomeness of an irradiated food receiving high doses to the same food receiving a lower dose. Consideration is first given to the possible chemical mechanisms that could give rise to different functional dependences of radiolytic products on dose. It is shown that such products should increase linearly with dose and that the ratio of products should be constant throughout the dose range considered. The assumption, generally accepted in pharmacology, is then made that if any adverse effects related to the food are discerned in the test animals, the intensity of these effects would increase with the concentration of radiolytic products in the food. Lastly, the need to compare data from animal studies with foods irradiated to several doses against chemical evidence obtained over a comparable dose range is considered. It is concluded that if the products depend linearly on dose and if feeding studies indicate no adverse effects, then an extrapolation to lower doses is clearly valid. This approach is illustrated for irradiated codfish. The formation of selected volatile products in samples receiving between 0.1 and 3 Mrads was examined, and their concentrations were found to increase linearly at least up to 1 Mrad. These data were compared with results from animal feeding studies establishing the wholesomeness of codfish and haddock irradiated to 0.2, 0.6 and 2.8 Mrads. It is stated, therefore, that if ocean fish, currently under consideration for onboard processing, were irradiated to 0.1 Mrad, they would be correspondingly wholesome.

  19. Linear Algebraic Method for Non-Linear Map Analysis

    International Nuclear Information System (INIS)

    Yu, L.; Nash, B.

    2009-01-01

    We present a newly developed method to analyze non-linear dynamics problems, such as the Henon map, using a matrix analysis method from linear algebra. Choosing the Henon map as an example, we analyze the spectral structure, the tune-amplitude dependence, the variation of tune and amplitude during the particle motion, etc., using the method of Jordan decomposition, which is widely used in conventional linear algebra.
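The record describes matrix analysis of the Henon map; as a minimal hedged illustration (not the authors' accelerator-physics setup), one can linearize the classic dissipative Hénon map at its fixed point and inspect the eigenstructure of the Jacobian:

```python
import numpy as np

A_PARAM, B_PARAM = 1.4, 0.3  # classic Hénon parameters

def henon(x, y, a=A_PARAM, b=B_PARAM):
    """One application of the Hénon map."""
    return 1.0 - a * x * x + y, b * x

def fixed_point(a=A_PARAM, b=B_PARAM):
    # Solve a*x^2 + (1 - b)*x - 1 = 0 for the fixed point (+ root).
    x = (-(1.0 - b) + np.sqrt((1.0 - b) ** 2 + 4.0 * a)) / (2.0 * a)
    return x, b * x

def jacobian(x, a=A_PARAM, b=B_PARAM):
    """Linearization of the map at a point with abscissa x."""
    return np.array([[-2.0 * a * x, 1.0], [b, 0.0]])

x0, y0 = fixed_point()
eigvals, eigvecs = np.linalg.eig(jacobian(x0))
```

The eigenvalues reveal a saddle: one stable and one unstable direction, the local analogue of the tune/growth analysis the paper performs via Jordan decomposition.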

  20. One-step lowrank wave extrapolation

    KAUST Repository

    Sindi, Ghada Atif; Alkhalifah, Tariq Ali

    2014-01-01

    Wavefield extrapolation is at the heart of modeling, imaging, and full-waveform inversion. Spectral methods have gained well-deserved attention due to their dispersion-free solutions and their natural handling of anisotropic media. We propose a scheme…

  1. Propagation of internal errors in explicit Runge–Kutta methods and internal stability of SSP and extrapolation methods

    KAUST Repository

    Ketcheson, David I.

    2014-04-11

    In practical computation with Runge–Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
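A toy experiment in the spirit of the paper (the perturbation model and magnitudes are illustrative, not the paper's analysis) injects bounded stage errors into classical RK4 and observes their effect on the computed solution:

```python
import numpy as np

def rk4_step(f, t, y, h, stage_noise=0.0, rng=None):
    """One classical RK4 step; stage_noise injects a bounded perturbation
    into each stage derivative, mimicking roundoff or algebraic solver
    errors (sketch only)."""
    def perturb(k):
        return k if rng is None else k + stage_noise * rng.uniform(-1, 1)
    k1 = perturb(f(t, y))
    k2 = perturb(f(t + h / 2, y + h / 2 * k1))
    k3 = perturb(f(t + h / 2, y + h / 2 * k2))
    k4 = perturb(f(t + h, y + h * k3))
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Integrating y' = y over [0, 1] with and without stage noise makes the accumulated internal-error contribution directly visible as the difference between the two runs.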

  2. Principles of animal extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Calabrese, E.J.

    1991-01-01

    Animal Extrapolation presents a comprehensive examination of the scientific issues involved in extrapolating results of animal experiments to human response. This text attempts to present a comprehensive synthesis and analysis of the host of biomedical and toxicological studies of interspecies extrapolation. Calabrese's work presents not only the conceptual basis of interspecies extrapolation, but also illustrates how these principles may be better used in the selection of animal experimentation models and in the interpretation of animal experimental results. The book's theme centers on four types of extrapolation: (1) from the average animal model to the average human; (2) from small animals to large ones; (3) from the high-risk animal to the high-risk human; and (4) from high doses of exposure to lower, more realistic, doses. Calabrese attacks the issues of interspecies extrapolation by dealing individually with the factors which contribute to interspecies variability: differences in absorption, intestinal flora, tissue distribution, metabolism, repair mechanisms, and excretion. From this foundation, Calabrese then discusses the heterogeneity of these same factors in the human population in an attempt to evaluate the representativeness of various animal models in light of interindividual variations. In addition to discussing the question of suitable animal models for specific high-risk groups and specific toxicological endpoints, the author also examines extrapolation questions related to the use of short-term tests to predict long-term human carcinogenicity and birth defects. The book is comprehensive in scope and specific in detail; for those environmental health professionals seeking to understand the toxicological models which underlie health risk assessments, Animal Extrapolation is a valuable information source.

  3. Comparative studies of parameters based on the most probable versus an approximate linear extrapolation distance estimates for circular cylindrical absorbing rod

    International Nuclear Information System (INIS)

    Wassef, W.A.

    1982-01-01

    Estimates and techniques that are valid for calculating the linear extrapolation distance for an infinitely long circular cylindrical absorbing region are reviewed. Two estimates in particular are considered: the most probable value, and the value resulting from an approximate technique based on matching the integral transport equation inside the absorber with the diffusion approximation in the surrounding infinite scattering medium. Consequently, the effective diffusion parameters and the blackness of the cylinder are derived and subjected to comparative studies. A computer code is set up to calculate and compare the different parameters, which is useful in reactor analysis and serves to establish estimates that are amenable to direct application in reactor design codes.

  4. Direct activity determination of Mn-54 and Zn-65 by a non-extrapolation liquid scintillation method

    CSIR Research Space (South Africa)

    Simpson, BRS

    2004-02-01

    Full Text Available. The simple decay scheme exhibited by these radionuclides, with the emission of an energetic gamma ray, allows the absolute activity to be determined from 4πe-γ data by direct calculation without the need for efficiency extrapolation. The method, which...

  5. extrap: Software to assist the selection of extrapolation methods for moving-boat ADCP streamflow measurements

    Science.gov (United States)

    Mueller, David S.

    2013-01-01

    Selection of the appropriate extrapolation methods for computing the discharge in the unmeasured top and bottom parts of a moving-boat acoustic Doppler current profiler (ADCP) streamflow measurement is critical to the total discharge computation. The software tool, extrap, combines normalized velocity

  6. Ultrasonic computerized tomography (CT) for temperature measurements with limited projection data based on extrapolated filtered back projection (FBP) method

    International Nuclear Information System (INIS)

    Zhu Ning; Jiang Yong; Kato, Seizo

    2005-01-01

    This study uses ultrasound in combination with tomography to obtain three-dimensional temperature measurements from projection data obtained over a limited projection angle. The main feature of the new computerized tomography (CT) reconstruction algorithm is to employ an extrapolation scheme to make up for the incomplete projection data; it is based on the conventional filtered back projection (FBP) method, while additionally taking into account the correlation between the projection data and a Fourier-transform-based extrapolation. Computer simulation is conducted to verify the above algorithm. An experimental 3D temperature distribution measurement is also carried out to validate the proposed algorithm. The simulation and experimental results demonstrate that the extrapolated FBP CT algorithm is highly effective in dealing with projection data from a limited projection angle

  7. In situ LTE exposure of the general public: Characterization and extrapolation.

    Science.gov (United States)

    Joseph, Wout; Verloock, Leen; Goeminne, Francis; Vermeeren, Günter; Martens, Luc

    2012-09-01

    In situ radiofrequency (RF) exposure of the different RF sources is characterized in Reading, United Kingdom, and an extrapolation method to estimate worst-case long-term evolution (LTE) exposure is proposed. All electric field levels satisfy the International Commission on Non-Ionizing Radiation Protection (ICNIRP) reference levels with a maximal total electric field value of 4.5 V/m. The total values are dominated by frequency modulation (FM). Exposure levels for LTE of 0.2 V/m on average and 0.5 V/m maximally are obtained. Contributions of LTE to the total exposure are limited to 0.4% on average. Exposure ratios from 0.8% (LTE) to 12.5% (FM) are obtained. An extrapolation method is proposed and validated to assess the worst-case LTE exposure. For this method, the reference signal (RS) and secondary synchronization signal (S-SYNC) are measured and extrapolated to the worst-case value using an extrapolation factor. The influence of the traffic load and output power of the base station on in situ RS and S-SYNC signals are lower than 1 dB for all power and traffic load settings, showing that these signals can be used for the extrapolation method. The maximal extrapolated field value for LTE exposure equals 1.9 V/m, which is 32 times below the ICNIRP reference levels for electric fields. Copyright © 2012 Wiley Periodicals, Inc.
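The extrapolation step amounts to scaling a measured per-resource-element pilot field up to a full-power, full-traffic condition. A hedged sketch (the factor shown assumes power scales with the 12·N_RB resource elements relative to a single RS resource element, so the E-field scales with the square root; actual factors depend on antenna ports, RS boosting and system bandwidth):

```python
import math

def worst_case_lte_field(e_rs_vpm, n_rb=50, boost_db=0.0):
    """Extrapolate a measured reference-signal (RS) E-field [V/m] to an
    estimated worst-case (full-load) field. Illustrative scaling only:
    power ratio = 12 * n_rb resource elements per symbol, plus an
    optional RS power boost in dB."""
    factor = math.sqrt(12 * n_rb) * 10 ** (boost_db / 20.0)
    return e_rs_vpm * factor
```

For a 10 MHz carrier (n_rb = 50), a measured RS field of 0.01 V/m would extrapolate to roughly 0.24 V/m under these assumptions.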

  8. Efficient and stable extrapolation of prestack wavefields

    KAUST Repository

    Wu, Zedong

    2013-09-22

    The double-square-root (DSR) relation offers a platform to perform prestack imaging using an extended single wavefield that honors the geometrical configuration between sources, receivers and the image point, or in other words, prestack wavefields. Extrapolating such wavefields in time, nevertheless, is a big challenge because the radicand can be negative, reducing to a complex phase velocity, which makes the rank of the mixed-domain matrix very high. Using the vertical offset between the sources and receivers, we introduce a method for deriving the DSR formulation, which gives us the opportunity to derive approximations for the mixed-domain operator. The method extrapolates prestack wavefields by combining all data into one wave extrapolation procedure, allowing both upgoing and downgoing wavefields since the extrapolation is done in time, and it does not rely on the v(z) assumption in the offset axis of the media. Thus, the imaging condition is imposed by taking the zero-time and zero-offset slice from the multi-dimensional prestack wavefield. Unlike reverse time migration (RTM), no crosscorrelation is needed, and we also have access to the subsurface offset information, which is important for migration velocity analysis. Numerical examples show the capability of this approach in dealing with complex velocity models; it can provide a better quality image than RTM, and more efficiently.

  10. SU-F-T-64: An Alternative Approach to Determining the Reference Air-Kerma Rate from Extrapolation Chamber Measurements

    International Nuclear Information System (INIS)

    Schneider, T

    2016-01-01

    Purpose: Since 2008 the Physikalisch-Technische Bundesanstalt (PTB) has been offering the calibration of 125I-brachytherapy sources in terms of the reference air-kerma rate (RAKR). The primary standard is a large air-filled parallel-plate extrapolation chamber. The measurement principle is based on the fact that the air-kerma rate is proportional to the increment of ionization per increment of chamber volume at chamber depths greater than the range x_0 of secondary electrons originating from the electrode. Methods: Two methods for deriving the RAKR from the measured ionization charges are: (1) to determine the RAKR from the slope of the linear fit to the so-called 'extrapolation curve', the measured ionization charges Q vs. plate separations x; or (2) to differentiate Q(x) and to derive the RAKR by a linear extrapolation towards zero plate separation. For both methods, correcting the measured data for all known influencing effects before the evaluation method is applied is a precondition. However, the discrepancy of their results is larger than the uncertainty given for the determination of the RAKR with both methods. Results: A new approach to derive the RAKR from the measurements is investigated as an alternative. The method was developed from the ground up, based on radiation transport theory. A conversion factor C(x_1, x_2) is applied to the difference of charges measured at the two plate separations x_1 and x_2. This factor is composed of quotients of three air-kerma values calculated for different plate separations in the chamber: the air kerma Ka(0) for plate separation zero, and the mean air kermas at the plate separations x_1 and x_2, respectively. The RAKR determined with method (1) yields 4.877 µGy/h, and with method (2) 4.596 µGy/h. The application of the alternative approach results in 4.810 µGy/h. Conclusion: The alternative method shall be established in the future.
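The two conventional evaluation routes can be mimicked on synthetic data (the numbers below are invented; a small quadratic term stands in for the volume-dependent effects that make the methods disagree in practice):

```python
import numpy as np

# Synthetic extrapolation-chamber data: plate separations x (arbitrary
# units) and ionization charges Q with a small quadratic perturbation.
x = np.linspace(1.0, 5.0, 9)
q = 2.5 * x + 0.01 * x ** 2

# Method (1): RAKR proportional to the slope of a linear fit to Q(x).
slope_fit = np.polyfit(x, q, 1)[0]

# Method (2): differentiate Q(x) numerically, then extrapolate dQ/dx
# linearly towards zero plate separation and take the intercept.
dq = np.gradient(q, x)
slope_zero = np.polyfit(x, dq, 1)[1]
```

On this toy curve the two estimates differ systematically, the derivative-at-zero value sitting closer to the true leading coefficient, which is the kind of discrepancy the abstract reports.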

  11. Residual extrapolation operators for efficient wavefield construction

    KAUST Repository

    Alkhalifah, Tariq Ali

    2013-02-27

    Solving the wave equation using finite-difference approximations allows for fast extrapolation of the wavefield for modelling, imaging and inversion in complex media. It, however, suffers from dispersion and stability-related limitations that might hamper its efficient or proper application to high frequencies. Spectral-based time extrapolation methods tend to mitigate these problems, but at an additional cost to the extrapolation. I investigate the prospective of using a residual formulation of the spectral approach, along with utilizing Shanks transform-based expansions, that adheres to the residual requirements, to improve accuracy and reduce the cost. Utilizing the fact that spectral methods excel (time steps are allowed to be large) in homogeneous and smooth media, the residual implementation based on velocity perturbation optimizes the use of this feature. Most of the other implementations based on the spectral approach are focussed on reducing cost by reducing the number of inverse Fourier transforms required in every step of the spectral-based implementation. The approach here fixes that by improving the accuracy of each, potentially longer, time step.
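The Shanks transform used above can be sketched generically (sequence acceleration only, not the seismic implementation):

```python
import math

def shanks(seq):
    """One Shanks transform pass:
    S_n -> (S_{n+1} S_{n-1} - S_n^2) / (S_{n+1} + S_{n-1} - 2 S_n).
    Accelerates sequences whose error decays like a dominant geometric
    transient, the property the residual expansion exploits."""
    out = []
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        denom = a + c - 2 * b
        out.append(b if denom == 0 else (c * a - b * b) / denom)
    return out

# Partial sums of the alternating harmonic series, converging slowly to ln 2.
partial, s = [], 0.0
for k in range(1, 11):
    s += (-1) ** (k + 1) / k
    partial.append(s)
accelerated = shanks(partial)
```

Ten raw partial sums are still off by about 0.05; a single Shanks pass brings the estimate within roughly 1e-4 of ln 2.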

  12. Comparison of extrapolation methods for creep rupture stresses of 12Cr and 18Cr10NiTi steels

    International Nuclear Information System (INIS)

    Ivarsson, B.

    1979-01-01

    As part of a Soviet-Swedish research programme, the creep rupture properties of two heat-resisting steels, namely a 12% Cr steel and an 18% Cr 12% Ni titanium-stabilized steel, have been studied. One heat of each steel from each country was creep tested. The strength of the 12% Cr steels was similar to earlier reported strength values, the Soviet steel being somewhat stronger due to a higher tungsten content. The strength of the Swedish 18/12 Ti steel agreed with earlier results, while the properties of the Soviet steel were inferior to those reported from earlier Soviet creep testing. Three extrapolation methods were compared on creep rupture data collected in both countries. Isothermal extrapolation and an algebraic method of Soviet origin gave in many cases rather similar results, while the parameter method recommended by ISO resulted in higher rupture strength values at longer times. (author)

  13. Low-cost extrapolation method for maximal LTE radio base station exposure estimation: test and validation.

    Science.gov (United States)

    Verloock, Leen; Joseph, Wout; Gati, Azeddine; Varsier, Nadège; Flach, Björn; Wiart, Joe; Martens, Luc

    2013-06-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2×2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders.

  14. Comparison of precipitation nowcasting by extrapolation and statistical-advection methods

    Czech Academy of Sciences Publication Activity Database

    Sokol, Zbyněk; Kitzmiller, D.; Pešice, Petr; Mejsnar, Jan

    2013-01-01

    Roč. 123, 1 April (2013), s. 17-30 ISSN 0169-8095 R&D Projects: GA MŠk ME09033 Institutional support: RVO:68378289 Keywords: Precipitation forecast * Statistical models * Regression * Quantitative precipitation forecast * Extrapolation forecast Subject RIV: DG - Atmosphere Sciences, Meteorology Impact factor: 2.421, year: 2013 http://www.sciencedirect.com/science/article/pii/S0169809512003390

  15. Response Load Extrapolation for Wind Turbines during Operation Based on Average Conditional Exceedance Rates

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Naess, Arvid; Saha, Nilanjan

    2011-01-01

    The paper explores a recently developed method for statistical response load (load effect) extrapolation for application to extreme response of wind turbines during operation. The extrapolation method is based on average conditional exceedance rates and is in the present implementation restricted to cases where the Gumbel distribution is the appropriate asymptotic extreme value distribution. However, two extra parameters are introduced by which a more general and flexible class of extreme value distributions is obtained, with the Gumbel distribution as a subclass. The general method is implemented within a hierarchical model where the variables that influence the loading are divided into ergodic variables and time-invariant non-ergodic variables. The presented method for statistical response load extrapolation was compared with the existing methods based on peak extrapolation for the blade out...

  16. SU-E-J-145: Geometric Uncertainty in CBCT Extrapolation for Head and Neck Adaptive Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J [Henry Ford Health System, Detroit, MI (United States)

    2014-06-01

    Purpose: One primary limitation of using CBCT images for head-and-neck (H&N) adaptive radiotherapy (ART) is the limited field-of-view (FOV) range. We propose a method to extrapolate the CBCT by using a deformed planning CT for dose-of-the-day calculations. The aim was to estimate the geometric uncertainty of our extrapolation method. Methods: Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2), were selected. Furthermore, a small-FOV CBCT (CT2short) was synthetically created by cropping CT2 to the size of a CBCT image. Then, an extrapolated CBCT (CBCTextrp) was generated by deformably registering CT1 to CT2short and resampling with a wider FOV (42 mm beyond the CT2short borders), where CT1 is deformed through translation, rigid, affine, and b-spline transformations, in that order. The geometric error is measured as the distance map ||DVF|| produced by a deformable registration between CBCTextrp and CT2. Mean errors were calculated as a function of the distance away from the CBCT borders. The quality of all the registrations was visually verified. Results: Results were collected based on the average numbers from 10 patients. The extrapolation error increased linearly as a function of the distance (at a rate of 0.7 mm per 1 cm) away from the CBCT borders in the S/I direction. The errors (μ±σ) at the superior and inferior borders were 0.8 ± 0.5 mm and 3.0 ± 1.5 mm respectively, and increased to 2.7 ± 2.2 mm and 5.9 ± 1.9 mm at 4.2 cm away. The mean error within the CBCT borders was 1.16 ± 0.54 mm. The overall errors within the 4.2 cm expansion were 2.0 ± 1.2 mm (sup) and 4.5 ± 1.6 mm (inf). Conclusion: The overall error in the inferior direction is larger due to larger unpredictable deformations in the chest. The error introduced by extrapolation is plan dependent. The mean error in the expanded region can be large, and must be considered during implementation. This work is supported in part by Varian Medical Systems, Palo Alto, CA.

  17. Low-cost extrapolation method for maximal lte radio base station exposure estimation: Test and validation

    International Nuclear Information System (INIS)

    Verloock, L.; Joseph, W.; Gati, A.; Varsier, N.; Flach, B.; Wiart, J.; Martens, L.

    2013-01-01

    An experimental validation of a low-cost method for extrapolation and estimation of the maximal electromagnetic-field exposure from long-term evolution (LTE) radio base station installations is presented. No knowledge of downlink band occupation or service characteristics is required for the low-cost method. The method is applicable in situ. It only requires a basic spectrum analyser with appropriate field probes, without the need for expensive dedicated LTE decoders. The method is validated both in the laboratory and in situ, for a single-input single-output antenna LTE system and a 2x2 multiple-input multiple-output system, with low deviations in comparison with signals measured using dedicated LTE decoders. (authors)

  18. Characterization of low energy X-rays beams with an extrapolation chamber

    International Nuclear Information System (INIS)

    Bastos, Fernanda Martins

    2015-01-01

    In laboratories involving radiological protection practices, it is usual to use reference radiations for calibrating dosimeters and for studying their response in terms of energy dependence. The International Organization for Standardization (ISO) established four series of reference X-ray beams in the ISO 4037 standard: the L and H series, with low and high air-kerma rates, respectively; the N series, of narrow spectrum; and the W series, of wide spectrum. X-ray beams with tube potential below 30 kV, called 'low-energy beams', are in most cases critical with regard to the determination of characterization parameters such as the half-value layer. Extrapolation chambers are parallel-plate ionization chambers with one mobile electrode that allows variation of the air volume in their interior. These detectors are commonly used to measure the quantity absorbed dose, mostly at the surface of the medium, based on extrapolation of the linear ionization current as a function of the distance between the electrodes. In this work, a characterization of a PTW model 23392 extrapolation chamber was performed in the low-energy X-ray beams of the ISO 4037 standard, by determining the polarization voltage range through saturation curves and the value of the true null electrode spacing. In addition, the metrological reliability of the extrapolation chamber was studied with measurements of the leakage current and repeatability tests; limit values were established for the proper use of the chamber. The PTW 23392 extrapolation chamber was calibrated in terms of air kerma in some of the ISO low-energy radiation series; the traceability of the chamber to the National Standard Dosimeter was established. The study of the energy dependence of the extrapolation chamber and the assessment of the uncertainties related to the calibration coefficient were also carried out; it was shown that the energy dependence was reduced to 4% when the extrapolation technique was used. Finally, the first

  19. Performance of a prototype of an extrapolation minichamber in various radiation beams

    International Nuclear Information System (INIS)

    Oliveira, M.L.; Caldas, L.V.E.

    2007-01-01

    An extrapolation minichamber was developed for measuring doses from weakly penetrating types of radiation. The chamber was tested at the radiotherapeutic dose level in a beam from a 90Sr+90Y check source, in a beam from a plane 90Sr+90Y ophthalmic applicator, and in several reference beams from an X-ray tube. Saturation, ion collection efficiency, stabilization time, extrapolation curves, linearity of chamber response vs. air kerma rate, and dependences of the response on the energy and irradiation angle were characterized. The results are satisfactory; they show that the chamber can be used in the dosimetry of 90Sr+90Y beta particles and low-energy X-ray beams.
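The extrapolation-curve evaluation underlying such chambers reduces to fitting the ionization current against electrode spacing and taking the slope. A minimal sketch (synthetic readings; the conversion of the slope to absorbed-dose rate via air density, electrode area and W/e is deliberately omitted):

```python
import numpy as np

def slope_from_extrapolation(spacings, currents, null_offset=0.0):
    """Fit I(d) = a + b * (d + null_offset) and return the slope b, to
    which the surface dose rate is proportional. null_offset models the
    'true null electrode spacing' correction (illustrative)."""
    d = np.asarray(spacings, float) + null_offset
    b, a = np.polyfit(d, np.asarray(currents, float), 1)
    return b
```

Because only the slope is used, an additive spacing offset leaves the result unchanged, which is precisely why the extrapolation technique suppresses spacing-zero uncertainties.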

  20. Piecewise linear regression splines with hyperbolic covariates

    International Nuclear Information System (INIS)

    Cologne, John B.; Sposto, Richard

    1992-09-01

    Consider the problem of fitting a curve to data that exhibit a multiphase linear response with smooth transitions between phases. We propose substituting hyperbolas as covariates in piecewise linear regression splines to obtain curves that are smoothly joined. The method provides an intuitive and easy way to extend the two-phase linear hyperbolic response models of Griffiths and Miller, and of Watts and Bacon, to accommodate more than two linear segments. The resulting regression spline with hyperbolic covariates may be fit by nonlinear regression methods to estimate the degree of curvature between adjoining linear segments. The added complexity of fitting nonlinear, as opposed to linear, regression models is not great. The extra effort is particularly worthwhile when investigators are unwilling to assume that the slope of the response changes abruptly at the join points. We can also estimate the join points (the values of the abscissas where the linear segments would intersect if extrapolated) if their number and approximate locations may be presumed known. An example using data on changing age at menarche in a cohort of Japanese women illustrates the use of the method for exploratory data analysis. (author)
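A hedged sketch of the hyperbolic covariate (the parametrization below is one common way to smooth a hinge; the paper's exact form may differ):

```python
import numpy as np

def hyperbolic_hinge(x, knot, gamma):
    """Smooth stand-in for the piecewise-linear basis max(0, x - knot):
    one branch of a hyperbola with asymptotes 0 and x - knot, where
    gamma controls the curvature of the transition (gamma -> 0 recovers
    the sharp hinge)."""
    z = x - knot
    return 0.5 * (z + np.sqrt(z * z + 4.0 * gamma ** 2))

def two_phase(x, b0, b1, delta, knot, gamma):
    """Two-segment 'spline' with a smooth join: slope b1 before the
    knot, slope b1 + delta after it."""
    return b0 + b1 * x + delta * hyperbolic_hinge(x, knot, gamma)
```

Replacing each hinge by such a covariate makes the whole model differentiable, so the join points and curvatures can be estimated by ordinary nonlinear least squares.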

  1. Acceleration of nodal diffusion code by Chebychev polynomial extrapolation method; Ubrzanje spoljasnjih iteracija difuzionog nodalnog proracuna Chebisevijevom ekstrapolacionom metodom

    Energy Technology Data Exchange (ETDEWEB)

    Zmijarevic, I; Tomashevic, Dj [Institut za Nuklearne Nauke Boris Kidric, Belgrade (Yugoslavia)

    1988-07-01

    This paper presents Chebychev acceleration of the outer iterations of a nodal diffusion code of high accuracy. Extrapolation parameters, unique for all moments, are calculated using the node-integrated distribution of the fission source. Sample calculations are presented indicating the efficiency of the method. (author)
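
The mechanics of Chebyshev extrapolation of an outer (source) iteration can be sketched on a toy eigenproblem. The two-parameter recurrence below uses the standard cosh-form coefficients driven by an assumed estimate of the dominance ratio σ = λ₂/λ₁; this is a generic sketch, not the nodal code's implementation.

```python
import numpy as np

def power_iteration(A, iters):
    # plain outer iteration: error decays only like sigma**p
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)
    return x

def chebyshev_accelerated(A, sigma, iters):
    """Chebyshev extrapolation of the source iteration x <- A x / k,
    with the textbook cosh-form parameters for dominance ratio sigma."""
    g = np.arccosh(2.0 / sigma - 1.0)
    x = np.ones(A.shape[0])
    x_prev = x.copy()
    for p in range(1, iters + 1):
        k = x @ (A @ x) / (x @ x)       # Rayleigh estimate of lambda_1
        r = (A @ x) / k - x             # source-iteration residual
        if p == 1:
            alpha, beta = 2.0 / (2.0 - sigma), 0.0
        else:
            alpha = 4.0 / sigma * np.cosh((p - 1) * g) / np.cosh(p * g)
            beta = (1.0 - sigma / 2.0) * alpha - 1.0
        x, x_prev = x + alpha * r + beta * (x - x_prev), x
    return x / np.linalg.norm(x)

A = np.diag([1.0, 0.95, 0.5])           # dominance ratio 0.95
e1 = np.array([1.0, 0.0, 0.0])
err_plain = np.abs(power_iteration(A, 50) - e1).max()
err_accel = np.abs(chebyshev_accelerated(A, 0.95, 50) - e1).max()
```

With σ close to 1 the plain iteration converges slowly (0.95⁵⁰ ≈ 0.08 here), while the Chebyshev-extrapolated error component is damped roughly like 1/cosh(p·arccosh(2/σ − 1)).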

  2. Explorative methods in linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    2004-01-01

    The author has developed the H-method of mathematical modeling that builds up the model by parts, where each part is optimized with respect to prediction. Besides providing with better predictions than traditional methods, these methods provide with graphic procedures for analyzing different feat...... features in data. These graphic methods extend the well-known methods and results of Principal Component Analysis to any linear model. Here the graphic procedures are applied to linear regression and Ridge Regression....

  3. Seismic wave extrapolation using lowrank symbol approximation

    KAUST Repository

    Fomel, Sergey

    2012-04-30

    We consider the problem of constructing a wave extrapolation operator in a variable and possibly anisotropic medium. Our construction involves Fourier transforms in space combined with a lowrank approximation of the space-wavenumber wave-propagator matrix. A lowrank approximation implies selecting a small set of representative spatial locations and a small set of representative wavenumbers. We present a mathematical derivation of this method, a description of the lowrank approximation algorithm and numerical examples that confirm the validity of the proposed approach. Wave extrapolation using lowrank approximation can be applied to seismic imaging by reverse-time migration in 3D heterogeneous isotropic or anisotropic media. © 2012 European Association of Geoscientists & Engineers.
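
The low-rank property itself is easy to check numerically: for a smoothly varying velocity, the singular values of the mixed space-wavenumber phase matrix of a one-step extrapolator decay rapidly. The velocity model, time step, and tolerance below are arbitrary illustrative choices, not the paper's examples.

```python
import numpy as np

nx, nk, dt = 64, 64, 1e-4
x = np.linspace(0.0, 1.0, nx)
v = 1500.0 + 1000.0 * x                 # smoothly varying velocity (m/s)
k = np.linspace(0.0, 200.0, nk)         # wavenumber samples (1/m)

# one-step mixed-domain extrapolator phase: W[x, k] = exp(i v(x) |k| dt)
W = np.exp(1j * np.outer(v, k) * dt)

s = np.linalg.svd(W, compute_uv=False)
numerical_rank = int(np.sum(s / s[0] > 1e-4))   # rank at 1e-4 tolerance
```

A small `numerical_rank` relative to `nx` is what allows the propagator to be applied as a short sum over representative locations and wavenumbers instead of a full matrix multiply.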

  4. Wavefield extrapolation in pseudodepth domain

    KAUST Repository

    Ma, Xuxin

    2013-02-01

    Wavefields are commonly computed in the Cartesian coordinate frame. Its efficiency is inherently limited due to spatial oversampling in deep layers, where the velocity is high and wavelengths are long. To alleviate this computational waste due to uneven wavelength sampling, we convert the vertical axis of the conventional domain from depth to vertical time or pseudodepth. This creates a nonorthognal Riemannian coordinate system. Isotropic and anisotropic wavefields can be extrapolated in the new coordinate frame with improved efficiency and good consistency with Cartesian domain extrapolation results. Prestack depth migrations are also evaluated based on the wavefield extrapolation in the pseudodepth domain.© 2013 Society of Exploration Geophysicists. All rights reserved.

  5. Proposition of Improved Methodology in Creep Life Extrapolation

    International Nuclear Information System (INIS)

    Kim, Woo Gon; Park, Jae Young; Jang, Jin Sung

    2016-01-01

    To design SFRs for a 60-year operation, it is desirable to have the experimental creep-rupture data for Gr. 91 steel close to 20 y, or at least rupture lives significantly higher than 10⁵ h. This requirement arises from the fact that, for the creep design, a factor of 3 times for extrapolation is considered to be appropriate. However, obtaining experimental data close to 20 y would be expensive and also take considerable time. Therefore, reliable creep life extrapolation techniques become necessary for a safe design life of 60 y. In addition, it is appropriate to obtain experimental long-term creep-rupture data in the range 10⁵ ∼ 2×10⁵ h to improve the reliability of extrapolation. In the present investigation, a new hyperbolic sine ('sinh') function for the master curve in time-temperature parameter (TTP) methods was proposed to accurately extrapolate the long-term creep rupture stress of Gr. 91 steel. Constant values used for each parametric equation were optimized on the basis of the creep rupture data. Average stress values predicted for up to 60 y were evaluated and compared with those of the French Nuclear Design Code, RCC-MRx. The results showed that the master curve of the 'sinh' function found wider acceptance, with good flexibility in the low stress ranges beyond the experimental data. It was clarified that the 'sinh' function is more reasonable for creep life extrapolation than the polynomial forms that have been used conventionally until now.
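
The TTP workflow can be sketched as: compute a time-temperature parameter for each rupture test, fit a master curve in stress, then invert the fitted curve at the target parameter value. The Larson-Miller constant, the particular sinh-form master curve, and the synthetic data below are illustrative placeholders, not the paper's fitted values for Gr. 91.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

C_LM = 20.0  # assumed Larson-Miller constant

def lmp(T_kelvin, t_rupture_h):
    # time-temperature parameter: P = T (C + log10 t_r)
    return T_kelvin * (C_LM + np.log10(t_rupture_h))

def master_curve(stress_mpa, a, b, c):
    # assumed sinh-form master curve: P decreases smoothly with stress
    return a - b * np.log(np.sinh(c * stress_mpa))

# synthetic rupture data standing in for experimental measurements
stress = np.array([200.0, 170.0, 140.0, 110.0, 80.0, 60.0])
rng = np.random.default_rng(1)
P_obs = master_curve(stress, 2.6e4, 1.2e3, 0.02) + rng.normal(0, 50, stress.size)

popt, _ = curve_fit(master_curve, stress, P_obs, p0=[2.5e4, 1.0e3, 0.015])

# extrapolated stress for a 60-year rupture life (~5.3e5 h) at 873 K
target_P = lmp(873.0, 60 * 8766.0)
sigma_60y = brentq(lambda s: master_curve(s, *popt) - target_P, 1.0, 400.0)
```

Because sinh interpolates between a logarithmic regime at low stress and a linear regime at high stress, this form stays well-behaved in the low-stress range beyond the data, which is where purely polynomial master curves tend to misbehave.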

  6. Proposition of Improved Methodology in Creep Life Extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Woo Gon; Park, Jae Young; Jang, Jin Sung [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    To design SFRs for a 60-year operation, it is desirable to have the experimental creep-rupture data for Gr. 91 steel close to 20 y, or at least rupture lives significantly higher than 10⁵ h. This requirement arises from the fact that, for the creep design, a factor of 3 times for extrapolation is considered to be appropriate. However, obtaining experimental data close to 20 y would be expensive and also take considerable time. Therefore, reliable creep life extrapolation techniques become necessary for a safe design life of 60 y. In addition, it is appropriate to obtain experimental long-term creep-rupture data in the range 10⁵ ∼ 2×10⁵ h to improve the reliability of extrapolation. In the present investigation, a new hyperbolic sine ('sinh') function for the master curve in time-temperature parameter (TTP) methods was proposed to accurately extrapolate the long-term creep rupture stress of Gr. 91 steel. Constant values used for each parametric equation were optimized on the basis of the creep rupture data. Average stress values predicted for up to 60 y were evaluated and compared with those of the French Nuclear Design Code, RCC-MRx. The results showed that the master curve of the 'sinh' function found wider acceptance, with good flexibility in the low stress ranges beyond the experimental data. It was clarified that the 'sinh' function is more reasonable for creep life extrapolation than the polynomial forms that have been used conventionally until now.

  7. Determination of the most appropriate method for extrapolating overall survival data from a placebo-controlled clinical trial of lenvatinib for progressive, radioiodine-refractory differentiated thyroid cancer.

    Science.gov (United States)

    Tremblay, Gabriel; Livings, Christopher; Crowe, Lydia; Kapetanakis, Venediktos; Briggs, Andrew

    2016-01-01

    Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. A piecewise model, in which the Kaplan-Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and populate future cost-effectiveness analyses.
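
The selected piecewise model is straightforward to implement: the Kaplan-Meier estimate is used up to the end of follow-up T, and an exponential tail S(T)·exp(−λ(t − T)) is used beyond it. The KM summary and hazard below are made-up illustrative numbers; in practice λ would be estimated from the trial data.

```python
import numpy as np

def piecewise_survival(t, km_times, km_surv, lam):
    """Kaplan-Meier step function inside the trial window, exponential tail beyond."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    T = km_times[-1]
    out = np.empty_like(t)
    inside = t <= T
    # step-function lookup of the KM estimate
    idx = np.searchsorted(km_times, t[inside], side="right") - 1
    out[inside] = km_surv[np.clip(idx, 0, len(km_surv) - 1)]
    # extrapolated tail, continuous at T by construction
    out[~inside] = km_surv[-1] * np.exp(-lam * (t[~inside] - T))
    return out

# hypothetical KM summary over a 34-month follow-up (months, survival)
km_times = np.array([0.0, 6.0, 12.0, 24.0, 34.0])
km_surv = np.array([1.00, 0.95, 0.85, 0.70, 0.60])
lam = 0.01  # per month, assumed tail hazard

s_end = piecewise_survival(34.0, km_times, km_surv, lam)[0]  # joins KM at T
s_5y = piecewise_survival(60.0, km_times, km_surv, lam)[0]   # extrapolated OS
```

Keeping the within-trial survivor function nonparametric while restricting the parametric assumption to the tail is what makes this piecewise approach attractive when no single parametric model fits the whole follow-up period.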

  8. Dose rates from a C-14 source using extrapolation chamber and MC calculations

    International Nuclear Information System (INIS)

    Borg, J.

    1996-05-01

    The extrapolation chamber technique and the Monte Carlo (MC) calculation technique based on the EGS4 system have been studied for application to the determination of dose rates in a low-energy β radiation field, e.g., that from a ¹⁴C source. The extrapolation chamber measurement method is the basic method for determination of dose rates in β radiation fields. Applying a number of correction factors and the stopping power ratio, tissue to air, the measured dose rate in an air volume surrounded by tissue-equivalent material is converted into dose to tissue. Various details of the extrapolation chamber measurement method and evaluation procedure have been studied and further developed, and a complete procedure for the experimental determination of dose rates from a ¹⁴C source is presented. A number of correction factors and other parameters used in the evaluation procedure for the measured data have been obtained by MC calculations. The whole extrapolation chamber measurement procedure was simulated using the MC method. The measured dose rates showed an increasing deviation from the MC calculated dose rates as the absorber thickness increased. This indicates that the EGS4 code may have some limitations for transport of very low-energy electrons, i.e., electrons with estimated energies less than 10-20 keV. MC calculations of dose to tissue were performed using two models: a cylindrical tissue phantom and a computer model of the extrapolation chamber. The dose to tissue in the extrapolation chamber model showed an additional buildup dose compared to the dose in the tissue model. (au) 10 tabs., 11 ills., 18 refs
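
The core of the extrapolation chamber evaluation can be sketched as: fit the ionization current against the electrode gap in the linear region, take the zero-gap slope, and convert to tissue dose rate via W/e and the tissue-to-air stopping-power ratio. The constants and chamber readings below are illustrative, and the full procedure applies further correction factors omitted here.

```python
import numpy as np

W_OVER_E = 33.97        # J/C, mean energy per ion pair in dry air
RHO_AIR = 1.205e-3      # g/cm^3 at 20 degC, 101.3 kPa
S_TISSUE_AIR = 1.12     # assumed stopping-power ratio, tissue to air

def dose_rate(gaps_cm, currents_A, area_cm2):
    """Tissue dose rate (Gy/s) from the zero-gap slope of chamber current vs. gap."""
    slope = np.polyfit(gaps_cm, currents_A, 1)[0]   # dI/dd in the linear region
    air_mass_per_cm = RHO_AIR * area_cm2 * 1e-3     # kg of air per cm of gap
    return W_OVER_E * S_TISSUE_AIR * slope / air_mass_per_cm

gaps = np.array([0.05, 0.10, 0.15, 0.20])           # cm
currents = 3.0e-11 * gaps + 1.0e-13                 # A, synthetic linear readings
d_rate = dose_rate(gaps, currents, area_cm2=1.0)    # Gy/s
```

Extrapolating the slope to zero gap removes the perturbation of the air cavity itself, which is why the technique serves as the primary method for weakly penetrating radiation.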

  9. Motion extrapolation in the central fovea.

    Directory of Open Access Journals (Sweden)

    Zhuanghua Shi

    Full Text Available Neural transmission latency would introduce a spatial lag when an object moves across the visual field, if the latency was not compensated. A visual predictive mechanism has been proposed, which overcomes such spatial lag by extrapolating the position of the moving object forward. However, a forward position shift is often absent if the object abruptly stops moving (motion-termination. A recent "correction-for-extrapolation" hypothesis suggests that the absence of forward shifts is caused by sensory signals representing 'failed' predictions. Thus far, this hypothesis has been tested only for extra-foveal retinal locations. We tested this hypothesis using two foveal scotomas: scotoma to dim light and scotoma to blue light. We found that the perceived position of a dim dot is extrapolated into the fovea during motion-termination. Next, we compared the perceived position shifts of a blue versus a green moving dot. As predicted the extrapolation at motion-termination was only found with the blue moving dot. The results provide new evidence for the correction-for-extrapolation hypothesis for the region with highest spatial acuity, the fovea.

  10. Determination of the most appropriate method for extrapolating overall survival data from a placebo-controlled clinical trial of lenvatinib for progressive, radioiodine-refractory differentiated thyroid cancer

    Directory of Open Access Journals (Sweden)

    Tremblay G

    2016-06-01

    Full Text Available Gabriel Tremblay,1 Christopher Livings,2 Lydia Crowe,2 Venediktos Kapetanakis,2 Andrew Briggs3 1Global Health Economics and Health Technology Assessment, Eisai Inc., Woodcliff Lake, NJ, USA; 2Health Economics, Decision Resources Group, Bicester, Oxfordshire, 3Health Economics and Health Technology Assessment, Institute of Health and Wellbeing, University of Glasgow, Glasgow, UK Background: Cost-effectiveness models for the treatment of long-term conditions often require information on survival beyond the period of available data. Objectives: This paper aims to identify a robust and reliable method for the extrapolation of overall survival (OS) in patients with radioiodine-refractory differentiated thyroid cancer receiving lenvatinib or placebo. Methods: Data from 392 patients (lenvatinib: 261, placebo: 131) from the SELECT trial are used over a 34-month period of follow-up. A previously published criterion-based approach is employed to ascertain credible estimates of OS beyond the trial data. Parametric models with and without a treatment covariate and piecewise models are used to extrapolate OS, and a holistic approach, where a series of statistical and visual tests are considered collectively, is taken in determining the most appropriate extrapolation model. Results: A piecewise model, in which the Kaplan–Meier survivor function is used over the trial period and an extrapolated tail is based on the Exponential distribution, is identified as the optimal model. Conclusion: In the absence of long-term survival estimates from clinical trials, survival estimates often need to be extrapolated from the available data. The use of a systematic method based on a priori determined selection criteria provides a transparent approach and reduces the risk of bias. The extrapolated OS estimates will be used to investigate the potential long-term benefits of lenvatinib in the treatment of radioiodine-refractory differentiated thyroid cancer patients and populate future cost-effectiveness analyses.

  11. SU-F-T-579: Extrapolation Techniques for Small Field Dosimetry Using Gafchromic EBT3 Film

    Energy Technology Data Exchange (ETDEWEB)

    Morales, J [Chris OBrien Lifehouse, Camperdown, NSW (Australia)

    2016-06-15

    Purpose: The purpose of this project is to test an experimental approach using an extrapolation technique for Gafchromic EBT3 film for small field x-ray dosimetry. Methods: Small fields from a Novalis Tx linear accelerator with HD Multileaf Collimators at 6 MV were used. The field sizes ranged from 5 × 5 to 50 × 50 mm² for MLC-defined fields, plus a range of circular cones of 4 to 30 mm diameter. All measurements were performed in water at an SSD of 100 cm and at a depth of 10 cm. The relative output factors (ROFs) were determined from an extrapolation technique developed to eliminate the effects of partial volume averaging in film scans by scanning films at high resolution (1200 DPI). The size of the region of interest (ROI) was varied to produce a plot of ROFs versus ROI, which was then extrapolated to zero ROI to determine the relative output factor. The results were compared with other solid state detectors with appropriate corrections, namely the IBA SFD diode, PTW 60008, and PTW 60012 diodes. Results: For the 4 mm cone, the extrapolated ROF had a value of 0.658 ± 0.014 as compared to 0.642 and 0.636 for 0.5 mm and 1 mm ROI analyses, respectively. This showed a change in output factor of 2.4% and 3.3% at these comparative ROI sizes. In comparison, the 25 mm cone had a difference in measured output factor of 0.3% and 0.5% between the 0.5 and 1.0 mm ROIs, respectively, compared to zero volume. For the fields defined by MLCs, a difference of up to 2% for 5 × 5 mm² was observed. Conclusion: A measurable difference can be seen in ROF depending on the ROI when radiochromic film is used. Using the extrapolation technique with high-resolution scanning, good agreement can be achieved.
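
The zero-ROI extrapolation reduces to a straight-line fit of ROF against ROI size, evaluated at zero. The ROI sizes and readings below are illustrative numbers chosen near the 4 mm cone values reported above, not the measured data.

```python
import numpy as np

roi_mm = np.array([0.25, 0.5, 1.0, 1.5])       # square ROI side lengths (mm)
rof = np.array([0.652, 0.642, 0.636, 0.628])   # illustrative film ROFs, small cone

# partial volume averaging suppresses the ROF roughly linearly with ROI size
# in small fields, so a straight-line fit extrapolated to zero ROI removes it
slope, intercept = np.polyfit(roi_mm, rof, 1)
rof_zero = intercept   # extrapolated relative output factor at zero ROI
```

The fitted slope is negative (larger ROIs average in more of the penumbra, lowering the reading), and the intercept recovers the volume-averaging-free output factor.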

  12. Effective wavefield extrapolation in anisotropic media: Accounting for resolvable anisotropy

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-04-30

    Spectral methods provide artefact-free and generally dispersion-free wavefield extrapolation in anisotropic media. Their apparent weakness is in accessing the medium-inhomogeneity information in an efficient manner. This is usually handled through a velocity-weighted summation (interpolation) of representative constant-velocity extrapolated wavefields, with the number of these extrapolations controlled by the effective rank of the original mixed-domain operator or, more specifically, by the complexity of the velocity model. Conversely, with pseudo-spectral methods, because only the space derivatives are handled in the wavenumber domain, we obtain relatively efficient access to the inhomogeneity in isotropic media, but we often resort to weak approximations to handle the anisotropy efficiently. Utilizing perturbation theory, I isolate the contribution of anisotropy to the wavefield extrapolation process. This allows us to factorize as much of the inhomogeneity in the anisotropic parameters as possible out of the spectral implementation, yielding effectively a pseudo-spectral formulation. This is particularly true if the inhomogeneity of the dimensionless anisotropic parameters is mild compared with the velocity (i.e., factorized anisotropic media). I improve on the accuracy by using the Shanks transformation to incorporate a denominator in the expansion that predicts the higher-order omitted terms; thus, we deal with fewer terms for a high level of accuracy. In fact, when we use this new separation-based implementation, the anisotropy correction to the extrapolation can be applied separately as a residual operation, which provides a tool for anisotropic parameter sensitivity analysis. The accuracy of the approximation is high, as demonstrated in a complex tilted transversely isotropic model. © 2014 European Association of Geoscientists & Engineers.
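
The Shanks transformation invoked above is a generic sequence accelerator: from three consecutive approximations it forms a rational expression that cancels the dominant geometric transient. A minimal, self-contained illustration on slowly converging partial sums (unrelated to the seismic application itself):

```python
import math
import numpy as np

def shanks(seq):
    """One pass of the Shanks transformation over a converging sequence:
    S(a_n) = (a_{n+1} a_{n-1} - a_n^2) / (a_{n+1} + a_{n-1} - 2 a_n)."""
    a = np.asarray(seq, dtype=float)
    num = a[2:] * a[:-2] - a[1:-1] ** 2
    den = a[2:] + a[:-2] - 2.0 * a[1:-1]
    return num / den

n = np.arange(1, 12)
partial = 4.0 * np.cumsum((-1.0) ** (n + 1) / (2 * n - 1))  # Leibniz series for pi
accel = shanks(partial)
```

The same mechanism is what lets the expansion above achieve high accuracy with fewer explicitly computed terms, since the rational form implicitly predicts the omitted higher-order contributions.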

  13. Effective wavefield extrapolation in anisotropic media: Accounting for resolvable anisotropy

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-01-01

    Spectral methods provide artefact-free and generally dispersion-free wavefield extrapolation in anisotropic media. Their apparent weakness is in accessing the medium-inhomogeneity information in an efficient manner. This is usually handled through a velocity-weighted summation (interpolation) of representative constant-velocity extrapolated wavefields, with the number of these extrapolations controlled by the effective rank of the original mixed-domain operator or, more specifically, by the complexity of the velocity model. Conversely, with pseudo-spectral methods, because only the space derivatives are handled in the wavenumber domain, we obtain relatively efficient access to the inhomogeneity in isotropic media, but we often resort to weak approximations to handle the anisotropy efficiently. Utilizing perturbation theory, I isolate the contribution of anisotropy to the wavefield extrapolation process. This allows us to factorize as much of the inhomogeneity in the anisotropic parameters as possible out of the spectral implementation, yielding effectively a pseudo-spectral formulation. This is particularly true if the inhomogeneity of the dimensionless anisotropic parameters is mild compared with the velocity (i.e., factorized anisotropic media). I improve on the accuracy by using the Shanks transformation to incorporate a denominator in the expansion that predicts the higher-order omitted terms; thus, we deal with fewer terms for a high level of accuracy. In fact, when we use this new separation-based implementation, the anisotropy correction to the extrapolation can be applied separately as a residual operation, which provides a tool for anisotropic parameter sensitivity analysis. The accuracy of the approximation is high, as demonstrated in a complex tilted transversely isotropic model. © 2014 European Association of Geoscientists & Engineers.

  14. Outlier robustness for wind turbine extrapolated extreme loads

    DEFF Research Database (Denmark)

    Natarajan, Anand; Verelst, David Robert

    2012-01-01

    Stochastic identification of numerical artifacts in simulated loads is demonstrated using the method of principal component analysis. The extrapolation methodology is made robust to outliers through a weighted loads approach, whereby the eigenvalues of the correlation matrix obtained using the loads with its...
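
The PCA-based artifact screening can be sketched as follows: project each simulated load series onto the dominant principal component of the ensemble and flag series with a large reconstruction residual. The synthetic data and the single-component choice are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def pca_outlier_scores(loads):
    """Score each simulated load series (one per row) by its residual after
    projecting onto the first principal component of the ensemble."""
    X = loads - loads.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    recon = np.outer(X @ vt[0], vt[0])   # rank-1 reconstruction
    return np.linalg.norm(X - recon, axis=1)

rng = np.random.default_rng(2)
base = rng.normal(size=10)                    # common load shape
data = rng.uniform(0.5, 2.0, (20, 1)) * base  # 20 plausible simulations
data += rng.normal(0.0, 0.01, data.shape)
data[7] = 0.8 * rng.normal(size=10)           # injected numerical artifact
scores = pca_outlier_scores(data)
suspect = int(scores.argmax())                # index of the flagged series
```

Series that follow the shared physical load pattern reconstruct almost perfectly from the leading component, so a numerical artifact stands out with a residual orders of magnitude larger than its peers.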

  15. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  16. Effective orthorhombic anisotropic models for wavefield extrapolation

    KAUST Repository

    Ibanez-Jacome, W.

    2014-07-18

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  17. Effective orthorhombic anisotropic models for wavefield extrapolation

    KAUST Repository

    Ibanez-Jacome, W.; Alkhalifah, Tariq Ali; Waheed, Umair bin

    2014-01-01

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborated formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation that also provides a simplified representation for the orthorhombic dispersion relation. However, such representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, we generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, we develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic ones, is represented by a sixth order polynomial equation with the fastest solution corresponding to outgoing P waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, and using them to explicitly evaluate the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media.
We extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the more expensive anisotropic extrapolator.

  18. Extrapolation of Extreme Response for Wind Turbines based on FieldMeasurements

    DEFF Research Database (Denmark)

    Toft, Henrik Stensgaard; Sørensen, John Dalsgaard

    2009-01-01

    The characteristic loads on wind turbines during operation are, among others, dependent on the mean wind speed, the turbulence intensity and the type and settings of the control system. These parameters must be taken into account in the assessment of the characteristic load. Two methods for statistical load extrapolation are presented. The first method is based on the same assumptions as the existing method, but the statistical extrapolation is only performed for a limited number of mean wind speeds where the extreme load is likely to occur. For the second method, the mean wind speeds are divided into storms, which are assumed independent, and the characteristic loads are determined from the extreme load in each storm.

  19. Interior Point Method for Solving Fuzzy Number Linear Programming Problems Using Linear Ranking Function

    Directory of Open Access Journals (Sweden)

    Yi-hua Zhong

    2013-01-01

    Full Text Available Recently, various methods have been developed for solving linear programming problems with fuzzy numbers, such as the simplex method and the dual simplex method. But their computational complexities are exponential, which is not satisfactory for solving large-scale fuzzy linear programming problems, especially in the engineering field. A new method which can solve large-scale fuzzy number linear programming problems is presented in this paper, named a revised interior point method. Its idea is similar to that of the interior point method used for solving linear programming problems in a crisp environment, but its feasible direction and step size are chosen using trapezoidal fuzzy numbers, a linear ranking function, fuzzy vectors, and their operations, and its end condition involves the linear ranking function. Their correctness and rationality are proved. Moreover, the choice of the initial interior point and some factors influencing the results of this method are also discussed and analyzed. Algorithm analysis and an example study show that a proper safety factor parameter, accuracy parameter, and initial interior point may reduce the number of iterations, and that they can be selected easily according to actual needs. Finally, the method proposed in this paper is an alternative method for solving fuzzy number linear programming problems.
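
A linear ranking function is the ingredient that lets crisp comparisons drive the interior point steps. One common choice (an assumed example, not necessarily the paper's exact function) ranks a trapezoidal fuzzy number (a, b, c, d) by the average of its four defining points:

```python
def rank(tfn):
    """Linear ranking of a trapezoidal fuzzy number (a, b, c, d); a larger
    rank means a 'greater' fuzzy number under this ordering."""
    a, b, c, d = tfn
    return (a + b + c + d) / 4.0

# the ranking reduces a fuzzy comparison to a crisp one, e.g. for two fuzzy costs
cost1 = (1.0, 2.0, 3.0, 4.0)
cost2 = (2.0, 2.5, 3.0, 5.0)
cheaper = cost1 if rank(cost1) <= rank(cost2) else cost2
```

Because the ranking is linear, it commutes with the fuzzy-number arithmetic used to form feasible directions and step sizes, which is what keeps the interior point machinery intact in the fuzzy setting.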

  20. Linear methods in band theory

    DEFF Research Database (Denmark)

    Andersen, O. Krogh

    1975-01-01

    of Korringa-Kohn-Rostoker, linear-combination-of-atomic-orbitals, and cellular methods; the secular matrix is linear in energy, the overlap integrals factorize as potential parameters and structure constants, the latter are canonical in the sense that they neither depend on the energy nor the cell volume...

  1. Ecotoxicological effects extrapolation models

    Energy Technology Data Exchange (ETDEWEB)

    Suter, G.W. II

    1996-09-01

    One of the central problems of ecological risk assessment is modeling the relationship between test endpoints (numerical summaries of the results of toxicity tests) and assessment endpoints (formal expressions of the properties of the environment that are to be protected). For example, one may wish to estimate the reduction in species richness of fishes in a stream reach exposed to an effluent and have only a fathead minnow 96 hr LC50 as an effects metric. The problem is to extrapolate from what is known (the fathead minnow LC50) to what matters to the decision maker, the loss of fish species. Models used for this purpose may be termed Effects Extrapolation Models (EEMs) or Activity-Activity Relationships (AARs), by analogy to Structure-Activity Relationships (SARs). These models have been previously reviewed in Ch. 7 and 9 of and by an OECD workshop. This paper updates those reviews and attempts to further clarify the issues involved in the development and use of EEMs. Although there is some overlap, this paper does not repeat those reviews and the reader is referred to the previous reviews for a more complete historical perspective, and for treatment of additional extrapolation issues.

  2. Predicting structural properties of fluids by thermodynamic extrapolation

    Science.gov (United States)

    Mahynski, Nathan A.; Jiao, Sally; Hatch, Harold W.; Blanco, Marco A.; Shen, Vincent K.

    2018-05-01

    We describe a methodology for extrapolating the structural properties of multicomponent fluids from one thermodynamic state to another. These properties generally include features of a system that may be computed from an individual configuration such as radial distribution functions, cluster size distributions, or a polymer's radius of gyration. This approach is based on the principle of using fluctuations in a system's extensive thermodynamic variables, such as energy, to construct an appropriate Taylor series expansion for these structural properties in terms of intensive conjugate variables, such as temperature. Thus, one may extrapolate these properties from one state to another when the series is truncated to some finite order. We demonstrate this extrapolation for simple and coarse-grained fluids in both the canonical and grand canonical ensembles, in terms of both temperatures and the chemical potentials of different components. The results show that this method is able to reasonably approximate structural properties of such fluids over a broad range of conditions. Consequently, this methodology may be employed to increase the computational efficiency of molecular simulations used to measure the structural properties of certain fluid systems, especially those used in high-throughput or data-driven investigations.
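
The fluctuation-based Taylor expansion can be illustrated exactly on a toy system: in the canonical ensemble, d⟨A⟩/dβ = −Cov(A, U), so a first-order extrapolation in inverse temperature needs only averages at the reference state. The two-state system and observable below are illustrative stand-ins for simulation data.

```python
import numpy as np

E = np.array([0.0, 1.0])      # energies of a two-state toy system
A_obs = np.array([0.0, 1.0])  # a structural observable per microstate

def canonical_avg(x, beta):
    # exact Boltzmann-weighted average for the toy system
    w = np.exp(-beta * E)
    w /= w.sum()
    return float((w * x).sum())

beta1, beta2 = 1.0, 1.2
avg_A = canonical_avg(A_obs, beta1)
cov_AU = canonical_avg(A_obs * E, beta1) - avg_A * canonical_avg(E, beta1)

# first-order Taylor extrapolation in beta: d<A>/dbeta = -Cov(A, U)
extrap = avg_A - cov_AU * (beta2 - beta1)
exact = canonical_avg(A_obs, beta2)
```

In a molecular simulation the same covariance is estimated from sampled configurations at one state point, so properties at nearby temperatures or chemical potentials come essentially for free.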

  3. A thermal extrapolation method for the effective temperatures and internal energies of activated ions

    Science.gov (United States)

    Meot-Ner (Mautner), Michael; Somogyi, Árpád

    2007-11-01

    The internal energies of dissociating ions, activated chemically or collisionally, can be estimated using the kinetics of thermal dissociation. The thermal Arrhenius parameters can be combined with the observed dissociation rate of the activated ions using kdiss = Athermal·exp(-Ea,thermal/RTeff). This Arrhenius-type relation yields the effective temperature, Teff, at which the ions would dissociate thermally at the same rate, or yield the same product distributions, as the activated ions. In turn, Teff is used to calculate the internal energy of the ions and the energy deposited by the activation process. The method yields an energy deposition efficiency of 10% for a chemical ionization proton transfer reaction and 8-26% for the surface collisions of various peptide ions. Internal energies of ions activated by chemical ionization or by gas phase collisions, and of ions produced by desorption methods such as fast atom bombardment, can also be evaluated. Thermal extrapolation is especially useful for ion-molecule reaction products and for biological ions, where other methods to evaluate internal energies are laborious or unavailable.
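    Inverting the Arrhenius-type relation above for Teff is a one-line computation. The sketch below uses illustrative numerical values (not taken from the paper) for the dissociation rate and thermal Arrhenius parameters:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def effective_temperature(k_diss, A_thermal, Ea_thermal):
    """Solve k_diss = A_thermal * exp(-Ea_thermal / (R * Teff)) for Teff."""
    return Ea_thermal / (R * math.log(A_thermal / k_diss))

# Illustrative values: observed rate 1e3 s^-1, thermal A = 1e14 s^-1,
# thermal activation energy 120 kJ/mol.
Teff = effective_temperature(k_diss=1e3, A_thermal=1e14, Ea_thermal=120e3)
```

    The resulting Teff is the temperature at which a thermal ensemble would dissociate at the observed rate; the paper then converts Teff to an internal energy via the ion's heat capacity.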

  4. Strong-stability-preserving additive linear multistep methods

    KAUST Repository

    Hadjimichael, Yiannis

    2018-02-20

    The analysis of strong-stability-preserving (SSP) linear multistep methods is extended to semi-discretized problems for which different terms on the right-hand side satisfy different forward Euler (or circle) conditions. Optimal perturbed and additive monotonicity-preserving linear multistep methods are studied in the context of such problems. Optimal perturbed methods attain larger monotonicity-preserving step sizes when the different forward Euler conditions are taken into account. On the other hand, we show that optimal SSP additive methods achieve a monotonicity-preserving step-size restriction no better than that of the corresponding nonadditive SSP linear multistep methods.

  5. Direct observations of the viscosity of Earth's outer core and extrapolation of measurements of the viscosity of liquid iron

    International Nuclear Information System (INIS)

    Smylie, D E; Brazhkin, Vadim V; Palmer, Andrew

    2009-01-01

    Estimates vary widely as to the viscosity of Earth's outer fluid core. Directly observed viscosity is usually orders of magnitude higher than the values extrapolated from high-pressure high-temperature laboratory experiments, which are close to those for liquid iron at atmospheric pressure. It turns out that this discrepancy can be removed by extrapolating via the widely known Arrhenius activation model, modified by lifting the commonly used assumption of a pressure-independent activation volume; this is possible because at high pressures the activation volume is found to increase strongly with pressure, yielding viscosities of 10² Pa s at the top of the fluid core and 10¹¹ Pa s at its bottom. There are of course many uncertainties affecting this extrapolation process. This paper reviews two viscosity determination methods, one for the top and the other for the bottom of the outer core; the former relies on the decay of free core nutations and yields 2371 ± 1530 Pa s, while the latter relies on the reduction in the rotational splitting of the two equatorial translational modes of the solid inner core oscillations and yields an average of (1.247 ± 0.035) × 10¹¹ Pa s. Encouraged by the good performance of the Arrhenius extrapolation, a differential form of the Arrhenius activation model is used to interpolate along the melting temperature curve and to find the viscosity profile across the entire outer core. The viscosity variation is found to be nearly log-linear between the measured boundary values. (methodological notes)
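    The pressure-dependent activation model described above can be sketched as follows. All constants here are toy values chosen only to show the qualitative behavior (many orders of magnitude of viscosity variation once the activation volume grows with pressure); they are not the paper's fitted parameters:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def viscosity(T, P, eta0=1e-3, E_act=50e3, V0=1e-6, c=5e-17):
    """Arrhenius activation model with pressure-dependent activation volume:
    eta(T, P) = eta0 * exp((E_act + P * V_act(P)) / (R * T)),
    where V_act grows linearly with P (hypothetical toy form)."""
    V_act = V0 + c * P
    return eta0 * math.exp((E_act + P * V_act) / (R * T))

eta_deep = viscosity(T=4000.0, P=135e9)   # deep, high-pressure conditions
eta_ref = viscosity(T=2000.0, P=0.0)      # near-ambient reference
```

    With a constant activation volume the exponent grows only linearly in P; letting V_act itself increase with pressure is what produces the enormous top-to-bottom viscosity contrast the abstract describes.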

  6. Bayes linear statistics, theory & methods

    CERN Document Server

    Goldstein, Michael

    2007-01-01

    Bayesian methods combine information available from data with any prior information available from expert knowledge. The Bayes linear approach follows this path, offering a quantitative structure for expressing beliefs, and systematic methods for adjusting these beliefs, given observational data. The methodology differs from the full Bayesian methodology in that it establishes simpler approaches to belief specification and analysis based around expectation judgements. Bayes Linear Statistics presents an authoritative account of this approach, explaining the foundations, theory, methodology, and practicalities of this important field. The text provides a thorough coverage of Bayes linear analysis, from the development of the basic language to the collection of algebraic results needed for efficient implementation, with detailed practical examples. The book covers:The importance of partial prior specifications for complex problems where it is difficult to supply a meaningful full prior probability specification...

  7. Optimal control linear quadratic methods

    CERN Document Server

    Anderson, Brian D O

    2007-01-01

    This augmented edition of a respected text teaches the reader how to use linear quadratic Gaussian methods effectively for the design of control systems. It explores linear optimal control theory from an engineering viewpoint, with step-by-step explanations that show clearly how to make practical use of the material.The three-part treatment begins with the basic theory of the linear regulator/tracker for time-invariant and time-varying systems. The Hamilton-Jacobi equation is introduced using the Principle of Optimality, and the infinite-time problem is considered. The second part outlines the

  8. The extrapolation of creep rupture data by PD6605 - An independent case study

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65 Fisher Avenue, Rugby, Warks CV22 5HW (United Kingdom)

    2011-04-15

    The worked example presented in BSI document PD6605-1:1998, to illustrate the selection, validation and extrapolation of a creep rupture model using statistical analysis, was independently examined. Alternative rupture models were formulated and analysed by the same statistical methods, and were shown to represent the test data more accurately than the original model. Median rupture lives extrapolated from the original and alternative models were found to diverge widely under some conditions of practical interest. The tests prescribed in PD6605 and employed to validate the original model were applied to the better of the alternative models. But the tests were unable to discriminate between the two, demonstrating that these tests fail to ensure reliability in extrapolation. The difficulties of determining when a model is sufficiently reliable for use in extrapolation are discussed and some proposals are made.

  9. Source-receiver two-way wave extrapolation for prestack exploding-reflector modelling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-10-08

    Most modern seismic imaging methods separate input data into parts (shot gathers). We develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield forward or backward in time. This approach has the potential for generating accurate images free of artefacts associated with conventional approaches. We derive novel high-order partial differential equations in the source-receiver time domain. The fourth-order nature of the extrapolation in time leads to four solutions, two of which correspond to the incoming and outgoing P-waves and reduce to the zero-offset exploding-reflector solutions when the source coincides with the receiver. A challenge for implementing two-way time extrapolation is an essential singularity for horizontally travelling waves. This singularity can be avoided by limiting the range of wavenumbers treated in a spectral-based extrapolation. Using spectral methods based on the low-rank approximation of the propagation symbol, we extrapolate only the desired solutions in an accurate and efficient manner with reduced dispersion artefacts. Applications to synthetic data demonstrate the accuracy of the new prestack modelling and migration approach.

  10. Effective Orthorhombic Anisotropic Models for Wave field Extrapolation

    KAUST Repository

    Ibanez Jacome, Wilson

    2013-05-01

    Wavefield extrapolation in orthorhombic anisotropic media incorporates complicated but realistic models, to reproduce wave propagation phenomena in the Earth's subsurface. Compared with the representations used for simpler symmetries, such as transversely isotropic or isotropic, orthorhombic models require an extended and more elaborate formulation that also involves more expensive computational processes. The acoustic assumption yields a more efficient description of the orthorhombic wave equation and also provides a simplified representation of the orthorhombic dispersion relation. However, such a representation is hampered by the sixth-order nature of the acoustic wave equation, as it also encompasses the contribution of shear waves. To reduce the computational cost of wavefield extrapolation in such media, I generate effective isotropic inhomogeneous models that are capable of reproducing the first-arrival kinematic aspects of the orthorhombic wavefield. First, in order to compute traveltimes in vertical orthorhombic media, I develop a stable, efficient and accurate algorithm based on the fast marching method. The derived orthorhombic acoustic dispersion relation, unlike the isotropic or transversely isotropic one, is represented by a sixth-order polynomial equation that includes the fastest solution corresponding to outgoing P-waves in acoustic media. The effective velocity models are then computed by evaluating the traveltime gradients of the orthorhombic traveltime solution, which is done by explicitly solving the isotropic eikonal equation for the corresponding inhomogeneous isotropic velocity field. The inverted effective velocity fields are source dependent and produce equivalent first-arrival kinematic descriptions of wave propagation in orthorhombic media. I extrapolate wavefields in these isotropic effective velocity models using the more efficient isotropic operator, and the results compare well, especially kinematically, with those obtained from the ...

  11. An efficient wave extrapolation method for tilted orthorhombic media using effective ellipsoidal models

    KAUST Repository

    Waheed, Umair bin

    2014-08-01

    The wavefield extrapolation operator for ellipsoidally anisotropic (EA) media offers significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate focusing for media of orthorhombic anisotropy. Therefore, we develop effective EA models that correctly capture the kinematic behavior of the wavefield for tilted orthorhombic (TOR) media. Specifically, we compute effective source-dependent velocities for the EA model using a kinematic high-frequency representation of the TOR wavefield. The effective model allows us to use the cheaper EA wavefield extrapolation operator to obtain approximate wavefield solutions for a TOR model. Despite the fact that the effective EA models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TOR media, particularly for media of low to moderate complexity. We demonstrate applicability of the proposed approach on a layered TOR model.

  12. An efficient wave extrapolation method for tilted orthorhombic media using effective ellipsoidal models

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali

    2014-01-01

    The wavefield extrapolation operator for ellipsoidally anisotropic (EA) media offers significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate focusing for media of orthorhombic anisotropy. Therefore, we develop effective EA models that correctly capture the kinematic behavior of the wavefield for tilted orthorhombic (TOR) media. Specifically, we compute effective source-dependent velocities for the EA model using a kinematic high-frequency representation of the TOR wavefield. The effective model allows us to use the cheaper EA wavefield extrapolation operator to obtain approximate wavefield solutions for a TOR model. Despite the fact that the effective EA models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TOR media, particularly for media of low to moderate complexity. We demonstrate applicability of the proposed approach on a layered TOR model.

  13. An experimental extrapolation technique using the Gafchromic EBT3 film for relative output factor measurements in small x-ray fields

    Energy Technology Data Exchange (ETDEWEB)

    Morales, Johnny E., E-mail: johnny.morales@lh.org.au [Department of Radiation Oncology, Chris O’Brien Lifehouse, 119-143 Missenden Road, Camperdown, NSW 2050, Australia and School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Level 4 O Block, Garden’s Point, QLD 4001 (Australia); Butson, Martin; Hill, Robin [Department of Radiation Oncology, Chris O’Brien Lifehouse, 119-143 Missenden Road, Camperdown, NSW 2050, Australia and Institute of Medical Physics, University of Sydney, NSW 2006 (Australia); Crowe, Scott B. [School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Level 4 O Block, Garden’s Point, QLD 4001, Australia and Cancer Care Services, Royal Brisbane and Women’s Hospital, Butterfield Street, Herston, QLD 4029 (Australia); Trapp, J. V. [School of Chemistry, Physics, and Mechanical Engineering, Queensland University of Technology, Level 4 O Block, Garden’s Point, QLD 4001 (Australia)

    2016-08-15

    Purpose: An experimental extrapolation technique is presented, which can be used to determine the relative output factors for very small x-ray fields using Gafchromic EBT3 film. Methods: Relative output factors were measured for the Brainlab SRS cones ranging in diameter from 4 to 30 mm on a Novalis Trilogy linear accelerator with 6 MV SRS x-rays. The relative output factor was determined from an experimental reducing circular region of interest (ROI) extrapolation technique developed to remove the effects of volume averaging. This was achieved by scanning the EBT3 film measurements at a high resolution of 1200 dpi. From the high resolution scans, the size of the circular region of interest was varied to produce a plot of relative output factor versus area of analysis. The plot was then extrapolated to zero area to determine the relative output factor corresponding to zero volume. Results: For a 4 mm field size, the extrapolated relative output factor was measured as 0.651 ± 0.018, as compared to 0.639 ± 0.019 and 0.633 ± 0.021 for 0.5 and 1.0 mm diameters of analysis, respectively. This corresponds to changes in the relative output factor of 1.8% and 2.8% at these regions of interest. In comparison, the 25 mm cone showed negligible differences in the measured output factor between the zero-area extrapolation and the 0.5 and 1.0 mm diameter ROIs. Conclusions: This work shows that for very small fields such as the 4.0 mm cone, a measurable difference in the relative output factor can be seen depending on the size of the circular ROI used in radiochromic film dosimetry. The authors recommend scanning Gafchromic EBT3 film at a resolution of 1200 dpi for cone sizes less than 7.5 mm and utilizing an extrapolation technique for output factor measurements in very small field dosimetry.
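    The zero-area extrapolation step amounts to a linear fit of relative output factor against ROI area, evaluated at zero. The sketch below uses hypothetical ROI areas and volume-averaged output factors (not the measured data) to show the mechanics:

```python
# Reducing-ROI extrapolation: for a small, peaked field the mean dose over a
# circular ROI decreases with ROI area, so the volume-averaging-free relative
# output factor (ROF) is read off a linear fit extrapolated to zero area.

def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical analysis areas (mm^2) and corresponding mean ROFs
areas = [0.2, 0.4, 0.8, 1.6]
rofs = [0.648, 0.645, 0.639, 0.627]
slope, rof_zero_area = linear_fit(areas, rofs)   # intercept = ROF at zero area
```

    The intercept exceeds every finite-ROI value, mirroring the paper's finding that volume averaging depresses the measured output factor for the smallest cones.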

  14. An Extrapolation of a Radical Equation More Accurately Predicts Shelf Life of Frozen Biological Matrices.

    Science.gov (United States)

    De Vore, Karl W; Fatahi, Nadia M; Sass, John E

    2016-08-01

    Arrhenius modeling of analyte recovery at increased temperatures to predict long-term colder storage stability of biological raw materials, reagents, calibrators, and controls is standard practice in the diagnostics industry. Predicting subzero temperature stability using the same practice is frequently criticized but nevertheless heavily relied upon. We compared the ability to predict analyte recovery during frozen storage using 3 separate strategies: traditional accelerated studies with Arrhenius modeling, and extrapolation of recovery at 20% of shelf life using either ordinary least squares or the radical equation y = B1x^(0.5) + B0. Computer simulations were performed to establish equivalence of statistical power to discern the expected changes during frozen storage or accelerated stress. This was followed by actual predictive and follow-up confirmatory testing of 12 chemistry and immunoassay analytes. Linear extrapolations tended to be the most conservative in the predicted percent recovery, reducing customer and patient risk. However, the majority of analytes followed a rate of change that slowed over time, which was fit best by a radical equation of the form y = B1x^(0.5) + B0. Other evidence strongly suggested that the slowing of the rate was not due to higher-order kinetics, but to changes in the matrix during storage. Predicting shelf life of frozen products through extrapolation of early initial real-time storage analyte recovery should be considered the most accurate method. Although in this study the time required for a prediction was longer than a typical accelerated testing protocol, there are fewer potential sources of error, reduced costs, and a lower expenditure of resources. © 2016 American Association for Clinical Chemistry.
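    Fitting the radical equation y = B1x^(0.5) + B0 is ordinary least squares in the transformed variable u = sqrt(x). The sketch below uses synthetic recovery data constructed to follow the radical form exactly (the study's data are not reproduced here):

```python
import math

def fit_radical(xs, ys):
    """Least-squares fit of y = B1*sqrt(x) + B0 via u = sqrt(x); returns (B1, B0)."""
    us = [math.sqrt(x) for x in xs]
    n = len(us)
    mu = sum(us) / n
    my = sum(ys) / n
    b1 = sum((u - mu) * (y - my) for u, y in zip(us, ys)) / \
         sum((u - mu) ** 2 for u in us)
    return b1, my - b1 * mu

months = [1, 4, 9, 16]                  # early real-time storage points
recovery = [98.0, 96.0, 94.0, 92.0]     # synthetic: exactly 100 - 2*sqrt(t)
B1, B0 = fit_radical(months, recovery)
predicted_36mo = B1 * math.sqrt(36) + B0   # shelf-life extrapolation
```

    Because the square-root term flattens with time, the radical fit predicts a slower late-time decline than a straight line through the same early points, which is the behavior the authors report for most analytes.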

  15. On the Linear Stability of the Fifth-Order WENO Discretization

    KAUST Repository

    Motamed, Mohammad

    2010-10-03

    We study the linear stability of the fifth-order Weighted Essentially Non-Oscillatory spatial discretization (WENO5) combined with explicit time stepping applied to the one-dimensional advection equation. We show that it is not necessary for the stability domain of the time integrator to include a part of the imaginary axis. In particular, we show that the combination of WENO5 with either the forward Euler method or a two-stage, second-order Runge-Kutta method is linearly stable provided very small time step-sizes are taken. We also consider fifth-order multistep time discretizations whose stability domains do not include the imaginary axis. These are found to be linearly stable with moderate time steps when combined with WENO5. In particular, the fifth-order extrapolated BDF scheme gave superior results in practice to high-order Runge-Kutta methods whose stability domain includes the imaginary axis. Numerical tests are presented which confirm the analysis. © Springer Science+Business Media, LLC 2010.

  16. Runge-Kutta Methods for Linear Ordinary Differential Equations

    Science.gov (United States)

    Zingg, David W.; Chisholm, Todd T.

    1997-01-01

    Three new Runge-Kutta methods are presented for numerical integration of systems of linear inhomogeneous ordinary differential equations (ODEs) with constant coefficients. Such ODEs arise in the numerical solution of the partial differential equations governing linear wave phenomena. The restriction to linear ODEs with constant coefficients reduces the number of conditions which the coefficients of the Runge-Kutta method must satisfy. This freedom is used to develop methods which are more efficient than conventional Runge-Kutta methods. A fourth-order method is presented which uses only two memory locations per dependent variable, while the classical fourth-order Runge-Kutta method uses three. This method is an excellent choice for simulations of linear wave phenomena if memory is a primary concern. In addition, fifth- and sixth-order methods are presented which require five and six stages, respectively, one fewer than their conventional counterparts, and are therefore more efficient. These methods are an excellent option for use with high-order spatial discretizations.
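    The key idea such methods exploit can be illustrated with a simple sketch (our own, not the paper's exact scheme): for u' = Au with constant A, every stage is just a multiplication by A, so a fourth-order step reduces to the degree-4 Taylor polynomial in hA and can be accumulated with only two vectors per unknown:

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def taylor4_step(A, u, h):
    """One 4th-order step for u' = A u using two registers per unknown:
    u_{n+1} = sum_{k=0..4} (hA)^k / k! * u_n."""
    acc = list(u)     # register 1: accumulated solution
    term = list(u)    # register 2: current Taylor term (hA)^k / k! * u_n
    for k in range(1, 5):
        term = [h / k * x for x in matvec(A, term)]
        acc = [a + t for a, t in zip(acc, term)]
    return acc

# Rotation generator: exact solution of u' = A u from (1, 0) is (cos t, -sin t)
A = [[0.0, 1.0], [-1.0, 0.0]]
u = [1.0, 0.0]
h, steps = 0.01, 100
for _ in range(steps):
    u = taylor4_step(A, u, h)
```

    For nonlinear problems this collapse to a matrix polynomial is unavailable, which is why the low-storage savings described in the abstract hinge on linearity and constant coefficients.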

  17. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    KAUST Repository

    Song, Xiaolei; Alkhalifah, Tariq Ali

    2012-01-01

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the low-rank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide useful application for parameter estimation in orthorhombic media.

  18. A linearized dispersion relation for orthorhombic pseudo-acoustic modeling

    KAUST Repository

    Song, Xiaolei

    2012-11-04

    Wavefield extrapolation in acoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We introduce a linearized form of the dispersion relation for acoustic orthorhombic media to model acoustic wavefields. We apply the low-rank approximation approach to handle the corresponding space-wavenumber mixed-domain operator. Numerical experiments show that the proposed wavefield extrapolator is accurate and practically free of dispersion. Further, there is no coupling of qSv and qP waves, because we use the analytical dispersion relation. No constraints on Thomsen's parameters are required for stability. The linearized expression may provide useful application for parameter estimation in orthorhombic media.

  19. Line-of-sight extrapolation noise in dust polarization

    Energy Technology Data Exchange (ETDEWEB)

    Poh, Jason; Dodelson, Scott

    2017-05-19

    The B-modes of polarization at frequencies ranging from 50-1000 GHz are produced by Galactic dust, lensing of primordial E-modes in the cosmic microwave background (CMB) by intervening large scale structure, and possibly by primordial B-modes in the CMB imprinted by gravitational waves produced during inflation. The conventional method used to separate the dust component of the signal is to assume that the signal at high frequencies (e.g., 350 GHz) is due solely to dust and then extrapolate the signal down to lower frequency (e.g., 150 GHz) using the measured scaling of the polarized dust signal amplitude with frequency. For typical Galactic thermal dust temperatures of about 20 K, these frequencies are not fully in the Rayleigh-Jeans limit. Therefore, deviations in the dust cloud temperatures from cloud to cloud will lead to different scaling factors for clouds of different temperatures. Hence, when multiple clouds of different temperatures and polarization angles contribute to the integrated line-of-sight polarization signal, the relative contribution of individual clouds to the integrated signal can change between frequencies. This can cause the integrated signal to be decorrelated in both amplitude and direction when extrapolating in frequency. Here we carry out a Monte Carlo analysis on the impact of this line-of-sight extrapolation noise, enabling us to quantify its effect. Using results from the Planck experiment, we find that this effect is small, more than an order of magnitude smaller than the current uncertainties. However, line-of-sight extrapolation noise may be a significant source of uncertainty in future low-noise primordial B-mode experiments. Scaling from Planck results, we find that accounting for this uncertainty becomes potentially important when experiments are sensitive to primordial B-mode signals with amplitude r < 0.0015.
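    The decorrelation mechanism described above can be demonstrated with a toy two-cloud model (our own construction, with illustrative temperatures, angles, and a modified-blackbody emission law): because clouds at different temperatures scale differently between 353 and 150 GHz, the polarization angle of the summed signal rotates with frequency.

```python
import math

H, K = 6.626e-34, 1.381e-23  # Planck and Boltzmann constants (SI)

def dust_amp(nu_ghz, T, beta=1.6):
    """Modified blackbody: nu^beta * B_nu(T), arbitrary normalization."""
    nu = nu_ghz * 1e9
    return nu ** beta * nu ** 3 / (math.exp(H * nu / (K * T)) - 1.0)

def pol_angle(nu_ghz, clouds):
    """Polarization angle of the summed Q, U of clouds (T, psi, weight)."""
    Q = sum(w * dust_amp(nu_ghz, T) * math.cos(2 * psi) for T, psi, w in clouds)
    U = sum(w * dust_amp(nu_ghz, T) * math.sin(2 * psi) for T, psi, w in clouds)
    return 0.5 * math.atan2(U, Q)

# Two hypothetical clouds: 15 K at angle 0, 25 K at 45 degrees, equal weight
clouds = [(15.0, 0.0, 1.0), (25.0, math.pi / 4, 1.0)]
rotation_deg = math.degrees(pol_angle(150.0, clouds) - pol_angle(353.0, clouds))
```

    With these toy values the integrated angle rotates by roughly a degree between the two frequencies, so a single scaling factor fit at 353 GHz misrepresents the 150 GHz dust signal in direction as well as amplitude.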

  20. Variational linear algebraic equations method

    International Nuclear Information System (INIS)

    Moiseiwitsch, B.L.

    1982-01-01

    A modification of the linear algebraic equations method is described which ensures a variational bound on the phaseshifts for potentials having a definite sign at all points. The method is illustrated by the elastic scattering of s-wave electrons by the static field of atomic hydrogen. (author)

  1. Measurement of the surface field on open magnetic samples by the extrapolation method

    Czech Academy of Sciences Publication Activity Database

    Perevertov, Oleksiy

    2005-01-01

    Roč. 76, - (2005), 104701/1-104701/7 ISSN 0034-6748 R&D Projects: GA ČR(CZ) GP202/04/P010; GA AV ČR(CZ) 1QS100100508 Institutional research plan: CEZ:AV0Z10100520 Keywords : magnetic field measurement * extrapolation * air gaps * magnetic permeability Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.235, year: 2005

  2. A simple extrapolation of thermodynamic perturbation theory to infinite order

    International Nuclear Information System (INIS)

    Ghobadi, Ahmadreza F.; Elliott, J. Richard

    2015-01-01

    Recent analyses of the third and fourth order perturbation contributions to the equations of state for square well spheres and Lennard-Jones chains show trends that persist across orders and molecular models. In particular, the ratio between orders (e.g., A3/A2, where Ai is the ith-order perturbation contribution) exhibits a peak when plotted with respect to density. The trend resembles a Gaussian curve with the peak near the critical density. This observation can form the basis for a simple recursion and extrapolation from the highest available order to infinite order. The resulting extrapolation is analytic and therefore cannot fully characterize the critical region, but it remarkably improves accuracy, especially for the binodal curve. Whereas a second order theory is typically accurate for the binodal at temperatures within 90% of the critical temperature, the extrapolated result is accurate to within 99% of the critical temperature. In addition to square well spheres and Lennard-Jones chains, we demonstrate how the method can be applied semi-empirically to the Perturbed-Chain Statistical Associating Fluid Theory (PC-SAFT)
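    A drastically simplified version of the extrapolation idea (our schematic, ignoring the Gaussian density dependence of the ratio that the paper models): if the ratio r = A(i+1)/Ai between successive orders is roughly constant at a given state point, the uncomputed higher orders sum approximately as a geometric tail.

```python
def extrapolate_to_infinite_order(terms):
    """Sum computed perturbation terms plus a geometric estimate of the tail,
    with the ratio r estimated from the two highest available orders."""
    r = terms[-1] / terms[-2]
    assert abs(r) < 1.0            # the geometric tail converges only for |r| < 1
    tail = terms[-1] * r / (1.0 - r)   # A_n * (r + r^2 + r^3 + ...)
    return sum(terms) + tail

# Sanity check on an exactly geometric series: 1 + 1/2 + 1/4 + ... = 2
approx = extrapolate_to_infinite_order([1.0, 0.5, 0.25, 0.125])
```

    The paper's recursion replaces the constant ratio with a density-dependent (Gaussian-like) one, but the payoff is the same: an analytic estimate of the infinite-order sum from a handful of computed terms.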

  3. Efficient anisotropic wavefield extrapolation using effective isotropic models

    KAUST Repository

    Alkhalifah, Tariq Ali; Ma, X.; Waheed, Umair bin; Zuberi, Mohammad

    2013-01-01

    Isotropic wavefield extrapolation is more efficient than anisotropic extrapolation, and this is especially true when the anisotropy of the medium is tilted (from the vertical). We use the kinematics of the wavefield, appropriately represented

  4. Wavefield extrapolation in pseudo-depth domain

    KAUST Repository

    Ma, Xuxin; Alkhalifah, Tariq Ali

    2012-01-01

    Extrapolating seismic waves in Cartesian coordinates is prone to uneven spatial sampling, because the seismic wavelength tends to grow with depth as velocity increases. We transform the vertical depth axis to a pseudo one using a velocity-weighted mapping, which can effectively mitigate this wavelength variation. We derive acoustic wave equations in this new domain based on the direct transformation of the Laplacian derivatives, which admits solutions that are more accurate and stable than those derived from the kinematic transformation. The anisotropic versions of these equations allow us to isolate the vertical velocity influence and reduce its impact on modeling and imaging. The major benefit of extrapolating wavefields in pseudo-depth space is its near uniform wavelength, as opposed to the normally dramatic change of wavelength with the conventional approach. Time wavefield extrapolation on a complex velocity model shows some of the features of this approach.
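    The wavelength-equalizing effect of a velocity-weighted vertical mapping can be sketched as follows (our own toy construction; the paper's actual transformation and wave equations are more involved). With d(sigma) = (v0 / v(z)) dz, a local wavelength v(z)/f in depth maps to v0/f in pseudo-depth, independent of z:

```python
def pseudo_depth_axis(vz, dz, v0):
    """Cumulative pseudo-depth sigma for a sampled velocity profile v(z),
    using the velocity-weighted mapping d(sigma) = v0 / v(z) * dz."""
    sigma = [0.0]
    for v in vz:
        sigma.append(sigma[-1] + v0 / v * dz)
    return sigma

dz, v0, f = 10.0, 1500.0, 25.0                    # grid step (m), ref vel, Hz
vz = [1500.0 + 0.8 * i * dz for i in range(200)]  # velocity grows with depth
sigma = pseudo_depth_axis(vz, dz, v0)

# Local wavelength measured in pseudo-depth: (v/f) stretched by (v0/v) = v0/f
lam_top = (vz[0] / f) * (v0 / vz[0])
lam_bot = (vz[-1] / f) * (v0 / vz[-1])
```

    The wavelength in pseudo-depth is the same at the top and bottom of the profile even though the depth-domain wavelength roughly doubles, which is why a uniform grid in sigma samples the wavefield evenly.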

  5. Cosmogony as an extrapolation of magnetospheric research

    International Nuclear Information System (INIS)

    Alfven, H.

    1984-03-01

    A theory of the origin and evolution of the Solar System (Alfven and Arrhenius, 1975; 1976) which considered electromagnetic forces and plasma effects is revised in the light of new information supplied by space research. In situ measurements in the magnetospheres and solar wind have changed our views of basic properties of cosmic plasmas. These results can be extrapolated both outwards in space, to interstellar clouds, and backwards in time, to the formation of the solar system. The first extrapolation leads to a revision of some cloud properties which are essential for the early phases in the formation of stars and solar nebulae. The latter extrapolation makes it possible to approach the cosmogonic processes by extrapolation of (rather) well-known magnetospheric phenomena. Pioneer-Voyager observations of the Saturnian rings indicate that essential parts of their structure are fossils from cosmogonic times. By using detailed information from these space missions, it seems possible to reconstruct certain events 4-5 billion years ago with an accuracy of a few percent. This will cause a change in our views of the evolution of the solar system. (author)

  6. Computation of Optimal Monotonicity Preserving General Linear Methods

    KAUST Repository

    Ketcheson, David I.

    2009-07-01

    Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.

  7. Two linearization methods for atmospheric remote sensing

    International Nuclear Information System (INIS)

    Doicu, A.; Trautmann, T.

    2009-01-01

    We present two linearization methods for a pseudo-spherical atmosphere and general viewing geometries. The first approach is based on an analytical linearization of the discrete ordinate method with matrix exponential and incorporates two models for matrix exponential calculation: the matrix eigenvalue method and the Padé approximation. The second method, referred to as the forward-adjoint approach, is based on the adjoint radiative transfer for a pseudo-spherical atmosphere. We provide a compact description of the proposed methods as well as a numerical analysis of their accuracy and efficiency.

  8. Two-dimensional differential transform method for solving linear and non-linear Schroedinger equations

    International Nuclear Information System (INIS)

    Ravi Kanth, A.S.V.; Aruna, K.

    2009-01-01

    In this paper, we propose a reliable algorithm to develop exact and approximate solutions for the linear and nonlinear Schroedinger equations. The approach rests mainly on the two-dimensional differential transform method, which is one of the approximate methods. The method can easily be applied to many linear and nonlinear problems and is capable of reducing the size of computational work. Exact solutions can also be achieved by the known forms of the series solutions. Several illustrative examples are given to demonstrate the effectiveness of the present method.
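
    The differential transform idea is easiest to see in one dimension; the paper's method is the two-dimensional analogue. In this hedged sketch, the linear test problem y' = y, y(0) = 1 is solved via the transformed recurrence (k+1)Y(k+1) = Y(k), and the truncated series reproduces e^x.

```python
import math

# Differential transform of y' = y, y(0) = 1:
# the transform Y(k) = y^(k)(0)/k! obeys (k+1) Y(k+1) = Y(k).
N = 15
Y = [0.0] * (N + 1)
Y[0] = 1.0                      # initial condition y(0) = 1
for k in range(N):
    Y[k + 1] = Y[k] / (k + 1)   # recurrence from the transformed ODE

# Inverse transform: y(x) ~ sum_k Y(k) x^k (truncated series)
def y(x, Y=Y):
    return sum(c * x**k for k, c in enumerate(Y))

print(abs(y(1.0) - math.e))  # truncation error of the 15-term series
```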

  9. Problems in the extrapolation of laboratory rheological data

    Science.gov (United States)

    Paterson, M. S.

    1987-02-01

    The many types of variables and deformation regimes that need to be taken into account in extrapolating rheological behaviour from the laboratory to the earth are reviewed. The problems of extrapolation are then illustrated with two particular cases. In the case of olivine-rich rocks, recent experimental work indicates that, within present uncertainties of extrapolation, the flow in the upper mantle could be either grain-size dependent and near-Newtonian or grain-size independent and distinctly non-Newtonian. Both types of behaviour would be influenced by the presence of trace amounts of water. In the case of quartz-rich rocks, the uncertainties are even greater, and it is still premature to attempt any extrapolation to geological conditions except as an upper bound; the fugacity and the scale of dispersion of the water are probably two important variables, but the quantitative laws governing their influence are not yet clear.

  10. Extrapolated HPGe efficiency estimates based on a single calibration measurement

    International Nuclear Information System (INIS)

    Winn, W.G.

    1994-01-01

    Gamma spectroscopists often must analyze samples with geometries for which their detectors are not calibrated. The effort to experimentally recalibrate a detector for a new geometry can be quite time consuming, causing delay in reporting useful results. Such concerns have motivated development of a method for extrapolating HPGe efficiency estimates from an existing single measured efficiency. Overall, the method provides useful preliminary results for analyses that do not require exceptional accuracy, while reliably bracketing the credible range. The estimated efficiency ε for a uniform sample in a geometry with volume V is extrapolated from the measured ε₀ of the base sample of volume V₀. Assuming all samples are centered atop the detector for maximum efficiency, ε decreases monotonically as V increases about V₀, and vice versa. Extrapolation of high and low efficiency estimates ε_h and ε_L provides an average estimate ε = ½[ε_h + ε_L] ± ½[ε_h − ε_L] (general), where the uncertainty Δε = ½[ε_h − ε_L] brackets the limits of the maximum possible error. Both ε_h and ε_L diverge from ε₀ as V deviates from V₀, causing Δε to increase accordingly. The above concepts guided development of both conservative and refined estimates for ε.
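
    The high/low bracketing average used in this method reduces to a two-line helper; the efficiency values below are hypothetical, chosen only to show the arithmetic.

```python
def bracketed_efficiency(eps_h, eps_l):
    """Average of high/low extrapolated efficiencies and the
    half-width that bounds the maximum possible error."""
    eps = 0.5 * (eps_h + eps_l)
    delta = 0.5 * (eps_h - eps_l)
    return eps, delta

# Hypothetical high/low estimates for a sample larger than the base geometry
eps, delta = bracketed_efficiency(0.052, 0.044)
print(eps, delta)
```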

  11. Calibration of the 90Sr+90Y ophthalmic and dermatological applicators with an extrapolation ionization minichamber

    International Nuclear Information System (INIS)

    Antonio, Patrícia L.; Oliveira, Mércia L.; Caldas, Linda V.E.

    2014-01-01

    90Sr+90Y clinical applicators are used for brachytherapy in Brazilian clinics even though they are not manufactured anymore. Such sources must be calibrated periodically, and one of the calibration methods in use is ionometry with extrapolation ionization chambers. 90Sr+90Y clinical applicators were calibrated using an extrapolation minichamber developed at the Calibration Laboratory at IPEN. The obtained results agree satisfactorily with the data provided in the calibration certificates of the sources. - Highlights: • 90Sr+90Y clinical applicators were calibrated using a mini-extrapolation chamber. • An extrapolation curve was obtained for each applicator during its calibration. • The results were compared with those provided by the calibration certificates. • All results for the dermatological applicators presented differences lower than 5%

  12. 40 CFR 86.435-78 - Extrapolated emission values.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 18 2010-07-01 2010-07-01 false Extrapolated emission values. 86.435-78 Section 86.435-78 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Regulations for 1978 and Later New Motorcycles, General Provisions § 86.435-78 Extrapolated emission values...

  13. Community assessment techniques and the implications for rarefaction and extrapolation with Hill numbers.

    Science.gov (United States)

    Cox, Kieran D; Black, Morgan J; Filip, Natalia; Miller, Matthew R; Mohns, Kayla; Mortimor, James; Freitas, Thaise R; Greiter Loerzer, Raquel; Gerwing, Travis G; Juanes, Francis; Dudas, Sarah E

    2017-12-01

    Diversity estimates play a key role in ecological assessments. Species richness and abundance are commonly used to generate complex diversity indices that are dependent on the quality of these estimates. As such, there is a long-standing interest in the development of monitoring techniques, their ability to adequately assess species diversity, and the implications for generated indices. To determine the ability of substratum community assessment methods to capture species diversity, we evaluated four methods: photo quadrat, point intercept, random subsampling, and full quadrat assessments. Species density, abundance, richness, Shannon diversity, and Simpson diversity were then calculated for each method. We then conducted a method validation at a subset of locations to serve as an indication for how well each method captured the totality of the diversity present. Density, richness, Shannon diversity, and Simpson diversity estimates varied between methods, despite assessments occurring at the same locations, with photo quadrats detecting the lowest estimates and full quadrat assessments the highest. Abundance estimates were consistent among methods. Sample-based rarefaction and extrapolation curves indicated that differences between Hill numbers (richness, Shannon diversity, and Simpson diversity) were significant in the majority of cases, and coverage-based rarefaction and extrapolation curves confirmed that these dissimilarities were due to differences between the methods, not the sample completeness. Method validation highlighted the inability of the tested methods to capture the totality of the diversity present, while further supporting the notion of extrapolating abundances. Our results highlight the need for consistency across research methods, the advantages of utilizing multiple diversity indices, and potential concerns and considerations when comparing data from multiple sources.
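
    The Hill numbers underlying these comparisons (richness for q = 0, exponential Shannon for q = 1, inverse Simpson for q = 2) can be computed from raw abundance counts as sketched below; the community vector is hypothetical.

```python
import numpy as np

def hill_number(counts, q):
    """Hill number (effective number of species) of order q
    from a vector of species abundances."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()           # relative abundances, zeros dropped
    if q == 1:                       # limit case: exponential of Shannon entropy
        return float(np.exp(-np.sum(p * np.log(p))))
    return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

community = [40, 30, 20, 10]            # hypothetical quadrat counts
richness = hill_number(community, 0)    # species richness
shannon = hill_number(community, 1)     # exp(Shannon diversity)
simpson = hill_number(community, 2)     # inverse Simpson concentration
print(richness, shannon, simpson)
```

    For a perfectly even community every order gives the same effective number of species; increasing q down-weights rare species, which is why the three orders respond differently to the sampling methods compared in the paper.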

  14. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
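
    A minimal sketch of one of the methods discussed, bilinear interpolation on a 2-D array, might look as follows (the coordinate convention is an assumption of this sketch, not taken from the paper):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of a 2-D array at fractional coordinates (x, y),
    where x indexes columns and y indexes rows."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)   # clamp at the image border
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # → 1.5
```

    Because the weights separate into an x-factor and a y-factor, this is an instance of the separable interpolation the paper focuses on.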

  15. Seismic analysis of equipment system with non-linearities such as gap and friction using equivalent linearization method

    International Nuclear Information System (INIS)

    Murakami, H.; Hirai, T.; Nakata, M.; Kobori, T.; Mizukoshi, K.; Takenaka, Y.; Miyagawa, N.

    1989-01-01

    Many of the equipment systems of nuclear power plants contain a number of non-linearities, such as gap and friction, due to their mechanical functions. It is desirable to take such non-linearities into account appropriately in the evaluation of aseismic soundness. However, in usual design work, a linear analysis method with rough assumptions is applied from an engineering point of view. An equivalent linearization method is considered to be one of the effective analytical techniques for evaluating non-linear responses, provided that errors to a certain extent are tolerated, because it offers greater simplicity in analysis and economy in computing time than non-linear analysis. The objective of this paper is to investigate the applicability of the equivalent linearization method to evaluate the maximum earthquake response of equipment systems such as the CANDU Fuelling Machine, which has multiple non-linearities.

  16. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  17. A special mini-extrapolation chamber for calibration of 90Sr+90Y sources

    International Nuclear Information System (INIS)

    Oliveira, Mercia L; Caldas, Linda V E

    2005-01-01

    90Sr+90Y applicators are commonly utilized in brachytherapy, including ophthalmic procedures. The recommended instruments for the calibration of these applicators are extrapolation chambers, which are ionization chambers that allow the variation of their sensitive volume. Using the extrapolation method, the absorbed dose rate at the applicator surface can be determined. The aim of the present work was to develop a mini-extrapolation chamber for the calibration of 90Sr+90Y beta ray applicators. The developed mini-chamber has a 3.0 cm outer diameter and is 11.3 cm in length. An aluminized polyester foil is used as the entrance window while the collecting electrode is made of graphited polymethylmethacrylate. This mini-chamber was tested in 90Sr+90Y radiation beams from a beta particle check source and with a plane ophthalmic applicator, showing adequate results
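
    Numerically, the extrapolation step amounts to measuring the ionization current at several electrode spacings, fitting a straight line, and examining the limit of vanishing sensitive volume. A hedged sketch with hypothetical readings (the real procedure involves correction factors not shown here):

```python
import numpy as np

# Hypothetical ionization currents (pA) at electrode spacings (mm)
spacing = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
current = np.array([0.52, 1.01, 1.49, 2.02, 2.51])

# Linear fit I(d) = slope*d + intercept; the slope dI/dd as d -> 0 carries
# the dose-rate information, and the intercept should be close to zero
# for a well-behaved chamber.
slope, intercept = np.polyfit(spacing, current, 1)
print(slope, intercept)
```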

  18. Non-linearities in Holocene floodplain sediment storage

    Science.gov (United States)

    Notebaert, Bastiaan; Nils, Broothaerts; Jean-François, Berger; Gert, Verstraeten

    2013-04-01

    Floodplain sediment storage is an important part of the sediment cascade model, buffering sediment delivery between hillslopes and oceans, but it is hitherto not fully quantified, in contrast to other global sediment budget components. Quantification and dating of floodplain sediment storage is data- and financially demanding, limiting contemporary estimates for larger spatial units to simple linear extrapolations from a number of smaller catchments. In this paper we present non-linearities in both space and time for floodplain sediment budgets in three different catchments. Holocene floodplain sediments of the Dijle catchment in the Belgian loess region show a clear distinction between morphological stages: early Holocene peat accumulation, followed by mineral floodplain aggradation from the start of the agricultural period on. Contrary to previous assumptions, detailed dating of this morphological change at different locations shows an important non-linearity in geomorphologic changes of the floodplain, both between and within cross sections. A second example comes from the pre-Alpine French Valdaine region, where non-linearities and complex system behaviour exist between (temporal) patterns of soil erosion and floodplain sediment deposition. In this region Holocene floodplain deposition is characterized by different cut-and-fill phases. The quantification of these different phases shows a complicated picture of increasing and decreasing floodplain sediment storage, which undermines the simple image of sediment accumulation increasing over time. Although fill stages may correspond with large quantities of deposited sediment, and traditionally calculated sedimentation rates for such stages are high, they do not necessarily correspond with a long-term net increase in floodplain deposition. A third example is based on the floodplain sediment storage in the Amblève catchment, located in the Belgian Ardennes uplands. Detailed floodplain sediment quantification for this catchment shows

  19. Comparison of various state equations for approximation and extrapolation of experimental hydrogen molar volumes in wide temperature and pressure intervals

    International Nuclear Information System (INIS)

    Didyk, A.Yu.; Altynov, V.A.; Wisniewski, R.

    2009-01-01

    The numerical analysis of practically all existing formulae, such as expansion series, Tait, logarithm, Van der Waals and virial equations, for interpolation of experimental molar volumes versus high pressure was carried out. One can conclude that extrapolating dependences of molar volumes versus pressure and temperature can be valid. It was shown that, in contrast to the other equations, the virial equations can also be used for fitting experimental data at relatively low pressures (P < 3 kbar). Directly solving the resulting cubic equation in the volume, using extrapolated virial coefficients, allows us to obtain good agreement between existing high-pressure experimental data and calculated values
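
    Solving the truncated virial equation of state for the molar volume indeed reduces to a cubic: PV = RT(1 + B/V + C/V²) rearranges to PV³ − RTV² − RTBV − RTC = 0. The sketch below, with hypothetical virial coefficients B and C, finds the physical (positive real) root numerically:

```python
import numpy as np

R = 8.314e-2   # gas constant, L·bar/(mol·K)
T = 300.0      # temperature, K
B = -0.05      # hypothetical second virial coefficient, L/mol
C = 0.002      # hypothetical third virial coefficient, L^2/mol^2
P = 100.0      # pressure, bar

# Cubic from the truncated virial EOS: P V^3 - RT V^2 - RT B V - RT C = 0
coeffs = [P, -R * T, -R * T * B, -R * T * C]
roots = np.roots(coeffs)

# Keep the physical root: real and positive
V = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
print(V)  # molar volume in L/mol
```

    With a negative B the computed volume falls below the ideal-gas value RT/P, as expected for net attractive interactions.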

  20. Extrapolating Satellite Winds to Turbine Operating Heights

    DEFF Research Database (Denmark)

    Badger, Merete; Pena Diaz, Alfredo; Hahmann, Andrea N.

    2016-01-01

    Ocean wind retrievals from satellite sensors are typically performed for the standard level of 10 m. This restricts their full exploitation for wind energy planning, which requires wind information at much higher levels where wind turbines operate. A new method is presented for the vertical extrapolation of satellite-based wind maps. Winds near the sea surface are obtained from satellite data and used together with an adaptation of the Monin–Obukhov similarity theory to estimate the wind speed at higher levels. The thermal stratification of the atmosphere is taken into account through a long...
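
    Under neutral stratification the extrapolation reduces to the logarithmic wind profile; the Monin–Obukhov stability correction ψ(z/L), which is central to the paper's method, is omitted in this simplified sketch, and the 10 m wind speed and roughness length are hypothetical:

```python
import numpy as np

def extrapolate_wind(u1, z1, z2, z0):
    """Extrapolate wind speed from height z1 to z2 with the neutral
    logarithmic profile u(z) = (u*/kappa) ln(z/z0).  The full method
    subtracts a Monin-Obukhov stability correction psi(z/L) inside each
    logarithm; it is omitted here (neutral stratification assumed)."""
    return u1 * np.log(z2 / z0) / np.log(z1 / z0)

# Hypothetical 10 m satellite wind extrapolated to a 100 m hub height
# over open sea (roughness length ~0.2 mm)
u100 = extrapolate_wind(8.0, 10.0, 100.0, z0=0.0002)
print(round(u100, 2))
```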

  1. The ATLAS Track Extrapolation Package

    CERN Document Server

    Salzburger, A

    2007-01-01

    The extrapolation of track parameters and their associated covariances to destination surfaces of different types is a very frequent process in the event reconstruction of high energy physics experiments. This is, among other reasons, due to the fact that most track and vertex fitting techniques are based on the first and second moments of the underlying probability density distribution. The correct stochastic or deterministic treatment of interactions with the traversed detector material is hereby crucial for high quality track reconstruction throughout the entire momentum range of final state particles that are produced in high energy physics collision experiments. This document presents the main concepts, the algorithms and the implementation of the newly developed, powerful ATLAS track extrapolation engine. It also emphasises validation procedures, timing measurements and the integration into the ATLAS offline reconstruction software.

  2. Second-order kinetic model for the sorption of cadmium onto tree fern: a comparison of linear and non-linear methods.

    Science.gov (United States)

    Ho, Yuh-Shan

    2006-01-01

    A comparison was made of the linear least-squares method and a trial-and-error non-linear method of the widely used pseudo-second-order kinetic model for the sorption of cadmium onto ground-up tree fern. Four pseudo-second-order kinetic linear equations are discussed. Kinetic parameters obtained from the four kinetic linear equations using the linear method differed but they were the same when using the non-linear method. A type 1 pseudo-second-order linear kinetic model has the highest coefficient of determination. Results show that the non-linear method may be a better way to obtain the desired parameters.
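
    A sketch of the type 1 linearization discussed above: with the pseudo-second-order model q_t = k·q_e²·t / (1 + k·q_e·t), plotting t/q_t against t is linear with slope 1/q_e and intercept 1/(k·q_e²). On noise-free synthetic data (parameters hypothetical) the fit recovers the parameters exactly; the paper's point is that on real data the four linearizations and the non-linear fit disagree.

```python
import numpy as np

qe_true, k_true = 5.0, 0.02     # hypothetical sorption parameters

t = np.array([5.0, 10, 20, 40, 60, 120, 180, 240])          # contact times
qt = qe_true**2 * k_true * t / (1 + qe_true * k_true * t)    # PSO model

# Type 1 linearization: t/qt = 1/(k*qe^2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k_fit = 1.0 / (intercept * qe_fit**2)
print(qe_fit, k_fit)
```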

  3. SU-D-204-02: BED Consistent Extrapolation of Mean Dose Tolerances

    Energy Technology Data Exchange (ETDEWEB)

    Perko, Z; Bortfeld, T; Hong, T; Wolfgang, J; Unkelbach, J [Massachusetts General Hospital, Boston, MA (United States)

    2016-06-15

    Purpose: The safe use of radiotherapy requires the knowledge of tolerable organ doses. For experimental fractionation schemes (e.g. hypofractionation) these are typically extrapolated from traditional fractionation schedules using the Biologically Effective Dose (BED) model. This work demonstrates that using the mean dose in the standard BED equation may overestimate tolerances, potentially leading to unsafe treatments. Instead, extrapolation of mean dose tolerances should take the spatial dose distribution into account. Methods: A formula has been derived to extrapolate mean physical dose constraints such that they are mean BED equivalent. This formula constitutes a modified BED equation where the influence of the spatial dose distribution is summarized in a single parameter, the dose shape factor. To quantify effects we analyzed 14 liver cancer patients previously treated with proton therapy in 5 or 15 fractions, for whom also photon IMRT plans were available. Results: Our work has two main implications. First, in typical clinical plans the dose distribution can have significant effects. When mean dose tolerances are extrapolated from standard fractionation towards hypofractionation they can be overestimated by 10–15%. Second, the shape difference between photon and proton dose distributions can cause 30–40% differences in mean physical dose for plans having the same mean BED. The combined effect when extrapolating proton doses to mean BED equivalent photon doses in traditional 35 fraction regimens resulted in up to 7–8 Gy higher doses than when applying the standard BED formula. This can potentially lead to unsafe treatments (in 1 of the 14 analyzed plans the liver mean dose was above its 32 Gy tolerance). Conclusion: The shape effect should be accounted for to avoid unsafe overestimation of mean dose tolerances, particularly when estimating constraints for hypofractionated regimens. In addition, tolerances established for a given treatment modality cannot
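
    The standard BED bookkeeping referred to above (without the proposed dose-shape factor) can be sketched as follows; the α/β value is a hypothetical illustration, while the 32 Gy / 35-fraction liver tolerance echoes the abstract. Converting a tolerance to a new fractionation means solving the quadratic n·d·(1 + d/(α/β)) = BED for the dose per fraction d.

```python
import math

def bed(total_dose, n_fractions, alpha_beta):
    """Biologically effective dose of a uniformly fractionated schedule."""
    d = total_dose / n_fractions
    return total_dose * (1 + d / alpha_beta)

def isoeffective_total_dose(bed_target, n_fractions, alpha_beta):
    """Total dose in n_fractions giving the target BED: positive root of
    (n/ab)*d^2 + n*d - BED = 0 in the dose per fraction d."""
    ab, n = alpha_beta, n_fractions
    d = (-n + math.sqrt(n * n + 4 * n * bed_target / ab)) / (2 * n / ab)
    return n * d

# Liver mean-dose tolerance from the abstract: 32 Gy in 35 fractions;
# alpha/beta = 4 Gy is a hypothetical illustrative value.
target = bed(32.0, 35, 4.0)
d5 = isoeffective_total_dose(target, 5, 4.0)   # equivalent 5-fraction schedule
print(target, d5)
```

    As expected, the iso-BED total dose for a 5-fraction schedule is well below 32 Gy; the abstract's warning is that applying this formula to the mean dose, ignoring the spatial dose distribution, can still overestimate the tolerance.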

  4. Sparsity Prevention Pivoting Method for Linear Programming

    DEFF Research Database (Denmark)

    Li, Peiqiang; Li, Qiyuan; Li, Canbing

    2018-01-01

    When the simplex algorithm is used to solve a linear programming problem whose matrix is sparse, many zero-length calculation steps can occur, and iterative cycling may even appear. To deal with this problem, a new pivoting method is proposed in this paper. The principle of this method is to avoid choosing a row whose element in the b vector is zero as the row of the pivot element, which keeps the matrix in the linear programming problem dense and ensures that most subsequent steps improve the value of the objective function. One step following this principle is inserted to reselect the pivot element in the existing linear programming algorithm. Both the conditions for inserting this step and the maximum number of allowed insertion steps are determined. In the case study, taking several linear programming problems as examples, the results...

  6. Non-linear M -sequences Generation Method

    Directory of Open Access Journals (Sweden)

    Z. R. Garifullina

    2011-06-01

    Full Text Available The article deals with a new method for modeling a pseudorandom number generator based on R-blocks. The gist of the method is the replacement of a multi-digit XOR element by a stochastic adder in a parallel binary linear feedback shift register scheme.
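
    For contrast with the stochastic variant proposed here, a conventional binary LFSR with XOR feedback, which generates an m-sequence when the feedback polynomial is primitive, can be sketched as:

```python
def lfsr_msequence(taps, nbits, seed=1):
    """Fibonacci LFSR over GF(2).  With a primitive feedback polynomial
    the output bit stream is an m-sequence of period 2**nbits - 1."""
    state = seed
    out = []
    period = 0
    while True:
        bit = 0
        for t in taps:                       # XOR of the tapped bits = feedback
            bit ^= (state >> (t - 1)) & 1
        out.append(state & 1)                # output the least significant bit
        state = (state >> 1) | (bit << (nbits - 1))
        period += 1
        if state == seed:                    # full cycle reached
            return out, period

# x^4 + x + 1 is primitive: taps at bit positions 4 and 1
seq, period = lfsr_msequence([4, 1], 4)
print(period)  # → 15
```

    The maximal period 2⁴ − 1 = 15 and the balanced bit count (eight ones, seven zeros) are the classic m-sequence properties.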

  7. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearization of exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.

  8. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere

    OpenAIRE

    Rossi, Sergio; Anfodillo, Tommaso; Čufar, Katarina; Cuny, Henri E.; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gričar, Jožica; Gruber, Andreas; King, Gregory M.; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B. K.

    2017-01-01

    Background and Aims Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns.

  9. Accelerating Monte Carlo Molecular Simulations Using Novel Extrapolation Schemes Combined with Fast Database Generation on Massively Parallel Machines

    KAUST Repository

    Amir, Sahar Z.

    2013-05-01

    We introduce an efficient thermodynamically consistent technique to extrapolate and interpolate normalized Canonical NVT ensemble averages like pressure and energy for Lennard-Jones (L-J) fluids. Preliminary results show promising applicability in oil and gas modeling, where accurate determination of thermodynamic properties in reservoirs is challenging. The thermodynamic interpolation and thermodynamic extrapolation schemes predict ensemble averages at different thermodynamic conditions from expensively simulated data points. The methods reweight and reconstruct previously generated database values of Markov chains at neighboring temperature and density conditions. To investigate the efficiency of these methods, two databases corresponding to different combinations of normalized density and temperature are generated. One contains 175 Markov chains with 10,000,000 MC cycles each and the other contains 3000 Markov chains with 61,000,000 MC cycles each. For such massive database creation, two algorithms to parallelize the computations have been investigated. The accuracy of the thermodynamic extrapolation scheme is investigated with respect to classical interpolation and extrapolation. Finally, thermodynamic interpolation benefiting from four neighboring Markov chains points is implemented and compared with previous schemes. The thermodynamic interpolation scheme using knowledge from the four neighboring points proves to be more accurate than the thermodynamic extrapolation from the closest point only, while both thermodynamic extrapolation and thermodynamic interpolation are more accurate than the classical interpolation and extrapolation. The investigated extrapolation scheme has great potential in oil and gas reservoir modeling. That is, such a scheme has the potential to speed up the MCMC thermodynamic computation to be comparable with conventional Equation of State approaches in efficiency. In particular, this makes it applicable to large-scale optimization of L

  10. On Extrapolating Past the Range of Observed Data When Making Statistical Predictions in Ecology.

    Directory of Open Access Journals (Sweden)

    Paul B Conn

    Full Text Available Ecologists are increasingly using statistical models to predict animal abundance and occurrence in unsampled locations. The reliability of such predictions depends on a number of factors, including sample size, how far prediction locations are from the observed data, and similarity of predictive covariates in locations where data are gathered to locations where predictions are desired. In this paper, we propose extending Cook's notion of an independent variable hull (IVH), developed originally for application with linear regression models, to generalized regression models as a way to help assess the potential reliability of predictions in unsampled areas. Predictions occurring inside the generalized independent variable hull (gIVH) can be regarded as interpolations, while predictions occurring outside the gIVH can be regarded as extrapolations worthy of additional investigation or skepticism. We conduct a simulation study to demonstrate the usefulness of this metric for limiting the scope of spatial inference when conducting model-based abundance estimation from survey counts. In this case, limiting inference to the gIVH substantially reduces bias, especially when survey designs are spatially imbalanced. We also demonstrate the utility of the gIVH in diagnosing problematic extrapolations when estimating the relative abundance of ribbon seals in the Bering Sea as a function of predictive covariates. We suggest that ecologists routinely use diagnostics such as the gIVH to help gauge the reliability of predictions from statistical models (such as generalized linear, generalized additive, and spatio-temporal regression models).
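
    For ordinary linear regression, Cook's original IVH test reduces to a leverage comparison: a new covariate vector lies inside the hull if its leverage does not exceed the largest leverage in the design matrix. A hedged sketch with simulated data:

```python
import numpy as np

def inside_ivh(X, x_new):
    """Cook's independent variable hull test for a linear model: a new
    covariate vector is inside the IVH if its leverage x'(X'X)^{-1}x does
    not exceed the maximum leverage of the observed design matrix X."""
    XtX_inv = np.linalg.inv(X.T @ X)
    h_max = np.max(np.diag(X @ XtX_inv @ X.T))   # largest observed leverage
    h_new = x_new @ XtX_inv @ x_new
    return h_new <= h_max

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50), rng.uniform(0, 1, 50)])  # intercept + covariate

print(inside_ivh(X, np.array([1.0, 0.5])))   # within the data: interpolation
print(inside_ivh(X, np.array([1.0, 5.0])))   # far outside: extrapolation
```

    The gIVH proposed in the paper generalizes this idea by replacing leverage with the prediction variance of the generalized model; the decision rule (flag points whose variance exceeds the maximum observed in-sample value) is the same.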

  11. Dynamic Aperture Extrapolation in Presence of Tune Modulation

    CERN Document Server

    Giovannozzi, Massimo; Todesco, Ezio

    1998-01-01

    In hadron colliders, such as the Large Hadron Collider (LHC) to be built at CERN, the long-term stability of the single-particle motion is mostly determined by the field-shape quality of the superconducting magnets. The mechanism of particle loss may be largely enhanced by modulation of betatron tunes, induced either by synchro-betatron coupling (via the residual uncorrected chromaticity), or by unavoidable power supply ripple. This harmful effect is investigated in a simple dynamical system model, the Hénon map with modulated linear frequencies. Then, a realistic accelerator model describing the injection optics of the LHC lattice is analyzed. Orbital data obtained with long-term tracking simulations ($10^5$-$10^7$ turns) are post-processed to obtain the dynamic aperture. It turns out that the dynamic aperture can be interpolated using a simple empirical formula, and it decays proportionally to a power of the inverse logarithm of the number of turns. Furthermore, the extrapolation of tracking data at $10^5$ t...

  12. Comparison of equivalent linear and non linear methods on ground response analysis: case study at West Bangka site

    International Nuclear Information System (INIS)

    Eko Rudi Iswanto; Eric Yee

    2016-01-01

    Within the framework of identifying NPP sites, site surveys are performed in West Bangka (WB), Bangka-Belitung Island Province. Ground response analysis of a potential site has been carried out using peak strain profiles and peak ground acceleration. The objective of this research is to compare the Equivalent Linear (EQL) and Non Linear (NL) methods of ground response analysis at the selected NPP site (West Bangka) using the Deep Soil software. The equivalent linear method is widely used because it requires soil data in a simple form and little computation time. On the other hand, the non-linear method is capable of representing the actual soil behaviour by considering non-linear soil parameters. The results showed that the EQL method has trends similar to those of the NL method. At the surface layer, the acceleration values for the EQL and NL methods are 0.425 g and 0.375 g, respectively. The NL method is more reliable in capturing higher frequencies of spectral acceleration compared to the EQL method. (author)

  13. Application of the EXtrapolated Efficiency Method (EXEM) to infer the gamma-cascade detection efficiency in the actinide region

    Energy Technology Data Exchange (ETDEWEB)

    Ducasse, Q. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); CEA-Cadarache, DEN/DER/SPRC/LEPh, 13108 Saint Paul lez Durance (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Mathieu, L.; Marini, P. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Morillon, B. [CEA DAM DIF, 91297 Arpajon (France); Aiche, M.; Tsekhanovich, I. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France)

    2016-08-01

    The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the 238U(d,p)239U and 238U(3He,d)239Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of 239Np below the neutron separation energy allowed us to validate the EXEM.

  14. Application of the EXtrapolated Efficiency Method (EXEM) to infer the gamma-cascade detection efficiency in the actinide region

    International Nuclear Information System (INIS)

    Ducasse, Q.; Jurado, B.; Mathieu, L.; Marini, P.; Morillon, B.; Aiche, M.; Tsekhanovich, I.

    2016-01-01

    The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the 238U(d,p)239U and 238U(3He,d)239Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of 239Np below the neutron separation energy allowed us to validate the EXEM.

  15. The simplex method of linear programming

    CERN Document Server

    Ficken, Frederick A

    1961-01-01

    This concise but detailed and thorough treatment discusses the rudiments of the well-known simplex method for solving optimization problems in linear programming. Geared toward undergraduate students, the approach offers sufficient material for readers without a strong background in linear algebra. Many different kinds of problems further enrich the presentation. The text begins with examinations of the allocation problem, matrix notation for dual problems, feasibility, and theorems on duality and existence. Subsequent chapters address convex sets and boundedness, the prepared problem and boun
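
    As a modern counterpart to the book's hand-worked allocation problems, a small LP can be handed to SciPy's linprog (shown with the HiGHS solver; current SciPy has deprecated its legacy 'simplex' option):

```python
import numpy as np
from scipy.optimize import linprog

# Toy allocation problem: maximize 3x + 2y subject to
#   x + y <= 4,  x + 3y <= 6,  x >= 0,  y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)],
              method="highs")

print(res.x, -res.fun)  # optimal vertex and objective value
```

    The optimum sits at the vertex (4, 0) with objective value 12, illustrating the simplex property that an optimal solution of a bounded feasible LP is attained at a vertex of the feasible polytope.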

  16. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  17. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to Vanden Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)
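Neither algorithm is spelled out in the abstract; as an illustrative sketch of sequence extrapolation in the same spirit, the following applies Neville-style polynomial extrapolation in the variable 1/n to estimate the n → ∞ limit of a finite-lattice sequence (the function name and the 1/n convergence ansatz are assumptions, not the reviewed algorithms themselves):

```python
def neville_extrapolate(ns, fs):
    """Polynomial (Neville) extrapolation of a sequence f(n) to n -> infinity:
    interpolate the points (x_i, f_i) with x_i = 1/n_i and evaluate at x = 0."""
    xs = [1.0 / n for n in ns]
    t = list(fs)
    for k in range(1, len(t)):
        for i in range(len(t) - k):
            # Neville recursion evaluated at x = 0
            t[i] = (xs[i] * t[i + 1] - xs[i + k] * t[i]) / (xs[i] - xs[i + k])
    return t[0]
```

For a sequence that is exactly a low-degree polynomial in 1/n, the extrapolated value recovers the limit to machine precision.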

  18. An extended GS method for dense linear systems

    Science.gov (United States)

    Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi

    2009-09-01

    Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix Ω in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum Ω. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner PG is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
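For orientation, the unpreconditioned Gauss-Seidel sweep is the baseline that the GSOR and preconditioned variants above modify; a minimal illustrative implementation (not the authors' code) is:

```python
import numpy as np

def gauss_seidel(A, b, iters=100):
    """Basic Gauss-Seidel sweeps for A x = b:
    x_i <- (b_i - sum_{j<i} a_ij x_j - sum_{j>i} a_ij x_j) / a_ii,
    using the already-updated components within each sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x
```

Convergence is guaranteed for, e.g., strictly diagonally dominant or symmetric positive definite matrices; the preconditioners discussed in the note aim to shrink the spectral radius of this iteration.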

  19. Linear regression methods according to objective functions

    OpenAIRE

    Yasemin Sisman; Sebahattin Bektas

    2012-01-01

    The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods, grouped according to the objective function, are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values.
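The Least Squares branch of this comparison has a closed-form solution; the sketch below fits a simple linear regression by the normal equations and evaluates both objective functions (sum of squared and sum of absolute residuals) on the same fit. Data and names are illustrative, not from the paper:

```python
def fit_least_squares(xs, ys):
    """Closed-form simple linear regression y = a + b*x minimizing squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
a, b = fit_least_squares(xs, ys)
sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))   # Least Squares objective
sae = sum(abs(y - (a + b * x)) for x, y in zip(xs, ys))     # Least Absolute Value objective
```

Minimizing the absolute-value objective instead has no closed form and is typically solved as a linear program; comparing the two objective values on a given fit is the kind of success measure the abstract describes.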

  20. A NEW CODE FOR NONLINEAR FORCE-FREE FIELD EXTRAPOLATION OF THE GLOBAL CORONA

    International Nuclear Information System (INIS)

    Jiang Chaowei; Feng Xueshang; Xiang Changqing

    2012-01-01

    Reliable measurements of the solar magnetic field are still restricted to the photosphere, and our present knowledge of the three-dimensional coronal magnetic field is largely based on extrapolations from photospheric magnetograms using physical models, e.g., the nonlinear force-free field (NLFFF) model that is usually adopted. Most of the currently available NLFFF codes have been developed for a computational volume such as a Cartesian box or a spherical wedge, while a global full-sphere extrapolation is still under development. A high-performance global extrapolation code is particularly urgently needed considering that the Solar Dynamics Observatory can provide a full-disk magnetogram with resolution up to 4096 × 4096. In this work, we present a new parallelized code for global NLFFF extrapolation with the photospheric magnetogram as input. The method is based on the magnetohydrodynamics relaxation approach, the CESE-MHD numerical scheme, and a Yin-Yang spherical grid that is used to overcome the polar problems of the standard spherical grid. The code is validated against two full-sphere force-free solutions from Low and Lou's semi-analytic force-free field model. The code shows high accuracy and fast convergence, and can be ready for practical application once combined with an adaptive mesh refinement technique.

  1. Radiographic film: surface dose extrapolation techniques

    International Nuclear Information System (INIS)

    Cheung, T.; Yu, P.K.N.; Butson, M.J.; Cancer Services, Wollongong, NSW; Currie, M.

    2004-01-01

    Full text: Assessment of surface dose delivered from radiotherapy x-ray beams for optimal results should be performed both inside and outside the prescribed treatment fields. An extrapolation technique can be used with radiographic film to perform surface dose assessment for open-field high-energy x-ray beams. This can produce an accurate two-dimensional map of surface dose if required. Results have shown that surface percentage dose can be estimated within ±3% of parallel-plate ionisation chamber results with radiographic film, using a series of film layers to produce an extrapolated result. The extrapolated percentage dose for 10 cm, 20 cm and 30 cm square fields was estimated to be 15% ± 2%, 29% ± 3% and 38% ± 3% at the central axis, and relatively uniform across the treatment field. Corresponding parallel-plate ionisation chamber measurements are 16%, 27% and 37% respectively. Surface doses are also measured outside the treatment field; these are mainly due to scattered electron contamination. To achieve this result, film calibration curves must be irradiated with x-ray field sizes similar to those of the experimental film, to minimize quantitative variations in film optical density caused by the variation of the x-ray spectrum with field size. Copyright (2004) Australasian College of Physical Scientists and Engineers in Medicine

  2. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    Energy Technology Data Exchange (ETDEWEB)

    Spackman, Peter R.; Karton, Amir, E-mail: amir.karton@uwa.edu.au [School of Chemistry and Biochemistry, The University of Western Australia, Perth, WA 6009 (Australia)

    2015-05-15

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L{sup α} two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol{sup –1}. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol{sup –1}.
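The two-point formula E(L) = A + B/L^α has a closed-form solution for the basis-set limit A given energies at two cardinal numbers; a minimal sketch (the function name and default exponent are assumptions, not the paper's code):

```python
def two_point_cbs(e_small, e_large, l_small, l_large, alpha=3.0):
    """Two-point extrapolation with ansatz E(L) = A + B / L**alpha.
    Solving the two equations for A gives the estimated basis-set limit."""
    p_small, p_large = l_small ** alpha, l_large ** alpha
    return (e_large * p_large - e_small * p_small) / (p_large - p_small)
```

The "system-dependent" variant described in the abstract would additionally fit α per system from cheaper MP2 energies rather than using a global value.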

  3. Estimating the CCSD basis-set limit energy from small basis sets: basis-set extrapolations vs additivity schemes

    International Nuclear Information System (INIS)

    Spackman, Peter R.; Karton, Amir

    2015-01-01

    Coupled cluster calculations with all single and double excitations (CCSD) converge exceedingly slowly with the size of the one-particle basis set. We assess the performance of a number of approaches for obtaining CCSD correlation energies close to the complete basis-set limit in conjunction with relatively small DZ and TZ basis sets. These include global and system-dependent extrapolations based on the A + B/L α two-point extrapolation formula, and the well-known additivity approach that uses an MP2-based basis-set-correction term. We show that the basis set convergence rate can change dramatically between different systems (e.g., it is slower for molecules with polar bonds and/or second-row elements). The system-dependent basis-set extrapolation scheme, in which unique basis-set extrapolation exponents for each system are obtained from lower-cost MP2 calculations, significantly accelerates the basis-set convergence relative to the global extrapolations. Nevertheless, we find that the simple MP2-based basis-set additivity scheme outperforms the extrapolation approaches. For example, the following root-mean-squared deviations are obtained for the 140 basis-set limit CCSD atomization energies in the W4-11 database: 9.1 (global extrapolation), 3.7 (system-dependent extrapolation), and 2.4 (additivity scheme) kJ mol –1. The CCSD energy in these approximations is obtained from basis sets of up to TZ quality and the latter two approaches require additional MP2 calculations with basis sets of up to QZ quality. We also assess the performance of the basis-set extrapolations and additivity schemes for a set of 20 basis-set limit CCSD atomization energies of larger molecules including amino acids, DNA/RNA bases, aromatic compounds, and platonic hydrocarbon cages. We obtain the following RMSDs for the above methods: 10.2 (global extrapolation), 5.7 (system-dependent extrapolation), and 2.9 (additivity scheme) kJ mol –1.

  4. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error Concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used for applications such as video-telephony, video-conferencing, TV, DVD, internet video streaming, video games, etc. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the Block Matching error concealment algorithm is compared with the Frequency Selective Extrapolation algorithm. Both works conceal video frames with manually introduced errors given as input. The parameters used for objective quality measurement were PSNR (Peak Signal to Noise Ratio) and SSIM (Structural Similarity Index). The original video frames along with the error video frames are compared using both error concealment algorithms. According to the simulation results, Frequency Selective Extrapolation shows better quality measures, with 48% improved PSNR and 94% increased SSIM over the Block Matching algorithm.
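PSNR, the first quality metric quoted above, is straightforward to compute from the mean squared error between the original and reconstructed frames; a minimal sketch for 8-bit pixel data (illustrative, not the paper's evaluation code):

```python
import math

def psnr(orig, recon, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-length pixel sequences.
    Identical inputs give infinite PSNR (zero mean squared error)."""
    mse = sum((a - b) ** 2 for a, b in zip(orig, recon)) / len(orig)
    return float('inf') if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

SSIM is more involved (local means, variances and covariances over sliding windows), which is why libraries are usually used for it.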

  5. A feasible DY conjugate gradient method for linear equality constraints

    Science.gov (United States)

    LI, Can

    2017-09-01

    In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension of the Dai-Yuan conjugate gradient method, proposed by Dai and Yuan, to linear equality constrained optimization problems. It can be applied to large linear equality constrained problems due to its lower storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.

  6. Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis

    Science.gov (United States)

    Freund, Roland W.

    1991-01-01

    We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric respectively shifted Hermitian linear systems. We also include some results concerning the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
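The "obvious approach" mentioned at the end, solving an equivalent real linear system for the real and imaginary parts of x, can be sketched directly. Writing A = M + iN, x = u + iv, b = c + id turns Ax = b into a 2n × 2n real block system (a dense direct solve is used here purely for illustration; the paper's interest is in Krylov iterations):

```python
import numpy as np

def solve_complex_via_real(A, b):
    """Solve the complex system A x = b through the equivalent real system
    [[Re A, -Im A], [Im A, Re A]] [Re x; Im x] = [Re b; Im b]."""
    M, N = A.real, A.imag
    big = np.block([[M, -N], [N, M]])
    rhs = np.concatenate([b.real, b.imag])
    sol = np.linalg.solve(big, rhs)
    n = len(b)
    return sol[:n] + 1j * sol[n:]
```

As the abstract suggests, this doubles the system size and can destroy special structure (e.g., complex symmetry), which motivates Krylov methods that work directly in complex arithmetic.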

  7. Statistical validation of engineering and scientific models : bounds, calibration, and extrapolation.

    Energy Technology Data Exchange (ETDEWEB)

    Dowding, Kevin J.; Hills, Richard Guy (New Mexico State University, Las Cruces, NM)

    2005-04-01

    Numerical models of complex phenomena often contain approximations due to our inability to fully model the underlying physics, the excessive computational resources required to fully resolve the physics, the need to calibrate constitutive models, or in some cases, our ability to only bound behavior. Here we illustrate the relationship between approximation, calibration, extrapolation, and model validation through a series of examples that use the linear transient convective/dispersion equation to represent the nonlinear behavior of Burgers equation. While the use of these models represents a simplification relative to the types of systems we normally address in engineering and science, the present examples do support the tutorial nature of this document without obscuring the basic issues presented with unnecessarily complex models.

  8. A Linear Birefringence Measurement Method for an Optical Fiber Current Sensor.

    Science.gov (United States)

    Xu, Shaoyi; Shao, Haiming; Li, Chuansheng; Xing, Fangfang; Wang, Yuqiao; Li, Wei

    2017-07-03

    In this work, a linear birefringence measurement method is proposed for an optical fiber current sensor (OFCS). First, the optical configuration of the measurement system is presented. Then, the method for eliminating the effect of the azimuth angles between the sensing fiber and the two polarizers is demonstrated. Moreover, the relationship between the linear birefringence, the Faraday rotation angle and the final output is determined. On these bases, the multi-valued problem of the linear birefringence is simulated and its solution is illustrated for the case where the linear birefringence is unknown. Finally, experiments are conducted to prove the feasibility of the proposed method. When the numbers of turns of the sensing fiber in the OFCS are about 15, 19, 23, 27, 31, 35, and 39, the measured linear birefringence values obtained by the proposed method are about 1.3577, 1.8425, 2.0983, 2.5914, 2.7891, 3.2003 and 3.5198 rad. Two typical methods provide references for the proposed method. The proposed method is proven to be suitable for linear birefringence measurement over the full range, without the limitation that the linear birefringence must be smaller than π/2.

  9. Uzawa method for fuzzy linear system

    OpenAIRE

    Ke Wang

    2013-01-01

    An Uzawa method is presented for solving fuzzy linear systems whose coefficient matrix is crisp and the right-hand side column is arbitrary fuzzy number vector. The explicit iterative scheme is given. The convergence is analyzed with convergence theorems and the optimal parameter is obtained. Numerical examples are given to illustrate the procedure and show the effectiveness and efficiency of the method.
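For orientation, the classical Uzawa iteration for a crisp saddle-point system, which the fuzzy variant above generalizes, alternates a solve in the primal variable with a relaxed update of the multiplier; the sketch below is an illustrative assumption, not the paper's scheme:

```python
import numpy as np

def uzawa(A, B, f, g, omega=1.0, iters=500):
    """Classical Uzawa iteration for the saddle-point system
    A x + B^T lam = f,  B x = g:
    repeatedly solve for x given lam, then relax lam by the constraint residual."""
    lam = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, f - B.T @ lam)
        lam = lam + omega * (B @ x - g)
    return x, lam
```

Convergence depends on omega relative to the Schur complement B A⁻¹ Bᵀ; the abstract's "optimal parameter" plays exactly this role in the fuzzy setting.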

  10. Smooth extrapolation of unknown anatomy via statistical shape models

    Science.gov (United States)

    Grupp, R. B.; Chiang, H.; Otake, Y.; Murphy, R. J.; Gordon, C. R.; Armand, M.; Taylor, R. H.

    2015-03-01

    Several methods to perform extrapolation of unknown anatomy were evaluated. The primary application is to enhance surgical procedures that may use partial medical images or medical images of incomplete anatomy. Le Fort-based, face-jaw-teeth transplant is one such procedure. From CT data of 36 skulls and 21 mandibles separate Statistical Shape Models of the anatomical surfaces were created. Using the Statistical Shape Models, incomplete surfaces were projected to obtain complete surface estimates. The surface estimates exhibit non-zero error in regions where the true surface is known; it is desirable to keep the true surface and seamlessly merge the estimated unknown surface. Existing extrapolation techniques produce non-smooth transitions from the true surface to the estimated surface, resulting in additional error and a less aesthetically pleasing result. The three extrapolation techniques evaluated were: copying and pasting of the surface estimate (non-smooth baseline), a feathering between the patient surface and surface estimate, and an estimate generated via a Thin Plate Spline trained from displacements between the surface estimate and corresponding vertices of the known patient surface. Feathering and Thin Plate Spline approaches both yielded smooth transitions. However, feathering corrupted known vertex values. Leave-one-out analyses were conducted, with 5% to 50% of known anatomy removed from the left-out patient and estimated via the proposed approaches. The Thin Plate Spline approach yielded smaller errors than the other two approaches, with an average vertex error improvement of 1.46 mm and 1.38 mm for the skull and mandible respectively, over the baseline approach.
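The feathering approach described above, which buys a smooth transition at the cost of corrupting known vertex values, can be illustrated in one dimension (a toy sketch; the paper operates on 3D mesh vertices):

```python
def feather_blend(known, estimate, band):
    """Linearly feather from known values into estimated values:
    the blend weight on the known signal falls from 1 to 0 over `band` samples,
    after which the output is purely the estimate."""
    out = []
    for i, (k, e) in enumerate(zip(known, estimate)):
        w = max(0.0, min(1.0, 1.0 - i / band))
        out.append(w * k + (1.0 - w) * e)
    return out
```

Note that every sample with 0 < w < 1 no longer equals the known value, which is the corruption the Thin Plate Spline approach avoids by instead warping the estimate to match the known surface exactly.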

  11. Dose and dose rate extrapolation factors for malignant and non-malignant health endpoints after exposure to gamma and neutron radiation

    Energy Technology Data Exchange (ETDEWEB)

    Tran, Van; Little, Mark P. [National Cancer Institute, Radiation Epidemiology Branch, Rockville, MD (United States)

    2017-11-15

    Murine experiments were conducted at the JANUS reactor in Argonne National Laboratory from 1970 to 1992 to study the effect of acute and protracted radiation dose from gamma rays and fission neutron whole body exposure. The present study reports the reanalysis of the JANUS data on 36,718 mice, of which 16,973 mice were irradiated with neutrons, 13,638 were irradiated with gamma rays, and 6107 were controls. Mice were mostly Mus musculus, but one experiment used Peromyscus leucopus. For both types of radiation exposure, a Cox proportional hazards model was used, using age as timescale, and stratifying on sex and experiment. The optimal model was one with linear and quadratic terms in cumulative lagged dose, with adjustments to both linear and quadratic dose terms for low-dose rate irradiation (<5 mGy/h) and with adjustments to the dose for age at exposure and sex. After gamma ray exposure there is significant non-linearity (generally with upward curvature) for all tumours, lymphoreticular, respiratory, connective tissue and gastrointestinal tumours, also for all non-tumour, other non-tumour, non-malignant pulmonary and non-malignant renal diseases (p < 0.001). Associated with this the low-dose extrapolation factor, measuring the overestimation in low-dose risk resulting from linear extrapolation is significantly elevated for lymphoreticular tumours 1.16 (95% CI 1.06, 1.31), elevated also for a number of non-malignant endpoints, specifically all non-tumour diseases, 1.63 (95% CI 1.43, 2.00), non-malignant pulmonary disease, 1.70 (95% CI 1.17, 2.76) and other non-tumour diseases, 1.47 (95% CI 1.29, 1.82). However, for a rather larger group of malignant endpoints the low-dose extrapolation factor is significantly less than 1 (implying downward curvature), with central estimates generally ranging from 0.2 to 0.8, in particular for tumours of the respiratory system, vasculature, ovary, kidney/urinary bladder and testis. 
For neutron exposure most endpoints, malignant and
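The low-dose extrapolation factor reported above measures how far linear extrapolation from a higher dose over- or underestimates the true low-dose slope under a linear-quadratic model. A hedged sketch of one common definitional form, LDEF(D) = 1 + (β/α)D (the exact definition used in the study may differ), so that upward curvature (β > 0) gives a factor above 1 and downward curvature gives a factor below 1:

```python
def low_dose_extrapolation_factor(alpha, beta, dose):
    """For excess risk modeled as alpha*D + beta*D**2, the assumed-form factor
    LDEF = 1 + (beta/alpha)*dose by which a linear fit through `dose`
    overestimates (>1) or underestimates (<1) the low-dose linear slope."""
    return 1.0 + (beta / alpha) * dose
```

This matches the qualitative reading of the abstract: significantly elevated LDEF for endpoints with upward curvature, and LDEF below 1 for the malignant endpoints with downward curvature.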

  12. A Proposed Method for Solving Fuzzy System of Linear Equations

    Directory of Open Access Journals (Sweden)

    Reza Kargar

    2014-01-01

    Full Text Available This paper proposes a new method for solving fuzzy system of linear equations with crisp coefficients matrix and fuzzy or interval right hand side. Some conditions for the existence of a fuzzy or interval solution of m×n linear system are derived and also a practical algorithm is introduced in detail. The method is based on linear programming problem. Finally the applicability of the proposed method is illustrated by some numerical examples.

  13. An extrapolation scheme for solid-state NMR chemical shift calculations

    Science.gov (United States)

    Nakajima, Takahito

    2017-06-01

    Conventional quantum chemical and solid-state physics approaches suffer from several problems in accurately calculating solid-state nuclear magnetic resonance (NMR) properties. We propose a reliable computational scheme for solid-state NMR chemical shifts using an extrapolation scheme that retains the advantages of these approaches while reducing their disadvantages. Our scheme can satisfactorily yield solid-state NMR magnetic shielding constants. The estimated values show only a small dependence on the low-level density functional theory calculation used in the extrapolation scheme. Thus, our approach is efficient because the rough calculation can be performed within the extrapolation scheme.

  14. An explicit method in non-linear soil-structure interaction

    International Nuclear Information System (INIS)

    Kunar, R.R.

    1981-01-01

    The explicit method of analysis in the time domain is ideally suited for the solution of transient dynamic non-linear problems. Though the method is not new, its application to seismic soil-structure interaction is relatively new and deserving of public discussion. This paper describes the principles of the explicit approach in soil-structure interaction and it presents a simple algorithm that can be used in the development of explicit computer codes. The paper also discusses some of the practical considerations like non-reflecting boundaries and time steps. The practicality of the method is demonstrated using a computer code, PRESS, which is used to compare the treatment of strain-dependent properties using average strain levels over the whole time history (the equivalent linear method) and using the actual strain levels at every time step to modify the soil properties (non-linear method). (orig.)

  15. Non linear dynamics of magnetic islands in fusion plasmas

    International Nuclear Information System (INIS)

    Meshcheriakov, D.

    2012-10-01

    In this thesis we investigate the linear stability of tearing modes in the presence of both curvature and diamagnetic rotation, using the non-linear full-MHD toroidal code XTOR-2F, which includes anisotropic heat transport, diamagnetic and geometrical effects. This analysis is applied to one of the fully non-inductive discharges on Tore Supra. Such experiments are crucially important for demonstrating reactor-scale steady-state operation of the tokamak. The possibility of full linear stabilization of the tearing modes by diamagnetic rotation in the presence of toroidal curvature is shown. The stabilization threshold does not follow the classical scaling law connecting the growth rate of islands to plasma conductivity, measured here by the Lundquist number (S). However, for numerical reasons, the conductivity used in the simulations is lower than that of the experiment, which raises the question of extrapolating the obtained results to the experimental situation. This extrapolation requires simulations with several different conductivities. It predicts that the mode at the q = 2 surface is stable at a value of the diamagnetic frequency consistent with the experimental one at S = S(exp). In the linearly stable domain, the mode is metastable: the saturation level depends on the seed island size. In the non-linear regime, the saturation of the n=1, m=2 mode is found to be strongly reduced by diamagnetic rotation and by the Lundquist number. However, the extrapolation to the experimental situation shows that if the island is destabilized, it will saturate at a level detectable by the Tore Supra diagnostics. For a large plasma aspect ratio (i.e. weak curvature effects), the reduction of the saturated width with diamagnetic frequency takes the form of a jump reminiscent of the multiple states evidenced in the slab geometry case. The question of extrapolating the obtained results towards future generations of fusion devices is also addressed. 
In particular, for

  16. Non-linear programming method in optimization of fast reactors

    International Nuclear Information System (INIS)

    Pavelesku, M.; Dumitresku, Kh.; Adam, S.

    1975-01-01

    Application of non-linear programming methods to the optimization of the distribution of nuclear materials in a fast reactor is discussed. The programming task is composed on the basis of a reactor calculation that depends on the fuel distribution strategy. As an illustration of the method's application, the solution of a simple example is given. The non-linear program is solved using the numerical method SUMT. (I.T.)
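SUMT (Sequential Unconstrained Minimization Technique) replaces the constrained program by a sequence of unconstrained minimizations with a growing penalty weight. The sketch below uses a quadratic exterior penalty and a gradient-descent inner loop whose step size is tuned to the toy example; every name and parameter here is an illustrative assumption, not the reactor formulation:

```python
def sumt_penalty(f_grad, g, g_grad, x0, mus=(1.0, 10.0, 100.0, 1000.0)):
    """SUMT-style exterior penalty for: minimize f(x) subject to g(x) <= 0.
    For each penalty weight mu, minimize f(x) + mu*max(0, g(x))**2 by gradient
    descent, warm-starting the next stage from the current minimizer."""
    x = x0
    for mu in mus:
        lr = 1.0 / (2.0 + 2.0 * mu)  # step size matched to this quadratic example
        for _ in range(200):
            viol = max(0.0, g(x))
            x -= lr * (f_grad(x) + 2.0 * mu * viol * g_grad(x))
    return x

# Toy problem: minimize x**2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
# The penalty minimizers mu/(1 + mu) approach the constrained optimum x = 1.
x_star = sumt_penalty(lambda x: 2.0 * x, lambda x: 1.0 - x, lambda x: -1.0, x0=0.0)
```

In practice each unconstrained stage would use a proper minimizer (Newton or quasi-Newton) rather than fixed-step gradient descent.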

  17. Convergence of hybrid methods for solving non-linear partial ...

    African Journals Online (AJOL)

    This paper is concerned with the numerical solution and convergence analysis of non-linear partial differential equations using a hybrid method. The solution technique involves discretizing the non-linear system of PDE to obtain a corresponding non-linear system of algebraic difference equations to be solved at each time ...

  18. Nuclear lattice simulations using symmetry-sign extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Laehde, Timo A.; Luu, Thomas [Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik, and Juelich Center for Hadron Physics, Juelich (Germany); Lee, Dean [North Carolina State University, Department of Physics, Raleigh, NC (United States); Meissner, Ulf G. [Universitaet Bonn, Helmholtz-Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik, and Juelich Center for Hadron Physics, Juelich (Germany); Forschungszentrum Juelich, JARA - High Performance Computing, Juelich (Germany); Epelbaum, Evgeny; Krebs, Hermann [Ruhr-Universitaet Bochum, Institut fuer Theoretische Physik II, Bochum (Germany); Rupak, Gautam [Mississippi State University, Department of Physics and Astronomy, Mississippi State, MS (United States)

    2015-07-15

    Projection Monte Carlo calculations of lattice Chiral Effective Field Theory suffer from sign oscillations to a varying degree dependent on the number of protons and neutrons. Hence, such studies have hitherto been concentrated on nuclei with equal numbers of protons and neutrons, and especially on the alpha nuclei where the sign oscillations are smallest. Here, we introduce the "symmetry-sign extrapolation" method, which allows us to use the approximate Wigner SU(4) symmetry of the nuclear interaction to systematically extend the Projection Monte Carlo calculations to nuclear systems where the sign problem is severe. We benchmark this method by calculating the ground-state energies of the {sup 12}C, {sup 6}He and {sup 6}Be nuclei, and discuss its potential for studies of neutron-rich halo nuclei and asymmetric nuclear matter. (orig.)

  19. Interior-Point Methods for Linear Programming: A Review

    Science.gov (United States)

    Singh, J. N.; Singh, D.

    2002-01-01

    The paper reviews some recent advances in interior-point methods for linear programming and indicates directions in which future progress can be made. Most of the interior-point methods belong to any of three categories: affine-scaling methods, potential reduction methods and central path methods. These methods are discussed together with…

  20. A logic circuit for solving linear function by digital method

    International Nuclear Information System (INIS)

    Ma Yonghe

    1986-01-01

    A mathematical method for determining the linear relation of a physical quantity with radiation intensity is described. A logic circuit has been designed for solving the linear function by a digital method. Some applications and the circuit function are discussed

  1. RIFIFI: Analytical calculation method of the critical condition and flux in a varied regions reactor by two-group theory and one dimension developed for the Mercury-Ferranti computer; Rififi: methode de calcul analytique de la condition critique et des flux d'une pile a regions variees en theorie a deux groupes et a une dimension programmee pour le calculateur electronique Mercury (Ferranti)

    Energy Technology Data Exchange (ETDEWEB)

    Amouyal, A; Bacher, P; Lago, B; Mengin, F L; Parker, E [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1960-07-01

    The calculation method presented in this report has been developed for the Mercury-Ferranti computer of the C.E.N.S. It allows one to solve the diffusion equations, with continuity of flux and current, for two groups of neutrons in one dimension in spherical, cylindrical and linear geometry. In the cylindrical and linear configurations, the height and extrapolated radius can be taken into account. The critical condition can be realised by linearly varying one or more parameters: k{sub {infinity}}, a region boundary, the height or the extrapolated radius. The method also enables calculation of the flux, the adjoint flux and various integrals. The first part explains what the user needs to know before using the method: data presentation, capabilities of the method, and presentation of results, with some information about restrictions, accuracy and calculation time. The complete formulation of the calculation method is given in the second part. (M.P.)

  2. -Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Lee HyunYoung

    2010-01-01

    Full Text Available We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal error estimates of the discontinuous Galerkin approximations in both the spatial and temporal directions.
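The extrapolation idea behind such schemes, replacing the unknown value at the half time level by a linear combination of the two previous levels so the nonlinear term never needs an implicit solve, can be illustrated for a scalar ODE. For this explicit toy version the update reduces to a two-step Adams-Bashforth-type formula; the paper's fully discrete DG scheme is of course implicit in the diffusion part, so this is only an assumption-laden sketch of the extrapolation step:

```python
def extrapolated_midpoint(f, u0, dt, steps):
    """Time stepping for u' = f(u) where the value at t_{n+1/2} in the midpoint
    rule is replaced by the linear extrapolation (3*u_n - u_{n-1})/2, keeping
    second-order accuracy without a nonlinear solve at each step."""
    u_prev = u0
    u = u0 + dt * f(u0)          # bootstrap the first step with forward Euler
    for _ in range(steps - 1):
        u_half = 1.5 * u - 0.5 * u_prev   # extrapolated half-level value
        u_prev, u = u, u + dt * f(u_half)
    return u
```

With a stiff diffusion operator one would instead keep that part implicit (Crank-Nicolson) and extrapolate only the nonlinear term, which is exactly the linearly implicit structure the abstract describes.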

  3. The linearization method in hydrodynamical stability theory

    CERN Document Server

    Yudovich, V I

    1989-01-01

    This book presents the theory of the linearization method as applied to the problem of steady-state and periodic motions of continuous media. The author proves infinite-dimensional analogues of Lyapunov's theorems on stability, instability, and conditional stability for a large class of continuous media. In addition, semigroup properties for the linearized Navier-Stokes equations in the case of an incompressible fluid are studied, and coercivity inequalities and completeness of a system of small oscillations are proved.

  4. EPMLR: sequence-based linear B-cell epitope prediction method using multiple linear regression.

    Science.gov (United States)

    Lian, Yao; Ge, Meng; Pan, Xian-Ming

    2014-12-19

    B-cell epitopes have been studied extensively due to their immunological applications, such as peptide-based vaccine development, antibody production, and disease diagnosis and therapy. Despite several decades of research, the accurate prediction of linear B-cell epitopes has remained a challenging task. In this work, based on the antigen's primary sequence information, a novel linear B-cell epitope prediction model was developed using multiple linear regression (MLR). A 10-fold cross-validation test on a large non-redundant dataset was performed to evaluate the performance of our model. To alleviate the problem caused by the noise of the negative dataset, 300 experiments utilizing 300 sub-datasets were performed. We achieved an overall sensitivity of 81.8%, precision of 64.1% and area under the receiver operating characteristic curve (AUC) of 0.728. We have presented a reliable method for the identification of linear B-cell epitopes using the antigen's primary sequence information. Moreover, a web server EPMLR has been developed for linear B-cell epitope prediction: http://www.bioinfo.tsinghua.edu.cn/epitope/EPMLR/.

  5. On a linear method in bootstrap confidence intervals

    Directory of Open Access Journals (Sweden)

    Andrea Pallini

    2007-10-01

    Full Text Available A linear method for the construction of asymptotic bootstrap confidence intervals is proposed. We approximate asymptotically pivotal and non-pivotal quantities, which are smooth functions of means of n independent and identically distributed random variables, by using a sum of n independent smooth functions of the same analytical form. Errors are of order O_p(n^{-3/2}) and O_p(n^{-2}), respectively. The linear method allows a straightforward approximation of bootstrap cumulants, by considering the set of n independent smooth functions as an original random sample to be resampled with replacement.
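    For contrast with the paper's linear approximation, a plain percentile bootstrap for a smooth function of the mean (here θ = exp(x̄)) looks like the sketch below; the sample, seed, and resample count are arbitrary.

```python
import math
import random
import statistics

# Plain percentile bootstrap for a smooth function of the sample mean
# (theta = exp(mean)); this is the baseline computation that the paper's
# linear method approximates, not the linear method itself.
random.seed(42)
sample = [random.gauss(0.0, 1.0) for _ in range(200)]

def theta(xs):
    return math.exp(statistics.fmean(xs))

B = 2000
boot = sorted(
    theta([random.choice(sample) for _ in range(len(sample))])
    for _ in range(B)
)
lo, hi = boot[int(0.025 * B)], boot[int(0.975 * B) - 1]  # 95% percentile CI
```

    The resampling loop is the expensive part; the paper's linear method replaces it with closed-form cumulant approximations.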

  6. Mathematical methods for physical and analytical chemistry

    CERN Document Server

    Goodson, David Z

    2011-01-01

    Mathematical Methods for Physical and Analytical Chemistry presents mathematical and statistical methods to students of chemistry at the intermediate, post-calculus level. The content includes a review of general calculus; a review of numerical techniques often omitted from calculus courses, such as cubic splines and Newton's method; a detailed treatment of statistical methods for experimental data analysis; complex numbers; extrapolation; linear algebra; and differential equations. With numerous example problems and helpful anecdotes, this text gives chemistry students the mathematical

  7. Studying the Transient Thermal Contact Conductance Between the Exhaust Valve and Its Seat Using the Inverse Method

    Science.gov (United States)

    Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid

    2016-02-01

    In this study, experiments were performed to thermally analyze the exhaust valve of an air-cooled internal combustion engine and to estimate the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductances were calculated. The results of the linear extrapolation and inverse methods show similar trends and, based on the error analysis, are accurate enough to estimate the thermal contact conductance; the error analysis favors the linear extrapolation method using the inverse ratio. The effects of pressure, contact frequency, heat flux, and cooling air speed on the thermal contact conductance were investigated. The results show that increasing the contact pressure substantially increases the thermal contact conductance, whereas increasing the engine speed decreases it. Likewise, raising the air speed increases the thermal contact conductance, while raising the heat flux reduces it. The average calculated error equals 12.9 %.
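    The linear extrapolation step mentioned above amounts to passing a straight line through interior temperature readings and evaluating it at the contact surface. A two-point sketch, with invented thermocouple depths and temperatures:

```python
# Two-point linear extrapolation of the contact-surface temperature from
# two interior thermocouples, a minimal sketch of the extrapolation step
# described in the paper (depths and readings are invented).
x1, x2 = 2.0, 6.0        # thermocouple depths below the surface, mm
T1, T2 = 412.0, 386.0    # measured temperatures, degC

# Straight line through (x1, T1) and (x2, T2), evaluated at depth 0:
slope = (T2 - T1) / (x2 - x1)
T_surface = T1 - slope * x1
```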

  8. Higher Order Aitken Extrapolation with Application to Converging and Diverging Gauss-Seidel Iterations

    OpenAIRE

    Tiruneh, Ababu Teklemariam

    2013-01-01

    Aitken extrapolation, normally applied to convergent fixed point iteration, is extended to extrapolate the solution of a divergent iteration. In addition, higher order Aitken extrapolation is introduced that enables successive decomposition of large eigenvalues of the iteration matrix to enable convergence. While extrapolation of a convergent fixed point iteration using a geometric series sum is a known form of Aitken acceleration, it is shown in this paper that the same formula can be used to ...
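    As a minimal sketch of first-order Aitken acceleration on a convergent fixed-point iteration (the starting point of the paper, not its higher-order extension), assuming the classic test problem x = cos(x):

```python
import math

# Aitken delta-squared extrapolation applied repeatedly to the convergent
# fixed-point iteration x = cos(x).  Each cycle takes two iterates and
# extrapolates:  x* ~ x - (x1 - x)^2 / (x2 - 2*x1 + x).
def aitken(x0, g, n=5):
    x = x0
    for _ in range(n):
        x1 = g(x)
        x2 = g(x1)
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:       # already (numerically) converged
            return x2
        x = x - (x1 - x) ** 2 / denom
    return x

root = aitken(1.0, math.cos)   # fixed point of cos(x), about 0.739085
```

    Used this way (re-iterating from each extrapolated value), the cycle is the classical Steffensen scheme and converges quadratically.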

  9. Communication: Predicting virial coefficients and alchemical transformations by extrapolating Mayer-sampling Monte Carlo simulations

    Science.gov (United States)

    Hatch, Harold W.; Jiao, Sally; Mahynski, Nathan A.; Blanco, Marco A.; Shen, Vincent K.

    2017-12-01

    Virial coefficients are predicted over a large range of both temperatures and model parameter values (i.e., alchemical transformation) from an individual Mayer-sampling Monte Carlo simulation by statistical mechanical extrapolation with minimal increase in computational cost. With this extrapolation method, a Mayer-sampling Monte Carlo simulation of the SPC/E (extended simple point charge) water model quantitatively predicted the second virial coefficient as a continuous function spanning over four orders of magnitude in value and over three orders of magnitude in temperature with less than a 2% deviation. In addition, the same simulation predicted the second virial coefficient if the site charges were scaled by a constant factor, from an increase of 40% down to zero charge. This method is also shown to perform well for the third virial coefficient and the exponential parameter for a Lennard-Jones fluid.

  10. Approximate Method for Solving the Linear Fuzzy Delay Differential Equations

    Directory of Open Access Journals (Sweden)

    S. Narayanamoorthy

    2015-01-01

    Full Text Available We propose an approximate algorithm for solving linear fuzzy delay differential equations using the Adomian decomposition method. The detailed algorithm of the approach is provided. The approximate solution is compared with the exact solution to confirm the validity and efficiency of the method in handling linear fuzzy delay differential equations. To illustrate the features of the proposed method, a numerical example is given.

  11. Surface dose extrapolation measurements with radiographic film

    International Nuclear Information System (INIS)

    Butson, Martin J; Cheung Tsang; Yu, Peter K N; Currie, Michael

    2004-01-01

    Assessment of surface dose delivered from radiotherapy x-ray beams for optimal results should be performed both inside and outside the prescribed treatment fields. An extrapolation technique can be used with radiographic film to perform surface dose assessment for open field high energy x-ray beams. This can produce an accurate two-dimensional map of surface dose if required. Results have shown that the surface percentage dose can be estimated within ±3% of parallel plate ionization chamber results with radiographic film using a series of film layers to produce an extrapolated result. Extrapolated percentage dose assessment for 10 cm, 20 cm and 30 cm square fields was estimated to be 15% ± 2%, 29% ± 3% and 38% ± 3% at the central axis and relatively uniform across the treatment field. The corresponding parallel plate ionization chamber measurements are 16%, 27% and 37%, respectively. Surface doses are also measured outside the treatment field which are mainly due to scattered electron contamination. To achieve this result, film calibration curves must be irradiated to similar x-ray field sizes as the experimental film to minimize quantitative variations in film optical density caused by varying x-ray spectrum with field size. (note)
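    The extrapolation technique above can be sketched as a least-squares line through film readings taken at increasing effective depths (film layers), read off at zero depth. The layer thicknesses and percentage doses below are invented for illustration.

```python
# Sketch of the surface-dose extrapolation idea: film readings at several
# effective measurement depths are fitted linearly and extrapolated back
# to zero depth.  Depths and doses are invented illustrative values.
layers_mm = [0.2, 0.4, 0.6, 0.8]         # effective measurement depths, mm
percent_dose = [24.0, 33.0, 42.0, 51.0]  # measured percentage dose

def extrapolate_to_zero(xs, ys):
    """Least-squares line through (xs, ys); return its intercept at x = 0."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx

surface_dose = extrapolate_to_zero(layers_mm, percent_dose)
```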

  12. Generalization of the linear algebraic method to three dimensions

    International Nuclear Information System (INIS)

    Lynch, D.L.; Schneider, B.I.

    1991-01-01

    We present a numerical method for the solution of the Lippmann-Schwinger equation for electron-molecule collisions. By performing a three-dimensional numerical quadrature, this approach avoids both a basis-set representation of the wave function and a partial-wave expansion of the scattering potential. The resulting linear equations, analogous in form to the one-dimensional linear algebraic method, are solved with the direct iteration-variation method. Several numerical examples are presented. The prospect for using this numerical quadrature scheme for electron-polyatomic molecules is discussed

  13. Higher order methods for burnup calculations with Bateman solutions

    International Nuclear Information System (INIS)

    Isotalo, A.E.; Aarnio, P.A.

    2011-01-01

    Highlights: → Average microscopic reaction rates need to be estimated at each step. → Traditional predictor-corrector methods use zeroth and first order predictions. → Increasing predictor order greatly improves results. → Increasing corrector order does not improve results. - Abstract: A group of methods for burnup calculations solves the changes in material compositions by evaluating an explicit solution to the Bateman equations with constant microscopic reaction rates. This requires predicting representative averages for the one-group cross-sections and flux during each step, which is usually done using zeroth and first order predictions for their time development in a predictor-corrector calculation. In this paper we present the results of using linear, rather than constant, extrapolation on the predictor and quadratic, rather than linear, interpolation on the corrector. Both of these are done by using data from the previous step, and thus do not affect the stepwise running time. The methods were tested by implementing them into the reactor physics code Serpent and comparing the results from four test cases to accurate reference results obtained with very short steps. Linear extrapolation greatly improved results for thermal spectra and should be preferred over the constant one currently used in all Bateman solution based burnup calculations. The effects of using quadratic interpolation on the corrector were, on the other hand, predominantly negative, although not enough so to conclusively decide between the linear and quadratic variants.
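    The predictor improvement described above can be illustrated on a toy depletion problem: with a Bateman-style exponential step, the step itself is exact once a representative rate is chosen, so the whole error comes from that rate prediction, and linearly extrapolating the rate from the previous step beats holding it constant. The time-dependent decay rate below is invented for illustration.

```python
import math

# Toy depletion problem dN/dt = -lam(t) * N, advanced with an explicit
# Bateman-style step N_{n+1} = N_n * exp(-lam_eff * h).  The only error
# source is the representative rate lam_eff chosen for each step.
lam = lambda t: 1.0 + 0.5 * t            # "one-group reaction rate"
N0, T, steps = 1.0, 1.0, 10
h = T / steps

def run(extrapolate):
    N, lam_prev = N0, lam(-h)            # seed a consistent previous value
    for n in range(steps):
        t = n * h
        lam_n = lam(t)
        if extrapolate:                  # linear extrapolation to midstep
            lam_eff = lam_n + 0.5 * (lam_n - lam_prev)
        else:                            # constant (beginning-of-step) rate
            lam_eff = lam_n
        N *= math.exp(-lam_eff * h)
        lam_prev = lam_n
    return N

exact = math.exp(-(T + 0.25 * T * T))    # exp(-integral of lam over [0, T])
err_const = abs(run(False) - exact)
err_lin = abs(run(True) - exact)
```

    Because lam is linear here, the extrapolated midstep rate is exact and err_lin drops to round-off, while the constant predictor carries an O(h) error, mirroring the paper's finding.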

  14. General extrapolation model for an important chemical dose-rate effect

    International Nuclear Information System (INIS)

    Gillen, K.T.; Clough, R.L.

    1984-12-01

    In order to extrapolate material accelerated aging data, methodologies must be developed based on sufficient understanding of the processes leading to material degradation. One of the most important mechanisms leading to chemical dose-rate effects in polymers involves the breakdown of intermediate hydroperoxide species. A general model for this mechanism is derived based on the underlying chemical steps. The results lead to a general formalism for understanding dose rate and sequential aging effects when hydroperoxide breakdown is important. We apply the model to combined radiation/temperature aging data for a PVC material and show that this data is consistent with the model and that model extrapolations are in excellent agreement with 12-year real-time aging results from an actual nuclear plant. This model and other techniques discussed in this report can aid in the selection of appropriate accelerated aging methods and can also be used to compare and select materials for use in safety-related components. This will result in increased assurance that equipment qualification procedures are adequate

  15. Processing radioactive effluents with ion-exchange resins: a study of the extrapolation of results; Traitement des effluents radioactifs par resines echangeuses d'ions: etude de l'extrapolation des resultats

    Energy Technology Data Exchange (ETDEWEB)

    Wormser, G.

    1960-05-03

    As a previous study showed that ion-exchange resins could be used at Saclay for the treatment of radioactive effluents, the author reports a study investigating to what extent the results thus obtained could be extrapolated to taller industrial columns. The reported experiments aimed at determining the extrapolation modes that could be used for columns of organic resin employed for radioactive effluent decontamination; in particular, whether the Hiester and Vermeulen extrapolation law could be applied. Experiments were performed at constant percolation flow rate, at varying flow rate, and at constant flow rate. [French original, translated] Several studies have been carried out to examine the possibility of using ion-exchange resins for the treatment of radioactive effluents. In a preliminary report, we showed within what limits such a process could be used at the Centre d'Etudes Nucleaires de Saclay. The tests were carried out on small resin columns in the laboratory; it then appeared necessary to predict to what extent the results obtained in this way can be extrapolated to industrial columns of greater height. The experiments whose results are presented in this report aim to determine the extrapolation modes that could be employed for columns of organic resin used for the decontamination of radioactive effluents. In particular, we examined whether the Hiester and Vermeulen extrapolation law, which gives good results for the fixation of radioactive ions on soils in the presence of a macro-component ion, could be applied. The experiments, limited in number, showed that the Hiester and Vermeulen law could be applied for the effluent considered when the percolation flow rates are very low; when they are higher, the volumes of liquid percolated, for equal fixation, are proportional to the

  16. Strong-stability-preserving additive linear multistep methods

    KAUST Repository

    Hadjimichael, Yiannis; Ketcheson, David I.

    2018-01-01

    The analysis of strong-stability-preserving (SSP) linear multistep methods is extended to semi-discretized problems for which different terms on the right-hand side satisfy different forward Euler (or circle) conditions. Optimal perturbed

  17. An introduction to fuzzy linear programming problems theory, methods and applications

    CERN Document Server

    Kaur, Jagdeep

    2016-01-01

    The book presents a snapshot of the state of the art in the field of fully fuzzy linear programming. The main focus is on showing current methods for finding the fuzzy optimal solution of fully fuzzy linear programming problems in which all the parameters and decision variables are represented by non-negative fuzzy numbers. It presents new methods developed by the authors, as well as existing methods developed by others, and their application to real-world problems, including fuzzy transportation problems. Moreover, it compares the outcomes of the different methods and discusses their advantages/disadvantages. As the first work to collect at one place the most important methods for solving fuzzy linear programming problems, the book represents a useful reference guide for students and researchers, providing them with the necessary theoretical and practical knowledge to deal with linear programming problems under uncertainty.

  18. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states are comprised of two categories, the time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of electronic response in the frequency-domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in time- and frequency-domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used, however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. 
Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  19. A method for evaluating dynamical friction in linear ball bearings.

    Science.gov (United States)

    Fujii, Yusaku; Maru, Koichi; Jin, Tao; Yupapin, Preecha P; Mitatha, Somsak

    2010-01-01

    A method is proposed for evaluating the dynamical friction of linear bearings, whose motion is not perfectly linear due to some play in its internal mechanism. In this method, the moving part of a linear bearing is made to move freely, and the force acting on the moving part is measured as the inertial force given by the product of its mass and the acceleration of its centre of gravity. To evaluate the acceleration of its centre of gravity, the acceleration of two different points on it is measured using a dual-axis optical interferometer.

  20. On some properties of the block linear multi-step methods | Chollom ...

    African Journals Online (AJOL)

    The convergence, stability and order of Block linear Multistep methods have been determined in the past based on individual members of the block. In this paper, methods are proposed to examine the properties of the entire block. Some Block Linear Multistep methods have been considered, their convergence, stability and ...

  1. Source‐receiver two‐way wave extrapolation for prestack exploding‐reflector modeling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali; Fomel, Sergey

    2010-01-01

    While most of the modern seismic imaging methods perform imaging by separating input data into parts (shot gathers), we develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield backward in time. While computationally extensive, this approach has the potential of generating accurate images, free of artifacts associated with conventional approaches. We derive novel high‐order partial differential equations in the source‐receiver‐time domain. The fourth‐order nature of the extrapolation in time yields four solutions, two of which correspond to the ingoing and outgoing P‐waves, and reduces to the zero‐offset exploding‐reflector solutions when the source coincides with the receiver. Using asymptotic approximations, we develop an approach to extrapolating the full prestack wavefield forward or backward in time.

  2. Source‐receiver two‐way wave extrapolation for prestack exploding‐reflector modeling and migration

    KAUST Repository

    Alkhalifah, Tariq Ali

    2010-10-17

    While most of the modern seismic imaging methods perform imaging by separating input data into parts (shot gathers), we develop a formulation that is able to incorporate all available data at once while numerically propagating the recorded multidimensional wavefield backward in time. While computationally extensive, this approach has the potential of generating accurate images, free of artifacts associated with conventional approaches. We derive novel high‐order partial differential equations in the source‐receiver‐time domain. The fourth‐order nature of the extrapolation in time yields four solutions, two of which correspond to the ingoing and outgoing P‐waves, and reduces to the zero‐offset exploding‐reflector solutions when the source coincides with the receiver. Using asymptotic approximations, we develop an approach to extrapolating the full prestack wavefield forward or backward in time.

  3. Lattice Boltzmann methods for global linear instability analysis

    Science.gov (United States)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2017-12-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearizations of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point out potential limitations particular to the LBM approach. The known issue of appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvements in order to make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  4. Properties of an extrapolation chamber for beta radiation dosimetry

    International Nuclear Information System (INIS)

    Caldas, L.V.E.

    The properties of a commercial extrapolation chamber were studied, and the possibility of its use in beta radiation dosimetry is demonstrated. The chamber calibration factors were determined for several sources (90Sr-90Y, 204Tl and 147Pm), revealing the dependence of the response on the energy of the incident radiation. Extrapolation curves allow an energy-independent response to be obtained for each source. One such curve, shown for the 90Sr-90Y source at 50 cm from the detector, is obtained by varying the chamber window thickness and extrapolating to zero distance (determined graphically). Further curves show: 1) the dependence of the calibration factor on the average energy of the beta radiation; 2) the variation of the ionization current with the distance between the chamber and the sources; 3) the effect of the collecting electrode area on the value of the calibration factors for the different sources. (I.C.R.)

  5. Extension of the linear nodal method to large concrete building calculations

    International Nuclear Information System (INIS)

    Childs, R.L.; Rhoades, W.A.

    1985-01-01

    The implementation of the linear nodal method in the TORT code is described, and the results of a mesh refinement study to test the effectiveness of the linear nodal and weighted diamond difference methods available in TORT are presented

  6. Sodium flow rate measurement method of annular linear induction pumps

    International Nuclear Information System (INIS)

    Araseki, Hideo; Kirillov, Igor R.; Preslitsky, Gennady V.

    2012-01-01

    Highlights: ► We found a new method of flow rate monitoring of electromagnetic pump. ► The method is very simple and does not require a large space. ► The method was verified with an experiment and a numerical analysis. ► The experimental data and the numerical results are in good agreement. - Abstract: The present paper proposes a method for measuring the sodium flow rate of annular linear induction pumps. The feature of the method lies in measuring the leaked magnetic field with measuring coils near the stator end on the outlet side and in correlating it with the sodium flow rate. This method is verified through an experiment and a numerical analysis. The data obtained in the experiment reveal that the correlation between the leaked magnetic field and the sodium flow rate is almost linear. The result of the numerical analysis agrees with the experimental data. The present method will be particularly effective for sodium flow rate monitoring of each of several annular linear induction pumps arranged in parallel in a vessel which forms a large-scale pump unit.

  7. Solution of the fully fuzzy linear systems using iterative techniques

    International Nuclear Information System (INIS)

    Dehghan, Mehdi; Hashemi, Behnam; Ghatee, Mehdi

    2007-01-01

    This paper mainly intends to discuss the iterative solution of fully fuzzy linear systems which we call FFLS. We employ Dubois and Prade's approximate arithmetic operators on LR fuzzy numbers for finding a positive fuzzy vector x-tilde which satisfies A-tilde x-tilde = b-tilde, where A-tilde and b-tilde are a fuzzy matrix and a fuzzy vector, respectively. Please note that the positivity assumption is not so restrictive in applied problems. We transform FFLS and propose iterative techniques such as Richardson, Jacobi, Jacobi overrelaxation (JOR), Gauss-Seidel, successive overrelaxation (SOR), accelerated overrelaxation (AOR), symmetric and unsymmetric SOR (SSOR and USSOR) and extrapolated modified Aitken (EMA) for solving FFLS. In addition, the methods of Newton, quasi-Newton and conjugate gradient are proposed from nonlinear programming for solving a fully fuzzy linear system. Various numerical examples are also given to show the efficiency of the proposed schemes
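    As a crisp (non-fuzzy) illustration of the iterative schemes listed above, the sketch below implements plain SOR on an ordinary diagonally dominant system; in the paper the corresponding iterations are applied to the LR components of the fuzzy system. The matrix is an arbitrary invented example.

```python
# Classical SOR iteration on a crisp linear system A x = b; the fuzzy case
# applies the same scheme component-wise.  Example matrix is diagonally
# dominant, so SOR with a moderate relaxation factor converges.
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [6.0, 12.0, 14.0]

def sor(A, b, omega=1.1, iters=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]              # Gauss-Seidel value
            x[i] = (1.0 - omega) * x[i] + omega * x_gs
    return x

x = sor(A, b)   # exact solution of this system is (1, 2, 3)
```

    Setting omega = 1 recovers Gauss-Seidel; omega below 1 gives under-relaxation.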

  8. Estimation and Extrapolation of Tree Parameters Using Spectral Correlation between UAV and Pléiades Data

    Directory of Open Access Journals (Sweden)

    Azadeh Abdollahnejad

    2018-02-01

    Full Text Available The latest technological advances in space-borne imagery have significantly enhanced the acquisition of high-quality data. With the availability of very high-resolution satellites, such as Pléiades, it is now possible to estimate tree parameters at the individual level with high fidelity. Despite the innovative advantages of high-precision satellites, data acquisition is not yet available to the public at a reasonable cost. Unmanned aerial vehicles (UAVs) have the practical advantage of data acquisition at a higher spatial resolution than that of satellites. This study is divided into two main parts: (1) we describe the estimation of basic tree attributes, such as tree height, crown diameter, diameter at breast height (DBH), and stem volume derived from UAV data based on structure-from-motion (SfM) algorithms; and (2) we consider the extrapolation of the UAV data to a larger area, using the correlation between satellite and UAV observations as an economically viable approach. Results have shown that UAVs can be used to predict tree characteristics with high accuracy (i.e., crown projection, stem volume, cross-sectional area (CSA), and height). We observed a significant relation between extracted data from the UAV and ground data, with R2 = 0.71 for stem volume, R2 = 0.87 for height, and R2 = 0.60 for CSA. In addition, our results showed a high linear relation between spectral data from the UAV and the satellite (R2 = 0.94). Overall, the accuracy of the results between the UAV and Pléiades was reasonable and showed that the methods used are feasible for extrapolation of data extracted from UAVs to larger areas.
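    The extrapolation in part (2) rests on an ordinary least-squares relation between UAV-derived and satellite-derived spectral values. A minimal sketch of such a fit and its R², with invented reflectance values:

```python
# Minimal least-squares regression with coefficient of determination R^2,
# of the kind used to relate UAV and satellite observations.  The band
# reflectance values below are invented for illustration.
uav = [0.12, 0.18, 0.25, 0.31, 0.40, 0.47]  # UAV band reflectance
sat = [0.10, 0.17, 0.24, 0.33, 0.41, 0.45]  # satellite band reflectance

def fit_r2(xs, ys):
    """Fit ys ~ a + b*xs; return (a, b, R^2)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

a, b, r2 = fit_r2(uav, sat)
```

    With a strong fit (high R²), the regression line can then be applied to satellite pixels outside the UAV coverage.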

  9. Chiral and continuum extrapolation of partially-quenched hadron masses

    International Nuclear Information System (INIS)

    Chris Allton; Wes Armour; Derek Leinweber; Anthony Thomas; Ross Young

    2005-01-01

    Using the finite-range regularization (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyze the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement (∼1%) with the experimental value of M_ρ from the former approach. These results are extended to the case of the nucleon mass

  10. Chiral and continuum extrapolation of partially-quenched hadron masses

    Energy Technology Data Exchange (ETDEWEB)

    Chris Allton; Wes Armour; Derek Leinweber; Anthony Thomas; Ross Young

    2005-09-29

    Using the finite-range regularization (FRR) of chiral effective field theory, the chiral extrapolation formula for the vector meson mass is derived for the case of partially-quenched QCD. We re-analyze the dynamical fermion QCD data for the vector meson mass from the CP-PACS collaboration. A global fit, including finite lattice spacing effects, of all 16 of their ensembles is performed. We study the FRR method together with a naive polynomial approach and find excellent agreement (∼1%) with the experimental value of M_ρ from the former approach. These results are extended to the case of the nucleon mass.

  11. Extrapolation of zircon fission-track annealing models

    International Nuclear Information System (INIS)

    Palissari, R.; Guedes, S.; Curvo, E.A.C.; Moreira, P.A.F.P.; Tello, C.A.; Hadler, J.C.

    2013-01-01

    One of the purposes of this study is to give further constraints on the temperature range of the zircon partial annealing zone over a geological time scale using data from borehole zircon samples, which have experienced stable temperatures for ∼1 Ma. In this way, the extrapolation problem is explicitly addressed by fitting the zircon annealing models with geological timescale data. Several empirical model formulations have been proposed to perform these calibrations and have been compared in this work. The basic form proposed for annealing models is the Arrhenius-type model. There are other annealing models, that are based on the same general formulation. These empirical model equations have been preferred due to the great number of phenomena from track formation to chemical etching that are not well understood. However, there are two other models, which try to establish a direct correlation between their parameters and the related phenomena. To compare the response of the different annealing models, thermal indexes, such as closure temperature, total annealing temperature and the partial annealing zone, have been calculated and compared with field evidence. After comparing the different models, it was concluded that the fanning curvilinear models yield the best agreement between predicted index temperatures and field evidence. - Highlights: ► Geological data were used along with lab data for improving model extrapolation. ► Index temperatures were simulated for testing model extrapolation. ► Curvilinear Arrhenius models produced better geological temperature predictions

  12. Endangered species toxicity extrapolation using ICE models

    Science.gov (United States)

    The National Research Council’s (NRC) report on assessing pesticide risks to threatened and endangered species (T&E) included the recommendation of using interspecies correlation models (ICE) as an alternative to general safety factors for extrapolating across species. ...

  13. Determination of the bulk melting temperature of nickel using Monte Carlo simulations: Inaccuracy of extrapolation from cluster melting temperatures

    Science.gov (United States)

    Los, J. H.; Pellenq, R. J. M.

    2010-02-01

    We have determined the bulk melting temperature Tm of nickel according to a recent interatomic interaction model via Monte Carlo simulation by two methods: extrapolation from cluster melting temperatures based on the Pavlov model (a variant of the Gibbs-Thomson model) and calculation of the liquid and solid Gibbs free energies via thermodynamic integration. The result of the latter, which is the more reliable method, gives Tm = 2010±35 K, to be compared to the experimental value of 1726 K. The cluster extrapolation method, however, gives a value 325 K higher, Tm = 2335 K. This remarkable result is shown to be due to a barrier for melting, which is associated with a nonwetting behavior.
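
    A Gibbs-Thomson-type extrapolation of the kind mentioned above can be sketched as follows. The cluster sizes, bulk value, and surface coefficient below are illustrative assumptions (not the paper's data): the model Tm(N) ≈ Tm_bulk·(1 − c·N^(−1/3)) makes Tm linear in N^(−1/3), so the intercept of a linear fit is the extrapolated bulk melting temperature.

```python
import numpy as np

# Hypothetical cluster melting temperatures (K) for clusters of N atoms,
# generated from an assumed Gibbs-Thomson-type law:
#   Tm(N) = Tm_bulk * (1 - c * N**(-1/3))
N = np.array([147, 309, 561, 923, 1415])
tm_bulk_true, c = 2010.0, 0.45            # assumed synthetic parameters
tm_cluster = tm_bulk_true * (1.0 - c * N ** (-1.0 / 3.0))

# Tm is linear in N**(-1/3); the intercept at N -> infinity is Tm_bulk.
x = N ** (-1.0 / 3.0)
slope, intercept = np.polyfit(x, tm_cluster, 1)
print(round(intercept, 1))                # extrapolated bulk melting temperature
```

    In practice the abstract's point is that such an extrapolation can be biased when cluster melting is hindered by a barrier, so the fitted intercept need not equal the true bulk value.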

  14. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables increases operation...

  15. Polarized atomic orbitals for linear scaling methods

    Science.gov (United States)

    Berghold, Gerd; Parrinello, Michele; Hutter, Jürg

    2002-02-01

    We present a modified version of the polarized atomic orbital (PAO) method [M. S. Lee and M. Head-Gordon, J. Chem. Phys. 107, 9085 (1997)] to construct minimal basis sets optimized in the molecular environment. The minimal basis set derives its flexibility from the fact that it is formed as a linear combination of a larger set of atomic orbitals. This approach significantly reduces the number of independent variables to be determined during a calculation, while retaining most of the essential chemistry resulting from the admixture of higher angular momentum functions. Furthermore, we combine the PAO method with linear scaling algorithms. We use the Chebyshev polynomial expansion method, the conjugate gradient density matrix search, and the canonical purification of the density matrix. The combined scheme overcomes one of the major drawbacks of standard approaches for large nonorthogonal basis sets, namely numerical instabilities resulting from ill-conditioned overlap matrices. We find that the condition number of the PAO overlap matrix is independent from the condition number of the underlying extended basis set, and consequently no numerical instabilities are encountered. Various applications are shown to confirm this conclusion and to compare the performance of the PAO method with extended basis-set calculations.

  16. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
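
    The Frank-Wolfe (conditional gradient) method named in the abstract can be illustrated on a toy convex problem. The sketch below is not the stochastic transportation problem itself: it minimizes an assumed quadratic objective over the probability simplex, whose linear-minimization subproblem is trivial (pick the vertex with the smallest gradient entry), which is exactly the structure Frank-Wolfe exploits.

```python
import numpy as np

# Illustrative target point inside the simplex (an assumption for the demo).
target = np.array([0.2, 0.5, 0.3])

def grad(x):
    return x - target                     # gradient of 0.5 * ||x - target||^2

x = np.array([1.0, 0.0, 0.0])             # start at a vertex of the simplex
for k in range(2000):
    g = grad(x)
    s = np.zeros_like(x)
    s[np.argmin(g)] = 1.0                 # LMO: best vertex of the simplex
    gamma = 2.0 / (k + 2.0)               # standard diminishing step size
    x = x + gamma * (s - x)               # move toward the LMO vertex

print(np.round(x, 2))                     # close to the target point
```

    For the actual transportation problem the linear-minimization step would be a linear transportation subproblem rather than a vertex lookup, but the iteration structure is the same.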

  17. Uniform irradiation using rotational-linear scanning method for narrow synchrotron radiation beam

    International Nuclear Information System (INIS)

    Nariyama, N.; Ohnishi, S.; Odano, N.

    2004-01-01

    At SPring-8, photon intensity monitors for synchrotron radiation have been developed. Using these monitors, the responses of radiation detectors and dosimeters to monoenergetic photons can be measured. In most cases, uniform irradiation to the sample is necessary. Here, two scanning methods are proposed. One is an XZ-linear scanning method, which moves the sample simultaneously in both the X and Z direction, that is, in zigzag fashion. The other is a rotational-linear scanning method, which rotates the sample moving in the X direction. To investigate the validity of the two methods, thermoluminescent dosimeters were irradiated with a broad synchrotron-radiation beam, and the readings from the two methods were compared with that of the dosimeters fixed in the beam. The results for both scanning methods virtually agreed with that of the fixed method. The advantages of the rotational-linear scanning method are that low- and medium-dose irradiation is possible, uniformity is excellent and the load to the scanning equipment is light: hence, this method is superior to the XZ-linear scanning method for most applications. (author)

  18. Straightening the Hierarchical Staircase for Basis Set Extrapolations: A Low-Cost Approach to High-Accuracy Computational Chemistry

    Science.gov (United States)

    Varandas, António J. C.

    2018-04-01

    Because the one-electron basis set limit is difficult to reach in correlated post-Hartree-Fock ab initio calculations, the low-cost route of using methods that extrapolate to the estimated basis set limit attracts immediate interest. The situation is somewhat more satisfactory at the Hartree-Fock level because numerical calculation of the energy is often affordable at nearly converged basis set levels. Still, extrapolation schemes for the Hartree-Fock energy are addressed here, although the focus is on the more slowly convergent and computationally demanding correlation energy. Because they are frequently based on the gold-standard coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)], correlated calculations are often affordable only with the smallest basis sets, and hence single-level extrapolations from one raw energy could attain maximum usefulness. This possibility is examined. Whenever possible, this review uses raw data from second-order Møller-Plesset perturbation theory, as well as CCSD, CCSD(T), and multireference configuration interaction methods. Inescapably, the emphasis is on work done by the author's research group. Certain issues in need of further research or review are pinpointed.
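
    A common two-point basis-set extrapolation of the kind discussed above assumes the correlation energy converges as E(X) = E_CBS + A·X^(−3) in the cardinal number X. The energies below are made-up illustrative numbers in hartree, not values from the review.

```python
# Two-point complete-basis-set (CBS) extrapolation sketch, assuming the
# common inverse-cube form E(X) = E_CBS + A * X**-3.
def cbs_two_point(e_x, e_y, x, y):
    # Solve E(x) = E_cbs + A/x^3 and E(y) = E_cbs + A/y^3 for E_cbs.
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

e_tz, e_qz = -0.35012, -0.36215           # hypothetical X=3 and X=4 energies
e_cbs = cbs_two_point(e_tz, e_qz, 3, 4)
print(round(e_cbs, 5))                    # → -0.37093
```

    The extrapolated value lies below both raw energies, as expected for a monotonically convergent correlation energy; single-point schemes of the kind the review examines replace the second energy with an empirical assumption about A.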

  19. Direct Linear Transformation Method for Three-Dimensional Cinematography

    Science.gov (United States)

    Shapiro, Robert

    1978-01-01

    The ability of Direct Linear Transformation Method for three-dimensional cinematography to locate points in space was shown to meet the accuracy requirements associated with research on human movement. (JD)

  20. Resolution enhancement in digital holography by self-extrapolation of holograms.

    Science.gov (United States)

    Latychevskaia, Tatiana; Fink, Hans-Werner

    2013-03-25

    It is generally believed that the resolution in digital holography is limited by the size of the captured holographic record. Here, we present a method to circumvent this limit by self-extrapolating experimental holograms beyond the area that is actually captured. This is done by first padding the surroundings of the hologram and then conducting an iterative reconstruction procedure. The wavefront beyond the experimentally detected area is thus retrieved and the hologram reconstruction shows enhanced resolution. To demonstrate the power of this concept, we apply it to simulated as well as experimental holograms.

  1. On the existence of the optimal order for wavefunction extrapolation in Born-Oppenheimer molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Jun; Wang, Han, E-mail: wang-han@iapcm.ac.cn [Institute of Applied Physics and Computational Mathematics, Beijing (China); CAEP Software Center for High Performance Numerical Simulation, Beijing (China); Gao, Xingyu; Song, Haifeng [Institute of Applied Physics and Computational Mathematics, Beijing (China); CAEP Software Center for High Performance Numerical Simulation, Beijing (China); Laboratory of Computational Physics, Beijing (China)

    2016-06-28

    Wavefunction extrapolation greatly reduces the number of self-consistent field (SCF) iterations and thus the overall computational cost of Born-Oppenheimer molecular dynamics (BOMD) based on Kohn–Sham density functional theory. Going against the intuition that a higher extrapolation order possesses better accuracy, we demonstrate, from both theoretical and numerical perspectives, that the extrapolation accuracy first increases and then decreases with respect to the order, and that an optimal extrapolation order in terms of the minimal number of SCF iterations always exists. We also prove that the optimal order tends to be larger when using larger MD time steps or stricter SCF convergence criteria. Through example BOMD simulations of a solid copper system, we show that the optimal extrapolation order covers a broad range when varying the MD time step or the SCF convergence criterion. Therefore, we suggest the necessity for BOMD simulation packages to open the user interface and to provide more choices of extrapolation order. Another factor that may influence the extrapolation accuracy is the alignment scheme that eliminates the discontinuity in the wavefunctions with respect to the atomic or cell variables. We prove the equivalence between the two existing schemes; thus the implementation of either of them does not lead to an essential difference in the extrapolation accuracy.
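
    The optimal-order effect can be reproduced with a simple numerical analogy (not the paper's code): extrapolate a smooth trajectory from its previous p+1 samples with a degree-p polynomial, where each sample carries a small error standing in for the finite SCF convergence. Truncation error falls with the order while noise amplification grows with it, so an intermediate order minimizes the total error. Step size and noise level below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n_steps, noise = 0.5, 120, 1e-4       # assumed time step and SCF-like noise
t = dt * np.arange(n_steps)
values = np.sin(t) + noise * rng.standard_normal(n_steps)

errors = {}
for p in range(0, 9):                     # candidate extrapolation orders
    errs = []
    for n in range(p + 1, n_steps):
        ts = t[n - p - 1:n] - t[n]        # shifted times for conditioning
        coeffs = np.polyfit(ts, values[n - p - 1:n], p)
        errs.append(abs(np.polyval(coeffs, 0.0) - np.sin(t[n])))
    errors[p] = float(np.mean(errs))

best = min(errors, key=errors.get)
print(best, errors[best])                 # an interior order wins
```

    The same trade-off is what makes the optimal order grow with stricter convergence criteria: reducing the noise pushes the crossover between the two error sources to higher orders.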

  2. UFOs: Observations, Studies and Extrapolations

    CERN Document Server

    Baer, T; Barnes, M J; Bartmann, W; Bracco, C; Carlier, E; Cerutti, F; Dehning, B; Ducimetière, L; Ferrari, A; Ferro-Luzzi, M; Garrel, N; Gerardin, A; Goddard, B; Holzer, E B; Jackson, S; Jimenez, J M; Kain, V; Zimmermann, F; Lechner, A; Mertens, V; Misiowiec, M; Nebot Del Busto, E; Morón Ballester, R; Norderhaug Drosdal, L; Nordt, A; Papotti, G; Redaelli, S; Uythoven, J; Velghe, B; Vlachoudis, V; Wenninger, J; Zamantzas, C; Zerlauth, M; Fuster Martinez, N

    2012-01-01

    UFOs (“Unidentified Falling Objects”) could be one of the major performance limitations for nominal LHC operation. Therefore, in 2011, the diagnostics for UFO events were significantly improved, dedicated experiments and measurements in the LHC and in the laboratory were made and complemented by FLUKA simulations and theoretical studies. The state of knowledge is summarized and extrapolations for LHC operation in 2012 and beyond are presented. Mitigation strategies are proposed and related tests and measures for 2012 are specified.

  3. A Cost-effective Method for Resolution Increase of the Twostage Piecewise Linear ADC Used for Sensor Linearization

    Directory of Open Access Journals (Sweden)

    Jovanović Jelena

    2016-02-01

    A cost-effective method for resolution increase of a two-stage piecewise linear analog-to-digital converter used for sensor linearization is proposed in this paper. In both conversion stages flash analog-to-digital converters are employed. Resolution increase by one bit per conversion stage is achieved by introducing one additional comparator in front of each of the two flash analog-to-digital converters, while the converters’ resolutions remain the same. As a result, the number of employed comparators, as well as the circuit complexity and the power consumption originating from the employed comparators, are almost 50 % lower in comparison with the same parameters of a linearization circuit of conventional design and of the same resolution. Since the number of employed comparators is significantly reduced by the proposed method, special modifications of the linearization circuit are needed in order to properly adjust the reference voltages of the employed comparators.

  4. Extrapolation bias and the predictability of stock returns by price-scaled variables

    NARCIS (Netherlands)

    Cassella, Stefano; Gulen, H.

    Using survey data on expectations of future stock returns, we recursively estimate the degree of extrapolative weighting in investors' beliefs (DOX). In an extrapolation framework, DOX determines the relative weight investors place on recent-versus-distant past returns. DOX varies considerably over

  5. Chosen interval methods for solving linear interval systems with special type of matrix

    Science.gov (United States)

    Szyszka, Barbara

    2013-10-01

    The paper is devoted to chosen direct interval methods for solving linear interval systems with a special type of matrix. This kind of matrix, a band matrix with a parameter, is obtained from a finite difference problem. Such linear systems occur while solving the one-dimensional wave equation (a partial differential equation of hyperbolic type) by using the central difference interval method of the second order. Interval methods are constructed so that the errors of the method are enclosed in the obtained results; therefore the presented linear interval systems contain elements that determine the errors of the difference method. The chosen direct algorithms have been applied for solving the linear systems because they have no errors of method. All calculations were performed in floating-point interval arithmetic.

  6. Libraries for spectrum identification: Method of normalized coordinates versus linear correlation

    International Nuclear Information System (INIS)

    Ferrero, A.; Lucena, P.; Herrera, R.G.; Dona, A.; Fernandez-Reyes, R.; Laserna, J.J.

    2008-01-01

    In this work an easy solution based directly on linear algebra is proposed in order to obtain the relation between a spectrum and a spectrum base. This solution is based on the algebraic determination of an unknown spectrum's coordinates with respect to a spectral library base. The identification capacity comparison between this algebraic method and the linear correlation method has been shown using experimental spectra of polymers. Unlike linear correlation (where the presence of impurities may decrease the discrimination capacity), this method allows the quantitative detection of a mixture of several substances in a sample and, consequently, takes impurities into account, improving the identification.
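
    The contrast between the two approaches can be sketched numerically. The library spectra below are made-up Gaussian bands (an assumption for the demo): expressing an unknown spectrum as least-squares coordinates with respect to the library quantifies a mixture, while plain linear correlation only ranks similarity.

```python
import numpy as np

grid = np.linspace(0, 10, 200)
def band(center, width=0.5):
    # Synthetic Gaussian band standing in for a library spectrum.
    return np.exp(-((grid - center) / width) ** 2)

library = np.column_stack([band(2.0), band(5.0), band(8.0)])  # 3 substances
mixture = 0.7 * library[:, 0] + 0.3 * library[:, 2]           # 70/30 mixture

# Coordinates of the unknown spectrum with respect to the library base.
coords, *_ = np.linalg.lstsq(library, mixture, rcond=None)
print(np.round(coords, 3))                # recovers the mixing fractions

# Linear correlation only gives a similarity ranking, not fractions.
corr = [np.corrcoef(mixture, library[:, j])[0, 1] for j in range(3)]
```

    The coordinate vector directly exposes the presence of the second component of the mixture, which a single best-correlation match would miss.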

  7. Weibull and lognormal Taguchi analysis using multiple linear regression

    International Nuclear Information System (INIS)

    Piña-Monarrez, Manuel R.; Ortiz-Yañez, Jesús F.

    2015-01-01

    The paper provides reliability practitioners with a method (1) to estimate the robust Weibull family when the Taguchi method (TM) is applied, (2) to estimate the normal operational Weibull family in an accelerated life testing (ALT) analysis to give confidence to the extrapolation and (3) to perform the ANOVA analysis on both the robust and the normal operational Weibull family. On the other hand, because the Weibull distribution neither has the normal additive property nor has a direct relationship with the normal parameters (µ, σ), in this paper the issues of estimating a Weibull family by using a design of experiment (DOE) are first addressed by using an L9(3^4) orthogonal array (OA) in both the TM and the Weibull proportional hazard model approach (WPHM). Then, by using the Weibull/Gumbel and the lognormal/normal relationships and multiple linear regression, the direct relationships between the Weibull and the lifetime parameters are derived and used to formulate the proposed method. Moreover, since the derived direct relationships always hold, the method is generalized to the lognormal and ALT analysis. Finally, the method's efficiency is shown through its application to the used OA and to a set of ALT data. - Highlights: • It gives the statistical relations and steps to use the Taguchi method (TM) to analyze Weibull data. • It gives the steps to determine the unknown Weibull family at both the robust TM setting and the normal ALT level. • It gives a method to determine the expected lifetimes and to perform their ANOVA analysis in TM and ALT analysis. • It gives a method to give confidence to the extrapolation in an ALT analysis by using the Weibull family of the normal level.
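
    The basic idea of estimating Weibull parameters by linear regression can be sketched with median-rank regression (one common way to linearize Weibull data; the paper's multiple-regression formulation differs in detail). The failure times below are synthetic, drawn from an assumed Weibull(β=2, η=100) model.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true, eta_true = 2.0, 100.0
t = np.sort(eta_true * rng.weibull(beta_true, size=200))   # synthetic lifetimes

# Median-rank plotting positions linearize the Weibull CDF:
#   ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
i = np.arange(1, t.size + 1)
F = (i - 0.3) / (t.size + 0.4)
x, y = np.log(t), np.log(-np.log(1.0 - F))

beta_hat, c = np.polyfit(x, y, 1)         # slope is the shape parameter
eta_hat = np.exp(-c / beta_hat)           # intercept gives the scale parameter
print(round(beta_hat, 2), round(eta_hat, 1))
```

    The same linearization is what lets a DOE response be regressed directly in the Gumbel (log-Weibull) domain, which is the link the abstract exploits.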

  8. NON-LINEAR MODELING OF THE RHIC INTERACTION REGIONS

    International Nuclear Information System (INIS)

    TOMAS, R.; FISCHER, W.; JAIN, A.; LUO, Y.; PILAT, F.

    2004-01-01

    For RHIC's collision lattices the dominant sources of transverse non-linearities are located in the interaction regions. The field quality is available for most of the magnets in the interaction regions from the magnetic measurements, or from extrapolations of these measurements. We discuss the implementation of these measurements in the MADX models of the Blue and the Yellow rings and their impact on beam stability

  9. Development of pre-critical excore detector linear subchannel calibration method

    International Nuclear Information System (INIS)

    Choi, Yoo Sun; Goo, Bon Seung; Cha, Kyun Ho; Lee, Chang Seop; Kim, Yong Hee; Ahn, Chul Soo; Kim, Man Soo

    2001-01-01

    The improved pre-critical excore detector linear subchannel calibration method has been developed to improve the applicability of the pre-critical calibration method. The existing calibration method does not always guarantee the accuracy of pre-critical calibration because the calibration results of the previous cycle are not reflected in the current cycle calibration. The developed method has the desirable feature that calibration error is not propagated into the following cycles, since the calibration data determined in the previous cycle are incorporated in the current cycle calibration. The pre-critical excore detector linear calibration is tested for YGN unit 3 and UCN unit 3 to evaluate its characteristics and accuracy.

  10. Hybrid Method for Solving Inventory Problems with a Linear ...

    African Journals Online (AJOL)

    Osagiede and Omosigho (2004) proposed a direct search method for identifying the number of replenishment when the demand pattern is linearly increasing. The main computational task in this direct search method was associated with finding the optimal number of replenishments. To accelerate the use of this method, the ...

  11. Numerical Methods for Solution of the Extended Linear Quadratic Control Problem

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Frison, Gianluca; Gade-Nielsen, Nicolai Fog

    2012-01-01

    In this paper we present the extended linear quadratic control problem, its efficient solution, and a discussion of how it arises in the numerical solution of nonlinear model predictive control problems. The extended linear quadratic control problem is the optimal control problem corresponding...... to the Karush-Kuhn-Tucker system that constitute the majority of computational work in constrained nonlinear and linear model predictive control problems solved by efficient MPC-tailored interior-point and active-set algorithms. We state various methods of solving the extended linear quadratic control problem...... and discuss instances in which it arises. The methods discussed in the paper have been implemented in efficient C code for both CPUs and GPUs for a number of test examples....

  12. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    KAUST Repository

    Zhang, Zhendong

    2017-12-17

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyze the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artifacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration (RTM) applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modeling engine performs better than an isotropic migration.

  13. Standardization of low energy beta and beta-gamma complex emitters by the tracer and the efficiency extrapolation methods

    International Nuclear Information System (INIS)

    Sahagia, M.

    1978-01-01

    The absolute standardization of radioactive solutions of low energy beta emitters and beta-gamma emitters with a high probability of disintegration to the ground state is described; the tracer and the efficiency extrapolation methods were used. Both types of radionuclides were mathematically and physically treated in a unified manner. The theoretical relations between different beta spectra were calculated according to Williams' model and experimentally verified for: 35S + 60Co, 35S + 95Nb, 147Pm + 60Co, 14C + 95Nb and the two beta branches of 99Mo. The optimum range of beta efficiency variation was indicated. The basic supposition that all beta efficiencies tend to unity at the same time was experimentally verified, using two 192Ir beta branches. Four computer programs, written in the FORTRAN IV language, were elaborated for the adequate processing of the experimental data. Good precision coefficients according to international standards were obtained in the absolute standardization of 35S, 147Pm and 99Mo solutions. (author)

  14. Combining extrapolation with ghost interaction correction in range-separated ensemble density functional theory for excited states

    Science.gov (United States)

    Alam, Md. Mehboob; Deur, Killian; Knecht, Stefan; Fromager, Emmanuel

    2017-11-01

    The extrapolation technique of Savin [J. Chem. Phys. 140, 18A509 (2014)], which was initially applied to range-separated ground-state-density-functional Hamiltonians, is adapted in this work to ghost-interaction-corrected (GIC) range-separated ensemble density-functional theory (eDFT) for excited states. While standard extrapolations rely on energies that decay as μ⁻² in the large range-separation-parameter μ limit, we show analytically that (approximate) range-separated GIC ensemble energies converge more rapidly (as μ⁻³) towards their pure wavefunction theory values (μ → +∞ limit), thus requiring a different extrapolation correction. The purpose of such a correction is to further improve on the convergence and, consequently, to obtain more accurate excitation energies for a finite (and, in practice, relatively small) μ value. As a proof of concept, we apply the extrapolation method to He and small molecular systems (viz., H2, HeH+, and LiH), thus considering different types of excitations such as Rydberg, charge transfer, and double excitations. Potential energy profiles of the first three and four singlet Σ+ excitation energies in HeH+ and H2, respectively, are studied with a particular focus on avoided crossings for the latter. Finally, the extraction of individual state energies from the ensemble energy is discussed in the context of range-separated eDFT, as a perspective.
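
    The arithmetic of such an extrapolation is simple to sketch: if an energy approaches its large-μ limit as E(μ) ≈ E_∞ + c·μ^(−p), two finite-μ values determine E_∞. The numbers below are made up for illustration; they are not energies from the paper.

```python
# Two-point extrapolation assuming the leading decay E(mu) = E_inf + c*mu**-p.
def extrapolate(e1, e2, mu1, mu2, p=3):
    return (mu2**p * e2 - mu1**p * e1) / (mu2**p - mu1**p)

e_inf_true, c = -2.9037, 0.05             # assumed "exact" model parameters
model = lambda mu: e_inf_true + c * mu ** -3

e_inf = extrapolate(model(0.5), model(1.0), 0.5, 1.0)
print(round(e_inf, 4))                    # → -2.9037
```

    Using p = 2 on data that actually decay as μ^(−3) would leave a residual error, which is why identifying the correct decay power matters for the correction.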

  15. L2-Error Estimates of the Extrapolated Crank-Nicolson Discontinuous Galerkin Approximations for Nonlinear Sobolev Equations

    Directory of Open Access Journals (Sweden)

    Hyun Young Lee

    2010-01-01

    We analyze discontinuous Galerkin methods with penalty terms, namely symmetric interior penalty Galerkin methods, to solve nonlinear Sobolev equations. We construct finite element spaces on which we develop fully discrete approximations using the extrapolated Crank-Nicolson method. We adopt an appropriate elliptic-type projection, which leads to optimal ℓ∞(L2) error estimates of discontinuous Galerkin approximations in both the spatial direction and the temporal direction.

  16. Sodium flow rate measurement method of annular linear induction pump

    International Nuclear Information System (INIS)

    Araseki, Hideo

    2011-01-01

    This report describes a method for measuring the sodium flow rate of annular linear induction pumps arranged in parallel, and its verification result obtained through an experiment and a numerical analysis. In the method, the leaked magnetic field is measured with measuring coils at the stator end on the outlet side and is correlated with the sodium flow rate. The experimental data and the numerical result indicate that the leaked magnetic field at the stator edge remains almost constant when the sodium flow rate changes, and that the leaked magnetic field change arising from the flow rate change is small compared with the overall leaked magnetic field. It is shown that the correlation between the leaked magnetic field and the sodium flow rate is almost linear due to this feature of the leaked magnetic field, which indicates the applicability of the method to small-scale annular linear induction pumps. (author)

  17. Study of energy dependence of a extrapolation chamber in low energy X-rays beams

    International Nuclear Information System (INIS)

    Bastos, Fernanda M.; Silva, Teogenes A. da

    2014-01-01

    The main objective of this work was to study the energy dependence of an extrapolation chamber in low energy X-ray beams, in order to determine the uncertainty associated with the variation of the incident radiation energy in measurements in which the chamber is used. To study the energy dependence, comparative ionization current measurements were carried out between the extrapolation chamber and two ionization chambers: a mammography chamber (Radcal model RC6M) with an energy dependence of less than 5%, and a radiation protection chamber (NE Technology model 2575); both chambers have very thin windows, allowing their application in low energy beams. Measurements were made at four different depths of the extrapolation chamber, from 1.0 to 4.0 mm in 1.0 mm intervals, for each reference radiation. The study showed that the energy dependence varies with the volume of the extrapolation chamber. A further analysis concluded that the energy dependence of the extrapolation chamber becomes smaller when the slope of the ionization current versus depth is used for the different reference radiations; this shows that the extrapolation technique, used for the absorbed dose calculation, reduces the uncertainty associated with the variation of the response with radiation energy.

  18. Making the most of what we have: application of extrapolation approaches in radioecological wildlife transfer models

    International Nuclear Information System (INIS)

    Beresford, Nicholas A.; Wood, Michael D.; Vives i Batlle, Jordi; Yankovich, Tamara L.; Bradshaw, Clare; Willey, Neil

    2016-01-01

    We will never have data to populate all of the potential radioecological modelling parameters required for wildlife assessments. Therefore, we need robust extrapolation approaches which allow us to make best use of our available knowledge. This paper reviews and, in some cases, develops, tests and validates some of the suggested extrapolation approaches. The concentration ratio (CR_product-diet or CR_wo-diet) is shown to be a generic (trans-species) parameter which should enable the more abundant data for farm animals to be applied to wild species. An allometric model for predicting the biological half-life of radionuclides in vertebrates is further tested and generally shown to perform acceptably. However, to fully exploit allometry we need to understand why some elements do not scale to expected values. For aquatic ecosystems, the relationship between log10(a) (a parameter from the allometric relationship for the organism-water concentration ratio) and log(Kd) presents a potential opportunity to estimate concentration ratios using Kd values. An alternative approach to the CR_wo-media model proposed for estimating the transfer of radionuclides to freshwater fish is used to satisfactorily predict activity concentrations in fish of different species from three lakes. We recommend that this approach (REML modelling) be further investigated and developed for other radionuclides and across a wider range of organisms and ecosystems. Ecological stoichiometry shows potential as an extrapolation method in radioecology, either from one element to another or from one species to another. Although some of the approaches considered require further development and testing, we demonstrate the potential to significantly improve predictions of radionuclide transfer to wildlife by making better use of available data. - Highlights: • Robust extrapolation approaches allowing best use of available knowledge are needed. • Extrapolation approaches are
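
    The allometric half-life model mentioned above has the power-law form Tb = a·M^b, which becomes linear on log-log axes. The masses and half-lives below are synthetic illustrative values (not data from the paper), generated from assumed constants and recovered by regression.

```python
import numpy as np

# Hypothetical species masses (kg) and an assumed allometric law
#   Tb = a * M**b   =>   log10(Tb) = b*log10(M) + log10(a)
mass = np.array([0.02, 0.5, 5.0, 70.0, 500.0])
a, b = 30.0, 0.25                          # assumed scaling constants
half_life = a * mass ** b                  # biological half-life, days

slope, intercept = np.polyfit(np.log10(mass), np.log10(half_life), 1)
print(round(slope, 3), round(10 ** intercept, 1))   # recovers b and a
```

    With real data the interesting question, as the abstract notes, is why some elements depart from the fitted exponent rather than the fit itself.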

  19. Loop integration results using numerical extrapolation for a non-scalar integral

    International Nuclear Information System (INIS)

    Doncker, E. de; Shimizu, Y.; Fujimoto, J.; Yuasa, F.; Kaugars, K.; Cucos, L.; Van Voorst, J.

    2004-01-01

    Loop integration results have been obtained using numerical integration and extrapolation. An extrapolation to the limit is performed with respect to a parameter in the integrand which tends to zero. Results are given for a non-scalar four-point diagram. Extensions to accommodate loop integration by existing integration packages are also discussed. These include: using previously generated partitions of the domain and roundoff error guards
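
    The extrapolation-to-the-limit step can be sketched with classical Richardson extrapolation (in the spirit of the abstract, though the authors' integrand and sequence accelerator are not reproduced here): evaluate a quantity at a regulator parameter ε, ε/2, ε/4, ..., and eliminate successive powers of ε. The model function below is an assumption for the demo.

```python
# Richardson extrapolation of I(eps) = I0 + a*eps + b*eps**2 + ... to eps -> 0.
def richardson(f, eps0=0.5, levels=6):
    vals = [f(eps0 / 2 ** k) for k in range(levels)]
    for j in range(1, levels):            # eliminate the eps**j term
        vals = [(2 ** j * vals[k + 1] - vals[k]) / (2 ** j - 1)
                for k in range(len(vals) - 1)]
    return vals[0]

# Hypothetical integral value as a function of the regulator parameter.
f = lambda e: 1.2345 + 0.7 * e - 0.3 * e ** 2 + 0.1 * e ** 3
print(round(richardson(f), 6))            # → 1.2345
```

    Because the model expansion terminates at ε³, the table recovers the limit essentially exactly; for a genuine loop integral each f(ε) would itself be a numerical integration result.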

  20. Galerkin projection methods for solving multiple related linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Chan, T.F.; Ng, M.; Wan, W.L.

    1996-12-31

    We consider using Galerkin projection methods for solving multiple related linear systems A^(i) x^(i) = b^(i) for 1 ≤ i ≤ s, where A^(i) and b^(i) are different in general. We start with the special case where A^(i) = A and A is symmetric positive definite. The method generates a Krylov subspace from a set of direction vectors obtained by solving one of the systems, called the seed system, by the CG method, and then projects the residuals of the other systems orthogonally onto the generated Krylov subspace to get the approximate solutions. The whole process is repeated with another unsolved system as a seed until all the systems are solved. We observe in practice a super-convergence behaviour of the CG process of the seed system when compared with the usual CG process. We also observe that only a small number of restarts is required to solve all the systems if the right-hand sides are close to each other. These two features together make the method particularly effective. In this talk, we give theoretical proof to justify these observations. Furthermore, we combine the advantages of this method and the block CG method and propose a block extension of this single seed method. The above procedure can actually be modified for solving multiple linear systems A^(i) x^(i) = b^(i), where the A^(i) are now different. We can also extend the previous analytical results to this more general case. Applications of this method to multiple related linear systems arising from image restoration and recursive least squares computations are considered as examples.
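
    A minimal sketch of the seed idea (illustrative, not the authors' code): run CG on a seed system A x = b1, keep the Krylov basis spanned by its residuals, then approximate the solution of a related system A x = b2 by Galerkin projection onto that subspace, x2 ≈ V (VᵀAV)⁻¹ Vᵀ b2. The test matrix and right-hand sides are random assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)               # symmetric positive definite
b1 = rng.standard_normal(n)
b2 = b1 + 0.01 * rng.standard_normal(n)   # a right-hand side close to b1

# Plain CG on the seed system, storing normalized residuals (a Krylov basis).
x, r = np.zeros(n), b1.copy()
p, basis = r.copy(), []
for _ in range(15):
    basis.append(r / np.linalg.norm(r))
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p, r = r_new + beta * p, r_new

# Galerkin projection of the second system onto the seed Krylov subspace.
V = np.array(basis).T                     # n x 15 orthonormal basis
x2 = V @ np.linalg.solve(V.T @ A @ V, V.T @ b2)
rel_res = np.linalg.norm(b2 - A @ x2) / np.linalg.norm(b2)
print(rel_res)                            # small without any new CG iterations
```

    In the full method this projected solution would seed a short CG restart for the second system; the closer b2 is to b1, the less remaining work that restart has.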

  1. Transport equation solving methods

    International Nuclear Information System (INIS)

    Granjean, P.M.

    1984-06-01

    This work is mainly devoted to the Csub(N) and Fsub(N) methods. Csub(N) method: starting from a lemma stated by Placzek, an equivalence is established between two problems: the first is defined in a finite medium bounded by a surface S, the second in the whole space. In the first problem the angular flux on the surface S is shown to be the solution of an integral equation. This equation is solved by Galerkin's method. The Csub(N) method is applied here to one-velocity problems: in plane geometry, slab albedo and transmission with Rayleigh scattering, and calculation of the extrapolation length; in cylindrical geometry, albedo and extrapolation length calculation with linear scattering. Fsub(N) method: the basic integral transport equation of the Csub(N) method is integrated over Case's elementary distributions; another integral transport equation is obtained, which is solved by a collocation method. The plane problems solved by the Csub(N) method are also solved by the Fsub(N) method. The Fsub(N) method is extended to any polynomial scattering law. Some simple spherical problems are also studied. Chandrasekhar's method, the collision probability method and Case's method are presented for comparison with the Csub(N) and Fsub(N) methods. This comparison shows the respective advantages of the two methods: a) fast convergence and possible extension to various geometries for the Csub(N) method; b) easy calculations and easy extension to polynomial scattering for the Fsub(N) method. [fr]

  2. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    Science.gov (United States)

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  3. Linear finite element method for one-dimensional diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Brandao, Michele A.; Dominguez, Dany S.; Iglesias, Susana M., E-mail: micheleabrandao@gmail.com, E-mail: dany@labbi.uesc.br, E-mail: smiglesias@uesc.br [Universidade Estadual de Santa Cruz (LCC/DCET/UESC), Ilheus, BA (Brazil). Departamento de Ciencias Exatas e Tecnologicas. Laboratorio de Computacao Cientifica

    2011-07-01

    We describe in this paper the fundamentals of the Linear Finite Element Method (LFEM) applied to one-speed diffusion problems in slab geometry. We present the mathematical formulation to solve eigenvalue and fixed source problems. First, we discretize the computational domain using a finite set of elements. At this point, we obtain the spatial balance equations for the zero-order and first-order spatial moments inside each element. Then, we introduce linear auxiliary equations to approximate the neutron flux and current inside each element and construct a numerical scheme to obtain the solution. We offer numerical results for typical fixed source model problems to illustrate the method's accuracy for coarse-mesh calculations in homogeneous and heterogeneous domains. We also compare the accuracy and computational performance of the LFEM formulation with the conventional Finite Difference Method (FDM). (author)
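
    A minimal sketch of a 1D linear finite element solver for a fixed source diffusion-type problem, -D u'' + Σa u = S with zero Dirichlet boundary values, follows; the parameters, mesh size and manufactured source are illustrative assumptions, and this is not the authors' LFEM formulation:

    ```python
    import numpy as np

    def fem1d_diffusion(D, siga, source, L=1.0, n_el=50):
        """Linear FEM for -D u'' + siga u = source on [0, L], u(0) = u(L) = 0."""
        n = n_el + 1
        x = np.linspace(0.0, L, n)
        h = L / n_el
        A = np.zeros((n, n))
        f = np.zeros(n)
        # element stiffness and consistent mass matrices for linear elements
        ke = D / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
        me = siga * h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        for e in range(n_el):
            idx = [e, e + 1]
            A[np.ix_(idx, idx)] += ke + me
            # trapezoidal quadrature for the load: lump source at the two nodes
            f[idx] += source(x[idx]) * h / 2.0
        # impose the Dirichlet boundary conditions u(0) = u(L) = 0
        A[0, :] = A[-1, :] = 0.0
        A[0, 0] = A[-1, -1] = 1.0
        f[0] = f[-1] = 0.0
        return x, np.linalg.solve(A, f)

    # manufactured problem with exact solution u = sin(pi x); parameters are toy values
    D, siga = 1.0, 0.5
    src = lambda x: (D * np.pi**2 + siga) * np.sin(np.pi * x)
    x, u = fem1d_diffusion(D, siga, src)
    err = np.max(np.abs(u - np.sin(np.pi * x)))
    ```

    The nodal error shrinks as O(h^2), the expected behaviour for linear elements.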

  4. Direct optical band gap measurement in polycrystalline semiconductors: A critical look at the Tauc method

    International Nuclear Information System (INIS)

    Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R.

    2016-01-01

    The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception, and traditional linear extrapolation methods are inappropriate for use on degenerate semiconductors, where the occupation of conduction band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectra of Sn-doped In₂O₃ (ITO), converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of optical band gap and Burstein–Moss shift that are consistent with previous studies on In₂O₃ single crystals and thin films. - Highlights: • The Tauc method of band gap measurement is re-evaluated for crystalline materials. • Graphical method proposed for extracting optical band gaps from absorption spectra. • The proposed method incorporates an energy broadening term for energy transitions. • Values for ITO were self-consistent between two different measurement methods.
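
    The traditional linear extrapolation that this record critiques can be sketched as follows: for a direct allowed transition, (αE)² is approximately linear in photon energy E above the gap, and the x-intercept of a straight-line fit estimates the gap. The data, gap value and fit window below are synthetic, purely for illustration:

    ```python
    import numpy as np

    def tauc_direct_gap(energy, alpha, fit_window):
        """Traditional linear extrapolation on a direct-gap Tauc-style plot.

        Fits a line to (alpha * E)**2 over `fit_window` (emin, emax) and
        returns its x-axis crossing as the optical gap estimate.
        """
        y = (alpha * energy) ** 2
        lo, hi = fit_window
        m = (energy >= lo) & (energy <= hi)
        slope, intercept = np.polyfit(energy[m], y[m], 1)
        return -intercept / slope  # x-intercept of the fitted line

    # synthetic direct-gap absorption edge with Eg = 3.6 eV (illustrative)
    Eg = 3.6
    E = np.linspace(3.0, 4.5, 300)
    alpha = np.sqrt(np.clip(E - Eg, 0.0, None)) / E   # so (alpha*E)**2 = E - Eg
    Eg_est = tauc_direct_gap(E, alpha, fit_window=(3.7, 4.3))
    ```

    On degenerately doped material the occupied conduction-band states distort this linear region, which is exactly the failure mode the record's proposed method addresses.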

  5. Biosimilars: From Extrapolation into Off Label Use.

    Science.gov (United States)

    Zhao, Sizheng; Nair, Jagdish R; Moots, Robert J

    2017-01-01

    Biologic drugs have revolutionised the management of many inflammatory conditions. Patent expirations have stimulated development of highly similar but non-identical molecules, the biosimilars. Extrapolation of indications is a key concept in the development of biosimilars. However, this has been met with concerns around mechanisms of action, equivalence in efficacy and immunogenicity, which are reviewed in this article. Narrative overview composed from literature search and the authors' experience. Literature search included Pubmed, Web of Science, and online document archives of the Food and Drug Administration and European Medicines Agency. The concepts of biosimilarity and extrapolation of indications are revisited. Concerns around extrapolation are exemplified using the biosimilar infliximab, CT-P13, focusing on mechanisms of action, immunogenicity and trial design. The opportunities and cautions for using biologics and biosimilars in unlicensed inflammatory conditions are reviewed. Biosimilars offer many potential opportunities in improving treatment access and increasing treatment options. The high cost associated with marketing approval means that many bio-originators may never become licenced for rarer inflammatory conditions, despite clinical efficacy. Biosimilars, with lower acquisition cost, may improve access for off-label use of biologics in the management of these patients. They may also provide opportunities to explore off-label treatment of conditions where biologic therapy is less established. However, this potential advantage must be balanced with the awareness that off-label prescribing can potentially expose patients to risky and ineffective treatments. Post-marketing surveillance is critical to developing long-term evidence to provide assurances on efficacy as well as safety.

  6. Characterization of an extrapolation chamber in a 90Sr/90Y beta radiation field

    International Nuclear Information System (INIS)

    Oramas Polo, I.; Tamayo Garcia, J. A.

    2015-01-01

    The extrapolation chamber is a parallel-plate, variable-volume chamber based on Bragg-Gray theory. It determines, in absolute mode and with high accuracy, the absorbed dose by extrapolating the ionization current measured to a null distance between the electrodes. This chamber is used for the dosimetry of external beta rays in radiation protection. This paper presents the characterization of an extrapolation chamber in a ⁹⁰Sr/⁹⁰Y beta radiation field. The absorbed dose rate to tissue at a depth of 0.07 mm was calculated as (0.13206±0.0028) μGy. The extrapolation chamber null depth was determined to be 60 μm. The influence of temperature, pressure and humidity on the value of the corrected current was also evaluated; temperature is the parameter with the greatest influence on this value, while the influence of pressure and humidity is not very significant. Extrapolation curves were obtained. (Author)
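
    The extrapolation step itself, fitting the measured ionization current against electrode separation and reading off the zero-gap behaviour, can be sketched as below. The gap and current values are hypothetical illustrations, not the paper's measurements:

    ```python
    import numpy as np

    def extrapolation_fit(gaps_mm, currents_pA):
        """Least-squares line through current-vs-gap data.

        Returns the slope dI/dd (pA/mm), which drives the Bragg-Gray
        dose evaluation, and the current extrapolated to zero separation.
        """
        slope, intercept = np.polyfit(gaps_mm, currents_pA, 1)
        return slope, intercept

    # illustrative measurements: a linear trend plus small noise (toy numbers)
    gaps = np.array([0.5, 1.0, 1.5, 2.0, 2.5])                      # mm
    currents = 0.02 + 0.85 * gaps + np.array([1, -2, 1, 0, -1]) * 1e-3
    slope, i0 = extrapolation_fit(gaps, currents)
    ```

    In practice the slope at vanishing gap, together with the electrode area, air density and W/e, yields the absorbed dose rate; the fit above shows only the extrapolation itself.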

  7. Effect of extrapolation length on the phase transformation of epitaxial ferroelectric thin films

    International Nuclear Information System (INIS)

    Hu, Z.S.; Tang, M.H.; Wang, J.B.; Zheng, X.J.; Zhou, Y.C.

    2008-01-01

    Effects of extrapolation length on the phase transformation of epitaxial ferroelectric thin films on dissimilar cubic substrates have been studied on the basis of the mean-field Landau-Ginzburg-Devonshire (LGD) thermodynamic theory, taking into account an uneven distribution of the interior stress across the film thickness. It was found that the polarization of epitaxial ferroelectric thin films depends strongly on the extrapolation length of the films. The physical origin of the extrapolation length during the phase transformation from paraelectric to ferroelectric is revealed for the case of ferroelectric thin films.

  8. Comparison of boundedness and monotonicity properties of one-leg and linear multistep methods

    KAUST Repository

    Mozartova, A.; Savostianov, I.; Hundsdorfer, W.

    2015-01-01

    © 2014 Elsevier B.V. All rights reserved. One-leg multistep methods have some advantage over linear multistep methods with respect to storage of the past results. In this paper boundedness and monotonicity properties with arbitrary (semi-)norms or convex functionals are analyzed for such multistep methods. The maximal stepsize coefficient for boundedness and monotonicity of a one-leg method is the same as for the associated linear multistep method when arbitrary starting values are considered. It will be shown, however, that combinations of one-leg methods and Runge-Kutta starting procedures may give very different stepsize coefficients for monotonicity than the linear multistep methods with the same starting procedures. Detailed results are presented for explicit two-step methods.

  10. Molecular Target Homology as a Basis for Species Extrapolation to Assess the Ecological Risk of Veterinary Drugs

    Science.gov (United States)

    Increased identification of veterinary pharmaceutical contaminants in aquatic environments has raised concerns regarding potential adverse effects of these chemicals on non-target organisms. The purpose of this work was to develop a method for predictive species extrapolation ut...

  11. On Extended Exponential General Linear Methods PSQ with S>Q ...

    African Journals Online (AJOL)

    This paper is concerned with the construction and numerical analysis of Extended Exponential General Linear Methods. These methods, in contrast to other methods in the literature, have step number greater than the stage order (S > Q). Numerical experiments in this study indicate that Extended Exponential ...

  12. Melting of “non-magic” argon clusters and extrapolation to the bulk limit

    International Nuclear Information System (INIS)

    Senn, Florian; Wiebke, Jonas; Schumann, Ole; Gohr, Sebastian; Schwerdtfeger, Peter; Pahl, Elke

    2014-01-01

    The melting of argon clusters Ar_N is investigated by applying a parallel-tempering Monte Carlo algorithm for all cluster sizes in the range from 55 to 309 atoms. Extrapolation to the bulk gives a melting temperature of 85.9 K, in good agreement with the previous value of 88.9 K obtained using only Mackay icosahedral clusters for the extrapolation [E. Pahl, F. Calvo, L. Koči, and P. Schwerdtfeger, "Accurate melting temperatures for neon and argon from ab initio Monte Carlo simulations," Angew. Chem., Int. Ed. 47, 8207 (2008)]. Our results for argon demonstrate that the extrapolation to the bulk need not be restricted to magic-number cluster sizes in order to obtain good estimates for the bulk melting temperature. However, the extrapolation to the bulk remains a problem, especially the systematic selection of suitable cluster sizes.
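
    The bulk extrapolation described here is, in essence, a linear fit in N^(-1/3), the usual surface-scaling ansatz for cluster melting points. The sketch below uses synthetic data; the assumed T_bulk and slope are illustrative, not the paper's fitted values:

    ```python
    import numpy as np

    def bulk_melting_extrapolation(sizes, t_melt):
        """Fit T(N) = T_bulk - c * N**(-1/3) and return the intercept T_bulk.

        The N**(-1/3) variable captures the surface-to-volume scaling of
        finite clusters; the N -> infinity limit is the x = 0 intercept.
        """
        x = np.asarray(sizes, float) ** (-1.0 / 3.0)
        _, t_bulk = np.polyfit(x, t_melt, 1)
        return t_bulk

    # synthetic cluster melting points generated from T_bulk = 86 K, c = 60 K
    sizes = np.array([55, 100, 147, 200, 309])
    temps = 86.0 - 60.0 * sizes ** (-1.0 / 3.0)
    t_bulk_est = bulk_melting_extrapolation(sizes, temps)
    ```

    With real simulation data the scatter of individual cluster sizes around this line is what makes the size selection delicate.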

  13. Application of the simplex method of linear programming model to ...

    African Journals Online (AJOL)

    This work discussed how the simplex method of linear programming could be used to maximize the profit of any business firm, using Saclux Paint Company as a case study. It equally elucidated the effect that variation in the optimal result obtained from the linear programming model will have on any given firm. It was demonstrated ...
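
    A minimal tableau simplex illustrates the kind of profit-maximization model described. The two-product coefficients below are illustrative, not Saclux Paint Company's figures, and the routine omits anti-cycling safeguards:

    ```python
    import numpy as np

    def simplex_max(c, A, b):
        """Tableau simplex: maximize c@x subject to A@x <= b, x >= 0, b >= 0.

        A teaching sketch starting from the all-slack basis; no anti-cycling.
        """
        m, n = A.shape
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n] = A
        T[:m, n:n + m] = np.eye(m)         # slack variables
        T[:m, -1] = b
        T[-1, :n] = -np.asarray(c, float)  # objective row
        basis = list(range(n, n + m))
        while True:
            j = int(np.argmin(T[-1, :-1]))
            if T[-1, j] >= -1e-12:         # no negative reduced cost: optimal
                break
            pos = T[:m, j] > 1e-12
            if not pos.any():
                raise ValueError("problem is unbounded")
            ratios = np.full(m, np.inf)
            ratios[pos] = T[:m, -1][pos] / T[:m, j][pos]
            i = int(np.argmin(ratios))     # ratio test picks the leaving row
            T[i] /= T[i, j]
            for r in range(m + 1):
                if r != i:
                    T[r] -= T[r, j] * T[i]
            basis[i] = j
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1]

    # hypothetical two-product profit model (coefficients are illustrative)
    profit = [3.0, 5.0]                                  # profit per batch
    A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])   # resource usage
    b = np.array([4.0, 12.0, 18.0])                      # resource limits
    x_opt, z_opt = simplex_max(profit, A, b)
    ```

    Re-running the solver with perturbed `profit` or `b` shows directly how variation in the data shifts the optimal plan, the sensitivity question the abstract raises.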

  14. Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs

    International Nuclear Information System (INIS)

    Kiwiel, K. C.

    1998-01-01

    We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions).

  15. SNSEDextend: SuperNova Spectral Energy Distributions extrapolation toolkit

    Science.gov (United States)

    Pierel, Justin D. R.; Rodney, Steven A.; Avelino, Arturo; Bianco, Federica; Foley, Ryan J.; Friedman, Andrew; Hicken, Malcolm; Hounsell, Rebekah; Jha, Saurabh W.; Kessler, Richard; Kirshner, Robert; Mandel, Kaisey; Narayan, Gautham; Filippenko, Alexei V.; Scolnic, Daniel; Strolger, Louis-Gregory

    2018-05-01

    SNSEDextend extrapolates core-collapse and Type Ia Spectral Energy Distributions (SEDs) into the UV and IR for use in simulations and photometric classifications. The user provides a library of existing SED templates (such as those in the authors' SN SED Repository) along with new photometric constraints in the UV and/or NIR wavelength ranges. The software then extends the existing template SEDs so their colors match the input data at all phases. SNSEDextend can also extend the SALT2 spectral time-series model for Type Ia SN for a "first-order" extrapolation of the SALT2 model components, suitable for use in survey simulations and photometric classification tools; as the code does not do a rigorous re-training of the SALT2 model, the results should not be relied on for precision applications such as light curve fitting for cosmology.

  16. Conjugate gradient type methods for linear systems with complex symmetric coefficient matrices

    Science.gov (United States)

    Freund, Roland

    1989-01-01

    We consider conjugate gradient type methods for the solution of large sparse linear systems Ax = b with complex symmetric coefficient matrices A = A^T. Such linear systems arise in important applications, such as the numerical solution of the complex Helmholtz equation. Furthermore, most complex non-Hermitian linear systems which occur in practice are actually complex symmetric. We investigate conjugate gradient type iterations which are based on a variant of the nonsymmetric Lanczos algorithm for complex symmetric matrices. We propose a new approach with iterates defined by a quasi-minimal residual property. The resulting algorithm presents several advantages over the standard biconjugate gradient method. We also include some remarks on the obvious approach to general complex linear systems by solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.

  17. Comments on new iterative methods for solving linear systems

    Directory of Open Access Journals (Sweden)

    Wang Ke

    2017-06-01

    Some new iterative methods were presented by Du, Zheng and Wang for solving linear systems in [3], where it is shown that the new methods, compared to the classical Jacobi or Gauss-Seidel method, can be applied to more systems and have faster convergence. Through further analysis and numerical examples, this note shows that their methods are suitable for a wider class of matrices than the positive matrices the authors suggested.
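
    The classical baselines mentioned here can be sketched as follows; this is a generic NumPy illustration of Jacobi and Gauss-Seidel on a diagonally dominant system, not the new methods of [3]:

    ```python
    import numpy as np

    def jacobi(A, b, iters=200):
        """Classical Jacobi iteration: x_{k+1} = D^{-1} (b - (A - D) x_k)."""
        d = np.diag(A)
        R = A - np.diag(d)
        x = np.zeros_like(b, dtype=float)
        for _ in range(iters):
            x = (b - R @ x) / d
        return x

    def gauss_seidel(A, b, iters=200):
        """Gauss-Seidel sweep, using updated components as soon as available."""
        n = len(b)
        x = np.zeros(n)
        for _ in range(iters):
            for i in range(n):
                s = A[i] @ x - A[i, i] * x[i]
                x[i] = (b[i] - s) / A[i, i]
        return x

    # strictly diagonally dominant test system: both iterations converge
    A = np.array([[4.0, -1.0, 0.0],
                  [-1.0, 4.0, -1.0],
                  [0.0, -1.0, 4.0]])
    b = np.array([15.0, 10.0, 10.0])
    xj = jacobi(A, b)
    xg = gauss_seidel(A, b)
    ```

    Convergence of both sweeps is guaranteed here by strict diagonal dominance; the note's point is that the newer splittings converge for matrix classes where these classical conditions do not apply.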

  18. Why does the Aitken extrapolation often help to attain convergence in self-consistent field calculations?

    International Nuclear Information System (INIS)

    Cioslowski, J.

    1988-01-01

    The Aitken (three-point) extrapolation is one of the most popular convergence accelerators in SCF calculations. The conditions that guarantee that the Aitken extrapolation brings about unconditional convergence of the SCF process are examined. A classification of SCF divergences is presented, and it is shown that the extrapolation can be expected to work properly only in the case of oscillatory divergence.
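
    The Aitken (three-point) formula itself is compact. Below it is applied to a generic linearly convergent fixed-point sequence, a stand-in for slowly converging SCF iterates, not an actual SCF code:

    ```python
    import numpy as np

    def aitken(seq):
        """Aitken delta-squared acceleration of a scalar sequence.

        Returns s_k - (s_{k+1} - s_k)**2 / (s_{k+2} - 2 s_{k+1} + s_k)
        for each admissible k (two fewer terms than the input).
        """
        s = np.asarray(seq, float)
        num = (s[1:-1] - s[:-2]) ** 2
        den = s[2:] - 2.0 * s[1:-1] + s[:-2]
        return s[:-2] - num / den

    # oscillating, linearly convergent iterates x_{k+1} = cos(x_k)
    x, xs = 0.5, []
    for _ in range(10):
        xs.append(x)
        x = np.cos(x)
    acc = aitken(xs)
    ```

    The cosine iteration oscillates around its fixed point, the benign "oscillatory" pattern for which the record argues Aitken acceleration works; the accelerated tail is markedly closer to the limit than the raw iterates.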

  19. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Directory of Open Access Journals (Sweden)

    Gyungho Khim

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement.

  20. A Method of Calculating Motion Error in a Linear Motion Bearing Stage

    Science.gov (United States)

    Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok

    2015-01-01

    We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715

  1. Solution methods for large systems of linear equations in BACCHUS

    International Nuclear Information System (INIS)

    Homann, C.; Dorr, B.

    1993-05-01

    The computer programme BACCHUS is used to describe the steady state and transient thermal-hydraulic behaviour of the coolant in a fuel element with intact geometry in a fast breeder reactor. In such computer programmes, large systems of linear equations with sparse coefficient matrices, resulting from the discretization of the coolant conservation equations, must generally be solved thousands of times, giving rise to large demands on main storage and CPU time. The direct and iterative methods available in BACCHUS for solving these systems of linear equations are described, giving theoretical details and experience with their use in the programme. In addition, the use of a method of lines with a Runge-Kutta method for the solution of the partial differential equations is outlined. (orig.) [de]

  2. New Implicit General Linear Method | Ibrahim | Journal of the ...

    African Journals Online (AJOL)

    A new implicit general linear method is designed for the numerical solution of stiff differential equations. The coefficient matrix is derived from the stability function. The method combines single implicitness or diagonal implicitness with the property that the first two rows are implicit and the third and fourth rows are explicit.

  3. The application of metal artifact reduction (MAR) in CT scans for radiation oncology by monoenergetic extrapolation with a DECT scanner

    Energy Technology Data Exchange (ETDEWEB)

    Schwahofer, Andrea [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiation Oncology; Clinical Center Vivantes, Neukoelln (Germany). Dept. of Radiotherapy and Oncology; Baer, Esther [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiation Oncology; Kuchenbecker, Stefan; Kachelriess, Marc [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiology; Grossmann, J. Guenter [German Cancer Research Center, Heidelberg (Germany). Dept. of Medical Physics in Radiation Oncology; Ortenau Klinikum Offenburg-Gengenbach (Germany). Dept. of Radiooncology; Sterzing, Florian [Heidelberg Univ. (Germany). Dept. of Radiation Oncology; German Cancer Research Center, Heidelberg (Germany). Dept. of Radiotherapy

    2015-07-01

    Metal artifacts in computed tomography CT images are one of the main problems in radiation oncology as they introduce uncertainties to target and organ at risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual energy CT (DECT) dataset. In a phantom study the CT artifacts caused by metals with different densities: aluminum (ρ{sub Al} = 2.7 g/cm{sup 3}), titanium (ρ{sub Ti} = 4.5 g/cm{sup 3}), steel (ρ{sub steel} = 7.9 g/cm{sup 3}) and tungsten (ρ{sub W} = 19.3 g/cm{sup 3}) have been investigated. Data were collected using a clinical dual source dual energy CT (DECT) scanner (Siemens Sector Healthcare, Forchheim, Germany) with tube voltages of 100 kV and 140 kV (Sn). For each tube voltage the data set in a given volume was reconstructed. Based on these two data sets a voxel by voxel linear combination was performed to obtain the monoenergetic data sets. The results were evaluated regarding the optical properties of the images as well as the CT values (HU) and the dosimetric consequences in computed treatment plans. A data set without metal substitute served as the reference. Also, a head and neck patient with dental fillings (amalgam ρ = 10 g/cm{sup 3}) was scanned with a single energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the densities the more distinctive are the artifacts. For metals with higher densities such as steel or tungsten, no artifact reduction has been achieved. Likewise in the CT values, no improvement by use of the monoenergetic extrapolation can be detected. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 keV.
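
    The voxel-by-voxel linear combination at the core of the monoenergetic extrapolation can be sketched as below. The weight and Hounsfield values are toy numbers, and the mapping from a weight to a monoenergy such as 105 keV is scanner- and calibration-specific, so none of these figures come from the study:

    ```python
    import numpy as np

    def monoenergetic_image(img_low, img_high, w):
        """Voxel-by-voxel linear combination of a DECT image pair.

        mono = w * (low-kV image) + (1 - w) * (high-kV image); the weight
        stands in for the chosen monoenergy, and extrapolating weights may
        lie outside [0, 1].
        """
        return w * img_low + (1.0 - w) * img_high

    # toy 2D "CT slices" in HU with a bright metal-like insert in the middle
    low = np.full((4, 4), 40.0)
    low[1:3, 1:3] = 3000.0
    high = np.full((4, 4), 60.0)
    high[1:3, 1:3] = 2000.0
    mono = monoenergetic_image(low, high, w=-0.3)  # illustrative extrapolating weight
    ```

    Because the combination acts independently on every voxel, it is cheap to sweep the weight and inspect which effective monoenergy best suppresses the metal artifact.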

  4. The application of metal artifact reduction (MAR) in CT scans for radiation oncology by monoenergetic extrapolation with a DECT scanner

    International Nuclear Information System (INIS)

    Schwahofer, Andrea; Clinical Center Vivantes, Neukoelln; Baer, Esther; Kuchenbecker, Stefan; Kachelriess, Marc; Grossmann, J. Guenter; Ortenau Klinikum Offenburg-Gengenbach; Sterzing, Florian; German Cancer Research Center, Heidelberg

    2015-01-01

    Metal artifacts in computed tomography CT images are one of the main problems in radiation oncology as they introduce uncertainties to target and organ at risk delineation as well as dose calculation. This study is devoted to metal artifact reduction (MAR) based on the monoenergetic extrapolation of a dual energy CT (DECT) dataset. In a phantom study the CT artifacts caused by metals with different densities: aluminum (ρ_Al = 2.7 g/cm³), titanium (ρ_Ti = 4.5 g/cm³), steel (ρ_steel = 7.9 g/cm³) and tungsten (ρ_W = 19.3 g/cm³) have been investigated. Data were collected using a clinical dual source dual energy CT (DECT) scanner (Siemens Sector Healthcare, Forchheim, Germany) with tube voltages of 100 kV and 140 kV (Sn). For each tube voltage the data set in a given volume was reconstructed. Based on these two data sets a voxel by voxel linear combination was performed to obtain the monoenergetic data sets. The results were evaluated regarding the optical properties of the images as well as the CT values (HU) and the dosimetric consequences in computed treatment plans. A data set without metal substitute served as the reference. Also, a head and neck patient with dental fillings (amalgam ρ = 10 g/cm³) was scanned with a single energy CT (SECT) protocol and a DECT protocol. The monoenergetic extrapolation was performed as described above and evaluated in the same way. Visual assessment of all data shows minor reductions of artifacts in the images with aluminum and titanium at a monoenergy of 105 keV. As expected, the higher the densities the more distinctive are the artifacts. For metals with higher densities such as steel or tungsten, no artifact reduction has been achieved. Likewise in the CT values, no improvement by use of the monoenergetic extrapolation can be detected. The dose was evaluated at a point 7 cm behind the isocenter of a static field. Small improvements (around 1%) can be seen with 105 keV. However, the dose uncertainty remains of the

  5. Extrapolation of π-meson form factor, zeros in the analyticity domain

    International Nuclear Information System (INIS)

    Morozov, P.T.

    1978-01-01

    The problem of a stable extrapolation from the cut to an arbitrary interior point of the analyticity domain for the pion form factor is formulated and solved. As is shown, a stable solution can be derived if modulus representations with the Carleman weight function are used as the analyticity conditions. The case when the form factor has zeros is discussed: if there are zeros in the complex plane, they must be taken into account when determining the extrapolation function.

  6. Oral-to-inhalation route extrapolation in occupational health risk assessment: A critical assessment

    NARCIS (Netherlands)

    Rennen, M.A.J.; Bouwman, T.; Wilschut, A.; Bessems, J.G.M.; Heer, C.de

    2004-01-01

    Due to a lack of route-specific toxicity data, the health risks resulting from occupational exposure are frequently assessed by route-to-route (RtR) extrapolation based on oral toxicity data. Insight into the conditions for and the uncertainties connected with the application of RtR extrapolation

  7. NOLB: Nonlinear Rigid Block Normal Mode Analysis Method

    OpenAIRE

    Hoffmann , Alexandre; Grudinin , Sergei

    2017-01-01

    We present a new conceptually simple and computationally efficient method for nonlinear normal mode analysis called NOLB. It relies on the rotations-translations of blocks (RTB) theoretical basis developed by Y.-H. Sanejouand and colleagues. We demonstrate how to physically interpret the eigenvalues computed in the RTB basis in terms of angular and linear velocities applied to the rigid blocks and how to construct a nonlinear extrapolation of motion out of these veloci...

  8. The Embedding Method for Linear Partial Differential Equations

    Indian Academy of Sciences (India)

    The recently suggested embedding method to solve linear boundary value problems is here extended to cover situations where the domain of interest is unbounded or multiply connected. The extensions involve the use of complete sets of exterior and interior eigenfunctions on canonical domains. Applications to typical ...

  9. A Revised Piecewise Linear Recursive Convolution FDTD Method for Magnetized Plasmas

    International Nuclear Information System (INIS)

    Liu Song; Zhong Shuangying; Liu Shaobin

    2005-01-01

    The piecewise linear recursive convolution (PLRC) finite-difference time-domain (FDTD) method improves accuracy over the original recursive convolution (RC) FDTD approach and the current density convolution (JEC) approach but retains their advantages in speed and efficiency. This paper describes a revised piecewise linear recursive convolution PLRC-FDTD formulation for magnetized plasma which incorporates both anisotropy and frequency dispersion at the same time, enabling the transient analysis of magnetized plasma media. The technique is illustrated by numerical simulations of the reflection and transmission coefficients through a magnetized plasma layer. The results show that the revised PLRC-FDTD method has improved accuracy over the original RC-FDTD and JEC-FDTD methods.

  10. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis

    2016-09-08

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order two and three) with variable step size, and prove their optimality, stability, and convergence. The choice of step size for multistep SSP methods is an interesting problem because the allowable step size depends on the SSP coefficient, which in turn depends on the chosen step sizes. The description of the methods includes an optimal step-size strategy. We prove sharp upper bounds on the allowable step size for explicit SSP linear multistep methods and show the existence of methods with arbitrarily high order of accuracy. The effectiveness of the methods is demonstrated through numerical examples.

  11. An Online Method for Interpolating Linear Parametric Reduced-Order Models

    KAUST Repository

    Amsallem, David; Farhat, Charbel

    2011-01-01

    A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.

  12. Design and construction of an interface system for the extrapolation chamber from the beta secondary standard

    International Nuclear Information System (INIS)

    Jimenez C, L.F.

    1995-01-01

The Interface System for the Extrapolation Chamber (SICE) contains several devices handled by a personal computer (PC); it is able to acquire the data required to calculate the absorbed dose due to beta radiation. The main functions of the system are: a) Measures the ionization current or charge stored in the extrapolation chamber. b) Adjusts the distance between the plates of the extrapolation chamber automatically. c) Adjusts the bias voltage of the extrapolation chamber automatically. d) Acquires the data of the temperature, atmospheric pressure, relative humidity of the environment and the voltage applied between the plates of the extrapolation chamber. e) Calculates the effective area of the plates of the extrapolation chamber and the real distance between them. f) Stores all the obtained information on hard disk or diskette. A comparison between the desired distance and the distance on the dial of the extrapolation chamber shows that the resolution of the system is 20 μm. The voltage can be changed between -399.9 V and +399.9 V with an error of less than 3 % and a resolution of 0.1 V. These uncertainties are within the accepted limits for use in the determination of the absolute absorbed dose due to beta radiation. (Author)

  13. A linear multiple balance method for discrete ordinates neutron transport equations

    International Nuclear Information System (INIS)

    Park, Chang Je; Cho, Nam Zin

    2000-01-01

A linear multiple balance method (LMB) is developed to provide more accurate and positive solutions for the discrete ordinates neutron transport equations. In this multiple balance approach, one mesh cell is divided into two subcells with a quadratic approximation of the angular flux distribution. Four multiple balance equations are used to relate the center angular flux with the average angular flux by Simpson's rule. From the analysis of the spatial truncation error, the accuracy of the linear multiple balance scheme is O(Δ⁴), whereas that of diamond differencing is O(Δ²). To accelerate the linear multiple balance method, we also describe a simplified additive angular dependent rebalance factor scheme which combines a modified boundary projection acceleration scheme and the angular dependent rebalance factor acceleration scheme. It is demonstrated, via Fourier analysis of a simple model problem as well as numerical calculations, that the additive angular dependent rebalance factor acceleration scheme is unconditionally stable with spectral radius < 0.2069c (c being the scattering ratio). The numerical results tested so far on slab-geometry discrete ordinates transport problems show that the linear multiple balance solution method is effective and sufficiently efficient.
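
    The Simpson's-rule relation the abstract builds on (for a quadratic profile, the cell-average flux equals (psi_L + 4*psi_C + psi_R)/6) can be checked with a minimal sketch; the quadratic profile below is a hypothetical illustration, not the paper's transport solution:

    ```python
    def cell_average_simpson(psi_left, psi_center, psi_right):
        # Simpson's rule: exact cell average for a quadratic flux profile
        return (psi_left + 4.0 * psi_center + psi_right) / 6.0

    # hypothetical quadratic flux psi(x) = 1 + 2x + 3x^2 on [0, 1]; exact average is 3
    def psi(x):
        return 1.0 + 2.0 * x + 3.0 * x ** 2

    avg = cell_average_simpson(psi(0.0), psi(0.5), psi(1.0))
    print(avg)  # 3.0, exact for quadratics
    ```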

  14. COMPARISON OF CORONAL EXTRAPOLATION METHODS FOR CYCLE 24 USING HMI DATA

    Energy Technology Data Exchange (ETDEWEB)

    Arden, William M. [University of Southern Queensland, Toowoomba, Queensland (Australia); Norton, Aimee A.; Sun, Xudong; Zhao, Xuepu [Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States)

    2016-05-20

Two extrapolation models of the solar coronal magnetic field are compared using magnetogram data from the Solar Dynamics Observatory/Helioseismic and Magnetic Imager instrument. The two models, a horizontal current–current sheet–source surface (HCCSSS) model and a potential field–source surface (PFSS) model, differ in their treatment of coronal currents. Each model has its own critical variable, respectively the radius of a cusp surface and of a source surface, and it is found that adjusting these heights over the period studied allows for a better fit between the models and the solar open flux at 1 au as calculated from the Interplanetary Magnetic Field (IMF). The HCCSSS model provides the better fit for the overall period from 2010 November to 2015 May as well as for two subsets of the period: the minimum/rising part of the solar cycle and the recently identified peak in the IMF from mid-2014 to mid-2015 just after solar maximum. It is found that an HCCSSS cusp surface height of 1.7 R⊙ provides the best fit to the IMF for the overall period, while 1.7 and 1.9 R⊙ give the best fits for the two subsets. The corresponding values for the PFSS source surface height are 2.1, 2.2, and 2.0 R⊙, respectively. This means that the HCCSSS cusp surface rises as the solar cycle progresses while the PFSS source surface falls.

  15. The intelligence of dual simplex method to solve linear fractional fuzzy transportation problem.

    Science.gov (United States)

    Narayanamoorthy, S; Kalyani, S

    2015-01-01

An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear fuzzy transportation problems are solved by the dual simplex method, and the optimal solution of the fractional fuzzy transportation problem is obtained from them. The proposed method is explained in detail with an example.

  16. Linear algebraic methods applied to intensity modulated radiation therapy.

    Science.gov (United States)

    Crooks, S M; Xing, L

    2001-10-01

    Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.

  17. Electrostatic Discharge Current Linear Approach and Circuit Design Method

    Directory of Open Access Journals (Sweden)

    Pavlos K. Katsivelis

    2010-11-01

Full Text Available The Electrostatic Discharge phenomenon is a great threat to all electronic devices and ICs. An electric charge passing rapidly from a charged body to another can seriously harm the latter. However, there is a lack of a linear mathematical approach that would make it possible to design a circuit capable of producing such a sophisticated current waveform. The commonly accepted Electrostatic Discharge current waveform is the one set by IEC 61000-4-2. However, the over-simplified circuit included in the same standard is incapable of producing such a waveform. Treating the Electrostatic Discharge current waveform of IEC 61000-4-2 as reference, an approximation method based on Prony's method is developed and applied in order to obtain a linear system's response. Considering a known input, a method to design a circuit able to generate this ESD current waveform is presented. The circuit synthesis assumes ideal active elements. A simulation is carried out using the PSpice software.
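
    The approximation step described rests on Prony's method: fit equally spaced samples with a sum of exponentials, whose poles then define a realizable linear system. A minimal sketch under hypothetical data, not the IEC 61000-4-2 waveform or the paper's fitted parameters:

    ```python
    import numpy as np

    def prony(x, p, dt):
        """Fit x[n] ~ sum_i c_i * exp(s_i * n * dt) with p exponentials."""
        N = len(x)
        # 1) linear prediction: x[n] ~ -(a_1 x[n-1] + ... + a_p x[n-p])
        A = np.column_stack([x[p - k - 1:N - k - 1] for k in range(p)])
        a = np.linalg.lstsq(A, -x[p:], rcond=None)[0]
        # 2) roots of the characteristic polynomial are the discrete poles
        z = np.roots(np.concatenate(([1.0], a))).astype(complex)
        s = np.log(z) / dt                      # continuous-time exponents
        # 3) least squares for the amplitudes
        V = z[None, :] ** np.arange(N)[:, None]
        c = np.linalg.lstsq(V, x.astype(complex), rcond=None)[0]
        return c, s

    dt = 0.01
    t = np.arange(50) * dt
    x = 2.0 * np.exp(-3.0 * t) + 1.0 * np.exp(-10.0 * t)  # hypothetical signal
    c, s = prony(x, 2, dt)
    print(np.sort(np.real(s)))  # close to [-10, -3]
    ```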

  18. Fundamental solution of the problem of linear programming and method of its determination

    Science.gov (United States)

    Petrunin, S. V.

    1978-01-01

    The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.

  19. Infeasible Interior-Point Methods for Linear Optimization Based on Large Neighborhood

    NARCIS (Netherlands)

    Asadi, A.R.; Roos, C.

    2015-01-01

    In this paper, we design a class of infeasible interior-point methods for linear optimization based on large neighborhood. The algorithm is inspired by a full-Newton step infeasible algorithm with a linear convergence rate in problem dimension that was recently proposed by the second author.

  20. Modifications of Steepest Descent Method and Conjugate Gradient Method Against Noise for Ill-posed Linear Systems

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2012-04-01

Full Text Available It is well known that the numerical algorithms of the steepest descent method (SDM) and the conjugate gradient method (CGM) are effective for solving well-posed linear systems. However, they are vulnerable to noisy disturbance when solving ill-posed linear systems. We propose modifications of SDM and CGM, namely the modified steepest descent method (MSDM) and the modified conjugate gradient method (MCGM). The starting point is an invariant manifold defined in terms of a minimum functional and a fictitious time-like variable; in the final stage, however, we derive a purely iterative algorithm including an acceleration parameter. Through the Hopf bifurcation, this parameter plays a major role in switching from slow convergence to a new regime in which the functional decreases stepwise very fast. Several numerical examples are examined and compared with exact solutions, revealing that the new MSDM and MCGM algorithms have good computational efficiency and accuracy, even for highly ill-conditioned linear systems with large noise imposed on the given data.
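
    For reference, the unmodified steepest descent iteration that MSDM starts from can be sketched for a symmetric positive definite system; the 2x2 system is a hypothetical illustration, and the paper's manifold-based modifications are not reproduced:

    ```python
    import numpy as np

    def steepest_descent(A, b, x0, iters=200):
        # classical SDM for SPD A: step along the residual with exact line search
        x = x0.copy()
        for _ in range(iters):
            r = b - A @ x                       # residual = negative gradient
            rr = r @ r
            if rr < 1e-30:                      # converged
                break
            x = x + (rr / (r @ (A @ r))) * r    # exact line-search stepsize
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = steepest_descent(A, b, np.zeros(2))
    print(x)  # approaches A^{-1} b = [1/11, 7/11]
    ```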

  1. The Intelligence of Dual Simplex Method to Solve Linear Fractional Fuzzy Transportation Problem

    Directory of Open Access Journals (Sweden)

    S. Narayanamoorthy

    2015-01-01

Full Text Available An approach is presented to solve a fuzzy transportation problem with a linear fractional fuzzy objective function. In the proposed approach the fractional fuzzy transportation problem is decomposed into two linear fuzzy transportation problems. These two linear fuzzy transportation problems are solved by the dual simplex method, and the optimal solution of the fractional fuzzy transportation problem is obtained from them. The proposed method is explained in detail with an example.

  2. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas; Richtarik, Peter

    2017-01-01

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss
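
    The iteration analyzed, an SGD step with fixed stepsize plus a heavy ball momentum term, can be sketched on a least-squares problem; the stepsize, momentum, and data below are hypothetical choices, not the paper's:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 5))
    x_true = rng.standard_normal(5)
    b = A @ x_true                      # consistent system: zero loss at x_true

    x = np.zeros(5)
    x_prev = np.zeros(5)
    step, beta = 0.02, 0.5              # fixed stepsize and momentum (hypothetical values)
    for _ in range(5000):
        i = rng.integers(100)           # sample one data point
        g = (A[i] @ x - b[i]) * A[i]    # stochastic gradient of 0.5*(a_i.x - b_i)^2
        x, x_prev = x - step * g + beta * (x - x_prev), x
    print(np.linalg.norm(x - x_true))   # shrinks toward zero
    ```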

  3. A discrete homotopy perturbation method for non-linear Schrodinger equation

    Directory of Open Access Journals (Sweden)

    H. A. Wahab

    2015-12-01

Full Text Available A general analysis is made by the homotopy perturbation method, taking advantage of the initial guess, the appearance of the embedding parameter, and different choices of the linear operator, for the approximate solution of the non-linear Schrodinger equation. We do not depend on the Adomian polynomials, and find the linear forms of the components without those calculations. The discretised forms of the nonlinear Schrodinger equation allow us either to apply a numerical technique to the discretised forms or to proceed to a perturbation solution of the problem. The discretised forms obtained by the constructed homotopy provide the linear parts of the components of the solution series, and hence a new discretised form is obtained. The general discretised form of the NLSE allows us to choose any initial guess and obtain the solution in closed form.

  4. Ground-state inversion method applied to calculation of molecular photoionization cross-sections by atomic extrapolation: Interference effects at low energies

    International Nuclear Information System (INIS)

    Hilton, P.R.; Nordholm, S.; Hush, N.S.

    1980-01-01

The ground-state inversion method, which we have previously developed for the calculation of atomic cross-sections, is applied to the calculation of molecular photoionization cross-sections. These are obtained as a weighted sum of atomic subshell cross-sections plus multi-centre interference terms. The atomic cross-sections are calculated directly for the atomic functions which, when summed over centre and symmetry, yield the molecular orbital wave function. The use of the ground-state inversion method for this allows the effect of the molecular environment on the atomic cross-sections to be calculated. Multi-centre terms are estimated on the basis of an effective plane-wave expression for this contribution to the total cross-section. Finally the method is applied to the range of photon energies from 0 to 44 eV where atomic extrapolation procedures have not previously been tested. Results obtained for H₂, N₂ and CO show good agreement with experiment, particularly when interference effects and effects of the molecular environment on the atomic cross-sections are included. The accuracy is very much better than that of previous plane-wave and orthogonalized plane-wave methods, and can stand comparison with that of recent more sophisticated approaches. It is a feature of the method that calculation of cross-sections either of atoms or of large molecules requires very little computer time, provided that good quality wave functions are available, and it is then of considerable potential practical interest for photoelectron spectroscopy. (orig.)

  5. Interior-Point Method for Non-Linear Non-Convex Optimization

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Matonoha, Ctirad; Vlček, Jan

    2004-01-01

    Roč. 11, č. 5-6 (2004), s. 431-453 ISSN 1070-5325 R&D Projects: GA AV ČR IAA1030103 Institutional research plan: CEZ:AV0Z1030915 Keywords : non-linear programming * interior point methods * indefinite systems * indefinite preconditioners * preconditioned conjugate gradient method * merit functions * algorithms * computational experiments Subject RIV: BA - General Mathematics Impact factor: 0.727, year: 2004

  6. The linear hypothesis and radiation carcinogenesis

    International Nuclear Information System (INIS)

    Roberts, P.B.

    1981-10-01

    An assumption central to most estimations of the carcinogenic potential of low levels of ionising radiation is that the risk always increases in direct proportion to the dose received. This assumption (the linear hypothesis) has been both strongly defended and attacked on several counts. It appears unlikely that conclusive, direct evidence on the validity of the hypothesis will be forthcoming. We review the major indirect arguments used in the debate. All of them are subject to objections that can seriously weaken their case. In the present situation, retention of the linear hypothesis as the basis of extrapolations from high to low dose levels can lead to excessive fears, over-regulation and unnecessarily expensive protection measures. To offset these possibilities, support is given to suggestions urging a cut-off dose, probably some fraction of natural background, below which risks can be deemed acceptable

  7. On extrapolation blowups in the $L_p$ scale

    Czech Academy of Sciences Publication Activity Database

    Capone, C.; Fiorenza, A.; Krbec, Miroslav

    2006-01-01

    Roč. 9, č. 4 (2006), s. 1-15 ISSN 1025-5834 R&D Projects: GA ČR(CZ) GA201/01/1201 Institutional research plan: CEZ:AV0Z10190503 Keywords : extrapolation * Lebesgue spaces * small Lebesgue spaces Subject RIV: BA - General Mathematics Impact factor: 0.349, year: 2004

  8. A six-hour extrapolated sampling strategy for monitoring mycophenolic acid in renal transplant patients in the Indian subcontinent

    Directory of Open Access Journals (Sweden)

    Fleming D

    2006-01-01

Full Text Available Background: Therapeutic drug monitoring for mycophenolic acid (MPA) is increasingly being advocated. The present therapeutic range relates to the 12-hour area under the serum concentration-time profile (AUC). However, this is a cumbersome, tedious, cost-restricting procedure. Is it possible to reduce this sampling period? Aim: To compare the AUC from a reduced sampling strategy with the full 12-hour profile for MPA. Settings and Design: Clinical Pharmacology Unit of a tertiary care hospital in South India. Retrospective, paired data. Materials and Methods: Thirty-four 12-hour profiles from post-renal transplant patients on Cellcept® were evaluated. Profiles were grouped according to steroid and immunosuppressant co-medication and the time after transplant. MPA was estimated by high performance liquid chromatography with UV detection. From the 12-hour profiles the AUC up to only six hours was calculated by the trapezoidal rule and a correction factor applied. These two AUCs were then compared. Statistical Analysis: Linear regression, intra-class correlations (ICC) and a two-tailed paired t-test were applied to the data. Results: Comparing the 12-hour AUC with the paired 6-hour extrapolated AUC, the ICC and linear regression (r²) were very good for all three groups. No statistical difference was found by a two-tailed paired t-test. No bias was seen with a Bland-Altman plot or by calculation. Conclusion: For patients on Cellcept® with prednisolone ± cyclosporine, the 6-hour corrected AUC is an accurate measure of the full 12-hour AUC.
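
    The extrapolation step described (trapezoidal-rule AUC over the truncated window, scaled by a correction factor) can be sketched as follows; the concentrations and the derived factor are hypothetical, not the study's data:

    ```python
    import numpy as np

    def auc_trapezoid(t, c):
        # trapezoidal rule over an irregular sampling grid
        return float(np.sum((c[1:] + c[:-1]) * np.diff(t) / 2.0))

    t = np.array([0.0, 0.5, 1, 2, 4, 6, 8, 10, 12])               # hours post-dose
    c = np.array([1.0, 8.0, 6.0, 4.0, 2.5, 1.8, 1.4, 1.1, 0.9])   # mg/L (hypothetical)

    auc_12h = auc_trapezoid(t, c)
    auc_6h = auc_trapezoid(t[t <= 6], c[t <= 6])
    correction = auc_12h / auc_6h   # in practice fitted on training profiles
    print(auc_6h, auc_12h, round(correction, 3))
    ```

    Here the factor is derived from the same profile only to show the mechanics; the study fits it on one set of profiles and validates it against independent 12-hour AUCs.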

  9. Treating experimental data of inverse kinetic method by unitary linear regression analysis

    International Nuclear Information System (INIS)

    Zhao Yusen; Chen Xiaoliang

    2009-01-01

The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. Data from the zero power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method using unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
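
    The regression step, assuming "unitary" denotes single-variable least squares, reads two quantities off a fitted line as its slope and intercept. A sketch with synthetic data, not reactor measurements:

    ```python
    import numpy as np

    def fit_line(x, y):
        # ordinary least squares for y = a*x + b
        xm, ym = x.mean(), y.mean()
        a = np.sum((x - xm) * (y - ym)) / np.sum((x - xm) ** 2)
        b = ym - a * xm
        return a, b

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 50)
    y = -2.5 * x + 0.7 + 0.01 * rng.standard_normal(50)  # hypothetical noisy data
    a, b = fit_line(x, y)
    print(a, b)  # close to -2.5 and 0.7
    ```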

  10. An Evaluation of Five Linear Equating Methods for the NEAT Design

    Science.gov (United States)

    Mroch, Andrew A.; Suh, Youngsuk; Kane, Michael T.; Ripkey, Douglas R.

    2009-01-01

    This study uses the results of two previous papers (Kane, Mroch, Suh, & Ripkey, this issue; Suh, Mroch, Kane, & Ripkey, this issue) and the literature on linear equating to evaluate five linear equating methods along several dimensions, including the plausibility of their assumptions and their levels of bias and root mean squared difference…

  11. Can Pearlite form Outside of the Hultgren Extrapolation of the Ae3 and Acm Phase Boundaries?

    Science.gov (United States)

    Aranda, M. M.; Rementeria, R.; Capdevila, C.; Hackenberg, R. E.

    2016-02-01

    It is usually assumed that ferrous pearlite can form only when the average austenite carbon concentration C 0 lies between the extrapolated Ae3 ( γ/ α) and Acm ( γ/ θ) phase boundaries (the "Hultgren extrapolation"). This "mutual supersaturation" criterion for cooperative lamellar nucleation and growth is critically examined from a historical perspective and in light of recent experiments on coarse-grained hypoeutectoid steels which show pearlite formation outside the Hultgren extrapolation. This criterion, at least as interpreted in terms of the average austenite composition, is shown to be unnecessarily restrictive. The carbon fluxes evaluated from Brandt's solution are sufficient to allow pearlite growth both inside and outside the Hultgren Extrapolation. As for the feasibility of the nucleation events leading to pearlite, the only criterion is that there are some local regions of austenite inside the Hultgren Extrapolation, even if the average austenite composition is outside.

  12. Biosimilars in Inflammatory Bowel Disease: Facts and Fears of Extrapolation.

    Science.gov (United States)

    Ben-Horin, Shomron; Vande Casteele, Niels; Schreiber, Stefan; Lakatos, Peter Laszlo

    2016-12-01

Biologic drugs such as infliximab and other anti-tumor necrosis factor monoclonal antibodies have transformed the treatment of immune-mediated inflammatory conditions such as Crohn's disease and ulcerative colitis (collectively known as inflammatory bowel disease [IBD]). However, the complex manufacturing processes involved in producing these drugs mean their use in clinical practice is expensive. Recent or impending expiration of patents for several biologics has led to development of biosimilar versions of these drugs, with the aim of providing substantial cost savings and increased accessibility to treatment. Biosimilars undergo an expedited regulatory process. This involves proving structural, functional, and biological biosimilarity to the reference product (RP). It is also expected that clinical equivalency/comparability will be demonstrated in a clinical trial in one (or more) sensitive population. Once these requirements are fulfilled, extrapolation of biosimilar approval to other indications for which the RP is approved is permitted without the need for further clinical trials, as long as this is scientifically justifiable. However, such justification requires that the mechanism(s) of action of the RP in question should be similar across indications and also comparable between the RP and the biosimilar in the clinically tested population(s). Likewise, the pharmacokinetics, immunogenicity, and safety of the RP should be similar across indications and comparable between the RP and biosimilar in the clinically tested population(s). To date, most anti-tumor necrosis factor biosimilars have been tested in trials recruiting patients with rheumatoid arthritis. Concerns have been raised regarding extrapolation of clinical data obtained in rheumatologic populations to IBD indications. In this review, we discuss the issues surrounding indication extrapolation, with a focus on extrapolation to IBD. Copyright © 2016 AGA Institute. Published by Elsevier Inc. All rights reserved.

  13. Numerical method for solving linear Fredholm fuzzy integral equations of the second kind

    Energy Technology Data Exchange (ETDEWEB)

    Abbasbandy, S. [Department of Mathematics, Imam Khomeini International University, P.O. Box 288, Ghazvin 34194 (Iran, Islamic Republic of)]. E-mail: saeid@abbasbandy.com; Babolian, E. [Faculty of Mathematical Sciences and Computer Engineering, Teacher Training University, Tehran 15618 (Iran, Islamic Republic of); Alavi, M. [Department of Mathematics, Arak Branch, Islamic Azad University, Arak 38135 (Iran, Islamic Republic of)

    2007-01-15

In this paper we use the parametric form of a fuzzy number and convert a linear fuzzy Fredholm integral equation into two linear systems of integral equations of the second kind in the crisp case. One can then use a numerical method such as the Nystrom method to find an approximate solution of each system, and hence obtain an approximation for the fuzzy solution of the linear fuzzy Fredholm integral equation of the second kind. The proposed method is illustrated by solving some numerical examples.
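
    The crisp-case solver mentioned is the Nystrom method: replace the integral in u(x) = f(x) + lam * int K(x,t) u(t) dt by a quadrature rule and solve the resulting linear system. A sketch for a hypothetical kernel chosen so the exact solution is known:

    ```python
    import numpy as np

    n = 40
    nodes, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (nodes + 1.0)        # Gauss-Legendre points mapped to [0, 1]
    w = 0.5 * w
    lam = 1.0
    K = np.outer(x, x)             # kernel K(x, t) = x * t (hypothetical)
    f = (2.0 / 3.0) * x            # chosen so the exact solution is u(x) = x

    A = np.eye(n) - lam * K * w[None, :]   # Nystrom system (I - lam*K*W) u = f
    u = np.linalg.solve(A, f)
    print(np.max(np.abs(u - x)))   # near machine precision for this kernel
    ```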

  14. Methods of measurement of integral and differential linearity distortions of spectrometry sets

    International Nuclear Information System (INIS)

    Fuan, Jacques; Grimont, Bernard; Marin, Roland; Richard, Jean-Pierre

    1969-05-01

The objective of this document is to describe different measurement methods and, more particularly, to present software for the processing of the obtained results in order to avoid interpretation by the investigator. In the first part, the authors define the parameters of integral and differential linearity, outline their importance in measurements performed by spectrometry, and describe the use of these parameters. In the second part, they propose various methods of measurement of these linearity parameters, report experimental applications of these methods, and compare the obtained results.

  15. Stochastic Least-Squares Petrov--Galerkin Method for Parameterized Linear Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kookjin [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science; Carlberg, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Elman, Howard C. [Univ. of Maryland, College Park, MD (United States). Dept. of Computer Science and Inst. for Advanced Computer Studies

    2018-03-29

Here, we consider the numerical solution of parameterized linear systems where the system matrix, the solution, and the right-hand side are parameterized by a set of uncertain input parameters. We explore spectral methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error. As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is optimal in the sense that it produces the solution that minimizes a weighted $\ell^2$-norm of the residual over all solutions in a given finite-dimensional subspace. Moreover, the method can be adapted to minimize the solution error in different weighted $\ell^2$-norms by simply applying a weighting function within the least-squares formulation. In addition, a goal-oriented seminorm induced by an output quantity of interest can be minimized by defining a weighting function as a linear functional of the solution. We establish optimality and error bounds for the proposed method, and extensive numerical experiments show that the weighted LSPG method outperforms other spectral methods in minimizing corresponding target weighted norms.
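
    The core idea, choosing the subspace solution that minimizes a weighted residual norm via least squares, can be sketched with random stand-in matrices rather than the paper's parameterized systems:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 30, 4
    A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # stand-in system matrix
    b = rng.standard_normal(n)
    V = np.linalg.qr(rng.standard_normal((n, k)))[0]    # orthonormal subspace basis
    W = np.diag(rng.uniform(0.5, 2.0, n))               # weighting defines the norm

    # LSPG: x = V y minimizing ||W (b - A V y)||_2 over y
    y = np.linalg.lstsq(W @ A @ V, W @ b, rcond=None)[0]
    x_lspg = V @ y
    r = W @ (b - A @ x_lspg)
    print(np.linalg.norm(r))  # minimal weighted residual over the subspace
    ```

    By contrast, a Galerkin projection would enforce V.T @ (b - A @ V @ y) = 0, which need not minimize any residual norm.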

  16. Non-linear analysis of skew thin plate by finite difference method

    International Nuclear Information System (INIS)

    Kim, Chi Kyung; Hwang, Myung Hwan

    2012-01-01

This paper deals with a discrete analysis capability for predicting the geometrically nonlinear behavior of a skew thin plate subjected to uniform pressure. The differential equations are discretized by means of the finite difference method, which is used to determine the deflections and the in-plane stress functions of the plate, and are reduced to several sets of linear algebraic simultaneous equations. For the geometrically non-linear, large-deflection behavior of the plate, non-linear plate theory is used for the analysis. An iterative scheme is employed to solve these quasi-linear algebraic equations. Several problems are solved which illustrate the potential of the method for predicting the finite deflection and stress. For increasing lateral pressures, the maximum principal tensile stress occurs at the center of the plate and migrates toward the corners as the load increases. It was deemed important to describe the locations of the maximum principal tensile stress as they occur. The load-deflection relations and the maximum bending and membrane stresses for each case are presented and discussed.
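
    An iterative scheme of the kind used for quasi-linear finite-difference equations can be illustrated on a toy 1-D analogue: discretize, then repeatedly solve a linearized system with the nonlinear term lagged. The problem below is hypothetical, not the plate equations:

    ```python
    import numpy as np

    # toy problem: -u'' + u^3 = 1 on (0, 1), u(0) = u(1) = 0
    n = 49
    h = 1.0 / (n + 1)
    main = (2.0 / h ** 2) * np.ones(n)
    off = (-1.0 / h ** 2) * np.ones(n - 1)
    L = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # FD Laplacian

    u = np.zeros(n)
    for _ in range(50):
        # lag the cubic term as diag(u^2) * u_new, giving a linear solve per sweep
        u_new = np.linalg.solve(L + np.diag(u ** 2), np.ones(n))
        if np.max(np.abs(u_new - u)) < 1e-13:
            break
        u = u_new
    print(np.max(u))  # converged deflection-like profile
    ```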

  17. WE-DE-201-05: Evaluation of a Windowless Extrapolation Chamber Design and Monte Carlo Based Corrections for the Calibration of Ophthalmic Applicators

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, J; Culberson, W; DeWerd, L [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States); Soares, C [NIST (retired), Gaithersburg, MD (United States)

    2016-06-15

Purpose: To test the validity of a windowless extrapolation chamber used to measure surface dose rate from planar ophthalmic applicators and to compare different Monte Carlo based codes for deriving correction factors. Methods: Dose rate measurements were performed using a windowless, planar extrapolation chamber with a ⁹⁰Sr/⁹⁰Y Tracerlab RA-1 ophthalmic applicator previously calibrated at the National Institute of Standards and Technology (NIST). Capacitance measurements were performed to estimate the initial air gap width between the source face and collecting electrode. Current was measured as a function of air gap, and Bragg-Gray cavity theory was used to calculate the absorbed dose rate to water. To determine correction factors for backscatter, divergence, and attenuation from the Mylar entrance window found in the NIST extrapolation chamber, both EGSnrc Monte Carlo user code and Monte Carlo N-Particle Transport Code (MCNP) were utilized. Simulation results were compared with experimental current readings from the windowless extrapolation chamber as a function of air gap. Additionally, measured dose rate values were compared with the expected result from the NIST source calibration to test the validity of the windowless chamber design. Results: Better agreement was seen between EGSnrc simulated dose results and experimental current readings at very small air gaps (<100 µm) for the windowless extrapolation chamber, while MCNP results demonstrated divergence at these small gap widths. Three separate dose rate measurements were performed with the RA-1 applicator. The average observed difference from the expected result based on the NIST calibration was −1.88% with a statistical standard deviation of 0.39% (k=1). Conclusion: EGSnrc user code will be used during future work to derive correction factors for extrapolation chamber measurements. Additionally, experiment results suggest that an entrance window is not needed in order for an extrapolation

  18. Optimal explicit strong stability preserving Runge–Kutta methods with high linear order and optimal nonlinear order

    KAUST Repository

    Gottlieb, Sigal

    2015-04-10

    High order spatial discretizations with monotonicity properties are often desirable for the solution of hyperbolic PDEs. These methods can advantageously be coupled with high order strong stability preserving time discretizations. The search for high order strong stability time-stepping methods with large allowable strong stability coefficient has been an active area of research over the last two decades. This research has shown that explicit SSP Runge-Kutta methods exist only up to fourth order. However, if we restrict ourselves to solving only linear autonomous problems, the order conditions simplify and this order barrier is lifted: explicit SSP Runge-Kutta methods of any linear order exist. These methods reduce to second order when applied to nonlinear problems. In the current work we aim to find explicit SSP Runge-Kutta methods with large allowable time-step, that feature high linear order and simultaneously have the optimal fourth order nonlinear order. These methods have strong stability coefficients that approach those of the linear methods as the number of stages and the linear order is increased. This work shows that when a high linear order method is desired, it may still be worthwhile to use methods with higher nonlinear order.
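
    A classic member of the method class discussed is the Shu-Osher SSPRK(3,3) scheme, built from convex combinations of forward Euler steps. This is the textbook third-order method, not one of the high-linear-order schemes constructed in the paper:

    ```python
    import numpy as np

    def ssprk33(f, u, t, dt):
        # Shu-Osher form: each stage is a convex combination of Euler steps
        u1 = u + dt * f(t, u)
        u2 = 0.75 * u + 0.25 * (u1 + dt * f(t + dt, u1))
        return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(t + 0.5 * dt, u2))

    f = lambda t, u: -u          # linear test ODE u' = -u, exact solution exp(-t)
    u, t, dt = 1.0, 0.0, 0.1
    for _ in range(10):
        u = ssprk33(f, u, t, dt)
        t += dt
    print(u, np.exp(-1.0))  # third-order accurate approximation of exp(-1)
    ```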

  19. An Improved Method for Solving Multiobjective Integer Linear Fractional Programming Problem

    Directory of Open Access Journals (Sweden)

    Meriem Ait Mehdi

    2014-01-01

    Full Text Available We describe an improvement of Chergui and Moulaï’s method (2008) that generates the whole efficient set of a multiobjective integer linear fractional program based on the branch and cut concept. The general step of this method consists in optimizing (maximizing, without loss of generality) one of the fractional objective functions over a subset of the original continuous feasible set; then, if necessary, a branching process is carried out until an integer feasible solution is obtained. At this stage, an efficient cut is built from the criteria’s growth directions in order to discard a part of the feasible domain containing only nonefficient solutions. Our contribution concerns, firstly, the optimization process, in which a linear program (defined later) is solved at each step rather than a linear fractional program. Secondly, local ideal and nadir points are used as bounds to prune branches leading to nonefficient solutions. The computational experiments show that the new method outperforms the old one in all the treated instances.
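    The vertex property that optimization methods for linear fractional objectives rely on can be illustrated directly. The sketch below (a generic illustration, not the authors' branch-and-cut algorithm) enumerates the vertices of a box, at which a linear fractional objective must attain its maximum, since it is quasilinear wherever the denominator keeps one sign.

```python
from itertools import product

def max_linear_fractional_on_box(c, alpha, d, beta, lo, hi):
    """Maximize (c.x + alpha)/(d.x + beta) over the box lo <= x <= hi.
    A linear fractional objective is quasilinear when the denominator
    stays positive, so (as with a linear objective) the maximum over a
    polytope is attained at a vertex; here we enumerate the 2^n box
    vertices directly."""
    best = None
    for v in product(*zip(lo, hi)):
        num = sum(ci * vi for ci, vi in zip(c, v)) + alpha
        den = sum(di * vi for di, vi in zip(d, v)) + beta
        val = num / den
        if best is None or val > best[0]:
            best = (val, v)
    return best

val, x = max_linear_fractional_on_box([1, 2], 0, [1, 1], 1, [0, 0], [1, 1])
print(val, x)  # maximum 1.0, first attained at vertex (0, 1)
```

Exhaustive vertex enumeration is exponential, which is exactly why branch-and-bound style methods with pruning (as in the abstract) are needed for realistic instances.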

  20. Extrapolation of Nitrogen Fertiliser Recommendation Zones for Maize in Kisii District Using Geographical Information Systems

    International Nuclear Information System (INIS)

    Okoth, P.F.; Wamae, D.K.

    1999-01-01

    A GIS database was established for fertiliser recommendation domains in Kisii District by using FURP fertiliser trial results, KSS soils data and MDBP climatic data. These are manipulated in ESRI's (Environmental Systems Research Institute) PC ARCINFO and ARCVIEW software. The extrapolations were only done for the long rains season (March-August) with three to four years of data. GIS technology was used to cluster fertiliser recommendation domains as geographical areas expressed in terms of variation over space, not limited to the site of experiment where a certain agronomic or economic fertiliser recommendation was made. The extrapolation over space was found to be more representative for any recommendation, the result being digital maps describing each area in the geographical space. From the results of the extrapolations, approximately 38,255 ha of the district require zero nitrogen (N) fertilisation while 94,330 ha require 75 kg ha⁻¹ nitrogen fertilisation during the (March-August) long rains. The extrapolation was made difficult because no direct relationships could be established between the available N, % carbon (C) or any of the other soil properties and the obtained yields. Decision rules were, however, developed based on % C, the soil variable with values closest to the obtained yields. A content of 3% organic carbon was found to be the boundary between zero application and 75 kg N application. GIS techniques made it possible to model and extrapolate the results using the available data. The extrapolations still need to be verified with more ground data from fertiliser trials. Data gaps in the soil map left some soil mapping units with no recommendations. Elevation was observed to influence yields and it should be included in future extrapolation by clustering digital elevation models with rainfall data in a spatial model at the district scale
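    The decision rule described above reduces to a one-line threshold test. A sketch, assuming the 3% organic carbon boundary reported in the abstract (which side of the boundary the 3% value itself falls on is an assumption here):

```python
def nitrogen_recommendation(percent_carbon):
    """Decision rule sketched from the study (assumption: soils at or
    above 3% organic C get no N fertiliser; soils below get
    75 kg N/ha for the long rains season)."""
    return 0 if percent_carbon >= 3.0 else 75

print(nitrogen_recommendation(3.5), nitrogen_recommendation(2.0))  # 0 75
```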

  1. Does Prigogine’s Non-linear Thermodynamics Support Popular Philosophical Discussions of Self-Organization?

    Directory of Open Access Journals (Sweden)

    Alexander Pechenkin

    2015-10-01

    Full Text Available The article is concerned with the philosophical talks which became popular in the 1980s and have kept their popularity till now: the philosophical essays about self-organization. The author attempts to find out to what extent these essays are founded on the scientific theory to which they regularly refer, that is, Ilya Prigogine's non-linear thermodynamics. The author insists that the equivalent of self-organization in Prigogine's theoretical physics is the concept of dissipative structure. The concept of self-organization, as it is used in philosophical literature, presupposes a sequence of extrapolations, the first extrapolation being conducted by Prigogine and his coauthors. They began to use the concept of dissipative structure beyond the rigorous theory of this phenomenon. The subsequent step was that the scientific term "dissipative structure" was replaced by the vague concept "self-organization" in many popular and semi-popular books and papers. The author also emphasizes that by placing the concept of self-organization into the framework of philosophical concepts (the picture of the world, the ideals of scientific thought, the contemporary scientific revolution, etc.) a philosopher conducts an extrapolation of extrapolation and comes to a kind of what Edmund Husserl called Weltanschauung ('worldview') philosophy.

  2. A Fifth Order Hybrid Linear Multistep method For the Direct Solution ...

    African Journals Online (AJOL)

    A linear multistep hybrid method (LMHM) with continuous coefficients is considered and directly applied to solve third order initial and boundary value problems (IBVPs). The continuous method is used to obtain Multiple Finite Difference Methods (MFDMs) (each of order 5) which are combined as simultaneous numerical ...

  3. Bulk rock elastic moduli at high pressures, derived from the mineral textures and from extrapolated laboratory data

    International Nuclear Information System (INIS)

    Ullemeyer, K; Keppler, R; Lokajíček, T; Vasin, R N; Behrmann, J H

    2015-01-01

    The elastic anisotropy of bulk rock depends on the mineral textures, the crack fabric and external parameters such as confining pressure. The texture-related contribution to elastic anisotropy can be predicted from the mineral textures; the largely sample-dependent contribution of the other parameters must be determined experimentally. Laboratory measurements of the elastic wave velocities are mostly limited to pressures of the intermediate crust. We describe a method by which the elastic wave velocity trends, and thus the elastic constants, can be extrapolated to the pressure conditions of the lower crust. The extrapolated elastic constants are compared to the texture-derived ones. Pronounced elastic anisotropy is evident for phyllosilicate minerals; hence the approach is demonstrated for two phyllosilicate-rich gneisses with approximately identical volume fractions of the phyllosilicates but different texture types. (paper)

  4. A visual basic program to generate sediment grain-size statistics and to extrapolate particle distributions

    Science.gov (United States)

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2004-01-01

    Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next to last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.1–0.6 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper-water marine environments (e.g. rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft
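    The method of moments mentioned above is straightforward to compute. A minimal sketch of generic weighted moments in phi units (an illustration of the technique, not the GSSTAT program itself):

```python
def moment_statistics(phi_midpoints, weight_percent):
    """Method-of-moments grain-size statistics (after Krumbein &
    Pettijohn): weighted mean, sorting (standard deviation), and
    skewness of a distribution given size-class midpoints in phi
    units and the weight percent in each class."""
    total = sum(weight_percent)
    f = [w / total for w in weight_percent]          # class fractions
    mean = sum(fi * m for fi, m in zip(f, phi_midpoints))
    var = sum(fi * (m - mean) ** 2 for fi, m in zip(f, phi_midpoints))
    sd = var ** 0.5
    skew = sum(fi * (m - mean) ** 3 for fi, m in zip(f, phi_midpoints)) / sd ** 3
    return mean, sd, skew

# A symmetric three-class distribution: mean 2 phi, zero skewness.
m, s, k = moment_statistics([1.0, 2.0, 3.0], [25.0, 50.0, 25.0])
print(m, round(s, 4), k)  # 2.0 0.7071 0.0
```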

  5. Extrapolation procedures for calculating high-temperature gibbs free energies of aqueous electrolytes

    International Nuclear Information System (INIS)

    Tremaine, P.R.

    1979-01-01

    Methods for calculating high-temperature Gibbs free energies of mononuclear cations and anions from room-temperature data are reviewed. Emphasis is given to species required for oxide solubility calculations relevant to mass transport situations in the nuclear industry. Free energies predicted by each method are compared to selected values calculated from recently reported solubility studies and other literature data. Values for monatomic ions estimated using the assumption C̄°p(T) = C̄°p(298) agree best with experiment to 423 K. From 423 K to 523 K, free energies from an electrostatic model for ion hydration are more accurate. Extrapolations for hydrolyzed species are limited by a lack of room-temperature entropy data, and expressions for estimating these entropies are discussed. (orig.)
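    The constant-heat-capacity assumption noted above gives a closed-form extrapolation via the standard identities H(T) = H(T0) + Cp(T − T0) and S(T) = S(T0) + Cp ln(T/T0), so that G(T) = G(T0) − S(T0)(T − T0) + Cp[(T − T0) − T ln(T/T0)]. A sketch of this textbook identity (not the electrostatic hydration model):

```python
import math

def gibbs_extrapolated(g0, s0, cp0, T, T0=298.15):
    """Extrapolate a Gibbs free energy to temperature T under the
    constant-heat-capacity assumption Cp(T) = Cp(T0) discussed in the
    abstract:
        G(T) = G(T0) - S(T0)*(T - T0) + Cp*[(T - T0) - T*ln(T/T0)]
    Units must be consistent (e.g. J/mol for G, J/(mol K) for S, Cp)."""
    return g0 - s0 * (T - T0) + cp0 * ((T - T0) - T * math.log(T / T0))

# With Cp = 0 the extrapolation reduces to the -S*(T - T0) term alone:
# 100 J/(mol K) of entropy over a 100 K rise lowers G by 10 kJ/mol.
print(round(gibbs_extrapolated(0.0, 100.0, 0.0, 398.15), 3))  # -10000.0
```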

  6. Generalized empirical equation for the extrapolated range of electrons in elemental and compound materials

    International Nuclear Information System (INIS)

    Lima, W. de; Poli CR, D. de

    1999-01-01

    The extrapolated range R_ex of electrons is useful for various purposes in research and in the application of electrons, for example, in polymer modification, electron energy determination and estimation of effects associated with deep penetration of electrons. A number of works have used empirical equations to express the extrapolated range for some elements. In this work a generalized empirical equation, very simple and accurate, in the energy region 0.3 keV–50 MeV is proposed. The extrapolated range for elements, in organic or inorganic molecules and compound materials, can be well expressed as a function of the atomic number Z, or of two empirical parameters Z_m for molecules and Z_c for compound materials instead of Z. (author)

  7. Strong Stability Preserving Explicit Linear Multistep Methods with Variable Step Size

    KAUST Repository

    Hadjimichael, Yiannis; Ketcheson, David I.; Loczi, Lajos; Németh, Adrián

    2016-01-01

    Strong stability preserving (SSP) methods are designed primarily for time integration of nonlinear hyperbolic PDEs, for which the permissible SSP step size varies from one step to the next. We develop the first SSP linear multistep methods (of order

  8. A Novel Method of Robust Trajectory Linearization Control Based on Disturbance Rejection

    Directory of Open Access Journals (Sweden)

    Xingling Shao

    2014-01-01

    Full Text Available A novel method of robust trajectory linearization control for a class of nonlinear systems with uncertainties, based on disturbance rejection, is proposed. Firstly, on the basis of the trajectory linearization control (TLC) method, a feedback linearization based control law is designed to transform the original tracking error dynamics to the canonical integral-chain form. To reduce the influence of uncertainties, a linear extended state observer (LESO) with the tracking error as input is constructed to estimate the tracking error vector, as well as the uncertainties, in an integrated manner. Meanwhile, the boundedness of the estimation error is investigated by theoretical analysis. In addition, a decoupled controller based on the LESO, which is simple in form and easy to tune, is synthesized to realize output tracking for the closed-loop system. The closed-loop stability of the system under the proposed LESO-based control structure is established. Simulation results are also presented to illustrate the effectiveness of the control strategy.
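    A generic third-order linear extended state observer of the kind referred to above can be sketched in a few lines. Bandwidth-parameterized gains (poles placed at −w0, so the gains are 3w0, 3w0², w0³) are a common textbook choice; this is an illustrative LESO, not the paper's exact design.

```python
def leso_step(z, y, u, dt, b0=1.0, w0=20.0):
    """One Euler step of a third-order linear extended state observer
    for a plant y'' = b0*u + f, estimating z = [y, y', f], where f
    lumps the unknown disturbance/uncertainty. Observer poles are
    placed at -w0 via gains (3*w0, 3*w0**2, w0**3)."""
    b1, b2, b3 = 3 * w0, 3 * w0 ** 2, w0 ** 3
    e = y - z[0]  # measurement residual drives all three estimates
    return [z[0] + dt * (z[1] + b1 * e),
            z[1] + dt * (z[2] + b2 * e + b0 * u),
            z[2] + dt * (b3 * e)]

# Track a double integrator y'' = u + f with u = 0 and constant
# disturbance f = 2; the extended state z[2] converges to f.
dt, f = 1e-3, 2.0
y, yd = 0.0, 0.0
z = [0.0, 0.0, 0.0]
for _ in range(5000):
    z = leso_step(z, y, 0.0, dt)
    y, yd = y + dt * yd, yd + dt * (0.0 + f)
print(round(z[2], 3))  # 2.0
```

Once z[2] tracks the lumped disturbance, a controller can cancel it by subtracting z[2]/b0 from the control input, which is the disturbance-rejection idea the abstract builds on.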

  9. Improvement of linear reactivity methods and application to long range fuel management

    International Nuclear Information System (INIS)

    Woehlke, R.A.; Quan, B.L.

    1982-01-01

    The original development of the linear reactivity theory assumes flat burnup, batch by batch. The validity of this assumption is explored using multicycle burnup data generated with a detailed 3-D SIMULATE model. The results show that the linear reactivity method can be improved by correcting for batchwise power sharing. The application of linear reactivity to long range fuel management is demonstrated in several examples. Correcting for batchwise power sharing improves the accuracy of the analysis. However, with regard to the sensitivity of fuel cost to changes in various parameters, the corrected and uncorrected linear reactivity theories give remarkably similar results
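    The flat-burnup form of the linear reactivity model yields a simple closed-form cycle length. A sketch under the usual textbook assumptions (batch reactivity linear in burnup, equal power sharing across batches, i.e. exactly the uncorrected model the abstract improves on):

```python
def cycle_burnup(rho0, A, n):
    """Linear reactivity model: batch reactivity rho(b) = rho0 - A*b.
    With n equal batches sharing power equally (the flat-burnup
    assumption), batch j has burnup j*Bc at end of cycle, and the
    core-average reactivity reaches zero when
        Bc = 2*rho0 / (A*(n + 1)).
    Units of Bc follow from the units of A (reactivity per burnup)."""
    return 2.0 * rho0 / (A * (n + 1))

# Discharge burnup n*Bc increases with the number of batches n
# (illustrative rho0 and A, not from the paper):
print([round(n * cycle_burnup(0.25, 0.01, n), 1) for n in (1, 2, 3)])
# [25.0, 33.3, 37.5]
```

The diminishing returns visible above (25 → 33.3 → 37.5) are the classic linear-reactivity argument for multi-batch fuel management.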

  10. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    International Nuclear Information System (INIS)

    Kutepov, A. L.

    2017-01-01

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  11. Hydrologic nonstationarity and extrapolating models to predict the future: overview of session and proceeding

    Directory of Open Access Journals (Sweden)

    F. H. S. Chiew

    2015-06-01

    Full Text Available This paper provides an overview of this IAHS symposium and PIAHS proceeding on "hydrologic nonstationarity and extrapolating models to predict the future". The paper provides a brief review of research on this topic, presents approaches used to account for nonstationarity when extrapolating models to predict the future, and summarises the papers in this session and proceeding.

  12. Effective ellipsoidal models for wavefield extrapolation in tilted orthorhombic media

    KAUST Repository

    Waheed, Umair Bin

    2016-04-22

    Wavefield computations using the ellipsoidally anisotropic extrapolation operator offer significant cost reduction compared to that for the orthorhombic case, especially when the symmetry planes are tilted and/or rotated. However, ellipsoidal anisotropy does not provide accurate wavefield representation or imaging for media of orthorhombic symmetry. Therefore, we propose the use of ‘effective ellipsoidally anisotropic’ models that correctly capture the kinematic behaviour of wavefields for tilted orthorhombic (TOR) media. We compute effective velocities for the ellipsoidally anisotropic medium using a kinematic high-frequency representation of the TOR wavefield, obtained by solving the TOR eikonal equation. The effective model allows us to use the cheaper ellipsoidally anisotropic wave extrapolation operators. Although the effective models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including frequency dependency and caustics, if present, with reasonable accuracy. The proposed methodology offers a much better cost versus accuracy trade-off for wavefield computations in TOR media, particularly for media of low to moderate anisotropic strength. Furthermore, the computed wavefield solution is free from shear-wave artefacts as opposed to the conventional finite-difference based TOR wave extrapolation scheme. We demonstrate the applicability and usefulness of our formulation through numerical tests on synthetic TOR models. © 2016 Institute of Geophysics of the ASCR, v.v.i.

  13. General methods for determining the linear stability of coronal magnetic fields

    Science.gov (United States)

    Craig, I. J. D.; Sneyd, A. D.; Mcclymont, A. N.

    1988-01-01

    A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are speeded up by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak.
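    The power method mentioned above is simple to sketch. A generic power iteration for the dominant eigenvalue (the growth rate in a linear stability computation), without the cyclic-shift acceleration the authors use:

```python
def power_method(matvec, v, iters=200):
    """Power iteration: repeated application of a linear operator
    aligns the iterate with the dominant eigenvector; the ratio of
    components then gives the dominant eigenvalue."""
    for _ in range(iters):
        w = matvec(v)
        norm = max(abs(x) for x in w)   # sup-norm normalization
        v = [x / norm for x in w]
    w = matvec(v)
    # eigenvalue estimate from the component of largest magnitude
    i = max(range(len(v)), key=lambda k: abs(v[k]))
    return w[i] / v[i], v

A = [[2.0, 1.0], [1.0, 2.0]]
lam, v = power_method(
    lambda x: [sum(a * xi for a, xi in zip(row, x)) for row in A],
    [1.0, 0.0])
print(round(lam, 6))  # 3.0, the dominant eigenvalue of [[2,1],[1,2]]
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is why acceleration techniques (such as the cyclic shifts in the abstract) are valuable when that ratio is close to one.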

  14. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    Science.gov (United States)

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analysing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e. initial conditions and constant control signals) can be provided which are necessary for remedying the non-identifiability and unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common in linear biological models, can be remedied. Several linear compartment models including an insulin receptor dynamics model are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
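    Detecting linear dependencies among sensitivity-matrix columns can be done by elimination. A small sketch of generic rank analysis (an illustration of the column-dependence idea, not the authors' full procedure):

```python
def dependent_columns(S, tol=1e-10):
    """Flag columns of a sensitivity matrix S (rows: observations,
    columns: parameters) that are linear combinations of earlier
    columns, via Gaussian elimination with partial pivoting. Such
    columns indicate parameters that cannot be estimated
    independently (a non-identifiable set)."""
    rows = [row[:] for row in S]
    m, n = len(rows), len(rows[0])
    dep, r = [], 0
    for j in range(n):
        p = max(range(r, m), key=lambda i: abs(rows[i][j]), default=None)
        if p is None or abs(rows[p][j]) < tol:
            dep.append(j)          # no new pivot: column is dependent
            continue
        rows[r], rows[p] = rows[p], rows[r]
        piv = rows[r][j]
        for i in range(m):
            if i != r and abs(rows[i][j]) > tol:
                fac = rows[i][j] / piv
                rows[i] = [a - fac * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return dep

# Column 2 equals column 0 + column 1, so it is flagged as dependent.
S = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 2.0]]
print(dependent_columns(S))  # [2]
```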

  15. Predicting treatment effect from surrogate endpoints and historical trials: an extrapolation involving probabilities of a binary outcome or survival to a specific time.

    Science.gov (United States)

    Baker, Stuart G; Sargent, Daniel J; Buyse, Marc; Burzykowski, Tomasz

    2012-03-01

    Using multiple historical trials with surrogate and true endpoints, we consider various models to predict the effect of treatment on a true endpoint in a target trial in which only a surrogate endpoint is observed. This predicted result is computed using (1) a prediction model (mixture, linear, or principal stratification) estimated from historical trials and the surrogate endpoint of the target trial and (2) a random extrapolation error estimated from successively leaving out each trial among the historical trials. The method applies to either binary outcomes or survival to a particular time that is computed from censored survival data. We compute a 95% confidence interval for the predicted result and validate its coverage using simulation. To summarize the additional uncertainty from using a predicted instead of true result for the estimated treatment effect, we compute its multiplier of standard error. Software is available for download. © 2011, The International Biometric Society. No claim to original US government works.

  16. A linear iterative unfolding method

    International Nuclear Information System (INIS)

    László, András

    2012-01-01

    A frequently faced task in experimental physics is to measure the probability distribution of some quantity. Often this quantity to be measured is smeared by a non-ideal detector response or by some physical process. The procedure of removing this smearing effect from the measured distribution is called unfolding, and is a delicate problem in signal processing, due to the well-known numerical ill behavior of this task. Various methods were invented which, given some assumptions on the initial probability distribution, try to regularize the unfolding problem. Most of these methods definitely introduce bias into the estimate of the initial probability distribution. We propose a linear iterative method (motivated by the Neumann series / Landweber iteration known in functional analysis), which has the advantage that no assumptions on the initial probability distribution are needed, and the only regularization parameter is the stopping order of the iteration, which can be used to choose the best compromise between the introduced bias and the propagated statistical and systematic errors. The method is consistent: 'binwise' convergence to the initial probability distribution is proved in the absence of measurement errors under a quite general condition on the response function. This condition holds for practical applications such as convolutions, calorimeter response functions, momentum reconstruction response functions based on tracking in a magnetic field, etc. In the presence of measurement errors, explicit formulae for the propagation of the three important error terms are provided: bias error (distance from the unknown to-be-reconstructed initial distribution at a finite iteration order), statistical error, and systematic error. A trade-off between these three error terms can be used to define an optimal iteration stopping criterion, and the errors can be estimated there. We provide a numerical C library for the implementation of the method, which incorporates automatic
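    The Landweber iteration that motivates the method can be sketched directly. A minimal example on a toy 2-bin response matrix (illustrative numbers, not from the paper); as the abstract notes, the iteration order is the regularization parameter, and early stopping trades bias against error amplification:

```python
def landweber_unfold(R, y, iters, tau):
    """Landweber iteration x_{k+1} = x_k + tau * R^T (y - R x_k),
    a linear iterative unfolding scheme. Converges for
    0 < tau < 2 / sigma_max(R)^2."""
    n = len(R[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - sum(Rij * xj for Rij, xj in zip(row, x))
             for yi, row in zip(y, R)]
        for j in range(n):
            x[j] += tau * sum(R[i][j] * r[i] for i in range(len(R)))
    return x

# Mildly smearing response; the measured y comes from truth x = [1, 3].
R = [[0.8, 0.2], [0.2, 0.8]]
y = [0.8 * 1.0 + 0.2 * 3.0, 0.2 * 1.0 + 0.8 * 3.0]
print([round(v, 3) for v in landweber_unfold(R, y, 500, 0.5)])
# [1.0, 3.0]
```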

  17. Efficient Estimation of Extreme Non-linear Roll Motions using the First-order Reliability Method (FORM)

    DEFF Research Database (Denmark)

    Jensen, Jørgen Juncher

    2007-01-01

    In on-board decision support systems efficient procedures are needed for real-time estimation of the maximum ship responses to be expected within the next few hours, given on-line information on the sea state and user defined ranges of possible headings and speeds. For linear responses standard frequency domain methods can be applied. To non-linear responses like the roll motion, standard methods like direct time domain simulations are not feasible due to the required computational time. However, the statistical distribution of non-linear ship responses can be estimated very accurately using the first-order reliability method (FORM), well-known from structural reliability problems. To illustrate the proposed procedure, the roll motion is modelled by a simplified non-linear procedure taking into account non-linear hydrodynamic damping, time-varying restoring and wave excitation moments...

  18. Hierarchical and Non-Hierarchical Linear and Non-Linear Clustering Methods to “Shakespeare Authorship Question”

    Directory of Open Access Journals (Sweden)

    Refat Aljumily

    2015-09-01

    Full Text Available A few literary scholars have long claimed that Shakespeare did not write some of his best plays (history plays and tragedies) and have proposed, at one time or another, various suspect authorship candidates. Most modern-day Shakespeare scholars have rejected this claim, arguing that his name appearing on the plays and poems as their author is strong evidence that Shakespeare wrote them. This has led to an ongoing scholarly academic debate for quite some time. Stylometry is a fast-growing field often used to attribute authorship to anonymous or disputed texts. Stylometric attempts to resolve this literary puzzle have raised interesting questions over the past few years. The following paper contributes to "the Shakespeare authorship question" by using a mathematically based methodology to examine the hypothesis that Shakespeare wrote all the disputed plays traditionally attributed to him. More specifically, the methodology used here is based on Mean Proximity, as a linear hierarchical clustering method, and on Principal Components Analysis, as a non-hierarchical linear clustering method. It is also based, for the first time in the domain, on Self-Organizing Map U-Matrix and Voronoi Map, as non-linear clustering methods, to cover the possibility that our data contains significant non-linearities. The Vector Space Model (VSM) is used to convert texts into vectors in a high-dimensional space. The aim is to compare the degrees of similarity within and between limited samples of text (the disputed plays) across the various works and plays assumed to have been written by Shakespeare and by possible alternative authors, notably Sir Francis Bacon, Christopher Marlowe, John Fletcher, and Thomas Kyd, where "similarity" is defined in terms of a correlation/distance coefficient measure based on the frequency-of-usage profiles of function words, word bi-grams, and character triple-grams. The claim that Shakespeare authored all the disputed

  19. A General Linear Method for Equating with Small Samples

    Science.gov (United States)

    Albano, Anthony D.

    2015-01-01

    Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
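    Classical linear observed-score equating, the baseline such methods generalize, is a two-moment transformation. A sketch with illustrative numbers:

```python
def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Classical linear observed-score equating: map score x on form X
    to the form-Y scale by matching means and standard deviations,
        l(x) = (sd_y / sd_x) * (x - mu_x) + mu_y.
    With small samples, the four moments are the only quantities that
    must be estimated, which is why linear methods are attractive."""
    return sd_y / sd_x * (x - mu_x) + mu_y

# A score one standard deviation above the form-X mean maps to one
# standard deviation above the form-Y mean.
print(linear_equate(30.0, 25.0, 5.0, 27.0, 6.0))  # 33.0
```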

  20. Linear density response function in the projector augmented wave method

    DEFF Research Database (Denmark)

    Yan, Jun; Mortensen, Jens Jørgen; Jacobsen, Karsten Wedel

    2011-01-01

    We present an implementation of the linear density response function within the projector-augmented wave method with applications to the linear optical and dielectric properties of both solids, surfaces, and interfaces. The response function is represented in plane waves while the single...... functions of Si, C, SiC, AlP, and GaAs compare well with previous calculations. While optical properties of semiconductors, in particular excitonic effects, are generally not well described by ALDA, we obtain excellent agreement with experiments for the surface loss function of graphene and the Mg(0001...

  1. Human risk assessment of dermal and inhalation exposures to chemicals assessed by route-to-route extrapolation: the necessity of kinetic data.

    Science.gov (United States)

    Geraets, Liesbeth; Bessems, Jos G M; Zeilmaker, Marco J; Bos, Peter M J

    2014-10-01

    In toxicity testing the oral route is in general the first choice. Often, appropriate inhalation and dermal toxicity data are absent. Risk assessment for these latter routes usually has to rely on route-to-route extrapolation starting from oral toxicity data. Although it is generally recognized that the uncertainties involved are (too) large, route-to-route extrapolation is applied in many cases because of a strong need for an assessment of the risks linked to a given exposure scenario. For an adequate route-to-route extrapolation the availability of at least some basic toxicokinetic data is a prerequisite. These toxicokinetic data include all phases of kinetics, from absorption (both absorbed fraction and absorption rate, for both the starting route and the route of interest) via distribution and biotransformation to excretion. However, in practice only differences in absorption between the different routes are accounted for. The present paper demonstrates the necessity of route-specific absorption data by showing the impact of its absence on the uncertainty of the human health risk assessment using route-to-route extrapolation. Quantification of the absorption (by in vivo, in vitro or in silico methods), particularly for the starting route, is considered essential. Copyright © 2014 Elsevier Inc. All rights reserved.
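    The absorption-only correction described above amounts to a single ratio. A sketch with purely illustrative numbers (the function name and values are assumptions for illustration, not from the paper):

```python
def route_to_route_dose(oral_pod, f_abs_oral, f_abs_route):
    """Route-to-route extrapolation as commonly practiced (per the
    abstract, only absorption differences are corrected for): scale an
    oral point of departure by the ratio of absorbed fractions so that
    the internal (absorbed) dose is matched across routes."""
    return oral_pod * f_abs_oral / f_abs_route

# Oral point of departure 100 (dose units), 50% absorbed orally but
# only 25% by the route of interest -> equivalent external dose 200.
print(route_to_route_dose(100.0, 0.5, 0.25))  # 200.0
```

The paper's point is precisely that when f_abs for either route is unknown, this single ratio carries large, often unquantified uncertainty.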

  2. On a new iterative method for solving linear systems and comparison results

    Science.gov (United States)

    Jing, Yan-Fei; Huang, Ting-Zhu

    2008-10-01

    In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is then established, which is both theoretically and numerically proven to be better than (or at least as good as) Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
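    For reference, the baseline Gauss-Seidel iteration that both methods build on can be sketched as follows (generic textbook form, not the projection-based variant):

```python
def gauss_seidel(A, b, iters=100):
    """Plain Gauss-Seidel iteration for A x = b: sweep through the
    unknowns in order, always using the newest available values.
    Converges e.g. for strictly diagonally dominant A."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Diagonally dominant 2x2 system with exact solution x = [2, 1].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 5.0]
print([round(v, 6) for v in gauss_seidel(A, b)])  # [2.0, 1.0]
```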

  3. One step linear reconstruction method for continuous wave diffuse optical tomography

    Science.gov (United States)

    Ukhrowiyah, N.; Yasin, M.

    2017-09-01

    A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride-based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride-based material and breast phantom samples provide the experimental data. Comparisons between experimental and simulated results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method closely match the original objects. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.

  4. A Galerkin Finite Element Method for Numerical Solutions of the Modified Regularized Long Wave Equation

    Directory of Open Access Journals (Sweden)

    Liquan Mei

    2014-01-01

    Full Text Available A Galerkin method for a modified regularized long wave equation is studied using finite elements in space and the Crank-Nicolson and Runge-Kutta schemes in time. In addition, an extrapolation technique is used to transform the nonlinear system into a linear system in order to improve the time accuracy of this method. A Fourier stability analysis shows the method to be marginally stable. Three invariants of motion are investigated. Numerical experiments are presented to check the theoretical study of this method.
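    The extrapolation trick that converts the nonlinear Crank-Nicolson system into a linear one can be shown on a scalar model problem. A sketch for u' = -u² (an illustration of the linearization technique, not the MRLW solver): the nonlinear coefficient at the half-step is replaced by the two-level extrapolation u* = (3u^n - u^{n-1})/2, so each step solves a linear equation while keeping second-order accuracy.

```python
def cn_extrapolated(u0, dt, steps):
    """Crank-Nicolson for u' = -u**2 with the nonlinear coefficient
    linearized by extrapolation u* = (3*u^n - u^{n-1})/2, so each
    step solves the LINEAR equation
        (u^{n+1} - u^n)/dt = -u* * (u^{n+1} + u^n)/2 ."""
    um, u = u0, u0  # start the two-level scheme with u^{-1} = u^0
    for _ in range(steps):
        ustar = (3.0 * u - um) / 2.0
        unew = u * (1.0 - 0.5 * dt * ustar) / (1.0 + 0.5 * dt * ustar)
        um, u = u, unew
    return u

# Exact solution of u' = -u**2 with u(0) = 1 is u(t) = 1/(1 + t),
# so u(1) = 0.5; the scheme's error at t = 1 is O(dt^2).
err = abs(cn_extrapolated(1.0, 1e-3, 1000) - 0.5)
print(err < 1e-4)  # True
```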

  5. Tests of the linearity assumption in the dose-effect relationship for radiation-induced cancer

    International Nuclear Information System (INIS)

    Cohen, A.F.; Cohen, B.L.

    1978-01-01

    The validity of the BEIR linear extrapolation to low doses of the dose-effect relationship for radiation-induced cancer is tested using natural radiation, making use of selectivity on type of cancer, sex, age group, geographic area, and time period. For lung cancer, a linear interpolation between zero dose - zero effect and the data from radon-induced cancers in miners over-estimates the total number of observed lung cancers in many countries in the early years of this century; the discrepancy is substantially increased if the 30-44 year age range and/or only females are considered, and by the fact that many other causes of lung cancer are shown to have been important at that time. The degree to which changes of diagnostic efficiency with time can influence the analysis is considered at some length. It is concluded that the linear relationship substantially over-estimates the effects of low radiation doses. A similar analysis is applied to leukemia induced by natural radiation, applying selectivity by age, sex, natural background level, and date, and considering other causes. It is concluded that effects substantially larger than those obtained from linear extrapolation are excluded. The use of the selectivities mentioned above is justified by the fact that the incidence of cancer or leukemia is an upper limit on the rate at which it is caused by radiation effects; in determining upper limits it is justifiable to select situations which minimize it. (author)

  6. Electric field control methods for foil coils in high-voltage linear actuators

    NARCIS (Netherlands)

    Beek, van T.A.; Jansen, J.W.; Lomonova, E.A.

    2015-01-01

    This paper describes multiple electric field control methods for foil coils in high-voltage coreless linear actuators. The field control methods are evaluated using 2-D and 3-D boundary element methods. A comparison is presented between the field control methods and their ability to mitigate

  7. The optimizied expansion method for wavefield extrapolation

    KAUST Repository

    Wu, Zedong; Alkhalifah, Tariq Ali

    2013-01-01

    For inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain operator. In this abstract, we propose an optimized expansion method that can approximate this operator with its low-rank representation. The rank defines the number

  8. Empirical models of the Solar Wind : Extrapolations from the Helios & Ulysses observations back to the corona

    Science.gov (United States)

    Maksimovic, M.; Zaslavsky, A.

    2017-12-01

    We will present extrapolations of the HELIOS & Ulysses proton density, temperature & bulk velocity back to the corona. Using simple mass-flux conservation we show a very good agreement between these extrapolations and the current state of knowledge of these parameters in the corona, based on SOHO measurements. These simple extrapolations could potentially be very useful for the science planning of both the Parker Solar Probe and Solar Orbiter missions. Finally, we will also present some modelling considerations, based on simple energy balance equations, which arise from these empirical observational models.
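For a steady, spherically expanding wind, the mass-flux conservation invoked above reduces to n1*v1*r1^2 = n2*v2*r2^2. The sketch below uses typical illustrative 1 AU values, not the HELIOS/Ulysses data:

```python
# Mass-flux conservation for a steady, spherically expanding wind:
# n1 * v1 * r1^2 = n2 * v2 * r2^2.  Given assumed, typical 1 AU values
# (illustrative only, not taken from the record), estimate the proton
# density closer to the Sun where the wind speed is lower.

R_SUN_AU = 1 / 215.0        # one solar radius in AU (approximate)

def density_at(r2_au, v2_kms, n1_cm3=5.0, v1_kms=400.0, r1_au=1.0):
    """Extrapolate proton density from (n1, v1) at r1 back to r2,
    assuming the mass flux n*v*r^2 is conserved along the flow."""
    return n1_cm3 * v1_kms * r1_au**2 / (v2_kms * r2_au**2)

# Density at 10 solar radii, assuming the wind there moves at 100 km/s:
n_corona = density_at(10 * R_SUN_AU, v2_kms=100.0)
print(f"{n_corona:.0f} cm^-3")
```

The roughly three-orders-of-magnitude density increase toward the corona is what makes such back-extrapolations a useful consistency check against coronal observations.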

  9. The separation-combination method of linear structures in remote sensing image interpretation and its application

    International Nuclear Information System (INIS)

    Liu Linqin

    1991-01-01

    The separation-combination method, a new kind of analysis method for linear structures in remote sensing image interpretation, is introduced taking northwestern Fujian as the example, and its practical application is examined. Practice shows that the application results not only reflect the intensities of linear structures in all directions at different locations, but also contribute to the zonation of linear structures and display their spatial distribution laws. Based on analyses of linear structures, the method can provide more remote sensing information for the study of regional mineralization laws and for guiding ore-finding in combination with mineralization

  10. Preconditioned Iterative Methods for Solving Weighted Linear Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Bru, R.; Marín, J.; Mas, J.; Tůma, Miroslav

    2014-01-01

    Roč. 36, č. 4 (2014), A2002-A2022 ISSN 1064-8275 Institutional support: RVO:67985807 Keywords : preconditioned iterative methods * incomplete decompositions * approximate inverses * linear least squares Subject RIV: BA - General Mathematics Impact factor: 1.854, year: 2014

  11. Two new modified Gauss-Seidel methods for linear system with M-matrices

    Science.gov (United States)

    Zheng, Bing; Miao, Shu-Xin

    2009-12-01

    In 2002, H. Kotakemori et al. proposed the modified Gauss-Seidel (MGS) method for solving the linear system with the preconditioner [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner () J. Comput. Appl. Math. 145 (2002) 373-378]. Since this preconditioner is constructed by only the largest element on each row of the upper triangular part of the coefficient matrix, the preconditioning effect is not observed on the nth row. In the present paper, to deal with this drawback, we propose two new preconditioners. The convergence and comparison theorems of the modified Gauss-Seidel methods with these two preconditioners for solving the linear system are established. The convergence rates of the new proposed preconditioned methods are compared. In addition, numerical experiments are used to show the effectiveness of the new MGS methods.
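For reference, the unpreconditioned baseline that the MGS variants accelerate is plain Gauss-Seidel iteration; a minimal sketch (illustrative M-matrix, not from the paper):

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, maxiter=500):
    """Plain Gauss-Seidel iteration for Ax = b (the baseline that the
    preconditioned MGS variants accelerate).  Converges for M-matrices."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(maxiter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x_old[i+1:]
            s = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x

# A small M-matrix system (diagonally dominant, non-positive off-diagonals).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = gauss_seidel(A, b)
print(np.round(x, 6))
```

The preconditioners discussed in the record modify A before this sweep so that the iteration matrix has a smaller spectral radius, including on the last row.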

  12. Arbitrary Lagrangian-Eulerian method for non-linear problems of geomechanics

    International Nuclear Information System (INIS)

    Nazem, M; Carter, J P; Airey, D W

    2010-01-01

    In many geotechnical problems it is vital to consider the geometrical non-linearity caused by large deformation in order to capture a more realistic model of the true behaviour. The solutions so obtained should then be more accurate and reliable, which should ultimately lead to cheaper and safer design. The Arbitrary Lagrangian-Eulerian (ALE) method originated from fluid mechanics, but has now been well established for solving large deformation problems in geomechanics. This paper provides an overview of the ALE method and its challenges in tackling problems involving non-linearities due to material behaviour, large deformation, changing boundary conditions and time-dependency, including material rate effects and inertia effects in dynamic loading applications. Important aspects of ALE implementation into a finite element framework will also be discussed. This method is then employed to solve some interesting and challenging geotechnical problems such as the dynamic bearing capacity of footings on soft soils, consolidation of a soil layer under a footing, and the modelling of dynamic penetration of objects into soil layers.

  13. New nonlinear methods for linear transport calculations

    International Nuclear Information System (INIS)

    Adams, M.L.

    1993-01-01

    We present a new family of methods for the numerical solution of the linear transport equation. With these methods an iteration consists of an 'S_N sweep' followed by an 'S_2-like' calculation. We show, by analysis as well as numerical results, that iterative convergence is always rapid. We show that this rapid convergence does not depend on a consistent discretization of the S_2-like equations - they can be discretized independently from the S_N equations. We show further that independent discretizations can offer significant advantages over consistent ones. In particular, we find that in a wide range of problems, an accurate discretization of the S_2-like equation can be combined with a crude discretization of the S_N equations to produce an accurate S_N answer. We demonstrate this by analysis as well as numerical results. (orig.)

  14. Evaluating In Vitro-In Vivo Extrapolation of Toxicokinetics.

    Science.gov (United States)

    Wambaugh, John F; Hughes, Michael F; Ring, Caroline L; MacMillan, Denise K; Ford, Jermaine; Fennell, Timothy R; Black, Sherry R; Snyder, Rodney W; Sipes, Nisha S; Wetmore, Barbara A; Westerhout, Joost; Setzer, R Woodrow; Pearce, Robert G; Simmons, Jane Ellen; Thomas, Russell S

    2018-05-01

    Prioritizing the risk posed by thousands of chemicals potentially present in the environment requires exposure, toxicity, and toxicokinetic (TK) data, which are often unavailable. Relatively high-throughput, in vitro TK (HTTK) assays and in vitro-to-in vivo extrapolation (IVIVE) methods have been developed to predict TK, but most of the in vivo TK data available to benchmark these methods are from pharmaceuticals. Here we report on new, in vivo rat TK experiments for 26 non-pharmaceutical chemicals with environmental relevance. Both intravenous and oral dosing were used to calculate bioavailability. These chemicals, and an additional 19 chemicals (including some pharmaceuticals) from previously published in vivo rat studies, were systematically analyzed to estimate in vivo TK parameters (e.g., volume of distribution [Vd], elimination rate). For each of the chemicals, rat-specific HTTK data were available and key TK predictions were examined: oral bioavailability, clearance, Vd, and uncertainty. For the non-pharmaceutical chemicals, predictions for bioavailability were not effective. While no pharmaceutical was absorbed at less than 10%, the fraction bioavailable for non-pharmaceutical chemicals was as low as 0.3%. Total clearance was generally more under-estimated for non-pharmaceuticals, and Vd methods calibrated to pharmaceuticals may not be appropriate for other chemicals. However, the steady-state, peak, and time-integrated plasma concentrations of non-pharmaceuticals were predicted with reasonable accuracy. The plasma concentration predictions improved when experimental measurements of bioavailability were incorporated. In summary, HTTK and IVIVE methods are adequately robust to be applied to high-throughput in vitro toxicity screening data of environmentally relevant chemicals for prioritization based on human health risks.
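A minimal sketch of the kind of steady-state prediction such studies benchmark — a one-compartment Css estimate in which incorporating a measured bioavailability shifts the result — with made-up illustrative numbers, not values from the paper:

```python
# One-compartment steady-state plasma concentration under constant oral
# dosing: Css = F * dose_rate / CL.  All numbers below are illustrative
# assumptions, not data from the record.

def css_plasma(dose_mg_per_kg_day, f_bioavailable, clearance_L_per_kg_day):
    """Steady-state plasma concentration (mg/L) from oral dose rate,
    fraction bioavailable F, and total clearance CL."""
    return dose_mg_per_kg_day * f_bioavailable / clearance_L_per_kg_day

# A poorly absorbed non-pharmaceutical (e.g. F = 0.3%) vs. the default
# assumption of complete absorption, at 1 mg/kg/day and CL = 10 L/kg/day:
print(css_plasma(1.0, 1.0, 10.0))    # assumes 100% absorbed
print(css_plasma(1.0, 0.003, 10.0))  # uses a measured bioavailability
```

The two results differ by a factor of ~300, which illustrates why the abstract reports improved predictions once experimental bioavailability is incorporated.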

  15. Non-linear analysis of wave progagation using transform methods and plates and shells using integral equations

    Science.gov (United States)

    Pipkins, Daniel Scott

    Two diverse topics of relevance in modern computational mechanics are treated. The first involves the modeling of linear and non-linear wave propagation in flexible lattice structures. The technique used combines the Laplace transform with the finite element method (FEM). The procedure is to transform the governing differential equations and boundary conditions into the transform domain, where the FEM formulation is carried out. For linear problems, the transformed differential equations can be solved exactly, hence the method is exact; as a result, each member of the lattice structure is modeled using only one element. In the non-linear problem, the method is no longer exact. The approximation introduced is a spatial discretization of the transformed non-linear terms, which are represented in the transform domain by making use of the complex convolution theorem. A weak formulation of the resulting transformed non-linear equations yields a set of element-level matrix equations. The trial and test functions used in the weak formulation correspond to the exact solution of the linear part of the transformed governing differential equation. Numerical results are presented for both linear and non-linear systems. The linear systems modeled are longitudinal and torsional rods and Bernoulli-Euler and Timoshenko beams; for non-linear systems, a viscoelastic rod and a Von Karman type beam are modeled. The second topic is the analysis of plates and shallow shells undergoing finite deflections by the Field/Boundary Element Method. Numerical results are presented for two plate problems. The first is the bifurcation problem associated with a square plate having free boundaries which is loaded by four self-equilibrating corner forces; the results are compared to two existing numerical solutions of the problem, which differ substantially.

  16. A new mini-extrapolation chamber for beta source uniformity measurements

    International Nuclear Information System (INIS)

    Oliveira, M.L.; Caldas, L.V.E.

    2006-01-01

    According to recent international recommendations, beta particle sources should be specified in terms of absorbed dose rates to water at the reference point. However, because of the clinical use of these sources, additional information should be supplied in the calibration reports, including the source uniformity. A new small-volume extrapolation chamber was designed and constructed at the Calibration Laboratory at Instituto de Pesquisas Energeticas e Nucleares, IPEN, Brazil, for the calibration of 90Sr+90Y ophthalmic plaques. This chamber can be used as a primary standard for the calibration of this type of source. Recent additional studies showed the feasibility of using this chamber to perform source uniformity measurements: because of the small effective electrode area, it is possible to perform independent measurements by varying the chamber position in small steps. The aim of the present work was to study the uniformity of a 90Sr+90Y plane ophthalmic plaque utilizing the mini extrapolation chamber developed at IPEN. The uniformity measurements were performed by varying the chamber position in steps of 2 mm along the source central axes (x- and y-directions) and in off-axis steps of 3 mm. The results obtained showed that this small-volume chamber can be used for this purpose with a great advantage: it is a direct method, making a prior calibration of the measurement device against a reference instrument unnecessary, and it provides real-time results, reducing the time necessary for the study and the determination of the uncertainties related to the measurements. (authors)

  17. Regression models in the determination of the absorbed dose with extrapolation chamber for ophthalmological applicators

    International Nuclear Information System (INIS)

    Alvarez R, J.T.; Morales P, R.

    1992-06-01

    The absorbed dose to soft-tissue-equivalent material imparted by ophthalmic applicators (90Sr/90Y, 1850 MBq) is determined using an extrapolation chamber with variable electrode spacing. When the slope of the extrapolation curve is estimated with a simple linear regression model, the dose values are underestimated by 17.7 up to 20.4 percent relative to the estimates obtained with a second-degree polynomial regression model; at the same time, an improvement of up to 50% in the standard error is observed for the quadratic model. Finally, the global uncertainty of the dose is presented, taking into account the reproducibility of the experimental arrangement. In conclusion, for experimental arrangements where the source is in contact with the extrapolation chamber, it is recommended to replace the linear regression model with the quadratic regression model when determining the slope of the extrapolation curve, for more exact and accurate measurements of the absorbed dose. (Author)
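The bias described above can be reproduced with a toy extrapolation curve: when the chamber response has curvature, a straight-line fit misestimates the slope at zero electrode spacing, which is the quantity the absorbed dose is derived from (all numbers illustrative, not the measured data):

```python
import numpy as np

# Synthetic extrapolation-chamber data: reading vs electrode spacing d.
# The "true" response here is deliberately quadratic, so a straight-line
# fit biases the extrapolated slope at d -> 0.
d = np.linspace(0.5, 3.0, 6)              # electrode spacing (mm)
current = 2.0 * d + 0.15 * d**2           # chamber reading (arbitrary units)

lin = np.polyfit(d, current, 1)           # linear model: slope = lin[0]
quad = np.polyfit(d, current, 2)          # quadratic: slope at d=0 is quad[1]
print(lin[0], quad[1])                    # linear slope is biased; true value is 2.0
```

The quadratic fit recovers the true zero-spacing slope, while the linear fit absorbs the curvature into a biased slope, mirroring the paper's recommendation to use the second-degree model when the source sits in contact with the chamber.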

  18. Proposal for an alignment method of the CLIC linear accelerator - From geodesic networks to the active pre-alignment

    International Nuclear Information System (INIS)

    Touze, T.

    2011-01-01

    The Compact Linear Collider (CLIC) is the particle accelerator project proposed by the European Organization for Nuclear Research (CERN) for high-energy physics after the Large Hadron Collider (LHC). Because of the nanometric scale of the CLIC lepton beams, the emittance growth budget is very tight. It induces alignment tolerances on the positions of the CLIC components that have never been achieved before. The last step of the CLIC alignment will be done according to the beam itself; it falls within the competence of the physicists. However, in order to implement the beam-based feedback, a challenging pre-alignment is required: 10 μm at 3σ along a 200 m sliding window. For such a precision, the proposed solution must be compatible with a feedback between the measurement and repositioning systems: the CLIC pre-alignment will have to be active. This thesis does not demonstrate the feasibility of the CLIC active pre-alignment but shows the way to the last developments that have to be done for that purpose. A method is proposed, based on the management of the Helmert transformations between Euclidean coordinate systems, from the geodetic networks to the metrological measurements; this method is likely to solve the CLIC pre-alignment problem. Large scale facilities have been built and Monte Carlo simulations have been made in order to validate the mathematical modeling of the measurement systems and of the alignment references. When this is done, it will be possible to extrapolate the modeling to the entire CLIC length. It will be the last step towards the demonstration of the CLIC pre-alignment feasibility. (author)

  19. Linear Discontinuous Expansion Method using the Subcell Balances for Unstructured Geometry SN Transport

    International Nuclear Information System (INIS)

    Hong, Ser Gi; Kim, Jong Woon; Lee, Young Ouk; Kim, Kyo Youn

    2010-01-01

    The subcell balance methods have been developed for one- and two-dimensional S_N transport calculations. In this paper, a linear discontinuous expansion method using subcell balances (LDEM-SCB) is developed for neutral particle S_N transport calculations in 3D unstructured geometrical problems. At present, this method is applied to tetrahedral meshes. As the name implies, this method assumes a linear distribution of the particle flux in each tetrahedral mesh and uses the balance equations for the four subcells of each tetrahedral mesh to obtain the equations for the four subcell average fluxes, which are the unknowns. This method was implemented in the computer code MUST (Multi-group Unstructured geometry S_N Transport). The numerical tests show that this method gives a more robust solution than DFEM (Discontinuous Finite Element Method)

  20. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method; it did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  1. A New Spectral Local Linearization Method for Nonlinear Boundary Layer Flow Problems

    Directory of Open Access Journals (Sweden)

    S. S. Motsa

    2013-01-01

    Full Text Available We propose a simple and efficient method for solving highly nonlinear systems of boundary layer flow problems with exponentially decaying profiles. The algorithm of the proposed method is based on an innovative idea of linearizing and decoupling the governing systems of equations and reducing them into a sequence of subsystems of differential equations which are solved using spectral collocation methods. The applicability of the proposed method, hereinafter referred to as the spectral local linearization method (SLLM, is tested on some well-known boundary layer flow equations. The numerical results presented in this investigation indicate that the proposed method, despite being easy to develop and numerically implement, is very robust in that it converges rapidly to yield accurate results and is more efficient in solving very large systems of nonlinear boundary value problems of the similarity variable boundary layer type. The accuracy and numerical stability of the SLLM can further be improved by using successive overrelaxation techniques.

  2. Testing for one Generalized Linear Single Order Parameter

    DEFF Research Database (Denmark)

    Ellegaard, Niels Langager; Christensen, Tage Emil; Dyre, Jeppe

    We examine a linear single order parameter model for thermoviscoelastic relaxation in viscous liquids, allowing for a distribution of relaxation times. In this model the relaxation of volume and enthalpy is completely described by the relaxation of one internal order parameter. In contrast to prior... work the order parameter may be chosen to have a non-exponential relaxation. The model predictions contradict the general consensus of the properties of viscous liquids in two ways: (i) The model predicts that following a linear isobaric temperature step, the normalized volume and enthalpy relaxation... responses or extrapolate from measurements of a glassy state away from equilibrium. Starting from a master equation description of inherent dynamics, we calculate the complex thermodynamic response functions. We devise a way of testing for the generalized single order parameter model by measuring 3 complex...

  3. Thermal-Induced Non-linearity of Ag Nano-fluid Prepared using γ-Radiation Method

    International Nuclear Information System (INIS)

    Esmaeil Shahriari; Wan Mahmood Mat Yunus; Zainal Abidin Talib; Elias Saion

    2011-01-01

    The non-linear refractive index of Ag nano-fluids prepared by the γ-radiation method was investigated using a single-beam z-scan technique. Under CW 532 nm laser excitation with a power output of 40 mW, the Ag nano-fluids showed a large thermal-induced non-linear refractive index. In the present work it was determined that the non-linear refractive index for Ag nano-fluids is -4.80x10^-8 cm^2/W. The value of Δn_0 was calculated to be -2.05x10^-4. Our measurements also confirmed that the non-linear phenomenon was caused by the self-defocusing process, making them good candidates for non-linear optical devices. (author)

  4. Stability of numerical method for semi-linear stochastic pantograph differential equations

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-01-01

    Full Text Available Abstract As a particular expression of stochastic delay differential equations, stochastic pantograph differential equations have been widely used in nonlinear dynamics, quantum mechanics, and electrodynamics. In this paper, we mainly study the stability of analytical solutions and numerical solutions of semi-linear stochastic pantograph differential equations. Some suitable conditions for the mean-square stability of an analytical solution are obtained. Then we prove the general mean-square stability of the exponential Euler method for a numerical solution of semi-linear stochastic pantograph differential equations; that is, if an analytical solution is stable, then the exponential Euler method applied to the system is mean-square stable for arbitrary step-size h > 0. Numerical examples further illustrate the obtained theoretical results.

  5. Optimal Homotopy Asymptotic Method for Solving the Linear Fredholm Integral Equations of the First Kind

    Directory of Open Access Journals (Sweden)

    Mohammad Almousa

    2013-01-01

    Full Text Available The aim of this study is to present the use of a semi analytical method called the optimal homotopy asymptotic method (OHAM for solving the linear Fredholm integral equations of the first kind. Three examples are discussed to show the ability of the method to solve the linear Fredholm integral equations of the first kind. The results indicated that the method is very effective and simple.

  6. Restoring the missing features of the corrupted speech using linear interpolation methods

    Science.gov (United States)

    Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.

    2017-10-01

    One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal-to-Noise Ratio (SNR) elements leaves an incomplete spectrogram. In this case, the speech recognizer can either make modifications to the spectrogram in order to restore the missing elements, or restore the missing elements before performing the recognition; both can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by some researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along the time or frequency axis is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction method. The experiments are done under different conditions, such as different window lengths and different utterance lengths. A speech corpus consisting of 20 males and 20 females, each with two different utterances, is used in the experiments. As a result, 80% recognition accuracy is achieved at 25% SNR.
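The along-time interpolation of missing spectrogram elements can be sketched in a few lines (a simplified stand-in for the toolbox, with toy data):

```python
import numpy as np

def interp_along_time(spec, mask):
    """Fill missing spectrogram cells (mask == False) by linear
    interpolation along the time axis, independently in each frequency
    bin -- the 1-D analogue of the geometrical reconstruction methods."""
    out = spec.astype(float).copy()
    t = np.arange(spec.shape[1])
    for f in range(spec.shape[0]):
        known = mask[f]
        if known.any() and not known.all():
            out[f, ~known] = np.interp(t[~known], t[known], spec[f, known])
    return out

# 2 frequency bins x 5 frames; delete the middle frame and restore it.
spec = np.array([[ 1.0, 2.0, 0.0, 4.0, 5.0],
                 [10.0, 8.0, 0.0, 4.0, 2.0]])
mask = np.ones_like(spec, dtype=bool)
mask[:, 2] = False                       # low-SNR cells marked missing
restored = interp_along_time(spec, mask)
print(restored)
```

Interpolating along frequency works the same way with the roles of the axes swapped; the paper's new method combines both directions.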

  7. Linear augmented plane wave method for self-consistent calculations

    International Nuclear Information System (INIS)

    Takeda, T.; Kuebler, J.

    1979-01-01

    O.K. Andersen has recently introduced a linear augmented plane wave method (LAPW) for the calculation of electronic structure that was shown to be computationally fast. A more general formulation of an LAPW method is presented here. It makes use of a freely disposable number of eigenfunctions of the radial Schroedinger equation. These eigenfunctions can be selected in a self-consistent way. The present formulation also results in a computationally fast method. It is shown that Andersen's LAPW is obtained in a special limit from the present formulation. Self-consistent test calculations for copper show the present method to be remarkably accurate. As an application, scalar-relativistic self-consistent calculations are presented for the band structure of FCC lanthanum. (author)

  8. 131I-SPGP internal dosimetry: animal model and human extrapolation

    International Nuclear Information System (INIS)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos; Figueiredo, Suely Gomes de

    2009-01-01

    Scorpaena plumieri, commonly called moreia-ati or manganga, is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP - a protein isolated from the S. plumieri fish - possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the development (design) of antitumoral drugs. In the present work, Soprani's 125I-SPGP biokinetic data were treated by the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to 131I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans assuming a similar ratio for the various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I were considered. (author)

  9. Analysis of blood pressure signal in patients with different ventricular ejection fraction using linear and non-linear methods.

    Science.gov (United States)

    Arcentales, Andres; Rivera, Patricio; Caminal, Pere; Voss, Andreas; Bayes-Genis, Antonio; Giraldo, Beatriz F

    2016-08-01

    Changes in left ventricle function produce alternans in the hemodynamic and electric behavior of the cardiovascular system. A total of 49 cardiomyopathy patients were studied based on the blood pressure (BP) signal and classified according to the left ventricular ejection fraction (LVEF) into low risk (LR: LVEF>35%, 17 patients) and high risk (HR: LVEF≤35%, 32 patients) groups. We propose to characterize these patients using a linear and a nonlinear method, based on spectral estimation and the recurrence plot (RP), respectively. From the BP signal, we extracted each systolic time interval (STI), upward systolic slope (BPsl), and the difference between systolic and diastolic BP, defined as pulse pressure (PP). Afterwards, the best subset of parameters was obtained through the sequential feature selection (SFS) method. According to the results, the best classification was obtained using a combination of linear and nonlinear features from the STI and PP parameters. For STI, the best combination was obtained considering the frequency peak and the diagonal structures of the RP, with an area under the curve (AUC) of 79%. The same results were obtained when comparing PP values. Consequently, the use of combined linear and nonlinear parameters could improve the risk stratification of cardiomyopathy patients.

  10. Experimental validation of calculation methods for structures having shock non-linearity

    International Nuclear Information System (INIS)

    Brochard, D.; Buland, P.

    1987-01-01

    For the seismic analysis of non-linear structures, numerical methods have been developed which need to be validated against experimental results. The aim of this paper is to present the design method of a test program whose results will be used for this purpose. Some applications to nuclear components illustrate this presentation [fr

  11. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

  12. Generalization of Asaoka method to linearly anisotropic scattering: benchmark data in cylindrical geometry

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-11-01

    The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. Here the method has been extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry, and Carlvik's spatial-spherical harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that anisotropic matrix elements can be calculated from a linear combination of a few isotropic matrix elements. In practice this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrix for plane and spherical geometries, and with the N+1 isotropic matrix for cylindrical geometry. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was then defined by applying Mika and Stankiewicz's observations: isotropic matrix elements are computed by Hembd series and anisotropic matrix elements are then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometries. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks [fr

  13. Linearly convergent stochastic heavy ball method for minimizing generalization error

    KAUST Repository

    Loizou, Nicolas

    2017-10-30

    In this work we establish the first linear convergence result for the stochastic heavy ball method. The method performs SGD steps with a fixed stepsize, amended by a heavy ball momentum term. In the analysis, we focus on minimizing the expected loss and not on finite-sum minimization, which is typically a much harder problem. While in the analysis we constrain ourselves to quadratic loss, the overall objective is not necessarily strongly convex.
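The update analysed above, an SGD step plus a heavy ball momentum term, can be sketched for a least-squares loss as follows. The function name and problem setup are illustrative, not taken from the paper:

```python
import numpy as np

def stochastic_heavy_ball(A, b, x0, stepsize=0.1, beta=0.5, n_iters=10000, seed=0):
    """SGD with a fixed stepsize and a heavy ball momentum term for the
    least-squares loss f(x) = (1/2n) ||A x - b||^2, sampling one row per step."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x, x_prev = x0.copy(), x0.copy()
    for _ in range(n_iters):
        i = rng.integers(n)                       # sample one data point
        grad = (A[i] @ x - b[i]) * A[i]           # stochastic gradient of f_i
        x_next = x - stepsize * grad + beta * (x - x_prev)  # heavy ball update
        x_prev, x = x, x_next
    return x
```

On a consistent (interpolating) quadratic problem, iterates of this kind converge linearly to the solution, which is the regime the abstract describes.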

  14. Aitken extrapolation and epsilon algorithm for an accelerated solution of weakly singular nonlinear Volterra integral equations

    International Nuclear Information System (INIS)

    Mesgarani, H; Parmour, P; Aghazadeh, N

    2010-01-01

    In this paper, we apply Aitken extrapolation and the epsilon algorithm as acceleration techniques for the solution of a weakly singular nonlinear Volterra integral equation of the second kind. Following Tao and Yong (2006 J. Math. Anal. Appl. 324 225-37), the integral equation is solved by Navot's quadrature formula. Tao and Yong (2006) were also the first to apply Richardson extrapolation to accelerate convergence for weakly singular nonlinear Volterra integral equations of the second kind. To our knowledge, this paper may be the first attempt to apply Aitken extrapolation and the epsilon algorithm to such equations.
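Aitken's Δ² process itself is compact enough to sketch. This generic implementation (names illustrative) accelerates any roughly linearly converging scalar sequence, which is the role it plays for the quadrature iterates here:

```python
def aitken_delta2(seq):
    """Aitken's delta-squared acceleration: from s_n, s_{n+1}, s_{n+2} build
    t_n = s_n - (s_{n+1} - s_n)^2 / (s_{n+2} - 2 s_{n+1} + s_n),
    which typically converges faster than the original sequence."""
    out = []
    for s0, s1, s2 in zip(seq, seq[1:], seq[2:]):
        denom = s2 - 2.0 * s1 + s0
        out.append(s2 if denom == 0 else s0 - (s1 - s0) ** 2 / denom)
    return out
```

For a sequence of the exact form s_n = c + a r^n, the transform returns the limit c in a single step.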

  15. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    Science.gov (United States)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

    Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view is mixed in a linear fashion across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not the case for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing, and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed in the proposed method is that no neighboring pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures, and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created from three- and four-material mixtures randomly drawn from a dataset originating from SigDB, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided amongst the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures. The spectral similarity of the inputs to the
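As a minimal illustration of non-iterative linear unmixing of a single mixed signature (not the paper's specific algorithms), one can solve a non-negative least-squares problem against a known endmember matrix and renormalise the abundances to sum to one. The endmember matrix and the renormalisation step are assumptions of this sketch:

```python
import numpy as np
from scipy.optimize import nnls

def unmix(endmembers, mixed):
    """Simple linear unmixing of one mixed signature.
    endmembers: (n_bands, n_materials) matrix of pure spectra.
    Solves mixed ~= endmembers @ a with a >= 0 (non-negative least squares),
    then renormalises so the abundances sum to one (a common approximation
    to the full sum-to-one constraint)."""
    a, _ = nnls(endmembers, mixed)
    s = a.sum()
    return a / s if s > 0 else a
```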

  16. Projecting species' vulnerability to climate change: Which uncertainty sources matter most and extrapolate best?

    Science.gov (United States)

    Steen, Valerie; Sofaer, Helen R; Skagen, Susan K; Ray, Andrea J; Noon, Barry R

    2017-11-01

    Species distribution models (SDMs) are commonly used to assess potential climate change impacts on biodiversity, but several critical methodological decisions are often made arbitrarily. We compare the variability arising from these decisions to the uncertainty in future climate change itself. We also test whether certain choices offer improved skill for extrapolating to a changed climate and whether internal cross-validation skill indicates extrapolative skill. We compared projected vulnerability for 29 wetland-dependent bird species breeding in the climatically dynamic Prairie Pothole Region, USA. For each species we built 1,080 SDMs, each representing a unique combination of future climate, class of climate covariates, collinearity level, and thresholding procedure. We examined the variation in projected vulnerability attributed to each uncertainty source. To assess extrapolation skill under a changed climate, we compared model predictions with observations from historic drought years. Uncertainty in projected vulnerability was substantial, and the largest source was future climate change itself. Large uncertainty was also attributed to the class of climate covariates, with hydrological covariates projecting half the range loss of bioclimatic covariates or other summaries of temperature and precipitation. We found that choices based on performance in cross-validation improved skill in extrapolation. Qualitative rankings were also highly uncertain. Given the uncertainty in projected vulnerability and the resulting uncertainty in rankings used for conservation prioritization, a number of considerations appear critical for using bioclimatic SDMs to inform climate change mitigation strategies. Our results emphasize explicitly selecting climate summaries that most closely represent processes likely to underlie ecological response to climate change. For example, hydrological covariates projected substantially reduced vulnerability, highlighting the importance of considering whether water

  17. Linear source approximation scheme for method of characteristics

    International Nuclear Information System (INIS)

    Tang Chuntao

    2011-01-01

    The method of characteristics (MOC) for solving the neutron transport equation on unstructured meshes has become one of the fundamental methods for lattice calculations in nuclear design code systems. However, most MOC codes are developed with the flat-source approximation, known as the step characteristics (SC) scheme, which is another basic assumption of MOC. A linear source (LS) characteristics scheme, together with a corresponding modification for negative source distributions, is proposed here. The OECD/NEA C5G7-MOX 2D benchmark and a self-defined BWR mini-core problem were employed to validate the new LS module of the PEACH code. Numerical results indicate that the proposed LS scheme requires less memory and computational time than the SC scheme at the same accuracy. (authors)

  18. A derating method for therapeutic applications of high intensity focused ultrasound

    Science.gov (United States)

    Bessonova, O. V.; Khokhlova, V. A.; Canney, M. S.; Bailey, M. R.; Crum, L. A.

    2010-05-01

    Current methods of determining high intensity focused ultrasound (HIFU) fields in tissue rely on extrapolation of measurements in water, assuming linear wave propagation both in water and in tissue. Neglecting nonlinear propagation effects in the derating process can result in significant errors. A new method based on scaling the source amplitude is introduced to estimate focal parameters of nonlinear HIFU fields in tissue. Focal values of acoustic field parameters in absorptive tissue are obtained from a numerical solution to a KZK-type equation and are compared to those simulated for propagation in water. Focal waveforms, peak pressures, and intensities are calculated over a wide range of source outputs and linear focusing gains. Our modeling indicates that, for the high-gain sources typically used in therapeutic medical applications, the focal field parameters derated with our method agree well with numerical simulations in tissue. The feasibility of the derating method is demonstrated experimentally in excised bovine liver tissue.

  19. 131I-CRTX internal dosimetry: animal model and human extrapolation

    International Nuclear Information System (INIS)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos

    2009-01-01

    Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. 125I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, 125I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice, as well as in the implanted tumor, were determined for 131I-Crtx. Dose results obtained for the animal model were extrapolated to humans, assuming a similar concentration ratio among the various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from 131I in the tissue were considered in the dose calculations. (author)

  20. Extrapolation of rate constants of reactions producing H2 and O2 in radiolysis of water at high temperatures

    International Nuclear Information System (INIS)

    Leblanc, R.; Ghandi, K.; Hackman, B.; Liu, G.

    2014-01-01

    One target of our research is to extrapolate known data on the rate constants of reactions, adding corrections to estimate the rate constants at the higher temperatures reached by supercritical water-cooled reactors (SCWRs). The focus of this work was to extrapolate known data on the rate constants of reactions that produce hydrogen or oxygen with a rate constant below 10^10 mol^-1 s^-1 at room temperature. The extrapolation takes into account the change in the diffusion rate of the interacting species and the cage effect with thermodynamic conditions. The extrapolations are done over a wide temperature range and under isobaric conditions. (author)
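As a baseline for this kind of temperature extrapolation, a plain Arrhenius fit of low-temperature rate constants can be evaluated at higher temperatures. This sketch deliberately omits the diffusion-rate and cage-effect corrections the authors apply, and all names and numbers are illustrative:

```python
import numpy as np

R = 8.314462618  # gas constant, J mol^-1 K^-1

def fit_arrhenius(T, k):
    """Fit ln k = ln A - Ea/(R T) by linear least squares in 1/T."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(k), 1)
    return np.exp(intercept), -slope * R   # pre-factor A, activation energy Ea

def k_at(T, A, Ea):
    """Arrhenius rate constant at temperature T."""
    return A * np.exp(-Ea / (R * T))
```

Fitting near room temperature and evaluating `k_at` at reactor temperatures is the uncorrected extrapolation that the paper's corrections then adjust.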

  1. Windtunnel Rebuilding And Extrapolation To Flight At Transsonic Speed For ExoMars

    Science.gov (United States)

    Fertig, Markus; Neeb, Dominik; Gulhan, Ali

    2011-05-01

    The static as well as the dynamic behaviour of the EXOMARS vehicle in the transonic velocity regime has been investigated experimentally by the Supersonic and Hypersonic Technology Department of DLR in order to investigate the behaviour prior to parachute opening. Since the experimental work was performed in air, a numerical extrapolation to flight by means of CFD is necessary. At low supersonic speed this extrapolation to flight was performed by the Spacecraft Department of the Institute of Flow Technology of DLR employing the CFD code TAU. Numerical as well as experimental results for the wind tunnel test at Mach 1.2 will be compared and discussed for three different angles of attack.

  2. Exact solution to the Coulomb wave using the linearized phase-amplitude method

    Directory of Open Access Journals (Sweden)

    Shuji Kiyokawa

    2015-08-01

    The author shows that the amplitude equation from the phase-amplitude method of calculating continuum wave functions can be linearized into a third-order differential equation. Using this linearized equation, in the case of the Coulomb potential, the author also shows that the amplitude function has an analytically exact solution represented by an irregular confluent hypergeometric function. Furthermore, it is shown that the exact solution for the Coulomb potential reproduces the free-space wave function expressed by the spherical Bessel function. The amplitude equation for the large component of the Dirac spinor is also shown to be a linearized third-order differential equation.

  3. Experiences and extrapolations from Hiroshima and Nagasaki

    International Nuclear Information System (INIS)

    Harwell, C.C.

    1985-01-01

    This paper examines the events following the atomic bombings of Hiroshima and Nagasaki in 1945 and extrapolates from these experiences to further understand the possible consequences of detonations on a local area from weapons in the current world nuclear arsenal. The first section deals with a report of the events that occurred in Hiroshima and Nagasaki just after the 1945 bombings with respect to the physical conditions of the affected areas, the immediate effects on humans, the psychological response of the victims, and the nature of outside assistance. Because there can be no experimental data to validate the effects on cities and their populations of detonations from current weapons, the data from the actual explosions on Hiroshima and Nagasaki provide a point of departure. The second section examines possible extrapolations from and comparisons with the Hiroshima and Nagasaki experiences. The limitations of drawing upon the Hiroshima and Nagasaki experiences are discussed. A comparison is made of the scale of effects from other major disasters for urban systems, such as damages from the conventional bombings of cities during World War II, the consequences of major earthquakes, the historical effects of the Black Plague and widespread famines, and other extreme natural events. The potential effects of detonating a modern 1 MT warhead on the city of Hiroshima as it exists today are simulated. This is extended to the local effects on a targeted city from a global nuclear war, and attention is directed to problems of estimating the societal effects from such a war

  4. Shifted Legendre method with residual error estimation for delay linear Fredholm integro-differential equations

    Directory of Open Access Journals (Sweden)

    Şuayip Yüzbaşı

    2017-03-01

    In this paper, we suggest a matrix method for obtaining approximate solutions of delay linear Fredholm integro-differential equations with constant coefficients using the shifted Legendre polynomials. The problem is considered with mixed conditions. Using the required matrix operations, the delay linear Fredholm integro-differential equation is transformed into a matrix equation. Additionally, error analysis for the method is presented using the residual function. Illustrative examples are given to demonstrate the efficiency of the method. The results obtained in this study are compared with known results.

  5. Determination of dose rates in beta radiation fields using extrapolation chamber and GM counter

    International Nuclear Information System (INIS)

    Borg, J.; Christensen, P.

    1995-01-01

    The extrapolation chamber measurement method is the basic method for the determination of dose rates in beta radiation fields, and it has been used for the establishment of beta calibration fields. The paper describes important details of the method and presents results from measurements of depth-dose profiles in different beta radiation fields with Emax values down to 156 keV. Results are also presented from studies of GM counters for use as survey instruments for monitoring beta dose rates at the workplace. The advantages of GM counters are a simple measurement technique and high sensitivity. GM responses were measured for exposures in different beta radiation fields using different filters in front of the GM detector, and the paper discusses the possibility of using the results from GM measurements with two different filters in an unknown beta radiation field to obtain a value of the dose rate. (Author)
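The core of the extrapolation chamber method is a linear fit of ionization current versus air-gap thickness, with the slope extrapolated to zero gap and converted to a dose rate. A simplified sketch, ignoring the backscatter, beam-divergence and air-density corrections of the full calibration formalism; the constants and function name are illustrative:

```python
import numpy as np

def dose_rate_from_extrapolation(gaps_m, currents_A, area_m2,
                                 w_over_e=33.97, rho_air=1.205):
    """Estimate absorbed dose rate in air (Gy/s) from extrapolation-chamber
    data: fit I(d) with a straight line over small gaps d and use the slope
    dI/dd in  D = (W/e) * (dI/dd) / (rho_air * area).
    w_over_e in J/C, air density in kg/m^3 (illustrative lab values)."""
    slope, _ = np.polyfit(gaps_m, currents_A, 1)   # dI/dd near zero gap
    return w_over_e * slope / (rho_air * area_m2)
```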

  6. WE-A-17A-01: Absorbed Dose Rate-To-Water at the Surface of a Beta-Emitting Planar Ophthalmic Applicator with a Planar, Windowless Extrapolation Chamber

    Energy Technology Data Exchange (ETDEWEB)

    Riley, A [of Wisconsin Medical Radiation Research Center, Madison, WI (United States); Soares, C [NIST (Retired), Gaithersburg, MD (United States); Micka, J; Culberson, W [University of Wisconsin Medical Radiation Research Center, Madison, WI (United States); DeWerd, L [University of WIMadison/ ADCL, Madison, WI (United States)

    2014-06-15

    Purpose: Currently there is no primary calibration standard for determining the absorbed dose rate-to-water at the surface of β-emitting concave ophthalmic applicators and plaques. Machining tolerances involved in the design of concave-window extrapolation chambers are a limiting factor for the development of such a standard. Use of a windowless extrapolation chamber avoids these window-machining tolerance issues. As a windowless extrapolation chamber has never been attempted, this work focuses on proof-of-principle measurements with a planar, windowless extrapolation chamber to verify its accuracy against an initial calibration, an approach that could be extended to the design of a hemispherical, windowless extrapolation chamber. Methods: The window of an extrapolation chamber defines the electric field, aids in aligning the source parallel to the collector-guard assembly, and decreases backscatter through attenuation of lower-energy electrons. To create a uniform and parallel electric field in this research, the source was made common to the collector-guard assembly. A precise positioning protocol was designed to enhance the parallelism of the source and collector-guard assembly. Additionally, MCNP5 was used to determine a backscatter correction factor to apply to the calibration. With these issues addressed, the absorbed dose rate-to-water of a Tracerlab 90Sr planar ophthalmic applicator was determined using the National Institute of Standards and Technology's (NIST) calibration formalism, and the results of five trials with this source were compared to measurements at NIST with a traditional extrapolation chamber. Results: The absorbed dose rate-to-water of the planar applicator was determined to be 0.473 Gy/s ±0.6%. Comparing these results to NIST's determination of 0.474 Gy/s yields a −0.6% difference. Conclusion: The feasibility of a planar, windowless extrapolation chamber has been demonstrated. A similar principle will be applied to developing a

  7. Solution of linear ordinary differential equations by means of the method of variation of arbitrary constants

    DEFF Research Database (Denmark)

    Mejlbro, Leif

    1997-01-01

    An alternative formula for the solution of linear differential equations of order n is suggested. When applicable, the suggested method requires fewer and simpler computations than the well-known method using Wronskians.

  8. Genomic prediction based on data from three layer lines: a comparison between linear methods

    NARCIS (Netherlands)

    Calus, M.P.L.; Huang, H.; Vereijken, J.; Visscher, J.; Napel, ten J.; Windig, J.J.

    2014-01-01

    Background The prediction accuracy of several linear genomic prediction models, which have previously been used for within-line genomic prediction, was evaluated for multi-line genomic prediction. Methods Compared to a conventional BLUP (best linear unbiased prediction) model using pedigree data, we

  9. Environmental impact assessment methods of the radiation generated by the runing medical linear accelerator

    International Nuclear Information System (INIS)

    Yin Haihua, Yao Zhigang

    2014-01-01

    This article describes methods for assessing the environmental impact of the radiation generated by a running medical linear accelerator. The material and thickness of the shielding walls and protective doors of the linear accelerator were already known; therefore, by calculating the annual effective dose received by surrounding personnel, we can evaluate whether the radiation from the running medical linear accelerator is within the range permitted by the national standard. (authors)

  10. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    Existing cloud detection methods in photogrammetry often extract image features from remote sensing images directly and then use them to classify images into cloud or other things. But when the cloud is thin and small, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. Firstly, the automatic cloud detection program in this paper uses the linear combination model to separate the cloud information from the surface information in transparent cloud images, then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier was introduced to combine the different features into a single cloud classifier. AdaBoost can select the most effective features from many ordinary features, so the calculation time is largely reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier to compare with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.

  11. A Bayes linear Bayes method for estimation of correlated event rates.

    Science.gov (United States)

    Quigley, John; Wilson, Kevin J; Walls, Lesley; Bedford, Tim

    2013-12-01

    Typically, full Bayesian estimation of correlated event rates can be computationally challenging since estimators are intractable. When estimation of event rates represents one activity within a larger modeling process, there is an incentive to develop more efficient inference than provided by a full Bayesian model. We develop a new subjective inference method for correlated event rates based on a Bayes linear Bayes model under the assumption that events are generated from a homogeneous Poisson process. To reduce the elicitation burden we introduce homogenization factors to the model and, as an alternative to a subjective prior, an empirical method using the method of moments is developed. Inference under the new method is compared against estimates obtained under a full Bayesian model, which takes a multivariate gamma prior, where the predictive and posterior distributions are derived in terms of well-known functions. The mathematical properties of both models are presented. A simulation study shows that the Bayes linear Bayes inference method and the full Bayesian model provide equally reliable estimates. An illustrative example, motivated by a problem of estimating correlated event rates across different users in a simple supply chain, shows how ignoring the correlation leads to biased estimation of event rates. © 2013 Society for Risk Analysis.
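A minimal stand-in for this kind of shrinkage estimation is an empirical Bayes gamma-Poisson model with a method-of-moments prior. This sketch is far simpler than the Bayes linear Bayes machinery of the paper (it ignores correlation and homogenization factors), and its fitting choices are assumptions:

```python
import numpy as np

def empirical_bayes_rates(counts, exposures):
    """Gamma-Poisson shrinkage estimate of event rates.
    A gamma prior Gamma(alpha, beta) is fitted by the method of moments to
    the observed rates n_i / t_i, then each rate is replaced by its
    posterior mean (alpha + n_i) / (beta + t_i), shrinking extreme raw
    rates toward the pooled mean."""
    counts = np.asarray(counts, float)
    exposures = np.asarray(exposures, float)
    r = counts / exposures
    m, v = r.mean(), r.var(ddof=1)
    if v <= 0:                       # degenerate case: no spread in raw rates
        return np.full_like(r, m)
    beta = m / v                     # moment-matched prior parameters
    alpha = m * beta
    return (alpha + counts) / (beta + exposures)
```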

  12. Analytical study of dynamic aperture for storage ring by using successive linearization method

    International Nuclear Information System (INIS)

    Yang Jiancheng; Xia Jiawen; Wu Junxia; Xia Guoxing; Liu Wei; Yin Xuejun

    2004-01-01

    The determination of the dynamic aperture is a critical issue in circular accelerators. In this paper, the authors solved the equation of motion including non-linear forces by using the successive linearization method and obtained a criterion for determining the dynamic aperture of the machine. Applying this criterion, a storage ring with a FODO lattice has been studied. The results agree well with tracking results over a large range of linear tunes (Q). The purpose is to improve our understanding of the mechanisms driving particle motion in the presence of non-linear forces; another mechanism driving particle instability in a storage ring was also identified: parametric resonance caused by 'fluctuating transfer matrices' at small amplitudes.

  13. Local linearization methods for the numerical integration of ordinary differential equations: An overview

    International Nuclear Information System (INIS)

    Jimenez, J.C.

    2009-06-01

    Local Linearization (LL) methods form a class of one-step explicit integrators for ODEs derived from the following primary and common strategy: the vector field of the differential equation is locally (piecewise) approximated through a first-order Taylor expansion at each time step, thus obtaining successive linear equations that are explicitly integrated. Beyond this, the LL approach may include additional strategies to improve that basic affine approximation. Theoretical and practical results have shown that the LL integrators have a number of convenient properties. These include arbitrary order of convergence, A-stability, linearization preserving, regularity under quite general conditions, preservation of the dynamics of the exact solution around hyperbolic equilibrium points and periodic orbits, integration of stiff and high-dimensional equations, low computational cost, and others. In this paper, a review of the LL methods and their properties is presented. (author)
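The basic LL step can be sketched directly: freeze the Jacobian at the current state and integrate the resulting affine ODE exactly via an augmented matrix exponential (a standard trick; the function names here are illustrative):

```python
import numpy as np
from scipy.linalg import expm

def ll_step(f, jac, x, h):
    """One Local Linearization step for x' = f(x): with J = jac(x) frozen,
    the affine ODE d' = J d + f(x), d(0) = 0 has the exact solution
    d(h) = top-right block of expm(h * [[J, f(x)], [0, 0]])."""
    n = x.size
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = jac(x)
    M[:n, n] = f(x)
    return x + expm(h * M)[:n, n]
```

On a genuinely linear system the step reproduces the exact flow, which is one way to check an implementation.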

  14. Comparison results on preconditioned SOR-type iterative method for Z-matrices linear systems

    Science.gov (United States)

    Wang, Xue-Zhong; Huang, Ting-Zhu; Fu, Ying-Ding

    2007-09-01

    In this paper, we present some comparison theorems on preconditioned iterative methods for solving Z-matrix linear systems. The comparison results show that the rate of convergence of the Gauss-Seidel-type method is faster than that of the SOR-type iterative method.
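For reference, the SOR sweep being compared reduces to Gauss-Seidel at ω = 1. A minimal dense-matrix sketch (illustrative only; it does not include the paper's preconditioners):

```python
import numpy as np

def sor(A, b, omega=1.0, tol=1e-10, max_iter=10000):
    """SOR iteration for A x = b; omega = 1 gives the Gauss-Seidel method.
    Returns the iterate and the number of sweeps performed."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # use already-updated entries x[:i] and old entries x_old[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter
```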

  15. Accurate Conformational Energy Differences of Carbohydrates: A Complete Basis Set Extrapolation

    Czech Academy of Sciences Publication Activity Database

    Csonka, G. I.; Kaminský, Jakub

    2011-01-01

    Vol. 7, No. 4 (2011), pp. 988-997. ISSN 1549-9618. Institutional research plan: CEZ:AV0Z40550506. Keywords: MP2; basis set extrapolation; saccharides. Subject RIV: CF - Physical; Theoretical Chemistry. Impact factor: 5.215, year: 2011
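A common two-point complete-basis-set extrapolation assumes the correlation energy approaches its limit as the inverse cube of the basis-set cardinal number X. This sketch uses that inverse-cubic model, which may differ from the specific scheme evaluated in the paper:

```python
def cbs_two_point(e_x, e_y, x, y):
    """Two-point complete-basis-set extrapolation assuming
    E(X) = E_CBS + A * X**-3 (a widely used inverse-cubic model for
    correlation energies). Solving the two-point system for E_CBS gives
    (X^3 E_X - Y^3 E_Y) / (X^3 - Y^3)."""
    return (x ** 3 * e_x - y ** 3 * e_y) / (x ** 3 - y ** 3)
```

The formula is exact whenever the energies really follow the assumed X^-3 form.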

  16. On the economical solution method for a system of linear algebraic equations

    Directory of Open Access Journals (Sweden)

    Jan Awrejcewicz

    2004-01-01

    The present work proposes a novel optimal and exact method of solving large systems of linear algebraic equations. In the approach under consideration, the solution of a system of linear algebraic equations is found as a point of intersection of hyperplanes, which needs a minimal amount of computer operating storage. Two examples are given. In the first example, the boundary value problem for a three-dimensional stationary heat transfer equation in a parallelepiped in ℝ^3 is considered, where boundary conditions of the first, second, or third kind, or their combinations, are taken into account. The governing differential equations are reduced to algebraic ones with the help of the finite element and boundary element methods for different meshes. The obtained results are compared with known analytical solutions. The second example concerns the computation of a nonhomogeneous, shallow, physically and geometrically nonlinear shell subject to a transversal uniformly distributed load. The partial differential equations are reduced to a system of nonlinear algebraic equations with an error of O(h_x1^2 + h_x2^2). The linearization process is realized through either Newton's method or differentiation with respect to a parameter. In consequence, the relations of the boundary condition variations along the shell side and the conditions for solution matching are reported.

  17. High Order A-stable Continuous General Linear Methods for Solution of Systems of Initial Value Problems in ODEs

    Directory of Open Access Journals (Sweden)

    Dauda GuliburYAKUBU

    2012-12-01

    Accurate solutions to initial value systems of ordinary differential equations may be approximated efficiently by Runge-Kutta methods or linear multistep methods, each of which has limitations of one sort or another. In this paper we consider, as a middle ground, the derivation of continuous general linear methods for the solution of stiff systems of initial value problems in ordinary differential equations. These methods are designed to combine the advantages of both Runge-Kutta and linear multistep methods. In particular, methods possessing the property of A-stability are identified as promising methods within this large class of general linear methods. We show that the continuous general linear methods are self-starting and better able to solve stiff systems of ordinary differential equations than the discrete ones. The initial value systems are solved, for instance, without needing any other method to start the integration process. This desirable feature of the proposed approach leads to very high accuracy in the solution of the given problem. Illustrative examples are given to demonstrate the novelty and reliability of the methods.

  18. Two media method for linear attenuation coefficient determination of irregular soil samples

    International Nuclear Information System (INIS)

    Vici, Carlos Henrique Georges

    2004-01-01

    In several nuclear applications, such as soil physics and geology, knowledge of the gamma-ray linear attenuation coefficient of irregular samples is necessary. This work presents the validation of a methodology for the determination of the linear attenuation coefficient (μ) of irregularly shaped samples, such that it is not necessary to know the thickness of the sample. With this methodology, irregular soil samples (undeformed field samples) from the Londrina region, north of Paraná, were studied. The two-media method was employed for the determination of μ. It consists of determining μ through measurement of the attenuation of a gamma-ray beam by the sample sequentially immersed in two different media with known and appropriately chosen attenuation coefficients. For comparison, the theoretical value of μ was calculated as the product of the mass attenuation coefficient, obtained with the WinXcom code, and the measured density of the sample. This software employs the chemical composition of the samples and supplies a table of mass attenuation coefficients versus photon energy. To verify the validity of the two-media method, compared with the simple gamma-ray transmission method, regular pumice stone samples were used. With these results for the attenuation coefficients and their respective deviations, it was possible to compare the two methods. We conclude that the two-media method is a good tool for the determination of the linear attenuation coefficient of irregular materials, particularly in the study of soil samples. (author)
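The algebra behind the two-media method is short: writing a_j = ln(I0_j / I_j) for the sample immersed in medium j, each measurement gives a_j = (μ_s − μ_j)·x, so the unknown thickness x cancels when the two are combined. A sketch under idealised narrow-beam assumptions (names illustrative):

```python
import numpy as np

def mu_two_media(I0_1, I_1, mu1, I0_2, I_2, mu2):
    """Linear attenuation coefficient of a sample of unknown thickness x.
    With the sample replacing a slab of medium j along the beam path,
    a_j = ln(I0_j / I_j) = (mu_s - mu_j) * x.  Combining the two
    measurements eliminates x:
    mu_s = (a1 * mu2 - a2 * mu1) / (a1 - a2)."""
    a1 = np.log(I0_1 / I_1)
    a2 = np.log(I0_2 / I_2)
    return (a1 * mu2 - a2 * mu1) / (a1 - a2)
```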

  19. A Comparison of Traditional Worksheet and Linear Programming Methods for Teaching Manure Application Planning.

    Science.gov (United States)

    Schmitt, M. A.; And Others

    1994-01-01

    Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
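A manure-application plan of the kind compared here can be posed as a small linear program. The numbers below are entirely hypothetical and serve only to show the structure of such a model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-field plan: choose manure (x1, t/ha) and purchased
# fertiliser nitrogen (x2, kg N/ha) to meet a nitrogen requirement at
# minimum cost.  All coefficients are illustrative, not agronomic data.
cost = [2.0, 0.5]            # $/t manure applied, $/kg N purchased
# Nitrogen balance: 5 kg available N per t manure; need >= 120 kg N/ha.
# linprog expects <= constraints, so the >= row is multiplied by -1.
A_ub = [[-5.0, -1.0]]
b_ub = [-120.0]
bounds = [(0, 20), (0, None)]  # at most 20 t/ha manure may be spread
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
```

With these made-up prices, manure supplies nitrogen more cheaply than fertiliser, so the optimum spreads the maximum allowed manure and buys only the shortfall.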

  20. Comparison between Two Linear Supervised Learning Machines' Methods with Principal Component Based Methods for the Spectrofluorimetric Determination of Agomelatine and Its Degradants.

    Science.gov (United States)

    Elkhoudary, Mahmoud M; Naguib, Ibrahim A; Abdel Salam, Randa A; Hadad, Ghada M

    2017-05-01

    Four accurate, sensitive and reliable stability-indicating chemometric methods were developed for the quantitative determination of Agomelatine (AGM), whether in pure form or in pharmaceutical formulations. Two supervised learning machine methods, linear artificial neural networks preceded by principal component analysis (PC-linANN) and linear support vector regression (linSVR), were compared with two principal-component-based methods, principal component regression (PCR) and partial least squares (PLS), for the spectrofluorimetric determination of AGM and its degradants. The results showed the benefits of using linear learning machine methods and the inherent merits of their algorithms in handling overlapped, noisy spectral data, especially during the challenging determination of the AGM alkaline and acidic degradants (DG1 and DG2). The relative mean squared errors of prediction (RMSEP) of the proposed models in the determination of AGM were 1.68, 1.72, 0.68 and 0.22 for PCR, PLS, linSVR and PC-linANN, respectively. The results showed the superiority of supervised learning machine methods over principal-component-based methods and suggested that linANN is the method of choice for determining components present in low amounts with similar overlapped spectra and a narrow linearity range. Comparison between the proposed chemometric models and a reported HPLC method revealed the comparable performance and quantification power of the proposed models.

  1. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    Science.gov (United States)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-10-01

    We present a code implementing the linearized quasiparticle self-consistent GW method (LQSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This yields efficiency gains by switching to the imaginary-time representation in the same way as in the space-time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N^3 scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method. Program Files doi: http://dx.doi.org/10.17632/cpchkfty4w.1 Licensing provisions: GNU General Public License Programming language: Fortran 90 External routines/libraries: BLAS, LAPACK, MPI (optional) Nature of problem: Direct implementation of the GW method scales as N^4 with the system size, which quickly becomes prohibitively time consuming even on modern computers. Solution method: We implemented the GW approach using a method that switches between real-space and momentum-space representations. Some operations are faster in real space, whereas others are more computationally efficient in reciprocal space. This makes our approach scale as N^3. Restrictions: The limiting factor is usually the memory available in a computer. Using 10 GB/core of memory allows us to study systems of up to 15 atoms per unit cell.

  2. Alternating direction transport sweeps for linear discontinuous SN method

    International Nuclear Information System (INIS)

    Yavuz, M.; Aykanat, C.

    1993-01-01

    The performance of the Alternating Direction Transport Sweep (ADTS) method is investigated for spatially differenced Linear Discontinuous SN (LD-SN) problems on a MIMD multicomputer, the Intel iPSC/2. The method consists of dividing a transport problem spatially into sub-problems and assigning each sub-problem to a separate processor. The problem is then solved by performing transport sweeps, iterating on the scattering source and the interface fluxes between the sub-problems. In each processor, the order of transport sweeps is scheduled such that a processor completing its computation in a quadrant of a transport sweep is able to use the most recent information (exiting fluxes of the neighboring processor) as its incoming fluxes to start the next quadrant calculation. Implementation of this method on the Intel iPSC/2 multicomputer displays significant speedups over the one-processor method. The performance of the method is also compared with that reported previously for the Diamond Differenced SN (DD-SN) method. Our experimental experience illustrates that the parallel performance of the ADTS LD-SN and DD-SN methods is the same. (orig.)

  3. Non-linear shape functions over time in the space-time finite element method

    Directory of Open Access Journals (Sweden)

    Kacprzyk Zbigniew

    2017-01-01

    This work presents a generalisation of the space-time finite element method proposed by Kączkowski in his seminal works of the 1970s and early 1980s. Kączkowski used linear shape functions in time, and the recurrence formula he obtained was conditionally stable. In this paper, non-linear shape functions in time are proposed.

  4. Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method

    KAUST Repository

    Desmal, Abdulla

    2014-07-01

    A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
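The shrinkage step at the heart of such sparsity-promoting inversion is the soft-thresholding operator. The sketch below runs plain iterative shrinkage-thresholding (ISTA) on a tiny dense system; it illustrates the linear-shrinkage idea only, not the authors' preconditioned inexact-Newton scheme:

```python
def soft(v, t):
    """Soft-thresholding (shrinkage) operator: shrinks toward zero,
    setting small entries exactly to zero (promotes sparsity)."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def ista(A, y, lam, step, iters=500):
    """Iterative shrinkage-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1
    on a tiny dense system (A given as a list of rows)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# Sparse ground truth x = (1, 0); with lam = 0.1 the lasso solution shrinks
# the active component slightly and keeps the inactive one exactly zero.
A = [[1.0, 0.5], [0.5, 1.0]]
y = [1.0, 0.5]
x = ista(A, y, lam=0.1, step=1.0 / 2.25)  # step = 1/L, L = ||A^T A||
```

The fixed point here is (0.92, 0): the penalty trades a small bias on the active entry for an exactly sparse reconstruction, which is the behaviour exploited for sparse-domain imaging.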

  5. Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.

  6. Linear and nonlinear methods in modeling the aqueous solubility of organic compounds.

    Science.gov (United States)

    Catana, Cornel; Gao, Hua; Orrenius, Christian; Stouten, Pieter F W

    2005-01-01

    Solubility data for 930 diverse compounds have been analyzed using linear Partial Least Squares (PLS) and nonlinear PLS methods, Continuum Regression (CR), and Neural Networks (NN). 1D and 2D descriptors from the MOE package in combination with E-state or ISIS keys have been used. The best model was obtained using linear PLS on a combination of 22 MOE descriptors and 65 ISIS keys. It has a correlation coefficient (r2) of 0.935 and a root-mean-square error (RMSE) of 0.468 log molar solubility (log S(w)). The model, validated on a test set of 177 compounds not included in the training set, has r2 0.911 and RMSE 0.475 log S(w). The descriptors were ranked according to their importance, with the 22 MOE descriptors found at the top of the list. The CR model produced results as good as PLS, and because of the way in which cross-validation was done it is expected to be a valuable prediction tool besides the PLS model. The statistics obtained using nonlinear methods did not surpass those obtained with linear ones. The good statistics obtained for linear PLS and CR recommend these models for prediction when it is difficult or impossible to make experimental measurements, for virtual screening, combinatorial library design, and efficient lead optimization.

  7. Linearized method: A new approach for kinetic analysis of central dopamine D2 receptor specific binding

    International Nuclear Information System (INIS)

    Watabe, Hiroshi; Hatazawa, Jun; Ishiwata, Kiichi; Ido, Tatsuo; Itoh, Masatoshi; Iwata, Ren; Nakamura, Takashi; Takahashi, Toshihiro; Hatano, Kentaro

    1995-01-01

    The authors propose a new method (the Linearized method) to analyze neuroleptic ligand-receptor specific binding in the human brain using positron emission tomography (PET). They derive a linear equation to solve for the four rate constants k3, k4, k5, and k6 from PET data. This method does not require the plasma radioactivity curve as an input function to the brain and allows fast calculation of the rate constants. They also tested the Nonlinearized method, the conventional analysis based on nonlinear equations, which uses plasma radioactivity corrected for ligand metabolites as the input function. The authors applied these methods to evaluate the dopamine D2 receptor specific binding of [11C]YM-09151-2. The value of Bmax/Kd = k3/k4 obtained by the Linearized method was 5.72 ± 3.1, consistent with the value of 5.78 ± 3.4 obtained by the Nonlinearized method

  8. Establishing macroecological trait datasets: digitalization, extrapolation, and validation of diet preferences in terrestrial mammals worldwide.

    Science.gov (United States)

    Kissling, Wilm Daniel; Dalby, Lars; Fløjgaard, Camilla; Lenoir, Jonathan; Sandel, Brody; Sandom, Christopher; Trøjelsgaard, Kristian; Svenning, Jens-Christian

    2014-07-01

    Ecological trait data are essential for understanding the broad-scale distribution of biodiversity and its response to global change. For animals, diet represents a fundamental aspect of species' evolutionary adaptations, ecological and functional roles, and trophic interactions. However, the importance of diet for macroevolutionary and macroecological dynamics remains little explored, partly because of the lack of comprehensive trait datasets. We compiled and evaluated a comprehensive global dataset of diet preferences of mammals ("MammalDIET"). Diet information was digitized from two global and cladewide data sources, and errors of data entry by multiple data recorders were assessed. We then developed a hierarchical extrapolation procedure to fill in diet information for species with missing information. Missing data were extrapolated with information from other taxonomic levels (genus, other species within the same genus, or family), and this extrapolation was subsequently validated both internally (with a jack-knife approach applied to the compiled species-level diet data) and externally (using independent species-level diet information from a comprehensive continentwide data source). Finally, we grouped mammal species into trophic levels and dietary guilds, and their species richness as well as their proportion of total richness were mapped at a global scale for those diet categories with good validation results. The success rate of correctly digitizing data was 94%, indicating that consistency in data entry among multiple recorders was high. Data sources provided species-level diet information for a total of 2033 species (38% of all 5364 terrestrial mammal species, based on the IUCN taxonomy). For the remaining 3331 species, diet information was mostly extrapolated from genus-level diet information (48% of all terrestrial mammal species), and only rarely from other species within the same genus (6%) or from family level (8%). Internal and external

  9. Extrapolation of vertical target motion through a brief visual occlusion.

    Science.gov (United States)

    Zago, Myrka; Iosa, Marco; Maffei, Vincenzo; Lacquaniti, Francesco

    2010-03-01

    It is known that arbitrary target accelerations along the horizontal generally are extrapolated much less accurately than target speed through a visual occlusion. The extent to which vertical accelerations can be extrapolated through an occlusion is much less understood. Here, we presented a virtual target rapidly descending on a blank screen with different motion laws. The target accelerated under gravity (1g), decelerated under reversed gravity (-1g), or moved at constant speed (0g). Probability of each type of acceleration differed across experiments: one acceleration at a time, or two to three different accelerations randomly intermingled could be presented. After a given viewing period, the target disappeared for a brief, variable period until arrival (occluded trials) or it remained visible throughout (visible trials). Subjects were asked to press a button when the target arrived at destination. We found that, in visible trials, the average performance with 1g targets could be better or worse than that with 0g targets depending on the acceleration probability, and both were always superior to the performance with -1g targets. By contrast, the average performance with 1g targets was always superior to that with 0g and -1g targets in occluded trials. Moreover, the response times of 1g trials tended to approach the ideal value with practice in occluded protocols. To gain insight into the mechanisms of extrapolation, we modeled the response timing based on different types of threshold models. We found that occlusion was accompanied by an adaptation of model parameters (threshold time and central processing time) in a direction that suggests a strategy oriented to the interception of 1g targets at the expense of the interception of the other types of tested targets. We argue that the prediction of occluded vertical motion may incorporate an expectation of gravity effects.

  10. Extrapolation of ZPR sodium void measurements to the power reactor

    International Nuclear Information System (INIS)

    Beck, C.L.; Collins, P.J.; Lineberry, M.J.; Grasseschi, G.L.

    1976-01-01

    Sodium-voiding measurements of ZPPR assemblies 2 and 5 are analyzed with ENDF/B Version IV data. Computations include directional diffusion coefficients to account for streaming effects resulting from the plate structure of the critical assembly. Bias factors for extrapolating critical assembly data to the CRBR design are derived from the results of this analysis

  11. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
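The sweep pattern can be illustrated with a scalar analogue of the block iteration: Gauss-Seidel on a small diagonally dominant tridiagonal system, where each unknown is updated using the freshly computed value of its predecessor, mirroring the forward block-GS sweep. The numbers are made up for illustration:

```python
def gauss_seidel_tridiag(lower, diag, upper, b, iters=100):
    """Gauss-Seidel sweeps for a tridiagonal system: each sweep solves for
    unknown i using the already-updated left neighbour and the previous-sweep
    right neighbour, the same pattern applied blockwise in the paper."""
    n = len(diag)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = b[i]
            if i > 0:
                s -= lower[i - 1] * x[i - 1]   # freshly updated value
            if i < n - 1:
                s -= upper[i] * x[i + 1]       # value from previous sweep
            x[i] = s / diag[i]
    return x

# -x_{i-1} + 4 x_i - x_{i+1} = b_i with b = (3, 2, 3) has solution (1, 1, 1);
# diagonal dominance makes the scalar iteration converge quickly.
x = gauss_seidel_tridiag([-1.0, -1.0], [4.0, 4.0, 4.0],
                         [-1.0, -1.0], [3.0, 2.0, 3.0])
```

In the paper's setting each diagonal entry becomes an invertible diagonal block (a subinterval problem), and since the block iteration need not converge on its own, the sweep is used as a preconditioner inside a Krylov method.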

  12. {sup 131}I-SPGP internal dosimetry: animal model and human extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soprani, Juliana; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br; Figueiredo, Suely Gomes de [Universidade Federal do Espirito Santo, (UFES), Vitoria, ES (Brazil). Dept. de Ciencias Fisiologicas. Lab. de Quimica de Proteinas

    2009-07-01

    Scorpaena plumieri, commonly called moreia-ati or manganga, is the most venomous and one of the most abundant fish species of the Brazilian coast. Soprani (2006) demonstrated that SPGP, a protein isolated from S. plumieri, possesses high antitumoral activity against malignant tumours and can be a source of template molecules for the design of antitumoral drugs. In the present work, Soprani's {sup 125}I-SPGP biokinetic data were treated with the MIRD formalism to perform internal dosimetry studies. Absorbed doses due to {sup 131}I-SPGP uptake were determined in several organs of mice, as well as in the implanted tumor. Doses obtained for the animal model were extrapolated to humans assuming a similar ratio for the various mouse and human tissues. For the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I were considered. (author)
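The species extrapolation reduces to simple scaling arithmetic. The sketch below assumes the organ dose scales with the ratio of organ-to-body mass fractions between species; both the formula and the numbers are illustrative assumptions, not the authors' exact MIRD workflow:

```python
def extrapolate_dose(dose_animal, organ_frac_animal, organ_frac_human):
    """Scale an organ absorbed dose from the animal model to the human,
    assuming the organ-to-body activity ratio is preserved across species
    (the 'similar ratio' assumption mentioned in the abstract)."""
    return dose_animal * organ_frac_animal / organ_frac_human

# Illustrative numbers only: a mouse organ carrying 5% of body mass vs a
# phantom organ carrying 2.5% of body mass.
dose_mouse = 2.0   # mGy/MBq in the mouse organ (hypothetical)
dose_human = extrapolate_dose(dose_mouse,
                              organ_frac_animal=0.05,
                              organ_frac_human=0.025)
```

With these invented fractions the human organ receives twice the animal figure; in practice the organ masses come from the Cristy/Eckerman phantom, as the abstract states.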

  13. Non-linear triangle-based polynomial expansion nodal method for hexagonal core analysis

    International Nuclear Information System (INIS)

    Cho, Jin Young; Cho, Byung Oh; Joo, Han Gyu; Zee, Sung Qunn; Park, Sang Yong

    2000-09-01

    This report describes the implementation of the triangle-based polynomial expansion nodal (TPEN) method in the MASTER code, in conjunction with the coarse mesh finite difference (CMFD) framework, for hexagonal core design and analysis. The TPEN method is a variation of the higher-order polynomial expansion nodal (HOPEN) method that solves the multi-group neutron diffusion equation in hexagonal-z geometry. In contrast with the HOPEN method, only two-dimensional intranodal expansion is considered in the TPEN method for a triangular domain. The axial dependence of the intranodal flux is incorporated separately and is determined by the nodal expansion method (NEM) for a hexagonal node. For consistency with the node geometry of the MASTER code, which is based on hexagons, the TPEN solver is coded to solve one hexagonal node, composed of 6 triangular nodes, directly with a Gauss elimination scheme. To solve the CMFD linear system efficiently, the stabilized bi-conjugate gradient (BiCG) algorithm and the Wielandt eigenvalue shift method are adopted. For the construction of an efficient preconditioner for the BiCG algorithm, the incomplete LU (ILU) factorization scheme, which has been widely used in two-dimensional problems, is employed; to apply it to a three-dimensional problem, a symmetric Gauss-Seidel factorization scheme is used. To examine the accuracy of the TPEN solution, several eigenvalue benchmark problems and two transient problems, i.e., realistic VVER1000 and VVER440 rod ejection benchmark problems, were solved and compared with the respective references. The results of the eigenvalue benchmark problems indicate that the non-linear TPEN method is very accurate, showing less than 15 pcm of eigenvalue error and 1% of maximum power error, and fast enough to solve the three-dimensional VVER-440 problem within 5 seconds on a 733 MHz PENTIUM-III. In the case of the transient problems, the non-linear TPEN method also shows good results within a few minutes of

  14. Guided wave tomography in anisotropic media using recursive extrapolation operators

    Science.gov (United States)

    Volker, Arno

    2018-04-01

    Guided wave tomography is an advanced technology for quantitative wall thickness mapping to image wall loss due to corrosion or erosion. An inversion approach is used to match the measured phase (time) at a specific frequency to a model. The accuracy of the model determines the sizing accuracy. Particularly for seam welded pipes there is a measurable amount of anisotropy. Moreover, for small defects a ray-tracing based modelling approach is no longer accurate. Both issues are solved by applying a recursive wave field extrapolation operator assuming vertical transverse anisotropy. The inversion scheme is extended by not only estimating the wall loss profile but also the anisotropy, local material changes and transducer ring alignment errors. This makes the approach more robust. The approach will be demonstrated experimentally on different defect sizes, and a comparison will be made between this new approach and an isotropic ray-tracing approach. An example is given in Fig. 1 for a 75 mm wide, 5 mm deep defect. The wave field extrapolation based tomography clearly provides superior results.

  15. Semi-analog Monte Carlo (SMC) method for time-dependent non-linear three-dimensional heterogeneous radiative transfer problems

    International Nuclear Information System (INIS)

    Yun, Sung Hwan

    2004-02-01

    Radiative transfer is a complex phenomenon in which a radiation field interacts with material. This thermal radiative transfer phenomenon is governed by two equations: the photon balance equation and the material energy balance equation. The two equations are non-linear in the temperature, which makes the radiative transfer equation more difficult to solve. During the last several years, there have been many efforts to solve non-linear radiative transfer problems by the Monte Carlo method. Among them, the Semi-Analog Monte Carlo (SMC) method developed by Ahrens and Larsen is known to be accurate regardless of the time step size in the low temperature region, but their work is limited to one-dimensional, low temperature problems. In this thesis, we suggest methods to remove these limitations of the SMC method and apply it to more realistic problems. An initially cold problem was solved over the entire temperature region by using piecewise linear interpolation of the heat capacity, while the heat capacity is still fitted as a cubic curve within the lowest temperature region. If we assume the heat capacity to be linear in each temperature region, the non-linearity still remains in the radiative transfer equations. We therefore introduce a first-order Taylor expansion to linearize the non-linear radiative transfer equations. During the linearization procedure, absorption-reemission phenomena may be described by a conventional reemission time sampling scheme, similar to the repetitive sampling scheme in particle transport Monte Carlo methods. But this scheme causes significant stochastic errors, which necessitates many histories. Thus, we present a new reemission time sampling scheme that reduces stochastic errors by storing the information of absorption times. The comparison of the two schemes shows that the new scheme has smaller stochastic errors. Therefore, the improved SMC method is able to solve more realistic problems with

  16. The influence of an extrapolation chamber over the low energy X-ray beam radiation field

    Energy Technology Data Exchange (ETDEWEB)

    Tanuri de F, M. T.; Da Silva, T. A., E-mail: mttf@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Pampulha, Belo Horizonte, Minas Gerais (Brazil)

    2016-10-15

    Extrapolation chambers are detectors whose sensitive volume can be modified by changing the distance between the electrodes; they have been widely used in primary measurement systems for beta particles. In this work, a Monte Carlo simulation of a PTW 23392 extrapolation chamber was performed by means of the MCNPX code. Although the sensitive volume of an extrapolation chamber can be reduced to a very small size, its packaging is large enough to modify the radiation field and change the measured absorbed dose values. Experiments were performed to calculate correction factors for this purpose. The validation of the Monte Carlo model was done by comparing the spectra obtained with a CdTe detector according to the ISO 4037 criteria. Agreement better than 5% for half-value layers, 10% for spectral resolution and 1% for mean energy was found. It was verified that the correction factors depend on the X-ray beam quality. (Author)

  17. The influence of an extrapolation chamber over the low energy X-ray beam radiation field

    International Nuclear Information System (INIS)

    Tanuri de F, M. T.; Da Silva, T. A.

    2016-10-01

    Extrapolation chambers are detectors whose sensitive volume can be modified by changing the distance between the electrodes; they have been widely used in primary measurement systems for beta particles. In this work, a Monte Carlo simulation of a PTW 23392 extrapolation chamber was performed by means of the MCNPX code. Although the sensitive volume of an extrapolation chamber can be reduced to a very small size, its packaging is large enough to modify the radiation field and change the measured absorbed dose values. Experiments were performed to calculate correction factors for this purpose. The validation of the Monte Carlo model was done by comparing the spectra obtained with a CdTe detector according to the ISO 4037 criteria. Agreement better than 5% for half-value layers, 10% for spectral resolution and 1% for mean energy was found. It was verified that the correction factors depend on the X-ray beam quality. (Author)

  18. A Projected Non-linear Conjugate Gradient Method for Interactive Inverse Kinematics

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten; Erleben, Kenny

    2009-01-01

    Inverse kinematics is the problem of posing an articulated figure to obtain a wanted goal, without regarding inertia and forces. Joint limits are modeled as bounds on individual degrees of freedom, leading to a box-constrained optimization problem. We present a projected non-linear conjugate gradient optimization method suitable for box-constrained optimization problems in inverse kinematics. We show its application to inverse kinematics positioning of a human figure. Performance is measured and compared to a traditional Jacobian Transpose method. Visual quality of the developed method
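The box-constraint handling amounts to clamping each degree of freedom back into its joint limits after every step. The sketch below uses plain projected gradient descent on a toy quadratic goal function (a simplification of the paper's projected non-linear conjugate gradient, with invented numbers):

```python
def clamp(v, lo, hi):
    return max(lo, min(hi, v))

def projected_gradient(grad, x, lo, hi, step=0.1, iters=200):
    """Projected gradient descent: take a gradient step, then project each
    coordinate (joint angle) back into its box (joint limit)."""
    for _ in range(iters):
        g = grad(x)
        x = [clamp(x[i] - step * g[i], lo[i], hi[i]) for i in range(len(x))]
    return x

# Goal pose at (2, -3), but joint limits restrict both angles to [-1, 1]:
# the constrained optimum clamps each coordinate to its nearest bound.
grad = lambda x: [2.0 * (x[0] - 2.0), 2.0 * (x[1] + 3.0)]
x = projected_gradient(grad, [0.0, 0.0], [-1.0, -1.0], [1.0, 1.0])
```

The projection is what keeps every intermediate pose inside the joint limits, which is the property that makes the approach attractive for interactive posing.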

  19. An introduction to linear ordinary differential equations using the impulsive response method and factorization

    CERN Document Server

    Camporesi, Roberto

    2016-01-01

    This book presents a method for solving linear ordinary differential equations based on the factorization of the differential operator. The approach for the case of constant coefficients is elementary, and only requires a basic knowledge of calculus and linear algebra. In particular, the book avoids the use of distribution theory, as well as the other more advanced approaches: Laplace transform, linear systems, the general theory of linear equations with variable coefficients and variation of parameters. The case of variable coefficients is addressed using Mammana’s result for the factorization of a real linear ordinary differential operator into a product of first-order (complex) factors, as well as a recent generalization of this result to the case of complex-valued coefficients.

  20. Design for low dose extrapolation of carcinogenicity data. Technical report No. 24

    International Nuclear Information System (INIS)

    Wong, S.C.

    1979-06-01

    Parameters for modelling dose-response relationships in carcinogenesis models were found to be very complicated, especially for distinguishing low dose effects. The author concluded that extrapolation always bears the danger of providing misleading information

  1. A three operator split-step method covering a larger set of non-linear partial differential equations

    Science.gov (United States)

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential Fourier based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains a third-order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations to which this method applies is shown. The method is fully derived, and its implementation in the split-step architecture is shown. This paper lays the mathematical groundwork for an upcoming paper employing this method in white-light generation simulations in bulk material.
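The split-step architecture itself can be shown on a toy evolution equation. The sketch below applies Strang (symmetric) splitting to du/dt = -u - u^3, advancing the linear and nonlinear parts by their exact sub-flows; the equation and step counts are illustrative assumptions, not the paper's exponential Fourier scheme:

```python
import math

def linear_flow(u, dt):
    """Exact flow of the linear part du/dt = -u."""
    return u * math.exp(-dt)

def nonlinear_flow(u, dt):
    """Exact flow of the nonlinear part du/dt = -u^3
    (closed form: u(t) = u0 / sqrt(1 + 2*u0^2*t))."""
    return u / math.sqrt(1.0 + 2.0 * u * u * dt)

def strang_step(u, dt):
    """Half nonlinear step, full linear step, half nonlinear step:
    the symmetric composition gives second-order accuracy."""
    u = nonlinear_flow(u, dt / 2)
    u = linear_flow(u, dt)
    return nonlinear_flow(u, dt / 2)

def evolve(u0, t_end, n_steps):
    dt = t_end / n_steps
    u = u0
    for _ in range(n_steps):
        u = strang_step(u, dt)
    return u

coarse = evolve(1.0, 1.0, 10)       # 10 splitting steps
fine = evolve(1.0, 1.0, 10000)      # near-exact reference
```

In the full method the linear sub-flow is applied exactly in Fourier space and the nonlinear sub-flow in physical space; the alternation shown here is the same architecture in miniature.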

  2. Density-matrix renormalization group method for the conductance of one-dimensional correlated systems using the Kubo formula

    Science.gov (United States)

    Bischoff, Jan-Moritz; Jeckelmann, Eric

    2017-11-01

    We improve the density-matrix renormalization group (DMRG) evaluation of the Kubo formula for the zero-temperature linear conductance of one-dimensional correlated systems. The dynamical DMRG is used to compute the linear response of a finite system to an applied ac source-drain voltage; then the low-frequency finite-system response is extrapolated to the thermodynamic limit to obtain the dc conductance of an infinite system. The method is demonstrated on the one-dimensional spinless fermion model at half filling. Our method is able to replicate several predictions of the Luttinger liquid theory such as the renormalization of the conductance in a homogeneous conductor, the universal effects of a single barrier, and the resonant tunneling through a double barrier.

  3. A SOCIOLOGICAL ANALYSIS OF THE CHILDBEARING COEFFICIENT IN THE ALTAI REGION BASED ON METHOD OF FUZZY LINEAR REGRESSION

    Directory of Open Access Journals (Sweden)

    Sergei Vladimirovich Varaksin

    2017-06-01

    Purpose. Construction of a mathematical model of the dynamics of change in childbearing in the Altai region in 2000–2016, and analysis of the dynamics of changes in birth rates for multiple age categories of women of childbearing age. Methodology. An auxiliary element of the analysis is the construction of linear mathematical models of the dynamics of childbearing using the fuzzy linear regression method based on fuzzy numbers. Fuzzy linear regression is considered as an alternative to standard statistical linear regression for short time series with an unknown distribution law. The parameters of the fuzzy linear and standard statistical regressions for the childbearing time series were determined using a built-in MATLAB algorithm. The method of fuzzy linear regression has not yet been used in sociological research. Results. Conclusions are drawn about the socio-demographic changes in society, the high efficiency of the demographic policy of the leadership of the region and the country, and the applicability of the method of fuzzy linear regression for sociological analysis.

  4. Novel methods for Solving Economic Dispatch of Security-Constrained Unit Commitment Based on Linear Programming

    Science.gov (United States)

    Guo, Sangang

    2017-09-01

    There are two stages in solving security-constrained unit commitment (SCUC) problems within the Lagrangian framework: one is to obtain feasible unit states (UC), the other is economic dispatch (ED) of power for each unit. For fixed feasible unit states, an accurate solution of the ED is the more important factor in enhancing the efficiency of the solution to SCUC. Two novel methods, named the Convex Combinatorial Coefficient Method and the Power Increment Method, each based on a linear programming formulation obtained by piecewise linear approximation of the nonlinear convex fuel cost functions, are proposed for solving the ED. Numerical testing results show that the methods are effective and efficient.
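For convex piecewise-linear cost curves, the LP optimum can be reproduced by a merit-order fill: load the linear segments in order of incremental cost until demand is met. The sketch below uses invented units and costs; it illustrates why the piecewise-linear approximation makes the ED tractable, not the two proposed methods themselves:

```python
def dispatch(segments, demand):
    """segments: (unit, capacity_MW, incremental_cost) tuples, one per linear
    piece of a convex fuel-cost curve. Fills cheapest segments first; for
    convex piecewise-linear costs this greedy fill matches the LP optimum."""
    output = {}
    remaining = demand
    for unit, cap, cost in sorted(segments, key=lambda s: s[2]):
        take = min(cap, remaining)
        output[unit] = output.get(unit, 0) + take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return output

# Two units, two segments each; the second segment of each unit is costlier,
# which is exactly the convexity the piecewise approximation must preserve.
segs = [("G1", 50, 10.0), ("G1", 50, 14.0),
        ("G2", 40, 12.0), ("G2", 60, 20.0)]
plan = dispatch(segs, demand=120)
```

Here 120 MW is served by G1's cheap segment (50), G2's cheap segment (40), and 30 MW of G1's second segment, so the plan is G1 = 80 MW, G2 = 40 MW.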

  5. Effect of chamber enclosure time on soil respiration flux: A comparison of linear and non-linear flux calculation methods

    DEFF Research Database (Denmark)

    Kandel, Tanka P; Lærke, Poul Erik; Elsgaard, Lars

    2016-01-01

    One of the shortcomings of closed chamber methods for soil respiration (SR) measurements is the decreased CO2 diffusion rate from soil to chamber headspace that may occur due to increased chamber CO2 concentrations. This feedback on the diffusion rate may lead to underestimation of pre-deployment fluxes. ... A chamber was placed on fixed collars, and the CO2 concentration in the chamber headspace was recorded at 1-s intervals for 45 min. Fluxes were measured in different soil types (sandy, sandy loam and organic soils), and for various manipulations (tillage, rain and drought) and soil conditions (temperature and moisture) to obtain a range of fluxes with different shapes of flux curves. The linear method provided more stable flux results during short enclosure times (a few min) but underestimated initial fluxes by 15–300% after 45 min deployment time. Non-linear models reduced the underestimation as average underestimation...

  6. A study on linear and nonlinear Schrodinger equations by the variational iteration method

    International Nuclear Information System (INIS)

    Wazwaz, Abdul-Majid

    2008-01-01

    In this work, we introduce a framework to obtain exact solutions to linear and nonlinear Schrodinger equations. He's variational iteration method (VIM) is used for analytic treatment of these equations. Numerical examples are tested to show the pertinent features of the method.

  7. Linear Ordinary Differential Equations with Constant Coefficients. Revisiting the Impulsive Response Method Using Factorization

    Science.gov (United States)

    Camporesi, Roberto

    2011-01-01

    We present an approach to the impulsive response method for solving linear constant-coefficient ordinary differential equations based on the factorization of the differential operator. The approach is elementary: we only assume a basic knowledge of calculus and linear algebra. In particular, we avoid the use of distribution theory, as well as of…

  8. Performance study of Active Queue Management methods: Adaptive GRED, REDD, and GRED-Linear analytical model

    Directory of Open Access Journals (Sweden)

    Hussein Abdel-jaber

    2015-10-01

    Full Text Available Congestion control is one of the hot research topics that help maintain the performance of computer networks. This paper compares three Active Queue Management (AQM) methods, namely Adaptive Gentle Random Early Detection (Adaptive GRED), Random Early Dynamic Detection (REDD), and a GRED linear analytical model, with respect to different performance measures. Adaptive GRED and REDD are implemented in simulation, whereas GRED Linear is implemented as a discrete-time analytical model. Several performance measures are used to evaluate the effectiveness of the compared methods, mainly mean queue length, throughput, average queueing delay, overflow packet loss probability, and packet dropping probability. The ultimate aim is to identify the method that offers the most satisfactory performance in non-congestion or congestion scenarios. The first comparison results, based on different packet arrival probability values, show that GRED Linear provides a better mean queue length, average queueing delay, and packet overflow probability than the Adaptive GRED and REDD methods in the presence of congestion. Using the same evaluation measures, Adaptive GRED offers a more satisfactory performance than REDD when heavy congestion is present. When the finite queue capacity varies, the GRED Linear model provides the most satisfactory performance with respect to mean queue length and average queueing delay, and all the compared methods provide similar throughput performance. However, when the finite capacity value is large, the compared methods have similar results with regard to the probabilities of both packet overflow and packet dropping.
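The "gentle" variants compared above differ from classic RED mainly in the shape of the drop-probability curve. A minimal sketch of that curve follows; the parameter names and the gentle-region formula are the textbook RED/gentle-RED ones, not the exact analytical model evaluated in the paper.

```python
# Sketch of RED-style drop probability. Classic RED ramps linearly from 0
# to p_max between min_th and max_th, then drops everything; the "gentle"
# variant instead ramps from p_max to 1 between max_th and 2*max_th.

def red_drop_prob(avg_q, min_th, max_th, p_max, gentle=True):
    if avg_q < min_th:
        return 0.0
    if avg_q < max_th:
        # linear ramp from 0 to p_max between the two thresholds
        return p_max * (avg_q - min_th) / (max_th - min_th)
    if gentle and avg_q < 2 * max_th:
        # gentle region: ramp from p_max to 1 between max_th and 2*max_th
        return p_max + (1 - p_max) * (avg_q - max_th) / max_th
    return 1.0

probs = [red_drop_prob(q, 5, 15, 0.1) for q in (2, 10, 20, 40)]
```

Below `min_th` nothing is dropped, and at the midpoint of the ramp the probability is exactly `p_max / 2`.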

  9. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    Science.gov (United States)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.
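The abstract relates duality theory and Newton's method to known algorithms for projecting onto a standard simplex. The classical sort-based projection those algorithms implement can be sketched as follows; this is the standard algorithm from the literature, not the authors' code.

```python
import numpy as np

# Euclidean projection onto the standard simplex {x : x >= 0, sum(x) = 1}
# via the classical sort-and-threshold rule: shift all coordinates by the
# optimal dual variable theta and clip at zero.

def project_simplex(y):
    u = np.sort(y)[::-1]                     # sort descending
    css = np.cumsum(u)
    # largest k with u_k + (1 - sum_{j<=k} u_j) / k > 0
    rho = np.nonzero(u + (1 - css) / np.arange(1, len(y) + 1) > 0)[0][-1]
    theta = (css[rho] - 1) / (rho + 1)       # optimal shift (dual variable)
    return np.maximum(y - theta, 0.0)

x = project_simplex(np.array([0.5, 1.2, -0.3]))
```

The result is nonnegative and sums to one; the active set (coordinates clipped to zero) is exactly what a Newton-type method on the piecewise-quadratic dual would identify.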

  10. Deterministic operations research models and methods in linear optimization

    CERN Document Server

    Rader, David J

    2013-01-01

    Uniquely blends mathematical theory and algorithm design for understanding and modeling real-world problems. Optimization modeling and algorithms are key components of problem-solving across various fields of research, from operations research and mathematics to computer science and engineering. Addressing the importance of the algorithm design process, Deterministic Operations Research focuses on the design of solution methods for both continuous and discrete linear optimization problems. The result is a clear-cut resource for understanding three cornerstones of deterministic operations resear…

  11. {sup 131}I-CRTX internal dosimetry: animal model and human extrapolation

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Henrique Martins de; Ferreira, Andrea Vidal; Soares, Marcella Araugio; Silveira, Marina Bicalho; Santos, Raquel Gouvea dos [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN-CNEN-MG), Belo Horizonte, MG (Brazil)], e-mail: hma@cdtn.br

    2009-07-01

    Snake venom molecules have been shown to play a role not only in the survival and proliferation of tumor cells but also in the processes of tumor cell adhesion, migration and angiogenesis. {sup 125}I-Crtx, a radiolabeled version of a peptide derived from Crotalus durissus terrificus snake venom, specifically binds to tumors and triggers apoptotic signalling. In the present work, {sup 125}I-Crtx biokinetic data (evaluated in mice bearing Ehrlich tumors) were treated by the MIRD formalism to perform internal dosimetry studies. Doses in several organs of mice, as well as in the implanted tumor, were determined for {sup 131}I-Crtx. The dose results obtained for the animal model were extrapolated to humans assuming a similar concentration ratio among the various tissues between mouse and human. In the extrapolation, human organ masses from the Cristy/Eckerman phantom were used. Both penetrating and non-penetrating radiation from {sup 131}I in the tissue were considered in the dose calculations. (author)

  12. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  13. To study the linear and nonlinear optical properties of Se-Te-Bi-Sn/PVP (polyvinylpyrrolidone) nanocomposites

    Science.gov (United States)

    Tyagi, Chetna; Yadav, Preeti; Sharma, Ambika

    2018-05-01

    The present work reports an optical study of Se82Te15Bi1.0Sn2.0/polyvinylpyrrolidone (PVP) nanocomposites. Bulk chalcogenide glasses were prepared by the well-known melt quenching technique. A wet chemical technique is proposed for making the composite of Se82Te15Bi1.0Sn2.0 and PVP polymer, as it is easy to handle and cost effective. The composite films were made on glass slides from the solution of Se-Te-Bi-Sn and PVP polymer using the spin coating technique. Transmission as well as absorbance was recorded using a UV-Vis-NIR spectrophotometer in the spectral range 350-700 nm. The linear refractive index (n) of the polymer nanocomposites was calculated by the Swanepoel approach. The linear refractive index of the PVP-doped Se82Te15Bi1.0Sn2.0 chalcogenide is found to be 1.7. The optical band gap has been evaluated by means of the Tauc extrapolation method. The Tichy and Ticha model was utilized for the characterization of the nonlinear refractive index (n2).

  14. Mehar Methods for Fuzzy Optimal Solution and Sensitivity Analysis of Fuzzy Linear Programming with Symmetric Trapezoidal Fuzzy Numbers

    Directory of Open Access Journals (Sweden)

    Sukhpreet Kaur Sidhu

    2014-01-01

    Full Text Available The drawbacks of the existing methods for obtaining the fuzzy optimal solution of linear programming problems in which the coefficients of the constraints are represented by real numbers, and all the other parameters as well as the variables are represented by symmetric trapezoidal fuzzy numbers, are pointed out. To resolve these drawbacks, a new method (named the Mehar method) is proposed for the same linear programming problems. Also, with the help of the proposed Mehar method, a new method, much easier than the existing ones, is proposed to deal with the sensitivity analysis of the same type of linear programming problems.

  15. Mathematical Methods in Wave Propagation: Part 2--Non-Linear Wave Front Analysis

    Science.gov (United States)

    Jeffrey, Alan

    1971-01-01

    The paper presents applications and methods of analysis for non-linear hyperbolic partial differential equations. The paper is concluded by an account of wave front analysis as applied to the piston problem of gas dynamics. (JG)

  16. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.

  17. Solution of systems of linear algebraic equations by the method of summation of divergent series

    International Nuclear Information System (INIS)

    Kirichenko, G.A.; Korovin, Ya.S.; Khisamutdinov, M.V.; Shmojlov, V.I.

    2015-01-01

    A method for solving systems of linear algebraic equations has been proposed on the basis of summation of the corresponding continued fractions. The proposed algorithm is classified as a direct algorithm, providing an exact solution in a finite number of operations. Examples of solving systems of linear algebraic equations are presented and the effectiveness of the algorithm is estimated.

  18. Exact solution of some linear matrix equations using algebraic methods

    Science.gov (United States)

    Djaferis, T. E.; Mitter, S. K.

    1977-01-01

    A study is made of solution methods for linear matrix equations, including Lyapunov's equation, using methods of modern algebra. The emphasis is on the use of finite algebraic procedures which are easily implemented on a digital computer and which lead to an explicit solution of the problem. The action f sub BA is introduced and a basic lemma is proven. The equation PA + BP = -C, as well as the Lyapunov equation, is analyzed. Algorithms are given for the solution of the Lyapunov equation, and comments are made on their arithmetic complexity. The equation P - A'PA = Q is studied and numerical examples are given.
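The equation PA + BP = -C studied above is a Sylvester-type equation, which has a unique solution when A and -B share no eigenvalue. A brute-force vectorization sketch (using vec(PA) = (Aᵀ ⊗ I) vec(P) and vec(BP) = (I ⊗ B) vec(P)) shows the structure; the paper's finite algebraic procedures are more refined than this dense route, and the matrices below are invented.

```python
import numpy as np

# Solve PA + BP = -C by vectorization: (A^T kron I + I kron B) vec(P) = vec(-C),
# with column-stacking vec (order="F"). Practical only for small dense problems.

def solve_pa_bp(A, B, C):
    n = A.shape[0]
    I = np.eye(n)
    K = np.kron(A.T, I) + np.kron(I, B)
    p = np.linalg.solve(K, -C.flatten(order="F"))
    return p.reshape((n, n), order="F")

A = np.array([[2.0, 0.0], [1.0, 3.0]])
B = np.array([[1.0, 0.5], [0.0, 2.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
P = solve_pa_bp(A, B, C)   # residual P @ A + B @ P + C should vanish
```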

  19. An Introduction to Graphical and Mathematical Methods for Detecting Heteroscedasticity in Linear Regression.

    Science.gov (United States)

    Thompson, Russel L.

    Homoscedasticity is an important assumption of linear regression. This paper explains what it is and why it is important to the researcher. Graphical and mathematical methods for testing the homoscedasticity assumption are demonstrated. Sources and types of heteroscedasticity are discussed, and methods for correction are…

  20. Biochemical methane potential prediction of plant biomasses: Comparing chemical composition versus near infrared methods and linear versus non-linear models.

    Science.gov (United States)

    Godin, Bruno; Mayer, Frédéric; Agneessens, Richard; Gerin, Patrick; Dardenne, Pierre; Delfosse, Philippe; Delcarte, Jérôme

    2015-01-01

    The reliability of different models to predict the biochemical methane potential (BMP) of various plant biomasses using a multispecies dataset was compared. The most reliable prediction models of the BMP were those based on the near infrared (NIR) spectrum compared to those based on the chemical composition. The NIR predictions of local (specific regression and non-linear) models were able to estimate quantitatively, rapidly, cheaply and easily the BMP. Such a model could be further used for biomethanation plant management and optimization. The predictions of non-linear models were more reliable compared to those of linear models. The presentation form (green-dried, silage-dried and silage-wet form) of biomasses to the NIR spectrometer did not influence the performances of the NIR prediction models. The accuracy of the BMP method should be improved to enhance further the BMP prediction models. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. KEELE, Minimization of Nonlinear Function with Linear Constraints, Variable Metric Method

    International Nuclear Information System (INIS)

    Westley, G.W.

    1975-01-01

    1 - Description of problem or function: KEELE is a linearly constrained nonlinear programming algorithm for locating a local minimum of a function of n variables with the variables subject to linear equality and/or inequality constraints. 2 - Method of solution: A variable metric procedure is used where the direction of search at each iteration is obtained by multiplying the negative of the gradient vector by a positive definite matrix which approximates the inverse of the matrix of second partial derivatives associated with the function. 3 - Restrictions on the complexity of the problem: Array dimensions limit the number of variables to 20 and the number of constraints to 50. These can be changed by the user

  2. Extrapolation of rate constants of reactions producing H{sub 2} and O{sub 2} in radiolysis of water at high temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Leblanc, R.; Ghandi, K.; Hackman, B.; Liu, G. [Mount Allison Univ., Sackville, NB (Canada)

    2014-07-01

    One target of our research is to extrapolate known data on the rate constants of reactions and add corrections to estimate the rate constants at the higher temperatures reached by SCWR reactors. The focus of this work was to extrapolate known data on the rate constants of reactions that produce hydrogen or oxygen with a rate constant below 10{sup 10} mol{sup -1} s{sup -1} at room temperature. The extrapolation is done taking into account the change in the diffusion rate of the interacting species and the cage effect with thermodynamic conditions. The extrapolations are done over a wide temperature range and under isobaric conditions. (author)
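The diffusion-rate correction mentioned above can be illustrated with the standard Smoluchowski picture, where a diffusion-limited rate constant scales with the diffusion coefficient and, via the Stokes-Einstein relation, with T/η(T). The viscosity numbers below are illustrative water-like values, not SCWR-grade data, and this first-order scaling ignores the cage effect the authors also model.

```python
# Hedged sketch: scale a diffusion-limited rate constant from a reference
# temperature using k_diff ∝ D(T) and Stokes-Einstein D ∝ T / eta(T).

def scaled_rate(k_ref, T_ref, T, eta_ref, eta):
    """Extrapolate a diffusion-limited rate constant from T_ref to T."""
    return k_ref * (T / T_ref) * (eta_ref / eta)

# Water viscosity drops steeply with temperature, so k rises faster than T.
k25 = 1.0e9                                  # L mol^-1 s^-1 at 298 K (illustrative)
k_hot = scaled_rate(k25, 298.0, 573.0, eta_ref=8.9e-4, eta=0.9e-4)
```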

  3. Linearly decoupled energy-stable numerical methods for multi-component two-phase compressible flow

    KAUST Repository

    Kou, Jisheng

    2017-12-06

    In this paper, for the first time we propose two linear, decoupled, energy-stable numerical schemes for multi-component two-phase compressible flow with a realistic equation of state (e.g. Peng-Robinson equation of state). The methods are constructed based on the scalar auxiliary variable (SAV) approaches for Helmholtz free energy and the intermediate velocities that are designed to decouple the tight relationship between velocity and molar densities. The intermediate velocities are also involved in the discrete momentum equation to ensure a consistency relationship with the mass balance equations. Moreover, we propose a component-wise SAV approach for a multi-component fluid, which requires solving a sequence of linear, separate mass balance equations. We prove that the methods have the unconditional energy-dissipation feature. Numerical results are presented to verify the effectiveness of the proposed methods.

  4. Multivariable extrapolation of grand canonical free energy landscapes

    Science.gov (United States)

    Mahynski, Nathan A.; Errington, Jeffrey R.; Shen, Vincent K.

    2017-12-01

    We derive an approach for extrapolating the free energy landscape of multicomponent systems in the grand canonical ensemble, obtained from flat-histogram Monte Carlo simulations, from one set of temperature and chemical potentials to another. This is accomplished by expanding the landscape in a Taylor series at each value of the order parameter which defines its macrostate phase space. The coefficients in each Taylor polynomial are known exactly from fluctuation formulas, which may be computed by measuring the appropriate moments of extensive variables that fluctuate in this ensemble. Here we derive the expressions necessary to define these coefficients up to arbitrary order. In principle, this enables a single flat-histogram simulation to provide complete thermodynamic information over a broad range of temperatures and chemical potentials. Using this, we also show how to combine a small number of simulations, each performed at different conditions, in a thermodynamically consistent fashion to accurately compute properties at arbitrary temperatures and chemical potentials. This method may significantly increase the computational efficiency of biased grand canonical Monte Carlo simulations, especially for multicomponent mixtures. Although approximate, this approach is amenable to high-throughput and data-intensive investigations where it is preferable to have a large quantity of reasonably accurate simulation data, rather than a smaller amount with a higher accuracy.
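For a single component and to first order in β, the procedure described above can be sketched as follows. The derivative of ln Π(N) with respect to β is taken from the standard fluctuation formula, d ln Π(N)/dβ = (μN − ⟨U⟩_N) − (μ⟨N⟩ − ⟨U⟩), and the extrapolated landscape is renormalized; the data are synthetic, higher-order terms and mixtures are omitted, and the exact ensemble conventions of the paper may differ.

```python
import numpy as np

# First-order Taylor extrapolation of a grand-canonical free energy
# landscape ln Pi(N) from beta0 to beta1, with per-macrostate coefficients
# from the fluctuation formula, followed by renormalization.

def extrapolate_lnpi(lnpi, u_n, mu, n_vals, beta0, beta1):
    p = np.exp(lnpi - lnpi.max())
    p /= p.sum()                               # macrostate probabilities
    avg = mu * (p @ n_vals) - (p @ u_n)        # ensemble average of (mu*N - U)
    dlnpi = (mu * n_vals - u_n) - avg          # d lnPi(N) / d beta, per macrostate
    new = lnpi + (beta1 - beta0) * dlnpi       # first-order Taylor step
    # renormalize via log-sum-exp so that sum(exp(new)) = 1
    return new - np.log(np.sum(np.exp(new - new.max()))) - new.max()

n_vals = np.arange(5.0)
lnpi = -0.5 * (n_vals - 2.0) ** 2              # synthetic landscape
u_n = -1.0 * n_vals                            # synthetic mean energies <U>_N
new = extrapolate_lnpi(lnpi, u_n, mu=-0.5, n_vals=n_vals, beta0=1.0, beta1=1.1)
```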

  5. A Low-Complexity ESPRIT-Based DOA Estimation Method for Co-Prime Linear Arrays.

    Science.gov (United States)

    Sun, Fenggang; Gao, Bin; Chen, Lizhen; Lan, Peng

    2016-08-25

    The problem of direction-of-arrival (DOA) estimation is investigated for co-prime array, where the co-prime array consists of two uniform sparse linear subarrays with extended inter-element spacing. For each sparse subarray, true DOAs are mapped into several equivalent angles impinging on the traditional uniform linear array with half-wavelength spacing. Then, by applying the estimation of signal parameters via rotational invariance technique (ESPRIT), the equivalent DOAs are estimated, and the candidate DOAs are recovered according to the relationship among equivalent and true DOAs. Finally, the true DOAs are estimated by combining the results of the two subarrays. The proposed method achieves a better complexity-performance tradeoff as compared to other existing methods.
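The per-subarray step relies on standard ESPRIT, which the following sketch demonstrates on a single uniform linear array with half-wavelength spacing; the co-prime mapping to equivalent angles and the combination of the two subarrays' results are not reproduced, and the scenario data are synthetic.

```python
import numpy as np

# Standard ESPRIT on a ULA: signal subspace from the sample covariance,
# rotation operator from the shift-invariance of the two overlapping
# subarrays, DOAs from the eigenvalue phases.

def esprit_doa(X, n_src):
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    _, vecs = np.linalg.eigh(R)
    Es = vecs[:, -n_src:]                     # signal subspace (largest eigvals)
    Phi = np.linalg.pinv(Es[:-1]) @ Es[1:]    # shift invariance: Es_up Phi ~ Es_low
    w = np.angle(np.linalg.eigvals(Phi))      # = -pi*sin(theta) for d = lambda/2
    return np.sort(np.degrees(np.arcsin(-w / np.pi)))

rng = np.random.default_rng(0)
m, snaps = 8, 200
angles = np.radians([-20.0, 30.0])
A = np.exp(-1j * np.pi * np.outer(np.arange(m), np.sin(angles)))  # steering matrix
S = rng.standard_normal((2, snaps)) + 1j * rng.standard_normal((2, snaps))
N = 0.01 * (rng.standard_normal((m, snaps)) + 1j * rng.standard_normal((m, snaps)))
est = esprit_doa(A @ S + N, 2)
```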

  6. Flavor extrapolation in lattice QCD

    International Nuclear Information System (INIS)

    Duffy, W.C.

    1984-01-01

    Explicit calculation of the effect of virtual quark-antiquark pairs in lattice QCD has eluded researchers. To include their effect explicitly one must calculate the determinant of the fermion-fermion coupling matrix. Owing to the large number of sites in a continuum limit size lattice, direct evaluation of this term requires an unrealistic amount of computer time. The effect of the virtual pairs can be approximated by ignoring this term and adjusting lattice couplings to reproduce experimental results. This procedure is called the valence approximation since it ignores all but the minimal number of quarks needed to describe hadrons. In this work the effect of the quark-antiquark pairs has been incorporated in a theory with an effective negative number of quark flavors contributing to the closed loops. Various particle masses and decay constants have been calculated for this theory and for one with no virtual pairs. The author attempts to extrapolate results towards positive numbers of quark flavors. The results show approximate agreement with experimental measurements and demonstrate the smoothness of lattice expectations in the number of quark flavors

  7. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin

    2014-05-01

    Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.

  8. Assessing ecological effects of radionuclides: data gaps and extrapolation issues

    International Nuclear Information System (INIS)

    Garnier-Laplace, Jacqueline; Gilek, Michael; Sundell-Bergman, Synnoeve; Larsson, Carl-Magnus

    2004-01-01

    By inspection of the FASSET database on radiation effects on non-human biota, one of the major difficulties in the implementation of ecological risk assessments for radioactive pollutants is found to be the lack of data for chronic low-level exposure. A critical review is provided of a number of extrapolation issues that arise in undertaking an ecological risk assessment: acute versus chronic exposure regime; radiation quality including relative biological effectiveness and radiation weighting factors; biological effects from an individual to a population level, including radiosensitivity and lifestyle variations throughout the life cycle; single radionuclide versus multi-contaminants. The specificities of the environmental situations of interest (mainly chronic low-level exposure regimes) emphasise the importance of reproductive parameters governing the demography of the population within a given ecosystem and, as a consequence, the structure and functioning of that ecosystem. As an operational conclusion to keep in mind for any site-specific risk assessment, the present state-of-the-art on extrapolation issues allows us to grade the magnitude of the uncertainties as follows: one species to another > acute to chronic = external to internal = mixture of stressors > individual to population > ecosystem structure to function

  9. Effective Elliptic Models for Efficient Wavefield Extrapolation in Anisotropic Media

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali

    2014-01-01

    Wavefield extrapolation operator for elliptically anisotropic media offers significant cost reduction compared to that of transversely isotropic media (TI), especially when the medium exhibits tilt in the symmetry axis (TTI). However, elliptical anisotropy does not provide accurate focusing for TI media. Therefore, we develop effective elliptically anisotropic models that correctly capture the kinematic behavior of the TTI wavefield. Specifically, we use an iterative elliptically anisotropic eikonal solver that provides the accurate traveltimes for a TI model. The resultant coefficients of the elliptical eikonal provide the effective models. These effective models allow us to use the cheaper wavefield extrapolation operator for elliptic media to obtain approximate wavefield solutions for TTI media. Despite the fact that the effective elliptic models are obtained by kinematic matching using high-frequency asymptotics, the resulting wavefield contains most of the critical wavefield components, including the frequency dependency and caustics, if present, with reasonable accuracy. The methodology developed here offers a much better cost versus accuracy tradeoff for wavefield computations in TTI media, considering the cost-prohibitive nature of the problem. We demonstrate the applicability of the proposed approach on the BP TTI model.

  10. Fuzzy Linear Regression for the Time Series Data which is Fuzzified with SMRGT Method

    Directory of Open Access Journals (Sweden)

    Seçil YALAZ

    2016-10-01

    Full Text Available Our work on regression and classification provides a new contribution to the analysis of time series, which have been used in many areas for years. Because convergence cannot be obtained with the methods used to correct autocorrelation in time series regression, success is not achieved, or one is forced to change the degree of the model. Changing the degree of the model may not be desirable in every situation. In our study, recommended for these situations, the time series data were fuzzified by using a simple membership function and the fuzzy rule generation technique (SMRGT), and an equation for estimating the future was created by applying the fuzzy least squares regression (FLSR) method, a simple linear regression method, to these data. Although SMRGT is successful in determining the flow discharge in open channels and can be used confidently for flow discharge modeling in open canals, as well as in pipe flow with some modifications, there is no evidence that this technique is successful in fuzzy linear regression modeling. Therefore, in order to address the lack of such a model, a new hybrid model is described in this study. In conclusion, to demonstrate the efficiency of our methods, classical linear regression for time series data and linear regression for fuzzy time series data were applied to two different data sets, and the performances of these two approaches were compared using different measures.

  11. A linear complementarity method for the solution of vertical vehicle-track interaction

    Science.gov (United States)

    Zhang, Jian; Gao, Qiang; Wu, Feng; Zhong, Wan-Xie

    2018-02-01

    A new method is proposed for the solution of the vertical vehicle-track interaction including a separation between wheel and rail. The vehicle is modelled as a multi-body system using rigid bodies, and the track is treated as a three-layer beam model in which the rail is considered as an Euler-Bernoulli beam and both the sleepers and the ballast are represented by lumped masses. A linear complementarity formulation is directly established using a combination of the wheel-rail normal contact condition and the generalised-α method. This linear complementarity problem is solved using the Lemke algorithm, and the wheel-rail contact force can be obtained. Then the dynamic responses of the vehicle and the track are solved without iteration based on the generalised-α method. The same equations of motion for the vehicle and track are adopted at the different wheel-rail contact situations. This method can remove some restrictions, that is, time-dependent mass, damping and stiffness matrices of the coupled system, multiple equations of motion for the different contact situations and the effect of the contact stiffness. Numerical results demonstrate that the proposed method is effective for simulating the vehicle-track interaction including a separation between wheel and rail.
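The wheel-rail contact condition above takes the standard linear complementarity form w = Mz + q, w ≥ 0, z ≥ 0, zᵀw = 0. The paper solves it with Lemke's pivoting algorithm; the sketch below instead uses a simple projected Gauss-Seidel iteration (valid, for example, for symmetric positive definite M) purely to illustrate the complementarity structure, with invented data.

```python
import numpy as np

# Projected Gauss-Seidel for the LCP  w = M z + q, w >= 0, z >= 0, z^T w = 0.
# Each sweep updates one component at a time and projects onto z_i >= 0.

def lcp_pgs(M, q, iters=200):
    z = np.zeros_like(q)
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # residual excluding own term
            z[i] = max(0.0, -r / M[i, i])          # project onto z_i >= 0
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # SPD, so PGS converges
q = np.array([-2.0, 1.0])
z = lcp_pgs(M, q)
w = M @ z + q                            # complementary slackness: z.w = 0
```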

  12. Optimal overlapping of waveform relaxation method for linear differential equations

    International Nuclear Information System (INIS)

    Yamada, Susumu; Ozawa, Kazufumi

    2000-01-01

    The waveform relaxation (WR) method is extremely suitable for solving large systems of ordinary differential equations (ODEs) on parallel computers, but the convergence of the method is generally slow. In order to accelerate the convergence, methods which decouple the system into many subsystems, with some components overlapped between adjacent subsystems, have been proposed. These methods, in general, converge much faster than those without overlapping, but the computational cost per iteration becomes larger due to the increase of the dimension of each subsystem. In this research, the convergence of the WR method for solving linear ODEs with constant coefficients is investigated, and a strategy to determine the number of overlapped components which minimizes the cost of the parallel computation is proposed. Numerical experiments on an SR2201 parallel computer show that the number of overlapped components estimated by the proposed strategy is reasonable. (author)
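A minimal Jacobi waveform-relaxation sketch for y' = Ay with no overlap illustrates the basic iteration: each scalar subsystem is integrated over the whole time window using the previous iterate's waveform for the coupling terms. The overlapping studied in the paper would enlarge each block with shared components; the problem data here are invented.

```python
import numpy as np

# Jacobi waveform relaxation for y' = A y: split A into its diagonal (own
# dynamics) and off-diagonal part (coupling, frozen at the old waveform),
# then integrate every subsystem with explicit Euler once per sweep.

def wr_jacobi(A, y0, T=1.0, steps=100, sweeps=30):
    dt = T / steps
    d = np.diag(A)                       # each 1-d subsystem's own coefficient
    R = A - np.diag(d)                   # coupling, taken from the old iterate
    Y = np.tile(y0, (steps + 1, 1))      # initial guess: constant waveform
    for _ in range(sweeps):
        Ynew = np.empty_like(Y)
        Ynew[0] = y0
        for k in range(steps):           # explicit Euler inside each sweep
            Ynew[k + 1] = Ynew[k] + dt * (d * Ynew[k] + R @ Y[k])
        Y = Ynew
    return Y

A = np.array([[-2.0, 1.0], [1.0, -3.0]])
Y = wr_jacobi(A, np.array([1.0, 0.0]))
```

On a finite window the WR iteration converges superlinearly, so after a few dozen sweeps the waveform coincides with the directly integrated one to machine precision.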

  13. Life assessment of PVD based hard coatings by linear sweep voltammetry for high performance industrial application

    International Nuclear Information System (INIS)

    Malik, M.; Alam, S.; Irfan, M.; Hassan, Z.

    2006-01-01

    PVD-based hard coatings have achieved remarkable improvements in the tribological and surface properties of coated tools and dies. PVD-based hard coatings have a wide range of industrial applications, especially in aerospace and automobile parts, where they encounter different chemical attacks; to maintain industrial performance, these coatings must provide excellent resistance against corrosion, high-temperature oxidation and chemical reaction. This paper focuses on the behaviour of PVD-based hard coatings under different corrosive environments such as H/sub 2/SO/sub 4/, HCl, NaCl, KCl, NaOH, etc. The corrosion rate was calculated by the linear sweep voltammetry method, where Tafel extrapolation curves were used to continuously monitor the corrosion rate. The results show that these coatings have excellent resistance against chemical attack. (author)
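Tafel extrapolation recovers the corrosion current by fitting the linear branches of log|i| versus potential far from the corrosion potential and intersecting them. The sketch below does this on synthetic Butler-Volmer-type data; all numerical values are illustrative, not measured coating data.

```python
import numpy as np

# Synthetic polarization curve: i = i_corr * (10^(eta/ba) - 10^(-eta/bc)),
# with overpotential eta = E - E_corr. Fitting log10|i| on each branch and
# intersecting the two Tafel lines recovers (E_corr, i_corr).

i_corr, E_corr, ba, bc = 1e-6, -0.45, 0.06, 0.12   # illustrative values

def current(E):
    eta = E - E_corr
    return i_corr * (10 ** (eta / ba) - 10 ** (-eta / bc))

# anodic branch: fit log10(i) vs E well above E_corr
Ea = np.linspace(E_corr + 0.10, E_corr + 0.20, 20)
pa = np.polyfit(Ea, np.log10(current(Ea)), 1)
# cathodic branch: fit log10(|i|) vs E well below E_corr
Ec = np.linspace(E_corr - 0.20, E_corr - 0.10, 20)
pc = np.polyfit(Ec, np.log10(-current(Ec)), 1)
# the two Tafel lines intersect near (E_corr, log10(i_corr))
E_x = (pc[1] - pa[1]) / (pa[0] - pc[0])
i_est = 10 ** np.polyval(pa, E_x)
```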

  14. First-order systems of linear partial differential equations: normal forms, canonical systems, transform methods

    Directory of Open Access Journals (Sweden)

    Heinz Toparkus

    2014-04-01

    Full Text Available In this paper we consider first-order systems with constant coefficients for two real-valued functions of two real variables. This is both a problem in its own right and an alternative view of the classical linear partial differential equations of second order with constant coefficients. The classification of the systems is done using elementary methods of linear algebra. Each type has its special canonical form in the associated characteristic coordinate system. Initial value problems can then be formulated in appropriate base domains, and solutions to these problems can be sought by means of transform methods.

  15. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained a lot of popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted considerable interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure, which due to the finite number of iterations introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)

  16. Unified Scaling Law for flux pinning in practical superconductors: III. Minimum datasets, core parameters, and application of the Extrapolative Scaling Expression

    Science.gov (United States)

    Ekin, Jack W.; Cheggour, Najib; Goodrich, Loren; Splett, Jolene

    2017-03-01

    In Part 2 of these articles, an extensive analysis of pinning-force curves and raw scaling data was used to derive the Extrapolative Scaling Expression (ESE). This is a parameterization of the Unified Scaling Law (USL) that has the extrapolation capability of fundamental unified scaling, coupled with the application ease of a simple fitting equation. Here in Part 3, the accuracy of the ESE relation to interpolate and extrapolate limited critical-current data to obtain complete Ic(B,T,ɛ) datasets is evaluated and compared with present fitting equations. Accuracy is analyzed in terms of root mean square (RMS) error and fractional deviation statistics. Highlights from 92 test cases are condensed and summarized, covering most fitting protocols and proposed parameterizations of the USL. The results show that ESE reliably extrapolates critical currents at fields B, temperatures T, and strains ɛ that are remarkably different from the fitted minimum dataset. Depending on whether the conductor is moderate-Jc or high-Jc, effective RMS extrapolation errors for ESE are in the range 2-5 A at 12 T, which approaches the Ic measurement error (1-2%). The minimum dataset for extrapolating full Ic(B,T,ɛ) characteristics is also determined from raw scaling data. It consists of one set of Ic(B,ɛ) data at a fixed temperature (e.g., liquid helium temperature), and one set of Ic(B,T) data at a fixed strain (e.g., zero applied strain). Error analysis of extrapolations from the minimum dataset with different fitting equations shows that ESE reduces the percentage extrapolation errors at individual data points at high fields, temperatures, and compressive strains down to 1/10th to 1/40th the size of those for extrapolations with present fitting equations. Depending on the conductor, percentage fitting errors for interpolations are also reduced to as little as 1/15th the size. The extrapolation accuracy of the ESE relation offers the prospect of straightforward implementation of

  17. Comparison of different methods for the solution of sets of linear equations

    International Nuclear Information System (INIS)

    Bilfinger, T.; Schmidt, F.

    1978-06-01

    The application of conjugate-gradient methods as novel general iterative methods for the solution of sets of linear equations with symmetric system matrices led to this paper, in which these methods are compared with the conventional, differently accelerated Gauss-Seidel iteration. In addition, the direct Cholesky method was also included in the comparison. The studies focused mainly on memory requirements, computing time, speed of convergence, and accuracy for different conditionings of the system matrices, from which the sensitivity of the methods to the influence of truncation errors may also be recognized. (orig.) 891 RW [de

  18. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear law is not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear form by an interval-halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table
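    The interval-halving idea can be sketched as follows (a simplified reconstruction, not LINEAR's actual source; halving at the geometric midpoint and a relative tolerance are our assumptions). Each log-log panel is split until linear-linear interpolation reproduces the midpoint value within the tolerance:

```python
import math

def loglog_interp(x, x1, y1, x2, y2):
    """Cross section at x under log-log interpolation between two points."""
    t = math.log(x / x1) / math.log(x2 / x1)
    return y1 * (y2 / y1) ** t

def linearize(x1, y1, x2, y2, tol=1e-3):
    """Replace one log-log panel by lin-lin points accurate to `tol`
    (relative) at panel midpoints, by recursive interval halving."""
    xm = math.sqrt(x1 * x2)                          # midpoint in log space
    ym = loglog_interp(xm, x1, y1, x2, y2)           # true value there
    ylin = y1 + (y2 - y1) * (xm - x1) / (x2 - x1)    # lin-lin estimate
    if abs(ylin - ym) <= tol * abs(ym):
        return [(x1, y1)]                            # panel is linear enough
    return (linearize(x1, y1, xm, ym, tol)
            + linearize(xm, ym, x2, y2, tol))

def linearize_table(xs, ys, tol=1e-3):
    """Linearize a whole (positive-valued) cross-section table."""
    pts = []
    for (x1, y1), (x2, y2) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        pts += linearize(x1, y1, x2, y2, tol)
    pts.append((xs[-1], ys[-1]))
    return pts
```

A thinning pass (as in LINEAR's multipoint fractional-error algorithm) would then prune points that a linear-linear table can spare; that step is omitted here.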

  19. Carbon 13 nuclear magnetic resonance chemical shifts empiric calculations of polymers by multi linear regression and molecular modeling

    International Nuclear Information System (INIS)

    Da Silva Pinto, P.S.; Eustache, R.P.; Audenaert, M.; Bernassau, J.M.

    1996-01-01

    This work deals with empirical calculation of carbon-13 nuclear magnetic resonance chemical shifts of polymers by multiple linear regression and molecular modeling. Multiple linear regression is one way to obtain an equation able to describe the behaviour of the chemical shift for the molecules in the data base (rigid molecules with carbons). The methodology consists of defining structural descriptor parameters that can be related to the known carbon-13 chemical shifts of these molecules. Linear regression is then used to determine the significant parameters of the equation. The resulting equation can be extrapolated to molecules that resemble those of the data base. (O.L.). 20 refs., 4 figs., 1 tab
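    A minimal version of such a regression is an ordinary least-squares fit; the descriptor counts below are made-up stand-ins for the paper's structural parameters:

```python
import numpy as np

def fit_shift_model(D, shifts):
    """Fit shift ≈ c0 + D @ c by least squares.

    D is an (n_carbons, n_descriptors) matrix of structural descriptors
    (hypothetical here, e.g. counts of alpha/beta/gamma substituents);
    returns [intercept, c1, ..., cn].
    """
    X = np.column_stack([np.ones(len(D)), D])
    coef, *_ = np.linalg.lstsq(X, shifts, rcond=None)
    return coef

def predict_shift(coef, descriptors):
    """Extrapolate the fitted equation to a new carbon environment."""
    return coef[0] + np.asarray(descriptors, dtype=float) @ coef[1:]
```

As the abstract notes, such an equation is only trustworthy for molecules resembling those in the data base; extrapolating to chemically different environments takes the descriptors outside their fitted range.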

  20. Excited-state lifetime measurements: Linearization of the Foerster equation by the phase-plane method

    International Nuclear Information System (INIS)

    Love, J.C.; Demas, J.N.

    1983-01-01

    The Foerster equation describes excited-state decay curves involving resonance intermolecular energy transfer. A linearized solution based on the phase-plane method has been developed. The new method is quick, insensitive to the fitting region, accurate, and precise

  1. Testing an extrapolation chamber in computed tomography standard beams

    Science.gov (United States)

    Castro, M. C.; Silva, N. F.; Caldas, L. V. E.

    2018-03-01

    Computed tomography (CT) is responsible for the highest dose values delivered to patients. Therefore, the radiation doses in this procedure must be accurate. However, there is no primary standard system for this kind of radiation beam yet. In the search for a CT primary standard, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) of the Instituto de Pesquisas Energéticas e Nucleares (IPEN) was tested in this work. The results were shown to be within the international recommended limits.

  2. Failure of the straight-line DCS boundary when extrapolated to the hypobaric realm.

    Science.gov (United States)

    Conkin, J; Van Liew, H D

    1992-11-01

    The lowest pressure (P2) to which a diver can ascend without developing decompression sickness (DCS) after becoming equilibrated at some higher pressure (P1) is described by a straight line with a negative y-intercept. We tested whether extrapolation of such a line also predicts safe decompression to altitude. We substituted tissue nitrogen pressure (P1N2) calculated for a compartment with a 360-min half-time for P1 values; this allows data from hypobaric exposures to be plotted on a P2 vs. P1N2 graph, even if the subject breathes oxygen before ascent. In literature sources, we found 40 reports of human exposures in hypobaric chambers that fell in the region of a P2 vs. P1N2 plot where the extrapolation from hyperbaric data predicted that the decompression should be free of DCS. Of 4,576 exposures, 785 persons suffered decompression sickness (17%), indicating that extrapolation of the diver line to altitude is not valid. Over the pressure range spanned by human hypobaric exposures and hyperbaric air exposures, the best separation between no DCS and DCS on a P2 vs. P1N2 plot seems to be a curve which approximates a straight line in the hyperbaric region but bends toward the origin in the hypobaric region.
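    The P1N2 values plotted in such analyses come from the standard single-exponential (Haldane) compartment model; a sketch for the 360-min compartment used above (the formula is standard, the parameter values in the usage are illustrative):

```python
def tissue_n2(p0, p_inspired, minutes, half_time=360.0):
    """Tissue nitrogen tension after `minutes` of breathing a gas with
    inspired N2 pressure `p_inspired`, for a single compartment with the
    given half-time: P(t) = P_insp + (P0 - P_insp) * 2**(-t / T_half)."""
    return p_inspired + (p0 - p_inspired) * 2.0 ** (-minutes / half_time)
```

For example, oxygen prebreathing before ascent sets `p_inspired` to essentially zero and washes nitrogen out of the slow compartment, which is why P1N2 rather than the raw pressure P1 must be used on the P2 vs. P1N2 plot.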

  3. Extrapolation of lattice gauge theories to the continuum limit

    International Nuclear Information System (INIS)

    Duncan, A.; Vaidya, H.

    1978-01-01

    The problem of extrapolating lattice gauge theories from the strong-coupling phase to the continuum critical point is studied for the Abelian (U(1)) and non-Abelian (SU(2)) theories in three (space-time) dimensions. A method is described for obtaining the asymptotic behavior, for large β, of such thermodynamic quantities and correlation functions as the free energy and Wilson loop function. Certain general analyticity and positivity properties (in the complex β-plane) are shown to lead, after appropriate analytic remappings, to a Stieltjes property of these functions. Rigorous theorems then guarantee uniform and monotone convergence of the Padé approximants, with exact pointwise upper and lower bounds. The first three Padé approximants are computed for both the free energy and the Wilson function. For the free energy, satisfactory agreement is found with the asymptotic behavior computed by an explicit lattice calculation. The strong-coupling series for the Wilson function is found to be considerably more unstable in the lower order terms; correspondingly, convergence of the Padé approximants is found to be slower than in the free-energy case. It is suggested that higher-order calculations may allow a reasonably accurate determination of the string constant for the SU(2) theory. 14 references

  4. Linear least-squares method for global luminescent oil film skin friction field analysis

    Science.gov (United States)

    Lee, Taekjin; Nonomura, Taku; Asai, Keisuke; Liu, Tianshu

    2018-06-01

    A data analysis method based on the linear least-squares (LLS) method was developed for the extraction of high-resolution skin friction fields from global luminescent oil film (GLOF) visualization images of a surface in an aerodynamic flow. In this method, the oil film thickness distribution and its spatiotemporal development are measured by detecting the luminescence intensity of the thin oil film. From the resulting set of GLOF images, the thin oil film equation is solved to obtain an ensemble-averaged (steady) skin friction field as an inverse problem. In this paper, the formulation of a discrete linear system of equations for the LLS method is described, and an error analysis is given to identify the main error sources and the relevant parameters. Simulations were conducted to evaluate the accuracy of the LLS method and the effects of the image patterns, image noise, and sample numbers on the results in comparison with the previous snapshot-solution-averaging (SSA) method. An experimental case is shown to enable the comparison of the results obtained using conventional oil flow visualization and those obtained using both the LLS and SSA methods. The overall results show that the LLS method is more reliable than the SSA method and the LLS method can yield a more detailed skin friction topology in an objective way.
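    The methodological contrast between the LLS and SSA approaches can be illustrated on a generic overdetermined linear system (a toy stand-in; the paper's actual discretization of the thin-oil-film equation is far more involved): LLS stacks every snapshot's equations into one least-squares solve, while SSA solves each snapshot separately and averages the solutions.

```python
import numpy as np

def lls_estimate(A_list, b_list):
    """LLS idea: stack all snapshots' equations and solve one
    least-squares problem for the (steady) unknowns."""
    return np.linalg.lstsq(np.vstack(A_list),
                           np.concatenate(b_list), rcond=None)[0]

def ssa_estimate(A_list, b_list):
    """SSA idea: solve each snapshot separately, then average."""
    sols = [np.linalg.lstsq(A, b, rcond=None)[0]
            for A, b in zip(A_list, b_list)]
    return np.mean(sols, axis=0)
```

Stacking lets well-conditioned snapshots compensate for ill-conditioned ones, which is one intuition for why a single joint fit can be more reliable than averaging per-snapshot solutions.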

  5. Linear hypergeneralization of learned dynamics across movement speeds reveals anisotropic, gain-encoding primitives for motor adaptation.

    Science.gov (United States)

    Joiner, Wilsaan M; Ajayi, Obafunso; Sing, Gary C; Smith, Maurice A

    2011-01-01

    The ability to generalize learned motor actions to new contexts is a key feature of the motor system. For example, the ability to ride a bicycle or swing a racket is often first developed at lower speeds and later applied to faster velocities. A number of previous studies have examined the generalization of motor adaptation across movement directions and found that the learned adaptation decays in a pattern consistent with the existence of motor primitives that display narrow Gaussian tuning. However, few studies have examined the generalization of motor adaptation across movement speeds. Following adaptation to linear velocity-dependent dynamics during point-to-point reaching arm movements at one speed, we tested the ability of subjects to transfer this adaptation to short-duration higher-speed movements aimed at the same target. We found near-perfect linear extrapolation of the trained adaptation with respect to both the magnitude and the time course of the velocity profiles associated with the high-speed movements: a 69% increase in movement speed corresponded to a 74% extrapolation of the trained adaptation. The close match between the increase in movement speed and the corresponding increase in adaptation beyond what was trained indicates linear hypergeneralization. Computational modeling shows that this pattern of linear hypergeneralization across movement speeds is not compatible with previous models of adaptation in which motor primitives display isotropic Gaussian tuning of motor output around their preferred velocities. Instead, we show that this generalization pattern indicates that the primitives involved in the adaptation to viscous dynamics display anisotropic tuning in velocity space and encode the gain between motor output and motion state rather than motor output itself.

  6. Method of separate determination of high-ohmic sample resistance and contact resistance

    Directory of Open Access Journals (Sweden)

    Vadim A. Golubiatnikov

    2015-09-01

    Full Text Available A method of separate determination of two-pole sample volume resistance and contact resistance is suggested. The method is applicable to high-ohmic semiconductor samples: semi-insulating gallium arsenide, detector cadmium-zinc telluride (CZT, etc. The method is based on near-contact region illumination by monochromatic radiation of variable intensity from light emitting diodes with quantum energies exceeding the band gap of the material. It is necessary to obtain sample photo-current dependence upon light emitting diode current and to find the linear portion of this dependence. Extrapolation of this linear portion to the Y-axis gives the cut-off current. As the bias voltage is known, it is easy to calculate sample volume resistance. Then, using dark current value, one can determine the total contact resistance. The method was tested for n-type semi-insulating GaAs. The contact resistance value was shown to be approximately equal to the sample volume resistance. Thus, the influence of contacts must be taken into account when electrophysical data are analyzed.
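    A numerical sketch of the extrapolation step described above (function and variable names are ours, and the data in the usage below are synthetic): fit the linear portion of photocurrent vs. LED current, take the intercept as the cut-off current, and split the dark resistance into volume and contact parts.

```python
import numpy as np

def separate_resistances(led_current, photo_current, linear_mask,
                         v_bias, i_dark):
    """Separate volume and contact resistance of a two-pole sample.

    The linear portion of photocurrent vs LED current is extrapolated
    to zero illumination; the intercept (cut-off current) corresponds
    to the current limited by the sample volume alone, so
    R_volume = V / I_cut, and the contact resistance is the remainder
    of the total dark resistance V / I_dark.
    """
    slope, i_cut = np.polyfit(led_current[linear_mask],
                              photo_current[linear_mask], 1)
    r_volume = v_bias / i_cut
    r_contact = v_bias / i_dark - r_volume
    return r_volume, r_contact
```

With the synthetic numbers in the test (10 V bias, 5 nA dark current, 10 nA cut-off current), the method attributes half the total resistance to the contacts, consistent with the GaAs result quoted in the abstract.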

  7. Edge database analysis for extrapolation to ITER

    International Nuclear Information System (INIS)

    Shimada, M.; Janeschitz, G.; Stambaugh, R.D.

    1999-01-01

    An edge database has been archived to facilitate cross-machine comparisons of SOL and edge pedestal characteristics, and to enable comparison with theoretical models with an aim to extrapolate to ITER. The SOL decay lengths of power, density and temperature become broader for increasing density and q95. The power decay length is predicted to be 1.4-3.5 cm (L-mode) and 1.4-2.7 cm (H-mode) at the midplane in ITER. Analysis of Type I ELMs suggests that each giant ELM on ITER would exceed the ablation threshold of the divertor plates. Theoretical models are proposed for the H-mode transition and for Type I and Type III ELMs, and are compared with the edge pedestal database. (author)

  8. A Lagrangian meshfree method applied to linear and nonlinear elasticity.

    Science.gov (United States)

    Walker, Wade A

    2017-01-01

    The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.

  9. Can Morphing Methods Predict Intermediate Structures?

    Science.gov (United States)

    Weiss, Dahlia R.; Levitt, Michael

    2009-01-01

    Movement is crucial to the biological function of many proteins, yet crystallographic structures of proteins can give us only a static snapshot. The protein dynamics that are important to biological function often happen on a timescale that is unattainable through detailed simulation methods such as molecular dynamics as they often involve crossing high-energy barriers. To address this coarse-grained motion, several methods have been implemented as web servers in which a set of coordinates is usually linearly interpolated from an initial crystallographic structure to a final crystallographic structure. We present a new morphing method that does not extrapolate linearly and can therefore go around high-energy barriers and which can produce different trajectories between the same two starting points. In this work, we evaluate our method and other established coarse-grained methods according to an objective measure: how close a coarse-grained dynamics method comes to a crystallographically determined intermediate structure when calculating a trajectory between the initial and final crystal protein structure. We test this with a set of five proteins with at least three crystallographically determined on-pathway high-resolution intermediate structures from the Protein Data Bank. For simple hinging motions involving a small conformational change, segmentation of the protein into two rigid sections outperforms other more computationally involved methods. However, large-scale conformational change is best addressed using a nonlinear approach and we suggest that there is merit in further developing such methods. PMID:18996395

  10. Making the most of what we have: application of extrapolation approaches in wildlife transfer models

    Energy Technology Data Exchange (ETDEWEB)

    Beresford, Nicholas A.; Barnett, Catherine L.; Wells, Claire [NERC Centre for Ecology and Hydrology, Lancaster Environment Center, Library Av., Bailrigg, Lancaster, LA1 4AP (United Kingdom); School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Wood, Michael D. [School of Environment and Life Sciences, University of Salford, Manchester, M4 4WT (United Kingdom); Vives i Batlle, Jordi [Belgian Nuclear Research Centre, Boeretang 200, 2400 Mol (Belgium); Brown, Justin E.; Hosseini, Ali [Norwegian Radiation Protection Authority, P.O. Box 55, N-1332 Oesteraas (Norway); Yankovich, Tamara L. [International Atomic Energy Agency, Vienna International Centre, 1400, Vienna (Austria); Bradshaw, Clare [Department of Ecology, Environment and Plant Sciences, Stockholm University, SE-10691 (Sweden); Willey, Neil [Centre for Research in Biosciences, University of the West of England, Coldharbour Lane, Frenchay, Bristol BS16 1QY (United Kingdom)

    2014-07-01

    Radiological environmental protection models need to predict the transfer of many radionuclides to a large number of organisms. There has been considerable development of transfer (predominantly concentration ratio) databases over the last decade. However, in reality it is unlikely we will ever have empirical data for all the species-radionuclide combinations which may need to be included in assessments. To provide default values for a number of existing models/frameworks various extrapolation approaches have been suggested (e.g. using data for a similar organism or element). This paper presents recent developments in two such extrapolation approaches, namely phylogeny and allometry. An evaluation of how extrapolation approaches have performed and the potential application of Bayesian statistics to make best use of available data will also be given. Using a Residual Maximum Likelihood (REML) mixed-model regression we initially analysed a dataset comprising 597 entries for 53 freshwater fish species from 67 sites to investigate if phylogenetic variation in transfer could be identified. The REML analysis generated an estimated mean value for each species on a common scale after taking account of the effect of the inter-site variation. Using an independent dataset, we tested the hypothesis that the REML model outputs could be used to predict radionuclide activity concentrations in other species from the results of a species which had been sampled at a specific site. The outputs of the REML analysis accurately predicted {sup 137}Cs activity concentrations in different species of fish from 27 lakes. Although initially investigated as an extrapolation approach the output of this work is a potential alternative to the highly site dependent concentration ratio model. We are currently applying this approach to a wider range of organism types and different ecosystems. An initial analysis of these results will be presented. The application of allometric, or mass

  11. Extrapolation in the development of paediatric medicines: examples from approvals for biological treatments for paediatric chronic immune-mediated inflammatory diseases.

    Science.gov (United States)

    Stefanska, Anna M; Distlerová, Dorota; Musaus, Joachim; Olski, Thorsten M; Dunder, Kristina; Salmonson, Tomas; Mentzer, Dirk; Müller-Berghaus, Jan; Hemmings, Robert; Veselý, Richard

    2017-10-01

    The European Union (EU) Paediatric Regulation requires that all new medicinal products applying for a marketing authorisation (MA) in the EU provide a paediatric investigation plan (PIP) covering a clinical and non-clinical trial programme relating to the use in the paediatric population, unless a waiver applies. Conducting trials in children is challenging on many levels, including ethical and practical issues, which may affect the availability of the clinical evidence. In scientifically justified cases, extrapolation of data from other populations can be an option to gather evidence supporting the benefit-risk assessment of the medicinal product for paediatric use. The European Medicines Agency (EMA) is working on providing a framework for extrapolation that is scientifically valid, reliable and adequate to support MA of medicines for children. It is expected that the extrapolation framework together with therapeutic area guidelines and individual case studies will support future PIPs. Extrapolation has already been employed in several paediatric development programmes including biological treatment for immune-mediated diseases. This article reviews extrapolation strategies from MA applications for products for the treatment of juvenile idiopathic arthritis, paediatric psoriasis and paediatric inflammatory bowel disease. It also provides a summary of extrapolation advice expressed in relevant EMA guidelines and initiatives supporting the use of alternative approaches in paediatric medicine development. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. A meta-analysis of cambium phenology and growth: linear and non-linear patterns in conifers of the northern hemisphere.

    Science.gov (United States)

    Rossi, Sergio; Anfodillo, Tommaso; Cufar, Katarina; Cuny, Henri E; Deslauriers, Annie; Fonti, Patrick; Frank, David; Gricar, Jozica; Gruber, Andreas; King, Gregory M; Krause, Cornelia; Morin, Hubert; Oberhuber, Walter; Prislan, Peter; Rathgeber, Cyrille B K

    2013-12-01

    Ongoing global warming has been implicated in shifting phenological patterns such as the timing and duration of the growing season across a wide variety of ecosystems. Linear models are routinely used to extrapolate these observed shifts in phenology into the future and to estimate changes in associated ecosystem properties such as net primary productivity. Yet, in nature, linear relationships may be special cases. Biological processes frequently follow more complex, non-linear patterns according to limiting factors that generate shifts and discontinuities, or contain thresholds beyond which responses change abruptly. This study investigates to what extent cambium phenology is associated with xylem growth and differentiation across conifer species of the northern hemisphere. Xylem cell production is compared with the periods of cambial activity and cell differentiation assessed on a weekly time scale on histological sections of cambium and wood tissue collected from the stems of nine species in Canada and Europe over 1-9 years per site from 1998 to 2011. The dynamics of xylogenesis were surprisingly homogeneous among conifer species, although deviations from the average were of course observed. Within the range analysed, the relationships between the phenological timings were linear, with several slopes showing values close to or not statistically different from 1. The relationships between the phenological timings and cell production were distinctly non-linear, involving an exponential pattern. The trees adjust their phenological timings according to linear patterns: shifts of one phenological phase are associated with synchronous and comparable shifts of the successive phases. However, small increases in the duration of xylogenesis can correspond to a substantial increase in cell production. The findings suggest that the length of the growing season and the resulting amount of growth could respond differently to changes in environmental conditions.

  13. Characterization of an extrapolation chamber for low-energy X-rays: Experimental and Monte Carlo preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Neves, Lucio P., E-mail: lpneves@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Silva, Eric A.B., E-mail: ebrito@usp.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Perini, Ana P., E-mail: aperini@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil); Maidana, Nora L., E-mail: nmaidana@if.usp.br [Universidade de Sao Paulo, Instituto de Fisica, Travessa R 187, 05508-900 Sao Paulo, SP (Brazil); Caldas, Linda V.E., E-mail: lcaldas@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN), Comissao Nacional de Energia Nuclear, Av. Prof. Lineu Prestes 2242, 05508-000 Sao Paulo, SP (Brazil)

    2012-07-15

    The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IPEN to be used as a secondary dosimetry standard for low-energy X-rays are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response by up to 11.0%. - Highlights: A homemade extrapolation chamber was studied experimentally and with Monte Carlo simulations. It was characterized as a secondary dosimetry standard for low-energy X-rays. Several characterization tests were performed and the results were satisfactory. Simulation showed that its components may influence the response by up to 11.0%. This chamber may be used as a secondary standard at our laboratory.

  14. A METHOD FOR SOLVING LINEAR PROGRAMMING PROBLEMS WITH FUZZY PARAMETERS BASED ON MULTIOBJECTIVE LINEAR PROGRAMMING TECHNIQUE

    OpenAIRE

    M. ZANGIABADI; H. R. MALEKI

    2007-01-01

    In real-world optimization problems, the coefficients of the objective function are not known precisely and can be interpreted as fuzzy numbers. In this paper we define concepts of optimality for linear programming problems with fuzzy parameters based on those for multiobjective linear programming problems. Then, by using the concept of comparison of fuzzy numbers, we transform a linear programming problem with fuzzy parameters into a multiobjective linear programming problem. To this end, w...

  15. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    Science.gov (United States)

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
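The extrapolation step described above can be illustrated with a simple linear fit in inverse cell size: formation energies E(L) from finite cells are fitted to E(L) = E_inf + a/L and read off at 1/L → 0. All numbers below are hypothetical, and real charged-defect corrections typically include higher-order terms as well; this is only the leading-order sketch.

```python
# Illustrative finite-size extrapolation (hypothetical numbers): defect
# formation energies E(L) computed in cells of linear size L are fitted
# to E(L) = E_inf + a/L and extrapolated to 1/L -> 0 (infinite cell).

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

L = [10.0, 15.0, 20.0, 30.0]   # cell sizes (angstrom, hypothetical)
E = [4.10, 3.90, 3.80, 3.70]   # formation energies (eV, hypothetical)
E_inf, a = linear_fit([1.0 / l for l in L], E)
print(f"dilute-limit formation energy ~ {E_inf:.2f} eV")
```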

  16. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

    2010-01-01

    Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linke...

  17. The application of the fall-vector method in decomposition schemes for the solution of integer linear programming problems

    International Nuclear Information System (INIS)

    Sergienko, I.V.; Golodnikov, A.N.

    1984-01-01

    This article applies decomposition methods, which are used to solve continuous linear problems, to integer and partially integer problems. The fall-vector method is used to solve the resulting coordinate problems, and a fall-vector algorithm is described. The Kornai-Liptak decomposition principle is used to reduce the integer linear programming problem to integer linear programming problems of a smaller dimension and to a discrete coordinate problem with simple constraints

  18. Solutions of First-Order Volterra Type Linear Integrodifferential Equations by Collocation Method

    Directory of Open Access Journals (Sweden)

    Olumuyiwa A. Agbolade

    2017-01-01

    The numerical solution of linear integrodifferential equations of Volterra type is considered. A power series is used as the basis polynomial to approximate the solution of the problem. Furthermore, standard and Chebyshev-Gauss-Lobatto collocation points were, respectively, chosen to collocate the approximate solution. Numerical experiments are performed on some sample problems already solved by the homotopy analysis method and finite difference methods. The absolute errors of the present method are compared with those of the aforementioned methods. It is also observed that the absolute errors obtained are very low, establishing convergence and computational efficiency.

  19. On the Linear Stability of the Fifth-Order WENO Discretization

    KAUST Repository

    Motamed, Mohammad; Macdonald, Colin B.; Ruuth, Steven J.

    2010-01-01

    , the fifth-order extrapolated BDF scheme gave superior results in practice to high-order Runge-Kutta methods whose stability domain includes the imaginary axis. Numerical tests are presented which confirm the analysis. © Springer Science+Business Media, LLC

  20. Engineered high expansion glass-ceramics having near linear thermal strain and methods thereof

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Steve Xunhu; Rodriguez, Mark A.; Lyon, Nathanael L.

    2018-01-30

    The present invention relates to glass-ceramic compositions, as well as methods for forming such composition. In particular, the compositions include various polymorphs of silica that provide beneficial thermal expansion characteristics (e.g., a near linear thermal strain). Also described are methods of forming such compositions, as well as connectors including hermetic seals containing such compositions.

  1. Robust fault detection of linear systems using a computationally efficient set-membership method

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba; Bak, Thomas

    2014-01-01

    In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
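The core idea of such set-membership fault detection can be sketched on a scalar toy system (not the zonotope machinery of the paper): predict an interval outer-approximation of the output consistent with the model and the noise bounds, and flag a fault when a measurement cannot intersect it. All dynamics and bounds below are hypothetical.

```python
# Minimal sketch (hypothetical scalar system x' = a*x + u + w) of
# interval-based set-membership fault detection: predict an output
# interval consistent with the model and bounded noise; flag a fault
# when the measurement falls outside it.

def predict_interval(x_lo, x_hi, a, u, w_bound):
    """Outer-approximate x' = a*x + u + w with |w| <= w_bound."""
    cands = [a * x_lo, a * x_hi]
    return min(cands) + u - w_bound, max(cands) + u + w_bound

def consistent(y, interval, v_bound):
    """Is measurement y = x + v, |v| <= v_bound, consistent with interval?"""
    lo, hi = interval
    return lo - v_bound <= y <= hi + v_bound

x_lo, x_hi, fault = 0.0, 0.1, False
for y in [0.05, 0.08, 2.0]:              # last measurement is faulty
    iv = predict_interval(x_lo, x_hi, a=1.0, u=0.0, w_bound=0.05)
    if not consistent(y, iv, v_bound=0.05):
        fault = True
        break
    # refine the state interval using the measurement
    x_lo = max(iv[0], y - 0.05)
    x_hi = min(iv[1], y + 0.05)
print("fault detected:", fault)
```

The per-step work is a handful of comparisons, which is what makes this family of methods cheap and easy to parallelize across outputs.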

  2. Entropy Rate Estimates for Natural Language—A New Extrapolation of Compressed Large-Scale Corpora

    Directory of Open Access Journals (Sweden)

    Ryosuke Takahira

    2016-10-01

    One of the fundamental questions about human language is whether its entropy rate is positive. The entropy rate measures the average amount of information communicated per unit time. The question about the entropy of language dates back to experiments by Shannon in 1951, but in 1990 Hilberg raised doubts regarding the correct interpretation of these experiments. This article provides an in-depth empirical analysis, using 20 corpora of up to 7.8 gigabytes across six languages (English, French, Russian, Korean, Chinese, and Japanese), to conclude that the entropy rate is positive. To obtain the estimates for data length tending to infinity, we use an extrapolation function given by an ansatz. Whereas some ansatzes were proposed previously, here we use a new stretched exponential extrapolation function that has a smaller error of fit. Thus, we conclude that the entropy rates of human languages are positive but approximately 20% smaller than without extrapolation. Although the entropy rate estimates depend on the script kind, the exponent of the ansatz function turns out to be constant across different languages and governs the complexity of natural language in general. In other words, in spite of typological differences, all languages seem equally hard to learn, which partly confirms Hilberg's hypothesis.
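The extrapolation-by-ansatz step can be sketched with synthetic data and a Hilberg-type ansatz h(n) = h_inf + A·n^(β−1): for each candidate exponent β the model is linear in (h_inf, A), so a grid search over β with a least-squares solve inside recovers the n → ∞ limit. The paper's stretched-exponential ansatz is handled the same way with a different regressor x(n); everything below is illustrative.

```python
# Illustrative entropy-rate extrapolation (synthetic data): fit
# h(n) = h_inf + A * n**(beta - 1) by grid search over beta, solving
# (h_inf, A) by least squares; h_inf is the extrapolated entropy rate.

def fit_linear(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    sse = sum((a + b * x - y) ** 2 for x, y in zip(xs, ys))
    return a, b, sse

ns = [10**3, 10**4, 10**5, 10**6, 10**7]        # corpus lengths
hs = [1.2 + 2.0 * n**(-0.4) for n in ns]        # synthetic: h_inf = 1.2

best = min((fit_linear([n**(beta - 1) for n in ns], hs) + (beta,)
            for beta in [0.4, 0.5, 0.6, 0.7]),
           key=lambda t: t[2])
h_inf, A, sse, beta = best
print(f"h_inf ~ {h_inf:.3f} bits/char at beta = {beta}")
```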

  3. Method for solving fully fuzzy linear programming problems using deviation degree measure

    Institute of Scientific and Technical Information of China (English)

    Haifang Cheng; Weilai Huang; Jianhu Cai

    2013-01-01

    A new fully fuzzy linear programming (FFLP) problem with fuzzy equality constraints is discussed. Using deviation degree measures, the FFLP problem is transformed into a crisp δ-parametric linear programming (LP) problem. Given the value of the deviation degree in each constraint, the δ-fuzzy optimal solution of the FFLP problem can be obtained by solving this LP problem. An algorithm is also proposed to find a balance-fuzzy optimal solution between two conflicting goals: improving the values of the objective function and decreasing the values of the deviation degrees. A numerical example is solved to illustrate the proposed method.

  4. The linear characteristic method for spatially discretizing the discrete ordinates equations in (x,y)-geometry

    International Nuclear Information System (INIS)

    Larsen, E.W.; Alcouffe, R.E.

    1981-01-01

    In this article a new linear characteristic (LC) spatial differencing scheme for the discrete ordinates equations in (x,y)-geometry is described and numerical comparisons are given with the diamond difference (DD) method. The LC method is more stable with mesh size and is generally much more accurate than the DD method on both fine and coarse meshes, for eigenvalue and deep penetration problems. The LC method is based on computations involving the exact solution of a cell problem which has spatially linear boundary conditions and interior source. The LC method is coupled to the diffusion synthetic acceleration (DSA) algorithm in that the linear variations of the source are determined in part by the results of the DSA calculation from the previous inner iteration. An inexpensive negative-flux fixup is used which has very little effect on the accuracy of the solution. The storage requirements for LC are essentially the same as that for DD, while the computational times for LC are generally less than twice the DD computational times for the same mesh. This increase in computational cost is offset if one computes LC solutions on somewhat coarser meshes than DD; the resulting LC solutions are still generally much more accurate than the DD solutions. (orig.) [de

  5. Method for linearizing the potentiometric curves of precipitation titration in nonaqueous and aqueous-organic solutions

    International Nuclear Information System (INIS)

    Bykova, L.N.; Chesnokova, O.Ya.; Orlova, M.V.

    1995-01-01

    The method for linearizing the potentiometric curves of precipitation titration is studied for its application to the determination of halide ions (Cl⁻, Br⁻, I⁻) in dimethylacetamide and dimethylformamide, in which titration is complicated by additional equilibrium processes. It is found that the method of linearization permits the determination of the titrant volume at the end point of titration to high accuracy, even for titration curves without a potential jump in the proximity of the equivalence point (5×10⁻⁵ M). 3 refs., 2 figs., 3 tabs
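The linearization idea can be illustrated with a Gran-type transform, a standard way to straighten potentiometric titration curves (the paper's exact transform may differ, and all numbers below are synthetic): the measured potential E(V) is converted to F(V) = (V0 + V)·10^((E−E0)/s), which is linear in titrant volume V before equivalence and crosses zero at the end-point volume.

```python
import math

# Sketch of a Gran-type linearization (synthetic data): F(V) is linear
# in titrant volume V before the equivalence point, so a straight-line
# fit locates the end point without needing a potential jump.

s, E0 = 59.2, 200.0             # electrode slope (mV/decade), offset
V0, c0, ct = 50.0, 0.01, 0.02   # sample volume/conc., titrant conc.

def potential(V):               # synthetic pre-equivalence curve
    c = (c0 * V0 - ct * V) / (V0 + V)
    return E0 + s * math.log10(c)

vols = [2.0, 6.0, 10.0, 14.0, 18.0]
F = [(V0 + V) * 10 ** ((potential(V) - E0) / s) for V in vols]

n = len(vols)
mx, my = sum(vols) / n, sum(F) / n
b = sum((x - mx) * (y - my) for x, y in zip(vols, F)) \
    / sum((x - mx) ** 2 for x in vols)
a = my - b * mx
V_eq = -a / b                   # zero crossing of the fitted line
print(f"end-point volume ~ {V_eq:.2f} mL")
```

For these synthetic data the true end point is c0·V0/ct = 25 mL, which the straight-line fit recovers exactly.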

  6. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
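One common implicit parameterization of fixed-knot regression splines (in the spirit of the reparameterization described above, though not necessarily the paper's exact construction) is the truncated power basis: continuity and smoothness at the knots come for free from the basis itself, so an ordinary linear(-mixed) model fit handles the constraints automatically. Knots and degree below are illustrative.

```python
# Minimal sketch: one design-matrix row of a truncated power basis for
# a fixed-knot regression spline. Global polynomial terms plus one
# truncated term per knot give a piecewise polynomial with continuous
# derivatives up to degree-1 at each knot.

def truncated_power_basis(t, knots, degree):
    row = [t ** p for p in range(degree + 1)]          # 1, t, ..., t^d
    row += [max(t - k, 0.0) ** degree for k in knots]  # smooth joins
    return row

row = truncated_power_basis(2.5, knots=[1.0, 2.0], degree=2)
print(row)   # [1.0, 2.5, 6.25, 2.25, 0.25]
```

Stacking such rows over the observed time points yields the fixed-effects (or random-effects) design matrix for the spline terms.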

  7. Adaptive discontinuous Galerkin methods for non-linear reactive flows

    CERN Document Server

    Uzunca, Murat

    2016-01-01

    The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction dominated non-stationary semi-linear advection diffusion reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and their discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by ADR equations, the methods described in this book are of high importance in many areas of the Geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and storing greenhouse gases or radioactive waste in the subsurface.

  8. Semidefinite linear complementarity problems

    International Nuclear Information System (INIS)

    Eckhardt, U.

    1978-04-01

    Semidefinite linear complementarity problems arise by discretization of variational inequalities describing e.g. elastic contact problems, free boundary value problems etc. In the present paper linear complementarity problems are introduced and the theory as well as the numerical treatment of them are described. In the special case of semidefinite linear complementarity problems a numerical method is presented which combines the advantages of elimination and iteration methods without suffering from their drawbacks. This new method has very attractive properties since it has a high degree of invariance with respect to the representation of the set of all feasible solutions of a linear complementarity problem by linear inequalities. By means of some practical applications the properties of the new method are demonstrated. (orig.) [de
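A linear complementarity problem asks for z with w = Mz + q, w ≥ 0, z ≥ 0, zᵀw = 0. As a concrete illustration (projected Gauss-Seidel, a simple iteration combining elimination-style sweeps with projection, not the specific hybrid method of the paper):

```python
# Illustrative LCP solver: projected Gauss-Seidel for
# w = M z + q, w >= 0, z >= 0, z^T w = 0, with M positive definite.

def pgs_lcp(M, q, iters=200):
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # residual excluding the diagonal term, then project onto z_i >= 0
            r = q[i] + sum(M[i][j] * z[j] for j in range(n)) - M[i][i] * z[i]
            z[i] = max(0.0, -r / M[i][i])
    return z

M = [[2.0, 1.0], [1.0, 2.0]]     # symmetric positive definite
q = [-5.0, -6.0]
z = pgs_lcp(M, q)
w = [q[i] + sum(M[i][j] * z[j] for j in range(2)) for i in range(2)]
print(z, w)
```

Here both components of z are active (z > 0), so complementarity forces w = 0 and z solves Mz = −q.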

  9. Quality control methods for linear accelerator radiation and mechanical axes alignment.

    Science.gov (United States)

    Létourneau, Daniel; Keller, Harald; Becker, Nathan; Amin, Md Nurul; Norrlinger, Bernhard; Jaffray, David A

    2018-06-01

    The delivery accuracy of highly conformal dose distributions generated using intensity modulation and collimator, gantry, and couch degrees of freedom is directly affected by the quality of the alignment between the radiation beam and the mechanical axes of a linear accelerator. For this purpose, quality control (QC) guidelines recommend a tolerance of ±1 mm for the coincidence of the radiation and mechanical isocenters. Traditional QC methods for assessment of radiation and mechanical axes alignment (based on pointer alignment) are time consuming and complex tasks that provide limited accuracy. In this work, an automated test suite based on an analytical model of the linear accelerator motions was developed to streamline the QC of radiation and mechanical axes alignment. The proposed method used the automated analysis of megavoltage images of two simple task-specific phantoms acquired at different linear accelerator settings to determine the coincidence of the radiation and mechanical isocenters. The sensitivity and accuracy of the test suite were validated by introducing actual misalignments on a linear accelerator between the radiation axis and the mechanical axes using both beam steering and mechanical adjustments of the gantry and couch. The validation demonstrated that the new QC method can detect sub-millimeter misalignment between the radiation axis and the three mechanical axes of rotation. A displacement of the radiation source of 0.2 mm using beam steering parameters was easily detectable with the proposed collimator rotation axis test. Mechanical misalignments of the gantry and couch rotation axes of the same magnitude (0.2 mm) were also detectable using the new gantry and couch rotation axis tests. For the couch rotation axis, the phantom and test design allow detection of both translational and tilt misalignments with the radiation beam axis. For the collimator rotation axis, the test can isolate the misalignment between the beam radiation axis

  10. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    Energy Technology Data Exchange (ETDEWEB)

    Jing, Yaqi; Meng, Qinghao, E-mail: qh-meng@tju.edu.cn; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen [Tianjin Key Laboratory of Process Measurement and Control, Institute of Robotics and Autonomous Systems, School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-05-15

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively.

  11. Electronic nose with a new feature reduction method and a multi-linear classifier for Chinese liquor classification

    International Nuclear Information System (INIS)

    Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen

    2014-01-01

    An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new method of feature reduction which combined feature selection with feature extraction was proposed. Feature selection method used 8 feature-selection algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method and the dimension of feature space was reduced to 12. Classification of Chinese liquors was performed by using back propagation artificial neural network (BP-ANN), linear discrimination analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, which was higher than LDA and BP-ANN. Finally the classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier and classification rate was 98.75% and 100%, respectively

  12. Sensitivity-based virtual fields for the non-linear virtual fields method

    Science.gov (United States)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  13. Methods in half-linear asymptotic theory

    Directory of Open Access Journals (Sweden)

    Pavel Rehak

    2016-10-01

    We study the asymptotic behavior of eventually positive solutions of the second-order half-linear differential equation $$ (r(t)|y'|^{\alpha-1}\hbox{sgn}\, y')'=p(t)|y|^{\alpha-1}\hbox{sgn}\, y, $$ where $r(t)$ and $p(t)$ are positive continuous functions on $[a,\infty)$, $\alpha\in(1,\infty)$. The aim of this article is twofold. On the one hand, we show applications of a wide variety of tools, like the Karamata theory of regular variation, the de Haan theory, the Riccati technique, comparison theorems, the reciprocity principle, a certain transformation of dependent variable, and principal solutions. On the other hand, we solve open problems posed in the literature and generalize existing results. Most of our observations are new also in the linear case.

  14. CT image construction of a totally deflated lung using deformable model extrapolation

    International Nuclear Information System (INIS)

    Sadeghi Naini, Ali; Pierce, Greg; Lee, Ting-Yim

    2011-01-01

    Purpose: A novel technique is proposed to construct a CT image of a totally deflated lung from a free-breathing 4D-CT image sequence acquired preoperatively. Such a constructed CT image is very useful in performing tumor ablative procedures such as lung brachytherapy. Tumor ablative procedures are frequently performed while the lung is totally deflated. Deflating the lung during such procedures renders preoperative images ineffective for targeting the tumor. Furthermore, the problem cannot be solved using intraoperative ultrasound (US) images because US images are very sensitive to the small residual amount of air remaining in the deflated lung. One possible solution to address these issues is to register high-quality preoperative CT images of the deflated lung with their corresponding low-quality intraoperative US images. However, given that such preoperative images correspond to an inflated lung, the CT images need to be processed to construct CT images pertaining to the lung's deflated state. Methods: To obtain the CT images of the deflated lung, we present a novel image construction technique using extrapolated deformable registration to predict the deformation the lung undergoes during full deflation. The proposed construction technique involves estimating the lung's air volume in each preoperative image automatically in order to track the respiration phase of each 4D-CT image throughout a respiratory cycle; i.e., the technique does not need any external marker to form a respiratory signal in the process of curve fitting and extrapolation. The extrapolated deformation field is then applied to a preoperative reference image in order to construct the totally deflated lung's CT image. The technique was evaluated experimentally using ex vivo porcine lung. Results: The ex vivo lung experiments led to very encouraging results. In comparison with the CT image of the deflated lung we acquired for the purpose of validation, the constructed CT image was very similar.

  15. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to one of two phacoemulsification groups: one received an AcrySof® Toric intraocular lens (IOL) in both eyes and the other received an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected, and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the post-/preoperative Thibos APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (1.00 - APVratio) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
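The reported regression relates the two analysis frameworks directly: %Success = (1 − APVratio) × 100. A tiny sketch with hypothetical pre/postoperative astigmatic power vectors:

```python
# The paper's fitted relation between the Thibos APV ratio and the
# Alpins success index (APV values below are hypothetical).

def percent_success(apv_pre, apv_post):
    """Alpins %Success inferred from the Thibos APV ratio."""
    return (1.0 - apv_post / apv_pre) * 100.0

print(percent_success(apv_pre=1.50, apv_post=0.30))
```

So a surgery that reduces the astigmatic power vector from 1.50 D to 0.30 D corresponds to roughly 80% success on the Alpins index.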

  16. Measurements of linear attenuation coefficients of irregular shaped samples by two media method

    International Nuclear Information System (INIS)

    Singh, Sukhpal; Kumar, Ashok; Thind, Kulwant Singh; Mudahar, Gurmel S.

    2008-01-01

    The linear attenuation coefficient values of regular and irregular shaped flyash materials have been measured, without knowing the thickness of the sample, using a new technique, the 'two media method'. These values have also been measured with a standard gamma-ray transmission method and obtained theoretically with the winXCOM computer code. From the comparison it is reported that the two media method gives accurate results for the attenuation coefficients of flyash materials
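One way such a two-media scheme can work (a plausible reading of the abstract, not a confirmed description of the authors' setup; all numbers hypothetical): the sample of unknown thickness t sits in a container of fixed inner length L filled first with medium 1, then with medium 2 of known attenuation coefficients. The two transmission measurements give two linear equations in t and the sample coefficient μ_s, so μ_s follows without measuring t.

```python
# Sketch of a "two media" elimination (hypothetical numbers): from
# A_i = -ln(I_i/I0_i) = mu_s*t + mu_i*(L - t) for media i = 1, 2,
# solve for the unknown thickness t and sample coefficient mu_s.

L = 5.0                  # container inner length (cm)
mu1, mu2 = 0.20, 0.05    # attenuation coefficients of the media (1/cm)

# synthetic "measurements" for a sample with mu_s = 0.30, t = 2.0
mu_s_true, t_true = 0.30, 2.0
A1 = mu_s_true * t_true + mu1 * (L - t_true)
A2 = mu_s_true * t_true + mu2 * (L - t_true)

gap = (A1 - A2) / (mu1 - mu2)    # = L - t (medium path length)
t = L - gap
mu_s = (A1 - mu1 * gap) / t
print(f"t = {t:.2f} cm, mu_s = {mu_s:.3f} 1/cm")
```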

  17. Machine learning-based methods for prediction of linear B-cell epitopes.

    Science.gov (United States)

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction assists immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, due to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noticed that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools formulates a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell prediction system based on physicochemical features and amino acid combinations is illustrated in detail.

  18. An overview of solution methods for multi-objective mixed integer linear programming programs

    DEFF Research Database (Denmark)

    Andersen, Kim Allan; Stidsen, Thomas Riis

    Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. finding the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...

  19. Effective linear two-body method for many-body problems in atomic and nuclear physics

    International Nuclear Information System (INIS)

    Kim, Y.E.; Zubarev, A.L.

    2000-01-01

    We present an equivalent linear two-body method for the many-body problem, which is based on an approximate reduction of the many-body Schroedinger equation by the use of a variational principle. The method is applied to several problems in atomic and nuclear physics. (author)

  20. Electric form factors of the octet baryons from lattice QCD and chiral extrapolation

    International Nuclear Information System (INIS)

    Shanahan, P.E.; Thomas, A.W.; Young, R.D.; Zanotti, J.M.; Pleiter, D.; Stueben, H.

    2014-03-01

    We apply a formalism inspired by heavy baryon chiral perturbation theory with finite-range regularization to dynamical 2+1-flavor CSSM/QCDSF/UKQCD Collaboration lattice QCD simulation results for the electric form factors of the octet baryons. The electric form factor of each octet baryon is extrapolated to the physical pseudoscalar masses, after finite-volume corrections have been applied, at six fixed values of Q^2 in the range 0.2-1.3 GeV^2. The extrapolated lattice results accurately reproduce the experimental form factors of the nucleon at the physical point, indicating that omitted disconnected quark loop contributions are small. Furthermore, using the results of a recent lattice study of the magnetic form factors, we determine the ratio μ_p G_E^p/G_M^p. This quantity decreases with Q^2 in a way qualitatively consistent with recent experimental results.
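The extrapolation step in such studies can be caricatured with a linear fit in the squared pion mass: form-factor values at several unphysically heavy pion masses are fitted in m_π² and evaluated at the physical point. The data below are synthetic, and the paper's finite-range-regularized chiral fit is far more sophisticated; this only illustrates the mechanics.

```python
# Toy chiral extrapolation (synthetic data): fit G_E(m_pi^2) linearly
# and evaluate at the physical pion mass m_pi ~ 0.1396 GeV.

def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

mpi2 = [0.10, 0.16, 0.24, 0.34]   # pion mass squared (GeV^2)
GE   = [0.52, 0.55, 0.59, 0.64]   # G_E at one fixed Q^2 (synthetic)
a, b = linear_fit(mpi2, GE)
mpi2_phys = 0.1396 ** 2
print(f"G_E at the physical point ~ {a + b * mpi2_phys:.3f}")
```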

  1. Neural extrapolation of motion for a ball rolling down an inclined plane.

    Science.gov (United States)

    La Scaleia, Barbara; Lacquaniti, Francesco; Zago, Myrka

    2014-01-01

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.
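The kinematics being extrapolated have a simple closed form: a solid sphere rolling without slipping down an incline of angle θ accelerates at a = (5/7)·g·sin θ, so the arrival time over a distance d from rest is t = √(2d/a). A sketch for the three slopes used in the experiments (the 1 m travel distance is illustrative, not taken from the paper):

```python
import math

# Arrival time of a solid sphere rolling from rest down an incline:
# a = (5/7) * g * sin(theta), t = sqrt(2 * d / a).

def arrival_time(d, theta_deg, g=9.81):
    a = (5.0 / 7.0) * g * math.sin(math.radians(theta_deg))
    return math.sqrt(2.0 * d / a)

for theta in (30, 45, 60):
    print(f"theta = {theta} deg: t = {arrival_time(1.0, theta):.3f} s")
```

Note the 5/7 factor: a rolling ball arrives later than a frictionless slider, which is one source of the kinematic misjudgments the study probes.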

  2. Neural extrapolation of motion for a ball rolling down an inclined plane.

    Directory of Open Access Journals (Sweden)

    Barbara La Scaleia

    It is known that humans tend to misjudge the kinematics of a target rolling down an inclined plane. Because visuomotor responses are often more accurate and less prone to perceptual illusions than cognitive judgments, we asked the question of how rolling motion is extrapolated for manual interception or drawing tasks. In three experiments a ball rolled down an incline with kinematics that differed as a function of the starting position (4 different positions) and slope (30°, 45° or 60°). In Experiment 1, participants had to punch the ball as it fell off the incline. In Experiment 2, the ball rolled down the incline but was stopped at the end; participants were asked to imagine that the ball kept moving and to punch it. In Experiment 3, the ball rolled down the incline and was stopped at the end; participants were asked to draw with the hand in air the trajectory that would be described by the ball if it kept moving. We found that performance was most accurate when motion of the ball was visible until interception and haptic feedback of hand-ball contact was available (Experiment 1). However, even when participants punched an imaginary moving ball (Experiment 2) or drew in air the imaginary trajectory (Experiment 3), they were able to extrapolate to some extent global aspects of the target motion, including its path, speed and arrival time. We argue that the path and kinematics of a ball rolling down an incline can be extrapolated surprisingly well by the brain using both visual information and internal models of target motion.

  3. Acceptability criteria for linear dependence in validating UV-spectrophotometric methods of quantitative determination in forensic and toxicological analysis

    Directory of Open Access Journals (Sweden)

    L. Yu. Klimenko

    2014-08-01

    Full Text Available Introduction. This article is the result of the authors’ research in the field of development of approaches to the validation of quantitative determination methods for the purposes of forensic and toxicological analysis, and is devoted to the problem of forming acceptability criteria for the validation parameter «linearity/calibration model». The aim of the research. The purpose of this paper is to analyse the present approaches to acceptability estimation of the calibration model chosen for method description according to the requirements of the international guidelines, and to form our own approaches to acceptability estimation of the linear dependence when carrying out the validation of UV-spectrophotometric methods of quantitative determination for forensic and toxicological analysis. Materials and methods. UV-spectrophotometric method of doxylamine quantitative determination in blood. Results. The approaches to acceptability estimation of calibration models when carrying out the validation of bioanalytical methods stated in international papers, namely «Guidance for Industry: Bioanalytical Method Validation» (U.S. FDA, 2001), «Standard Practices for Method Validation in Forensic Toxicology» (SWGTOX, 2012), «Guidance for the Validation of Analytical Methodology and Calibration of Equipment used for Testing of Illicit Drugs in Seized Materials and Biological Specimens» (UNODC, 2009) and «Guideline on validation of bioanalytical methods» (EMA, 2011), have been analysed. It has been suggested to be guided by domestic developments in the field of validation of analysis methods for medicines and, particularly, by the approaches to validation of methods in the variant of the calibration curve method for forming the acceptability criteria of the obtained linear dependences when carrying out the validation of UV-spectrophotometric methods of quantitative determination for forensic and toxicological analysis. The choice of the method of calibration curve is

  4. A Riccati Based Homogeneous and Self-Dual Interior-Point Method for Linear Economic Model Predictive Control

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Frison, Gianluca; Edlund, Kristian

    2013-01-01

    In this paper, we develop an efficient interior-point method (IPM) for the linear programs arising in economic model predictive control of linear systems. The novelty of our algorithm is that it combines a homogeneous and self-dual model, and a specialized Riccati iteration procedure. We test...

  5. New exact solutions of the Tzitzéica-type equations in non-linear optics using the expa function method

    Science.gov (United States)

    Hosseini, K.; Ayati, Z.; Ansari, R.

    2018-04-01

    One specific class of non-linear evolution equations, known as the Tzitzéica-type equations, has received great attention from a group of researchers involved in non-linear science. In this article, new exact solutions of the Tzitzéica-type equations arising in non-linear optics, including the Tzitzéica, Dodd-Bullough-Mikhailov and Tzitzéica-Dodd-Bullough equations, are obtained using the expa function method. The integration technique actually suggests a useful and reliable method to extract new exact solutions of a wide range of non-linear evolution equations.

  6. Development of MCAERO wing design panel method with interactive graphics module

    Science.gov (United States)

    Hawk, J. D.; Bristow, D. R.

    1984-01-01

    A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
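    The final steps of the design cycle (linearly extrapolated potential, velocity by numerical differentiation, pressure via Bernoulli's equation) can be illustrated with a small numerical sketch. The function names and the incompressible form of Bernoulli's equation are assumptions for illustration, not part of MCAERO:

```python
import numpy as np

def extrapolate_potential(phi_base, dphi_dgeom, dgeom):
    """First-order (linear) extrapolation of surface potential from the
    baseline solution: phi ~ phi_0 + (d phi / d g) * delta_g, where
    dphi_dgeom is the precomputed sensitivity matrix."""
    return phi_base + dphi_dgeom @ dgeom

def surface_pressure(phi, s, v_inf=1.0, rho=1.0, p_inf=0.0):
    """Velocity by numerical differentiation of the potential along the
    arc length s, then pressure from incompressible Bernoulli."""
    v = np.gradient(phi, s)                       # tangential velocity
    return p_inf + 0.5 * rho * (v_inf**2 - v**2)  # Bernoulli's equation
```

    The key design point is that the sensitivity matrix is computed once on the baseline geometry and reused every iteration, so each cycle costs only a matrix-vector product plus the differentiation step.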

  7. A penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography.

    Science.gov (United States)

    Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn

    2007-01-01

    The conjugate gradient method is known to be efficient for nonlinear optimization problems with large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods and compensate for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It has better performance than conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstruct fluorochrome information for FMT.
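    A generic sketch of the idea, not the authors' exact FMT algorithm: a restarted (Fletcher-Reeves) conjugate gradient iteration minimizing a least-squares objective with a quadratic penalty that discourages negative values. The objective, restart period and backtracking line search are illustrative assumptions:

```python
import numpy as np

def penalized_cg(A, b, mu=10.0, n_iter=200, restart=10):
    """Restarted nonlinear CG minimizing ||Ax - b||^2 + mu*sum(min(x,0)^2);
    the quadratic penalty pushes the solution toward nonnegativity."""
    def f(x):
        r = A @ x - b
        return r @ r + mu * np.sum(np.minimum(x, 0.0) ** 2)

    def grad(x):
        return 2.0 * A.T @ (A @ x - b) + 2.0 * mu * np.minimum(x, 0.0)

    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for k in range(n_iter):
        if g @ d >= 0.0:          # safeguard: ensure a descent direction
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5              # backtracking (Armijo) line search
        x = x + t * d
        g_new = grad(x)
        if g_new @ g_new < 1e-20:
            break                 # converged
        if (k + 1) % restart == 0:
            d = -g_new            # periodic restart: steepest descent step
        else:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves update
            d = -g_new + beta * d
        g = g_new
    return x
```

    The restart resets the search direction to steepest descent, which is what lets a nonlinear CG recover when accumulated curvature information becomes stale.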

  8. Extrapolated surface dose measurements using a NdFeB magnetic deflector for 6 MV x-ray beams.

    Science.gov (United States)

    Damrongkijudom, N; Butson, M; Rosenfeld, A

    2007-03-01

    Extrapolated surface dose measurements have been performed using radiographic film to measure 2-dimensional maps of skin and surface dose with and without a magnetic deflector device aimed at reducing surface dose. Experiments are also performed using an Attix parallel plate ionisation chamber for comparison to radiographic film extrapolation surface dose analysis. Extrapolated percentage surface dose assessments from radiographic film at the central axis of a 6 MV x-ray beam with the magnetic deflector for field sizes 10 × 10 cm², 15 × 15 cm² and 20 × 20 cm² are 9 ± 3%, 13 ± 3% and 16 ± 3%, compared to 14 ± 3%, 19 ± 3% and 27 ± 3% for open fields, respectively. Results from the Attix chamber for the same field sizes are 12 ± 1%, 15 ± 1% and 18 ± 1%, compared to 16 ± 1%, 21 ± 1% and 27 ± 1% for open fields, respectively. Results are also shown for profiles measured in-plane and cross-plane to the magnetic deflector and compared to open field data. Results have shown that the surface dose is reduced at all sites within the treatment field, with larger reductions seen on one side of the field due to the sweeping nature of the designed magnetic field. Radiographic film extrapolation provides an advanced surface dose assessment and has matched well with Attix chamber results. Film measurement allows for easy 2-dimensional dose assessments.
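    The extrapolation technique amounts to fitting near-surface dose readings against depth and extrapolating the fit back to zero depth. A minimal sketch with made-up readings (not the paper's data):

```python
import numpy as np

def extrapolate_surface_dose(depths_mm, doses):
    """Least-squares linear fit of near-surface dose readings vs. depth,
    extrapolated to zero depth: the 'extrapolation method' estimate of
    the surface dose."""
    slope, intercept = np.polyfit(depths_mm, doses, 1)
    return intercept  # dose at depth = 0

# hypothetical film readings at shallow depths (% of Dmax), illustrative only
depths = np.array([0.2, 0.5, 1.0, 1.5])
doses = np.array([17.0, 20.0, 25.0, 30.0])
surface = extrapolate_surface_dose(depths, doses)
```

    The intercept removes the contribution of dose build-up within the finite thickness of the detector, which is why the extrapolated value sits below the shallowest reading.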

  9. APPLYING ROBUST RANKING METHOD IN TWO PHASE FUZZY OPTIMIZATION LINEAR PROGRAMMING PROBLEMS (FOLPP)

    Directory of Open Access Journals (Sweden)

    Monalisha Pattnaik

    2014-12-01

    Full Text Available Background: This paper explores solutions to fuzzy optimization linear programming problems (FOLPP), where some parameters are fuzzy numbers. In practice, there are many problems in which all decision parameters are fuzzy numbers, and such problems are usually solved by either probabilistic programming or multi-objective programming methods. Methods: In this paper, using the concept of comparison of fuzzy numbers, a very effective method is introduced for solving these problems. This paper extends a linear programming based problem to a fuzzy environment. Under the problem assumptions, the optimal solution can still be theoretically obtained using the two phase simplex based method in a fuzzy environment. The fuzzy decision variables can be initially generated and then solved and improved sequentially using the fuzzy decision approach, by introducing the robust ranking technique. Results and conclusions: The model is illustrated with an application, and a post optimal analysis approach is obtained. The proposed procedure was programmed with MATLAB (R2009a version) software for plotting the four dimensional slice diagram for the application. Finally, a numerical example is presented to illustrate the effectiveness of the theoretical results and to gain additional managerial insights.
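    For a triangular fuzzy number ã = (a, b, c), the robust ranking index commonly used to defuzzify coefficients is R(ã) = ∫₀¹ ½(a_α^L + a_α^U) dα, which evaluates in closed form to (a + 2b + c)/4. A minimal sketch of that reduction (a generic formula, not tied to the paper's specific application):

```python
def robust_rank(a, b, c):
    """Robust ranking index of a triangular fuzzy number (a, b, c).
    The alpha-cuts are [a + (b - a)*alpha, c - (c - b)*alpha]; averaging
    their endpoints and integrating over alpha in [0, 1] gives
    (a + 2*b + c) / 4, a crisp value usable in an ordinary LP."""
    return (a + 2.0 * b + c) / 4.0
```

    Replacing every fuzzy coefficient by its rank turns the FOLPP into a crisp linear program that the two phase simplex method can solve directly.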

  10. Extrapolation of extreme response for different mooring line systems of floating wave energy converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Sterndorff, Martin; Sørensen, John Dalsgaard

    2014-01-01

    Mooring systems for floating wave energy converters (WECs) are a major cost driver. Failure of mooring systems often occurs due to extreme loads. This paper introduces an extrapolation method for extreme response which accounts for the control system of a WEC that controls the loads onto... measurements from the lab-scaled WEPTOS WEC are taken. Different catenary anchor leg mooring (CALM) systems as well as single anchor leg mooring (SALM) mooring systems are implemented for a dynamic simulation with different numbers of mooring lines. Extreme tension loads with a return period of 50 years are assessed... for the hawser as well as at the different mooring lines. Furthermore, the extreme load impact given failure of one mooring line is assessed and compared with extreme loads given no system failure...

  11. A Fast Condensing Method for Solution of Linear-Quadratic Control Problems

    DEFF Research Database (Denmark)

    Frison, Gianluca; Jørgensen, John Bagterp

    2013-01-01

    In both Active-Set (AS) and Interior-Point (IP) algorithms for Model Predictive Control (MPC), sub-problems in the form of linear-quadratic (LQ) control problems need to be solved at each iteration. The solution of these sub-problems is usually the main computational effort. In this paper we consider a condensing (or state elimination) method to solve an extended version of the LQ control problem, and we show how to exploit the structure of this problem to both factorize the dense Hessian matrix and solve the system. Furthermore, we present two efficient implementations. The first implementation is formally identical to the Riccati recursion based solver and has a computational complexity that is linear in the control horizon length and cubic in the number of states. The second implementation has a computational complexity that is quadratic in the control horizon length as well...
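    The Riccati recursion that the first implementation mirrors can be sketched for the plain (unextended) finite-horizon LQ problem; this is a generic textbook recursion, not the paper's condensing algorithm:

```python
import numpy as np

def riccati_lqr(A, B, Q, R, N):
    """Backward Riccati recursion for the finite-horizon LQ problem
    min sum x'Qx + u'Ru  s.t.  x_{k+1} = A x_k + B u_k.
    Returns the time-varying feedback gains K_0..K_{N-1}; the cost per
    stage is cubic in the state dimension and the total cost is linear
    in the horizon N."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        S = R + B.T @ P @ B
        K = np.linalg.solve(S, B.T @ P @ A)   # gain at this stage
        P = Q + A.T @ P @ (A - B @ K)         # Riccati update
        gains.append(K)
    return gains[::-1]                        # reorder to forward time
```

    For a long horizon the early gains converge to the stationary (infinite-horizon) LQR gain, which is a convenient sanity check on the recursion.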

  12. Enhancing Linearity of Voltage Controlled Oscillator Thermistor Signal Conditioning Circuit Using Linear Search

    Science.gov (United States)

    Rana, K. P. S.; Kumar, Vineet; Prasad, Tapan

    2018-02-01

    Temperature to Frequency Converters (TFCs) are potential signal conditioning circuits (SCCs) usually employed in temperature measurements using thermistors. An NE/SE-566 based SCC has recently been used as a TFC in several reported works. Application of an NE/SE-566 based SCC requires a mechanism for finding the optimal values of the SCC parameters yielding the optimal linearity and desired sensitivity performances. Two classical methods, namely the inflection point method and the three point method, have been employed for this task. In this work, the application of these two methods to an NE/SE-566 based SCC in a TFC is investigated in detail, and the conditions for their effective usage are developed. Further, since these classical methods offer only an approximate linearization of the temperature-frequency relationship, the application of a linear search based technique is proposed to further enhance the linearity. The implemented linear search method uses the results obtained from the above-mentioned classical methods. The presented simulation studies, for three different industrial grade thermistors, revealed that linearity enhancements of 21.7, 18.3 and 17.8% can be achieved over the inflection point method, and 4.9, 4.7 and 4.7% over the three point method, for an input temperature range of 0-100 °C.
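    The linear search idea can be illustrated on a simpler, classical linearization problem: choosing the series resistor of a thermistor voltage divider by brute-force search so that the output deviates as little as possible from a straight line. The beta-model parameters and search range below are assumptions for illustration, not the paper's NE/SE-566 circuit:

```python
import math

# Beta-model thermistor (assumed parameters, for illustration only)
R0, T0, BETA = 10e3, 298.15, 3950.0

def r_thermistor(t_celsius):
    """Thermistor resistance from the beta model."""
    T = t_celsius + 273.15
    return R0 * math.exp(BETA * (1.0 / T - 1.0 / T0))

def nonlinearity(r_series, t_lo=0.0, t_hi=100.0, n=101):
    """Max deviation of the divider output from the straight line joining
    its endpoints over the temperature range (lower is more linear)."""
    ts = [t_lo + (t_hi - t_lo) * i / (n - 1) for i in range(n)]
    v = [r_series / (r_series + r_thermistor(t)) for t in ts]
    line = [v[0] + (v[-1] - v[0]) * i / (n - 1) for i in range(n)]
    return max(abs(a - b) for a, b in zip(v, line))

def linear_search(lo=1e3, hi=50e3, step=100.0):
    """Brute-force linear search over the series resistance."""
    best_r, best_err = lo, float("inf")
    r = lo
    while r <= hi:
        err = nonlinearity(r)
        if err < best_err:
            best_r, best_err = r, err
        r += step
    return best_r, best_err
```

    A classical closed-form choice (analogous to the inflection point method) can seed the search range, after which the exhaustive sweep squeezes out the remaining nonlinearity.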

  13. Krylov subspace method with communication avoiding technique for linear system obtained from electromagnetic analysis

    International Nuclear Information System (INIS)

    Ikuno, Soichiro; Chen, Gong; Yamamoto, Susumu; Itoh, Taku; Abe, Kuniyoshi; Nakamura, Hiroaki

    2016-01-01

    The Krylov subspace method and the variable preconditioned Krylov subspace method with a communication avoiding technique for a linear system obtained from electromagnetic analysis are numerically investigated. In the k-skip Krylov method, the inner product calculations are expanded in the Krylov basis, so that the inner products are reduced to scalar operations. The k-skip CG method is applied as the inner-loop solver of the variable preconditioned Krylov subspace methods, and the converged solution of the electromagnetic problem is obtained using the method. (author)
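    For context, a textbook conjugate gradient iteration is sketched below; the comments mark the per-iteration inner products (global reductions on a parallel machine) that k-skip variants batch together by expanding them in a Krylov basis. This is a generic sketch, not the authors' k-skip implementation:

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=1000):
    """Textbook conjugate gradient for SPD systems. The two inner products
    per iteration (r.r and p.Ap) are the synchronization points that
    communication-avoiding (k-skip) variants restructure."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                     # inner product 1
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)      # inner product 2
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```

    In a k-skip scheme, k iterations' worth of these reductions are expressed through a single block of basis vectors, trading extra local arithmetic for fewer global synchronizations.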

  14. Pseudoinverse preconditioners and iterative methods for large dense linear least-squares problems

    Directory of Open Access Journals (Sweden)

    Oskar Cahueñas

    2013-05-01

    Full Text Available We address the issue of approximating the pseudoinverse of the coefficient matrix for dynamically building preconditioning strategies for the numerical solution of large dense linear least-squares problems. The new preconditioning strategies are embedded into simple and well-known iterative schemes that avoid the use of the usually ill-conditioned normal equations. We analyze a scheme to approximate the pseudoinverse, based on the Schulz iterative method, and also different iterative schemes based on extensions of Richardson's method and the conjugate gradient method that are suitable for preconditioning strategies. We present preliminary numerical results to illustrate the advantages of the proposed schemes.
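    The Schulz iteration mentioned above is simple to state: X_{k+1} = X_k(2I - A X_k), started from a suitably scaled A^T. A minimal sketch (the starting scale 1/(||A||_1 ||A||_inf) is one standard choice that guarantees convergence, not necessarily the authors' variant):

```python
import numpy as np

def schulz_pseudoinverse(A, n_iter=50):
    """Schulz (Newton) iteration X_{k+1} = X_k (2I - A X_k), which
    converges quadratically to the Moore-Penrose pseudoinverse when
    started from X_0 = A^T / (||A||_1 * ||A||_inf)."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        X = X @ (2.0 * I - A @ X)
    return X
```

    Each step uses only matrix-matrix products, which is what makes the iteration attractive for building dense preconditioners on hardware optimized for level-3 BLAS.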

  15. A block Krylov subspace time-exact solution method for linear ordinary differential equation systems

    NARCIS (Netherlands)

    Bochev, Mikhail A.

    2013-01-01

    We propose a time-exact Krylov-subspace-based method for solving linear ordinary differential equation systems of the form $y'=-Ay+g(t)$ and $y''=-Ay+g(t)$, where $y(t)$ is the unknown function. The method consists of two stages. The first stage is an accurate piecewise polynomial approximation of

  16. Improved Full-Newton Step O(nL) Infeasible Interior-Point Method for Linear Optimization

    NARCIS (Netherlands)

    Gu, G.; Mansouri, H.; Zangiabadi, M.; Bai, Y.Q.; Roos, C.

    2009-01-01

    We present several improvements of the full-Newton step infeasible interior-point method for linear optimization introduced by Roos (SIAM J. Optim. 16(4):1110–1136, 2006). Each main step of the method consists of a feasibility step and several centering steps. We use a more natural feasibility step,

  17. A simple method for HPLC retention time prediction: linear calibration using two reference substances.

    Science.gov (United States)

    Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng

    2017-01-01

    Analysis of related substances in pharmaceutical chemicals and of multi-components in traditional Chinese medicines needs a large number of reference substances to identify the chromatographic peaks accurately. But reference substances are costly. Thus, the relative retention (RR) method has been widely adopted in pharmacopoeias and the literature for characterizing the HPLC behavior of those reference substances that are unavailable. The problem is that it is difficult to reproduce the RR on different columns, due to the error between the measured retention time (tR) and the predicted tR in some cases. Therefore, it is useful to develop an alternative and simple method for predicting tR accurately. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: a procedure of two point prediction, a procedure of validation by multiple point regression, and sequential matching. The tR of compounds on an HPLC column can be calculated from standard retention times and a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, but more accurate and more robust on different HPLC columns than the RR method. Hence quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
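    The two point prediction step can be sketched as a straight-line calibration through the retention times of the two reference substances; the retention times below are illustrative, not from the study:

```python
def lctrs_predict(ref_standard, ref_measured, t_standard):
    """Linear calibration using two reference substances (LCTRS sketch):
    fit t_measured = a * t_standard + b through the two reference points,
    then predict retention times of other compounds on this column."""
    (s1, s2), (m1, m2) = ref_standard, ref_measured
    a = (m2 - m1) / (s2 - s1)
    b = m1 - a * s1
    return [a * t + b for t in t_standard]

# two references: standard tR 5.0 and 15.0 min map to 5.5 and 16.5 min
pred = lctrs_predict((5.0, 15.0), (5.5, 16.5), [10.0])
```

    The multiple point regression step of the method then checks that additional known compounds fall on the same line before the calibration is trusted for unknowns.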

  18. An improved partial bundle method for linearly constrained minimax problems

    Directory of Open Access Journals (Sweden)

    Chunming Tang

    2016-02-01

    Full Text Available In this paper, we propose an improved partial bundle method for solving linearly constrained minimax problems. In order to reduce the number of component function evaluations, we utilize a partial cutting-planes model to substitute for the traditional one. At each iteration, only one quadratic programming subproblem needs to be solved to obtain a new trial point. An improved descent test criterion is introduced to simplify the algorithm. The method produces a sequence of feasible trial points, and ensures that the objective function is monotonically decreasing on the sequence of stability centers. Global convergence of the algorithm is established. Moreover, we utilize the subgradient aggregation strategy to control the size of the bundle and therefore overcome the difficulty of computation and storage. Finally, some preliminary numerical results show that the proposed method is effective.

  19. A new technique for extracting the red edge position from hyperspectral data : the linear extrapolation method

    NARCIS (Netherlands)

    Cho, M.A.; Skidmore, A.K.

    2006-01-01

    There is increasing interest in using hyperspectral data for quantitative characterization of vegetation in spatial and temporal scopes. Many spectral indices are being developed to improve vegetation sensitivity by minimizing the background influence. The chlorophyll absorption continuum index

  20. Model Predictive Control for Linear Complementarity and Extended Linear Complementarity Systems

    Directory of Open Access Journals (Sweden)

    Bambang Riyanto

    2005-11-01

    Full Text Available In this paper, we propose a model predictive control method for linear complementarity and extended linear complementarity systems, by formulating the optimization along the prediction horizon as a mixed integer quadratic program. Such systems contain interactions between continuous dynamics and discrete event systems, and therefore can be categorized as hybrid systems. As linear complementarity and extended linear complementarity systems find applications in different research areas, such as impact mechanical systems, traffic control and process control, this work will contribute to the development of control design methods for those areas as well, as shown by three given examples.