#### Sample records for solution interpolation procedure

1. Interpolation solution of the single-impurity Anderson model

International Nuclear Information System (INIS)

Kuzemsky, A.L.

1990-10-01

The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong- and weak-correlation limits, is obtained. The unified concept of relevant mean-field renormalizations proves indispensable in the strong-correlation limit. (author). 21 refs

2. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

Science.gov (United States)

Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

2017-11-01

Interpolation is an important tool for many practical applications, and it is often beneficial to interpolate not with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. Collocation methods, widely used in practice, are a typical example of this type of interpolation. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, which share some properties of complex powers, is used. In this paper, we present an approach to the interpolation problem in which solutions of the elasticity equations in three dimensions are used as the interpolation basis.
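The idea of interpolating with solutions of a differential equation, rather than with a generic polynomial basis, can be sketched in a few lines. The example below is our own illustration, not the paper's monogenic construction: it uses harmonic polynomials, i.e. solutions of the 2-D Laplace equation, so the fitted interpolant is itself harmonic. The node coordinates and values are hypothetical.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Every basis function satisfies the 2-D Laplace equation, so any linear
# combination fitted through the nodes is itself a harmonic function.
basis = [lambda x, y: 1.0,
         lambda x, y: x,
         lambda x, y: y,
         lambda x, y: x * x - y * y,
         lambda x, y: x * y]

def harmonic_interpolant(nodes, values):
    A = [[f(x, y) for f in basis] for (x, y) in nodes]
    coeffs = solve(A, values)
    return lambda x, y: sum(c * f(x, y) for c, f in zip(coeffs, basis))

nodes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
values = [1.0, 2.0, 0.5, 3.0, -1.0]
u = harmonic_interpolant(nodes, values)
```

The same collocation pattern carries over when the basis consists of elasticity solutions: only the basis list and the dimension of the nodes change.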

3. Multivariate interpolation

Directory of Open Access Journals (Sweden)

Pakhnutov I.A.

2017-04-01

The paper deals with iterative interpolation methods in the form of similar recursive procedures defined by simple basis functions (the interpolation basis), not necessarily real-valued. These basis functions are of essentially arbitrary type, chosen at the user's discretion. The interpolant construction studied here is notably versatile: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis interpolation functions is as wide as possible, since it is subject only to inessential restrictions. In particular, the method coincides with traditional polynomial interpolation (mimicking Lagrange's method) in the real one-dimensional case, or with rational, exponential, etc. interpolation in other cases. As an iterative process, the interpolation is quite flexible and allows a single procedure to change the type of interpolation depending on the node number in a given set. Linear (and possibly some nonlinear) choices of interpolation basis allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices; the interpolated data can also be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation in the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.
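In the real one-dimensional case, recursive interpolation of this kind reduces to classical polynomial interpolation. Neville's scheme is a compact sketch of such an iterative procedure (our own illustration, not the author's code):

```python
def neville(xs, ys, x):
    """Neville's iterative interpolation: each pass combines neighbouring
    lower-order interpolants until the full-order value at x remains."""
    p = list(ys)
    for level in range(1, len(xs)):
        for i in range(len(xs) - level):
            p[i] = ((x - xs[i + level]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + level])
    return p[0]

# Interpolating y = x^2 through three nodes recovers the quadratic exactly.
value = neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0)
```

Changing the combination rule applied at each level is what lets the same iterative skeleton produce rational, exponential, or other interpolants instead.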

4. Spline-Interpolation Solution of One Elasticity Theory Problem

CERN Document Server

Shirakova, Elena A

2011-01-01

The book presents methods of approximate solution of the basic problem of elasticity for special types of solids. Engineers can apply approximate methods (Finite Element Method, Boundary Element Method) to solve these problems, but the application of these methods may not be correct for solids with certain singularities or asymmetric boundary conditions. The book is recommended for researchers and professionals working on elasticity modeling.

5. Hermite interpolant multiscaling functions for numerical solution of the convection diffusion equations

Directory of Open Access Journals (Sweden)

2018-04-01

A numerical technique based on Hermite interpolant multiscaling functions is presented for the solution of convection-diffusion equations. The operational matrices of derivative, integration, and product are presented for the multiscaling functions and are used to reduce the solution of the linear convection-diffusion equation to the solution of algebraic equations. Because these matrices are sparse, the method is computationally very attractive and reduces CPU time and computer memory. Illustrative examples are included to demonstrate the validity and applicability of the new technique.

6. Description of the ECMWF/WMO Global Observational Data Set, and associated data extraction and interpolation procedures

NARCIS (Netherlands)

Potma CJM

1993-01-01

This report presents a description of data-extraction and interpolation procedures using the ECMWF/WMO Global Observational Data Set (ODS), an archive of unvalidated observational meteorological surface data measured at 00, 06, 12 and 18 UT. The archive covers the period 1 January 1980 to 31

7. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology

Science.gov (United States)

Allen, P. A.; Wells, D. N.

2013-01-01

No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1; depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000; hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack-mouth-opening-displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
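Interpolating between tabulated nonlinear solutions can be pictured with a minimal bilinear lookup over two of the parameters. This is a sketch of the general idea only; the grid values below are hypothetical toy numbers, not NASA's tabulated solutions.

```python
def bilinear(xgrid, ygrid, table, x, y):
    """Bilinear interpolation in a table indexed as table[i][j] at
    (xgrid[i], ygrid[j]); x and y must lie inside the grid."""
    i = max(k for k in range(len(xgrid) - 1) if xgrid[k] <= x)
    j = max(k for k in range(len(ygrid) - 1) if ygrid[k] <= y)
    tx = (x - xgrid[i]) / (xgrid[i + 1] - xgrid[i])
    ty = (y - ygrid[j]) / (ygrid[j + 1] - ygrid[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

# Hypothetical toy table: a J-like quantity tabulated over a/c and a/B.
ac = [0.2, 0.6, 1.0]
aB = [0.2, 0.5, 0.8]
J = [[1.0, 2.0, 3.0],
     [2.0, 4.0, 6.0],
     [3.0, 6.0, 9.0]]
Jq = bilinear(ac, aB, J, 0.4, 0.35)
```

A real parameter study of this kind interpolates over all four variables (a/c, a/B, E/ys, n); the lookup above just shows the two-variable building block.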

8. SPLINE, Spline Interpolation Function

International Nuclear Information System (INIS)

Allouard, Y.

1977-01-01

1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
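The smooth interpolant with continuous derivatives described here can be illustrated with the simplest case, a natural cubic spline (zero second derivative at both ends, C2 inside). This is a generic textbook construction, not the ASPLERQ/SPLQ code itself:

```python
def natural_cubic_spline(xs, ys):
    """Natural cubic spline through (xs, ys): C2-continuous, with zero
    second derivative at both endpoints."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Tridiagonal system for the knot second derivatives m[0..n]; the
    # natural end conditions make rows 0 and n trivial (Thomas algorithm).
    a = [0.0] * (n + 1)
    b = [1.0] * (n + 1)
    c = [0.0] * (n + 1)
    d = [0.0] * (n + 1)
    for i in range(1, n):
        a[i] = h[i - 1]
        b[i] = 2.0 * (h[i - 1] + h[i])
        c[i] = h[i]
        d[i] = 6.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    for i in range(1, n + 1):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    m = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        m[i] = (d[i] - c[i] * m[i + 1]) / b[i]

    def evaluate(x):
        i = 0
        while i < n - 1 and x > xs[i + 1]:
            i += 1
        t = x - xs[i]
        slope = (ys[i + 1] - ys[i]) / h[i] - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0
        return (ys[i] + t * slope + t * t * m[i] / 2.0
                + t ** 3 * (m[i + 1] - m[i]) / (6.0 * h[i]))

    return evaluate
```

A spline with continuity up to order 2Q-1, as in the program description, generalizes this to higher-degree pieces; the tridiagonal solve is the part that scales.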

9. Spatial interpolation

NARCIS (Netherlands)

Stein, A.

1991-01-01

The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are

10. Interpolation functors and interpolation spaces

CERN Document Server

Brudnyi, Yu A

1991-01-01

The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...

11. Interpolation theory

CERN Document Server

Lunardi, Alessandra

2018-01-01

This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

12. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

Science.gov (United States)

Allen, Phillip A.; Wells, Douglas N.

2013-01-01

No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1; depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000; hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack-mouth-opening-displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.

13. BIMOND3, Monotone Bivariate Interpolation

International Nuclear Information System (INIS)

Fritsch, F.N.; Carlson, R.E.

2001-01-01

1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data

14. Robust and efficient solution procedures for association models

DEFF Research Database (Denmark)

Michelsen, Michael Locht

2006-01-01

Equations of state that incorporate the Wertheim association expression are more difficult to apply than conventional pressure-explicit equations, because the association term is implicit and requires solution for an internal set of composition variables. In this work, we analyze the convergence behavior of different solution methods and demonstrate how a simple and efficient, yet globally convergent, procedure for the solution of the equation of state can be formulated.
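The implicit nature of the association term can be illustrated with the simplest one-site case, where the fraction X of non-bonded sites satisfies X = 1/(1 + ρΔX). The plain successive-substitution scheme below is a schematic example with a made-up value of ρΔ, not Michelsen's actual procedure:

```python
def nonbonded_fraction(rho_delta, tol=1e-12, max_iter=500):
    """Solve X = 1 / (1 + rho_delta * X) for the fraction of non-bonded
    association sites by successive substitution."""
    X = 1.0  # start from the no-association limit
    for _ in range(max_iter):
        X_new = 1.0 / (1.0 + rho_delta * X)
        if abs(X_new - X) < tol:
            return X_new
        X = X_new
    raise RuntimeError("successive substitution did not converge")

X = nonbonded_fraction(10.0)  # hypothetical value of rho * Delta
```

For strong association (large ρΔ), plain substitution slows down or oscillates, which is exactly the kind of convergence behaviour such an analysis addresses.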

15. International comparison of interpolation procedures for the efficiency of germanium gamma-ray spectrometers (GAM83 exercise)

International Nuclear Information System (INIS)

Zijp, W.L.; Polle, A.N.; Nolthenius, H.J.

1986-01-01

Results are presented for an international intercomparison of a particular gamma-ray spectrometric procedure. Laboratories were asked to determine full-energy-peak efficiencies and activities by means of their own procedures, starting from supplied peak-efficiency data. Four data sets, for four different conditions of germanium detectors, were distributed: a high-accuracy (uncertainty < 1%) data set with a relatively large number of measured data (SET 1); a low-accuracy (uncertainty 3-5%) data set with a relatively small number of measured data (SET 2); a low-energy data set (SET 3); and a high-accuracy data set with a relatively small number of measured data (SET 4). The intercomparison (coded GAM83) was organized and analyzed under the auspices of the International Committee for Radionuclide Metrology (ICRM). The results comprise the analysis of the contributions of 41 participants
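A common baseline for interpolating peak efficiency between calibration energies is linear interpolation in log-log space, which is exact whenever the efficiency curve follows a local power law. The sketch below is a generic illustration with made-up numbers, not any participant's actual procedure:

```python
import math

def loglog_interp(energies, effs, e):
    """Interpolate ln(efficiency) linearly in ln(energy); exact for
    efficiency curves of the form eff = C * E**k between the nodes."""
    i = 0
    while i < len(energies) - 2 and e > energies[i + 1]:
        i += 1
    t = (math.log(e) - math.log(energies[i])) / (
        math.log(energies[i + 1]) - math.log(energies[i]))
    return math.exp((1.0 - t) * math.log(effs[i]) + t * math.log(effs[i + 1]))

# Hypothetical calibration points drawn from a pure power law eff ~ E**-0.8
energies = [100.0, 200.0, 400.0, 800.0]
effs = [0.01 * (E / 100.0) ** -0.8 for E in energies]
```

Real germanium efficiency curves deviate from a single power law, especially at low energies (cf. SET 3), which is why participants' interpolation procedures differ.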

16. Procedures for accurately diluting and dispensing radioactive solutions

International Nuclear Information System (INIS)

1975-01-01

The techniques currently used by various laboratories participating in international comparisons of radioactivity measurements are surveyed, and recommendations for good laboratory practice are established. The report describes, for instance, the preparation of solutions, dilution techniques, the use of 'pycnometers', and weighing procedures (including buoyancy correction). It should be possible to keep random and systematic uncertainties below 0.1% of the final result

17. Linear Methods for Image Interpolation

OpenAIRE

Pascal Getreuer

2011-01-01

We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
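The separability the abstract emphasizes means a 2-D bilinear resize is just 1-D linear resampling applied along rows and then columns. A minimal sketch (our illustration, following the paper's framing):

```python
def lerp_resize_1d(seq, new_len):
    """Linear (tent-kernel) resampling of a 1-D sequence, endpoints aligned."""
    if new_len == 1:
        return [seq[0]]
    scale = (len(seq) - 1) / (new_len - 1)
    out = []
    for j in range(new_len):
        x = j * scale
        i = min(int(x), len(seq) - 2)
        t = x - i
        out.append((1.0 - t) * seq[i] + t * seq[i + 1])
    return out

def bilinear_resize(img, new_h, new_w):
    """Separable bilinear resize: resample every row, then every column."""
    rows = [lerp_resize_1d(r, new_w) for r in img]
    cols = [lerp_resize_1d([row[j] for row in rows], new_h)
            for j in range(new_w)]
    return [[cols[j][i] for j in range(new_w)] for i in range(new_h)]
```

For a separable kernel like the tent, row-then-column order gives the same result as column-then-row; swapping the 1-D kernel (cubic, sinc) upgrades the method without changing this structure.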

18. Permanently calibrated interpolating time counter

International Nuclear Information System (INIS)

Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

2015-01-01

We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)
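The calibration such a counter automates is typically a code-density test: hits uniformly distributed in time are histogrammed over the interpolator's output codes, and the relative counts estimate the (generally non-uniform) bin widths of the transfer characteristic. A schematic simulation (not the authors' circuit), with hypothetical bin edges:

```python
import bisect
import random

def code_density_calibration(bin_edges, n_hits=200_000, seed=1):
    """Estimate interpolator bin widths from the code histogram produced by
    uniformly distributed hits; the estimates are in the same time units as
    the edges and sum to the full range by construction."""
    rng = random.Random(seed)
    t0, full = bin_edges[0], bin_edges[-1] - bin_edges[0]
    counts = [0] * (len(bin_edges) - 1)
    for _ in range(n_hits):
        t = t0 + rng.random() * full
        # the code registered for this hit is the bin containing t
        code = min(bisect.bisect_right(bin_edges, t) - 1, len(counts) - 1)
        counts[code] += 1
    return [full * n / n_hits for n in counts]

# True (hypothetical) non-uniform bins of a 1 ns interpolation range
true_edges = [0.0, 0.3, 0.4, 0.7, 1.0]
est_widths = code_density_calibration(true_edges)
```

Running this continuously in the background, as the counter does, lets the transfer characteristic track temperature drift without interrupting the measurement session.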

19. Smooth Phase Interpolated Keying

Science.gov (United States)

Borah, Deva K.

2007-01-01

Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

20. Spline Interpolation of Image

OpenAIRE

I. Kuba; J. Zavacky; J. Mihalik

1995-01-01

This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed and then extended to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.

1. A Procedural Solution to Model Roman Masonry Structures

Science.gov (United States)

Cappellini, V.; Saleri, R.; Stefani, C.; Nony, N.; De Luca, L.

2013-07-01

The paper describes a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we chose an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with the open-source programming language Processing, suited to visual, animated or static, 2D or 3D, interactive creations. Using this language, a Java environment has been developed. Therefore, even though procedural modelling yields an accuracy level inferior to manual (brick-by-brick) modelling, the method can be useful for the static evaluation of buildings (which requires quantitative data) and for metric measures for restoration purposes.

2. A PROCEDURAL SOLUTION TO MODEL ROMAN MASONRY STRUCTURES

Directory of Open Access Journals (Sweden)

V. Cappellini

2013-07-01

The paper describes a new approach based on the development of a procedural modelling methodology for archaeological data representation. This is a custom-designed solution based on the recognition of the rules belonging to the construction methods used in Roman times. We have conceived a tool for 3D reconstruction of masonry structures starting from photogrammetric surveying. Our protocol considers different steps. Firstly, we focused on the classification of opus based on the basic interconnections that can lead to a descriptive system used for their unequivocal identification and design. Secondly, we chose an automatic, accurate, flexible and open-source photogrammetric pipeline named Pastis Apero Micmac (PAM), developed by IGN (Paris). We employed it to generate ortho-images from non-oriented images, using a user-friendly interface implemented by CNRS Marseille (France). Thirdly, the masonry elements are created in a parametric and interactive way, and finally they are adapted to the photogrammetric data. The presented application, currently under construction, is developed with the open-source programming language Processing, suited to visual, animated or static, 2D or 3D, interactive creations. Using this language, a Java environment has been developed. Therefore, even though procedural modelling yields an accuracy level inferior to manual (brick-by-brick) modelling, the method can be useful for the static evaluation of buildings (which requires quantitative data) and for metric measures for restoration purposes.

3. Generalized interpolative quantum statistics

International Nuclear Information System (INIS)

Ramanathan, R.

1992-01-01

A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

4. CMB anisotropies interpolation

NARCIS (Netherlands)

Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

2010-01-01

We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method, which serves as a reference approach. We analyse kriging
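Kriging's key property in such comparisons is that it is an exact interpolator whose weights come from the data's covariance structure. A minimal simple-kriging sketch in one dimension, with an assumed Gaussian covariance model and made-up data (not the CMB maps of the paper):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kriging_predict(xs, ys, x0, length=1.0):
    """Simple kriging with covariance C(h) = exp(-(h/length)**2): solve
    C w = c0 for the weights, then predict the weighted sum of the data."""
    C = [[math.exp(-((a - b) / length) ** 2) for b in xs] for a in xs]
    c0 = [math.exp(-((a - x0) / length) ** 2) for a in xs]
    w = solve(C, c0)
    return sum(wi * yi for wi, yi in zip(w, ys))
```

At a data site the right-hand side c0 coincides with a column of C, so the weights pick out that observation exactly; binning, by contrast, averages everything falling in a cell.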

5. Monotone piecewise bicubic interpolation

International Nuclear Information System (INIS)

Carlson, R.E.; Fritsch, F.N.

1985-01-01

In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial derivative (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
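The univariate algorithm this work extends limits the Hermite slopes so that each cubic piece stays monotone. A 1-D sketch of that slope-limiting idea (a simplified variant for illustration, not the bicubic algorithm of the paper):

```python
def monotone_cubic(xs, ys):
    """Piecewise cubic Hermite interpolant with Fritsch-Carlson-style slope
    limiting: slopes vanish at local extrema and are capped at three times
    the secant slope, which keeps each piece monotone on monotone data."""
    n = len(xs)
    h = [xs[i + 1] - xs[i] for i in range(n - 1)]
    delta = [(ys[i + 1] - ys[i]) / h[i] for i in range(n - 1)]
    d = [0.0] * n
    d[0], d[n - 1] = delta[0], delta[n - 2]
    for i in range(1, n - 1):
        d[i] = 0.0 if delta[i - 1] * delta[i] <= 0 else (delta[i - 1] + delta[i]) / 2.0
    for i in range(n - 1):  # enforce |d| <= 3|delta| on every interval
        if delta[i] == 0.0:
            d[i] = d[i + 1] = 0.0
        else:
            if abs(d[i]) > 3.0 * abs(delta[i]):
                d[i] = 3.0 * delta[i]
            if abs(d[i + 1]) > 3.0 * abs(delta[i]):
                d[i + 1] = 3.0 * delta[i]

    def evaluate(x):
        i = 0
        while i < n - 2 and x > xs[i + 1]:
            i += 1
        t = (x - xs[i]) / h[i]
        h00 = (1.0 + 2.0 * t) * (1.0 - t) ** 2
        h10 = t * (1.0 - t) ** 2
        h01 = t * t * (3.0 - 2.0 * t)
        h11 = t * t * (t - 1.0)
        return (h00 * ys[i] + h10 * h[i] * d[i]
                + h01 * ys[i + 1] + h11 * h[i] * d[i + 1])

    return evaluate
```

The bicubic case adds the twist terms and the corresponding inequalities on all four corner derivatives of each mesh element.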

6. Linear Methods for Image Interpolation

Directory of Open Access Journals (Sweden)

Pascal Getreuer

2011-09-01

We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

7. Interpolation of daily rainfall using spatiotemporal models and clustering

KAUST Repository

Militino, A. F.

2014-06-11

Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitation and standard errors on a 1 km² grid for the whole region. © 2014 Royal Meteorological Society.

8. Interpolation of daily rainfall using spatiotemporal models and clustering

KAUST Repository

Militino, A. F.; Ugarte, M. D.; Goicoa, T.; Genton, Marc G.

2014-01-01

Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitation and standard errors on a 1 km² grid for the whole region. © 2014 Royal Meteorological Society.

9. Systems and methods for interpolation-based dynamic programming

KAUST Repository

Rockwood, Alyn

2013-01-03

Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with that objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

10. Systems and methods for interpolation-based dynamic programming

KAUST Repository

Rockwood, Alyn

2013-01-01

Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with that objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

11. Feature displacement interpolation

DEFF Research Database (Denmark)

1998-01-01

Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (kriging) has properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.

12. Extension Of Lagrange Interpolation

Directory of Open Access Journals (Sweden)

2015-01-01

In this paper we present a generalization of Lagrange interpolation polynomials to higher dimensions using Cramer's formula. The aim is to construct polynomials in space whose error tends to zero.
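For reference, the classical one-dimensional construction being generalized can be sketched directly from the Lagrange basis polynomials (a generic textbook sketch, not the paper's higher-dimensional construction):

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial through (xs, ys) at x,
    summing y_i times the basis polynomial l_i that is 1 at x_i and 0 at
    every other node."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Four nodes determine the cubic x**3 - x uniquely.
value = lagrange([-1.0, 0.0, 1.0, 2.0], [0.0, 0.0, 0.0, 6.0], 3.0)
```

The basis-polynomial form and the determinant (Cramer) form of the interpolant are equivalent; the determinant form is what extends more naturally to several variables.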

13. A procedure to construct exact solutions of nonlinear evolution ...

Exact solutions; the functional variable method; nonlinear wave equations. PACS Nos 02.30. ... computer science, directly searching for solutions of nonlinear differential equations has become more and ... Right after this pioneer work, this ...

14. New families of interpolating type IIB backgrounds

Science.gov (United States)

Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

2010-04-01

We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T² fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology-changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N = 2 or N = 1 supersymmetry. In the N = 2 case it can be shown that the solutions are regular.

15. Digital time-interpolator

International Nuclear Information System (INIS)

Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

1990-01-01

This report presents the design of a digital time meter able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. To determine the best principle for interpolation, three methods were simulated on a computer in Pascal. On this basis the best method was chosen and used in the design. To test the basic operation of the circuit, the interpolation part was constructed and tested; the remainder of the circuit was simulated on a computer, so no data are available on the operation of the complete circuit in practice. The interpolation part, however, is the most critical part; the remainder of the circuit is more or less simple logic. The report also describes the principle of interpolation and the design of the circuit. Measurement results for the prototype are presented at the end. (author). 3 refs.; 37 figs.; 2 tabs

16. Multivariate Birkhoff interpolation

CERN Document Server

Lorentz, Rudolph A

1992-01-01

The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

17. Ketamine. A solution to procedural pain in burned children.

Science.gov (United States)

Groeneveld, A; Inkson, T

1992-09-01

Our experience has shown ketamine to be a safe and effective method of providing pain relief during specific procedures in burned children. It renders high doses of narcotics unnecessary and offers children the benefit of general anesthesia without the requirement of endotracheal intubation and a trip to the operating room. The response of parents and staff to the use of ketamine has been positive. Parents often experience feelings of guilt following injury to a child and are eager to employ methods that reduce their child's pain. So far, no parent has refused the administration of ketamine; some have even asked that it be used during subsequent procedures on their child. With adequate pre-procedure teaching, parents are prepared for the possible occurrence of emergent reactions and can assist in reorienting the child during recovery. Staff have found that the stress of doing painful procedures on children is reduced when ketamine is used. The procedures tend to be quicker and the predicament of working on a screaming, agitated child is eliminated. At the same time, nursing staff have had to get used to the nystagmic gaze of the children and accept that these patients are truly anesthetized even though they might move and talk. Despite the success we and others have had with ketamine, several questions about its use in burn patients remain unanswered. The literature does not answer such questions as: Which nursing measures reduce the incidence of emergent reactions? How many ketamine anesthetics can safely be administered to one individual? How does the frequency of administration relate to tolerance in a burn patient? Are there detrimental effects of frequent or long-term use? Clearly, an understanding of these questions is necessary to determine the safe boundaries of ketamine use in burn patients. Ketamine is not a panacea for the problem of pain in burned children. But it is one means of managing procedural pain, which is, after all, a significant clinical

18. Solution Tree Problem Solving Procedure for Engineering Analysis ...

African Journals Online (AJOL)

Illustrations are provided in the thermofluid engineering area to showcase the procedure's applications. This approach has proved to be a veritable tool for enhancing the problem-solving and computer algorithmic skills of engineering students, eliciting their curiosity, active participation and appreciation of the taught course.

19. Measurement of Actinides in Molybdenum-99 Solution Analytical Procedure

Energy Technology Data Exchange (ETDEWEB)

Soderquist, Chuck Z. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Weaver, Jamie L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

2015-11-01

This document is a companion report to a previous report, PNNL 24519, Measurement of Actinides in Molybdenum-99 Solution, A Brief Review of the Literature, August 2015. In this companion report, we report a fast, accurate, newly developed analytical method for measurement of trace alpha-emitting actinide elements in commercial high-activity molybdenum-99 solution. Molybdenum-99 is widely used to produce 99mTc for medical imaging. Because it is used as a radiopharmaceutical, its purity must be proven to be extremely high, particularly for the alpha emitting actinides. The sample of 99Mo solution is measured into a vessel (such as a polyethylene centrifuge tube) and acidified with dilute nitric acid. A gadolinium carrier is added (50 µg). Tracers and spikes are added as necessary. Then the solution is made strongly basic with ammonium hydroxide, which causes the gadolinium carrier to precipitate as hydrous Gd(OH)3. The precipitate of Gd(OH)3 carries all of the actinide elements. The suspension of gadolinium hydroxide is then passed through a membrane filter to make a counting mount suitable for direct alpha spectrometry. The high-activity 99Mo and 99mTc pass through the membrane filter and are separated from the alpha emitters. The gadolinium hydroxide, carrying any trace actinide elements that might be present in the sample, forms a thin, uniform cake on the surface of the membrane filter. The filter cake is first washed with dilute ammonium hydroxide to push the last traces of molybdate through, then with water. The filter is then mounted on a stainless steel counting disk. Finally, the alpha emitting actinide elements are measured by alpha spectrometry.

20. Measurement of Actinides in Molybdenum-99 Solution Analytical Procedure

International Nuclear Information System (INIS)

Soderquist, Chuck Z.; Weaver, Jamie L.

2015-01-01

This document is a companion report to a previous report, PNNL 24519, Measurement of Actinides in Molybdenum-99 Solution, A Brief Review of the Literature, August 2015. In this companion report, we report a fast, accurate, newly developed analytical method for measurement of trace alpha-emitting actinide elements in commercial high-activity molybdenum-99 solution. Molybdenum-99 is widely used to produce 99mTc for medical imaging. Because it is used as a radiopharmaceutical, its purity must be proven to be extremely high, particularly for the alpha emitting actinides. The sample of 99Mo solution is measured into a vessel (such as a polyethylene centrifuge tube) and acidified with dilute nitric acid. A gadolinium carrier is added (50 µg). Tracers and spikes are added as necessary. Then the solution is made strongly basic with ammonium hydroxide, which causes the gadolinium carrier to precipitate as hydrous Gd(OH)3. The precipitate of Gd(OH)3 carries all of the actinide elements. The suspension of gadolinium hydroxide is then passed through a membrane filter to make a counting mount suitable for direct alpha spectrometry. The high-activity 99Mo and 99mTc pass through the membrane filter and are separated from the alpha emitters. The gadolinium hydroxide, carrying any trace actinide elements that might be present in the sample, forms a thin, uniform cake on the surface of the membrane filter. The filter cake is first washed with dilute ammonium hydroxide to push the last traces of molybdate through, then with water. The filter is then mounted on a stainless steel counting disk. Finally, the alpha emitting actinide elements are measured by alpha spectrometry.

1. Procedure and equipment for continuous manufacture of solutions

International Nuclear Information System (INIS)

Stiefel, M.

1979-01-01

In order to manufacture boric acid solution for reactor commissioning, the heated water is divided into a main and a subsidiary flow, and the total amount of the salt is added to the subsidiary flow. Mixing of the main flow with the salt-containing subsidiary flow takes place in a mixing column. Undissolved salt is removed in a hydrocyclone. Preheating of the water takes place in a recuperative heat exchanger, and a once-through boiler provides the final temperature. (HK) [de

2. Multiscale empirical interpolation for solving nonlinear PDEs

KAUST Repository

Calo, Victor M.

2014-12-01

In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
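The offline/online split described above can be sketched in a few lines of NumPy. The code below is a generic discrete empirical interpolation (DEIM-style) routine, not the authors' GMsFEM implementation; the snapshot family and the function names are illustrative assumptions.

```python
import numpy as np

def deim(U):
    """Greedy selection of interpolation indices for a POD basis U (n x m)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # residual of the next basis vector w.r.t. interpolation at idx
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

def deim_approx(f_at_idx, U, idx):
    """Reconstruct f on the whole grid from its values at the DEIM points."""
    return U @ np.linalg.solve(U[idx], f_at_idx)

# Offline: snapshots of a parameterised nonlinear function on a fine grid
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack([np.exp(-mu * x) * np.sin(5 * x)
                             for mu in np.linspace(0.5, 3.0, 30)])
U = np.linalg.svd(snapshots, full_matrices=False)[0][:, :6]  # 6 POD modes
idx = deim(U)                              # 6 sampling points

# Online: an unseen parameter needs only 6 point evaluations, not 200
f = np.exp(-1.7 * x) * np.sin(5 * x)
f_hat = deim_approx(f[idx], U, idx)
print(np.max(np.abs(f - f_hat)))           # small reconstruction error
```

This mirrors the cost structure claimed in the abstract: the expensive linear algebra happens once offline, and each online evaluation touches only as many fine-grid points as there are basis functions.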

3. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

Directory of Open Access Journals (Sweden)

Hezerul Abdul Karim

2004-09-01

Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the paths of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno, 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.
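The contrast between simple averaging and motion-compensated interpolation can be illustrated with a minimal sketch. The block-matching scheme below is a generic baseline, not one of the paper's algorithms; block size, search range, and the toy frames are assumptions.

```python
import numpy as np

def block_match(prev, cur, bs=8, search=4):
    """Exhaustive block matching: one motion vector per bs x bs block,
    pointing from the current frame back into the previous one."""
    H, W = prev.shape
    mv = np.zeros((H // bs, W // bs, 2), dtype=int)
    for by in range(H // bs):
        for bx in range(W // bs):
            y, x = by * bs, bx * bs
            block = cur[y:y + bs, x:x + bs]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - bs and 0 <= xx <= W - bs:
                        sad = np.abs(block - prev[yy:yy + bs, xx:xx + bs]).sum()
                        if sad < best_sad:
                            best, best_sad = (dy, dx), sad
            mv[by, bx] = best
    return mv

def interpolate_frame(prev, cur, mv, bs=8):
    """Motion-compensated middle frame: each block is placed halfway
    along its motion vector; plain averaging is the fallback."""
    mid = (prev + cur) / 2.0
    H, W = prev.shape
    for by in range(mv.shape[0]):
        for bx in range(mv.shape[1]):
            y, x = by * bs, bx * bs
            dy, dx = mv[by, bx]
            yy, xx = y + dy // 2, x + dx // 2
            if 0 <= yy <= H - bs and 0 <= xx <= W - bs:
                mid[yy:yy + bs, xx:xx + bs] = (
                    cur[y:y + bs, x:x + bs]
                    + prev[y + dy:y + dy + bs, x + dx:x + dx + bs]) / 2.0
    return mid

# A bright square moving 4 pixels to the right between two 32x32 frames
prev = np.zeros((32, 32)); prev[8:16, 4:12] = 1.0
cur = np.zeros((32, 32)); cur[8:16, 8:16] = 1.0
mv = block_match(prev, cur)
mid = interpolate_frame(prev, cur, mv)
print(tuple(mv[1, 1]), mid[10, 10])  # motion found; square at the halfway position
```

Plain averaging would leave two half-intensity ghosts at the old and new positions; motion compensation places one full-intensity square halfway along the path, which is exactly the property the multiresolution estimators in the paper are designed to preserve at low frame rates.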

4. Time-interpolator

International Nuclear Information System (INIS)

Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

1990-01-01

This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It concerns a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty which accompanies the use of ECL logic is keeping the mutual connections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into a start and a stop signal. The analog part consists of a Time to Amplitude Converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs
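The measuring principle, a coarse digital clock count plus analog interpolation of the remainder, can be illustrated numerically. The 10 ns clock period below is an assumed value; the 100 ps step corresponds to the stated resolution.

```python
# Principle of a clock-plus-TAC interpolating time meter. The 10 ns
# coarse clock period is an assumption; the 100 ps step corresponds
# to the resolution stated in the report.
CLOCK_PERIOD = 10e-9   # coarse counter counts whole clock periods
ADC_STEP = 100e-12     # the TAC remainder is digitised in 100 ps steps

def measure(true_interval):
    coarse = int(true_interval // CLOCK_PERIOD)         # digital part
    remainder = true_interval - coarse * CLOCK_PERIOD   # what the TAC sees
    fine = round(remainder / ADC_STEP) * ADC_STEP       # digitised fraction
    return coarse * CLOCK_PERIOD + fine

t = 1234.56789e-9              # a 1.23456789 µs interval
print(abs(measure(t) - t))     # error below half an ADC step (50 ps)
```

The counter sets the range (here microseconds) while the TAC sets the resolution, which is why only the interpolation part is critical in the design.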

5. Interpolative Boolean Networks

Directory of Open Access Journals (Sweden)

2017-01-01

Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary, and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by introducing gradation into this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame, and consequently the validity of the model’s dynamics is not secured. The aim of this paper is to present the Boolean-consistent generalization of Boolean networks: interpolative Boolean networks (IBNs). The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of the input variables and offers greater descriptive power than traditional models. For illustrative purposes, the IBN is compared to models based on existing real-valued approaches. Due to the complexity of most systems to be analyzed and the characteristics of interpolative Boolean algebra, software support is developed to provide graphical and numerical tools for complex system modeling and analysis.
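The [0,1]-valued valuation that "secures the Boolean frame" can be sketched generically: transform the Boolean function into its atomic (min-term) form first, then valuate the atoms, here with the ordinary product as the generalized conjunction and negation as 1-x. This is an illustrative reading of the interpolative Boolean algebra principle, not the authors' software.

```python
import itertools

def iba_value(truth_table, values):
    """Valuate a Boolean function of independent variables in an
    interpolative-Boolean-algebra style: sum over the atoms (min-terms)
    the function contains, with product as generalized conjunction."""
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(values)):
        if truth_table[bits]:
            atom = 1.0
            for b, v in zip(bits, values):
                atom *= v if b else (1.0 - v)
            total += atom
    return total

# XOR with gradational inputs:
xor = {(0, 0): False, (0, 1): True, (1, 0): True, (1, 1): False}
print(iba_value(xor, (0.3, 0.6)))   # = 0.3*0.4 + 0.7*0.6 ≈ 0.54

# The Boolean frame is preserved: x OR NOT x is identically 1 here,
# whereas conventional max/min fuzzy logic gives max(0.3, 0.7) = 0.7.
taut = {(0,): True, (1,): True}
print(iba_value(taut, (0.3,)))      # ≈ 1 for any input in [0,1]
```

The tautology example shows the point the abstract makes against conventional fuzzy logics: Boolean laws such as excluded middle survive the gradation.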

6. Finite element analysis of rotating beams physics based interpolation

CERN Document Server

Ganguli, Ranjan

2017-01-01

This book addresses the solution of rotating beam free-vibration problems using the finite element method. It provides an introduction to the governing equation of a rotating beam, before outlining the solution procedures using Rayleigh-Ritz, Galerkin and finite element methods. The possibility of improving the convergence of finite element methods through a judicious selection of interpolation functions, which are closer to the problem physics, is also addressed. The book offers a valuable guide for students and researchers working on rotating beam problems – important engineering structures used in helicopter rotors, wind turbines, gas turbines, steam turbines and propellers – and their applications. It can also be used as a textbook for specialized graduate and professional courses on advanced applications of finite element analysis.

7. Interpolating string field theories

International Nuclear Information System (INIS)

Zwiebach, B.

1992-01-01

This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles

8. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

Science.gov (United States)

We model the ‘Naiji System’, which is a unique cooperation technique between a manufacturer and its suppliers in Japan. We propose a two stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventories for different periods are not independent, we propose a two stage solution procedure whose stages are named Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS), the latter based on meta-heuristics. It is shown that the proposed solution procedure obtains a near optimal solution efficiently and is practical for making a good master production schedule at the suppliers.

9. Image Interpolation with Contour Stencils

OpenAIRE

Pascal Getreuer

2011-01-01

Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.

10. Quasi interpolation with Voronoi splines.

Science.gov (United States)

Mirzargar, Mahsa; Entezari, Alireza

2011-12-01

We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

11. Pixel Interpolation Methods

OpenAIRE

Mintěl, Tomáš

2009-01-01

This master's thesis deals with the acceleration of pixel interpolation methods using the GPU and the NVIDIA® CUDA™ architecture. The graphical output is represented by a demonstration application for the transformation of an image or video using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from the OpenCV library, by Intel, are used for image and video processing.

12. A computational procedure for finding multiple solutions of convective heat transfer equations

International Nuclear Information System (INIS)

Mishra, S; DebRoy, T

2005-01-01

In recent years numerical solutions of the convective heat transfer equations have provided significant insight into complex materials processing operations. However, these computational methods suffer from two major shortcomings. First, these procedures are designed to calculate temperature fields and cooling rates as output, and the unidirectional structure of these solutions precludes specification of these variables as input even when their desired values are known. Second, and more importantly, these procedures cannot determine multiple pathways or multiple sets of input variables to achieve a particular output from the convective heat transfer equations. Here we propose a new method that overcomes the aforementioned shortcomings of the commonly used solutions of the convective heat transfer equations. The procedure combines the conventional numerical solution methods with a real-number-based genetic algorithm (GA) to achieve bi-directionality, i.e. the ability to calculate the required input variables to achieve a specific output such as a temperature field or cooling rate. More importantly, the ability of the GA to find a population of solutions enables this procedure to search for and find multiple sets of input variables, all of which can lead to the desired specific output. The proposed computational procedure has been applied to convective heat transfer in a liquid layer locally heated on its free surface by an electric arc, where various sets of input variables are computed to achieve a specific fusion zone geometry defined by an equilibrium temperature. Good agreement is achieved between the model predictions and the independent experimental results, indicating significant promise for the application of this procedure in finding multiple solutions of convective heat transfer equations.
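The bi-directional idea, using a GA to collect many input sets that reproduce one target output, can be sketched with a toy process model. Everything below (the stand-in model `weld_width`, the GA parameters) is an illustrative assumption, not the authors' heat-transfer code.

```python
import random

random.seed(1)

def weld_width(power, speed):
    """Toy stand-in for the convective heat-transfer model: many
    different (power, speed) inputs yield the same fusion-zone width."""
    return power / (1.0 + speed)

TARGET = 5.0

def error(ind):
    return abs(weld_width(*ind) - TARGET)

def search(pop_size=60, generations=40, tol=0.05):
    pop = [(random.uniform(1.0, 20.0), random.uniform(0.1, 3.0))
           for _ in range(pop_size)]
    archive = []                      # every near-target input set found
    for _ in range(generations):
        pop.sort(key=error)
        archive += [ind for ind in pop if error(ind) < tol]
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            w = random.random()                           # blend crossover
            child = [w * p + (1 - w) * q for p, q in zip(a, b)]
            children.append(tuple(g + random.gauss(0.0, 0.05) for g in child))
        pop = parents + children
    return archive

solutions = search()
speeds = sorted(s for _, s in solutions)
print(len(solutions), round(speeds[0], 2), round(speeds[-1], 2))
```

Because fitness is equal everywhere on the curve of valid inputs, the archive accumulates many distinct (power, speed) pairs that all hit the target width, which is the "population of solutions" property the abstract exploits.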

13. Fuzzy linguistic model for interpolation

International Nuclear Information System (INIS)

2007-01-01

In this paper, a fuzzy method for the interpolation of smooth curves is presented. We present a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method.
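A minimal sketch of the idea, using a normalized Gaussian rule base as a stand-in for the paper's fuzzy linguistic model (the membership width and the data are assumptions):

```python
import math

def fuzzy_interpolate(xs, ys, x, width=0.5):
    """Each data point (xi, yi) becomes one fuzzy rule with a Gaussian
    membership centred at xi; the output is the weighted mean of the
    rule consequents (a zero-order Takagi-Sugeno system)."""
    weights = [math.exp(-((x - xi) / width) ** 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 0.0, -1.0]
print(fuzzy_interpolate(xs, ys, 1.0, width=0.2))  # ≈ 1.0, the data value
print(fuzzy_interpolate(xs, ys, 1.5, width=0.2))  # a smooth in-between value
```

With narrow memberships the rule base reproduces the data points; widening them trades fidelity at the nodes for smoothness between them, which is the knob a spline method does not expose in this form.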

14. A disposition of interpolation techniques

NARCIS (Netherlands)

2010-01-01

A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

15. Contrast-guided image interpolation.

Science.gov (United States)

Wei, Zhe; Ma, Kai-Kuang

2013-11-01

In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

16. The Convergence Acceleration of Two-Dimensional Fourier Interpolation

Directory of Open Access Journals (Sweden)

Anry Nersessian

2008-07-01

Full Text Available Here, the convergence acceleration of two-dimensional trigonometric interpolation of smooth functions on a uniform mesh is considered. Together with theoretical estimates, some numerical results are presented and discussed that reveal the potential of this method for application in image processing. Experiments show that the suggested algorithm allows acceleration of conventional Fourier interpolation even for sparse meshes, which can lead to efficient image compression/decompression algorithms and also to applications in image zooming procedures.
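In one dimension, the conventional trigonometric (Fourier) interpolation that the paper accelerates amounts to zero-padding the centred spectrum; the 2-D case pads both axes. A minimal sketch (the test signal is an assumption):

```python
import numpy as np

def fourier_interpolate(samples, factor):
    """Trigonometric interpolation of a periodic signal by zero-padding
    its centred FFT spectrum (1-D; the 2-D case pads both axes)."""
    n = len(samples)
    spectrum = np.fft.fftshift(np.fft.fft(samples))
    pad = (factor - 1) * n // 2
    padded = np.pad(spectrum, (pad, pad))
    return np.fft.ifft(np.fft.ifftshift(padded)).real * factor

n = 16
t = np.arange(n) / n
coarse = np.sin(2 * np.pi * 3 * t)      # band-limited test signal
fine = fourier_interpolate(coarse, 4)   # 64 samples over the same period
t_fine = np.arange(4 * n) / (4 * n)
print(np.max(np.abs(fine - np.sin(2 * np.pi * 3 * t_fine))))  # near machine precision
```

For band-limited data this is exact; for merely smooth, non-periodic data it converges slowly (Gibbs-type errors), which is precisely the behaviour the convergence-acceleration technique targets.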

17. Interpolation for de-Dopplerisation

Science.gov (United States)

Graham, W. R.

2018-05-01

'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
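The pre-filtering point can be demonstrated with SciPy's spline machinery (an illustration of the general principle, not the paper's interpolators): applying the cubic B-spline kernel directly to the samples merely smooths them, while prefiltering first makes the scheme truly interpolating.

```python
import numpy as np
from scipy.ndimage import map_coordinates

x = np.arange(20, dtype=float)
samples = np.sin(0.5 * x)
interior = x[3:-3]                  # stay clear of boundary handling

# Cubic B-spline kernel applied directly: the sample values are NOT
# reproduced (the kernel alone is a smoother, not an interpolator).
direct = map_coordinates(samples, [interior], order=3, prefilter=False)
# With the prefilter (the extra implementation cost noted above), the
# original samples are reproduced exactly.
filtered = map_coordinates(samples, [interior], order=3, prefilter=True)

print(np.max(np.abs(direct - samples[3:-3])))    # clearly nonzero (~0.04)
print(np.max(np.abs(filtered - samples[3:-3])))  # essentially zero
```

This is the trade-off the abstract describes: the B-spline family needs an IIR prefiltering pass, but after it the kernel's excellent spectral behaviour is available without sacrificing the interpolation property.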

18. Occlusion-Aware View Interpolation

Directory of Open Access Journals (Sweden)

2009-01-01

Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

19. Occlusion-Aware View Interpolation

Directory of Open Access Journals (Sweden)

Ince Serdar

2008-01-01

Full Text Available Abstract View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

20. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

Science.gov (United States)

Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

2016-04-01

Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decomposed the diffusion tensors, with the direction of each tensor represented by a quaternion. Then we revised the size and direction of the tensor separately according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserve the tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
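The Log-Euclidean baseline that the improved method is compared against can be sketched directly; the example tensors are assumptions. Its characteristic property is that the determinant varies monotonically instead of "swelling" as in component-wise linear interpolation.

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_interp(A, B, t):
    """Log-Euclidean interpolation between SPD diffusion tensors:
    exp((1-t) log A + t log B)."""
    return expm((1.0 - t) * logm(A) + t * logm(B))

A = np.diag([2.0, 0.5, 0.5])   # prolate tensor, principal axis along x
B = np.diag([0.5, 2.0, 0.5])   # prolate tensor, principal axis along y
mid_le = log_euclidean_interp(A, B, 0.5)
mid_lin = 0.5 * (A + B)        # component-wise linear interpolation

# Log-Euclidean keeps det = sqrt(det A * det B) = 0.5, while linear
# interpolation inflates it to 0.78125 (tensor swelling).
print(np.linalg.det(mid_le), np.linalg.det(mid_lin))
```

The spectral quaternion methods discussed in the abstract decompose the tensor into eigenvalues and a rotation (quaternion) instead, so orientation and size can be revised separately, which is what the proposed improvement refines.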

1. Interpolation for completely positive maps: Numerical solutions

Czech Academy of Sciences Publication Activity Database

Ambrozie, Calin-Grigore; Gheondea, A.

2018-01-01

Vol. 61, No. 1 (2018), pp. 13-22 ISSN 1220-3874 Institutional support: RVO:67985840 Keywords: Choi matrix * completely positive * convex minimization Subject RIV: BA - General Mathematics OECD field: Pure mathematics Impact factor: 0.362, year: 2016 http://ssmr.ro/bulletin/volumes/61-1/node3.html

2. Combined LAURA-UPS solution procedure for chemically-reacting flows. M.S. Thesis

Science.gov (United States)

Wood, William A.

1994-01-01

A new procedure seeks to combine the thin-layer Navier-Stokes solver LAURA with the parabolized Navier-Stokes solver UPS for the aerothermodynamic solution of chemically-reacting air flowfields. The interface protocol is presented and the method is applied to two slender, blunted shapes. Both axisymmetric and three-dimensional solutions are included, with surface pressure and heat transfer comparisons between the present method and previously published results. The case of Mach 25 flow over an axisymmetric six-degree sphere-cone with a noncatalytic wall is considered to 100 nose radii. A stability bound on the marching step size was observed with this case and is attributed to chemistry effects resulting from the noncatalytic wall boundary condition. A second case with Mach 28 flow over a sphere-cone-cylinder-flare configuration is computed at both two- and five-degree angles of attack with a fully-catalytic wall. Surface pressures are seen to be within five percent with the present method compared to the baseline LAURA solution, and heat transfers are within 10 percent. The effect of grid resolution is investigated and the nonequilibrium results are compared with a perfect gas solution, showing that while the surface pressure is relatively unchanged by the inclusion of reacting chemistry, the nonequilibrium heating is 25 percent higher. The procedure demonstrates significant, order-of-magnitude reductions in solution time and required memory for the three-dimensional case over an all thin-layer Navier-Stokes solution.

3. The research on NURBS adaptive interpolation technology

Science.gov (United States)

Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

2017-04-01

NURBS adaptive interpolation technology faces several problems: interpolation times are long, the calculations are complicated, and the step error along a NURBS curve is not easy to control. This paper proposes an adaptive interpolation algorithm for NURBS curves and studies it in simulation; the interpolator computes the successive interpolation points (xi, yi, zi). The simulation results show that the algorithm is correct and that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
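
The interpolator's internals are not given in the abstract, so the following is only a generic sketch of the adaptive-step idea it describes: shrink the parameter step until a chord-error estimate meets the tolerance, then relax it again. A cubic Bézier segment stands in for a NURBS span; all names, control points and tolerances are illustrative.

```python
import numpy as np

# Cubic Bezier segment as a stand-in for one NURBS span (illustrative)
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])

def curve(u):
    b = np.array([(1 - u)**3, 3*u*(1 - u)**2, 3*u**2*(1 - u), u**3])
    return b @ P

def adaptive_points(tol=1e-3, du0=0.1):
    """March along the parameter, halving the step until the mid-chord
    deviation (a simple chord-error estimate) drops below tol."""
    u, du = 0.0, du0
    pts = [curve(0.0)]
    while u < 1.0 - 1e-9:
        du = min(du, 1.0 - u)
        while True:
            err = np.linalg.norm(curve(u + du/2) - 0.5*(curve(u) + curve(u + du)))
            if err <= tol or du < 1e-6:
                break
            du *= 0.5            # step too coarse for the tolerance: refine
        u += du
        pts.append(curve(u))
        du *= 2.0                # try relaxing the step for the next span
    return np.array(pts)

pts = adaptive_points(tol=1e-3)
print(len(pts))  # number of interpolation points meeting the tolerance
```

The step-size oscillation (halve on failure, double after success) is the simplest adaptive policy; production interpolators instead bound the chord error from the curvature and feedrate.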

4. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

Directory of Open Access Journals (Sweden)

G. Garnero

2014-01-01

In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, together with test sections realized at higher resolutions and independent digital models of the same territory, makes it possible to set up a series of analyses and determine the best interpolation methodologies. The analysis of the residuals on the test sites allows the descriptive statistics of the computed values to be calculated: all the algorithms furnished interesting results; notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gives the best results in this case study. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density below which the quality of the final output of the interpolation phase degrades.
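
As a concrete reference for what the comparison covers, a minimal IDW (Inverse Distance Weighting) interpolator can be sketched as follows; the sample data are invented for illustration, not taken from the Piemonte model.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weight each sample by 1/d^power."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, dtype=float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a sample
            out[i] = z_known[d.argmin()]
        else:
            w = 1.0 / d**power
            out[i] = np.sum(w * z_known) / np.sum(w)
    return out

# Toy DEM-like data: elevation samples at the corners of a unit square
pts_idw = [(0, 0), (1, 0), (0, 1), (1, 1)]
elev = [10.0, 20.0, 30.0, 40.0]
print(idw_interpolate(pts_idw, elev, [(0.5, 0.5)]))  # symmetric point -> mean, 25.0
```

IDW is an exact interpolator (it reproduces the samples) and its output is always bounded by the sample extremes, which is one reason it behaves well on dense elevation models.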

5. Interpolation in Spaces of Functions

Directory of Open Access Journals (Sweden)

K. Mosaleheh

2006-03-01

Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces.

6. Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data

International Nuclear Information System (INIS)

Casa, L D C; Krueger, P S

2013-01-01

Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
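
The essence of the approach, a least-squares fit of Gaussian RBF weights to scattered data using fewer centers than samples, can be sketched as follows. The test field, center layout and width below are illustrative and are not the paper's rotating-plate setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scattered 2-D sample points of a smooth test field
samples = rng.uniform(-1, 1, size=(200, 2))
field = lambda p: np.sin(np.pi * p[:, 0]) * np.cos(np.pi * p[:, 1])
values = field(samples)

# Fewer Gaussian RBF centers, placed on a coarse regular grid
gx, gy = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
centers = np.column_stack([gx.ravel(), gy.ravel()])
sigma = 0.4  # RBF width

def design(points, centers, sigma):
    """Matrix of Gaussian RBF values, one row per point, one column per center."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Least-squares fit of the RBF weights to the scattered data
A = design(samples, centers, sigma)
w, *_ = np.linalg.lstsq(A, values, rcond=None)

# Evaluate the interpolant at held-out points
test_pts = rng.uniform(-0.8, 0.8, size=(50, 2))
err = design(test_pts, centers, sigma) @ w - field(test_pts)
print(np.max(np.abs(err)))  # small for this smooth field
```

The paper additionally folds boundary conditions and continuity constraints into the least-squares objective; the sketch above fits the data term only.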

7. Trace interpolation by slant-stack migration

International Nuclear Information System (INIS)

Novotny, M.

1990-01-01

The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip reject filtration. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period.

8. A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation

Directory of Open Access Journals (Sweden)

Mingzhu Li

2014-01-01

Full Text Available The main aim of this work is to consider a meshfree algorithm for solving Burgers’ equation with the quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations and overcome the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low order forward difference to approximate the time derivative of the dependent variable. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement and the numerical experiments show that it is feasible and valid.

9. Updating QR factorization procedure for solution of linear least squares problem with equality constraints.

Science.gov (United States)

2017-01-01

In this article, we present a QR updating procedure as a solution approach for linear least squares problem with equality constraints. We reduce the constrained problem to unconstrained linear least squares and partition it into a small subproblem. The QR factorization of the subproblem is calculated and then we apply updating techniques to its upper triangular factor R to obtain its solution. We carry out the error analysis of the proposed algorithm to show that it is backward stable. We also illustrate the implementation and accuracy of the proposed algorithm by providing some numerical experiments with particular emphasis on dense problems.

10. Parallel iterative procedures for approximate solutions of wave propagation by finite element and finite difference methods

Energy Technology Data Exchange (ETDEWEB)

Kim, S. [Purdue Univ., West Lafayette, IN (United States)

1994-12-31

Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods, in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated and the convergence of the algorithm is established. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.

11. Vitrification of human ovarian tissue: effect of different solutions and procedures.

Science.gov (United States)

Amorim, Christiani Andrade; David, Anu; Van Langendonckt, Anne; Dolmans, Marie-Madeleine; Donnez, Jacques

2011-03-01

To test the effect of different vitrification solutions and procedures on the morphology of human preantral follicles. Pilot study. Gynecology research unit in a university hospital. Ovarian biopsies were obtained from nine women aged 22-35 years. Ovarian tissue fragments were subjected to [1] different vitrification solutions to test their toxicity or [2] different vitrification methods using plastic straws, medium droplets, or solid-surface vitrification before in vitro culture. Number of morphologically normal follicles after toxicity testing or vitrification with the different treatments determined by histologic analysis. In the toxicity tests, only VS3 showed similar results to fresh tissue before and after in vitro culture (fresh controls 1 and 2). In addition, this was the only solution able to completely vitrify. In all vitrification procedures, the percentage of normal follicles was lower than in controls. However, of the three protocols, the droplet method yielded a significantly higher proportion of normal follicles. Our experiments showed VS3 to have no deleterious effect on follicular morphology and to be able to completely vitrify, although vitrification procedures were found to affect human follicles. Nevertheless, the droplet method resulted in a higher percentage of morphologically normal follicles. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

12. Survey: interpolation methods for whole slide image processing.

Science.gov (United States)

Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

2017-02-01

Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

13. Surface interpolation with radial basis functions for medical imaging

International Nuclear Information System (INIS)

Carr, J.C.; Beatson, R.K.; Fright, W.R.

1997-01-01

Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill.

14. Interpolation method for the transport theory and its application in fusion-neutronics analysis

International Nuclear Information System (INIS)

Jung, J.

1981-09-01

This report presents an interpolation method for the solution of the Boltzmann transport equation. The method is based on a flux synthesis technique using two reference-point solutions. The equation for the interpolated solution results in a Volterra integral equation which is proved to have a unique solution. As an application of the present method, the tritium breeding ratio is calculated for a typical D-T fusion reactor system. The result is compared to that of a variational technique.

15. Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter

Directory of Open Access Journals (Sweden)

Edson A. Mitishita

2005-12-01

Full Text Available Aerial triangulation is a fundamental step in any photogrammetric project. The surveying of traditional control points, depending on the region to be mapped, still has a high cost, and the distribution of control points in the block, together with their positional quality, directly influences the resulting precision of the aerotriangulation processing. The airborne GPS technique has as key objectives cost reduction and quality improvement of the ground control in modern photogrammetric projects. Nowadays, in Brazil, the largest photogrammetric companies are acquiring airborne GPS systems, but those systems usually present operational difficulties due to the need for highly skilled human resources, because of the high technology involved. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the photo shot instant. Traditionally, low-degree polynomials are used, but recent studies show that the accuracy of those polynomials is reduced in turbulent flights, which are quite common, mainly in large-scale flights. This paper presents a solution to that problem through an algorithm based on the Kalman filter, which takes into account the dynamic aspect of the problem. At the end of the paper, the results of a comparison between experiments done with the proposed methodology and a common linear interpolator are shown. These results show a significant accuracy gain over linear interpolation when the Kalman filter is used.
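
A minimal sketch of the idea, assuming a constant-velocity state model for one coordinate of the antenna phase center: filter the discrete GPS fixes, then propagate the filtered state to the photo shot instant. The noise levels, timings and trajectory below are invented, not the paper's flight data.

```python
import numpy as np

def kalman_cv(times, z, q=1e-3, r=4e-4):
    """Constant-velocity Kalman filter over GPS epochs (one coordinate).
    Returns the filtered state history: (time, [position, velocity], P)."""
    x = np.array([z[0], 0.0])          # state: position, velocity
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])         # we observe position only
    hist = [(times[0], x.copy(), P.copy())]
    for k in range(1, len(times)):
        dt = times[k] - times[k-1]
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3/3, dt**2/2], [dt**2/2, dt]])
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r            # update with the GPS fix
        K = P @ H.T / S
        x = x + (K * (z[k] - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        hist.append((times[k], x.copy(), P.copy()))
    return hist

def predict_at(hist, t):
    """Propagate the last filtered state before t to the photo instant t."""
    tk, x, P = max((h for h in hist if h[0] <= t), key=lambda h: h[0])
    dt = t - tk
    return (np.array([[1.0, dt], [0.0, 1.0]]) @ x)[0]

# GPS fixes at 1 Hz along noisy uniform motion; photo taken at t = 2.35 s
t_gps = np.arange(0.0, 6.0, 1.0)
z = 5.0 * t_gps + 0.01 * np.random.default_rng(2).standard_normal(len(t_gps))
hist = kalman_cv(t_gps, z)
print(predict_at(hist, 2.35))  # close to the true position 11.75
```

Unlike a low-degree polynomial fitted through neighbouring fixes, the filter carries a velocity estimate and a covariance, so its between-epoch prediction degrades gracefully when the trajectory is rough.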

16. Survey of a numerical procedure for the solution of hyperbolic systems of three dimensional fluid flow

International Nuclear Information System (INIS)

Graf, U.

1986-01-01

A combination of several numerical methods is used to construct a procedure for effective calculation of complex three-dimensional fluid flow problems. The split coefficient matrix (SCM) method is used so that the differenced equations of the hyperbolic system do not disturb correct signal propagation. The semi-discretisation of the equations of the SCM method is done with the asymmetric, separated region, weighted residual (ASWR) method to give accurate solutions on a relatively coarse mesh. For the resulting system of ordinary differential equations, a general-purpose ordinary differential equation solver is used in conjunction with a method of fractional steps for an economic solution of the large system of linear equations. (orig.)

17. A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...

African Journals Online (AJOL)

Interpolation procedures tailored for gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable case with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...

18. Spatiotemporal Interpolation Methods for Solar Event Trajectories

Science.gov (United States)

Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

2018-05-01

This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
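
Of the four techniques, MBR-Interpolation is the simplest to illustrate: linearly interpolate the corners of the minimum bounding rectangles of two reported detections at the query time. The coordinates and timestamps below are invented for illustration.

```python
from datetime import datetime

def interpolate_mbr(t0, mbr0, t1, mbr1, t):
    """MBR-Interpolation sketch: linearly interpolate two minimum bounding
    rectangles given as (x_min, y_min, x_max, y_max) tuples at time t."""
    w = (t - t0) / (t1 - t0)           # fractional position in [0, 1]
    return tuple((1 - w) * a + w * b for a, b in zip(mbr0, mbr1))

# A solar-event region drifting across the disk between two reports
t0 = datetime(2014, 1, 1, 12, 0)
t1 = datetime(2014, 1, 1, 13, 0)
mbr_1200 = (100.0, 50.0, 140.0, 80.0)
mbr_1300 = (120.0, 55.0, 160.0, 85.0)
print(interpolate_mbr(t0, mbr_1200, t1, mbr_1300, datetime(2014, 1, 1, 12, 30)))
# -> (110.0, 52.5, 150.0, 82.5)
```

The paper's CP- and FI-Interpolation variants interpolate full polygon boundaries (using shape signatures and dynamic time warping) rather than just the four rectangle corners.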

19. Solution procedure and performance evaluation for a water–LiBr absorption refrigeration machine

International Nuclear Information System (INIS)

Wonchala, Jason; Hazledine, Maxwell; Goni Boulama, Kiari

2014-01-01

The water–lithium bromide absorption cooling machine was investigated theoretically in this paper. A detailed solution procedure was proposed and validated. A parametric study was conducted over the entire admissible ranges of the desorber, condenser, absorber and evaporator temperatures. The performance of the machine was evaluated based on the circulation ratio which is a measure of the system size and cost, the first law coefficient of performance and the second law exergy efficiency. The circulation ratio and the coefficient of performance were seen to improve as the temperature of the heat source increased, while the second law performance deteriorated. The same qualitative responses were obtained when the temperature of the refrigerated environment was increased. On the other hand, simultaneously raising the condenser and absorber temperatures was seen to result in a severe deterioration of both the circulation ratio and first law coefficient of performance, while the second law performance indicator improved significantly. The influence of the difference between the condenser and absorber exit temperatures, as well as that of the internal recovery heat exchanger on the different performance indicators was also calculated and discussed. - Highlights: • Analysis of a water–LiBr absorption machine, including detailed solution procedure. • Performance assessed using first and second law considerations, as well as flow ratio. • Effects of heat source and refrigerated environment temperatures on the performance. • Effects of the difference between condenser and absorber temperatures. • Effects of internal heat exchanger efficiency on overall cooling machine performance

20. Generation of nuclear data banks through interpolation

International Nuclear Information System (INIS)

Castillo M, J.A.

1999-01-01

Nuclear data bank generation is a process that requires a great amount of resources, both computing and human. Taking into account that it is sometimes necessary to create a great number of such banks, it is convenient to have a reliable tool that generates them with the fewest resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two approaches were pursued, applying in both cases the finite element method and using one element with 16 nodes to carry out the interpolation. In the first approach, the canonic basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation system; this system was solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the mentioned system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, even though it does not wholly replace the conventional process, is helpful when it is necessary to create a great number of data banks. (Author)
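
A sketch of the first approach: bicubic interpolation on a 16-node element using the canonic (monomial) basis and a direct solve of the resulting 16x16 linear system. The node grids and the nodal values below are an invented stand-in for the real enrichment-dependent nuclear data.

```python
import numpy as np

def bicubic_fit(xs, ys, F):
    """Fit f(x, y) = sum_{i,j<4} c[i,j] x^i y^j through a 4x4 node grid
    (canonic monomial basis; the 16x16 system is solved directly)."""
    A = np.array([[x**i * y**j for i in range(4) for j in range(4)]
                  for x in xs for y in ys])
    c = np.linalg.solve(A, F.ravel())
    return lambda x, y: sum(c[4*i + j] * x**i * y**j
                            for i in range(4) for j in range(4))

xs = np.array([0.0, 1.0, 2.0, 3.0])   # e.g. uranium percentage grid (illustrative)
ys = np.array([0.0, 1.0, 2.0, 3.0])   # e.g. gadolinium percentage grid (illustrative)
g = lambda x, y: 1 + x + y**2 + 0.5 * x**3 * y    # invented smooth response
F = np.array([[g(x, y) for y in ys] for x in xs])

f = bicubic_fit(xs, ys, F)
print(abs(f(1.5, 2.5) - g(1.5, 2.5)) < 1e-8)  # True: exact for bicubic data
```

The code's second approach reorders the same system with a Newton basis so the matrix becomes triangular and, after elementary operations, block-diagonal; the monomial version above is the simplest to state.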

2. An integral conservative gridding-algorithm using Hermitian curve interpolation.

Science.gov (United States)

Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

2008-11-07

The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

4. Size-Dictionary Interpolation for Robot's Adjustment

Directory of Open Access Journals (Sweden)

Morteza eDaneshmand

2015-05-01

Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room, where automatic activation of the chosen mannequin robot is also considered: several mannequin robots of different genders and sizes are simultaneously connected to the same computer, and the selected one instantly mimics the scanned body shape and sizes. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure seeks the set of positions of the biologically inspired actuators that makes the mannequin robot resemble the shape of the scanned body as closely as possible. It linearly maps the distances between subsequent size templates to the corresponding position sets of the bioengineered actuators and then calculates the control measures that maintain the same distance proportions, where minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes determines the mathematical description. In this research work, the experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completion of the whole realistic online fitting package are explained.

5. A Note on Cubic Convolution Interpolation

OpenAIRE

Meijering, E.; Unser, M.

2003-01-01

We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.

6. Node insertion in Coalescence Fractal Interpolation Function

International Nuclear Information System (INIS)

2013-01-01

The Iterated Function System (IFS) used in the construction of a Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point into a given set of interpolation data is called the problem of node insertion. In this paper, the effect of the insertion of a new point on the related IFS and the Coalescence Fractal Interpolation Function is studied. The smoothness and fractal dimension of a CHFIF obtained after node insertion are also discussed.
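
The CHFIF construction itself is not spelled out in the abstract. The simpler, non-hidden-variable affine fractal interpolation function already shows how the IFS is determined by the interpolation data, which is why inserting a node changes the IFS. The chaos game below renders its attractor; the data points and vertical scaling factors are illustrative.

```python
import random

def fif_chaos_game(xs, ys, d, n_iter=20000, seed=0):
    """Graph of an affine fractal interpolation function via the chaos game
    on its IFS; d[i] are the vertical scaling factors, |d[i]| < 1."""
    N = len(xs) - 1
    span_x, span_y = xs[N] - xs[0], ys[N] - ys[0]
    maps = []
    for i in range(1, N + 1):
        # w_i maps the whole graph onto the piece over [x_{i-1}, x_i]
        a = (xs[i] - xs[i-1]) / span_x
        e = xs[i-1] - a * xs[0]
        c = (ys[i] - ys[i-1] - d[i-1] * span_y) / span_x
        f = ys[i-1] - c * xs[0] - d[i-1] * ys[0]
        maps.append((a, e, c, d[i-1], f))
    rng = random.Random(seed)
    x, y = xs[0], ys[0]
    pts = []
    for _ in range(n_iter):
        a, e, c, dd, f = rng.choice(maps)      # pick a random map ...
        x, y = a * x + e, c * x + dd * y + f   # ... and apply it
        pts.append((x, y))
    return pts

fif_pts = fif_chaos_game([0.0, 0.4, 1.0], [0.0, 0.5, 0.2], d=[0.3, 0.3])
print(all(0.0 <= x <= 1.0 for x, _ in fif_pts))  # True: attractor stays in [x0, xN]
```

Every coefficient above is computed from the data points, so adding a node replaces one affine map by two and thereby alters the attractor, which is exactly the node-insertion effect the paper studies (for the richer hidden-variable case).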

7. Bayer Demosaicking with Polynomial Interpolation.

Science.gov (United States)

Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

2016-08-30

Demosaicking is a digital image process to reconstruct full color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g. mobile phones, tablets, etc.). In this paper, we introduce a new demosaicking algorithm, polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
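
The PID predictors are not specified in the abstract. As a sketch of the baseline such methods improve on, plain bilinear demosaicking of an RGGB mosaic can be written with the classic averaging kernels; circular boundary handling via `np.roll` keeps the sketch short.

```python
import numpy as np

def bilinear_demosaic(raw):
    """Baseline bilinear demosaicking of an RGGB Bayer mosaic (H x W):
    each missing color sample is the average of its known neighbors."""
    H, W = raw.shape
    masks = np.zeros((3, H, W), dtype=bool)
    masks[0, 0::2, 0::2] = True            # R sites
    masks[1, 0::2, 1::2] = True            # G sites on red rows
    masks[1, 1::2, 0::2] = True            # G sites on blue rows
    masks[2, 1::2, 1::2] = True            # B sites
    # Classic bilinear kernels; the /4 matches each channel's site density
    kern_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    kern_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    out = np.zeros((H, W, 3))
    for c, k in zip(range(3), (kern_rb, kern_g, kern_rb)):
        plane = np.where(masks[c], raw, 0.0)   # known samples, zeros elsewhere
        acc = np.zeros_like(plane)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if k[dy + 1, dx + 1]:
                    acc += k[dy + 1, dx + 1] * np.roll(plane, (dy, dx), (0, 1))
        out[:, :, c] = acc
    return out

img = bilinear_demosaic(np.full((8, 8), 0.5))
print(np.allclose(img, 0.5))  # True: a flat mosaic reconstructs flat color
```

Bilinear interpolation blurs across edges, producing the zipper and false-color artifacts that edge-classifying methods such as PID are designed to avoid.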

8. A procedure to create isoconcentration surfaces in low-chemical-partitioning, high-solute alloys

International Nuclear Information System (INIS)

Hornbuckle, B.C.; Kapoor, M.; Thompson, G.B.

2015-01-01

A proximity histogram or proxigram is the prevailing technique for calculating 3D composition profiles of a second phase in atom probe tomography. The second phase in the reconstruction is delineated by creating an isoconcentration surface, i.e. the precipitate–matrix interface. The 3D composition profile is then calculated with respect to this user-defined isoconcentration surface. Hence, the selection of the correct isoconcentration surface is critical. In general, the preliminary selection of an isoconcentration value is guided by the visual observation of a chemically partitioned second phase. However, in low-chemical-partitioning systems, such a visual guide is absent. The lack of a priori composition information on the precipitate phase may further confound the issue. This paper presents a methodology for selecting an appropriate elemental species and subsequently obtaining an isoconcentration value to create an accurate isoconcentration surface that will act as the precipitate–matrix interface. We use the H-phase precipitate in the Ni–Ti–Hf shape memory alloy as our case study to illustrate the procedure. - Highlights: • A procedure for creating accurate isoconcentration surfaces for low-chemical-partitioning, high-solute alloys. • Determine the appropriate element to create the isoconcentration surface. • Subsequently identify the accurate isoconcentration value to create an isoconcentration surface.

9. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

Directory of Open Access Journals (Sweden)

Shutang Zhu

2008-01-01

Full Text Available The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model, involving terms describing convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. Such a coupled model has proved very difficult to solve either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a locally one-dimensional scheme. The proposed solution method is applied successfully to the demonstration project of groundwater recharge with secondary effluent at the Gaobeidian sewage treatment plant (STP).
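The splitting idea can be illustrated with a minimal 1D advection-dispersion step. Explicit sub-steps are used here for brevity, whereas the paper derives an implicit procedure; the grid, coefficients and boundary handling are illustrative assumptions.

```python
import numpy as np

def split_step(c, u, D, dx, dt):
    """One operator-split time step for c_t + u c_x = D c_xx (u > 0).
    Sub-step 1: first-order upwind differences for the convection term.
    Sub-step 2: central differences for the dispersion term."""
    c1 = c.copy()
    c1[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])        # convection
    c2 = c1.copy()
    c2[1:-1] = c1[1:-1] + D * dt / dx**2 * (c1[2:] - 2 * c1[1:-1] + c1[:-2])
    return c2
```

The point of the splitting is that each sub-model can then be solved with a scheme matched to its character: a monotone upwind scheme for the convection-dominant part, and a symmetric solver (such as PJCG) for the dispersion-dominant part.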

10. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

Science.gov (United States)

Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

1972-01-01

This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large deflection structural behavior. A literature survey is given which summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each of the solution techniques is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in an attempt to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made with regard to the accuracy and economy of each solution technique.

11. Precipitation interpolation in mountainous areas

Science.gov (United States)

Kolberg, Sjur

2015-04-01

Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumed 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
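The cross-validation assessment used in such studies can be sketched for the simplest of the compared methods, inverse distance weighting; station coordinates and the power parameter below are illustrative assumptions, not the study's data.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighted estimate at point xy_new."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0):                      # exact at a gauge location
        return float(z_obs[np.argmin(d)])
    w = d ** -power
    return float(np.sum(w * z_obs) / np.sum(w))

def loo_rmse(xy, z):
    """Leave-one-out cross-validation RMSE over all stations: each station
    is predicted from the remaining ones, as in the study's assessment."""
    errs = [idw(np.delete(xy, i, axis=0), np.delete(z, i), xy[i]) - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))
```

Removing each gauge before predicting it is what makes cross-validation honest, but, as the abstract notes, it also thins the effective network being evaluated.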

12. Potential problems with interpolating fields

Energy Technology Data Exchange (ETDEWEB)

Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)

2017-11-15

A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)

13. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

International Nuclear Information System (INIS)

2007-01-01

1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII formats and procedures, as well as in the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - LINEAR VERS. 2007-1 (JAN. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval-halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table.
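The interval-halving conversion described in the method of solution can be sketched as follows; the example function, tolerance and recursion cutoff are illustrative, not LINEAR's actual implementation.

```python
import math

def linearize(f, x0, x1, tol=1e-4):
    """Build (x, y) tables such that linear-linear interpolation between the
    tabulated points reproduces f to within fractional tolerance tol,
    by recursive interval halving."""
    xs, ys = [x0], [f(x0)]

    def refine(a, fa, b, fb):
        m = 0.5 * (a + b)
        fm = f(m)
        lin = 0.5 * (fa + fb)          # linear-linear estimate at midpoint
        if abs(lin - fm) <= tol * abs(fm) or (b - a) < 1e-12:
            xs.append(b)
            ys.append(fb)
        else:
            refine(a, fa, m, fm)
            refine(m, fm, b, fb)

    refine(x0, f(x0), x1, f(x1))
    return xs, ys
```

A smooth but strongly curved law (for example an exponential, which is what a log-linear section looks like on a linear grid) gets subdivided only where the chord deviates from the curve, which keeps the output table compact.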

14. Interpolation of rational matrix functions

CERN Document Server

Ball, Joseph A; Rodman, Leiba

1990-01-01

This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...

15. Evaluation of various interpolants available in DICE

Energy Technology Data Exchange (ETDEWEB)

Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

2015-02-01

This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.

16. Advantage of Fast Fourier Interpolation for laser modeling

International Nuclear Information System (INIS)

Epatko, I.V.; Serov, R.V.

2006-01-01

The abilities of a new algorithm, the 2-dimensional Fast Fourier Interpolation (FFI) with magnification factor (zoom) 2^n, whose purpose is to improve the spatial resolution when necessary, are analyzed in detail. The FFI procedure is useful when the diaphragm/aperture size is less than half of the current simulation scale. The computation noise due to the FFI procedure is less than 10^-6. The additional time for FFI is approximately equal to one Fast Fourier Transform execution time. For some applications using the FFI procedure, the execution time decreases by a factor of 10^4 compared with other laser simulation codes. (authors)

17. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

Directory of Open Access Journals (Sweden)

Yang Zhixin

2014-01-01

Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of the electronic transformer are computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
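For intuition about the trade-off being compared, the sketch below resamples a 50 Hz waveform sampled at 4 kHz with piecewise linear and three-point (quadratic) Lagrange interpolation; the signal and sampling rate are illustrative assumptions, not values from the paper.

```python
import math

def lerp(ts, vs, t):
    """Piecewise linear interpolation of samples (ts, vs) at time t."""
    i = max(k for k in range(len(ts) - 1) if ts[k] <= t)
    w = (t - ts[i]) / (ts[i + 1] - ts[i])
    return (1 - w) * vs[i] + w * vs[i + 1]

def quad(ts, vs, t):
    """Three-point Lagrange (quadratic) interpolation at time t."""
    i = min(max(k for k in range(len(ts) - 1) if ts[k] <= t), len(ts) - 3)
    x0, x1, x2 = ts[i:i + 3]
    y0, y1, y2 = vs[i:i + 3]
    return (y0 * (t - x1) * (t - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (t - x0) * (t - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (t - x0) * (t - x1) / ((x2 - x0) * (x2 - x1)))

# 50 Hz sine sampled at 4 kHz, resampled at an off-grid instant
ts = [k / 4000 for k in range(81)]
vs = [math.sin(2 * math.pi * 50 * t) for t in ts]
t = 0.00333
err_lin = abs(lerp(ts, vs, t) - math.sin(2 * math.pi * 50 * t))
err_quad = abs(quad(ts, vs, t) - math.sin(2 * math.pi * 50 * t))
```

Quadratic interpolation cuts the synchronization error well below the linear one at roughly double the arithmetic cost, which is the kind of precision/complexity trade-off the paper quantifies.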

18. Incompressible Navier-Stokes and parabolized Navier-Stokes solution procedures and computational techniques

Science.gov (United States)

Rubin, S. G.

1982-01-01

Recent developments with "finite-difference" techniques are emphasized. The quotation marks reflect the fact that any finite discretization procedure can be included in this category. Many so-called finite-element, collocation and Galerkin methods can be reproduced by appropriate forms of the differential equations and discretization formulas. Many of the difficulties encountered in early Navier-Stokes calculations were inherent not only in the choice of the difference equations (accuracy), but also in the method of solution or choice of algorithm (convergence and stability), in the manner in which the dependent variables or discretized equations are related (coupling), in the manner in which boundary conditions are applied, in the manner in which the coordinate mesh is specified (grid generation), and finally, in recognizing that for many high Reynolds number flows not all contributions to the Navier-Stokes equations are necessarily of equal importance (parabolization, preferred direction, pressure interaction, asymptotic and mathematical character). It is these elements that are reviewed. Several Navier-Stokes and parabolized Navier-Stokes formulations are also presented.

19. Analytical solution to the 1D Lemaitre's isotropic damage model and plane stress projected implicit integration procedure

DEFF Research Database (Denmark)

Andriollo, Tito; Thorborg, Jesper; Hattel, Jesper Henri

2016-01-01

obtaining an integral relationship between total strain and effective stress. By means of the generalized binomial theorem, an expression in terms of infinite series is subsequently derived. The solution is found to simplify considerably existing techniques for material parameters identification based...... on optimization, as all issues associated with classical numerical solution procedures of the constitutive equations are eliminated. In addition, an implicit implementation of the plane stress projected version of Lemaitre's model is discussed, showing that the resulting algebraic system can be reduced...

20. Research on interpolation methods in medical image processing.

Science.gov (United States)

Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

2012-04-01

Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly-used filter methods for image interpolation are introduced, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.

1. Interpolation decoding method with variable parameters for fractal image compression

International Nuclear Information System (INIS)

He Chuanjiang; Li Gaoping; Shen Xiaona

2007-01-01

The interpolation fractal decoding method, introduced in [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a smaller value in order to achieve better progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., from some chosen iteration onwards). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.

2. Real-time interpolation for true 3-dimensional ultrasound image volumes.

Science.gov (United States)

Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

2011-02-01

We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms, yet achieves real-time computational performance comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as to the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm3 voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
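The trilinear scheme recommended above reduces to three nested linear interpolations over the 8 voxels surrounding the query point; a minimal sketch, assuming the point lies strictly inside a regular voxel grid:

```python
import numpy as np

def trilinear(vol, p):
    """Trilinear interpolation in a 3D volume at continuous point p = (x, y, z),
    assumed to lie strictly inside the grid."""
    x, y, z = p
    i, j, k = int(x), int(y), int(z)
    u, v, w = x - i, y - j, z - k
    c = vol[i:i + 2, j:j + 2, k:k + 2]       # the 8 surrounding voxels
    c = c[0] * (1 - u) + c[1] * u            # collapse along x
    c = c[0] * (1 - v) + c[1] * v            # collapse along y
    return float(c[0] * (1 - w) + c[1] * w)  # collapse along z
```

Each query touches only 8 voxels and a handful of multiplications, which is why the method approaches nearest-neighbor speed while reproducing any locally linear intensity field exactly.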

3. Test procedure for anion exchange testing with Argonne 10-L solutions

International Nuclear Information System (INIS)

Compton, J.A.

1995-01-01

Four anion exchange resins will be tested to confirm that they will sorb and release plutonium from/to the appropriate solutions in the presence of other cations. Certain cations need to be removed from the test solutions to minimize adverse behavior in other processing equipment. The ion exchange resins will be tested using old laboratory solutions from Argonne National Laboratory; results will be compared to those from other similar processes for application to all plutonium solutions stored in the Plutonium Finishing Plant.

4. [Effects of different types and concentration of oral sweet solution on reducing neonatal pain during heel lance procedures].

Science.gov (United States)

Leng, Hong-yao; Zheng, Xian-lan; Yan, Li; Zhang, Xian-hong; He, Hua-yun; Xiang, Ming

2013-09-01

To compare the effect of different types and concentrations of sweet solutions on neonatal pain during heel lance procedures. Totally 560 full-term neonates (295 male, 265 female) were randomized into 7 groups: placebo group (plain water), 10% glucose, 25% glucose, 50% glucose, 12% sucrose, 24% sucrose and 30% sucrose groups. In each group, 2 ml of the corresponding oral solution was administered through a syringe by dripping into the neonate's mouth 2 minutes before heel lance. The procedure was recorded on video, from which heart rate, oxygen saturation and pain scores were collected 1 min before puncture and 3, 5 and 10 min after puncture. The average heart rate increases 3, 5 and 10 min after the procedure in the 25% and 50% glucose groups and the 12%, 24% and 30% sucrose groups were significantly lower than those in the placebo group (P < 0.05). Sweet solutions relieved neonatal pain during heel lance, but the best concentration of sucrose for pain relief needs further study.

5. Differential Interpolation Effects in Free Recall

Science.gov (United States)

Petrusic, William M.; Jamieson, Donald G.

1978-01-01

Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…

6. Transfinite C2 interpolant over triangles

International Nuclear Information System (INIS)

Alfeld, P.; Barnhill, R.E.

1984-01-01

A transfinite C2 interpolant on a general triangle is created. The required data are essentially C2, no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C2 interpolants in an appendix.

7. Analysis of velocity planning interpolation algorithm based on NURBS curve

Science.gov (United States)

Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

2017-04-01

To reduce interpolation time and Max interpolation error in NURBS (Non-Uniform Rational B-Spline) inter-polation caused by planning Velocity. This paper proposed a velocity planning interpolation algorithm based on NURBS curve. Firstly, the second-order Taylor expansion is applied on the numerator in NURBS curve representation with parameter curve. Then, velocity planning interpolation algorithm can meet with NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meet the high-speed and high-accuracy interpolation requirements of CNC systems. The interpolation of NURBS curve should be finished.

8. An Improved Rotary Interpolation Based on FPGA

Directory of Open Access Journals (Sweden)

Mingyu Gao

2014-08-01

Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. It was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: first, fewer arithmetic terms are needed for the interpolation operation; second, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, showing that it is highly suited for real-time applications.

9. Interpolation/penalization applied for strength design of 3D thermoelastic structures

DEFF Research Database (Denmark)

Pedersen, Pauli; Pedersen, Niels L.

2012-01-01

compliance. This is proved for thermoelastic structures by sensitivity analysis of compliance that facilitates localized determination of sensitivities, and the compliance is not identical to the total elastic energy (twice strain energy). An explicit formula for the difference is derived and numerically...... parameter interpolation in explicit form is preferred, and the influence of interpolation on compliance sensitivity analysis is included. For direct strength maximization the sensitivity analysis of local von Mises stresses is demanding. An applied recursive procedure to obtain uniform energy density...

10. Stereo matching and view interpolation based on image domain triangulation.

Science.gov (United States)

Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

2013-09-01

This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

11. Dynamic Stability Analysis Using High-Order Interpolation

Directory of Open Access Journals (Sweden)

Juarez-Toledo C.

2012-10-01

Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses the interpolation of Lagrange and Newton's Divided Difference. The High-Order Interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
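The Newton divided-difference construction underlying the High-Order technique can be sketched generically (a textbook implementation, not the authors' code):

```python
def divided_differences(xs, ys):
    """Newton divided-difference coefficients of the interpolating polynomial,
    computed in place column by column."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form interpolant at x via Horner-like nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result
```

The Newton form is convenient for this kind of analysis because adding one more sample point appends one coefficient without recomputing the others.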

12. Interferometric interpolation of sparse marine data

KAUST Repository

Hanafy, Sherif M.

2013-10-11

We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

13. LINTAB, Linear Interpolable Tables from any Continuous Variable Function

International Nuclear Information System (INIS)

1988-01-01

1 - Description of program or function: LINTAB is designed to construct linearly interpolable tables from any function. The program will start from any function of a single continuous variable... FUNKY(X). By user input the function can be defined, (1) Over 1 to 100 X ranges. (2) Within each X range the function is defined by 0 to 50 constants. (3) At boundaries between X ranges the function may be continuous or discontinuous (depending on the constants used to define the function within each X range). 2 - Method of solution: LINTAB will construct a table of X and Y values where the tabulated (X,Y) pairs will be exactly equal to the function (Y=FUNKY(X)) and linear interpolation between the tabulated pairs will be within any user specified fractional uncertainty of the function for all values of X within the requested X range

14. A procedure for preferential trapping of cesium cations from aqueous solutions and their separation from other inorganic cations

International Nuclear Information System (INIS)

Plesek, J.; Hermanek, S.; Selucky, P.; Williams, R.E.

1995-01-01

The title procedure is as follows. Deltahedral heteroborane anions are added to the aqueous solution containing cesium ions, precipitate (if any) is separated off, and the cesium salts involving the deltahedral heteroborane anions are trapped on activated carbon. The cobaltocarborane anion [3-Co-(1,2-C2B9H11)2] and/or its substitution derivatives are particularly well suited to this purpose. The process can find use in the separation of radionuclides present in waste solutions arising from spent nuclear fuel treatment. (P.A.). 1 fig

15. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

Science.gov (United States)

2014-01-01

A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
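The sequence-estimation core described above can be sketched generically: states are candidate interpolation functions, a local cost scores each function at each missing-pixel position, and a transition cost encodes the Markov model. All names and costs below are hypothetical stand-ins for the paper's probabilistic model.

```python
def viterbi(states, local_cost, trans_cost, n_steps):
    """Minimum-cost sequence of states (interpolation functions), one per
    missing-pixel position, via the Viterbi recursion."""
    # best[s] = (cost of best path ending in state s, that path)
    best = {s: (local_cost(0, s), [s]) for s in states}
    for t in range(1, n_steps):
        new_best = {}
        for s in states:
            prev, (c, path) = min(
                ((p, best[p]) for p in states),
                key=lambda item: item[1][0] + trans_cost(item[0], s))
            new_best[s] = (c + trans_cost(prev, s) + local_cost(t, s),
                           path + [s])
        best = new_best
    return min(best.values())[1]
```

For example, with two hypothetical directional interpolators 'h' and 'v', a transition penalty discourages switching interpolators between neighbouring pixels, which is the soft-decision smoothing the trellis provides.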

16. Basis set approach in the constrained interpolation profile method

International Nuclear Information System (INIS)

Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

2003-07-01

We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)

17. NOAA Optimum Interpolation (OI) SST V2

Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

18. Album of the Month: Interpol "Antics". Records from the Lasering store

Index Scriptorium Estoniae

2005-01-01

On the records: "Interpol Antics"; Scooter, "Mind the Gap"; Slide-Fifty, "The Way Ahead"; Psyhhoterror, "Freddy, löö esimesena!" ("Freddy, Strike First!"); Riho Sibul, "Must" ("Black"); Bossacucanova, "Uma Batida Diferente"; "Biscantorat - Sound of the spirit from Glenstal Abbey"

19. Revisiting Veerman’s interpolation method

DEFF Research Database (Denmark)

Christiansen, Peter; Bay, Niels Oluf

2016-01-01

This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation, and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.

20. NOAA Daily Optimum Interpolation Sea Surface Temperature

Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

1. Integration and interpolation of sampled waveforms

International Nuclear Information System (INIS)

Stearns, S.D.

1978-01-01

Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
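The frequency-domain integration idea mentioned above can be sketched as dividing the spectrum by jω. This is a minimal illustration on a synthetic periodic waveform, not the processing chain used for the seismic data:

```python
import numpy as np

def fft_integrate(x, dt):
    """Zero-mean antiderivative of a periodic sampled waveform.

    The spectrum is divided by j*omega; the DC bin is zeroed, so the
    constant of integration is lost (the result has zero mean).
    """
    n = len(x)
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)
    X = np.fft.fft(x)
    Y = np.zeros_like(X)
    nz = omega != 0.0
    Y[nz] = X[nz] / (1j * omega[nz])
    return np.fft.ifft(Y).real

# cos(2*pi*4*t) sampled over one second; its zero-mean antiderivative
# is sin(2*pi*4*t) / (8*pi).
n, dt = 256, 1.0 / 256
t = np.arange(n) * dt
y = fft_integrate(np.cos(2 * np.pi * 4 * t), dt)
expected = np.sin(2 * np.pi * 4 * t) / (8 * np.pi)
```

Because the test tone falls exactly on an FFT bin, the reconstruction is accurate to machine precision; real seismic records would need windowing and detrending first.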

2. Wideband DOA Estimation through Projection Matrix Interpolation

OpenAIRE

Selva, J.

2017-01-01

This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

3. Interpolation for a subclass of H∞

|g(z_m)| ≤ c |z_m − z̄_m|, ∀ m ∈ ℕ. Thus it is natural to pose the following interpolation problem for H^∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H^∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z̄_n|, ∀ n ∈ ℕ, (4) there exists a product fg ∈ H^∞ ...

4. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

Science.gov (United States)

Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

2014-01-01

Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
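The gap between a greedy heuristic and an exact integer-programming-style optimum can be illustrated on a toy budgeted-selection problem. The parcels, utilities, and costs below are invented for illustration and are unrelated to the study's Central Valley data:

```python
from itertools import combinations

# Hypothetical parcels as (utility, cost) pairs, with a fixed budget.
parcels = [(60, 10), (100, 20), (120, 30)]
budget = 50

def greedy(parcels, budget):
    """Pick parcels by utility/cost ratio until the budget is exhausted."""
    spent, total = 0, 0
    order = sorted(range(len(parcels)),
                   key=lambda i: parcels[i][0] / parcels[i][1], reverse=True)
    for i in order:
        u, c = parcels[i]
        if spent + c <= budget:
            spent, total = spent + c, total + u
    return total

def optimal(parcels, budget):
    """Exhaustive search: the global optimum an integer program would find."""
    best = 0
    for r in range(len(parcels) + 1):
        for subset in combinations(range(len(parcels)), r):
            cost = sum(parcels[i][1] for i in subset)
            util = sum(parcels[i][0] for i in subset)
            if cost <= budget and util > best:
                best = util
    return best

g_util = greedy(parcels, budget)   # greedy picks the two best ratios: 160
o_util = optimal(parcels, budget)  # the optimal pair yields 220
```

Here the ratio-greedy rule locks in a cheap high-ratio parcel and cannot afford the best combination, mirroring (in exaggerated form) the optimization gains the study reports.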

5. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

Science.gov (United States)

Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

2012-01-01

Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
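A kernel-based spatial profile of the kind described can be sketched with a Nadaraya-Watson (Gaussian-kernel) estimator. The depth grid, activity curve, and kernel width below are synthetic stand-ins for MER feature data, not the study's recordings:

```python
import numpy as np

def kernel_profile(depths, activity, query_depths, width):
    """Gaussian-kernel (Nadaraya-Watson) estimate of activity vs. depth."""
    d = query_depths[:, None] - depths[None, :]
    w = np.exp(-0.5 * (d / width) ** 2)
    return (w * activity).sum(axis=1) / w.sum(axis=1)

# Synthetic "feature activity" rising sharply around a -2 mm target depth.
depths = np.linspace(-10.0, 0.0, 60)                 # mm above target
activity = 1.0 / (1.0 + np.exp(-4.0 * (depths + 2.0)))

query = np.linspace(-10.0, 0.0, 101)
prof = kernel_profile(depths, activity, query, width=0.5)
```

Varying `width` reproduces the smoothing/resolution trade-off the abstract describes: a wide kernel blurs the transition, a narrow one tracks it but passes noise through.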

6. A Parallel Strategy for High-speed Interpolation of CNC Using Data Space Constraint Method

Directory of Open Access Journals (Sweden)

Shuan-qiang Yang

2013-12-01

Full Text Available A high-speed interpolation scheme using parallel computing is proposed in this paper. The interpolation method is divided into two tasks, namely, a rough task executing on the PC and a fine task on the I/O card. During the interpolation procedure, double buffers are constructed to exchange the interpolation data between the two tasks. The data space constraint method is then adapted to ensure reliable and continuous data communication between the two buffers. Therefore, the proposed scheme can be realized on common operating system distributions without real-time performance. High-speed and high-precision motion control can be achieved as well. Finally, an experiment is conducted on the self-developed CNC platform, and the test results verify the proposed method.

7. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

Science.gov (United States)

Chen, Liangji; Guo, Guangsong; Li, Huiying

2017-07-01

NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our developing CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. According to the feedrate and a Taylor series expansion, servo-controlling signals of the 5 axes are obtained for each interpolating cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced, together with how the interpolator is integrated into our developing CNC system. The servo-controlling structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
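The rational curve evaluation underlying such an interpolator can be sketched with the Cox-de Boor recursion. This computes a single NURBS curve point and does not reproduce the paper's real-time Taylor-series feedrate scheduling; the quarter-circle test case is a standard textbook example:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for B-spline basis N_{i,p}(u); 0/0 terms are 0."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    den = knots[i + p] - knots[i]
    if den > 0.0:
        left = (u - knots[i]) / den * bspline_basis(i, p - 1, u, knots)
    den = knots[i + p + 1] - knots[i + 1]
    if den > 0.0:
        right = (knots[i + p + 1] - u) / den * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p):
    """Evaluate a NURBS curve point as a weighted rational combination."""
    basis = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    wb = basis * weights
    return (wb[:, None] * ctrl).sum(axis=0) / wb.sum()

# Exact quarter circle: a degree-2 rational Bezier (a special NURBS).
ctrl = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
weights = np.array([1.0, np.sqrt(2) / 2, 1.0])
knots = [0, 0, 0, 1, 1, 1]
pts = np.array([nurbs_point(u, ctrl, weights, knots, 2)
                for u in np.linspace(0.05, 0.95, 7)])
radii = np.hypot(pts[:, 0], pts[:, 1])   # all points lie on the unit circle
```

The rational weights are what let NURBS represent conics exactly, which a polynomial B-spline cannot do.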

8. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

Science.gov (United States)

Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

2015-01-01

Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085

9. Calculation of electromagnetic parameter based on interpolation algorithm

International Nuclear Information System (INIS)

Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

2015-01-01

Wave-absorbing material is an important functional material for electromagnetic protection. Its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, based on the measured electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron, this paper studied two different interpolation methods for electromagnetic parameters: Lagrange interpolation and Hermite interpolation. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is consistent on the whole with that obtained through experiment. - Highlights: • We use interpolation algorithms to calculate EM parameters from limited samples. • Interpolation methods can predict EM parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
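The two schemes compared in the abstract can be sketched on a smooth stand-in function (sin) instead of measured electromagnetic parameters. The Hermite variant exploits derivative data that Lagrange interpolation ignores, which is the usual reason for its higher accuracy:

```python
import numpy as np

def lagrange(x, xs, ys):
    """Classical Lagrange interpolating polynomial evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def hermite_segment(x, x0, x1, y0, y1, d0, d1):
    """Two-point cubic Hermite interpolation (values + derivatives)."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # standard Hermite basis functions
    h10 = t * (1 - t) ** 2
    h01 = t ** 2 * (3 - 2 * t)
    h11 = t ** 2 * (t - 1)
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1

# Compare both on sin, a smooth stand-in for a measured parameter curve.
xs = np.linspace(0.0, np.pi, 4)
x = 1.0                                  # lies in the first interval
lag = lagrange(x, xs, np.sin(xs))
her = hermite_segment(x, xs[0], xs[1], np.sin(xs[0]), np.sin(xs[1]),
                      np.cos(xs[0]), np.cos(xs[1]))
err_lag = abs(lag - np.sin(x))
err_her = abs(her - np.sin(x))
```

With the same node spacing, the Hermite estimate near a node is markedly closer, consistent with the paper's finding.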

10. Image interpolation via graph-based Bayesian label propagation.

Science.gov (United States)

Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

2014-03-01

In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the interpolation problem then becomes one of effectively propagating label information from known points to unknown ones. This process can be posed as Bayesian inference, in which we combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term is minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function that combines the global loss of the locally linear regression, the squared error of the prediction bias on the available LR samples, and the manifold regularization term. It can be solved in closed form as a convex optimization problem. Experimental results demonstrate that the proposed method achieves competitive performance with state-of-the-art image interpolation algorithms.

11. Sweet Solutions to Reduce Procedural Pain in Neonates: A Meta-analysis.

Science.gov (United States)

Harrison, Denise; Larocque, Catherine; Bueno, Mariana; Stokes, Yehudis; Turner, Lucy; Hutton, Brian; Stevens, Bonnie

2017-01-01

Abundant evidence of sweet taste analgesia in neonates exists, yet placebo-controlled trials continue to be conducted. To review all trials evaluating sweet solutions for analgesia in neonates and to conduct cumulative meta-analyses (CMAs) on behavioral pain outcomes. (1) Data from 2 systematic reviews of sweet solutions for newborns; (2) searches ending 2015 of CINAHL, Medline, Embase, and PsycINFO. Two authors screened studies for inclusion, conducted risk-of-bias ratings, and extracted behavioral outcome data for CMAs. CMA was performed using random effects meta-analysis. One hundred and sixty-eight studies were included; 148 (88%) included placebo/no-treatment arms. CMA for crying time included 29 trials (1175 infants). From the fifth trial in 2002, there was a statistically significant reduction in mean cry time for sweet solutions compared with placebo (-27 seconds, 95% confidence interval [CI] -51 to -4). By the final trial, CMA was -23 seconds in favor of sweet solutions (95% CI -29 to -18). CMA for pain scores included 50 trials (3341 infants). Results were in favor of sweet solutions from the second trial (0.5, 95% CI -1 to -0.1). Final results showed a standardized mean difference of -0.9 (95% CI -1.1 to -0.7). We were unable to use or obtain data from many studies to include in the CMA. Evidence of sweet taste analgesia in neonates has existed since the first published trials, yet placebo/no-treatment, controlled trials have continued to be conducted. Future neonatal pain studies need to select more ethically responsible control groups. Copyright © 2017 by the American Academy of Pediatrics.

12. A procedure to construct exact solutions of nonlinear fractional differential equations.

Science.gov (United States)

2014-01-01

We use the fractional transformation to convert nonlinear partial fractional differential equations into nonlinear ordinary differential equations. The Exp-function method is extended to solve fractional partial differential equations in the sense of the modified Riemann-Liouville derivative. We apply the Exp-function method to the time fractional Sharma-Tasso-Olver equation, the space fractional Burgers equation, and the time fractional fmKdV equation. As a result, we obtain some new exact solutions.

13. Edge-detect interpolation for direct digital periapical images

International Nuclear Information System (INIS)

Song, Nam Kyu; Koh, Kwang Joon

1998-01-01

The purpose of this study was to aid the use of direct digital periapical images by applying edge-detect interpolation. The study was performed by image processing of 20 digital periapical images using pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The obtained results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect on the edge. 3. Edge-sensitive interpolation overcame the smoothing effect on the edge and produced a better image.

14. Data mining techniques in sensor networks summarization, interpolation and surveillance

CERN Document Server

Appice, Annalisa; Fumarola, Fabio; Malerba, Donato

2013-01-01

Sensor networks comprise a number of sensors installed across a spatially distributed network, which gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. But the limited storage capacity of a server may reduce the amount of data stored on the disk. One solution is to compute summaries of the data as it arrives, and to use these summaries to interpolate the real data.
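The summarize-then-interpolate idea can be sketched as storing only per-window means of a stream and linearly interpolating them back. The stream below is synthetic, and this is just the simplest instance of the approach the book develops:

```python
import numpy as np

# A server keeps only per-window means of a stream, then answers
# queries about past data by interpolating those summaries.
rng = np.random.default_rng(1)
t = np.arange(1000)
stream = np.sin(2 * np.pi * t / 200) + 0.05 * rng.standard_normal(1000)

window = 50
centers = t.reshape(-1, window).mean(axis=1)      # window midpoints
means = stream.reshape(-1, window).mean(axis=1)   # the stored summary

# Reconstruct the stream from 20 stored values instead of 1000.
reconstructed = np.interp(t, centers, means)
rmse = np.sqrt(np.mean((reconstructed - stream) ** 2))
```

The 50:1 compression trades fidelity for storage; the residual error here comes from both the discarded noise and the averaging of the sinusoid within each window.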

15. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

Directory of Open Access Journals (Sweden)

J. Polec

1999-09-01

Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From neural network point of view, we present interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

16. Two-dimensional meshless solution of the non-linear convection-diffusion-reaction equation by the Local Hermitian Interpolation method

Directory of Open Access Journals (Sweden)

Carlos A Bustamante Chaverra

2013-03-01

17. Interpolation of quasi-Banach spaces

International Nuclear Information System (INIS)

Tabacco Vignati, A.M.

1986-01-01

This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces, introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L/sup P/ spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces

18. Quadratic Interpolation and Linear Lifting Design

Directory of Open Access Journals (Sweden)

Joel Solé

2007-03-01

Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.
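The predict/update structure of lifting steps can be sketched with the classic 5/3 scheme, not the optimized steps the paper designs: the predict subtracts a linear interpolation of the even samples from the odd ones (minimizing detail energy for locally linear signals), the update smooths the approximation signal, and the transform inverts exactly:

```python
def lift_53(x):
    """One level of the 5/3 lifting scheme (even-length input assumed)."""
    even, odd = list(x[0::2]), list(x[1::2])
    n = len(odd)
    # Predict: detail = odd sample minus linear interpolation of evens
    # (the last index is clamped at the boundary).
    d = [odd[i] - 0.5 * (even[i] + even[min(i + 1, n - 1)]) for i in range(n)]
    # Update: smooth the evens with the details just computed.
    s = [even[i] + 0.25 * (d[max(i - 1, 0)] + d[i]) for i in range(n)]
    return s, d

def unlift_53(s, d):
    """Exact inverse: undo the update, then undo the predict."""
    n = len(d)
    even = [s[i] - 0.25 * (d[max(i - 1, 0)] + d[i]) for i in range(n)]
    odd = [d[i] + 0.5 * (even[i] + even[min(i + 1, n - 1)]) for i in range(n)]
    x = [0.0] * (2 * n)
    x[0::2], x[1::2] = even, odd
    return x

x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
s, d = lift_53(x)
y = unlift_53(s, d)   # reconstructs x exactly
```

Because each lifting step is inverted by the same expression with the sign flipped, perfect reconstruction holds regardless of the coefficients, which is exactly what makes the steps free design parameters for the optimization the paper describes.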

19. Optimized Quasi-Interpolators for Image Reconstruction.

Science.gov (United States)

Sacht, Leonardo; Nehab, Diego

2015-12-01

We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

20. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

International Nuclear Information System (INIS)

Gao Wen-Wu; Wang Zhi-Gang

2014-01-01

Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of the previous schemes based on quasi-interpolation (requiring some additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries in the spatial domain). Moreover, the scheme also overcomes the difficulty of the meshless collocation methods (i.e., yielding a notorious ill-conditioned linear system of equations for large collocation points). The numerical examples that are presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)

1. Positivity Preserving Interpolation Using Rational Bicubic Spline

Directory of Open Access Journals (Sweden)

Samsul Ariffin Abdul Karim

2015-01-01

Full Text Available This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in the description, of which 8 are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

2. Interpolation algorithm for asynchronous ADC-data

Directory of Open Access Journals (Sweden)

S. Bramburger

2017-09-01

Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data with constant and linear sections and a weighted overlap of signals transformed step-by-step into the spectral domain improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data is fed into synchronous digital signal processing.
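The ACT algorithm itself is not reproduced here, but the end goal, turning asynchronous (non-uniformly sampled) ADC data into a uniform stream that synchronous DSP can consume, can be sketched with simple linear resampling:

```python
import numpy as np

# Asynchronous (non-uniform) sample instants of a bandlimited tone.
rng = np.random.default_rng(2)
t_async = np.sort(rng.uniform(0.0, 1.0, 400))   # np.interp needs sorted xp
x_async = np.sin(2 * np.pi * 5 * t_async)

# Resample onto the uniform grid a synchronous DSP chain expects,
# staying inside the span covered by the asynchronous samples.
t_uniform = np.linspace(0.05, 0.95, 200)
x_uniform = np.interp(t_uniform, t_async, x_async)

err = np.max(np.abs(x_uniform - np.sin(2 * np.pi * 5 * t_uniform)))
```

Linear interpolation degrades quickly as sample gaps grow relative to the signal bandwidth, which is the regime where a spectral-domain method like ACT pays off.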

3. Imaging of accidental contamination with F-18-solution; a quick trouble-shooting procedure

Directory of Open Access Journals (Sweden)

Kalevi Kairemo

2016-01-01

Full Text Available To the best of our knowledge, imaging of accidental exposure to radioactive fluorine-18 (F-18) due to liquid spill has not been described earlier in the scientific literature. The short half-life of F-18 (t½=110 min), current radiation safety requirements, and Good Manufacturing Practice (GMP) regulations on radiopharmaceuticals have restrained the occurrence of these incidents. The possibility of investigating this type of incident by gamma and positron imaging is also quite limited. Additionally, a quick and precise analysis of radiochemical contamination is cumbersome and sometimes challenging if the spills of radioactive materials are low in activity. Herein, we report a case of accidental F-18 contamination in a service person during a routine cyclotron maintenance procedure. During target replacement, liquid F-18 was spilled on the person responsible for the maintenance. The activities of the spills were immediately measured using contamination detectors, and the photon spectrum of the contaminated clothes was assessed through gamma spectroscopy. Despite protective clothing, some skin areas were contaminated, which were then thoroughly washed. Later on, these areas were imaged using positron emission tomography (PET) and a gamma camera (including spectroscopy). Two contaminated skin areas were located on the hand (9.7 and 14.7 cm2, respectively), which showed very low activities (19.0 and 22.8 kBq, respectively, at the time of the incident). Based on the photon spectra, F-18 was confirmed as the main radionuclide present. PET imaging demonstrated the shape of these contaminated hot spots. However, the measured activities were very low due to the use of protective clothing. With prompt action and use of proper equipment at the time of the incident, minimal radionuclide activities and their locations could be thoroughly analyzed. The cumulative skin doses of the contaminated regions were calculated at 1.52 and 2.00 mSv, respectively. In the follow-up, no skin

4. Multiscale empirical interpolation for solving nonlinear PDEs

KAUST Repository

Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

2014-01-01

residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

5. Fast image interpolation via random forests.

Science.gov (United States)

Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

2015-10-01

This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.

6. Spectral Compressive Sensing with Polar Interpolation

DEFF Research Database (Denmark)

Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

2013-01-01

In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

7. Technique for image interpolation using polynomial transforms

NARCIS (Netherlands)

Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

1993-01-01

We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

8. Hot test of a TALSPEAK procedure for separation of actinides and lanthanides using recirculating DTPA-lactic acid solution

International Nuclear Information System (INIS)

Persson, G.; Svantesson, I.; Wingefors, S.; Liljenzin, J.O.

1984-01-01

Results are reported from a hot test of a TALSPEAK type process for separation of higher actinides (Am, Cm) from lanthanides. Actinides and lanthanides are extracted by 1 M HDEHP and separated by selective strip of the actinides, using a mixture of DTPA and lactic acid (reversed TALSPEAK process). In order to minimize the generation of secondary waste, a procedure using recirculating DTPA-lactic acid solution has been developed. A separation factor between Am and Eu of 132 was achieved. In regard to separations of Am and Cm from commercial HLLW (high level liquid wastes), the factor corresponds to 1.5% of the lanthanide group remaining with the actinides. The loss of Am was about 0.2%. 9 figures, 3 tables

9. Catheter for Cleaning Surgical Optics During Surgical Procedures: A Possible Solution for Residue Buildup and Fogging in Video Surgery.

Science.gov (United States)

de Abreu, Igor Renato Louro Bruno; Abrão, Fernando Conrado; Silva, Alessandra Rodrigues; Corrêa, Larissa Teresa Cirera; Younes, Riad Nain

2015-05-01

Currently, there is a tendency to perform surgical procedures via laparoscopic or thoracoscopic access. However, even with the impressive technological advancement in surgical materials, such as improvements in the quality of monitors, light sources, and optical fibers, surgeons have to face simple problems that can greatly hinder video surgery. One is the formation of "fog" or residue buildup on the lens, causing decreased visibility. Intracavitary techniques for cleaning surgical optics and preventing fog formation have been described; however, some of these techniques employ expensive and complex devices designed solely for this purpose. Moreover, these techniques allow the cleaning of surgical optics only once they become dirty, which does not prevent the accumulation of residue on the optics. To solve this problem we have designed a device that allows the optics to be cleaned without surgical stops and prevents fogging and residue accumulation. The objective of this study is to evaluate, through experimental testing, the effectiveness of a simple device that prevents the accumulation of residue and fogging of optics used in surgical procedures performed through thoracoscopic or laparoscopic access. Ex-vivo experiments were performed simulating the conditions of residue presence on surgical optics during video surgery. The experiment consists of immersing the optics and catheter set, connected to an IV line with crystalloid solution, in three types of materials: blood, blood plus fat solution, and 200 mL of distilled water with 1 vial of methylene blue. The optics coupled to the device were immersed in 200 mL of each type of residue, repeating each immersion 10 times for each distinct residue for both thirty-degree and zero-degree optics, totaling 420 experiments. A success rate of 98.1% was observed after the experiments; in these cases the device was able to clean the optics and prevent residue accumulation.

10. Workload Balancing on Heterogeneous Systems: A Case Study of Sparse Grid Interpolation

KAUST Repository

Muraraşu, Alin

2012-01-01

Multi-core parallelism and accelerators are becoming common features of today's computer systems, as they allow for computational power without sacrificing energy efficiency. Due to heterogeneity, tuning for each type of compute unit and adequate load balancing are essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression. Its performance-critical part is the interpolation routine used for decompression. Results show that our load balancing scheme allows for an efficient acceleration of interpolation on heterogeneous systems containing multi-core CPUs and GPUs.
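
The dynamic load-balancing idea can be illustrated with a minimal sketch (not the paper's actual code): workers pull fixed-size chunks of work from a shared queue, so a faster compute unit automatically processes more chunks. The squaring stand-in for the interpolation kernel and all names are hypothetical.

```python
import threading
import queue

def run_dynamic(tasks, workers, chunk=4):
    """Dynamic load balancing: workers pull fixed-size chunks from a shared
    queue, so faster compute units automatically process more work."""
    q = queue.Queue()
    for i in range(0, len(tasks), chunk):
        q.put(tasks[i:i + chunk])
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                block = q.get_nowait()
            except queue.Empty:
                return
            partial = [t * t for t in block]  # stand-in for interpolation work
            with lock:
                results.extend(partial)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)
```

A static scheme would instead split `tasks` once, proportionally to each unit's measured throughput.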

11. 3D Interpolation Method for CT Images of the Lung

Directory of Open Access Journals (Sweden)

2003-06-01

Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. As an elastic body, the lung exhibits a repeating deformation synchronized to the beating of the heart. If no special techniques are used when taking the CT images, this beating causes discontinuities among neighboring CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, its envelope is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting the standard heart best. Since correct transformation of images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image without discontinuity by a series of such operations is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.

12. SAR image formation with azimuth interpolation after azimuth transform

Science.gov (United States)

Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

2008-07-08

Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

13. Spectral interpolation and unfolding to measure multi-labelled samples by liquid scintillation

International Nuclear Information System (INIS)

Grau Carles, A.; Grau Malonda, A.

1990-01-01

A new procedure to determine the activity of each radionuclide in a mixture is described. The information contained in the liquid scintillation pulse-height spectra is used. The dilatation, interpolation and contraction steps are essential to obtain a good fit between experimental and computed spectra. The procedure can be applied to mixtures of radionuclides decaying by β⁻, β⁻-γ, β⁺, β⁺-γ, EC, EC-γ and isomeric transitions. (Author). 10 refs
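
The unfolding step can be sketched as a least-squares problem: the measured mixture spectrum is modelled as a linear combination of single-nuclide reference spectra, and the coefficients estimate the activities. This is only an illustrative sketch of the general idea, not the authors' dilatation/interpolation/contraction procedure; all names are hypothetical.

```python
import numpy as np

def unfold_activities(mixture, references):
    """Estimate the activity of each radionuclide in a mixture by expressing
    the measured pulse-height spectrum as a linear combination of
    single-nuclide reference spectra (least squares, clipped at zero)."""
    A = np.column_stack(references)          # one reference spectrum per column
    coeffs, *_ = np.linalg.lstsq(A, mixture, rcond=None)
    return np.clip(coeffs, 0.0, None)
```

With noisy data a weighted or non-negative least-squares variant would be the more robust choice.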

14. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

African Journals Online (AJOL)

Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...
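
The crisp (non-fuzzy) Lagrange interpolation that the fuzzy variant builds on can be sketched as follows; applying it separately to the lower and upper endpoints of each fuzzy value would give one simple fuzzy extension. Names are hypothetical.

```python
def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through the given
    (crisp) nodes at the point x."""
    total = 0.0
    n = len(x_nodes)
    for i in range(n):
        li = 1.0                              # i-th Lagrange basis polynomial
        for j in range(n):
            if i != j:
                li *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += y_nodes[i] * li
    return total
```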

15. Interpolation of diffusion weighted imaging datasets

DEFF Research Database (Denmark)

Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

2014-01-01

Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...

16. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

Science.gov (United States)

Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

2013-01-01

Analysis of bone strength in radiographic images is an important component of estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.
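
The multiquadric radial-basis-function surface interpolation used for the envelope step can be sketched in a few lines of numpy. This is an illustrative sketch, not the study's implementation; the shape parameter `eps` is a hypothetical choice.

```python
import numpy as np

def multiquadric_interp(points, values, query, eps=1.0):
    """Scattered-data interpolation with multiquadric radial basis functions
    phi(r) = sqrt(1 + (eps*r)^2): solve for weights so the surface passes
    through every sample, then evaluate at the query points."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    phi = np.sqrt(1.0 + (eps * d) ** 2)
    w = np.linalg.solve(phi, values)
    dq = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return np.sqrt(1.0 + (eps * dq) ** 2) @ w
```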

17. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

Science.gov (United States)

Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

2018-08-01

This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

18. Some splines produced by smooth interpolation

Czech Academy of Sciences Publication Activity Database

Segeth, Karel

2018-01-01

Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub


20. Ab initio/interpolated quantum dynamics on coupled electronic states with full configuration interaction wave functions

International Nuclear Information System (INIS)

Thompson, K.; Martinez, T.J.

1999-01-01

We present a new approach to first-principles molecular dynamics that combines a general and flexible interpolation method with ab initio evaluation of the potential energy surface. This hybrid approach extends significantly the domain of applicability of ab initio molecular dynamics. Use of interpolation significantly reduces the computational effort associated with the dynamics over most of the time scale of interest, while regions where potential energy surfaces are difficult to interpolate, for example near conical intersections, are treated by direct solution of the electronic Schroedinger equation during the dynamics. We demonstrate the concept through application to the nonadiabatic dynamics of collisional electronic quenching of Li(2p). Full configuration interaction is used to describe the wave functions of the ground and excited electronic states. The hybrid approach agrees well with full ab initio multiple spawning dynamics, while being more than an order of magnitude faster. copyright 1999 American Institute of Physics

1. Quadratic polynomial interpolation on triangular domain

Science.gov (United States)

Li, Ying; Zhang, Congcong; Yu, Qian

2018-04-01

In the simulation of natural terrain, sample points are not always consistent in continuity, and traditional interpolation methods often fail to faithfully reflect the shape information carried by the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. Firstly, the scattered spatial data points are projected onto a plane and triangulated. Secondly, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Lastly, the unknown quantities are obtained by minimizing the objective functions, with boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, and avoid excessive convexity. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and so on. Results for the new surface are given in the experiments.
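
A minimal building block of such triangular-domain schemes is barycentric interpolation over a single triangle; the paper's quadratic patches add curvature terms on top of this linear baseline. The following sketch is illustrative only.

```python
import numpy as np

def barycentric_interp(tri, f_vals, p):
    """Linear interpolation of vertex values over one triangle via
    barycentric coordinates (lam0, lam1, lam2 sum to one)."""
    a, b, c = tri
    T = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    lam12 = np.linalg.solve(T, np.asarray(p, float) - np.asarray(a, float))
    lam = np.array([1.0 - lam12.sum(), lam12[0], lam12[1]])
    return lam @ np.asarray(f_vals, float)
```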

2. Image Interpolation with Geometric Contour Stencils

Directory of Open Access Journals (Sweden)

Pascal Getreuer

2011-09-01

Full Text Available We consider the image interpolation problem where, given an image with uniformly-sampled pixels v_{m,n} and point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

3. Delimiting areas of endemism through kernel interpolation.

Science.gov (United States)

Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

2015-01-01

We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
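
The kernel interpolation of centroids at the heart of GIE can be sketched with a plain Gaussian kernel surface: overlap of several species' kernels marks a candidate area of endemism. This is an illustrative sketch, not the authors' implementation; the bandwidth is a hypothetical parameter.

```python
import numpy as np

def kernel_density(centroids, grid, bandwidth):
    """Gaussian kernel surface over species-distribution centroids,
    evaluated at every grid point; high values indicate strong overlap."""
    d2 = ((grid[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(axis=1)
```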

4. Delimiting areas of endemism through kernel interpolation.

Directory of Open Access Journals (Sweden)

Ubirajara Oliveira

Full Text Available We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distributions of species through a kernel interpolation of centroids of species distributions and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

5. A 2.9 ps equivalent resolution interpolating time counter based on multiple independent coding lines

International Nuclear Information System (INIS)

Szplet, R; Jachna, Z; Kwiatkowski, P; Rozyc, K

2013-01-01

We present the design, operation and test results of a time counter that has an equivalent resolution of 2.9 ps, a measurement uncertainty at the level of 6 ps, and a measurement range of 10 s. The time counter has been implemented in a general-purpose reprogrammable device Spartan-6 (Xilinx). To obtain both high precision and wide measurement range the counting of periods of a reference clock is combined with a two-stage interpolation within a single period of the clock signal. The interpolation involves a four-phase clock in the first interpolation stage (FIS) and an equivalent coding line (ECL) in the second interpolation stage (SIS). The ECL is created as a compound of independent discrete time coding lines (TCL). The number of TCLs used to create the virtual ECL has an effect on its resolution. We tested ECLs made from up to 16 TCLs, but the idea may be extended to a larger number of lines. In the presented time counter the coarse resolution of the counting method equal to 2 ns (period of the 500 MHz reference clock) is firstly improved fourfold in the FIS and next even more than 400 times in the SIS. The proposed solution allows us to overcome the technological limitation in achievable resolution and improve the precision of conversion of integrated interpolators based on tapped delay lines. (paper)

6. Generation of response functions of a NaI detector by using an interpolation technique

International Nuclear Information System (INIS)

Tominaga, Shoji

1983-01-01

A computer method is developed for generating response functions of a NaI detector to monoenergetic γ-rays. The method is based on an interpolation between response curves measured by a detector. The computer programs are constructed for Heath's response spectral library. The principle of the basic mathematics used for interpolation, which was reported previously by the author et al., is that response curves can be decomposed into a linear combination of intrinsic component patterns, and thereby the interpolation of curves is reduced to a simple interpolation of the weighting coefficients needed to combine the component patterns. This technique has advantages in data compression, reduced computation time, and stability of the solution, in comparison with the usual functional fitting method. A segmentation method is devised to generate useful and precise response curves. The spectral curve obtained for each γ-ray source is divided into regions defined by the physical processes, such as the photopeak area, the Compton continuum area, the backscatter peak area, and so on. Each segment curve is then processed separately for interpolation. Lastly, the curves estimated for the respective areas are connected on a single channel scale. The generation programs are explained briefly. It is shown that the generated curve represents the overall shape of a response spectrum, including not only the photopeak but also the corresponding Compton area, with sufficient accuracy. (author)
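
The decompose-then-interpolate-the-weights idea can be sketched with an SVD standing in for the extraction of intrinsic component patterns. This is illustrative only; the original work uses its own decomposition, and all names here are hypothetical.

```python
import numpy as np

def interp_response(energies, curves, e_query, n_comp=2):
    """Interpolate detector response curves: decompose measured curves
    (one per row) into component patterns via SVD, linearly interpolate the
    per-curve weights versus energy, then recombine the patterns."""
    U, s, Vt = np.linalg.svd(curves, full_matrices=False)
    comps = Vt[:n_comp]                      # orthonormal component patterns
    weights = curves @ comps.T               # weight of each pattern per curve
    w_q = np.array([np.interp(e_query, energies, weights[:, k])
                    for k in range(n_comp)])
    return w_q @ comps
```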

7. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

Science.gov (United States)

Haller, Istvan; Nedevschi, Sergiu

2012-02-01

Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of the stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE
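
For context, the traditional block-matching baseline that such methodologies improve upon is the three-point parabola fit around the integer cost minimum; a sketch with hypothetical names:

```python
def subpixel_offset(c_left, c_center, c_right):
    """Classic parabola-fit subpixel refinement: fit a parabola through the
    three matching costs around the integer minimum and return the
    fractional offset of its vertex (in [-0.5, 0.5])."""
    denom = c_left - 2.0 * c_center + c_right
    if denom == 0.0:
        return 0.0
    return 0.5 * (c_left - c_right) / denom
```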

8. Image Interpolation Scheme based on SVM and Improved PSO

Science.gov (United States)

Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

2018-01-01

In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. Then the support vector machine with optimal parameters is trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.

9. Interpolation functions and the Lions-Peetre interpolation construction

International Nuclear Information System (INIS)

Ovchinnikov, V I

2014-01-01

The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generality is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X_0, X_1)_{p_0, p_1} considered here have three parameters: two positive numerical parameters p_0 and p_1 of equal standing, and a function parameter φ. For p_0 ≠ p_1 these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X_0, X_1)_{p_0, p_1}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces L_p and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles

10. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

Science.gov (United States)

Huang, Ai-Mei; Nguyen, Truong

2009-04-01

In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
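
A bare-bones version of motion-compensated frame interpolation, reduced to a single global motion vector and integer half-vectors, can be sketched as follows. This is illustrative only; the paper's per-block vectors, reliability classification and occlusion handling are omitted.

```python
import numpy as np

def mc_interpolate(f0, f1, mv):
    """Midpoint motion-compensated frame interpolation with one global
    motion vector mv=(dy, dx): average the forward-warped previous frame
    and the backward-warped next frame (periodic boundaries via roll)."""
    h0 = np.roll(f0, (mv[0] // 2, mv[1] // 2), axis=(0, 1))
    h1 = np.roll(f1, (-(mv[0] // 2), -(mv[1] // 2)), axis=(0, 1))
    return 0.5 * (h0 + h1)
```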

11. Research progress and hotspot analysis of spatial interpolation

Science.gov (United States)

Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

2018-02-01

In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country, co-category, co-citation and keyword co-occurrence networks. It is found that spatial interpolation research has experienced three stages: slow, steady and rapid development. Eleven clustering groups interact, converging mainly on spatial interpolation theory, practical applications and case studies, and the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research framework, is strongly interdisciplinary, and is widely used in various fields.

12. Spline-procedures

International Nuclear Information System (INIS)

Schmidt, R.

1976-12-01

This report contains a short introduction to spline functions as well as a complete description of the spline procedures presently available in the HMI library. These include polynomial splines (using either B-splines or one-sided basis representations) and natural splines, as well as their application to interpolation, quasi-interpolation, L_2- and Tchebycheff approximation. Special procedures are included for the case of cubic splines. Complete test examples with input and output are provided for each of the procedures. (orig.)
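
A natural cubic spline of the kind described can be sketched directly from the defining tridiagonal system for the knot second derivatives (an illustrative re-implementation, not the HMI library code):

```python
import numpy as np

def natural_cubic_spline(x, y):
    """Build a natural cubic spline (second derivative zero at both ends)
    through the knots (x, y); returns a callable evaluator."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    h = np.diff(x)
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                 # natural end conditions: m = 0
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        rhs[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, rhs)               # second derivatives at the knots

    def evaluate(xq):
        i = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
        t = xq - x[i]
        a = (m[i + 1] - m[i]) / (6.0 * h[i])
        b = m[i] / 2.0
        c = (y[i + 1] - y[i]) / h[i] - h[i] * (2.0 * m[i] + m[i + 1]) / 6.0
        return ((a * t + b) * t + c) * t + y[i]

    return evaluate
```

A production version would solve the tridiagonal system in O(n) rather than with a dense solve.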

13. Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme

Energy Technology Data Exchange (ETDEWEB)

Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

1998-03-01

A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
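
The core CIP update, which advects both the profile and its first derivative with an upwind cubic polynomial in each cell, can be sketched for 1-D periodic advection with constant u > 0 (an illustrative sketch, not the reported plasma code):

```python
import numpy as np

def cip_advect(f, g, u, dx, dt, steps):
    """Cubic Interpolated Propagation (CIP) for df/dt + u df/dx = 0, u > 0,
    on a periodic grid: build an upwind cubic from (f, g) at a node and its
    upwind neighbour, then shift both f and g by the departure distance."""
    xi = -u * dt                               # semi-Lagrangian displacement
    D = -dx                                    # signed cell size (upwind side)
    for _ in range(steps):
        fup, gup = np.roll(f, 1), np.roll(g, 1)   # upwind neighbours
        a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
        b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
        f, g = (((a * xi + b) * xi + g) * xi + f,
                (3.0 * a * xi + 2.0 * b) * xi + g)
    return f, g
```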

14. Pre-inverted SESAME data table construction enhancements to correct unexpected inverse interpolation pathologies in EOSPAC 6

Energy Technology Data Exchange (ETDEWEB)

Pimentel, David A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sheppard, Daniel G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

2018-02-01

It was recently demonstrated that EOSPAC 6 continued to incorrectly create and interpolate pre-inverted SESAME data tables after the release of version 6.3.2beta.2. Significant interpolation pathologies were discovered to occur when EOSPAC 6's host software enabled pre-inversion with the EOS_INVERT_AT_SETUP option. This document describes a solution that uses data transformations found in EOSPAC 5 and its predecessors. The numerical results and performance characteristics of both the default and pre-inverted interpolation modes in both EOSPAC 6.3.2beta.2 and the fixed logic of EOSPAC 6.4.0beta.1 are presented herein, and the latter software release is shown to produce significantly-improved numerical results for the pre-inverted interpolation mode.
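
The underlying trade-off can be illustrated with a toy monotone table: interpolating the forward table and inverting on demand stays consistent with the forward data, which is exactly the consistency a separately pre-inverted table can lose. This sketch is illustrative only and has no relation to the EOSPAC API.

```python
import numpy as np

def inverse_lookup(x_tab, y_tab, y_query):
    """Invert a monotone tabulated function y = f(x) on demand by swapping
    the roles of the axes in linear interpolation, instead of storing a
    pre-inverted table that may disagree with the forward table."""
    return np.interp(y_query, y_tab, x_tab)
```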

15. A Differential Quadrature Procedure with Regularization of the Dirac-delta Function for Numerical Solution of Moving Load Problem

Directory of Open Access Journals (Sweden)

S. A. Eftekhari

Full Text Available The differential quadrature method (DQM) is one of the most elegant and efficient methods for the numerical solution of partial differential equations arising in engineering and applied sciences. It is simple to use and also straightforward to implement. However, the DQM is well-known to have some difficulty when applied to partial differential equations involving singular functions like the Dirac-delta function. This is caused by the fact that the Dirac-delta function cannot be directly discretized by the DQM. To overcome this difficulty, this paper presents a simple differential quadrature procedure in which the Dirac-delta function is replaced by regularized smooth functions. By regularizing the Dirac-delta function, such a singular function is treated as a non-singular function and can be easily and directly discretized using the DQM. To demonstrate the applicability and reliability of the proposed method, it is applied here to solve some moving load problems of beams and rectangular plates, where the location of the moving load is described by a time-dependent Dirac-delta function. The results generated by the proposed method are compared with analytical and numerical results available in the literature. Numerical results reveal that the proposed method can be used as an efficient tool for dynamic analysis of beam- and plate-type structures traversed by moving dynamic loads.
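
A common concrete choice for such a regularization is a narrow Gaussian, which integrates to one and reproduces the sifting property up to O(eps^2); a sketch (illustrative, not necessarily the paper's specific regularized function):

```python
import numpy as np

def regularized_delta(x, x0, eps):
    """Gaussian regularization of the Dirac delta: a smooth function that
    integrates to one and concentrates at x0 as eps -> 0, so it can be
    sampled directly at quadrature/DQM grid points."""
    return np.exp(-((x - x0) / eps) ** 2 / 2.0) / (eps * np.sqrt(2.0 * np.pi))
```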

16. Calculation of reactivity without Lagrange interpolation

International Nuclear Information System (INIS)

Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.

2015-09-01

A new method to solve numerically the inverse equation of point kinetics without using Lagrange interpolating polynomials is formulated; this method uses a polynomial approximation with N points based on a recurrence process for simulating different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)
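
A one-delayed-group inverse point kinetics sketch shows the general idea of recovering reactivity from a sampled power history with a recurrence (exact exponential precursor update) rather than Lagrange interpolation. Parameter values and names are hypothetical, and the actual method uses an N-point polynomial approximation.

```python
import numpy as np

def reactivity(power, dt, beta=0.0065, lam=0.08, Lambda=1e-4):
    """One-group inverse point kinetics: recover reactivity (in dollars)
    from a power history, updating the precursor concentration with an
    exact exponential recurrence per step."""
    C = beta * power[0] / (Lambda * lam)      # start from equilibrium precursors
    rho = [0.0]
    for k in range(1, len(power)):
        P_prev, P = power[k - 1], power[k]
        # exponential decay plus trapezoidal source over the step
        C = C * np.exp(-lam * dt) + (beta / Lambda) * dt * 0.5 * (
            P_prev * np.exp(-lam * dt) + P)
        dPdt = (P - P_prev) / dt
        rho_abs = beta + Lambda * dPdt / P - Lambda * lam * C / P
        rho.append(rho_abs / beta)            # convert to dollars
    return np.array(rho)
```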

17. Solving the Schroedinger equation using Smolyak interpolants

International Nuclear Information System (INIS)

Avila, Gustavo; Carrington, Tucker Jr.

2013-01-01

In this paper, we present a new collocation method for solving the Schroedinger equation. Collocation has the advantage that it obviates integrals. All previous collocation methods have, however, the crucial disadvantage that they require solving a generalized eigenvalue problem. By combining Lagrange-like functions with a Smolyak interpolant, we devise a collocation method that does not require solving a generalized eigenvalue problem. We exploit the structure of the grid to develop an efficient algorithm for evaluating the matrix-vector products required to compute energy levels and wavefunctions. Energies systematically converge as the numbers of points and basis functions are increased.

18. Topics in multivariate approximation and interpolation

CERN Document Server

Jetter, Kurt

2005-01-01

This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as Computer Aided Geometric Design, Mathematical Modelling, Signal and Image Processing and Machine Learning, to mention a few. The book aims at giving comprehensive information, leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr

19. Air Quality Assessment Using Interpolation Technique

Directory of Open Access Journals (Sweden)

Awkash Kumar

2016-07-01

Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. Classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that the SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal trend of SPM was low in the monsoon due to rainfall. The findings of this study will help to formulate control strategies for rational management of air pollution and can be applied to many other regions.
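A minimal sketch of the IDW technique used in the study, with hypothetical station coordinates and SO2 values; production GIS packages add search radii and anisotropy options on top of this:

```python
import numpy as np

def idw(stations, values, grid_pts, power=2.0):
    # Inverse Distance Weighting: each estimate is a weighted average of
    # the station values, with weights falling off as 1/distance**power.
    stations = np.asarray(stations, float)
    values = np.asarray(values, float)
    out = np.empty(len(grid_pts))
    for i, p in enumerate(np.asarray(grid_pts, float)):
        d = np.linalg.norm(stations - p, axis=1)
        if d.min() < 1e-12:                 # target coincides with a station
            out[i] = values[d.argmin()]
        else:
            w = 1.0 / d ** power
            out[i] = np.sum(w * values) / np.sum(w)
    return out

# three hypothetical monitoring sites with SO2 concentrations (ug/m3)
sites = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
so2 = [40.0, 60.0, 50.0]
est = idw(sites, so2, [(0.5, 0.5), (0.0, 0.0)])
```

IDW estimates always stay within the range of the station values, which is why it is a popular conservative choice for sparse monitoring networks.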

20. Randomized interpolative decomposition of separated representations

Science.gov (United States)

Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

2015-01-01

We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.

1. Controversy over Issue Preclusion in Russia’s Criminal Procedure: Can Common Law Offer a Solution?

Directory of Open Access Journals (Sweden)

Yury Rovnov

2015-01-01

Full Text Available Even though Russia’s new Code of Criminal Procedure of 2001 had from the very beginning contained an article titled ‘Preclusive Effects,’ it was not until a 2008 decision by the Constitutional Court that the doctrine of issue preclusion was, in its proper sense, reinstated in Russian criminal law, barring facts definitively established in a civil trial from relitigation in criminal proceedings. Despite the heavy criticism that came down on the Constitutional Court for what was seen by law enforcement agents as unwarranted judicial activism, the Russian Parliament soon amended the article in line with the interpretation offered by the Court. This, however, did not end the controversy, as critics raised a valid point: an automatic transfer of facts from civil proceedings with a priori more lenient requirements of proof is likely to distort outcomes, harming defendants, the prosecution, and, ultimately, societal interests. This article will turn for a potential solution to common law, which has been able to avoid this problem by clearly distinguishing between the different standards of proof applicable in civil v. criminal litigation. It will be shown, using the United States as an example, how courts can effectively use issue preclusion to pursue a number of legitimate objectives, such as consistency of judgments and judicial economy, with due account for the interests of parties in proceedings. At the same time, issue preclusion appears an inappropriate and ineffective means to combat arbitrariness of the judiciary – the end which Russia’s Constitutional Court and law makers arguably had in mind when introducing the doctrine into Russian law.

2. An exact solution procedure for multi-item two-echelon spare parts inventory control problem with batch ordering in the central warehouse

NARCIS (Netherlands)

Topan, E.; Bayindir, Z.P.; Tan, T.

2009-01-01

We consider a multi-item two-echelon inventory system in which the central warehouse operates under a (Q, R) policy and the local warehouses implement a base-stock policy. An exact solution procedure is proposed to find the inventory control policy parameters that minimize the system-wide inventory

3. Spatial interpolation of hourly rainfall – effect of additional information, variogram inference and storm properties

Directory of Open Access Journals (Sweden)

A. Verworn

2011-02-01

Full Text Available Hydrological modelling of floods relies on precipitation data with a high resolution in space and time. A reliable spatial representation of short-time-step rainfall is often difficult to achieve due to a low network density. In this study, hourly precipitation was spatially interpolated with the multivariate geostatistical method of kriging with external drift (KED), using additional information from topography, rainfall data from the denser daily networks and weather radar data. Investigations were carried out for several flood events in the period between 2000 and 2005 caused by different meteorological conditions. The 125 km radius around the radar station Ummendorf in northern Germany covered the overall study region. One objective was to assess the effect of different approaches for the estimation of semivariograms on the interpolation performance for short-time-step rainfall. Another objective was the refined application of the method of kriging with external drift. Special attention was given not only to finding the most relevant additional information, but also to combining the additional information in the best possible way. A multi-step interpolation procedure was applied to better account for sub-regions without rainfall.

The impact of different semivariogram types on the interpolation performance was low. While it varied over the events, an averaged semivariogram was sufficient overall. Weather radar data were the most valuable additional information for KED for convective summer events. For the interpolation of stratiform winter events, using daily rainfall as additional information was sufficient. The application of the multi-step procedure significantly helped to improve the representation of fractional precipitation coverage.
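For orientation, a minimal ordinary kriging sketch is given below; KED as used in the study extends this linear system with external-drift columns. The exponential semivariogram and its parameters are illustrative assumptions:

```python
import numpy as np

def ordinary_kriging(xy, z, targets, sill=1.0, rng=2.0):
    # Ordinary kriging with an exponential semivariogram
    # gamma(h) = sill * (1 - exp(-h / rng)).  The bordered system enforces
    # that the weights sum to one (unbiasedness).
    xy, z = np.asarray(xy, float), np.asarray(z, float)
    n = len(z)
    gamma = lambda h: sill * (1.0 - np.exp(-h / rng))
    H = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(H)
    A[n, n] = 0.0
    out = []
    for t in np.asarray(targets, float):
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(xy - t, axis=1))
        w = np.linalg.solve(A, b)
        out.append(float(w[:n] @ z))
    return np.array(out)

# four hypothetical gauges on a unit square and two target points
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
z = [2.0, 3.0, 4.0, 3.5]
est = ordinary_kriging(pts, z, [(0.5, 0.5), (0.0, 0.0)])
```

Kriging is an exact interpolator: a target that coincides with a gauge returns the gauge value, and for the symmetric configuration above the centre estimate is the plain average.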

4. Geospatial interpolation of reference evapotranspiration (ETo) in areas with scarce data: case study in the South of Minas Gerais, Brazil

Directory of Open Access Journals (Sweden)

Silvio Jorge Coelho Simões

2012-08-01

Full Text Available Reference evapotranspiration is an important hydrometeorological variable; its measurement is scarce in large portions of the Brazilian territory, which demands the search for alternative methods and techniques for its quantification. In this sense, the present work investigated a method for the spatialization of reference evapotranspiration using the geostatistical method of kriging in regions with limited data and hydrometeorological stations. The monthly average reference evapotranspiration was calculated by the Penman-Monteith-FAO equation, based on data from three weather stations located in southern Minas Gerais (Itajubá, Lavras and Poços de Caldas), and subsequently interpolated by ordinary point kriging using the "calculate and interpolate" approach. The meteorological data for a fourth station (Três Corações), located within the area of interpolation, were used to validate the spatially interpolated reference evapotranspiration. Due to the reduced number of stations and the consequent impossibility of carrying out variographic analyses, the correlation coefficient (r), index of agreement (d), mean bias error (MBE), root mean square error (RMSE) and t-test were used for comparison between the calculated and interpolated reference evapotranspiration for the Três Corações station. The results of this comparison indicated that the kriging procedure, even using a few stations, allows satisfactory interpolation of the reference evapotranspiration and is therefore an important tool for agricultural and hydrological applications in regions with a lack of data.
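The validation statistics named in the abstract can be sketched as follows, with hypothetical ETo values; the `d` computed here is Willmott's index of agreement, a common reading of that abbreviation:

```python
import numpy as np

def validation_stats(obs, est):
    # Returns (MBE, RMSE, r, d): mean bias error, root mean square error,
    # Pearson correlation, and Willmott's index of agreement.
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    mbe = float(np.mean(est - obs))
    rmse = float(np.sqrt(np.mean((est - obs) ** 2)))
    r = float(np.corrcoef(obs, est)[0, 1])
    dev = np.abs(est - obs.mean()) + np.abs(obs - obs.mean())
    d = float(1.0 - np.sum((est - obs) ** 2) / np.sum(dev ** 2))
    return mbe, rmse, r, d

obs = [3.1, 3.8, 4.2, 4.9, 5.4]   # hypothetical monthly ETo, mm/day
est = [3.0, 3.9, 4.1, 5.1, 5.2]   # hypothetical kriged estimates
mbe, rmse, r, d = validation_stats(obs, est)
```

Perfect agreement gives MBE = RMSE = 0 and r = d = 1, so the four numbers together separate bias, scatter, and pattern agreement.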

5. Distance-two interpolation for parallel algebraic multigrid

International Nuclear Information System (INIS)

Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

2007-01-01

In this paper we study the use of long-distance interpolation methods with the low-complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long-distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity-reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers.

6. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

National Research Council Canada - National Science Library

Decker, Harry

1999-01-01

Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

7. SIGMA1-2007, Doppler Broadening ENDF Format Linear-Linear. Interpolated Point Cross Section

International Nuclear Information System (INIS)

2007-01-01

8. Shape-based grey-level image interpolation

International Nuclear Information System (INIS)

Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

1999-01-01

The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, then reconstructed and operated on using object extraction and display algorithms. Traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that differ greatly from the surrounding background. The shape-based interpolation method transforms a pixel location into a parameter related to the object shape, and the interpolation is performed on that parameter. This process achieves a better interpolation, but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using the polygon vertices as references. Binary images representing the shape of the object were first generated via image segmentation of the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the polygon vertices as the reference. The target slice grey level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
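A toy illustration of the binary shape-based idea the paper builds on: interpolate the signed distance maps of two slices and re-threshold. The brute-force distance computation and the square test shapes are illustrative, not the polygon-vertex scheme of the paper:

```python
import numpy as np

def signed_distance(mask):
    # Brute-force signed Euclidean distance: negative inside the object,
    # positive outside (fine for tiny demo images).
    pts_in = np.column_stack(np.nonzero(mask))
    pts_out = np.column_stack(np.nonzero(~mask))
    h, w = mask.shape
    dist = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            tgt = pts_out if mask[y, x] else pts_in
            d = np.sqrt(((tgt - [y, x]) ** 2).sum(1)).min()
            dist[y, x] = -d if mask[y, x] else d
    return dist

# two binary slices: a small and a larger centred square
a = np.zeros((9, 9), bool); a[3:6, 3:6] = True
b = np.zeros((9, 9), bool); b[1:8, 1:8] = True
# shape-based interpolation: threshold the averaged signed distance maps
mid = (0.5 * signed_distance(a) + 0.5 * signed_distance(b)) < 0
```

The interpolated slice is an intermediate shape, larger than the small square and smaller than the big one, rather than a grey blend of the two.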

9. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

DEFF Research Database (Denmark)

Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

2017-01-01

When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...

10. Shape Preserving Interpolation Using C2 Rational Cubic Spline

Directory of Open Access Journals (Sweden)

Samsul Ariffin Abdul Karim

2016-01-01

Full Text Available This paper discusses the construction of new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for the positivity are derived on one parameter γi while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This will enable the user to produce many varieties of the positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of the positive data. Notably our scheme is easy to use and does not require knots insertion and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes also have been done in detail. From all presented numerical results the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C^3[t0, tn] is also investigated in detail.

11. Input variable selection for interpolating high-resolution climate ...

African Journals Online (AJOL)

Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

12. An efficient interpolation filter VLSI architecture for HEVC standard

Science.gov (United States)

Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

2015-12-01

The next-generation video coding standard High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40% of the total encoding time and thus results in high computational complexity. Aiming to support 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7% of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the implementation hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.

13. Some observations on interpolating gauges and non-covariant gauges

We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition defining term. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter ...

14. Convergence of trajectories in fractal interpolation of stochastic processes

International Nuclear Information System (INIS)

Małysz, Robert

2006-01-01

The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation.
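A minimal fractal interpolation function sketch (a Barnsley-style affine IFS iterated on the graph points); the data points and the vertical scaling factors `d`, which control the fractal roughness, are illustrative:

```python
import numpy as np

# Fractal interpolation through data points (X[i], Y[i]): each affine map
# w_i sends the whole graph onto the piece over [X[i], X[i+1]], with
# vertical scaling |d[i]| < 1.
X = np.array([0.0, 0.5, 1.0])
Y = np.array([0.0, 0.8, 0.3])
d = np.array([0.3, 0.3])          # one vertical scaling per interval

def fif(X, Y, d, n_iter=12):
    b = X[-1] - X[0]
    pts = np.column_stack([X, Y]).astype(float)
    for _ in range(n_iter):
        new = []
        for i in range(len(X) - 1):
            a = (X[i + 1] - X[i]) / b
            e = (X[-1] * X[i] - X[0] * X[i + 1]) / b
            c = (Y[i + 1] - Y[i] - d[i] * (Y[-1] - Y[0])) / b
            f = (X[-1] * Y[i] - X[0] * Y[i + 1]
                 - d[i] * (X[-1] * Y[0] - X[0] * Y[-1])) / b
            x, y = pts[:, 0], pts[:, 1]
            new.append(np.column_stack([a * x + e, c * x + d[i] * y + f]))
        pts = np.concatenate(new)
    return pts

graph = fif(X, Y, d)
```

By construction the attractor passes through every data point, which is the defining interpolation property of a FIF; the scalings `d[i]` then tune how rough the curve is between the points.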

15. Improved Interpolation Kernels for Super-resolution Algorithms

DEFF Research Database (Denmark)

Rasti, Pejman; Orlova, Olga; Tamberg, Gert

2016-01-01

Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

16. Improvement of Hydrological Simulations by Applying Daily Precipitation Interpolation Schemes in Meso-Scale Catchments

Directory of Open Access Journals (Sweden)

Mateusz Szcześniak

2015-02-01

Full Text Available Ground-based precipitation data are still the dominant input type for hydrological models. Spatial variability in precipitation can be represented by spatially interpolating gauge data using various techniques. In this study, the effect of daily precipitation interpolation methods on discharge simulations using the semi-distributed SWAT (Soil and Water Assessment Tool) model over a 30-year period is examined. The study was carried out in 11 meso-scale (119–3935 km2) sub-catchments lying in the Sulejów reservoir catchment in central Poland. Four methods were tested: the default SWAT method (Def) based on the Nearest Neighbour technique, Thiessen Polygons (TP), Inverse Distance Weighted (IDW) and Ordinary Kriging (OK). The evaluation of methods was performed using the semi-automated calibration program SUFI-2 (Sequential Uncertainty Fitting Procedure Version 2) with two objective functions: Nash-Sutcliffe Efficiency (NSE) and the adjusted R2 coefficient (bR2). The results show that: (1) the most complex OK method outperformed the other methods in terms of NSE; and (2) OK, IDW and TP outperformed Def in terms of bR2. The median difference in daily/monthly NSE between OK and Def/TP/IDW calculated across all catchments ranged between 0.05 and 0.15, while the median difference between TP/IDW/OK and Def ranged between 0.05 and 0.07. The differences between pairs of interpolation methods were, however, spatially variable, and a part of this variability was attributed to catchment properties: catchments characterised by low station density and low coefficient of variation of daily flows experienced more pronounced improvement from using interpolation methods. Methods providing higher precipitation estimates often resulted in better model performance. The implication from this study is that appropriate consideration of spatial precipitation variability (often neglected by model users), which can be achieved using relatively simple interpolation methods, can
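The NSE objective function used to rank the interpolation methods can be sketched in a few lines (the discharge values are hypothetical):

```python
import numpy as np

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means the model is
    # no better than predicting the observed mean, negative is worse.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [12.0, 15.0, 30.0, 22.0, 18.0]    # hypothetical daily discharge, m3/s
sim = [13.0, 14.0, 28.0, 23.0, 17.0]    # hypothetical SWAT output
val = nse(obs, sim)
```

Because the denominator is the variance of the observations, NSE penalizes a model relative to the trivial "always predict the mean" benchmark, which is what makes it a meaningful cross-catchment comparison score.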

17. A comparison of different interpolation methods for wind data in Central Asia

Science.gov (United States)

Reinhardt, Katja; Samimi, Cyrus

2017-04-01

For the assessment of global climate change and its consequences, the results of computer-based climate models are of central importance. The quality of these results and the validity of the derived forecasts are strongly determined by the quality of the underlying climate data. However, in many parts of the world high-resolution data are not available. This is particularly true for many regions in Central Asia, where the density of climatological stations must often be described as sparse. Given this insufficient data base, the use of statistical methods to improve the resolution of existing climate data is of crucial importance. Only this can provide a substantial data base for a well-founded analysis of past climate changes as well as for a reliable forecast of future climate developments for the particular region. The study presented here shows a comparison of different interpolation methods for the wind components u and v for a region in Central Asia with pronounced topography. The aim of the study is to find out whether there is an optimal interpolation method which can equally be applied to all pressure levels, or whether different interpolation methods have to be applied for each pressure level. The European reanalysis data ERA-Interim for the years 1989-2015 are used as input data for the pressure levels of 850 hPa, 500 hPa and 200 hPa. In order to improve the input data, two different interpolation procedures were applied: on the one hand pure interpolation methods, such as inverse distance weighting and ordinary kriging; on the other hand machine learning algorithms, generalized additive models and regression kriging, considering additional influencing factors, e.g. geopotential and topography. As a result it can be concluded that regression kriging provides the best results for all pressure levels, followed by support vector machine, neural networks and ordinary kriging. Inverse distance weighting showed the worst

18. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

DEFF Research Database (Denmark)

Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

2011-01-01

Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.

19. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

Directory of Open Access Journals (Sweden)

Qiang DU

2018-04-01

Full Text Available Fractional delay is indispensable for many sorts of circuits and signal-processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, based on Shannon sampling and utilizing Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with the Caratheodory representation. The samples of the differential signal are obtained by the Caratheodory representation from the samples of the original signal only, so only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
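A minimal sketch of the baseline the paper improves upon: cubic Hermite fractional delay with derivative samples from a simple digital differentiator (here `np.gradient`); the Caratheodory-based sampler of the paper is not reproduced:

```python
import numpy as np

def frac_delay_hermite(x, mu):
    # Cubic Hermite interpolation between x[n] and x[n+1] evaluated at
    # n + mu (0 < mu < 1), with derivative samples from a
    # central-difference digital differentiator.
    d = np.gradient(x)
    t = mu
    h00 = 2 * t**3 - 3 * t**2 + 1       # Hermite basis polynomials
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * x[:-1] + h10 * d[:-1] + h01 * x[1:] + h11 * d[1:]

n = np.arange(64)
sig = np.sin(0.1 * n)                   # slowly varying test tone
delayed = frac_delay_hermite(sig, 0.5)  # half-sample delay
```

For this low-frequency tone the scheme is very accurate; consistent with the abstract, its error grows as the signal frequency approaches the Nyquist rate, because the central-difference derivative estimate degrades there.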

20. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

Science.gov (United States)

Schmitt, Kara Anne

This research aims to prove that strict adherence to procedures and rigid compliance with process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged and think less because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now placed on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified and validated through both peer review and application in real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and when doing so directly affects the safety or security of the plant. The findings of this research indicate that younger generations of operators rely heavily on procedures, and the organizational pressure of required compliance may lead to incidents within the plant because operators feel pressured into following rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices. The solution to

1. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection.

Directory of Open Access Journals (Sweden)

Jonathan D Power

Full Text Available Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and the reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. They also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps), and the actual timing of motion correction procedures need not be changed. We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
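A common way such head-motion estimates are summarized is framewise displacement; a minimal sketch follows (the 50 mm head radius and the motion trace are illustrative):

```python
import numpy as np

def framewise_displacement(params, radius=50.0):
    # Framewise displacement: sum of absolute backward differences of the
    # six rigid-body parameters, with rotations (radians) converted to arc
    # length on a sphere of the given head radius (mm).
    p = np.asarray(params, float).copy()
    p[:, 3:] *= radius               # rotations -> mm of surface motion
    return np.abs(np.diff(p, axis=0)).sum(axis=1)

# hypothetical motion trace: 4 volumes x 6 params (3 transl. mm, 3 rot. rad)
mot = np.array([
    [0.0, 0.0, 0.0, 0.0,   0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0,   0.0, 0.0],
    [0.1, 0.2, 0.0, 0.002, 0.0, 0.0],
    [0.1, 0.2, 0.0, 0.002, 0.0, 0.0],
])
fd = framewise_displacement(mot)
```

Because the measure is built entirely from frame-to-frame differences of the estimated parameters, anything that smooths those estimates, such as temporal interpolation, directly shrinks it.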

2. Computing Diffeomorphic Paths for Large Motion Interpolation.

Science.gov (United States)

Seo, Dohyung; Jeffrey, Ho; Vemuri, Baba C

2013-06-01

In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(M) is difficult, mainly due to the infinite dimensionality of Diff(M). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(M) to the quotient space Diff(M)/Diff(M)_μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified in the mathematics literature as the unit sphere in a Hilbert space, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms and, second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.

3. Functions with disconnected spectrum sampling, interpolation, translates

CERN Document Server

Olevskii, Alexander M

2016-01-01

The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

4. Spatiotemporal video deinterlacing using control grid interpolation

Science.gov (United States)

Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

2015-03-01

With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.

5. Power transformations improve interpolation of grids for molecular mechanics interaction energies.

Science.gov (United States)

Minh, David D L

2018-02-18

A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å and retaining the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
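
The smoothing idea is easy to prototype in one dimension: take a power-law transform of the grid energies, interpolate in the transformed space, and invert the transform. The sketch below is illustrative only, not Minh's implementation; the shift `e_min`, the power, and all numerical values are assumptions of the sketch.

```python
import numpy as np

def smoothed_interp(x, xp, energies, power=3.0, e_min=None):
    """Interpolate grid energies after a power transformation.

    Energies are shifted to be nonnegative, their `power`-th root is
    taken, the transformed values are linearly interpolated, and the
    result is mapped back by the inverse transformation.
    """
    if e_min is None:
        e_min = energies.min()          # assumed lower bound on the energies
    t = (energies - e_min) ** (1.0 / power)   # forward transform
    ti = np.interp(x, xp, t)                  # interpolate in transformed space
    return ti ** power + e_min                # inverse transform

xp = np.array([0.0, 1.0, 2.0])
ep = np.array([8.0, 0.0, 8.0])               # steep well at x = 1 (toy values)
print(smoothed_interp(0.5, xp, ep))          # milder than the plain average 4.0
```

The transform flattens steep wells before interpolation, which is why the midpoint estimate comes out far below the untransformed linear average.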

6. Development of a Boundary Layer Property Interpolation Tool in Support of Orbiter Return To Flight

Science.gov (United States)

Greene, Francis A.; Hamilton, H. Harris

2006-01-01

A new tool was developed to predict the boundary layer quantities required by several physics-based predictive/analytic methods that assess damaged Orbiter tile. This new tool, the Boundary Layer Property Prediction (BLPROP) tool, supplies boundary layer values used in correlations that determine boundary layer transition onset and surface heating-rate augmentation/attenuation factors inside tile gouges (i.e. cavities). BLPROP interpolates through a database of computed solutions and provides boundary layer and wall data (delta, theta, Re(sub theta), Re(sub theta)/M(sub e), P(sub w), and q(sub w)) based on user-input surface location and free stream conditions. Surface locations are limited to the Orbiter's windward surface. Constructed using predictions from an inviscid/boundary-layer method and benchmark viscous CFD, the computed database covers the hypersonic continuum flight regime based on two reference flight trajectories. First-order one-dimensional Lagrange interpolation accounts for Mach number and angle-of-attack variations, whereas non-dimensional normalization accounts for differences between the reference and input Reynolds number. Employing the same computational methods used to construct the database, solutions at other trajectory points taken from previous STS flights were computed; these results validate the BLPROP algorithm. Percentage differences between interpolated and computed values are presented and are used to establish the level of uncertainty of the new tool.

7. A fast network solution by the decoupled procedure during short-term dynamic processes in power systems

Energy Technology Data Exchange (ETDEWEB)

Popovic, D P; Stefanovic, M D [Nikola Tesla Inst., Belgrade (YU). Power System Dept.

1990-01-01

A simple, fast and reliable decoupled procedure for solving the network problems during short-term dynamic processes in power systems is presented. It is based on the Newton-Raphson method applied to the power balance equations, which include the effects of generator saliency and non-impedance loads, with further modifications resulting from the physical properties of the phenomena under study. The good convergence characteristics of the developed procedure are demonstrated, and a comparison is made with the traditional method based on the current equation and the triangularized admittance matrix, using the example of stability analysis of the Yugoslav power grid. (author).

8. Research of Cubic Bezier Curve NC Interpolation Signal Generator

Directory of Open Access Journals (Sweden)

Shijun Ji

2014-08-01

Full Text Available Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation techniques can only achieve circular-arc, linear, or parabolic interpolation. For the numerical control (NC) machining of parts with complicated surfaces, however, a mathematical model must be established to generate the curve and surface outlines of the parts, and the generated outline is then discretized into a large number of straight-line or arc segments for processing. This produces complex programs and a large amount of code, and it inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness, and machining efficiency. The stepless interpolation of a cubic Bezier curve controlled by an analog signal is studied in this paper: the tool motion trajectory along the Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed to the control motor, which completes the precise feed along the Bezier curve. This method extends the trajectory-control capability of CNC from simple lines and circular arcs to complex engineering curves, and it provides a new way to machine curved-surface parts economically with high quality and efficiency.
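
For reference, a cubic Bezier curve is evaluated from its four control points via the Bernstein form. The sketch below, with made-up control points rather than anything from the paper's signal generator, traces the kind of tool path the CNC system would follow.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]
    using the Bernstein polynomial form."""
    t = np.asarray(t, dtype=float)[..., None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Control points of a hypothetical tool path (assumed values).
p0, p1, p2, p3 = map(np.array, ([0., 0.], [1., 2.], [3., 2.], [4., 0.]))
pts = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 5))
print(pts[0], pts[-1])   # endpoints coincide with p0 and p3
```

Moving a control point reshapes the whole curve smoothly, which is what makes control-point adjustment a natural interface for trajectory planning.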

9. The analysis of composite laminated beams using a 2D interpolating meshless technique

Science.gov (United States)

Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.

2018-02-01

Laminated composite materials are widely used in engineering construction. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolation meshless techniques available in the literature, the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with results obtained from a commercial finite element software package. The results show the efficiency and accuracy of the proposed numerical technique.

10. Shape-based interpolation of multidimensional grey-level images

International Nuclear Information System (INIS)

Grevera, G.J.; Udupa, J.K.

1996-01-01

Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
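
The binary shape-based method described above can be sketched directly: compute signed distance maps of two neighbouring binary slices, blend them linearly, and threshold at zero. The brute-force distance transform below is for illustration on tiny images only; it is not the authors' implementation.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance map of a small binary image:
    positive inside the object, negative outside (the paper's convention)."""
    pts = np.argwhere(mask)
    bg = np.argwhere(~mask)
    d = np.empty(mask.shape)
    for idx in np.ndindex(mask.shape):
        other = bg if mask[idx] else pts
        dist = np.sqrt(((other - np.array(idx)) ** 2).sum(axis=1)).min()
        d[idx] = dist if mask[idx] else -dist
    return d

def shape_based_slice(mask_a, mask_b, alpha=0.5):
    """Interpolate a binary slice between mask_a and mask_b by linearly
    blending their signed distance maps and thresholding at zero."""
    d = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return d > 0

a = np.zeros((7, 7), bool); a[2:5, 2:5] = True   # small square
b = np.zeros((7, 7), bool); b[1:6, 1:6] = True   # larger square
mid = shape_based_slice(a, b)
print(mid.sum())   # object size stays between a.sum() and b.sum()
```

Because the blend happens in distance space rather than intensity space, the interpolated object morphs between the two shapes instead of fading.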

11. On Multiple Interpolation Functions of the q-Genocchi Polynomials

Directory of Open Access Journals (Sweden)

Jin Jeong-Hee

2010-01-01

Full Text Available Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work "New approach to q-Euler, Genocchi numbers and their interpolation functions" (Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009), Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

12. Spectral interpolation - Zero fill or convolution. [image processing

Science.gov (United States)

Forman, M. L.

1977-01-01

Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
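
Zero fill itself is only a few lines of NumPy: pad the spectrum with zeros and take a longer inverse FFT. This baseline sketch is what the repetitive-convolution method is compared against; the scaling convention used here is an assumption of the sketch, not taken from the paper.

```python
import numpy as np

def zero_fill_interp(x, factor):
    """Interpolate a uniformly sampled real signal by zero-padding
    its spectrum (zero fill) and inverse transforming."""
    n = len(x)
    X = np.fft.rfft(x)
    X_pad = np.zeros(n * factor // 2 + 1, dtype=complex)
    X_pad[:len(X)] = X
    # Scale so amplitudes are preserved after the longer inverse FFT.
    return np.fft.irfft(X_pad, n * factor) * factor

t = np.arange(8)
x = np.cos(2 * np.pi * t / 8)          # one cycle of a cosine
y = zero_fill_interp(x, 4)             # 32 samples over the same span
```

For a band-limited input like this cosine, every fourth output sample reproduces the original data exactly; the new samples fall on the underlying continuous waveform.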

13. Steady State Stokes Flow Interpolation for Fluid Control

DEFF Research Database (Denmark)

Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

2012-01-01

Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics (velocity extrapolation in the normal direction and potential flow) suffer from a common problem: they fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...

14. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

Directory of Open Access Journals (Sweden)

Shengjun Liu

2015-01-01

Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize the given planar data. Constraints are derived on these free shape parameters to generate shape preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is shown to be O(h³). Numerical experiments demonstrate that our method can construct shape preserving interpolation curves efficiently.

15. Chiral properties of baryon interpolating fields

International Nuclear Information System (INIS)

Nagata, Keitaro; Hosaka, Atsushi; Dmitrasinovic, V.

2008-01-01

We study the chiral transformation properties of all possible local (non-derivative) interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We derive and use the relations/identities among the baryon operators with identical quantum numbers that follow from the combined color, Dirac and isospin Fierz transformations. These relations reduce the number of independent baryon operators with any given spin and isospin. The Fierz identities also effectively restrict the allowed baryon chiral multiplets. It turns out that the non-derivative baryons' chiral multiplets have the same dimensionality as their Lorentz representations. For the two independent nucleon operators the only permissible chiral multiplet is the fundamental one, (1/2, 0) + (0, 1/2). For the Δ, admissible Lorentz representations are (1, 1/2) + (1/2, 1) and (3/2, 0) + (0, 3/2). In the case of the (1, 1/2) + (1/2, 1) chiral multiplet, the I(J) = 3/2(3/2) Δ field has one I(J) = 1/2(3/2) chiral partner; otherwise it has none. We also consider the Abelian (U_A(1)) chiral transformation properties of the fields and show that each baryon comes in two varieties: (1) with Abelian axial charge +3; and (2) with Abelian axial charge -1. In the case of the nucleon these are the two Ioffe fields; in the case of the Δ, the (1, 1/2) + (1/2, 1) multiplet has Abelian axial charge -1 and the (3/2, 0) + (0, 3/2) multiplet has Abelian axial charge +3. (orig.)

16. MODIS Snow Cover Recovery Using Variational Interpolation

Science.gov (United States)

Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

2017-12-01

Cloud obscuration is one of the major problems that limit the use of satellite images in general, and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational deficiency is the main drawback that limits applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking up when applied to much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method, an ability that is obtained by maintaining the distribution of the weight set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images capture the dynamical changes of snow with high accuracy, in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which provides an overview of snow trends over CONUS for nearly two decades. ACKNOWLEDGMENTS: We would like to acknowledge NASA, the NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), the Cooperative Institute for Climate and Satellites (CICS), the Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.

17. Comparison of two fractal interpolation methods

Science.gov (United States)

Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

2017-03-01

As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method show a relatively flat surface with peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method show a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method at the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicates that the simulation of the midpoint displacement method is not periodic, with prominent randomness, which is suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has...
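
A minimal 1D midpoint displacement generator illustrates the first of the two compared methods; the Hurst parameterization, seed, and sizes here are assumptions of the sketch, not the study's settings.

```python
import numpy as np

def midpoint_displacement(n_levels, hurst=0.5, rng=None):
    """1D fractal profile by recursive midpoint displacement.
    `hurst` in (0, 1) sets the roughness (fractal dimension D = 2 - hurst)."""
    rng = np.random.default_rng(rng)
    y = np.array([0.0, 0.0])            # fixed endpoints
    scale = 1.0
    for _ in range(n_levels):
        # Displace each midpoint by a Gaussian random offset.
        mids = 0.5 * (y[:-1] + y[1:]) + rng.normal(0.0, scale, len(y) - 1)
        out = np.empty(2 * len(y) - 1)
        out[::2], out[1::2] = y, mids   # interleave old points and midpoints
        y = out
        scale *= 2.0 ** (-hurst)        # shrink displacements each level
    return y

profile = midpoint_displacement(8, hurst=0.7, rng=0)
print(len(profile))   # 2**8 + 1 = 257 samples
```

Smaller `hurst` values shrink the displacements more slowly per level, producing the rougher, higher-dimension surfaces discussed in the abstract.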

18. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

National Research Council Canada - National Science Library

Ingel, R

1999-01-01

... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

19. Reduction of numerical diffusion in three-dimensional vortical flows using a coupled Eulerian/Lagrangian solution procedure

Science.gov (United States)

Felici, Helene M.; Drela, Mark

1993-01-01

A new approach based on the coupling of an Eulerian and a Lagrangian solver, aimed at reducing the numerical diffusion errors of standard Eulerian time-marching finite-volume solvers, is presented. The approach is applied to the computation of the secondary flow in two bent pipes and the flow around a 3D wing. Using convected point markers, the Lagrangian approach provides a correction of the basic Eulerian solution. The Eulerian flow field, in turn, is used to integrate the Lagrangian state vector in time. A comparison of coarse and fine grid Eulerian solutions makes it possible to identify numerical diffusion. It is shown that the Eulerian/Lagrangian approach is an effective method for reducing numerical diffusion errors.

20. Pre-physical treatment: an important procedure to improve spectral resolution in polymers microstructure studies using 13C solution NMR

International Nuclear Information System (INIS)

Pedroza, Oscar J.O.; Tavares, Maria I.B.

2004-01-01

Changes in the physical properties of polymeric materials can be evaluated from their microstructures, which can be investigated using solution carbon-13 nuclear magnetic resonance (NMR). In this type of study spectral resolution is very important, and it obviously depends on the sample and solvent. A pre-physical treatment allows for an improvement in the spectral resolution. Consequently, more information on chain linking can be obtained, thus facilitating the determination of the stereo sequences. (author)

1. Rhie-Chow interpolation in strong centrifugal fields

Science.gov (United States)

Bogovalov, S. V.; Tronin, I. V.

2015-10-01

Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10^6 g) occurring in gas centrifuges.

2. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

Directory of Open Access Journals (Sweden)

Xiaowei Niu

2014-05-01

Full Text Available Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. The polynomial-based interpolation filters can be implemented efficiently by a modified Farrow structure with an arbitrary frequency response; the filters allow multiple passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimal coefficients of the interpolation filters in the time domain are obtained by minimizing a weighted mean squared error function, which is converted into a quadratic programming problem. The optimal coefficients in the frequency domain are obtained by minimizing the maximum (MiniMax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
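
The Farrow structure mentioned above can be illustrated with the classic cubic-Lagrange case: four fixed sub-filter outputs become polynomial coefficients in the fractional delay mu, evaluated by Horner's rule, so only mu changes per output sample. This is a textbook sketch, not the paper's optimized filter design.

```python
import numpy as np

def farrow_cubic(x, n, mu):
    """Cubic-Lagrange fractional-delay interpolation in Farrow form:
    interpolate between x[n] and x[n+1] at fractional position mu."""
    x0, x1, x2, x3 = x[n - 1], x[n], x[n + 1], x[n + 2]
    # Fixed sub-filter outputs (polynomial coefficients in mu).
    c0 = x1
    c1 = x2 - x0 / 3.0 - x1 / 2.0 - x3 / 6.0
    c2 = (x0 + x2) / 2.0 - x1
    c3 = (x3 - x0) / 6.0 + (x1 - x2) / 2.0
    # Horner evaluation: one multiply-accumulate chain in mu.
    return ((c3 * mu + c2) * mu + c1) * mu + c0

x = np.arange(6.0) ** 2
print(farrow_cubic(x, 2, 0.5))   # 6.25: exact for polynomials up to cubic
```

The hardware appeal is that the four coefficient computations are fixed FIR operations, while the fractional delay enters only through the cheap Horner chain.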

3. [Multimodal medical image registration using cubic spline interpolation method].

Science.gov (United States)

He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

2007-12-01

Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed: cubic spline interpolation is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used to fuse the PET-CT multimodal images and enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. Cubic spline interpolation is used in reconstruction to restore the information missing between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. The cubic spline interpolation method has also been successfully applied in the development of a 3D-CRT (3D Conformal Radiation Therapy) system.
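
Interpolating between slices with a cubic spline can be sketched with SciPy's `CubicSpline` along the slice axis; the slice positions and intensities below are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical stack of 2D slices acquired at coarse z positions.
z = np.array([0.0, 3.0, 6.0, 9.0])     # slice positions (mm, assumed)
slices = np.stack([np.full((4, 4), v) for v in (0.0, 9.0, 36.0, 81.0)])

# Fit a cubic spline along the slice axis and resample every 1 mm.
cs = CubicSpline(z, slices, axis=0)
fine = cs(np.arange(0.0, 9.1, 1.0))    # shape (10, 4, 4)
print(fine.shape)
```

Passing `axis=0` makes the spline operate per pixel across the slice stack, so the whole volume is resampled in one vectorized call.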

4. Interpolating and sampling sequences in finite Riemann surfaces

OpenAIRE

Ortega-Cerda, Joaquim

2007-01-01

We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

5. Illumination estimation via thin-plate spline interpolation.

Science.gov (United States)

Shi, Lilong; Xiong, Weihua; Funt, Brian

2011-05-01

Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
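
SciPy's `RBFInterpolator` provides thin-plate spline interpolation out of the box. The sketch below mirrors the training/lookup pattern described above, but with fabricated two-dimensional "features" standing in for image thumbnails and fabricated chromaticities; it is not the authors' pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Training set: hypothetical image-feature coordinates mapped to
# illumination chromaticity (r, g); all values are made up.
feats = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
chroma = np.array([[0.30, 0.35], [0.32, 0.34], [0.31, 0.36],
                   [0.33, 0.33], [0.31, 0.35]])

# Thin-plate spline interpolant over the nonuniformly sampled input space.
tps = RBFInterpolator(feats, chroma, kernel="thin_plate_spline")
est = tps(np.array([[0.4, 0.4]]))      # chromaticity for an unseen image
print(est.shape)                       # (1, 2)
```

With the default zero smoothing the spline reproduces the training chromaticities exactly, which is the interpolation behaviour the method relies on.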

6. Fast image interpolation for motion estimation using graphics hardware

Science.gov (United States)

Kelly, Francis; Kokaram, Anil

2004-05-01

Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
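
Bilinear interpolation, the usual choice for sub-pixel block matching, weights the four neighbouring pixels by the fractional offsets. This CPU sketch shows the arithmetic the graphics hardware accelerates; the image values are arbitrary.

```python
import numpy as np

def bilinear(img, y, x):
    """Sample image intensity at a sub-pixel location (y, x) by
    bilinear interpolation of the four surrounding pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear(img, 0.5, 0.5))   # 15.0, the average of the four neighbours
```

GPUs implement exactly this weighting in texture-sampling units, which is why offloading interpolation to graphics hardware pays off for dense motion search.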

7. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

Institute of Scientific and Technical Information of China (English)

2007-01-01

In the display, manipulation, and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and six methods, each with a different sharpness control parameter, are formulated in detail. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.

8. Interpolation and sampling in spaces of analytic functions

CERN Document Server

Seip, Kristian

2004-01-01

The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

9. Energy-Driven Image Interpolation Using Gaussian Process Regression

Directory of Open Access Journals (Sweden)

Lingling Zi

2012-01-01

Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
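
The statistical half of such an algorithm can be sketched with a plain Gaussian-process posterior mean under an RBF kernel, hand-rolled below. The kernel length scale, noise level, and pixel values are arbitrary choices of the sketch, and the paper's energy-computation component is omitted.

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, noise=1e-6):
    """Posterior mean of Gaussian process regression with an RBF kernel:
    predict values at Xs from training inputs X and targets y."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))      # regularized Gram matrix
    return k(Xs, X) @ np.linalg.solve(K, y)   # k_* K^{-1} y

# Known pixel centres and intensities (toy values); predict an
# interpolated pixel at a training location as a sanity check.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([10.0, 20.0, 10.0])
print(gpr_predict(X, y, np.array([[0.0]])))   # close to [10.] at a training input
```

In the full algorithm each interpolated pixel would combine a prediction like this with the energy term computed from local pixel properties.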

10. Spatial interpolation of point velocities in stream cross-section

Directory of Open Access Journals (Sweden)

Hasníková Eliška

2015-03-01

Full Text Available The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. Measurement with this instrument yields only a small set of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of interpolation models.

11. Comparing interpolation schemes in dynamic receive ultrasound beamforming

DEFF Research Database (Denmark)

Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

2005-01-01

In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...

12. At the ethical crossroads: how a gastroenterology procedure unit negotiated a solution for a reoccurring ethical dilemma.

Science.gov (United States)

Gair, Jonathan

2013-01-01

The gastroenterology procedures environment has proven to be fertile ground for moral distress as it relates to the practice of nursing. Specifically, nurses are expected to fulfill their duty as advocates for their clients at all times and within all contexts; however, their ability to discharge this essential function has been complicated by such influential factors as sedating medications, competing ethical motivations, discordant conclusions of moral reasoning and action, and competing institutional factors. This article begins with a fictional case study to introduce readers to the contextual essence of the moral distress that a group of gastroenterology nurses was collectively experiencing. Subsequently, the aim of this article is to explicate how one department, with the aid of an ethics committee, negotiated a process similar to the case study to develop a pragmatic policy and identify an educational primer that encourages nurses to reexamine and value the tangible realities inherent in and expected of an advocate in the dynamically complex environments where gastroenterology nurses practice.

13. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

Science.gov (United States)

Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

2017-12-01

Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and interpolation procedures are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good rainfall interpolation, while NN gave the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method
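The gauge-screening step described above can be sketched as follows. Only the Pearson-correlation filter is shown (not the outlier censoring or the hydrological model), and the threshold and rainfall series are hypothetical:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

def keep_gauge(daily_gauge, daily_reference, threshold=0.8):
    """Retain a gauge only if its daily accumulations track the reference."""
    return pearson(daily_gauge, daily_reference) >= threshold

# Hypothetical daily accumulations (mm) over one week:
reference  = [0.0, 5.0, 12.0, 3.0, 0.0, 20.0, 8.0]
gauge_good = [0.0, 6.0, 11.0, 2.0, 0.0, 19.0, 9.0]
gauge_bad  = [10.0, 0.0, 0.0, 15.0, 2.0, 1.0, 30.0]
```

Here `gauge_good` passes the filter while `gauge_bad` is eliminated; the surviving gauges would then feed the spatial interpolation.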

14. Subsurface temperature maps in French sedimentary basins: new data compilation and interpolation

International Nuclear Information System (INIS)

Bonte, D.; Guillou-Frottier, L.; Garibaldi, C.; Bourgine, B.; Lopez, S.; Bouchot, V.; Garibaldi, C.; Lucazeau, F.

2010-01-01

Assessment of the underground geothermal potential requires the knowledge of deep temperatures (1-5 km). Here, we present new temperature maps obtained from oil boreholes in the French sedimentary basins. Because of their origin, the data need to be corrected, and their local character necessitates spatial interpolation. Previous maps were obtained in the 1970s using empirical corrections and manual interpolation. In this study, we update the number of measurements by using values collected during the last thirty years, correct the temperatures for transient perturbations and carry out statistical analyses before modelling the 3D distribution of temperatures. This dataset provides 977 temperatures corrected for transient perturbations in 593 boreholes located in the French sedimentary basins. An average temperature gradient of 30.6 deg. C/km is obtained for a representative surface temperature of 10 deg. C. When the surface temperature is not accounted for, deep measurements are best fitted with a temperature gradient of 25.7 deg. C/km. We perform a geostatistical analysis on a residual temperature dataset (using a drift of 25.7 deg. C/km) to constrain the 3D kriging interpolation procedure with horizontal and vertical variogram models. The interpolated residual temperatures are added to the country-scale averaged drift in order to obtain a three-dimensional thermal structure of the French sedimentary basins. The 3D thermal block enables us to extract isothermal surfaces and 2D sections (iso-depth maps and iso-longitude cross-sections). A number of anomalies with limited depth and spatial extension have been identified, from shallow in the Rhine graben and Aquitanian basin to deep in the Provence basin. Some of these anomalies (Paris basin, Alsace, south of the Provence basin) may be partly related to thick insulating sediments, while for others (southwestern Aquitanian basin, part of the Provence basin) large-scale fluid circulation may explain superimposed
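The drift-plus-residual decomposition used to condition the kriging can be illustrated with the gradient quoted in the abstract (25.7 deg. C/km). The borehole values and the kriged residual below are invented for the example:

```python
DRIFT = 25.7  # country-scale average gradient from the abstract, deg C/km

def residual(temp_c, depth_km):
    """Remove the linear drift before geostatistical analysis."""
    return temp_c - DRIFT * depth_km

def restore(resid_c, depth_km):
    """Add the drift back after interpolating the residuals."""
    return resid_c + DRIFT * depth_km

# Hypothetical borehole: 92 deg C measured at 3 km depth.
r = residual(92.0, 3.0)      # anomaly relative to the drift, about 14.9 deg C
# Suppose kriging the residual field gives 12.0 deg C at 2.5 km depth:
t_hat = restore(12.0, 2.5)   # interpolated temperature, 76.25 deg C
```

Interpolating the (roughly stationary) residual field rather than raw temperatures is what allows a single variogram model to be used over the full depth range.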

15. 5-D interpolation with wave-front attributes

Science.gov (United States)

Xie, Yujiang; Gajewski, Dirk

2017-11-01

Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative class of interpolation methods uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information of subsurface features like the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that

16. Interpolation of Gamma-ray buildup Factors for Arbitrary Source Energies in the Vicinity of the K-edge

International Nuclear Information System (INIS)

Michieli, I.

1998-01-01

Recently, a new buildup factor approximation formula based on the expanded polynomial set (E-P function) was successfully introduced (Michieli, 1994), with a maximum approximation error below 4% throughout the standard data domain. Buildup factor interpolation in E-P function parameters for arbitrary source energies near the K-edge in lead was satisfactory. The maximum interpolation error for lead lies within 12%, which appears acceptable for most point-kernel applications. In 1991, Harima et al. showed that, near the K-edge, the fluctuation in energy of exposure rate attenuation factors, i.e. D(E)B(E, μ_E r)exp(-μ_E r), given as a function of penetration depth (r) in ordinary length units (not mfps), is not nearly as great as that of the buildup factors. That phenomenon leads to the recommendation (ANSI/ANS-6.4.3) that interpolations in that energy range should be made in the attenuation factors B(E, μ_E r)exp(-μ_E r) rather than in the buildup factors alone. In the present article, such an interpolation approach is investigated by applying it to the attenuation factors in lead, with an E-P function representation of exposure buildup factors. The simple form of the E-P function leads to a straightforward calculation of new function parameters for an arbitrary source energy near the K-edge, allowing the same representation form of the buildup factors as in the standard interpolation procedure. Results of the interpolation are discussed and compared with those from the standard approach. (author)
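The ANSI/ANS-6.4.3 recommendation of interpolating attenuation factors rather than buildup factors can be sketched as follows. The linear-in-energy interpolation and all numerical inputs are illustrative assumptions, not the E-P parameterization or lead data used in the paper:

```python
import math

def interp_buildup_via_attenuation(E, E1, E2, B1, B2, mu1, mu2, mu, r):
    """Interpolate the attenuation factor A = B*exp(-mu*r) linearly in energy
    between two tabulated energies E1 < E < E2, then convert back to a
    buildup factor at energy E. mu is the attenuation coefficient at E.
    """
    f = (E - E1) / (E2 - E1)
    A1 = B1 * math.exp(-mu1 * r)
    A2 = B2 * math.exp(-mu2 * r)
    A = (1.0 - f) * A1 + f * A2
    return A * math.exp(mu * r)

# Hypothetical tabulated values bracketing a K-edge region (not lead data):
B_mid = interp_buildup_via_attenuation(0.09, 0.08, 0.10, 2.0, 3.0, 1.0, 0.5, 0.75, 2.0)
```

At the tabulated endpoints the scheme reproduces the tabulated buildup factors exactly; in between, it tracks the smoother attenuation factor rather than the strongly fluctuating buildup factor.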

17. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

Energy Technology Data Exchange (ETDEWEB)

Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)

2015-01-21

A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
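A one-dimensional toy version of moment morphing may help fix ideas: the first two moments are interpolated linearly in the model parameter, each template is shifted and scaled to those moments, and the results are combined linearly. This is a sketch of the general idea only, not the paper's multi-dimensional prescription:

```python
import math

def gauss(m, s):
    """Normal density with mean m and width s, as a callable template."""
    return lambda x: math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def morph(theta, theta0, theta1, pdf0, m0, s0, pdf1, m1, s1):
    """Morph between templates pdf0 (moments m0, s0) and pdf1 (m1, s1):
    interpolate the moments linearly in theta, map each template onto the
    interpolated moments, and combine linearly."""
    f = (theta - theta0) / (theta1 - theta0)
    m = (1 - f) * m0 + f * m1          # interpolated mean
    s = (1 - f) * s0 + f * s1          # interpolated width
    def pdf(x):
        t0 = pdf0(m0 + (x - m) * s0 / s) * (s0 / s)  # template 0 at moments (m, s)
        t1 = pdf1(m1 + (x - m) * s1 / s) * (s1 / s)  # template 1 at moments (m, s)
        return (1 - f) * t0 + f * t1
    return pdf

# Morph between N(0,1) at theta=0 and N(2,1) at theta=1; halfway this is exactly N(1,1):
pdf_half = morph(0.5, 0.0, 1.0, gauss(0.0, 1.0), 0.0, 1.0, gauss(2.0, 1.0), 2.0, 1.0)
```

Note the contrast with naive bin-by-bin averaging, which would produce a spurious two-bump shape halfway between two shifted peaks.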

18. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

International Nuclear Information System (INIS)

Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.

2015-01-01

A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.

19. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

CERN Document Server

Baak, Max; Harrington, Robert; Verkerke, Wouter

2014-01-01

A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.

20. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

CERN Document Server

Baak, Max; Harrington, Robert; Verkerke, Wouter

2015-01-01

A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.

1. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

International Nuclear Information System (INIS)

Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

2017-01-01

This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross-section-independent fashion by considering the kernels of the different operators that convert cross-section-related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L_2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The choice of reference temperatures (T_j) is then optimized so as to best reconstruct, in the L_∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them on the isotope ²³⁸U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the ²³⁸U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
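The L_2-optimal linear combination can be sketched by discretizing the kernels on a grid and solving the resulting normal equations. Gaussian stand-in kernels are used below purely for illustration (the true Doppler broadening kernel is not Gaussian), and the two-reference case is solved by Cramer's rule:

```python
import math

def dot(u, v, dx):
    """Discrete L2 inner product of two sampled kernels."""
    return sum(a * b for a, b in zip(u, v)) * dx

def l2_coefficients(target, refs, dx):
    """L2-optimal coefficients for two reference kernels: solve the 2x2
    normal equations G a = b by Cramer's rule."""
    g11 = dot(refs[0], refs[0], dx)
    g12 = dot(refs[0], refs[1], dx)
    g22 = dot(refs[1], refs[1], dx)
    b1 = dot(target, refs[0], dx)
    b2 = dot(target, refs[1], dx)
    det = g11 * g22 - g12 * g12
    return ((b1 * g22 - b2 * g12) / det, (g11 * b2 - g12 * b1) / det)

def kernel(temp, xs):
    """Stand-in 'broadening kernel': a Gaussian whose width grows with
    temperature (illustrative only; real Doppler kernels are not Gaussian)."""
    s = math.sqrt(temp / 300.0)
    return [math.exp(-0.5 * (x / s) ** 2) / (s * math.sqrt(2.0 * math.pi)) for x in xs]

dx = 0.01
xs = [i * dx - 5.0 for i in range(1001)]
refs = (kernel(300.0, xs), kernel(1200.0, xs))
a1, a2 = l2_coefficients(kernel(600.0, xs), refs, dx)  # weights for T = 600 K
```

Once the coefficients are known, the same linear combination applies to any cross-section-related quantity broadened by those kernels, which is the cross-section-independent formalization the abstract describes.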

2. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

International Nuclear Information System (INIS)

Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

2017-01-01

This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In this process, this article introduces a novel regression procedure for intrinsic variables constrained to a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in the ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in the original, principal component, and diffusion spaces are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.

3. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

International Nuclear Information System (INIS)

Fan, J Y; Wang, F G; W, Y; Zhang, Y L

2006-01-01

Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data: interpolation of the data on the stripes, and interpolation of the data between the stripes. A trend interpolation method is applied to the data on the stripes, and a Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove the regional interpolation method feasible and practical, and show that it also improves speed and precision.

4. An Integrated Solution-Based Rapid Sample Preparation Procedure for the Analysis of N-Glycans From Therapeutic Monoclonal Antibodies.

Science.gov (United States)

Aich, Udayanath; Liu, Aston; Lakbub, Jude; Mozdzanowski, Jacek; Byrne, Michael; Shah, Nilesh; Galosy, Sybille; Patel, Pramthesh; Bam, Narendra

2016-03-01

Consistent glycosylation in therapeutic monoclonal antibodies is a major concern in the biopharmaceutical industry as it impacts the drug's safety and efficacy and manufacturing processes. Large numbers of samples are created for the analysis of glycans during various stages of recombinant protein drug development. Profiling and quantifying protein N-glycosylation is important but extremely challenging due to its microheterogeneity and, more importantly, the limitations of existing time-consuming sample preparation methods. Thus, a quantitative method with fast sample preparation is crucial for understanding, controlling, and modifying the glycoform variance in therapeutic monoclonal antibody development. Presented here is a rapid and highly quantitative method for the analysis of N-glycans from monoclonal antibodies. The method comprises a simple and fast solution-based sample preparation method that uses nontoxic reducing reagents for direct labeling of N-glycans. The complete workflow for the preparation of fluorescently labeled N-glycans takes a total of 3 h, with less than 30 min needed for the release of N-glycans from monoclonal antibody samples. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

5. Preparation procedure and certification of uranous-uranic oxide and nitric acid solution of neptunium as standard specimens of plant

International Nuclear Information System (INIS)

Bulyanitsa, L.S.; Lipovskij, A.A.; Ryzhinskij, M.V.; Preobrazhensskaya, L.D.; Aleksandruk, V.M.; Alekseeva, N.A.; Gromova, E.A.; Solntseva, L.F.; Shereshevskaya, I.I.

1981-01-01

Two techniques for the certification of standard specimens of plant (SSP) are considered. The first technique, comparison with the initial standard specimen (metallic uranium NBS-960), is used for the certification of uranium protoxide-oxide. The mass fraction of the sum of analyzed impurities in the prepared initial SS is (8.4±0.8)×10⁻³ %. For certification of the uranium mass fraction, gravimetric potentiometric titration with a semiautomatic titrator is used; the mean quadratic deviation of the method is s = 0.0002-0.0003, and the certified value of the uranium mass fraction in the SSP (taking account of the error of the initial SS) is (84.80±0.02)%. The second technique, a simplified circular experiment, is used for certification of the SSP nitric acid solution of neptunium with respect to the Np mass fraction. Coulometry at controlled potential, coulometry at controlled current, and two variants of potentiometric titration are used as certification methods of analysis. Relative mean quadratic deviations of the methods are s_r = 0.0014-0.0023. When calculating the total error of the certified value of the neptunium mass fraction, both the accidental and the unremoved systematic errors of the methods were included. The final certification result for the SSP is (5.707±0.018)%

6. A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY

Directory of Open Access Journals (Sweden)

Hussein Atoui

2011-05-01

Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieved performance similar to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).

7. Sparse representation based image interpolation with nonlocal autoregressive modeling.

Science.gov (United States)

Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

2013-04-01

Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

8. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

Science.gov (United States)

Soleimani, H.; Khosravifard, M.A.

2011-01-01

Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods yield some artifacts on the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
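The Partial Volume scheme discussed above can be sketched as follows: instead of interpolating an intensity, the bilinear weights of a transformed point are distributed over the joint-histogram entries of its four neighbours. The image and the update interface below are hypothetical:

```python
import math

def pv_update(joint_hist, ref_value, x, y, float_img):
    """Partial Volume update: distribute the bilinear weights of the
    transformed point (x, y) over the joint-histogram entries of the four
    neighbouring floating-image pixels, instead of interpolating an
    intensity. `joint_hist` maps (ref, float) intensity pairs to weight."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    for i, j, w in ((x0, y0, (1 - fx) * (1 - fy)),
                    (x0 + 1, y0, fx * (1 - fy)),
                    (x0, y0 + 1, (1 - fx) * fy),
                    (x0 + 1, y0 + 1, fx * fy)):
        key = (ref_value, float_img[j][i])
        joint_hist[key] = joint_hist.get(key, 0.0) + w

# One update for a 2x2 floating image (hypothetical intensities):
float_img = [[0, 1],
             [2, 3]]
hist = {}
pv_update(hist, 5, 0.25, 0.5, float_img)  # weights 0.375, 0.125, 0.375, 0.125
```

Each update adds a total weight of exactly 1 to the histogram, so the normalized histogram remains a valid joint probability estimate from which mutual information can be computed.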

9. The interpolation method of stochastic functions and the stochastic variational principle

International Nuclear Information System (INIS)

Liu Xianbin; Chen Qiu

1993-01-01

-order stochastic finite element equations are not very reasonable. On the other hand, the Galerkin method is promising; along with it, the projection principle has been advanced to solve stochastic operator equations. In the Galerkin method, by projecting the stochastic solution functions into a subspace of the solution function space, the treatment of the stochasticity of the structural physical properties and the loads is reasonable. However, the construction or selection of a subspace of the solution function space, which is a Hilbert space of stochastic functions, is difficult, and furthermore there is no reasonable rule to measure how well the subspace approximates the solution function space. In the stochastic finite element method, the discretization of stochastic functions in space and time is very important; so far, the discretization patterns consist of the local average theory, the interpolation method, and the orthogonal expansion method. Although the local average theory has been successful for stationary random fields, it is not suitable for non-stationary ones. For general stochastic functions, whether stationary or not, the interpolation method is available. In the present paper, the authors show that the error between the true solution function and its approximation (its projection onto the subspace) depends continuously on the errors between the stochastic functions and their interpolation functions, and the latter depend continuously on the scales of the discrete elements; hence the interpolation method of stochastic functions is convergent. That is to say, the approximate solution functions converge to the true solution functions as the scales of the discrete elements become smaller and smaller.
Using the interpolation method, a basis of a subspace of the solution function space is constructed in this paper, and by means of combining the projection principle and

10. Interpolant tree automata and their application in Horn clause verification

DEFF Research Database (Denmark)

Kafle, Bishoksan; Gallagher, John Patrick

2016-01-01

This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. Evaluation on Horn clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead...

11. Interpolation of vector fields from human cardiac DT-MRI

International Nuclear Information System (INIS)

Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

2011-01-01

There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.

12. Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.

Science.gov (United States)

Pansky, Ainat; Tenenboim, Einat

2011-01-01

In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.

13. Interpolation-free scanning and sampling scheme for tomographic reconstructions

International Nuclear Information System (INIS)

Donohue, K.D.; Saniie, J.

1987-01-01

In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation
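The idea of deriving projection angles from the geometry of the Cartesian grid can be illustrated by taking angles whose tangents are ratios of small coprime integers, so that rays sampled at the matching rate pass exactly through grid points. This is only an illustration of the geometric principle; the paper's derivation of the optimal angle set and per-angle sampling rates is not reproduced here:

```python
import math

def grid_aligned_angles(max_pq=3):
    """Projection angles theta = atan(p/q) with p, q small coprime integers.
    Rays at these angles, sampled at a rate matched to the angle, pass
    exactly through Cartesian grid points, so no interpolation is needed."""
    angles = set()
    for p in range(0, max_pq + 1):
        for q in range(1, max_pq + 1):
            if math.gcd(p, q) == 1:           # coprime: distinct direction
                angles.add(math.degrees(math.atan2(p, q)))
    return sorted(angles)

angles = grid_aligned_angles(3)   # 8 distinct angles between 0 and 90 degrees
```

The per-angle sampling rate then follows from the grid spacing along the ray direction, which is what lets the reconstruction land exactly on display-grid points.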

14. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

Science.gov (United States)

Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

2014-01-01

Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
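How interpolation uncertainty varies with the location of a resampled point relative to the base grid can be seen in a toy Gaussian-process example with two noise-free observations, where the 2x2 kernel matrix is inverted in closed form. The kernel, length scale and grid are assumptions for illustration, not the paper's model:

```python
import math

def rbf(a, b, ell=0.5):
    """Squared-exponential covariance between grid locations a and b."""
    return math.exp(-0.5 * ((a - b) / ell) ** 2)

def posterior_variance(x, grid=(0.0, 1.0), ell=0.5, jitter=1e-9):
    """GP posterior variance at x given noise-free samples at the two grid
    points (2x2 kernel matrix inverted in closed form)."""
    x1, x2 = grid
    k11 = rbf(x1, x1, ell) + jitter
    k22 = rbf(x2, x2, ell) + jitter
    k12 = rbf(x1, x2, ell)
    det = k11 * k22 - k12 * k12
    kv = (rbf(x, x1, ell), rbf(x, x2, ell))
    # quad = kv^T K^{-1} kv, written out for the 2x2 case
    quad = (k22 * kv[0] ** 2 - 2.0 * k12 * kv[0] * kv[1] + k11 * kv[1] ** 2) / det
    return rbf(x, x, ell) - quad

v_on = posterior_variance(0.0)    # at a grid point: essentially zero
v_mid = posterior_variance(0.5)   # midway between grid points: maximal
```

The variance vanishes at the grid points and peaks midway between them, which is exactly the spatially varying interpolation uncertainty the registration model integrates over.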

15. Image interpolation used in three-dimensional range data compression.

Science.gov (United States)

Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

2016-05-20

Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.

16. Importance of interpolation and coincidence errors in data fusion

Science.gov (United States)

Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

2018-02-01

The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

17. An adaptive interpolation scheme for molecular potential energy surfaces

Science.gov (United States)

Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

2016-08-01

The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

18. Estimating monthly temperature using point based interpolation techniques

Science.gov (United States)

Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

2013-04-01

This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia using data of year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with a thin plate spline model is suitable to be used as a temperature estimator for the months of January and December, while RBF with a multiquadric model is suitable to estimate the temperature for the rest of the months.
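
The IDW estimator used as one of the two point-based methods can be sketched in a few lines (a generic textbook IDW, not the study's exact configuration; the power parameter and names are assumptions):

```python
def idw(points, query, power=2.0):
    """Inverse Distance Weighted estimate at `query` from (x, y, value)
    stations; weights are 1 / distance**power."""
    num = den = 0.0
    for px, py, v in points:
        d2 = (px - query[0]) ** 2 + (py - query[1]) ** 2
        if d2 == 0.0:
            return v  # query coincides with a station: return its value
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den
```

The estimate is always a convex combination of the station values, so it stays within their range, one reason IDW is a safe baseline for temperature mapping.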

19. Multi-dimensional cubic interpolation for ICF hydrodynamics simulation

International Nuclear Information System (INIS)

Aoki, Takayuki; Yabe, Takashi.

1991-04-01

A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) method is greatly improved by assuming the continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. In order to evaluate the new method, Zalesak's example is tested, and we successfully obtain good results. (author)

20. Oversampling of digitized images. [effects on interpolation in signal processing]

Science.gov (United States)

Fischel, D.

1976-01-01

Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
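
The recommendation to interpolate via the Sampling Theorem rather than by oversampling can be illustrated with Whittaker-Shannon (sinc) reconstruction from uniform samples (a minimal sketch, assuming a band-limited signal and accepting the truncation error of a finite sample window):

```python
from math import pi, sin

def sinc_interp(samples, dt, t):
    """Whittaker-Shannon reconstruction of a band-limited signal from
    uniform samples taken every dt seconds: a sum of shifted sinc
    kernels weighted by the sample values."""
    total = 0.0
    for n, s in enumerate(samples):
        x = (t - n * dt) / dt
        total += s * (1.0 if x == 0.0 else sin(pi * x) / (pi * x))
    return total
```

At sample instants the sum reproduces the stored values exactly; between them it recovers the continuous signal to within the truncation error of the finite window, which is the interpolation the record says should replace oversampled data points.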

1. Scientific data interpolation with low dimensional manifold model

Science.gov (United States)

Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

2018-01-01

We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
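
The weighted graph Laplacian used here to discretize the Laplace-Beltrami operator can be sketched for a small point set (a dense, fully connected construction with Gaussian weights; the paper's actual patch-graph construction and weighting may differ):

```python
from math import exp

def weighted_graph_laplacian(points, sigma=1.0):
    """Dense weighted graph Laplacian L = D - W with Gaussian weights
    w_ij = exp(-|p_i - p_j|^2 / sigma^2) for i != j, where D is the
    diagonal degree matrix.  Rows of L sum to zero."""
    n = len(points)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
                W[i][j] = exp(-d2 / sigma ** 2)
    return [[(sum(W[i]) if i == j else 0.0) - W[i][j]
             for j in range(n)] for i in range(n)]
```

The zero row sums mean constant functions lie in the null space, the discrete analogue of the Laplacian annihilating constants, which is what makes L usable as a smoothness regularizer on the patch manifold.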

2. Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)

Directory of Open Access Journals (Sweden)

Maria Cristina Floreno

1996-05-01

Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is in a preliminary stage. What has been already implemented consists of a set of routines for elementary operations, optimized functions evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with graphical user interface allowing the use of such routines.

3. Scientific data interpolation with low dimensional manifold model

International Nuclear Information System (INIS)

Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

2017-01-01

Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

4. Flame atomic absorption spectrometric determination of heavy metals in aqueous solution and surface water preceded by co-precipitation procedure with copper(II) 8-hydroxyquinoline

Science.gov (United States)

Ipeaiyeda, Ayodele Rotimi; Ayoade, Abisayo Ruth

2017-12-01

Co-precipitation procedures have widely been employed for preconcentration and separation of metal ions from the matrices of environmental samples, owing to their simplicity, low consumption of separating solvent and short analysis time. Various organic ligands have been used for this purpose. However, there is a dearth of information on the application of 8-hydroxyquinoline (8-HQ) as ligand and Cu(II) as carrier element. The use of Cu(II) is desirable because there is no contamination and no background adsorption interference. Therefore, the objective of this study was to use 8-HQ in the presence of Cu(II) for co-precipitation of Cd(II), Co(II), Cr(III), Ni(II) and Pb(II) from standard solutions and surface water prior to their determination by flame atomic absorption spectrometry (FAAS). The effects of pH, sample volume, amount of 8-HQ and Cu(II), and interfering ions on the recoveries of metal ions from standard solutions were monitored using FAAS. The water samples were treated with 8-HQ under the optimum experimental conditions and metal concentrations were determined by FAAS. The metal concentrations in water samples not treated with 8-HQ were also determined. The optimum recovery values for metal ions were higher than 85.0%. The concentrations (mg/L) of Co(II), Ni(II), Cr(III), and Pb(II) in water samples treated with 8-HQ were 0.014 ± 0.002, 0.03 ± 0.01, 0.04 ± 0.02 and 0.05 ± 0.02, respectively. These concentrations and those obtained without the co-precipitation technique were significantly different. The co-precipitation procedure using 8-HQ as ligand and Cu(II) as carrier element enhanced the preconcentration and separation of metal ions from the water sample matrix.

5. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

Science.gov (United States)

Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

2014-04-01

Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

6. Non-negative Feynman–Kac kernels in Schrödinger's interpolation problem

International Nuclear Information System (INIS)

Blanchard, P.; Garbaczewski, P.; Olkiewicz, R.

1997-01-01

The local formulations of the Markovian interpolating dynamics, which is constrained by the prescribed input-output statistics data, usually utilize strictly positive Feynman–Kac kernels. This implies that the related Markov diffusion processes admit vanishing probability densities only at the boundaries of the spatial volume confining the process. We discuss an extension of the framework to encompass singular potentials and associated non-negative Feynman–Kac-type kernels. It allows us to deal with a class of continuous interpolations admitted by general non-negative solutions of the Schrödinger boundary data problem. The resulting nonstationary stochastic processes are capable of both developing and destroying nodes (zeros) of probability densities in the course of their evolution, also away from the spatial boundaries. This observation conforms with the general mathematical theory (due to M. Nagasawa and R. Aebi) that is based on the notion of multiplicative functionals, extending in turn the well known Doob's h-transformation technique. In view of emphasizing the role of the theory of non-negative solutions of parabolic partial differential equations and the link with "Wiener exclusion" techniques used to evaluate certain Wiener functionals, we give an alternative insight into the issue, that opens a transparent route towards applications. Copyright 1997 American Institute of Physics

7. Biased motion vector interpolation for reduced video artifacts.

NARCIS (Netherlands)

2011-01-01

In a video processing system where motion vectors are estimated for a subset of the blocks of data forming a video frame, and motion vectors are interpolated for the remainder of the blocks of the frame, a method includes determining, for at least one block of the current frame for which a

8. Analysis of Spatial Interpolation in the Material-Point Method

DEFF Research Database (Denmark)

Andersen, Søren; Andersen, Lars

2010-01-01

are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

9. Hybrid vehicle optimal control : Linear interpolation and singular control

NARCIS (Netherlands)

Delprat, S.; Hofman, T.

2015-01-01

Hybrid vehicle energy management can be formulated as an optimal control problem. Considering that the fuel consumption is often computed using linear interpolation over lookup table data, a rigorous analysis of the necessary conditions provided by the Pontryagin Minimum Principle is conducted. For

10. Fast interpolation for Global Positioning System (GPS) satellite orbits

OpenAIRE

Clynch, James R.; Sagovac, Christopher Patrick; Danielson, D. A. (Donald A.); Neta, Beny

1995-01-01

In this report, we discuss and compare several methods for polynomial interpolation of Global Positioning Systems ephemeris data. We show that the use of difference tables is more efficient than the method currently in use to construct and evaluate the Lagrange polynomials.
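
The efficiency claim, that building a divided-difference (Newton-form) table once is cheaper than repeatedly evaluating Lagrange basis polynomials for every query, can be illustrated as follows (a generic sketch, not the report's implementation; function names are illustrative):

```python
def newton_coeffs(xs, ys):
    """Build the divided-difference table in place and return the
    Newton-form coefficients.  This O(n^2) table is computed once;
    each subsequent evaluation is then only O(n)."""
    c = list(ys)
    n = len(xs)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(xs, c, t):
    """Horner-style (nested) evaluation of the Newton form at t."""
    acc = c[-1]
    for i in range(len(c) - 2, -1, -1):
        acc = acc * (t - xs[i]) + c[i]
    return acc
```

For ephemeris-style use, where one polynomial fit serves many query epochs, amortizing the table over all evaluations is exactly where the savings over direct Lagrange evaluation come from.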

11. Interpolation in computing science : the semantics of modularization

NARCIS (Netherlands)

Renardel de Lavalette, Gerard R.

2008-01-01

The Interpolation Theorem, first formulated and proved by W. Craig fifty years ago for predicate logic, has been extended to many other logical frameworks and is being applied in several areas of computer science. We give a short overview, and focus on the theory of software systems and modules. An

12. Parallel optimization of IDW interpolation algorithm on multicore platform

Science.gov (United States)

Guan, Xuefeng; Wu, Huayi

2009-10-01

Due to increasing power consumption, heat dissipation, and other physical issues, the architecture of the central processing unit (CPU) has been turning to multicore rapidly in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance, but also presents significant challenges to application developers. As a matter of fact, most current GIS algorithms were implemented serially and cannot fully exploit the parallelism potential of such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize serial GIS algorithms on a multicore platform in order to maximize the performance speedup. With the help of OpenMP, a threading methodology is introduced to split and share the whole interpolation workload among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and good performance speedup is achieved. For example, the performance speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. An additional output comparison between pre-optimization and post-optimization shows that parallel optimization does not affect the final interpolation result.
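
The loop-sharing strategy can be mirrored in a short sketch that splits the output grid row-wise among threads (illustrative only: the paper uses OpenMP in a compiled language, and CPython's global interpreter lock would prevent the reported speedups for this pure-Python version; all names are assumptions):

```python
from concurrent.futures import ThreadPoolExecutor

def idw_point(samples, x, y, power=2.0):
    """IDW estimate at one grid point from (x, y, value) samples."""
    num = den = 0.0
    for sx, sy, v in samples:
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def idw_grid_parallel(samples, xs, ys, workers=4):
    """Split the output grid row-wise among worker threads, mirroring
    an OpenMP parallel-for over grid rows.  Rows are independent, so
    no synchronization beyond the final gather is needed."""
    def row(y):
        return [idw_point(samples, x, y) for x in xs]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(row, ys))
```

The row decomposition is the essential point: because each output row depends only on the read-only sample set, the loop parallelizes trivially, which is why the paper's OpenMP version scales nearly linearly to four threads.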

13. LIP: The Livermore Interpolation Package, Version 1.6

Energy Technology Data Exchange (ETDEWEB)

Fritsch, F. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

2016-01-04

This report describes LIP, the Livermore Interpolation Package. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since it is a general-purpose package that need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature).

14. Functional Commutant Lifting and Interpolation on Generalized Analytic Polyhedra

Czech Academy of Sciences Publication Activity Database

Ambrozie, Calin-Grigore

2008-01-01

Roč. 34, č. 2 (2008), s. 519-543 ISSN 0362-1588 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : intertwining lifting * interpolation * analytic functions Subject RIV: BA - General Mathematics Impact factor: 0.327, year: 2008

15. Interpolant Tree Automata and their Application in Horn Clause Verification

Directory of Open Access Journals (Sweden)

Bishoksan Kafle

2016-07-01

Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

16. Two-dimensional interpolation with experimental data smoothing

International Nuclear Information System (INIS)

Trejbal, Z.

1989-01-01

A method of two-dimensional interpolation with smoothing of points statistically deflected in time is developed for processing of magnetic field measurements at the U-120M cyclotron. The mathematical statement of the initial requirements and the final result of the relevant algebraic transformations are given. 3 refs

17. Data interpolation for vibration diagnostics using two-variable correlations

International Nuclear Information System (INIS)

Branagan, L.

1991-01-01

This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, are dependent both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options to develop accurate correlations. The examples illustrate the appropriate application of these options
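
The two interpolation options discussed for asynchronously acquired static signals, staircase (zero-order hold) and linear, can be sketched as follows (generic implementations, not the paper's code; names are illustrative):

```python
from bisect import bisect_right

def staircase(times, values, t):
    """Zero-order hold: return the last value recorded at or before t.
    Appropriate for quantities stored only on change (compressed data)."""
    i = bisect_right(times, t) - 1
    return values[max(i, 0)]

def linear(times, values, t):
    """Straight-line interpolation between the bracketing samples.
    Appropriate for smoothly varying process parameters."""
    i = bisect_right(times, t) - 1
    if i < 0:
        return values[0]
    if i >= len(times) - 1:
        return values[-1]
    frac = (t - times[i]) / (times[i + 1] - times[i])
    return values[i] + frac * (values[i + 1] - values[i])
```

Which of the two reproduces rather than distorts the measured quantity depends, as the record notes, on the storage method and the slew rate of the variable being reconstructed.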

18. Recent developments in free-viewpoint interpolation for 3DTV

NARCIS (Netherlands)

Zinger, S.; Do, Q.L.; With, de P.H.N.

2012-01-01

Current development of 3D technologies brings 3DTV within reach for the customers. We discuss in this article the recent advancements in free-viewpoint interpolation for 3D video. This technology is still a research topic and many efforts are dedicated to creation, evaluation and improvement of new

19. A temporal interpolation approach for dynamic reconstruction in perfusion CT

International Nuclear Information System (INIS)

Montes, Pau; Lauritsch, Guenter

2007-01-01

This article presents a dynamic CT reconstruction algorithm for objects with time dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain quality in the reconstruction than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slow rotating scanners for perfusion imaging purposes

20. Twitch interpolation technique in testing of maximal muscle strength

DEFF Research Database (Denmark)

Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B

1993-01-01

The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...

1. Limiting reiteration for real interpolation with slowly varying functions

Czech Academy of Sciences Publication Activity Database

Gogatishvili, Amiran; Opic, Bohumír; Trebels, W.

2005-01-01

Roč. 278, 1-2 (2005), s. 86-107 ISSN 0025-584X R&D Projects: GA ČR(CZ) GA201/01/0333 Institutional research plan: CEZ:AV0Z10190503 Keywords : real interpolation * K-functional * limiting reiteration Subject RIV: BA - General Mathematics Impact factor: 0.465, year: 2005

2. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

Science.gov (United States)

Gordon, Sheldon P.; Yang, Yajun

2017-01-01

This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

3. Blind Authentication Using Periodic Properties of Interpolation

Czech Academy of Sciences Publication Activity Database

Mahdian, Babak; Saic, Stanislav

2008-01-01

Roč. 3, č. 3 (2008), s. 529-538 ISSN 1556-6013 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : image forensics * digital forgery * image tampering * interpolation detection * resampling detection Subject RIV: IN - Informatics, Computer Science Impact factor: 2.230, year: 2008

4. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

Science.gov (United States)

Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

2018-05-01

We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

5. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

Directory of Open Access Journals (Sweden)

Pang Fubin

2015-09-01

Full Text Available In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. In addition, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and computational cost of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications for electronic transformers.
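
The dependence of harmonic interpolation error on sampling rate can be illustrated by measuring the worst-case linear-interpolation error on a unit sine at two rates (an illustrative experiment, not the paper's derivation; names and the probe count are assumptions):

```python
from math import pi, sin

def max_linear_interp_error(samples_per_cycle, probes=1000):
    """Worst-case error of linearly interpolating sin(2*pi*t) from
    `samples_per_cycle` uniform samples over one cycle."""
    dt = 1.0 / samples_per_cycle
    ts = [n * dt for n in range(samples_per_cycle + 1)]
    vs = [sin(2 * pi * t) for t in ts]
    worst = 0.0
    for k in range(probes):
        t = k / probes
        i = min(int(t / dt), samples_per_cycle - 1)
        frac = (t - ts[i]) / dt
        est = vs[i] + frac * (vs[i + 1] - vs[i])
        worst = max(worst, abs(est - sin(2 * pi * t)))
    return worst
```

Consistent with the standard h²/8 · max|f''| bound, the measured error falls roughly quadratically as the sampling step shrinks, which is the kind of rate-dependent behaviour the paper tabulates for the linear, quadratic and cubic methods.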

6. Spatial interpolation schemes of daily precipitation for hydrologic modeling

Science.gov (United States)

Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

2012-01-01

Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately for wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.

7. Nuclear data banks generation by interpolation; Generacion de bancos de datos nucleares mediante interpolacion

Energy Technology Data Exchange (ETDEWEB)

Castillo M, J A

1999-07-01

Nuclear data bank generation is a process that requires a great amount of resources, both computing and human. Since it is sometimes necessary to create a great number of them, it is convenient to have a reliable tool that generates data banks with fewer resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two proposals were developed, applying in both cases the finite element method, using one element with 16 nodes to carry out the interpolation. In the first proposal, the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation systems, which were solved by Gaussian elimination with partial pivoting. In the second, the Newton basis was used to obtain the mentioned system, resulting in a lower triangular matrix whose structure was reduced, by elementary operations, to a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. It is concluded that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a great number of data banks must be created.

8. Colony formation by sublethally heat-injured Zygosaccharomyces rouxii as affected by solutes in the recovery medium and procedure for sterilizing medium.

Science.gov (United States)

Golden, D A; Beuchat, L R

1990-01-01

Recovery and colony formation by healthy and sublethally heat-injured cells of Zygosaccharomyces rouxii as influenced by the procedure for sterilizing recovery media (YM agar [YMA], wort agar, cornmeal agar, and oatmeal agar) were investigated. Media were supplemented with various concentrations of glucose, sucrose, glycerol, or sorbitol and sterilized by autoclaving (110 degrees C, 15 min) and by repeated treatment with steam (100 degrees C). An increase in sensitivity was observed when heat-injured cells were plated on glucose-supplemented YMA at an aw of 0.880 compared with aws of 0.933 and 0.998. Colonies which developed from unheated and heated cells on YMA at aws of 0.998 and 0.933 generally exceeded 0.5 mm in diameter within 3.5 to 4 days of incubation at 25 degrees C, whereas colonies formed on YMA at an aw of 0.880 typically did not exceed 0.5 mm in diameter until after 5.5 to 6.5 days of incubation. The number of colonies exceeding 0.5 mm in diameter which were formed by heat-injured cells on YMA at an aw of 0.880 was 2 to 3 logs less than the total number of colonies detected, i.e., on YMA at an aw of 0.933 and using no limits of exclusion based on colony diameter. A substantial portion of cells which survived heat treatment were sublethally injured as evidenced by increased sensitivity to a suboptimum aw (0.880). In no instance was recovery of Z. rouxii significantly affected by medium sterilization procedure when glucose or sorbitol was used as the aw-suppressing solute.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:2403251

10. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

KAUST Repository

Murarasu, Alin; Weidendorfer, Josef

2012-01-01

bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation

11. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

Science.gov (United States)

Ledbetter, F. E.

1994-01-01

Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With
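The log-domain respacing that DATASPACE performs can be sketched in a few lines. This is a minimal Python sketch, not the original FORTRAN 77: the function name and the synthetic relaxation-like data are hypothetical, and NumPy's linear `np.interp` stands in for the program's error-minimizing cubic spline routine.

```python
import numpy as np

def log_respace(t, y, n):
    # resample (t, y) onto n points evenly spaced in log10(t); back in
    # the time domain the points are closely spaced at short times and
    # widely spaced at long times
    log_t = np.log10(t)
    log_grid = np.linspace(log_t[0], log_t[-1], n)
    y_grid = np.interp(log_grid, log_t, y)   # linear stand-in for the spline
    return 10.0 ** log_grid, y_grid

# variably spaced synthetic relaxation-like data
t = np.array([0.1, 0.15, 0.7, 3.0, 40.0, 1000.0])
E = 10.0 - np.log(t)
t_new, E_new = log_respace(t, E, 8)
```

The second difference of `t_new` is positive everywhere: spacing grows monotonically with time, which is exactly the property the program is meant to deliver.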

12. Effect of interpolation on parameters extracted from seating interface pressure arrays

OpenAIRE

Michael Wininger, PhD; Barbara Crane, PhD, PT

2015-01-01

Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...

13. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

International Nuclear Information System (INIS)

Zhang Guiyong; Liu Guirong

2010-01-01

In the framework of a weakened-weak (W2) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W2 formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of many more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the constructed functions are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H1 space, but in a G1 space. By properly introducing the generalized gradient smoothing operation, the requirement on the functions is weakened further beyond the already weakened requirement for functions in an H1 space, and the G1 space can be viewed as a space of functions with a weakened-weak (W2) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W2 formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W2 formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) it is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly stiff FEM model and

14. Efficient GPU-based texture interpolation using uniform B-splines

NARCIS (Netherlands)

Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

2008-01-01

This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
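The decomposition of cubic B-spline interpolation into linear fetches can be illustrated off-GPU. The sketch below assumes the standard uniform cubic B-spline basis; `linear_tex` emulates the GPU's built-in linear texture filter, and all function names are illustrative, not the article's API.

```python
import numpy as np

def bspline_weights(a):
    # uniform cubic B-spline basis weights at fractional position a in [0, 1)
    w0 = (1 - a) ** 3 / 6
    w1 = (3 * a**3 - 6 * a**2 + 4) / 6
    w2 = (-3 * a**3 + 3 * a**2 + 3 * a + 1) / 6
    w3 = a**3 / 6
    return w0, w1, w2, w3

def linear_tex(tex, x):
    # emulate the GPU's linear texture filter at continuous coordinate x
    i = int(np.floor(x))
    f = x - i
    return (1 - f) * tex[i] + f * tex[i + 1]

def cubic_via_two_linear(tex, x):
    # cubic B-spline value from two linear fetches (weights g, offsets h)
    i = int(np.floor(x))
    w0, w1, w2, w3 = bspline_weights(x - i)
    g0, g1 = w0 + w1, w2 + w3
    h0 = w1 / g0 - 1        # first fetch lands between tex[i-1] and tex[i]
    h1 = w3 / g1 + 1        # second fetch lands between tex[i+1] and tex[i+2]
    return g0 * linear_tex(tex, i + h0) + g1 * linear_tex(tex, i + h1)

tex = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])
val = cubic_via_two_linear(tex, 2.3)
```

Because each linear fetch already blends two neighboring texels with the right ratio, the two-fetch result is algebraically identical to the direct four-tap weighted sum, which is what makes the GPU version cheap.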

15. A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation

DEFF Research Database (Denmark)

2009-01-01

This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...

16. Digital elevation model production from scanned topographic contour maps via thin plate spline interpolation

International Nuclear Information System (INIS)

Soycan, Arzu; Soycan, Metin

2009-01-01

GIS (Geographical Information System) is one of the most striking innovations in mapping applications supplied to users by developing computer and software technology. GIS is a very effective tool which can visually combine geographical and non-geographical data and record them to allow interpretation and analysis. DEM (Digital Elevation Model) is an essential component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM by a manual digitizing or vectorization process for the contour polylines. The aim of this study is to examine the DEM accuracies obtained from TMs, depending on the number of sampling points and the grid size. For these purposes, the contours of several 1/1000 scaled scanned topographical maps were vectorized. Different DEMs of the relevant area were created by using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of TPS interpolation are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The initial objective of this research is to discuss the requirement for DEMs of high accuracy (a few decimeters) in GIS, urban planning, surveying engineering and other applications. (author)
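A thin plate spline gridding step of the kind described can be sketched as a small dense solve. This is a minimal 2-D TPS interpolator under the usual r² log r kernel plus an affine part; `tps_fit` and the sample data are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def tps_phi(r):
    # thin plate spline radial kernel r^2 * log(r), defined as 0 at r = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def tps_fit(pts, z):
    # solve for TPS coefficients: radial part c plus affine part a
    n = len(pts)
    K = tps_phi(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T           # side conditions: radial weights orthogonal to affine
    w = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    c, a = w[:n], w[n:]
    def s(x, y):
        r = np.linalg.norm(pts - np.array([x, y]), axis=1)
        return float(tps_phi(r) @ c + a[0] + a[1] * x + a[2] * y)
    return s

# five scattered "contour" samples of a planar surface z = 2x + 3y + 1
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
z = 2 * pts[:, 0] + 3 * pts[:, 1] + 1
s = tps_fit(pts, z)
```

The side conditions force the radial weights to vanish for affine data, so the fitted surface reproduces a plane exactly, a basic sanity check for any TPS gridding code.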

17. Development of high fidelity soot aerosol dynamics models using method of moments with interpolative closure

KAUST Repository

Roy, Subrata P.

2014-01-28

The method of moments with interpolative closure (MOMIC) for soot formation and growth provides a detailed modeling framework maintaining a good balance in generality, accuracy, robustness, and computational efficiency. This study presents several computational issues in the development and implementation of the MOMIC-based soot modeling for direct numerical simulations (DNS). The issues of concern include a wide dynamic range of numbers, choice of normalization, high effective Schmidt number of soot particles, and realizability of the soot particle size distribution function (PSDF). These problems are not unique to DNS, but they are often exacerbated by the high-order numerical schemes used in DNS. Four specific issues are discussed in this article: the treatment of soot diffusion, choice of interpolation scheme for MOMIC, an approach to deal with strongly oxidizing environments, and realizability of the PSDF. General, robust, and stable approaches are sought to address these issues, minimizing the use of ad hoc treatments such as clipping. The solutions proposed and demonstrated here are being applied to generate new physical insight into complex turbulence-chemistry-soot-radiation interactions in turbulent reacting flows using DNS. © 2014 Copyright Taylor and Francis Group, LLC.

18. Interpolation Environment of Tensor Mathematics at the Corpuscular Stage of Computational Experiments in Hydromechanics

Science.gov (United States)

Bogdanov, Alexander; Degtyarev, Alexander; Khramushin, Vasily; Shichkina, Yulia

2018-02-01

Stages of direct computational experiments in hydromechanics based on tensor mathematics tools are represented by conditionally independent mathematical models that separate the calculations in accordance with the physical processes. The continual stage of numerical modeling is constructed on a small time interval in a stationary grid space, where the continuity conditions and energy conservation are coordinated. Then, at the subsequent corpuscular stage of the computational experiment, kinematic parameters of mass centers and surface stresses at the boundaries of the grid cells are used in modeling the free unsteady motions of volume cells, which are considered as independent particles. These particles can be subject to vortex and discontinuous interactions when restructuring of free boundaries and internal rheological states takes place. Transition from one stage to another is provided by the interpolation operations of tensor mathematics. Such an interpolation environment formalizes the use of physical laws in modeling the mechanics of continuous media and provides control of the rheological state and the conditions for the existence of discontinuous solutions: rigid and free boundaries, vortex layers, and their turbulent or empirical generalizations.

19. Single image interpolation via adaptive nonlocal sparsity-based modeling.

Science.gov (United States)

Romano, Yaniv; Protter, Matan; Elad, Michael

2014-07-01

Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method which combines these two forces: nonlocal self-similarities and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

20. Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.

Science.gov (United States)

Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott

2009-05-25

Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
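The channel-wise interpolation step that precedes Stokes vector estimation can be sketched for one polarizer orientation. The 4 x 4 mosaic layout assumed below (0°/45° pixels on even rows, 135°/90° on odd rows) is hypothetical; the point is the separable bilinear upsampling of one sparsely sampled channel back to the full sensor grid.

```python
import numpy as np

def interp_channel(mosaic, row0, col0):
    # extract one polarizer channel from a 2x2 microgrid mosaic and
    # bilinearly interpolate it back to full sensor resolution
    rows = np.arange(row0, mosaic.shape[0], 2)
    cols = np.arange(col0, mosaic.shape[1], 2)
    sub = mosaic[np.ix_(rows, cols)]
    full_r = np.arange(mosaic.shape[0], dtype=float)
    full_c = np.arange(mosaic.shape[1], dtype=float)
    # separable bilinear interpolation: along rows, then along columns
    tmp = np.stack([np.interp(full_r, rows, sub[:, j])
                    for j in range(sub.shape[1])], axis=1)
    out = np.stack([np.interp(full_c, cols, tmp[i, :])
                    for i in range(tmp.shape[0])], axis=0)
    return out

# hypothetical 4x4 mosaic: 0/45 degree pixels on even rows, 135/90 on odd
mosaic = np.arange(16, dtype=float).reshape(4, 4)
i0 = interp_channel(mosaic, 0, 0)   # 0-degree channel at full resolution
```

With every channel upsampled this way before forming s0 = I0 + I90, s1 = I0 - I90, s2 = I45 - I135, each Stokes estimate is computed from intensities referred to the same pixel location, which is what reduces the IFOV edge artifacts.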

1. Bi-local baryon interpolating fields with two flavors

Energy Technology Data Exchange (ETDEWEB)

Dmitrasinovic, V. [Belgrade University, Institute of Physics, Pregrevica 118, Zemun, P.O. Box 57, Beograd (RS); Chen, Hua-Xing [Institutos de Investigacion de Paterna, Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Valencia (Spain); Peking University, Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)

2011-02-15

We construct bi-local interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We use the restrictions following from the Pauli principle to derive relations/identities among the baryon operators with identical quantum numbers. Such relations that follow from the combined spatial, Dirac, color, and isospin Fierz transformations may be called the (total/complete) Fierz identities. These relations reduce the number of independent baryon operators with any given spin and isospin. We also study the Abelian and non-Abelian chiral transformation properties of these fields and place them into baryon chiral multiplets. Thus we derive the independent baryon interpolating fields with given values of spin (Lorentz group representation), chiral symmetry (U_L(2) x U_R(2) group representation) and isospin appropriate for the first angular excited states of the nucleon. (orig.)

2. Kriging for interpolation of sparse and irregularly distributed geologic data

Energy Technology Data Exchange (ETDEWEB)

Campbell, K.

1986-12-31

For many geologic problems, subsurface observations are available only from a small number of irregularly distributed locations, for example from a handful of drill holes in the region of interest. These observations will be interpolated one way or another, for example by hand-drawn stratigraphic cross-sections, by trend-fitting techniques, or by simple averaging which ignores spatial correlation. In this paper we consider an interpolation technique for such situations which provides, in addition to point estimates, the error estimates which are lacking from other ad hoc methods. The proposed estimator is like a kriging estimator in form, but because direct estimation of the spatial covariance function is not possible, the parameters of the estimator are selected by cross-validation. Its use in estimating subsurface stratigraphy at a candidate site for a geologic waste repository provides an example.
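Parameter selection by cross-validation, used here in place of direct covariance estimation, can be sketched with a simple kriging-like weighted estimator. The inverse-distance form and the candidate exponents below are illustrative stand-ins for the paper's estimator, not its actual parameterization.

```python
import numpy as np

def idw_predict(x_obs, z_obs, x0, p):
    # kriging-like linear estimator; inverse-distance weights stand in
    # for weights that would otherwise come from a covariance model
    d = np.abs(x_obs - x0)
    if np.any(d == 0):
        return float(z_obs[int(np.argmin(d))])
    w = 1.0 / d**p
    return float(np.sum(w * z_obs) / np.sum(w))

def loo_score(x_obs, z_obs, p):
    # leave-one-out cross-validation: predict each point from the others
    errs = []
    for i in range(len(x_obs)):
        mask = np.arange(len(x_obs)) != i
        errs.append(z_obs[i] - idw_predict(x_obs[mask], z_obs[mask], x_obs[i], p))
    return float(np.sqrt(np.mean(np.square(errs))))

x = np.array([0.0, 1.0, 2.5, 3.0, 4.5, 6.0])   # drill-hole positions
z = np.sin(x)                                   # observed values
best_p = min([1.0, 2.0, 4.0], key=lambda p: loo_score(x, z, p))
```

The cross-validation residuals double as the error estimates the paper emphasizes: they quantify how well the chosen parameterization predicts held-out observations.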

3. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

Science.gov (United States)

1994-01-01

LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

4. The modal surface interpolation method for damage localization

Science.gov (United States)

Pina Limongelli, Maria

2017-05-01

The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. When this is not the case, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. A comparison is reported between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes).

5. Reconstruction of reflectance data using an interpolation technique.

Science.gov (United States)

2009-03-01

A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of applied color datasets as well as employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of available samples in the dataset. The resultant spectra that have been reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra as well as CIELAB color differences under the other light source in comparison with those obtained from the standard PCA technique.

6. Direct Trajectory Interpolation on the Surface using an Open CNC

OpenAIRE

Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

2014-01-01

Free-form surfaces are used for many industrial applications, from aeronautical parts to molds or biomedical implants. In the common machining process, computer-aided manufacturing (CAM) software generates approximated tool paths because of the limitation induced by the input tool path format of the industrial CNC. Then, during the tool path interpolation, marks on finished surfaces can appear, induced by non-smooth feedrate planning. Managing the geometry of the tool p...

7. Strip interpolation in silicon and germanium strip detectors

International Nuclear Information System (INIS)

Wulf, E. A.; Phlips, B. F.; Johnson, W. N.; Kurfess, J. D.; Lister, C. J.; Kondev, F.; Physics; Naval Research Lab.

2004-01-01

The position resolution of double-sided strip detectors is limited by the strip pitch, and a reduction in strip pitch necessitates more electronics. Improved position resolution would improve the imaging capabilities of Compton telescopes and PET detectors. Digitizing the preamplifier waveform yields more information than can be extracted with regular shaping electronics. In addition to the energy, depth of interaction, and which strip was hit, the digitized preamplifier signals can locate the interaction position to less than the strip pitch of the detector by looking at induced signals in neighboring strips. This allows the position of the interaction to be interpolated in three dimensions, improving the imaging capabilities of the system. In a 2 mm thick silicon strip detector with a strip pitch of 0.891 mm, strip interpolation located the interaction of 356 keV gamma rays to 0.3 mm FWHM. In a 2 cm thick germanium detector with a strip pitch of 5 mm, strip interpolation of 356 keV gamma rays yielded a position resolution of 1.5 mm FWHM

8. Importance of interpolation and coincidence errors in data fusion

Directory of Open Access Journals (Sweden)

S. Ceccherini

2018-02-01

The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

9. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

International Nuclear Information System (INIS)

Buzzard, Gregery T.

2012-01-01

Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. Highlights: efficient estimation of variance-based sensitivity coefficients; efficient estimation of derivative-based sensitivity coefficients; use of homotopy methods for approximation of local maxima and minima.

10. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

Science.gov (United States)

Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

2017-12-01

Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

11. On removing interpolation and resampling artifacts in rigid image registration.

Science.gov (United States)

Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

2013-02-01

We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

12. The bases for the use of interpolation in helical computed tomography: an explanation for radiologists

International Nuclear Information System (INIS)

Garcia-Santos, J. M.; Cejudo, J.

2002-01-01

In contrast to conventional computed tomography (CT), helical CT requires the application of interpolators to achieve image reconstruction, because the projections processed by the computer are not situated in the same plane. Since the introduction of helical CT, a number of interpolators have been designed in an attempt to keep the thickness of the reconstructed section as close as possible to the thickness of the X-ray beam. The purpose of this article is to discuss the function of these interpolators, stressing the advantages and considering the possible inconveniences of high-grade curved interpolators with respect to standard linear interpolators. (Author) 7 refs

13. Fabrication of targets for the transmutation of americium: synthesis of the inert matrix by the sol-gel method. Procedure study on the infiltration of radioactive solutions

International Nuclear Information System (INIS)

Fernandez Carretero, A.

2002-01-01

14. Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation

Science.gov (United States)

Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

2017-04-01

In order to address the problems of the Newton-Raphson iteration interpolation method for NURBS curves, such as long interpolation time, complicated calculation, and step error that is not easily controlled, this paper proposes a study of the algorithm for Newton-Raphson iteration interpolation of NURBS curves, with simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the NURBS curve interpolation requirements.
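The step-length iteration at the core of such an interpolator can be sketched on a simpler curve. A cubic Bezier stands in for the NURBS curve to keep the example self-contained; `next_u` solves the chord-length equation |C(u) - C(u_prev)| = step by Newton-Raphson from a first-order seed, and all names are illustrative.

```python
import numpy as np

# control points of a hypothetical cubic Bezier tool path (NURBS stand-in)
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 3.0], [4.0, 0.0]])

def curve(u):
    b = np.array([(1 - u)**3, 3*u*(1 - u)**2, 3*u**2*(1 - u), u**3])
    return b @ P

def dcurve(u):
    db = np.array([-3*(1 - u)**2,
                   3*(1 - u)**2 - 6*u*(1 - u),
                   6*u*(1 - u) - 3*u**2,
                   3*u**2])
    return db @ P

def next_u(u_prev, step, iters=10):
    # solve |C(u) - C(u_prev)| = step for the next parameter value
    u = u_prev + step / np.linalg.norm(dcurve(u_prev))   # first-order seed
    for _ in range(iters):
        diff = curve(u) - curve(u_prev)
        f = np.linalg.norm(diff) - step                  # chord-length residual
        df = diff @ dcurve(u) / np.linalg.norm(diff)     # d|chord|/du
        u -= f / df
    return u

u1 = next_u(0.0, 0.2)   # parameter advancing the tool 0.2 units of chord
```

The first-order seed (step divided by the local parametric speed) is already close, so the Newton loop converges in a couple of iterations; this is what keeps the per-step step error controlled in a real-time interpolator.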

15. Comparison of interpolation methods for sparse data: Application to wind and concentration fields

International Nuclear Information System (INIS)

Goodin, W.R.; McRae, G.J.; Seinfeld, J.H.

1979-01-01

In order to produce gridded fields of pollutant concentration data and surface wind data for use in an air quality model, a number of techniques for interpolating sparse data values are compared. The techniques are compared using three data sets: one is an idealized concentration distribution for which the exact solution is known, the second is a potential flow field, while the third consists of surface ozone concentrations measured in the Los Angeles Basin on a particular day. The results of the study indicate that fitting a second-degree polynomial to each subregion (triangle) in the plane, with each data point weighted according to its distance from the subregion, provides a good compromise between accuracy and computational cost.
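The study's triangle-based weighted polynomial fit is not reproduced here; as a hedged sketch, this shows the simpler inverse-distance (Shepard) weighting that the compared schemes build on, with illustrative names and values:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse-distance-weighted interpolation of scattered 2D data."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)               # guard against division by zero
    w = d**(-power)
    return (w @ z_obs) / w.sum(axis=1)

obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
conc = np.array([1.0, 2.0, 3.0, 4.0])       # e.g. ozone concentrations
grid = np.array([[0.5, 0.5], [0.0, 0.0]])   # query points
vals = idw(obs, conc, grid)
```

At the cell equidistant from all stations the estimate is their mean; at a station location the estimate collapses to the observed value.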

16. Interpolating and Estimating Horizontal Diffuse Solar Irradiation to Provide UK-Wide Coverage: Selection of the Best Performing Models

Directory of Open Access Journals (Sweden)

Diane Palmer

2017-02-01

Full Text Available Plane-of-array (PoA) irradiation data is a requirement to simulate the energetic performance of photovoltaic devices (PVs). Normally, solar data is only available as global horizontal irradiation, for a limited number of locations, and typically in hourly time resolution. One approach to handling this restricted data is to enhance it initially by interpolation to the location of interest; next, it must be translated to PoA data by separately considering the diffuse and the beam components. There are many methods of interpolation. This research selects ordinary kriging as the best performing technique by studying mathematical properties, experimentation and leave-one-out cross-validation. Likewise, a number of different translation models have been developed, most of them parameterised for specific measurement setups and locations. The work presented identifies the optimum approach for the UK on a national scale. The global horizontal irradiation is split into its constituent parts, and diverse separation models were tried. The results of each separation algorithm were checked against measured data distributed across the UK. It became apparent that while there is little difference between procedures (14 Wh/m2 mean bias error (MBE), 12 Wh/m2 root mean square error (RMSE)), the Ridley-Boland-Lauret equation (a universal split algorithm) consistently performed well. The combined interpolation/separation RMSE is 86 Wh/m2.
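The leave-one-out cross-validation used above to rank interpolation techniques can be sketched as follows (a 1D inverse-distance interpolator stands in for kriging; the data and names are illustrative):

```python
import numpy as np

def idw_1d(x_obs, z_obs, x_q, power=2.0):
    d = np.abs(x_q[:, None] - x_obs[None, :])
    w = np.maximum(d, 1e-12)**(-power)
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

def loocv_rmse(x, z, power):
    """Predict each station from all the others and collect the errors."""
    errs = []
    for i in range(len(x)):
        keep = np.arange(len(x)) != i
        pred = idw_1d(x[keep], z[keep], x[i:i+1], power)[0]
        errs.append(pred - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

x = np.linspace(0.0, 10.0, 21)
z = np.sin(x)                     # synthetic "irradiation" field
rmse = loocv_rmse(x, z, power=2.0)
```

Repeating this for each candidate method (and each power, variogram, etc.) and comparing the RMSE values is the selection procedure the abstract describes.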

17. Numerical analysis for multi-group neutron-diffusion equation using Radial Point Interpolation Method (RPIM)

International Nuclear Information System (INIS)

Kim, Kyung-O; Jeong, Hae Sun; Jo, Daeseong

2017-01-01

Highlights: • Employing the Radial Point Interpolation Method (RPIM) in numerical analysis of the multi-group neutron-diffusion equation. • Establishing the mathematical formulation of the modified multi-group neutron-diffusion equation by RPIM. • Performing the numerical analysis for a 2D critical problem. - Abstract: A mesh-free method is introduced to overcome the drawbacks (e.g., mesh generation and connectivity definition between the meshes) of mesh-based (nodal) methods such as the finite-element method and finite-difference method. In particular, the Point Interpolation Method (PIM) using a radial basis function is employed in the numerical analysis for the multi-group neutron-diffusion equation. The benchmark calculations are performed for the 2D homogeneous and heterogeneous problems, and the Multiquadrics (MQ) and Gaussian (EXP) functions are employed to analyze the effect of the radial basis function on the numerical solution. Additionally, the effect of the dimensionless shape parameter in those functions on the calculation accuracy is evaluated. According to the results, the radial PIM (RPIM) can provide a highly accurate solution for the multiplication eigenvalue and the neutron flux distribution, and the numerical solution with the MQ radial basis function exhibits stable accuracy with respect to the reference solutions compared with the other solution. The dimensionless shape parameter directly affects the calculation accuracy and computing time. Values between 1.87 and 3.0 for the benchmark problems considered in this study lead to the most accurate solution. The difference between the analytical and numerical results for the neutron flux is significantly increased at the edge of the problem geometry, even though the maximum difference is lower than 4%. This phenomenon seems to arise from the derivative boundary condition at (x,0) and (0,y) positions, and it may be necessary to introduce an additional strategy (e.g., the method using fictitious points and
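As a hedged 1D sketch (not the paper's 2D neutron-diffusion setup), radial-basis-function interpolation with the two bases compared above, multiquadrics (MQ) and Gaussian (EXP), looks like this; the shape parameter c plays the role of the dimensionless shape parameter, and all values are illustrative:

```python
import numpy as np

def rbf_interp(x_obs, f_obs, x_q, basis="MQ", c=2.0):
    def phi(r):
        return np.sqrt(r**2 + c**2) if basis == "MQ" else np.exp(-(r / c)**2)
    A = phi(np.abs(x_obs[:, None] - x_obs[None, :]))   # interpolation matrix
    lam = np.linalg.solve(A, f_obs)                    # expansion coefficients
    return phi(np.abs(x_q[:, None] - x_obs[None, :])) @ lam

x = np.linspace(0.0, 4.0, 9)
f = np.cos(x)
# An RBF interpolant reproduces the data exactly at the nodes:
f_mq = rbf_interp(x, f, x, basis="MQ")
```

Changing c changes the conditioning of the matrix A and the accuracy between nodes, which is the trade-off the abstract evaluates.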

18. High-Order Finite-Difference Solution of the Poisson Equation with Interface Jump Conditions II

Science.gov (United States)

Marques, Alexandre; Nave, Jean-Christophe; Rosales, Rodolfo

2010-11-01

The Poisson equation with jump discontinuities across an interface is of central importance in Computational Fluid Dynamics. In prior work, Marques, Nave, and Rosales introduced a method to obtain fourth-order accurate solutions for the constant coefficient Poisson problem. Here we present an extension of this method to solve the variable coefficient Poisson problem to fourth order of accuracy. The extended method is based on local smooth extrapolations of the solution field across the interface. The extrapolation procedure uses a combination of cubic Hermite interpolants and a high-order representation of the interface using the Gradient-Augmented Level-Set technique. This procedure is compatible with the use of standard discretizations for the Laplace operator, and leads to modified linear systems which have the same sparsity pattern as the standard discretizations. As a result, standard Poisson solvers can be used with only minimal modifications. Details of the method and applications will be presented.
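The cubic Hermite interpolants used in the extrapolation step match both function values and derivatives at the interval ends; a minimal, self-contained sketch (names are illustrative):

```python
def cubic_hermite(x0, x1, f0, f1, d0, d1, x):
    """Cubic matching f and f' at both ends of [x0, x1]."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1.0 + 2.0*t) * (1.0 - t)**2     # standard Hermite basis
    h10 = t * (1.0 - t)**2
    h01 = t**2 * (3.0 - 2.0*t)
    h11 = t**2 * (t - 1.0)
    return h00*f0 + h10*h*d0 + h01*f1 + h11*h*d1

# The interpolant reproduces cubics exactly, e.g. f(x) = x**3 on [0, 1]:
val = cubic_hermite(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, 0.5)   # -> 0.125
```

Because values and gradients are matched, piecing such interpolants together yields the smooth local extension of the solution across the interface that the method relies on.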

19. Wayside Bearing Fault Diagnosis Based on Envelope Analysis Paved with Time-Domain Interpolation Resampling and Weighted-Correlation-Coefficient-Guided Stochastic Resonance

Directory of Open Access Journals (Sweden)

Yongbin Liu

2017-01-01

Full Text Available Envelope spectrum analysis is a simple, effective, and classic method for bearing fault identification. However, in the wayside acoustic health monitoring system, owing to the high relative moving speed between the railway vehicle and the wayside-mounted microphone, the recorded signal is distorted by the Doppler effect, which shifts and spreads the bearing fault characteristic frequency (FCF). What is more, the background noise is relatively heavy, which makes it difficult to identify the FCF. To solve these two problems, this study introduces solutions for the wayside acoustic fault diagnosis of train bearings based on Doppler effect reduction using the improved time-domain interpolation resampling (TIR) method and diagnosis-relevant information enhancement using Weighted-Correlation-Coefficient-Guided Stochastic Resonance (WCCSR). First, the traditional TIR method is improved by incorporating kinematic parameter estimation based on time-frequency analysis and curve fitting. Based on the estimated parameters, the Doppler effect is easily removed using TIR. Second, WCCSR is employed to enhance the diagnosis-relevant periodic signal component in the obtained Doppler-free signal. Finally, building on the above two procedures, the local fault is identified using envelope spectrum analysis. Simulated and experimental cases have verified the effectiveness of the proposed method.
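The final step, envelope spectrum analysis, can be sketched on a synthetic amplitude-modulated bearing-like signal (the sampling rate, carrier, and fault characteristic frequency are illustrative assumptions, not the paper's data):

```python
import numpy as np
from scipy.signal import hilbert

fs = 10_000
t = np.arange(0, 1.0, 1.0/fs)
fcf_true, carrier = 97.0, 3_000.0
# Resonance carrier amplitude-modulated at the fault characteristic frequency
x = (1.0 + 0.8*np.cos(2*np.pi*fcf_true*t)) * np.sin(2*np.pi*carrier*t)

env = np.abs(hilbert(x))                       # analytic-signal envelope
spec = np.abs(np.fft.rfft(env - env.mean()))   # envelope spectrum
freqs = np.fft.rfftfreq(len(env), 1.0/fs)
fcf_est = freqs[np.argmax(spec)]               # peak reveals the FCF
```

The FCF that is invisible in the raw spectrum (buried around the carrier) appears directly as the dominant peak of the envelope spectrum.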

20. Computer assisted procedure maintenance

International Nuclear Information System (INIS)

Bisio, R.; Hulsund, J. E.; Nilsen, S.

2004-04-01

The maintenance of operating procedures in an NPP is a tedious and complicated task. Throughout the whole life cycle of the procedures they will be dynamic, 'living' documents. Several aspects of the procedure must be considered in a revision process. Pertinent details and attributes of the procedure must be checked. An organizational structure must be created and responsibilities allotted for drafting, revising, reviewing and publishing procedures. Available powerful computer technology provides solutions within document management and computerisation of procedures. These solutions can also support the maintenance of procedures. Not all parts of the procedure life cycle are equally amenable to computerized support. This report looks at the procedure life cycle in today's NPPs and discusses the possibilities associated with the introduction of computer technology to assist the maintenance of procedures. (Author)

1. Optimal interpolation method for intercomparison of atmospheric measurements.

Science.gov (United States)

Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno

2006-04-01

Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.

2. Rate of convergence of Bernstein quasi-interpolants

International Nuclear Information System (INIS)

Diallo, A.T.

1995-09-01

We show that if f ∈ C[0,1] and B_n^(2r-1) f (r integer ≥ 1) is the Bernstein quasi-interpolant defined by Sablonnière, then ||B_n^(2r-1) f − f||_C[0,1] ≤ ω_φ^(2r)(f, 1/√n), where ω_φ^(2r) is the Ditzian-Totik modulus of smoothness with φ(x) = √(x(1−x)), x ∈ [0,1]. (author). 6 refs

3. Hörmander spaces, interpolation, and elliptic problems

CERN Document Server

Mikhailets, Vladimir A; Malyshev, Peter V

2014-01-01

The monograph gives a detailed exposition of the theory of general elliptic operators (scalar and matrix) and elliptic boundary value problems in Hilbert scales of Hörmander function spaces. This theory was constructed by the authors in a number of papers published in 2005-2009. It is distinguished by a systematic use of the method of interpolation with a functional parameter of abstract Hilbert spaces and Sobolev inner product spaces. This method, the theory and their applications are expounded for the first time in the monographic literature. The monograph is written in detail and in a

4. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

International Nuclear Information System (INIS)

Nakata, Susumu

2008-01-01

This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

5. Calibration method of microgrid polarimeters with image interpolation.

Science.gov (United States)

Chen, Zhenyue; Wang, Xia; Liang, Rongguang

2015-02-10

Microgrid polarimeters have large advantages over conventional polarimeters because of the snapshot nature and because they have no moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.

6. Cardinal Basis Piecewise Hermite Interpolation on Fuzzy Data

Directory of Open Access Journals (Sweden)

H. Vosoughi

2016-01-01

Full Text Available A numerical method, along with an explicit construction, for the interpolation of fuzzy data by the widely used fuzzy-valued piecewise Hermite polynomial is introduced here. It is obtained through the extension principle in the general case and is based on cardinal basis functions that satisfy a vanishing property on the successive intervals. We provide the numerical method in full detail, using linear space notions to calculate the presented method. In order to illustrate the method with computational examples, we consider three primary cases: linear, cubic, and quintic.

7. New extended interpolating operators for hadron correlation functions

Energy Technology Data Exchange (ETDEWEB)

Scardino, Francesco; Papinutto, Mauro [Roma ''Sapienza'' Univ. (Italy). Dipt. di Fisica; INFN, Sezione di Roma (Italy)]; Schaefer, Stefan [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC]

2016-12-22

New extended interpolating operators made of quenched three dimensional fermions are introduced in the context of lattice QCD. The mass of the 3D fermions can be tuned in a controlled way to find a better overlap of the extended operators with the states of interest. The extended operators have good renormalisation properties and are easy to control when taking the continuum limit. Moreover the short distance behaviour of the two point functions built from these operators is greatly improved. The operators have been numerically implemented and a comparison to point sources and Jacobi smeared sources has been performed on the new CLS configurations.

8. New extended interpolating operators for hadron correlation functions

International Nuclear Information System (INIS)

Scardino, Francesco; Papinutto, Mauro; Schaefer, Stefan

2016-01-01

New extended interpolating operators made of quenched three dimensional fermions are introduced in the context of lattice QCD. The mass of the 3D fermions can be tuned in a controlled way to find a better overlap of the extended operators with the states of interest. The extended operators have good renormalisation properties and are easy to control when taking the continuum limit. Moreover the short distance behaviour of the two point functions built from these operators is greatly improved. The operators have been numerically implemented and a comparison to point sources and Jacobi smeared sources has been performed on the new CLS configurations.

9. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

Science.gov (United States)

Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

2013-08-01

In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
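As a hedged sketch of the functions whose gradients the paper bounds, mean value coordinates of a point strictly inside a convex polygon (Floater's formula) can be computed as follows; the names are illustrative:

```python
import numpy as np

def mean_value_coords(verts, x):
    """Mean value coordinates of x inside the convex polygon verts."""
    v = verts - x                          # vectors from x to the vertices
    r = np.linalg.norm(v, axis=1)
    n = len(verts)
    ang = np.empty(n)
    for i in range(n):
        a, b = v[i], v[(i + 1) % n]
        ang[i] = np.arctan2(a[0]*b[1] - a[1]*b[0], a @ b)  # signed angle
    w = np.array([(np.tan(ang[i-1]/2) + np.tan(ang[i]/2)) / r[i]
                  for i in range(n)])
    return w / w.sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
x = np.array([0.3, 0.6])
lam = mean_value_coords(square, x)
# Barycentric properties: the weights sum to 1 and reproduce x linearly.
```

These coordinates reproduce linear functions exactly, which is the starting point for the interpolation error estimates above.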

10. Geometries and interpolations for symmetric positive definite matrices

DEFF Research Database (Denmark)

Feragen, Aasa; Fuster, Andrea

2017-01-01

. In light of the simulation results, we discuss the mathematical and qualitative properties of these new metrics in comparison with the classical ones. Finally, we explore the nonlinear variation of properties such as shape and scale throughout principal geodesics in different metrics, which affects...... the visualization of scale and shape variation in tensorial data. With the paper, we will release a software package with Matlab scripts for computing the interpolations and statistics used for the experiments in the paper (Code is available at https://sites.google.com/site/aasaferagen/home/software)....

11. Trends in Continuity and Interpolation for Computer Graphics.

Science.gov (United States)

Gonzalez Garcia, Francisco

2015-01-01

In every computer graphics oriented application today, it is a common practice to texture 3D models as a way to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields in computer graphics. The dissertation presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).

12. Effect of interpolation on parameters extracted from seating interface pressure arrays.

Science.gov (United States)

Wininger, Michael; Crane, Barbara

2014-01-01

Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use cubic interpolation rather than linear (slight effect); and (3) nominal difference between interpolation degrees of 2, 4, and 8 times (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
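A hedged sketch of the compared pipelines: low-pass filter first, then interpolate a seating-pressure array to 4x sampling density, once with bilinear (order=1) and once with bicubic spline (order=3) interpolation. The array values are synthetic and the names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

pressure = np.random.default_rng(0).random((8, 8)) * 100.0  # synthetic map
smoothed = gaussian_filter(pressure, sigma=1.0)   # filter before interpolating
bilinear = zoom(smoothed, 4, order=1)             # bilinear upsampling
bicubic = zoom(smoothed, 4, order=3)              # bicubic spline upsampling
# Extract a simple feature (peak pressure) from each interpolated map:
peaks = bilinear.max(), bicubic.max()
```

Comparing such features across the two paradigms (and across interpolation degrees) is the kind of sensitivity analysis the abstract describes; note that bilinear interpolation cannot overshoot the filtered data, whereas cubic splines can.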

13. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

Science.gov (United States)

Ding, Qian; Wang, Yong; Zhuang, Dafang

2018-04-15

The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for

14. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

Energy Technology Data Exchange (ETDEWEB)

Mueller, Kerstin; Schwemmer, Chris; Hornegger, Joachim [Pattern Recognition Lab, Department of Computer Science, Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander-Universitaet Erlangen-Nuernberg, Erlangen 91058 (Germany); Zheng Yefeng; Wang Yang [Imaging and Computer Vision, Siemens Corporate Research, Princeton, New Jersey 08540 (United States); Lauritsch, Guenter; Rohkohl, Christopher; Maier, Andreas K. [Siemens AG, Healthcare Sector, Forchheim 91301 (Germany); Schultz, Carl [Thoraxcenter, Erasmus MC, Rotterdam 3000 (Netherlands); Fahrig, Rebecca [Department of Radiology, Stanford University, Stanford, California 94305 (United States)

2013-03-15

Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of

15. Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data

International Nuclear Information System (INIS)

Müller, Kerstin; Schwemmer, Chris; Hornegger, Joachim; Zheng Yefeng; Wang Yang; Lauritsch, Günter; Rohkohl, Christopher; Maier, Andreas K.; Schultz, Carl; Fahrig, Rebecca

2013-01-01

Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all

16. An Interpolation Procedure to Patch Holes in a Ground and Flight Test Data Base (MARS)

Science.gov (United States)

2010-08-01


17. THE EFFECT OF STIMULUS ANTICIPATION ON THE INTERPOLATED TWITCH TECHNIQUE

Directory of Open Access Journals (Sweden)

Duane C. Button

2008-12-01

Full Text Available The objective of this study was to investigate the effect of expected and unexpected interpolated stimuli (IT) during a maximum voluntary contraction on quadriceps force output and activation. Two groups of male subjects who were either inexperienced (MI: no prior experience with IT tests) or experienced (ME: previously experienced 10 or more series of IT tests) received an expected or unexpected IT while performing quadriceps isometric maximal voluntary contractions (MVCs). Measurements included MVC force, quadriceps and hamstrings electromyographic (EMG) activity, and quadriceps inactivation as measured by the interpolated twitch technique (ITT). When performing MVCs with the expectation of an IT, the knowledge or lack of knowledge of an impending IT occurring during a contraction did not result in significant overall differences in force, ITT inactivation, quadriceps or hamstrings EMG activity. However, the expectation of an IT significantly (p < 0.0001) reduced MVC force (9.5%) and quadriceps EMG activity (14.9%) when compared to performing MVCs with prior knowledge that stimulation would not occur. While ME exhibited non-significant decreases when expecting an IT during an MVC, MI force and EMG activity significantly decreased 12.4% and 20.9%, respectively. Overall, ME had significantly (p < 0.0001) higher force (14.5%) and less ITT inactivation (10.4%) than MI. The expectation of the noxious stimuli may account for the significant decrements in force and activation during the ITT.

18. Flip-avoiding interpolating surface registration for skull reconstruction.

Science.gov (United States)

Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

2018-03-30

Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

19. Optimal Interpolation scheme to generate reference crop evapotranspiration

Science.gov (United States)

Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

2018-05-01

We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variance in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids are less sensitive to variations in the density and distribution of the observational network than those generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network reduces substantially the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
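The OI analysis step described above can be sketched on a toy 1D grid; the background field, covariances, and observation positions are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# A model background field is corrected by observations, weighted by the
# background (B) and observation (R) error covariances.
n_grid, n_obs = 5, 2
xb = np.zeros(n_grid)                         # background field (model prior)
H = np.zeros((n_obs, n_grid))                 # observation operator
H[0, 1] = H[1, 3] = 1.0                       # stations at grid cells 1 and 3
y = np.array([1.0, 2.0])                      # observed values

i = np.arange(n_grid)
B = np.exp(-np.abs(i[:, None] - i[None, :]) / 2.0)  # correlated bg errors
R = 0.1 * np.eye(n_obs)                       # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # optimal (Kalman-type) gain
xa = xb + K @ (y - H @ xb)                    # analysis field
A = (np.eye(n_grid) - K @ H) @ B              # analysis error covariance
```

The diagonal of A is the per-cell error variance the abstract refers to: it shrinks where observations are dense and accurate, and reverts to the background variance in under-observed regions.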

20. A New Interpolation Approach for Linearly Constrained Convex Optimization

KAUST Repository

Espinoza, Francisco

2012-08-01

In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove properties of the surface such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
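
The classic (non-generalized) Shepard formula that the thesis builds on weights node values by inverse distance; a minimal sketch (node locations and values are made up):

```python
import numpy as np

def shepard(x, nodes, values, p=2):
    """Shepard's inverse-distance-weighted interpolant in its classic form.
    Exact at the nodes; weights fall off as 1/d**p."""
    d = np.linalg.norm(nodes - x, axis=1)
    if np.any(d == 0):                    # query coincides with a node
        return values[np.argmin(d)]
    w = 1.0 / d**p
    return np.sum(w * values) / np.sum(w)

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.0, 1.0, 2.0])
```

Because all weights are positive, the interpolant stays within the range of the node values, which is one reason Shepard-type surfaces are attractive over a feasible region.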

1. Interpolation methods for creating a scatter radiation exposure map

Energy Technology Data Exchange (ETDEWEB)

Gonçalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil); Gomes, Celio S.; Lopes, Ricardo T. [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F. [Universidade do Estado do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto de Física

2017-07-01

A well-known way to better understand radiation scattering during a radiography is to map exposure over the space around the source and sample. Such a map is built by measuring exposure at regularly spaced points, i.e., at locations chosen by taking regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain regular steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is compromised. This work applies interpolation techniques that accommodate irregular steps and compares their results and their limits. Interpolation was first performed in angular coordinates and tested with some points missing. Delaunay tessellation interpolation was then applied to the same data for comparison. Computational and graphical processing was done with the GNU Octave software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
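
For irregularly spaced measurement points, a Delaunay-based linear interpolation of the kind used for comparison here is available in SciPy's `griddata`; a small synthetic sketch (the sine/cosine field stands in for real exposure readings):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(200, 2))          # irregular measurement points
vals = np.sin(3*pts[:, 0]) * np.cos(3*pts[:, 1])    # stand-in exposure readings

# Regular output grid for the exposure map (kept inside the convex hull)
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 40), np.linspace(0.1, 0.9, 40))
exposure_map = griddata(pts, vals, (gx, gy), method='linear')  # Delaunay-based
```

`griddata` triangulates the scattered points and interpolates linearly inside each triangle, returning NaN outside the convex hull of the measurements, so missing points simply enlarge the triangles rather than breaking the method.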

2. Interpolation on the manifold of K component GMMs.

Science.gov (United States)

Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

2015-12-01

Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of PDFs, motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion-weighted magnetic resonance imaging. We provide proof-of-principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, which may be of independent interest.
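
The closure requirement — that interpolating two K-component GMMs should again yield a K-component GMM — can be illustrated with a deliberately naive component-wise scheme (this is not the geometry-respecting algorithm of the paper, and it assumes the components are already matched across the two mixtures):

```python
import numpy as np

def interp_gmm(gmm0, gmm1, t):
    """Naive component-wise interpolation of two matched K-component GMMs
    (weights renormalized). Illustrates only the closure requirement:
    the result is again a K-component GMM."""
    w = (1 - t) * gmm0['w'] + t * gmm1['w']
    w /= w.sum()
    mu = (1 - t) * gmm0['mu'] + t * gmm1['mu']
    var = (1 - t) * gmm0['var'] + t * gmm1['var']
    return {'w': w, 'mu': mu, 'var': var}

# Two 1D GMMs with K = 2 components each (made-up parameters)
g0 = {'w': np.array([0.5, 0.5]), 'mu': np.array([0.0, 2.0]), 'var': np.array([1.0, 1.0])}
g1 = {'w': np.array([0.3, 0.7]), 'mu': np.array([1.0, 3.0]), 'var': np.array([2.0, 0.5])}
g_half = interp_gmm(g0, g1, 0.5)
```

The component count, the simplex constraint on the weights and the positivity of the variances are all preserved at every t, which is precisely the behavior the paper's algorithms enforce while additionally respecting the underlying Riemannian geometry.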

3. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

Science.gov (United States)

Allred, Joel

2012-01-01

Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool, which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit developed by the CCMC.

4. Image re-sampling detection through a novel interpolation kernel.

Science.gov (United States)

Hilal, Alaa

2018-06-01

Image re-sampling involved in re-size and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes the minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

5. Interpolation methods for creating a scatter radiation exposure map

International Nuclear Information System (INIS)

Gonçalves, Elicardo A. de S.; Gomes, Celio S.; Lopes, Ricardo T.; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F.

2017-01-01

A well-known way to better understand radiation scattering during a radiography is to map exposure over the space around the source and sample. Such a map is built by measuring exposure at regularly spaced points, i.e., at locations chosen by taking regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain regular steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is compromised. This work applies interpolation techniques that accommodate irregular steps and compares their results and their limits. Interpolation was first performed in angular coordinates and tested with some points missing. Delaunay tessellation interpolation was then applied to the same data for comparison. Computational and graphical processing was done with the GNU Octave software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)

6. Motion compensated frame interpolation with a symmetric optical flow constraint

DEFF Research Database (Denmark)

Rakêt, Lars Lau; Roholm, Lars; Bruhn, Andrés

2012-01-01

We consider the problem of interpolating frames in an image sequence. For this purpose accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by parametrizing the optical flow objective function accordingly. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640 × 480 video footage at 30 fps, i.e. to perform frame doubling in realtime.

7. Some observations on interpolating gauges and non-covariant gauges

International Nuclear Information System (INIS)

Joglekar, Satish D.

2003-01-01

We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of boundary condition defining term. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter θ varies, depends very sensitively on the parameter variation. We do this with a gauge used by Doust. We also consider the Lagrangian path-integrals in Minkowski space for gauges with a residual gauge-invariance. We point out the necessity of inclusion of an ε-term (even) in the formal treatments, without which one may reach incorrect conclusions. We, further, point out that the ε-term can contribute to the BRST WT-identities in a non-trivial way (even as ε → 0). We point out that these contributions lead to additional constraints on Green's function that are not normally taken into account in the BRST formalism that ignores the ε-term, and that they are characteristic of the way the singularities in propagators are handled. We argue that a prescription, in general, will require renormalization; if at all it is to be viable. (author)

8. Anisotropic interpolation theorems of Musielak-Orlicz type

Directory of Open Access Journals (Sweden)

Jinxia Li

2016-10-01

Full Text Available Abstract Anisotropy is a common attribute of Nature, which shows different characterizations in different directions of all or part of the physical or chemical properties of an object. The anisotropic property, in mathematics, can be expressed by a fairly general discrete group of dilations { A k : k ∈ Z } $\\{A^{k}: k\\in\\mathbb{Z}\\}$ , where A is a real n × n $n\\times n$ matrix with all its eigenvalues λ satisfy | λ | > 1 $|\\lambda|>1$ . Let φ : R n × [ 0 , ∞ → [ 0 , ∞ $\\varphi: \\mathbb{R}^{n}\\times[0, \\infty\\to[0,\\infty$ be an anisotropic Musielak-Orlicz function such that φ ( x , ⋅ $\\varphi(x,\\cdot$ is an Orlicz function and φ ( ⋅ , t $\\varphi(\\cdot,t$ is a Muckenhoupt A ∞ ( A $\\mathbb {A}_{\\infty}(A$ weight. The aim of this article is to obtain two anisotropic interpolation theorems of Musielak-Orlicz type, which are weighted anisotropic extension of Marcinkiewicz interpolation theorems. The above results are new even for the isotropic weighted settings.

9. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

Science.gov (United States)

Hoffmann, Aswin L.; Siem, Alex Y. D.; den Hertog, Dick; Kaanders, Johannes H. A. M.; Huizenga, Henk

2006-12-01

In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning.
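
The chord/tangent "sandwich" idea for a convex frontier can be sketched in one dimension (an illustrative bound computation under stated assumptions, not the authors' algorithm; `f` here is a toy convex trade-off curve):

```python
import numpy as np

def sandwich_bounds(x0, x1, f, df):
    """For a convex curve f on [x0, x1], the chord through the endpoints is
    an upper bound and the endpoint tangents form a lower bound. Returns the
    tangent crossing point xc and the two bounds there; their gap measures
    the remaining uncertainty about the curve on this segment."""
    xc = (f(x0) - f(x1) + df(x1)*x1 - df(x0)*x0) / (df(x1) - df(x0))
    lower = f(x0) + df(x0) * (xc - x0)        # tangent value at xc
    t = (xc - x0) / (x1 - x0)
    upper = (1 - t) * f(x0) + t * f(x1)       # chord value at xc
    return xc, lower, upper

# Toy convex trade-off curve f(x) = x^2 on [0, 1]
xc, lo, hi = sandwich_bounds(0.0, 1.0, lambda x: x*x, lambda x: 2*x)
```

Evaluating a new Pareto-optimal point on the segment with the largest gap, then re-bounding, is the mechanism by which a Sandwich-type algorithm drives the uncertainty in the estimated frontier down.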

10. Derivative-free generation and interpolation of convex Pareto optimal IMRT plans

International Nuclear Information System (INIS)

Hoffmann, Aswin L; Siem, Alex Y D; Hertog, Dick den; Kaanders, Johannes H A M; Huizenga, Henk

2006-01-01

In inverse treatment planning for intensity-modulated radiation therapy (IMRT), beamlet intensity levels in fluence maps of high-energy photon beams are optimized. Treatment plan evaluation criteria are used as objective functions to steer the optimization process. Fluence map optimization can be considered a multi-objective optimization problem, for which a set of Pareto optimal solutions exists: the Pareto efficient frontier (PEF). In this paper, a constrained optimization method is pursued to iteratively estimate the PEF up to some predefined error. We use the property that the PEF is convex for a convex optimization problem to construct piecewise-linear upper and lower bounds to approximate the PEF from a small initial set of Pareto optimal plans. A derivative-free Sandwich algorithm is presented in which these bounds are used with three strategies to determine the location of the next Pareto optimal solution such that the uncertainty in the estimated PEF is maximally reduced. We show that an intelligent initial solution for a new Pareto optimal plan can be obtained by interpolation of fluence maps from neighbouring Pareto optimal plans. The method has been applied to a simplified clinical test case using two convex objective functions to map the trade-off between tumour dose heterogeneity and critical organ sparing. All three strategies produce representative estimates of the PEF. The new algorithm is particularly suitable for dynamic generation of Pareto optimal plans in interactive treatment planning

11. Numerical Solution of the Advection Equation Using the Radial Point Interpolation Method with Time Integration by the Discontinuous Galerkin Method

Directory of Open Access Journals (Sweden)

2016-12-01

12. Linear, Transﬁnite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

DEFF Research Database (Denmark)

Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

2018-01-01

In the measurements considered here, values are known along the lines of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve the known values along the grid lines, whereas the transfinite method does; the proposed weighted method combines the interpolation property of the transfinite method close to the grid lines and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...
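
Transfinite interpolation inside one grid square can be sketched with the classical Coons patch, which reproduces all four boundary value arrays exactly (a generic construction, not necessarily the exact variant evaluated in the paper):

```python
import numpy as np

def coons_patch(top, bottom, left, right):
    """Transfinite (Coons) interpolation of one grid square's interior from
    its four boundary arrays (corner values must match). The boundary values
    are reproduced exactly -- the 'interpolation property' discussed above."""
    n = len(top)
    u = np.linspace(0, 1, n)[None, :]   # column coordinate
    v = np.linspace(0, 1, n)[:, None]   # row coordinate
    return ((1 - v) * top[None, :] + v * bottom[None, :]
            + (1 - u) * left[:, None] + u * right[:, None]
            - ((1 - v) * (1 - u) * top[0] + (1 - v) * u * top[-1]
               + v * (1 - u) * bottom[0] + v * u * bottom[-1]))

# Boundary samples of f(y, x) = x + y on the unit square, n = 5 per side
n = 5
s = np.linspace(0.0, 1.0, n)
top, bottom = s, 1 + s          # rows y = 0 and y = 1
left, right = s, 1 + s          # columns x = 0 and x = 1
F = coons_patch(top, bottom, left, right)
```

The patch is the sum of two ruled (linear) interpolants minus their bilinear corner correction; on this example it reproduces f(y, x) = x + y exactly, and in general it matches the given boundary data on all four sides.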

13. An application of gain-scheduled control using state-space interpolation to hydroactive gas bearings

DEFF Research Database (Denmark)

Theisen, Lukas Roy Svane; Camino, Juan F.; Niemann, Hans Henrik

2016-01-01

This paper addresses control of hydroactive gas bearings with a gain-scheduling strategy using state-space interpolation, which avoids both the performance loss and the increase of controller order associated with the Youla parametrisation. The proposed state-space interpolation for gain-scheduling is applied to mass imbalance rejection for a controllable gas bearing scheduled in two parameters. Comparisons against the Youla-based scheduling demonstrate the superiority of the state-space interpolation.

14. Convergence acceleration of quasi-periodic and quasi-periodic-rational interpolations by polynomial corrections

OpenAIRE

Lusine Poghosyan

2014-01-01

The paper considers convergence acceleration of the quasi-periodic and the quasi-periodic-rational interpolations by application of polynomial corrections. We investigate convergence of the resultant quasi-periodic-polynomial and quasi-periodic-rational-polynomial interpolations and derive exact constants of the main terms of the asymptotic errors in the regions away from the endpoints. Results of numerical experiments clarify the behavior of the corresponding interpolations for a moderate number of in...

15. Emergency procedures

International Nuclear Information System (INIS)

Abd Nasir Ibrahim; Azali Muhammad; Ab Razak Hamzah; Abd Aziz Mohamed; Mohammad Pauzi Ismail

2004-01-01

The following subjects are discussed - Emergency Procedures: emergency equipment, emergency procedures; emergency procedure involving X-Ray equipment; emergency procedure involving radioactive sources

16. Spatial and temporal interpolation of satellite-based aerosol optical depth measurements over North America using B-splines

Science.gov (United States)

Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick

2005-08-01

Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region, with four sequential overpasses required to straddle mid-latitude North America. Ground-based AOD data from AERONET sun photometers are available on a more continuous basis but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground-based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data according to the measurement errors, while the B-spline properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.
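
A 1D analogue of this weighted least-squares B-spline fit can be sketched with SciPy's `make_lsq_spline` (the synthetic AOD-like signal, noise level, knot placement and weights are all illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Synthetic, irregularly sampled AOD-like signal with measurement noise
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 10.0, 200))
sigma = 0.02                                     # per-point measurement error
y = 0.2 + 0.1 * np.sin(x) + rng.normal(0.0, sigma, x.size)
w = np.full(x.size, 1.0 / sigma)                 # weights ~ 1 / error

# Clamped cubic knot vector: repeated end knots plus interior knots
interior = np.linspace(1.0, 9.0, 9)
t = np.r_[(x[0],) * 4, interior, (x[-1],) * 4]
spl = make_lsq_spline(x, y, t, k=3, w=w)         # weighted LSQ B-spline fit
```

Because cubic B-splines have local support and are C2-continuous, the fitted curve stays smooth while each coefficient is determined mainly by nearby (appropriately weighted) observations, mirroring the bicubic surface fit used in the paper.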

17. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

Directory of Open Access Journals (Sweden)

Mathieu Lepot

2017-10-01

Full Text Available A thorough review has been performed of interpolation methods to fill gaps in time series, of efficiency criteria, and of uncertainty quantification. On the one hand, there are numerous available methods: interpolation, regression, autoregressive and machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when uncertainties are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
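
The simplest gap-filling method in the taxonomy above — linear interpolation across missing samples — can be sketched as:

```python
import numpy as np

def fill_gaps_linear(t, y):
    """Fill NaN gaps in a time series by linear interpolation between the
    nearest valid neighbours (one of the simplest methods reviewed above)."""
    y = np.asarray(y, dtype=float)
    mask = np.isnan(y)
    y_filled = y.copy()
    y_filled[mask] = np.interp(t[mask], t[~mask], y[~mask])
    return y_filled

t = np.arange(6.0)
y = np.array([0.0, 1.0, np.nan, np.nan, 4.0, 5.0])
filled = fill_gaps_linear(t, y)    # gap filled with 2.0 and 3.0
```

As the review points out, such a fill gives no uncertainty on the interpolated values; quantifying that prediction uncertainty is exactly the gap the paper discusses.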

18. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

Science.gov (United States)

Chen, Hao; Yu, Haizhong

2014-04-01

Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU calculation. The present method is of considerable reference value in the application field of image interpolation.
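
A plain CPU reference for GRBF interpolation looks as follows (the paper's contribution is the CUDA acceleration of exactly this kind of computation; the 1D setting and the shape parameter are illustrative):

```python
import numpy as np

def grbf_interp(x_query, x_nodes, f_nodes, eps=5.0):
    """1D Gaussian RBF interpolation: solve Phi w = f at the nodes, then
    evaluate s(x) = sum_j w_j * exp(-(eps*(x - x_j))**2). The Gaussian
    kernel matrix is symmetric positive definite for distinct nodes."""
    Phi = np.exp(-(eps * (x_nodes[:, None] - x_nodes[None, :])) ** 2)
    w = np.linalg.solve(Phi, f_nodes)
    Pq = np.exp(-(eps * (np.asarray(x_query)[:, None] - x_nodes[None, :])) ** 2)
    return Pq @ w

x_nodes = np.linspace(0.0, 1.0, 8)
f_nodes = np.sin(2 * np.pi * x_nodes)
```

Every query point requires a sum over all nodes, and the node solve is a dense linear system, which is why the cost grows quickly with image size and why a GPU implementation pays off.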

19. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

DEFF Research Database (Denmark)

Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

2015-01-01

We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we design the algorithms to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super-resolution step; in both cases polar interpolation increases the estimation precision.

20. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

Directory of Open Access Journals (Sweden)

Mingjian Sun

2015-01-01

Full Text Available Photoacoustic imaging is an innovative technique to image biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, providing higher imaging quality with significantly fewer measurement positions or scanning times.

1. ANGELO-LAMBDA, Covariance matrix interpolation and mathematical verification

International Nuclear Information System (INIS)

Kodeli, Ivo

2007-01-01

1 - Description of program or function: The codes ANGELO-2.3 and LAMBDA-2.3 are used for the interpolation of cross section covariance data from the original to a user-defined energy group structure, and for mathematical tests of the matrices, respectively. The LAMBDA-2.3 code calculates the eigenvalues of the matrices (either original or converted) and classifies them accordingly into positive and negative matrices. This verification is strongly recommended before using any covariance matrices. These versions of the two codes are extended versions of the previous codes available in package NEA-1264 - ZZ-VITAMIN-J/COVA. They were specifically developed for the purposes of the OECD LWR UAM benchmark, in particular for the processing of the ZZ-SCALE5.1/COVA-44G cross section covariance matrix library retrieved from the SCALE-5.1 package. Either the original SCALE-5.1 libraries or the libraries separated into several files by nuclide can (in principle) be processed by the ANGELO/LAMBDA codes, but the use of the one-nuclide data is strongly recommended. Due to large deviations of the correlation matrix terms from unity observed in some SCALE-5.1 covariance matrices, the previously more severe acceptance condition in the ANGELO-2.3 code was relaxed. In case the correlation coefficients exceed 1.0, only a warning message is issued, and the coefficients are replaced by 1.0. 2 - Methods: ANGELO-2.3 interpolates the covariance matrices to a union grid using flat weighting. The LAMBDA-2.3 code includes the mathematical routines to calculate the eigenvalues of the covariance matrices. 3 - Restrictions on the complexity of the problem: The algorithm used in ANGELO is relatively simple, therefore interpolations involving energy group structures very different from the original (e.g. a large difference in the number of energy groups) may not be accurate. In particular, in the case of the MT=1018 data (fission spectra covariances) the algorithm may not be
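
The LAMBDA-style verification — eigenvalue classification plus the ANGELO-2.3-style clipping of out-of-range correlation coefficients — can be sketched as follows (a simplified stand-in in Python, not the codes themselves):

```python
import numpy as np

def check_covariance(C, tol=1e-12):
    """Classify the eigenvalues of a covariance matrix into non-negative and
    negative sets, and count correlation coefficients exceeding 1.0 (which
    would be clipped to 1.0 with a warning, as ANGELO-2.3 does)."""
    assert np.allclose(C, C.T), "covariance matrix must be symmetric"
    eig = np.linalg.eigvalsh(C)
    sd = np.sqrt(np.diag(C))
    corr = C / np.outer(sd, sd)
    n_clipped = int(np.sum(np.abs(corr) > 1.0 + tol))
    return eig[eig >= -tol], eig[eig < -tol], n_clipped

C_good = np.array([[4.0, 2.0], [2.0, 9.0]])   # a valid covariance matrix
C_bad = np.array([[1.0, 1.2], [1.2, 1.0]])    # correlation 1.2 > 1: inconsistent
pos, neg, clipped = check_covariance(C_good)
pos_b, neg_b, clipped_b = check_covariance(C_bad)
```

A negative eigenvalue or a correlation coefficient above 1.0 both signal an inconsistent covariance matrix, which is why the eigenvalue check is recommended before any use of the data.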

2. Diabat Interpolation for Polymorph Free-Energy Differences.

Science.gov (United States)

Kamat, Kartik; Peters, Baron

2017-02-02

Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

3. Interpolation method by whole body computed tomography, Artronix 1120

International Nuclear Information System (INIS)

Fujii, Kyoichi; Koga, Issei; Tokunaga, Mitsuo

1981-01-01

Reconstruction of whole-body CT images by an interpolation method with rapid scanning was investigated. An Artronix 1120 with a fixed collimator was used to obtain CT images every 5 mm. The X-ray source was circularly movable to keep the beam perpendicular to the detector. A length of 150 mm was scanned in about 15 min with a slice width of 5 mm. The images were reproduced every 7.5 mm, a spacing that could be reduced to every 1.5 mm when necessary. Out of 420 inspections of the chest, abdomen, and pelvis, 5 representative cases for which this method was valuable are described: fibrous histiocytoma of the upper mediastinum, left adrenal adenoma, left ureter fibroma, recurrence of colon cancer in the pelvis, and abscess around the rectum. The method improved the image quality of lesions in the vicinity of the ureters, main artery, and rectum. The time required and the exposure dose were reduced to 50% by this method. (Nakanishi, T.)
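
Reproducing slices at a finer spacing than acquired can be sketched with plain linear interpolation along the slice axis (a generic stand-in for the scanner's interpolation method; the tiny constant-valued volume is a made-up example):

```python
import numpy as np

def resample_slices(volume, z_in, z_out):
    """Linearly interpolate axial slices acquired at positions z_in (mm)
    onto the finer positions z_out. volume has shape (n_slices, H, W)."""
    out = np.empty((len(z_out),) + volume.shape[1:])
    for i, z in enumerate(z_out):
        j = int(np.clip(np.searchsorted(z_in, z) - 1, 0, len(z_in) - 2))
        t = (z - z_in[j]) / (z_in[j + 1] - z_in[j])
        out[i] = (1 - t) * volume[j] + t * volume[j + 1]
    return out

# Three 4x4 slices acquired 5 mm apart, resampled to 2.5 mm spacing
vol = np.stack([np.full((4, 4), v) for v in [0.0, 10.0, 20.0]])
z_in = np.array([0.0, 5.0, 10.0])
z_out = np.arange(0.0, 10.1, 2.5)
fine = resample_slices(vol, z_in, z_out)
```

Each intermediate slice is a weighted blend of its two acquired neighbours, which is why interpolation can halve scanning time and dose at the cost of some axial resolution.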

4. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

Directory of Open Access Journals (Sweden)

Changwei Ma

2015-01-01

Full Text Available The discrete Fourier transform (DFT) based maximum likelihood (ML) algorithm is an important tool for single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above the threshold value, the estimate will lie very close to the Cramer-Rao lower bound (CRLB), which is dependent on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only retains excellent generalization and fitting capabilities but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
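
A common baseline that such interpolation-based estimators are compared against is a coarse DFT peak search refined by parabolic interpolation of the log-magnitude bins (a generic textbook technique, not the LS-SVR method of the paper):

```python
import numpy as np

def freq_est_parabolic(x, fs):
    """Coarse DFT peak search refined by fitting a parabola through the
    log-magnitudes of the peak bin and its two neighbours; the parabola's
    vertex gives a fractional-bin frequency estimate."""
    X = np.abs(np.fft.rfft(x))
    k = int(np.argmax(X[1:-1])) + 1                  # coarse peak bin
    a, b, c = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    delta = 0.5 * (a - c) / (a - 2 * b + c)          # fractional bin offset
    return (k + delta) * fs / len(x)

fs = 1000.0
n = np.arange(1024)
x = np.sin(2 * np.pi * 123.4 * n / fs)               # true frequency: 123.4 Hz
f_hat = freq_est_parabolic(x, fs)
```

The refinement pushes the error well below the raw bin spacing (fs/N ≈ 0.98 Hz here) at the cost of three log evaluations, illustrating the accuracy-versus-cost trade-off the abstract discusses.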

5. Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia

Directory of Open Access Journals (Sweden)

Hassan Zulkarnain

2018-01-01

Full Text Available The inconsistency in inter-seasonal rainfall due to climate change causes different patterns in rainfall characteristics and distribution. Peninsular Malaysia is no exception to this inconsistency, which results in extreme events such as floods and water scarcity. This study evaluates the seasonal patterns in rainfall indices such as the total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. 40 years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the rainfall characteristics are formed predominantly during the Northeast monsoon (NEM), as compared to the Southwest monsoon (SWM). Also, there is high rainfall intensity and frequency of extremes over the eastern coast of the Peninsula during the NEM season.

6. Improving the River Discharge Calculation Method Using Cubic Spline Interpolation

Directory of Open Access Journals (Sweden)

Budi I. Setiawan

2007-09-01

Full Text Available This paper presents an improved method for measuring river discharge using cubic spline interpolation. The spline function is used to describe the river profile continuously from measured pairs of distance and depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is available via the Newton-Raphson method, which simplifies the calculation of area and perimeter when the water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. The paper presents an example discharge measurement for the Rudeng River in Aceh. The river is about 120 m wide and 7 m deep, had a discharge of 41.3 m3/s at the time of measurement, and its rating curve follows Q = 0.1649 H^2.884, where Q is the discharge (m3/s) and H is the water height above the river bed (m).
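The workflow can be sketched with SciPy's `CubicSpline`; the cross-section points, freeboard, Manning roughness and slope below are hypothetical stand-ins for the surveyed data, and the rating-curve fit is omitted:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.integrate import trapezoid

# hypothetical (distance, depth) survey pairs across a river section
dist = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0, 120.0])   # m
depth = np.array([0.0, 3.0, 6.0, 7.0, 6.5, 4.0, 0.0])          # m below bank
bed = CubicSpline(dist, depth)            # continuous bed profile

freeboard = 1.0                           # water surface 1 m below bank, hypothetical
xs = np.linspace(0.0, 120.0, 2001)
w = np.clip(bed(xs) - freeboard, 0.0, None)   # wetted depth profile
area = trapezoid(w, xs)                   # flow area (m^2)

dy = bed(xs, 1)                           # spline first derivative
wet = w > 0
perimeter = trapezoid(np.sqrt(1.0 + dy[wet] ** 2), xs[wet])  # wetted perimeter (m)

n_manning, slope = 0.03, 1e-4             # hypothetical roughness and bed slope
Q = area * (area / perimeter) ** (2.0 / 3.0) * np.sqrt(slope) / n_manning
```

The advantage over a piecewise-linear profile is that area, perimeter and their derivatives are all smooth functions of the water level, which is what makes the Newton-Raphson inverse mentioned in the abstract well behaved.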

7. An algorithm for centerline extraction using natural neighbour interpolation

DEFF Research Database (Denmark)

Mioc, Darka; Antón Castro, Francesc/François; Dharmaraj, Girija

2004-01-01

, especially due to the lack of explicit topology in commercial GIS systems. Indeed, each map update might require the batch processing of the whole map. Currently, commercial GIS do not offer completely automatic raster/vector conversion even for simple scanned black and white maps. Various commercial raster...... they need user-defined tolerance settings, which causes difficulties in the extraction of complex spatial features, for example road junctions, curved or irregular lines and complex intersections of linear features. The approach we use here is based on image processing filtering techniques to extract...... to the improvement of data capture and conversion in GIS and to develop a software toolkit for automated raster/vector conversion. The approach is based on computing the skeleton from Voronoi diagrams using natural neighbour interpolation. In this paper we present the algorithm for skeleton extraction from scanned...

8. Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia

Science.gov (United States)

Hassan, Zulkarnain; Haidir, Ahmad; Saad, Farah Naemah Mohd; Ayob, Afizah; Rahim, Mustaqqim Abdul; Ghazaly, Zuhayr Md.

2018-03-01

The inconsistency in inter-seasonal rainfall due to climate change will cause different patterns in rainfall characteristics and distribution. Peninsular Malaysia is no exception to this inconsistency, which results in extreme events such as floods and water scarcity. This study evaluates the seasonal patterns of rainfall indices such as the total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. Forty years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the formation of rainfall characteristics is significant during the Northeast monsoon (NEM), as compared to the Southwest monsoon (SWM). There is also a high extreme-related rainfall intensity and frequency over the eastern coast of the Peninsula during the NEM season.

9. On the exact interpolating function in ABJ theory

Energy Technology Data Exchange (ETDEWEB)

Cavaglià, Andrea [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); Gromov, Nikolay [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); St. Petersburg INP,Gatchina, 188 300, St.Petersburg (Russian Federation); Levkovich-Maslyuk, Fedor [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); Nordita, KTH Royal Institute of Technology and Stockholm University,Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)

2016-12-16

Based on the recent indications of integrability in the planar ABJ model, we conjecture an exact expression for the interpolating function h(λ₁, λ₂) in this theory. Our conjecture is based on the observation that the integrability structure of the ABJM theory given by its Quantum Spectral Curve is very rigid and does not allow for a simple consistent modification. Under this assumption, we revised the previous comparison of localization results and exact all-loop integrability calculations done for the ABJM theory by one of the authors and Grigory Sizov, fixing h(λ₁, λ₂). We checked our conjecture against various weak coupling expansions, at strong coupling, and also demonstrated its invariance under the Seiberg-like duality. This match also gives further support to the integrability of the model. If our conjecture is correct, it extends all the available integrability results in the ABJM model to the ABJ model.

10. A fast and accurate dihedral interpolation loop subdivision scheme

Science.gov (United States)

Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

2018-04-01

In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to avoid surface shrinkage, we keep the limit condition unchanged. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly because the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach uses local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method on various 3D triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.

11. Differential maps, difference maps, interpolated maps, and long term prediction

International Nuclear Information System (INIS)

Talman, R.

1988-06-01

Mapping techniques may be thought to be attractive for the long term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solvable, example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of ''appreciable'' nonlinearity, it is shown to be impractical to achieve ''accurate'' prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the ''leading'' deviation from linearity. 41 refs., 6 figs

12. Hybrid kriging methods for interpolating sparse river bathymetry point data

Directory of Open Access Journals (Sweden)

Pedro Velloso Gomes Batista

Full Text Available ABSTRACT Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation varies abruptly transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. To assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
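A minimal regression-kriging-style sketch of the covariate idea: regress bed elevation on distance to the centerline, then interpolate the residuals. The survey pairs are hypothetical, and simple inverse-distance weighting stands in for the residual kriging step:

```python
import numpy as np

# hypothetical transect: distance to thalweg (m) vs. surveyed bed elevation (m)
d_known = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
z_bed = np.array([-7.0, -6.2, -4.9, -3.1, -1.2])

# trend part of RK: a linear model z ~ a*d + b for the transverse profile
a, b = np.polyfit(d_known, z_bed, 1)
resid = z_bed - (a * d_known + b)

def rk_predict(d_query, power=2.0):
    # residual interpolation by IDW (a simplification of residual kriging)
    w = 1.0 / np.maximum(np.abs(d_known - d_query), 1e-9) ** power
    return float(a * d_query + b + w @ resid / w.sum())

pred = rk_predict(7.5)     # predict bed elevation between two survey points
```

The trend captures the systematic deepening toward the thalweg that the abstract describes, so the stochastic part only has to model the (much smaller) residuals.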

13. Improving the accuracy of livestock distribution estimates through spatial interpolation.

Science.gov (United States)

Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

2012-11-01

Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level). By applying spatial interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).

14. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

Directory of Open Access Journals (Sweden)

J.-M. Beckers

2014-10-01

Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematical equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI satellite images using data interpolating empirical orthogonal functions (DINEOF with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.

15. Leak Isolation in Pressurized Pipelines using an Interpolation Function to approximate the Fitting Losses

Science.gov (United States)

Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.

2017-01-01

The present paper is motivated by the detection and isolation of a single leak using the Fault Model Approach (FMA), focused on pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline only considers straight geometries without fittings. To address this situation, several papers work with a virtual model of the pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated in a virtual length, which for practical reasons does not represent a complete solution. As a solution to the problem of leak isolation in a virtual length, this research proposes the use of a polynomial interpolation function to approximate the conversion of the virtual position to a real-coordinate value. Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.
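The virtual-to-real conversion can be sketched with an ordinary polynomial fit; the calibration pairs and leak position below are hypothetical, not the prototype's values:

```python
import numpy as np

# hypothetical calibration: positions of known landmarks (valves, fittings)
# expressed both in the fitting-equivalent "virtual" length and in real metres
z_virtual = np.array([0.0, 55.0, 118.0, 176.0, 240.0])   # m, model coordinates
z_real = np.array([0.0, 50.0, 100.0, 150.0, 200.0])      # m, physical coordinates

coeffs = np.polyfit(z_virtual, z_real, deg=3)            # cubic fit of the mapping
to_real = np.poly1d(coeffs)

leak_virtual = 130.0          # leak position isolated by the FMA observer (virtual m)
leak_real = float(to_real(leak_virtual))                 # position a crew can dig at
```

The polynomial absorbs the nonuniform stretching that the equivalent-straight-length trick introduces, so the observer can keep working in the simple straight-pipe model while reports are issued in physical coordinates.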

16. Use of Adipose-Derived Mesenchymal Stem Cells to Accelerate Neovascularization in Interpolation Flaps.

Science.gov (United States)

Izmirli, Hakki Hayrettin; Alagoz, Murat Sahin; Gercek, Huseyin; Eren, Guler Gamze; Yucel, Ergin; Subasi, Cansu; Isgoren, Serkan; Muezzinoglu, Bahar; Karaoz, Erdal

2016-01-01

Interpolation flaps are commonly used in plastic surgery to cover wide and deep defects. The need to wait 2 to 3 weeks until division of the pedicle, however, poses a serious challenge, not only extending treatment and hospital stay but also increasing hospital expenses. To solve this problem, we have aimed to use the angiogenic potential of stem cells to selectively accelerate neovascularization, with a view to increasing the viability of interpolation flaps and achieving early pedicle removal. A total of 32 rats were allocated to a control group (N = 16) and an experimental group (N = 16). Cranial flaps 6 × 5 cm in size located on the backs of the rats were raised. Then, a suspension containing 3 × 10^6 adipose-derived mesenchymal stem cells (ADSC) tagged with green fluorescent protein (GFP) was injected diffusely into the distal part of the flap, the receiving bed, and the wound edges. In the control group, only a medium solution was injected into the same sites. After covering the 3 × 5 cm region in the proximal part of the area where the flap was removed, the distal part of the flap was adapted to the uncovered distal area. The pedicles of 4 rats in each group were divided on postoperative days 5, 8, 11, and 14. The areas were photographed 7 days after the pedicles were released. The photographs were processed using Adobe Acrobat 9 Pro software (San Jose, CA) to measure the flap survival area in millimeters and to compare groups. Seven days after the flap pedicle was divided, the rats were injected with 250 mCi Tc-99m MIBI (methoxy-isobutyl-isonitrile) via the penile vein, and scintigraphic images were obtained. The images obtained from each group were subjected to a numerical evaluation, which was then used in the comparison between groups. The flaps were then examined by histology to numerically compare the number of newly formed vessels. Neovascularization was also assessed by microangiography. In addition, radiographic images were obtained by

17. Comparison of multimesh hp-FEM to interpolation and projection methods for spatial coupling of thermal and neutron diffusion calculations

International Nuclear Information System (INIS)

Dubcova, Lenka; Solin, Pavel; Hansen, Glen; Park, HyeongKae

2011-01-01

Multiphysics solution challenges are legion within the field of nuclear reactor design and analysis. One major issue concerns the coupling between heat and neutron flow (neutronics) within the reactor assembly. These phenomena are usually very tightly interdependent, as large amounts of heat are quickly produced with an increase in fission events within the fuel, which raises the temperature that affects the neutron cross section of the fuel. Furthermore, there typically is a large diversity of time and spatial scales between mathematical models of heat and neutronics. Indeed, the different spatial resolution requirements often lead to the use of very different meshes for the two phenomena. As the equations are coupled, one must take care in exchanging solution data between them, or significant error can be introduced into the coupled problem. We propose a novel approach to the discretization of the coupled problem on different meshes based on an adaptive multimesh higher-order finite element method (hp-FEM), and compare it to popular interpolation and projection methods. We show that the multimesh hp-FEM method is significantly more accurate than the interpolation and projection approaches considered in this study.
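The data-exchange step that the paper analyzes can be illustrated with the simplest transfer operator, piecewise-linear interpolation between non-matching 1-D meshes; the meshes and field below are illustrative, not the paper's reactor problem:

```python
import numpy as np

# coarse "thermal" mesh and fine "neutronics" mesh that do not share nodes
coarse = np.linspace(0.0, 1.0, 11)
fine = np.linspace(0.0, 1.0, 101)

T_coarse = np.sin(np.pi * coarse)             # temperature known on coarse mesh
T_transfer = np.interp(fine, coarse, T_coarse)  # linear transfer onto fine mesh

# the transfer itself injects O(h^2) error even though both fields are exact
err = float(np.max(np.abs(T_transfer - np.sin(np.pi * fine))))
```

Even with exact nodal data, the interpolation operator introduces its own discretization error into the coupled problem; the paper's multimesh hp-FEM avoids exactly this kind of loss by assembling both fields on a shared adaptive framework.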

18. Can a polynomial interpolation improve on the Kaplan-Yorke dimension?

International Nuclear Information System (INIS)

Richter, Hendrik

2008-01-01

The Kaplan-Yorke dimension can be derived using a linear interpolation between an h-dimensional Lyapunov exponent λ^(h) > 0 and an (h+1)-dimensional Lyapunov exponent λ^(h+1) < 0. In this Letter, we use a polynomial interpolation to obtain generalized Lyapunov dimensions and study the relationships among them for higher-dimensional systems
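The linear case that the Letter generalizes is the classic Kaplan-Yorke formula, which can be computed directly from a Lyapunov spectrum:

```python
import numpy as np

def kaplan_yorke(exponents):
    """Kaplan-Yorke dimension: linear interpolation between the cumulative
    exponent sums at h (still non-negative) and h+1 (negative)."""
    lam = np.sort(np.asarray(exponents, dtype=float))[::-1]  # descending order
    csum = np.cumsum(lam)
    h = int(np.max(np.nonzero(csum >= 0)[0])) + 1   # largest h with sum >= 0
    return h + csum[h - 1] / abs(lam[h])

# Lorenz-like spectrum (values are illustrative, not computed here)
d_ky = kaplan_yorke([0.9, 0.0, -14.6])   # ≈ 2.06
```

The polynomial interpolation of the Letter replaces the linear segment between the two cumulative sums with a higher-order fit, yielding a family of generalized dimensions; the sketch above covers only the standard linear member.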

19. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

DEFF Research Database (Denmark)

Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

2010-01-01

Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using, for example, linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linked...

20. Application of ordinary kriging for interpolation of micro-structured technical surfaces

International Nuclear Information System (INIS)

Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg

2013-01-01

Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of different effects. These data points need to be interpolated to obtain a complete area to enable further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of the different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model that provides an unbiased, linear estimate with a minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface. This comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adapts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
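A minimal ordinary-kriging sketch for filling one invalid data point from valid neighbours; the exponential variogram and its parameters are assumptions, not taken from the paper:

```python
import numpy as np

def variogram(h, sill=1.0, rng=5.0):
    """Exponential variogram model (assumed, for illustration)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(pts, vals, query):
    """Solve the ordinary kriging system with a Lagrange multiplier that
    forces the weights to sum to one (unbiasedness)."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                      # Lagrange multiplier corner
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - query, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ vals)

# four valid height samples around a defective pixel (values hypothetical)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 2.0, 3.0])
z_hat = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))
```

Anisotropy, as discussed in the paper, would enter by replacing the isotropic distance in the variogram with a direction-dependent one; the sketch keeps the isotropic case.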

1. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

Science.gov (United States)

Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

2017-11-01

Medical image three-dimensional (3D) interpolation is an important means of improving the image quality in 3D reconstruction. In image processing, time-frequency domain transforms are efficient tools. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. We combine the wavelet transform, traditional matching interpolation methods, and Sobel edge detection in our algorithm, exploiting the characteristics of the wavelet transform and the Sobel operator by processing the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring that the high-frequency content remains undistorted. Through wavelet reconstruction, the target interpolated image is obtained. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.

2. Okounkov's BC-Type Interpolation Macdonald Polynomials and Their q=1 Limit

NARCIS (Netherlands)

Koornwinder, T.H.

2015-01-01

This paper surveys eight classes of polynomials associated with A-type and BC-type root systems: Jack, Jacobi, Macdonald and Koornwinder polynomials and interpolation (or shifted) Jack and Macdonald polynomials and their BC-type extensions. Among these the BC-type interpolation Jack polynomials were

3. Interpolation in Time Series : An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

NARCIS (Netherlands)

Lepot, M.J.; Aubin, Jean Baptiste; Clemens, F.H.L.R.

2017-01-01

A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are

4. Spatiotemporal interpolation of elevation changes derived from satellite altimetry for Jakobshavn Isbræ, Greenland

DEFF Research Database (Denmark)

Hurkmans, R.T.W.L.; Bamber, J.L.; Sørensen, Louise Sandberg

2012-01-01

. In those areas, straightforward interpolation of data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper...

5. Interaction-Strength Interpolation Method for Main-Group Chemistry : Benchmarking, Limitations, and Perspectives

NARCIS (Netherlands)

Fabiano, E.; Gori-Giorgi, P.; Seidl, M.W.J.; Della Sala, F.

2016-01-01

We have tested the original interaction-strength-interpolation (ISI) exchange-correlation functional for main group chemistry. The ISI functional is based on an interpolation between the weak and strong coupling limits and includes exact-exchange as well as the Görling–Levy second-order energy. We

6. Researches Regarding The Circular Interpolation Algorithms At CNC Laser Cutting Machines

Science.gov (United States)

Tîrnovean, Mircea Sorin

2015-09-01

This paper presents an integrated simulation approach for studying the circular interpolation regime of CNC laser cutting machines. The circular interpolation algorithm is studied, taking into consideration the numerical character of the system. A simulation diagram, which is able to generate the kinematic inputs for the feed drives of the CNC laser cutting machine is also presented.
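A coarse software circular interpolator can be sketched as follows; the radius, step size and segment count are hypothetical, and this generic reference-point scheme is a stand-in for, not a reproduction of, the algorithm studied in the paper:

```python
import math

def arc_steps(radius, step_size, n_segments=100):
    """Walk a quarter arc in fixed angular increments and emit integer axis
    step commands, tracking the commanded (quantized) position so rounding
    errors do not accumulate."""
    x_prev, y_prev = radius, 0.0
    cmds = []
    for i in range(1, n_segments + 1):
        a = (math.pi / 2) * i / n_segments           # next reference point
        x, y = radius * math.cos(a), radius * math.sin(a)
        dx = round((x - x_prev) / step_size)         # steps for the X drive
        dy = round((y - y_prev) / step_size)         # steps for the Y drive
        cmds.append((dx, dy))
        x_prev += dx * step_size                     # commanded position
        y_prev += dy * step_size
    return cmds, (x_prev, y_prev)

cmds, end = arc_steps(radius=10.0, step_size=0.01)   # mm, hypothetical
```

Because each increment is computed against the commanded rather than the ideal position, the contour error stays bounded by half a step per axis; this is the numerical character of the system that the simulation diagram in the paper has to capture.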

7. Interpolation of polytopic control Lyapunov functions for discrete–time linear systems

NARCIS (Netherlands)

Nguyen, T.T.; Lazar, M.; Spinu, V.; Boje, E.; Xia, X.

2014-01-01

This paper proposes a method for interpolating two (or more) polytopic control Lyapunov functions (CLFs) for discrete--time linear systems subject to polytopic constraints, thereby combining different control objectives. The corresponding interpolated CLF is used for synthesis of a stabilizing

8. Abstract interpolation in vector-valued de Branges-Rovnyak spaces

NARCIS (Netherlands)

Ball, J.A.; Bolotnikov, V.; ter Horst, S.

2011-01-01

Following ideas from the Abstract Interpolation Problem of Katsnelson et al. (Operators in spaces of functions and problems in function theory, vol 146, pp 83–96, Naukova Dumka, Kiev, 1987) for Schur class functions, we study a general metric constrained interpolation problem for functions from a

9. A Hybrid Interpolation Method for Geometric Nonlinear Spatial Beam Elements with Explicit Nodal Force

Directory of Open Access Journals (Sweden)

Huiqing Fang

2016-01-01

Full Text Available Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometrically nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline is used to calculate the nodal curvatures at the two ends. Then, the internal curvatures of the beam are interpolated with a second interpolation. At this point, C1 continuity is satisfied and nodal strain measures can be consistently derived from nodal displacement and rotation parameters. The explicit expression of the nodal force, as a function of global parameters and free of integration, is derived using the hybrid interpolation. Furthermore, the proposed beam element degenerates into a linear beam element under the condition of small deformation. Objectivity of the strain measures and patch tests are also discussed. Finally, four numerical examples are discussed to prove the validity and effectiveness of the proposed beam element.
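The first interpolation stage rests on the standard cubic Hermite shape functions, which can be written down directly; the end displacements and slopes below are illustrative:

```python
import numpy as np

def hermite(xi, w0, t0, w1, t1, L=1.0):
    """Cubic Hermite interpolation of transverse displacement over a beam
    segment, from end displacements (w0, w1) and end slopes (t0, t1);
    xi is the normalized coordinate in [0, 1]."""
    h00 = 1 - 3 * xi**2 + 2 * xi**3      # shape function for w0
    h10 = xi - 2 * xi**2 + xi**3         # shape function for L*t0
    h01 = 3 * xi**2 - 2 * xi**3          # shape function for w1
    h11 = -xi**2 + xi**3                 # shape function for L*t1
    return h00 * w0 + L * h10 * t0 + h01 * w1 + L * h11 * t1

xi = np.linspace(0.0, 1.0, 5)
w = hermite(xi, w0=0.0, t0=0.1, w1=0.02, t1=0.0)   # illustrative end values
```

Differentiating this field twice gives the curvature used in the element's first stage; the paper's hybrid scheme then re-interpolates those nodal curvatures along the interior with a second function.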

10. Fast digital zooming system using directionally adaptive image interpolation and restoration.

Science.gov (United States)

Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

2014-01-01

This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

11. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

International Nuclear Information System (INIS)

Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

2006-01-01

Numeric control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The result verifies that this method can ensure the smoothness of the interpolating spline curve and preserve the original shape characteristics. The surface quality obtained with cubic NURBS interpolation is higher than with linear interpolation. The algorithm is beneficial to increasing the surface form precision of workpieces in ultra-precision machining.

12. Comment on: Path integral solution of the Schroedinger equation in curvilinear coordinates: A straightforward procedure [J. Math. Phys. 37, 4310–4319 (1996)]

International Nuclear Information System (INIS)

Wurm, A.; LaChapelle, J.

1997-01-01

The authors comment on the paper by J. LaChapelle, J. Math. Phys. 37, 4310 (1996), and give explicit expressions for the parametrization, its solution, and the Lie derivatives of the Schroedinger equation for the case of n-dimensional spherical coordinates

13. A numerical model for the determination of periodic solutions of pipes subjected to non-conservative loads

International Nuclear Information System (INIS)

Velloso, P.A.; Galeao, A.C.

1989-05-01

This paper deals with nonlinear vibrations of pipes subjected to non-conservative loads. Periodic solutions of these problems are determined using a variational approach based on Hamilton's Principle combined with a Fourier series expansion to describe the time dependence of the displacement field. A finite element model which utilizes Hermite's cubic interpolation for both axial and transversal displacement amplitudes is used. This model is applied to the problem of a pipe subjected to a tangential and a normal follower force. The numerical results obtained with this model are compared with the corresponding solutions determined using a total Lagrangian description of the Principle of Virtual Work, coupled with Newmark's step-by-step integration procedure. It is shown that for small to moderate displacement amplitudes the one-term Fourier series approximation compares fairly well with the predicted solution. For large displacements at least a two-term approximation should be utilized.

14. Solution of the non-stationary electron Boltzmann equation for a weakly ionized collision dominated plasma

International Nuclear Information System (INIS)

Winkler, R.; Wilhelm, J.

A detailed description is presented of the calculation of the nonstationary electron distribution function in a weakly ionized collision-dominated plasma from the Boltzmann kinetic equation, taking into account the effects of the time-dependent electric field, collision processes, and electron formation and loss. A finite difference approximation was used for the numerical solution. Using the Crank-Nicolson method and parabolic interpolation between the grid points, the Boltzmann equation was transformed into a system of linear equations which was then solved by iteration to a preset accuracy. Using the calculated distribution function values, the macroscopic plasma parameters were determined and the balance of electron density and energy was checked in each time step. The mathematical procedure is illustrated for a neon plasma perturbed by a rectangular electric pulse. The time development of the distribution function at the moments when the pulse was switched on and off demonstrates the great stability of the numerical solution. (J.U.)
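The Crank-Nicolson discretisation named above can be illustrated on a 1-D diffusion-type model problem; this is only the scheme's skeleton (the actual kinetic operator is far richer), and all parameters are illustrative:

```python
import numpy as np

# Crank-Nicolson for u_t = D u_xx with homogeneous Dirichlet boundaries:
# average the implicit and explicit spatial operators for 2nd-order accuracy
# in time and unconditional stability.
n, dx, dt, D = 50, 0.02, 1e-4, 1.0
r = D * dt / dx**2
off = np.full(n - 1, -r / 2)
A = np.diag(np.full(n, 1 + r)) + np.diag(off, 1) + np.diag(off, -1)   # implicit side
B = np.diag(np.full(n, 1 - r)) + np.diag(-off, 1) + np.diag(-off, -1) # explicit side

x = np.arange(n) * dx
u = np.exp(-((x - 0.5) ** 2) / 0.01)      # initial profile (Gaussian)
for _ in range(100):
    u = np.linalg.solve(A, B @ u)         # one tridiagonal solve per step
```

In the plasma problem the same pattern holds, except that the matrix entries come from the collision terms and the parabolic interpolation between energy-grid points, and they are rebuilt whenever the electric field changes.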

15. On the exact solution for the multi-group kinetic neutron diffusion equation in a rectangle

International Nuclear Information System (INIS)

Petersen, C.Z.; Vilhena, M.T.M.B. de; Bodmann, B.E.J.

2011-01-01

In this work we consider the two-group bi-dimensional kinetic neutron diffusion equation. The solution procedure formalism is general with respect to the number of energy groups, neutron precursor families and regions with different chemical compositions. The fast and thermal flux and the delayed neutron precursor yields are expanded in a truncated double series in terms of eigenfunctions which, upon insertion into the kinetic equation and after taking moments, yields a first-order linear differential matrix equation with source terms. We split the matrix appearing in the transformed problem into a sum of a diagonal matrix plus the matrix containing the remaining terms and recast the transformed problem into a form that can be solved in the spirit of Adomian's recursive decomposition formalism. Convergence of the solution is guaranteed by the Cardinal Interpolation Theorem. We give numerical simulations and comparisons with available results in the literature. (author)

16. Ion binding by humic and fulvic acids: A computational procedure based on functional site heterogeneity and the physical chemistry of polyelectrolyte solutions

International Nuclear Information System (INIS)

Marinsky, J.A.; Reddy, M.M.; Ephraim, J.; Mathuthu, A.

1988-04-01

Ion binding equilibria for humic and fulvic acids are examined from the point of view of functional site heterogeneity and the physical chemistry of polyelectrolyte solutions. A detailed explanation of the potentiometric properties of synthetic polyelectrolytes and ion-exchange gels is presented first to provide the basis for a parallel consideration of the potentiometric properties exhibited by humic and fulvic acids. The treatment is then extended to account for functional site heterogeneity. Sample results are presented for analysis of the ion-binding reactions of a standard soil fulvic acid (Armadale Horizons Bh) with this approach to test its capability for anticipation of metal ion removal from solution. The ultimate refined model is shown to be adaptable, after appropriate consideration of the heterogeneity and polyelectrolyte factors, to programming already available for the consideration of ion binding by inorganics in natural waters. (orig.)

17. Exploring a new S U (4 ) symmetry of meson interpolators

Science.gov (United States)

Glozman, L. Ya.; Pak, M.

2015-07-01

In recent lattice calculations it has been discovered that mesons, upon truncation of the quasizero modes of the Dirac operator, obey a symmetry larger than the SU(2)_L×SU(2)_R×U(1)_A symmetry of the QCD Lagrangian. This symmetry has been suggested to be SU(4)⊃SU(2)_L×SU(2)_R×U(1)_A, which mixes not only the u- and d-quarks of a given chirality, but also the left- and right-handed components. Here it is demonstrated that bilinear q̄q interpolating fields of a given spin J≥1 transform into each other according to irreducible representations of SU(4) or, in general, SU(2N_F). This fact, together with the coincidence of the correlation functions, establishes SU(4) as a symmetry of the J≥1 mesons upon quasizero mode reduction. It is shown that this symmetry is a symmetry of the confining instantaneous charge-charge interaction in QCD. Different subgroups of SU(4) as well as the SU(4) algebra are explored.

18. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

Energy Technology Data Exchange (ETDEWEB)

Bouland, Adam; Easther, Richard; Rosenfeld, Katherine, E-mail: adam.bouland@aya.yale.edu, E-mail: richard.easther@yale.edu, E-mail: krosenfeld@cfa.harvard.edu [Department of Physics, Yale University, New Haven CT 06520 (United States)

2011-05-01

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
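The core idea, replacing an expensive likelihood with a polynomial fitted to early chain samples, can be sketched in a few lines. This is an illustrative stand-in (the function `expensive_loglike` and all settings are invented), not the InterpMC patch itself:

```python
import numpy as np

# Toy version of the surrogate idea behind InterpMC (not the actual patch):
# fit a polynomial to early log-likelihood evaluations, then use the cheap
# polynomial in place of the expensive likelihood for later chain points.
rng = np.random.default_rng(0)

def expensive_loglike(theta):          # stand-in for a costly likelihood
    return -0.5 * (theta - 1.2)**2

train_x = rng.uniform(-3, 3, 50)               # "first points" of the chain
train_y = expensive_loglike(train_x)
coeffs = np.polynomial.polynomial.polyfit(train_x, train_y, deg=4)

def surrogate(theta):                  # used for the latter parts of the chain
    return np.polynomial.polynomial.polyval(theta, coeffs)

err = abs(surrogate(0.5) - expensive_loglike(0.5))
```

Here the log-likelihood is itself polynomial, so the fit is essentially exact; in practice the fit error must be monitored, which is part of what the real code does.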

19. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

International Nuclear Information System (INIS)

Bouland, Adam; Easther, Richard; Rosenfeld, Katherine

2011-01-01

We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user

20. Statistical analysis and interpolation of compositional data in materials science.

Science.gov (United States)

Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

2015-02-09

Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
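A standard CDA device for making simplex-valued data amenable to Euclidean operations is the centered log-ratio (clr) transform. The sketch below is a generic illustration of that idea, not code from the paper; the example compositions are invented:

```python
import numpy as np

# Centered log-ratio (clr) transform from compositional data analysis:
# maps compositions on the simplex to ordinary Euclidean coordinates,
# where means, distances and interpolation are well defined.
def clr(x):
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))     # geometric mean of the parts
    return np.log(x / g)

def clr_inv(z):
    e = np.exp(z)
    return e / e.sum()                 # back onto the simplex (closure)

a = np.array([0.2, 0.3, 0.5])
b = np.array([0.6, 0.1, 0.3])
# interpolate halfway in clr space, then map back to a valid composition
mid = clr_inv(0.5 * (clr(a) + clr(b)))
```

Interpolating in clr coordinates guarantees the result is again a non-negative composition summing to one, which naive component-wise averaging does not in general respect after subcomposition or reweighting.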

1. Insect brains use image interpolation mechanisms to recognise rotated objects.

Directory of Open Access Journals (Sweden)

Full Text Available Recognising complex three-dimensional objects presents significant challenges to visual systems when these objects are rotated in depth. The image processing requirements for reliable individual recognition under these circumstances are computationally intensive, since local features and their spatial relationships may significantly change as an object is rotated in the horizontal plane. Visual experience is known to be important in primate brains learning to recognise rotated objects, but it is currently unknown how animals with comparatively simple brains deal with the problem of reliably recognising objects when seen from different viewpoints. We show that the miniature brain of honeybees initially demonstrates a low tolerance for novel views of complex shapes (e.g. human faces), but can learn to recognise novel views of stimuli by interpolating between, or 'averaging', views they have experienced. The finding that visual experience is also important for bees has important implications for understanding how three-dimensional biologically relevant objects like flowers are recognised in complex environments, and for how machine vision might be taught to solve related visual problems.

2. Combining the Hanning windowed interpolated FFT in both directions

Science.gov (United States)

Chen, Kui Fu; Li, Yan Feng

2008-06-01

The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on the extent to which the sampling deviates from the coherent condition; at best, the variance is reduced by 2/7. However, it is also shown that the estimation variance of the IFFT with the Hanning window is significantly higher than that without windowing.
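For reference, the two-spectral-line, modulus-based IFFT that this paper improves upon can be written compactly. The sketch below uses Grandke's classic interpolation formula for the Hanning window; the signal parameters are arbitrary, and the paper's three-line complex-spectrum estimator is not reproduced here:

```python
import numpy as np

# Two-line modulus-based interpolated FFT (Grandke's formula for the
# Hanning window): correct the peak bin index by a fractional offset
# derived from the ratio of the two highest spectral lines.
def hann_ifft_freq(x, fs):
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1          # highest spectral line
    # pick the larger neighbour as the 2nd line; sign gives the direction
    if spec[k + 1] >= spec[k - 1]:
        alpha = spec[k + 1] / spec[k]
        delta = (2 * alpha - 1) / (alpha + 1)
    else:
        alpha = spec[k - 1] / spec[k]
        delta = -(2 * alpha - 1) / (alpha + 1)
    return (k + delta) * fs / n

fs, n = 1000.0, 1024
t = np.arange(n) / fs
f_est = hann_ifft_freq(np.sin(2 * np.pi * 123.4 * t), fs)
```

With 123.4 Hz falling between bins (bin spacing ~0.98 Hz), the plain FFT peak would be off by up to half a bin, while the interpolated estimate recovers the frequency to well under a tenth of a bin.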

3. Off-site radiation exposure review project: computer-aided surface interpolation and graphical display

International Nuclear Information System (INIS)

Foley, T.A. Jr.

1981-08-01

This report presents the implementation of an iterative procedure that solves the following bivariate interpolation problem: given N distinct points in the plane (x_i, y_i) and N real numbers Z_i, construct a function F(x,y) that satisfies F(x_i, y_i) = Z_i, for i = 1, ..., N. This problem can be interpreted as fitting a surface through N points in three-dimensional space. The application of primary concern to the Offsite Radiation Exposure Review Project is the characterization of the radionuclide activity resulting from nuclear tests. Samples of activity were measured at various locations. The location of a sample point is represented by (x_i, y_i), and the magnitude of the reading is represented by Z_i. The method presented in this report is constructed to be efficient on large data sets, stable under large variations of the Z_i magnitudes, and capable of smoothly filling in areas that are void of data.
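The interpolation problem stated above (construct F with F(x_i, y_i) = Z_i from scattered data) can be illustrated with Shepard's inverse-distance weighting, one of the simplest schemes with the required interpolation property. This is a generic sketch, not the report's iterative procedure:

```python
import numpy as np

# Shepard's inverse-distance weighting: a simple scattered-data
# interpolant F with F(x_i, y_i) = Z_i, illustrating the problem the
# report solves with a more elaborate iterative method.
def idw(xi, yi, zi, x, y, power=2.0):
    d2 = (xi - x)**2 + (yi - y)**2
    if np.any(d2 == 0):                      # exact hit on a data point
        return zi[np.argmin(d2)]
    w = d2 ** (-power / 2.0)
    return float(np.sum(w * zi) / np.sum(w))

xi = np.array([0.0, 1.0, 0.0, 1.0])
yi = np.array([0.0, 0.0, 1.0, 1.0])
zi = np.array([1.0, 2.0, 3.0, 4.0])
z_node = idw(xi, yi, zi, 1.0, 1.0)           # reproduces the data value
z_mid  = idw(xi, yi, zi, 0.5, 0.5)           # equidistant point: plain mean
```

IDW is stable under large Z_i variations but flattens toward the global mean far from the data, which is one reason methods like the report's are preferred for filling large voids smoothly.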

4. Coupled electrostatic-elastic analysis for topology optimization using material interpolation

International Nuclear Information System (INIS)

Alwan, A; Ananthasuresh, G K

2006-01-01

In this paper, we present a novel analytical formulation for the coupled partial differential equations governing electrostatically actuated constrained elastic structures of inhomogeneous material composition. We also present a computationally efficient numerical framework for solving the coupled equations over a reference domain with a fixed finite element mesh. This serves two purposes: (i) a series of problems with varying geometries and piece-wise homogeneous and/or inhomogeneous material distribution can be solved with a single pre-processing step; (ii) topology optimization methods can be easily implemented by interpolating the material at each point in the reference domain from a void to a dielectric or a conductor. This is attained by considering the steady-state electrical current conduction equation with a 'leaky capacitor' model instead of the usual electrostatic equation. This formulation is amenable to both static and transient problems in the elastic domain coupled with the quasi-electrostatic electric field. The procedure is numerically implemented on the COMSOL Multiphysics® platform using the weak variational form of the governing equations. Examples are presented to show the accuracy and versatility of the scheme. The accuracy of the scheme is validated for the special case of piece-wise homogeneous material in the limit of the leaky-capacitor model approaching the ideal case

5. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

Science.gov (United States)

Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

2018-04-01

In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

6. Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers

Directory of Open Access Journals (Sweden)

E. George Walters III

2015-11-01

Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place (ulp) of the product uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
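The structure of such a linear interpolator can be modeled in software: the leading bits of the input select a table segment, and a multiply-add evaluates that segment's linear fit. The sketch below uses simple endpoint-fit coefficients for 1/x on [1, 2), rather than the Chebyshev-derived, exhaustively tuned coefficients of the paper:

```python
import numpy as np

# Software model of a segment-table linear interpolator for 1/x on [1, 2):
# the input's top bits select a segment, the remaining bits feed a
# multiply-add, mirroring the hardware structure. Coefficients here are
# plain endpoint fits, not the optimized values of the paper.
SEGS = 64
xs = 1.0 + np.arange(SEGS + 1) / SEGS        # segment boundaries
y0 = 1.0 / xs[:-1]                           # table of intercepts
slope = (1.0 / xs[1:] - 1.0 / xs[:-1]) * SEGS  # table of slopes

def recip_interp(x):                          # x in [1, 2)
    i = min(int((x - 1.0) * SEGS), SEGS - 1)  # table index from top bits
    return y0[i] + slope[i] * (x - xs[i])

err = max(abs(recip_interp(x) - 1.0 / x)
          for x in np.linspace(1.0, 1.999, 2000))
```

With 64 segments the worst-case chord error is about h²·max|f''|/8 ≈ 6e-5, i.e. roughly 14 correct bits; the paper's designs add coefficient tuning and truncated arithmetic on top of this basic structure.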

7. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

Science.gov (United States)

Pearce, Mark A

2015-08-01

EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
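The "most indexed neighbours first, then relax the threshold" ordering can be sketched on a toy grid. The code below omits the band-contrast restriction that EBSDinterp applies and simply averages indexed 4-neighbours, so it illustrates the iteration order only:

```python
import numpy as np

# Toy neighbour-count-ordered fill: NaN marks nonindexed points. Points
# with the most indexed 4-neighbours are filled first; the required count
# is relaxed only when no more points qualify. EBSDinterp additionally
# restricts filling by band contrast, which is omitted here.
def fill_by_neighbours(grid):
    g = grid.copy()
    for need in (4, 3, 2, 1):                 # required indexed neighbours
        while True:
            nan = np.isnan(g)
            if not nan.any():
                return g
            p = np.pad(g, 1, constant_values=np.nan)
            stack = np.stack([p[:-2, 1:-1], p[2:, 1:-1],
                              p[1:-1, :-2], p[1:-1, 2:]])
            count = np.sum(~np.isnan(stack), axis=0)
            fillable = nan & (count >= need)
            if not fillable.any():
                break                          # relax the threshold
            g[fillable] = np.nanmean(stack, axis=0)[fillable]
    return g

grid = np.array([[1.0, 1.0, 1.0],
                 [1.0, np.nan, 1.0],
                 [1.0, 1.0, np.nan]])
filled = fill_by_neighbours(grid)
```

The centre point (four indexed neighbours) is filled in the first pass; the corner (two) only after the threshold relaxes, mirroring the "best quality points first" ordering described above.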

8. Short-term prediction method of wind speed series based on fractal interpolation

International Nuclear Information System (INIS)

Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

2014-01-01

Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • Fractal extrapolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance for wind speed series, rescaled range analysis is used to analyze the fractal characteristics of the series. An improved fractal interpolation prediction method is proposed to predict wind speed series whose Hurst exponents are close to 1. An optimization function is designed, composed of the interpolation error and constraint terms on the vertical scaling factors of the fractal interpolation iterated function system. The chaos optimization algorithm is used to optimize this function and resolve the optimal vertical scaling factors. Exploiting self-similarity and scale invariance, fractal extrapolation prediction can be performed by extending the fractal characteristic from the internal interval to the external interval. Simulation results show that the fractal interpolation prediction method obtains better prediction results than other methods for wind speed series with fractal characteristics, and that the prediction performance of the proposed method can be improved further when the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series
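A fractal interpolation function is built from an iterated function system (IFS) of affine maps whose vertical scaling factors d_i are exactly the free parameters the paper optimizes. A minimal construction of those maps (with d_i fixed by hand rather than optimized, and invented sample data) looks like this:

```python
import numpy as np

# Fractal interpolation via an IFS: each affine map w_i takes the whole
# graph over [x_0, x_N] onto the piece over [x_{i-1}, x_i], with a free
# vertical scaling factor d_i (|d_i| < 1). The attractor of the IFS is a
# fractal curve passing through all the data points.
def ifs_maps(x, y, d):
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    maps = []
    for i in range(1, len(x)):
        a = (x[i] - x[i - 1]) / (xN - x0)
        e = (xN * x[i - 1] - x0 * x[i]) / (xN - x0)
        c = (y[i] - y[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
        f = (xN * y[i - 1] - x0 * y[i]
             - d[i - 1] * (xN * y0 - x0 * yN)) / (xN - x0)
        maps.append((a, e, c, d[i - 1], f))
    return maps

def apply_map(m, x, y):
    a, e, c, d, f = m
    return a * x + e, c * x + d * y + f

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 0.5, 2.0])
maps = ifs_maps(x, y, d=[0.3, 0.3, 0.3])
# each w_i sends the endpoints of the whole graph to the endpoints of piece i
left = apply_map(maps[0], x[0], y[0])
```

Iterating these maps (e.g. by the chaos game) generates the interpolating curve; making d_i similar to the scaling of the data, as the paper does via chaos optimization, controls its roughness.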

9. KTOE, KEDAK to ENDF/B Format Conversion with Linear Linear Interpolation

International Nuclear Information System (INIS)

Panini, Gian Carlo

1985-01-01

1 - Nature of physical problem solved: This code performs a fully automated translation from KEDAK into ENDF-4 or -5 format. Output is on tape in card image format. 2 - Method of solution: Before translation the reactions are sorted in the ENDF format order. The linear-linear interpolation rule is preserved. The resonance parameters, both resolved and unresolved, can also be translated, and a background cross section is formed as the difference between the contribution calculated from the parameters and the point-wise data given in the original file. Elastic angular distributions originally given in tabulated form are converted into Legendre polynomial coefficients. Energy distributions are calculated using a simple evaporation model with the temperature expressed as a function of the incident mass. 3 - Restrictions on the complexity of the problem: The existing restrictions on both KEDAK and ENDF have been applied to the array sizes used in the code, except for the number of points in a section, which in the ENDF format is limited to 5000. The code only translates one material at a time

10. Study on Meshfree Hermite Radial Point Interpolation Method for Flexural Wave Propagation Modeling and Damage Quantification

Directory of Open Access Journals (Sweden)

Full Text Available Abstract This paper investigates the numerical modeling of flexural wave propagation in Euler-Bernoulli beams using the Hermite-type radial point interpolation method (HRPIM) under the damage quantification approach. HRPIM employs radial basis functions (RBFs) and their derivatives for shape function construction as a meshfree technique. The performance of the multiquadric (MQ) RBF in assessing the reflection ratio was evaluated. HRPIM signals were compared with the theoretical and finite element responses. The results show that MQ is a suitable RBF for HRPIM and wave propagation, although the range of proper shape parameters is notable. The number of field nodes is the main parameter for accurate wave propagation modeling using HRPIM. The size of the support domain should be less than an upper bound in order to prevent high error. With regard to the number of quadrature points, the minimum number of points is adequate for a stable solution, but the existence of more points in the damage region does not necessarily lead to more accurate responses. It is concluded that the pure HRPIM, without any polynomial terms, is acceptable, but considering a few terms will improve the accuracy, even though more terms make the problem unstable and inaccurate.
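Plain multiquadric RBF interpolation, the basis on which HRPIM shape functions are built, can be sketched in 1-D as follows. This omits the Hermite (derivative) terms of HRPIM and uses an arbitrary test function and shape parameter:

```python
import numpy as np

# Multiquadric RBF interpolation in 1-D: collocate at the field nodes,
# solve for the expansion coefficients, then evaluate anywhere. This is
# the plain RBF interpolant, without HRPIM's Hermite/derivative terms.
def mq_interp(xi, fi, c=1.0):
    # phi_ij = sqrt((x_i - x_j)^2 + c^2), the multiquadric kernel
    phi = np.sqrt((xi[:, None] - xi[None, :])**2 + c**2)
    coef = np.linalg.solve(phi, fi)           # collocation at the nodes
    def f(x):
        return np.sqrt((x - xi)**2 + c**2) @ coef
    return f

xi = np.linspace(0.0, np.pi, 9)               # field nodes
f = mq_interp(xi, np.sin(xi), c=0.8)          # c is the MQ shape parameter
err = abs(f(1.0) - np.sin(1.0))
```

The shape parameter c trades accuracy against conditioning of the collocation matrix, which is the sensitivity the abstract refers to when it notes the range of proper shape parameters.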

11. ZZ POINT-2007, linearly interpolable ENDF/B-VII.0 data for 14 temperatures

International Nuclear Information System (INIS)

Cullen, Dermott E.

2007-01-01

A - Description or function: The ENDF/B data library, ENDF/B-VII.0, was processed into the form of temperature-dependent cross sections. The original evaluated data include cross sections represented as a combination of resonance parameters and/or tabulated energy-dependent cross sections, nominally at 0 Kelvin. For use in applications, these ENDF/B-VII.0 data were processed into the form of temperature-dependent cross sections at eight temperatures: 0, 300, 600, 900, 1200, 1500, 1800 and 2100 Kelvin. The data were also processed to six astrophysics-like temperatures: 0.1, 1, 10, 100 eV, 1 and 10 keV. At each temperature the cross sections are tabulated and linearly interpolable in energy with a tolerance of 0.1%. POINT 2007 contains all of the evaluations in the ENDF/B-VII general purpose library, which contains 78 new evaluations + 315 old ones: a total of 393 nuclides. It also includes 16 new elemental evaluations replaced by isotopic evaluations + 19 old ones. No special-purpose ENDF/B-VII libraries, such as fission products, thermal scattering or photon interaction data, are included. These evaluations include all cross sections over the energy range 10^-5 eV to at least 20 MeV. The list of nuclides is indicated. B - Methods: The PREPRO 2007 code system was used to process the ENDF/B data. Listed below are the steps, including the PREPRO 2007 codes, which were used to process the data in the order in which the codes were run. 1) Linearly interpolable, tabulated cross sections (LINEAR) 2) Including the resonance contribution (RECENT) 3) Doppler broadening of all cross sections to temperature (SIGMA1) 4) Check data, define redundant cross sections by summation (FIXUP) 5) Update evaluation dictionary in MF/MT=1/451 (DICTIN) C - Restrictions: Due to recent changes in ENDF-6 Formats and Procedures, only the latest version of the ENDF/B Pre-processing codes, namely PREPRO 2007, can be used to accurately process all current ENDF/B-VII evaluations. The use of

12. Supersymmetric Janus solutions in four dimensions

International Nuclear Information System (INIS)

Bobev, Nikolay; Pilch, Krzysztof; Warner, Nicholas P.

2014-01-01

We use maximal gauged supergravity in four dimensions to construct the gravity dual of a class of supersymmetric conformal interfaces in the theory on the world-volume of multiple M2-branes. We study three classes of examples in which the (1+1)-dimensional defects preserve (4,4), (0,2) or (0,1) supersymmetry. Many of the solutions have the maximally supersymmetric AdS_4 vacuum dual to the N=8 ABJM theory on both sides of the interface. We also find new special classes of solutions, including one that interpolates between the maximally supersymmetric vacuum and a conformal fixed point with N=1 supersymmetry and G_2 global symmetry. We find another solution that interpolates between two distinct conformal fixed points with N=1 supersymmetry and G_2 global symmetry. In eleven dimensions, this G_2-to-G_2 solution corresponds to a domain wall across which a magnetic flux reverses orientation

13. THE PRECAUTIONARY PROCEDURES IN THE CASE OF NON-COMPLIANCE WITH THE BALLAST WATER MANAGEMENT CONVENTION’S STANDARDS – POSSIBLE SOLUTIONS FOR POLISH PORTS

Directory of Open Access Journals (Sweden)

Magdalena Klopott

2016-12-01

Full Text Available On September 8, 2017 the International Convention for the Control and Management of Ships’ Ballast Water and Sediments (BWMC), adopted in 2004, will enter into force. It imposes a number of requirements on shipowners and port states. The aim of this article is to elaborate on the possible solutions that may be adopted in Polish ports as precautionary measures in the case of non-compliance with the provisions of the BWMC. The article starts with a brief overview of the BWMC and ballast water quality standards. Further, it discusses the possible implications of not meeting the ballast water quality standards under the BWMC. The elaboration of potential solutions and mitigation measures in the event of non-compliance with the BWMC constitutes the main part of the article. These are crucial to developing a port contingency plan and include, for example, shore-based reception facilities for ballast water, mobile ballast water treatment systems, and the use of potable water. The article ends with a brief analysis of possible fee systems for the reception of ballast water. The research was based on a comprehensive analysis of the Convention and related legal documents, interviews with ports’ representatives as well as e-mail interviews with maritime authorities in the Baltic Sea countries.

14. Neutron fluence-to-dose equivalent conversion factors: a comparison of data sets and interpolation methods

International Nuclear Information System (INIS)

Sims, C.S.; Killough, G.G.

1983-01-01

Various segments of the health physics community advocate the use of different sets of neutron fluence-to-dose equivalent conversion factors as a function of energy and different methods of interpolation between discrete points in those data sets. The major data sets and interpolation methods are used to calculate the spectrum average fluence-to-dose equivalent conversion factors for five spectra associated with the various shielded conditions of the Health Physics Research Reactor. The results obtained by use of the different data sets and interpolation methods are compared and discussed. (author)
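The interpolation-method sensitivity discussed here is easy to demonstrate: linear-linear and log-log interpolation of the same table give different values between grid points. The numbers below are invented for illustration and are not actual fluence-to-dose conversion factors:

```python
import numpy as np

# Log-log versus linear-linear interpolation between tabulated points:
# the kind of choice compared in the paper for fluence-to-dose
# conversion factors. All table values here are hypothetical.
def loglog_interp(e, e_tab, f_tab):
    return np.exp(np.interp(np.log(e), np.log(e_tab), np.log(f_tab)))

e_tab = np.array([0.1, 1.0, 10.0])            # MeV, hypothetical grid
f_tab = np.array([2.0, 10.0, 40.0])           # hypothetical factors

f_lin = np.interp(0.5, e_tab, f_tab)          # linear-linear
f_log = loglog_interp(0.5, e_tab, f_tab)      # log-log
```

Both schemes reproduce the tabulated points exactly but disagree in between; averaged over a spectrum, such differences are what the comparison in the paper quantifies.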

15. A comparison of linear interpolation models for iterative CT reconstruction.

Science.gov (United States)

Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

2016-12-01

Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects
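The bilinear model examined in the paper reduces, at a single sample point, to ordinary bilinear interpolation of the image grid. A minimal sketch of that building block (a generic implementation, not the authors' projector code):

```python
import numpy as np

# Bilinear interpolation of a pixel grid: the value at a non-integer
# position is a weighted average of the four surrounding pixels. This is
# the elementary operation inside a "bilinear" forward projector.
def bilinear(img, x, y):
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0]
            + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0]
            + dx * dy * img[y0 + 1, x0 + 1])

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
v = bilinear(img, 0.5, 0.5)                   # centre of the 2x2 grid
```

A forward projector built on this samples the image along each ray and accumulates the interpolated values; distance-driven and Joseph's methods replace this sampling rule with different linear-interpolation footprints.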

16. Blend Shape Interpolation and FACS for Realistic Avatar

Science.gov (United States)

2015-03-01

The quest of developing realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools imparted further impetus towards the rapid advancement of complex virtual human facial model. Face-to-face communication being the most natural way of human interaction, the facial animation systems became more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors are still challenging issues. Proposed facial expression carries the signature of happiness, sadness, angry or cheerful, etc. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions being very complex as well as important nonverbal communication channel are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions such as angry, happy, sad and fear with high fidelity. The results in perceiving the realistic facial expression for virtual human emotions based on facial skin color and texture may contribute towards the development of virtual reality and game environment of computer aided graphics animation systems.
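In its usual linear form, blend shape interpolation adds weighted target-minus-neutral offsets to a neutral mesh; FACS then supplies physically meaningful weights for the action units. The toy mesh and weights below are invented for illustration:

```python
import numpy as np

# Blend shape interpolation (BSI) in its standard linear form: the
# animated face is the neutral mesh plus a weighted sum of target
# offsets. In a FACS-driven system the weights track facial action
# units; here they are simply set by hand.
def blend(neutral, targets, weights):
    out = neutral.copy()
    for w, t in zip(weights, targets):
        out += w * (t - neutral)               # offset from the neutral face
    return out

neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])   # toy 3-vertex "face"
smile   = np.array([[0.0, 0.2], [1.0, 0.2], [0.5, 1.0]])   # mouth corners up
frown   = np.array([[0.0, -0.2], [1.0, -0.2], [0.5, 1.0]])
face = blend(neutral, [smile, frown], [0.5, 0.0])          # a half smile
```

Because the combination is linear, intermediate weights interpolate smoothly between expressions, which is what makes the scheme attractive for real-time animation.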

17. Improvement of the impedance measurement reliability by some new experimental and data treatment procedures applied to the behavior of copper in neutral chloride solutions containing small heterocycle molecules

International Nuclear Information System (INIS)

Blajiev, O.L.; Breugelmans, T.; Pintelon, R.; Hubin, A.

2006-01-01

The electrochemical behavior of copper in chloride solutions containing 0.001 M concentrations of small five- and six-ring member heterocyclic molecules was investigated by means of impedance spectroscopy. The investigation was performed with a new technique based on a broadband multisine excitation. This method allows for a quantification and separation of the measurement and stochastic nonlinear noises and for an estimation of the nonlinear bias contribution. It also reduces the perturbation introduced into the studied system by the measurement process itself. The measurement data for some experimental conditions were quantified by fitting to an equivalent circuit corresponding to a physical model, both developed earlier. In general, the experimental results obtained show that the number of atoms in the heterocyclic ring and the molecular conformation have a significant influence on the electrochemical response of copper in the investigated environments

18. Interpolating from Bianchi attractors to Lifshitz and AdS spacetimes

International Nuclear Information System (INIS)

Kachru, Shamit; Kundu, Nilay; Saha, Arpan; Samanta, Rickmoy; Trivedi, Sandip P.

2014-01-01

We construct classes of smooth metrics which interpolate from Bianchi attractor geometries of Types II, III, VI and IX in the IR to Lifshitz or AdS_2 × S^3 geometries in the UV. While we do not obtain these metrics as solutions of Einstein gravity coupled to a simple matter field theory, we show that the matter sector stress-energy required to support these geometries (via the Einstein equations) does satisfy the weak, and therefore also the null, energy condition. Since Lifshitz or AdS_2 × S^3 geometries can in turn be connected to AdS_5 spacetime, our results show that there is no barrier, at least at the level of the energy conditions, for solutions to arise connecting these Bianchi attractor geometries to AdS_5 spacetime. The asymptotic AdS_5 spacetime has no non-normalizable metric deformation turned on, which suggests that furthermore, the Bianchi attractor geometries can be the IR geometries dual to field theories living in flat space, with the breaking of symmetries being either spontaneous or due to sources for other fields. Finally, we show that for a large class of flows which connect two Bianchi attractors, a C-function can be defined which is monotonically decreasing from the UV to the IR as long as the null energy condition is satisfied. However, except for special examples of Bianchi attractors (including AdS space), this function does not attain a finite and non-vanishing constant value at the end points

19. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

Directory of Open Access Journals (Sweden)

Huaiqing Zhang

2014-01-01

Full Text Available The spectral leakage has a harmful effect on the accuracy of harmonic analysis for asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. Firstly, a fundamental period is evaluated by a zero-crossing technique with fourth-order Newton’s interpolation, and then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters can be calculated by FFT on the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, the RBF is a global approximation method which can acquire more accurate results while its time consumption is about the same as Newton’s.
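The resampling step can be sketched as follows (Python/NumPy; a hedged illustration, not the paper's implementation — the signal, node placement, shape parameter `eps` and the small ridge term are all illustrative). A Gaussian-RBF interpolant is fitted to the asynchronous samples and evaluated on a synchronous grid covering one exact period, after which an FFT would yield leakage-free harmonic estimates:

```python
import numpy as np

def rbf_resample(t_obs, y_obs, t_new, eps):
    """Fit a global Gaussian-RBF interpolant to scattered samples and
    evaluate it at new time instants."""
    A = np.exp(-(eps * (t_obs[:, None] - t_obs[None, :])) ** 2)
    A += 1e-10 * np.eye(len(t_obs))       # tiny ridge for numerical conditioning
    w = np.linalg.solve(A, y_obs)
    B = np.exp(-(eps * (t_new[:, None] - t_obs[None, :])) ** 2)
    return B @ w

# A 50 Hz tone sampled at 40 irregular instants, resampled onto one exact
# period of 64 uniform points (the grid an FFT stage would consume).
n = np.arange(40)
t_obs = 0.02 * (n + 0.4 * np.cos(n)) / 40          # jittered, non-uniform times
y_obs = np.sin(2 * np.pi * 50.0 * t_obs)
t_sync = np.linspace(0.0, 0.02, 64, endpoint=False)
y_sync = rbf_resample(t_obs, y_obs, t_sync, eps=800.0)
```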

20. Evaluation of intense rainfall parameters interpolation methods for the Espírito Santo State

Directory of Open Access Journals (Sweden)

José Eduardo Macedo Pezzopane

2009-12-01

Full Text Available Intense rainfalls are often responsible for the occurrence of undesirable processes in agricultural and forest areas, such as surface runoff, soil erosion and flooding. The knowledge of intense rainfall spatial distribution is important to agricultural watershed management, soil conservation and to the design of hydraulic structures. The present paper evaluated methods of spatial interpolation of the intense rainfall parameters (“K”, “a”, “b” and “c”) for the Espírito Santo State, Brazil. Real intense rainfall rates were compared with those calculated from the interpolated intense rainfall parameters, considering different durations and return periods. Inverse distance to the 5th power (IPD5) was the spatial interpolation method with the best performance for spatially interpolating the intense rainfall parameters.
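For reference, inverse distance to the 5th power is ordinary inverse distance weighting with the exponent set to 5. A minimal sketch (Python/NumPy; station coordinates and parameter values are invented for illustration):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=5):
    """Inverse distance weighting: each prediction is a weighted mean of the
    observations, with weights 1/d**power (power=5 gives IPD5)."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    z = np.empty(len(xy_new))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():                      # prediction point coincides with a station
            z[i] = z_obs[hit][0]
        else:
            w = 1.0 / di ** power
            z[i] = np.sum(w * z_obs) / np.sum(w)
    return z

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
k_param  = np.array([800.0, 900.0, 850.0, 950.0])   # hypothetical "K" values
# The centre point is equidistant from all four stations, so the prediction
# reduces to the plain mean of the station values.
print(idw(stations, k_param, np.array([[0.5, 0.5]])))
```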

1. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

Science.gov (United States)

Ha, S.; Yun, J.; Kim, H. K.

2017-11-01

In relation to thin slab-object inspection, we propose a digital tomosynthesis reconstruction with a smaller number of measured projections combined with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through an object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in the interpolated projection are the weighted sum of pixel values of the measured projections, with weights determined by their projection angles. The experimental simulation shows that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
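The virtual-projection step amounts to pixel-wise linear interpolation between the two measured projections that bracket the virtual angle. A minimal sketch (Python/NumPy; array contents and angles are illustrative, not from the paper):

```python
import numpy as np

def virtual_projection(proj_a, proj_b, ang_a, ang_b, ang_v):
    """Synthesize a projection at angle ang_v as the angle-weighted sum of
    the two nearest measured projections (pixel-wise linear interpolation)."""
    w = (ang_v - ang_a) / (ang_b - ang_a)
    return (1.0 - w) * proj_a + w * proj_b

pa = np.array([[1.0, 2.0], [3.0, 4.0]])   # measured projection at -10 deg
pb = np.array([[3.0, 4.0], [5.0, 6.0]])   # measured projection at +10 deg
pv = virtual_projection(pa, pb, -10.0, 10.0, 0.0)   # virtual view at 0 deg
```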

2. Interpolation of the discrete logarithm in a finite field of characteristic two by Boolean functions

DEFF Research Database (Denmark)

Brandstaetter, Nina; Lange, Tanja; Winterhof, Arne

2005-01-01

We obtain bounds on degree, weight, and the maximal Fourier coefficient of Boolean functions interpolating the discrete logarithm in finite fields of characteristic two. These bounds complement earlier results for finite fields of odd characteristic....

3. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

National Research Council Canada - National Science Library

Ingel, R

1999-01-01

.... Projection operators are employed for the model reduction or condensation process. Interpolation is then introduced over a user defined frequency window, which can have real and imaginary boundaries and be quite large. Hermitian...

4. Interpolation Filter Design for Hearing-Aid Audio Class-D Output Stage Application

DEFF Research Database (Denmark)

Pracný, Peter; Bruun, Erik; Llimos Muntal, Pere

2012-01-01

This paper deals with the design of a digital interpolation filter for a 3rd order multi-bit ΣΔ modulator with over-sampling ratio OSR = 64. The interpolation filter and the ΣΔ modulator are part of the back-end of an audio signal processing system in a hearing-aid application. The aim in this paper...... is to compare this design to designs presented in other state-of-the-art works ranging from hi-fi audio to hearing-aids. By performing this comparison, trends and tradeoffs in interpolation filter design are identified and hearing-aid specifications are derived. The possibilities for hardware reduction...... in the interpolation filter are investigated. Proposed design simplifications presented here result in the least hardware demanding combination of oversampling ratio, number of stages and number of filter taps among a number of filters reported for audio applications....

5. Patch-based frame interpolation for old films via the guidance of motion paths

Science.gov (United States)

Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

2018-04-01

Due to improper preservation, traditional films often suffer frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation via the guidance of motion paths. Our method is divided into three steps. Firstly, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch match is used to interpolate the intermediate frames from the most similar patches. Since the patch match is based on the pre-intermediate frames that contain the motion path constraint, the resulting frame interpolation looks natural and unforced. We tested different types of old film sequences and compared with other methods; the results prove that our method achieves the desired performance without hole or ghost effects.
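The idea of pushing pixels along motion paths to form a pre-intermediate frame can be sketched as follows (Python/NumPy; a simplified stand-in that takes an integer flow field as given, whereas the paper estimates it by optical flow and refines the result with patch match — the frames and flow here are invented):

```python
import numpy as np

def motion_compensated_frame(prev, nxt, flow, t=0.5):
    """Pre-intermediate frame: each pixel of the previous frame is pushed a
    fraction t along its motion vector; holes left by the forward warp are
    filled with the time-weighted blend of the two reference frames."""
    h, w = prev.shape
    out = np.full((h, w), np.nan)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ny, nx_ = y + int(round(t * dy)), x + int(round(t * dx))
            if 0 <= ny < h and 0 <= nx_ < w and np.isnan(out[ny, nx_]):
                out[ny, nx_] = prev[y, x]
    holes = np.isnan(out)
    out[holes] = (1 - t) * prev[holes] + t * nxt[holes]
    return out

# A bright pixel moving 2 columns right between frames lands 1 column right
# in the middle frame.
prev = np.zeros((5, 5)); prev[2, 1] = 1.0
nxt  = np.zeros((5, 5)); nxt[2, 3] = 1.0
flow = np.zeros((5, 5, 2)); flow[2, 1] = (0, 2)   # (dy, dx) at the bright pixel
mid = motion_compensated_frame(prev, nxt, flow)
```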

6. NOAA Optimum Interpolation 1/4 Degree Daily Sea Surface Temperature (OISST) Analysis, Version 2

Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — This high-resolution sea surface temperature (SST) analysis product was developed using an optimum interpolation (OI) technique. The SST analysis has a spatial grid...

7. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

Energy Technology Data Exchange (ETDEWEB)

Miranda-Quintana, Ramón Alain [Laboratory of Computational and Theoretical Chemistry, Faculty of Chemistry, University of Havana, Havana (Cuba); Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)

2016-06-28

In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.

8. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

KAUST Repository

Murarasu, Alin

2012-12-01

The well-known power wall resulting in multi-cores requires special techniques for speeding up applications. In this sense, parallelization plays a crucial role. Besides standard serial optimizations, techniques such as input specialization can also bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed for example in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated in fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.

9. Gulf of Maine - Control Points Used to Validate the Accuracies of the Interpolated Water Density Rasters

Data.gov (United States)

National Oceanic and Atmospheric Administration, Department of Commerce — This feature dataset contains the control points used to validate the accuracies of the interpolated water density rasters for the Gulf of Maine. These control...

10. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

Science.gov (United States)

Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

2013-01-01

Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, in which a covariate such as elevation (Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key question that little research has addressed is which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular interpolation techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas but climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra

11. Validation of China-wide interpolated daily climate variables from 1960 to 2011

Science.gov (United States)

Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

2015-02-01

Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R^2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R^2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolated data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration based

12. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management.

Science.gov (United States)

Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex

2014-08-01

A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli
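The local IDW with inclusion zones can be sketched as follows (Python/NumPy; a circular-zone illustration with invented well data — the paper also considers elliptical zones). Grid points whose zone contains no observations are deliberately left unassigned:

```python
import numpy as np

def idw_inclusion(xy_obs, z_obs, xy_new, radius, power=2):
    """Local IDW: only observations inside a circular inclusion zone of the
    given radius contribute; grid points with no neighbours stay NaN."""
    out = np.full(len(xy_new), np.nan)
    for i, p in enumerate(xy_new):
        d = np.linalg.norm(xy_obs - p, axis=1)
        inside = d < radius
        if not inside.any():
            continue                      # uncovered grid point keeps NaN
        d_in = np.maximum(d[inside], 1e-12)
        w = 1.0 / d_in ** power
        out[i] = np.sum(w * z_obs[inside]) / np.sum(w)
    return out

wells = np.array([[0.0, 0.0], [1.0, 0.0]])
conc  = np.array([0.0, 100.0])            # hypothetical concentrations
grid  = np.array([[0.5, 0.0], [5.0, 5.0]])
print(idw_inclusion(wells, conc, grid, radius=1.0))
```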

13. Comparison of two interpolative background subtraction methods using phantom and clinical data

International Nuclear Information System (INIS)

Houston, A.S.; Sampson, W.F.D.

1989-01-01

Two interpolative background subtraction methods used in scintigraphy are tested using both phantom and clinical data. Cauchy integral subtraction was found to be relatively free of artefacts but required more computing time than bilinear interpolation. Both methods may be used with reasonable confidence for the quantification of relative measurements such as left ventricular ejection fraction and myocardial perfusion index but should be avoided if at all possible in the quantification of absolute measurements such as glomerular filtration rate. (author)
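Interpolative background subtraction in its simplest form estimates the background inside a region of interest by bilinear interpolation from counts sampled on the region's border, then subtracts that surface. A minimal sketch (Python/NumPy; here only the four corner counts are used, and all numbers are invented — the paper's Cauchy integral variant is not shown):

```python
import numpy as np

def bilinear_background(tl, tr, bl, br, shape):
    """Bilinear background surface spanned by the four corner counts
    (top-left, top-right, bottom-left, bottom-right) of a rectangular ROI."""
    h, w = shape
    fy = np.linspace(0.0, 1.0, h)[:, None]
    fx = np.linspace(0.0, 1.0, w)[None, :]
    return ((1 - fy) * (1 - fx) * tl + (1 - fy) * fx * tr
            + fy * (1 - fx) * bl + fy * fx * br)

roi = np.full((5, 5), 100.0)                  # hypothetical ROI counts
bg = bilinear_background(40.0, 60.0, 60.0, 80.0, roi.shape)
net = roi - bg                                # background-subtracted counts
```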

14. Optimum quantization and interpolation of projections in X-ray computerized tomography

International Nuclear Information System (INIS)

Vajnberg, Eh.I.; Fajngojz, M.L.

1984-01-01

Two methods to increase the accuracy of image reconstruction through optimized quantization and interpolation of projections, with separate reduction of the main types of errors, are described and experimentally studied. Increasing the count frequency in the reconstructed tomogram by a factor of 2-4 is found to be highly efficient, both metrologically and computationally. The optimum structure of interpolation functions of minimum extent is calculated

15. Pricing and simulation for real estate index options: Radial basis point interpolation

Science.gov (United States)

Gong, Pu; Zou, Dong; Wang, Jiayue

2018-06-01

This study employs the meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on real estate index. This method combines radial and polynomial basis functions, which can guarantee the interpolation scheme with Kronecker property and effectively improve accuracy. An exponential change of variables, a mesh refinement algorithm and the Richardson extrapolation are employed in this study to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of our method.

16. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management

Science.gov (United States)

Yuval; Rimon, Y.; Graber, E. R.; Furman, A.

2013-07-01

A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineate the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. Leave-one-out cross-testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli

17. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

Science.gov (United States)

Reinhardt, Katja; Samimi, Cyrus

2018-01-01

While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still shows large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the data base indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. Thereby, a special focus is on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists, which can equally be applied for all pressure levels, or whether different interpolation methods have to be used for the different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and

18. Improvements in Off Design Aeroengine Performance Prediction Using Analytic Compressor Map Interpolation

Science.gov (United States)

Mist'e, Gianluigi Alberto; Benini, Ernesto

2012-06-01

Compressor map interpolation is usually performed through the introduction of auxiliary coordinates (β). In this paper, a new analytical bivariate β function definition to be used in compressor map interpolation is studied. The function has user-defined parameters that must be adjusted to properly fit a single map. The analytical nature of β allows for rapid calculation of the interpolation error estimate, which can be used as a quantitative measure of interpolation accuracy and also as a valid tool to compare traditional β function interpolation with new approaches (artificial neural networks, genetic algorithms, etc.). The quality of the method is analyzed by comparing the error output to that of a well-known state-of-the-art methodology. This comparison is carried out for two different types of compressor and, in both cases, the error output using the method presented in this paper is found to be consistently lower. Moreover, an optimization routine able to locally minimize the interpolation error by shape variation of the β function is implemented. Further optimization introducing other important criteria is discussed.

19. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

Science.gov (United States)

Ren, Maodong; Liang, Jin; Wei, Bin

2016-12-01

An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given that each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
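The quoted law — positional error as a function of fractional position and wave number — is easy to observe with the simplest filter, linear interpolation (Python/NumPy; an illustration of the phenomenon only, not the paper's B-spline recursive filter):

```python
import numpy as np

def linear_shift(signal, frac):
    """Shift a 1-D signal by a fractional sample using linear interpolation."""
    return (1.0 - frac) * signal[:-1] + frac * signal[1:]

def bias(k, frac=0.5):
    """Peak interpolation error for a sinusoid of normalized wave number k
    shifted by the fractional position `frac`."""
    x = np.arange(256)
    est = linear_shift(np.sin(2 * np.pi * k * x), frac)
    true = np.sin(2 * np.pi * k * (x[:-1] + frac))
    return np.max(np.abs(est - true))

# The error depends on the fractional position and grows with wave number.
print(bias(0.05), bias(0.2))
```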

20. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

Directory of Open Access Journals (Sweden)

Min Deng

2016-02-01

Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods—e.g., spatio-temporal kriging, spatio-temporal inverse distance weighting, and point estimation model of biased hospitals-based area disease estimation methods.

1. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

Science.gov (United States)

Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

2017-10-12

In order to improve the accuracy of ultrasonic phased array focusing time delay, and starting from an analysis of the original interpolation Cascade-Integrator-Comb (CIC) filter, an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Owing to its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
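The structure of a CIC interpolator is compact enough to sketch directly (Python/NumPy; a plain, non-parallel reference model with differential delay 1 — the paper's contributions, the parallel decomposition and the compensation filter, are not shown):

```python
import numpy as np

def cic_interpolate(x, R=8, N=3):
    """N-stage CIC interpolator: N comb (differencing) stages at the low
    rate, a zero-stuffing upsampler by R, then N integrator stages at the
    high rate; dividing by R**(N-1) normalizes the DC gain to one."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                        # comb stages (delay M = 1)
        y = np.concatenate(([y[0]], np.diff(y)))
    up = np.zeros(len(y) * R)
    up[::R] = y                               # zero stuffing to the high rate
    for _ in range(N):                        # integrator stages
        up = np.cumsum(up)
    return up / R ** (N - 1)

# A constant input passes through with unit gain once the start-up
# transient (roughly N*R output samples) has died out.
out = cic_interpolate(np.ones(6), R=8, N=3)
```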

2. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

Energy Technology Data Exchange (ETDEWEB)

Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

2006-01-01

Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one- and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.

3. Comparison of spatial interpolation techniques to predict soil properties in the colombian piedmont eastern plains

Directory of Open Access Journals (Sweden)

Mauricio Castro Franco

2017-07-01

Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and to the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information on these effects, soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin Hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indexes calculated from a digital elevation model (DEM). The "Random forest" algorithm was used for selecting the most important terrain index for each soil property. Error metrics based on cross validation were used to validate the interpolations. Results: The results support the underlying assumption that the conditioned Latin Hypercube adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in predicting most of the evaluated soil properties. Conclusions: Mixed interpolation techniques incorporating auxiliary soil information and terrain indexes provided a significant improvement in the prediction of soil properties in comparison with the other techniques.

4. Interpolated sagittal and coronal reconstruction of CT images in the screening of neck abnormalities

International Nuclear Information System (INIS)

Koga, Issei

1983-01-01

Reconstructed sagittal and coronal images were analyzed for their usefulness in clinical applications and to determine the correct use of reconstruction techniques. Reconstructed stereoscopic images can be formed by continuous or interrupted image reconstruction using interpolation. This study showed that scans of lesions less than 10 mm in diameter should be made continuously and reconstructed with the uninterrupted technique. However, 5 mm interrupted distances are acceptable for interpolated reconstruction except in cases of lesions less than 10 mm in diameter. Clinically, interpolated reconstruction is not adequate for semicircular lesions less than 10 mm. Blood vessels and linear lesions are good candidates for the application of interpolated reconstruction. Reconstruction of images using interrupted interpolation is therefore recommended for screening and for demonstrating correct stereoscopic information, except in cases of small lesions less than 10 mm in diameter. Results of this study underscore the fact that obscure information in transverse CT images should be routinely examined using interpolated reconstruction techniques if transverse images are not made continuously. Interpolated reconstruction may be helpful in obtaining stereoscopic information. (author)

5. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

Directory of Open Access Journals (Sweden)

Peilu Liu

2017-10-01

Full Text Available In order to improve the accuracy of ultrasonic phased array focusing time delay, the original interpolation Cascade-Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the parallel algorithm for arbitrary-multiple interpolation CIC filters and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the speed of computation remains high. Considering the existing problems of the CIC filter, we compensated it; the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is larger. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
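The cascaded comb / zero-stuffing / integrator structure behind such a CIC interpolator can be sketched in a few lines. This is a plain serial reference form with illustrative parameter names, not the authors' 8× parallel FPGA decomposition:

```python
import numpy as np

def cic_interpolate(x, R=8, N=3, M=1):
    """R-fold CIC interpolation: N comb stages at the input rate,
    a zero-stuffing upsampler, then N integrator stages at the output rate."""
    y = np.asarray(x, dtype=np.int64)
    for _ in range(N):  # comb stages: y[n] - y[n-M]
        y = y - np.concatenate((np.zeros(M, dtype=np.int64), y[:-M]))
    up = np.zeros(len(y) * R, dtype=np.int64)
    up[::R] = y         # insert R-1 zeros between samples
    for _ in range(N):  # integrator stages: running sums
        up = np.cumsum(up)
    return up / ((R * M) ** N / R)  # normalize the DC gain (RM)^N / R
```

A quick sanity check on the gain normalization: feeding a constant input, the output settles to the same constant once the filter's transient has passed.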

6. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

Directory of Open Access Journals (Sweden)

Daniel Arana

Full Text Available Abstract: The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys, and it therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids, from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of the geoid undulations and consequently of the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of geoid models. The analysis consisted of a comparison between the interpolation of the MAPGEO2015 program and three interpolation methods: bilinear, cubic spline and Radial Basis Function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolation in the 5'x5' grid.
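On a regular grid like the ones these geoid models are supplied on, bilinear interpolation reduces to a weighted average of the four surrounding nodes. A minimal sketch (the grid-origin and spacing names are illustrative, not MAPGEO2015's interface):

```python
def bilinear(grid, lat, lon, lat0=0.0, lon0=0.0, d=1.0):
    """Bilinear interpolation on grid[i][j] = value at (lat0 + i*d, lon0 + j*d).
    The query point must fall inside the grid."""
    i = int((lat - lat0) // d)
    j = int((lon - lon0) // d)
    t = (lat - lat0) / d - i  # fractional position inside the cell, 0..1
    u = (lon - lon0) / d - j
    return ((1 - t) * (1 - u) * grid[i][j] + (1 - t) * u * grid[i][j + 1]
            + t * (1 - u) * grid[i + 1][j] + t * u * grid[i + 1][j + 1])
```

By construction the scheme reproduces any plane exactly, which is a convenient correctness check.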

7. Analysis of time resolution in a dual head LSO+PSPMT PET system using low pass filter interpolation and digital constant fraction discriminator techniques

International Nuclear Information System (INIS)

Monzo, Jose M.; Lerche, Christoph W.; Martinez, Jorge D.; Esteve, Raul; Toledo, Jose; Gadea, Rafael; Colom, Ricardo J.; Herrero, Vicente; Ferrando, Nestor; Aliaga, Ramon J.; Mateo, Fernando; Sanchez, Filomeno; Mora, Francisco J.; Benlloch, Jose M.; Sebastia, Angel

2009-01-01

PET systems need good time resolution to improve the true event rate, random event rejection, and pile-up rejection. In this study we propose a digital procedure for this task using a low pass filter interpolation plus a Digital Constant Fraction Discriminator (DCFD). We analyzed the best way to implement this algorithm on our dual head PET system and how varying the quality of the acquired signal and electronic noise analytically affects timing resolution. Our detector uses two continuous LSO crystals with a position sensitive PMT. Six signals per detector are acquired using an analog electronics front-end and these signals are processed using an in-house digital acquisition board. The test bench developed simulates the electronics and digital algorithms using Matlab. Results show that electronic noise and other undesired effects have a significant effect on the timing resolution of the system. Interpolated DCFD gives better results than non-interpolated DCFD. In high noise environments, differences are reduced. An optimum delay selection, based on the environment noise, improves time resolution.
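The core of a digital constant fraction discriminator of the kind described is locating the zero crossing of a delayed-minus-attenuated copy of the pulse, then refining it between samples by interpolation. A minimal numpy sketch with linear refinement (the `frac` and `delay` values are illustrative, not the paper's settings):

```python
import numpy as np

def dcfd_timestamp(x, frac=0.5, delay=3):
    """Sub-sample arrival time from the zero crossing of the bipolar
    CFD waveform y[n] = x[n-delay] - frac*x[n]."""
    x = np.asarray(x, dtype=float)
    y = np.zeros(len(x))
    y[delay:] = x[:-delay]
    y -= frac * x
    k = int(np.argmax(x))  # search from the pulse peak onward
    while k + 1 < len(y) and not (y[k] <= 0.0 < y[k + 1]):
        k += 1
    # linear interpolation between the two samples bracketing the crossing
    return k + y[k] / (y[k] - y[k + 1])
```

For a symmetric triangular test pulse the crossing lands at a predictable fractional sample, which makes the routine easy to unit-test.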

8. Analysis of time resolution in a dual head LSO+PSPMT PET system using low pass filter interpolation and digital constant fraction discriminator techniques

Energy Technology Data Exchange (ETDEWEB)

Monzo, Jose M. [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)], E-mail: jmonfer@aaa.upv.es; Lerche, Christoph W.; Martinez, Jorge D.; Esteve, Raul; Toledo, Jose; Gadea, Rafael; Colom, Ricardo J.; Herrero, Vicente; Ferrando, Nestor; Aliaga, Ramon J.; Mateo, Fernando [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Sanchez, Filomeno [Nuclear Medical Physics Group, IFIC Institute, Consejo Superior de Investigaciones Cientificas (CSIC), 46980 Paterna (Spain); Mora, Francisco J. [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Benlloch, Jose M. [Nuclear Medical Physics Group, IFIC Institute, Consejo Superior de Investigaciones Cientificas (CSIC), 46980 Paterna (Spain); Sebastia, Angel [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)

2009-06-01

PET systems need good time resolution to improve the true event rate, random event rejection, and pile-up rejection. In this study we propose a digital procedure for this task using a low pass filter interpolation plus a Digital Constant Fraction Discriminator (DCFD). We analyzed the best way to implement this algorithm on our dual head PET system and how varying the quality of the acquired signal and electronic noise analytically affects timing resolution. Our detector uses two continuous LSO crystals with a position sensitive PMT. Six signals per detector are acquired using an analog electronics front-end and these signals are processed using an in-house digital acquisition board. The test bench developed simulates the electronics and digital algorithms using Matlab. Results show that electronic noise and other undesired effects have a significant effect on the timing resolution of the system. Interpolated DCFD gives better results than non-interpolated DCFD. In high noise environments, differences are reduced. An optimum delay selection, based on the environment noise, improves time resolution.

9. Monopole Solutions in Topologically Massive Gauge Theory

International Nuclear Information System (INIS)

Teh, Rosy; Wong, Khai-Ming; Koh, Pin-Wai

2010-01-01

Monopoles in topologically massive SU(2) Yang-Mills-Higgs gauge theory in 2+1 dimensions with a Chern-Simons mass term were studied by Pisarski some years ago. He argued that there is a monopole solution that is regular everywhere, but found that it does not possess finite action. No exact or numerical solutions were presented by him. Hence it is our purpose to investigate this solution in more detail. We obtained numerical regular solutions that smoothly interpolate between the behavior at small and large distances for different values of the Chern-Simons term strength and for several fixed values of the Higgs field strength.

10. Creating high-resolution digital elevation model using thin plate spline interpolation and Monte Carlo simulation

International Nuclear Information System (INIS)

Pohjola, J.; Turunen, J.; Lipping, T.

2009-07-01

In this report creation of the digital elevation model of Olkiluoto area incorporating a large area of seabed is described. The modeled area covers 960 square kilometers and the apparent resolution of the created elevation model was specified to be 2.5 x 2.5 meters. Various elevation data like contour lines and irregular elevation measurements were used as source data in the process. The precision and reliability of the available source data varied largely. A digital elevation model (DEM) comprises a representation of the elevation of the surface of the earth in a particular area in digital format. The DEM is an essential component of geographic information systems designed for the analysis and visualization of location-related data. A DEM is most often represented either in raster or Triangulated Irregular Network (TIN) format. After testing several methods the thin plate spline interpolation was found to be best suited for the creation of the elevation model. The thin plate spline method gave the smallest error in the test where a certain number of points was removed from the data, and the resulting model looked most natural. In addition to the elevation data the confidence interval at each point of the new model was required. The Monte Carlo simulation method was selected for this purpose. The source data points were assigned probability distributions according to what was known about their measurement procedure, and from these distributions 1 000 (20 000 in the first version) values were drawn for each data point. Each point of the newly created DEM thus had as many realizations. The resulting high resolution DEM will be used in modeling the effects of land uplift and evolution of the landscape in the time range of 10 000 years from the present. This time range comes from the requirements set for the spent nuclear fuel repository site. (orig.)
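A thin plate spline surface of the type chosen in the report is obtained by solving a small linear system for radial kernel weights plus an affine part. A minimal 2D sketch, unrelated to the report's actual implementation:

```python
import numpy as np

def tps_fit(pts, vals):
    """Fit f(p) = sum_i w_i U(|p - p_i|) + a + b*x + c*y through the data
    exactly, with the thin plate spline kernel U(r) = r^2 log r."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    K = np.zeros((n, n))
    mask = d > 0
    K[mask] = d[mask] ** 2 * np.log(d[mask])
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    return np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))

def tps_eval(pts, coef, q):
    """Evaluate the fitted spline at query point q."""
    d = np.linalg.norm(pts - q, axis=1)
    U = np.where(d > 0, d ** 2 * np.log(np.where(d > 0, d, 1.0)), 0.0)
    return U @ coef[:len(pts)] + coef[len(pts)] + q @ coef[len(pts) + 1:]
```

The interpolation is exact at the data points, which is what makes leave-out testing of the kind described in the report meaningful.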

11. The Flipped Classroom in Emergency Medicine Using Online Videos with Interpolated Questions.

Science.gov (United States)

Rose, Emily; Claudius, Ilene; Tabatabai, Ramin; Kearl, Liza; Behar, Solomon; Jhun, Paul

2016-09-01

Utilizing the flipped classroom is an opportunity for a more engaged classroom session. This educational approach is theorized to improve learner engagement and retention and allows for more complex learning during class. No studies to date have been conducted in the postgraduate medical education setting investigating the effects of interactive, interpolated questions in preclassroom online video material. We created a flipped classroom for core pediatric emergency medicine (PEM) topics using recorded online video lectures for preclassroom material and interactive simulations for the in-classroom session. Lectures were filmed and edited to include integrated questions on an online platform called Zaption. One-half of the residents viewed the lectures uninterrupted (Group A) and the remainder (Group B) viewed with integrated questions (2-6 per 5-15-min segment). Residents were expected to view the lectures prior to in-class time (total viewing time of approximately 2½ h). The 2½-h in-class session included four simulation and three procedure stations, with six PEM faculty available for higher-level management discussion throughout the stations. Total educational time of home preparation and in-class time was approximately 5 h. Residents performed better on the posttest as compared to the pretest, and their satisfaction was high with this educational innovation. In 2014, performance on the posttest between the two groups was similar. However, in 2015, the group with integrated questions performed better on the posttest. An online format combined with face-to-face interaction is an effective educational model for teaching core PEM topics. Copyright © 2016 Elsevier Inc. All rights reserved.

12. Space-Mapping-Based Interpolation for Engineering Optimization

DEFF Research Database (Denmark)

Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

2006-01-01

of the fine model at off-grid points and, as a result, increases the effective resolution of the design variable domain search and improves the quality of the fine model solution found by the SM optimization algorithm. The proposed method requires little computational effort; in particular no additional...

13. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

Science.gov (United States)

2013-09-01

One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating a Digital Terrain Model (DTM). The DTM has numerous applications in science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several methods for interpolation, which yield different results depending on the environmental conditions and input data. The interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for the creation of elevations. The results showed that AI methods have high potential for the interpolation of elevations. Using neural network algorithms for the interpolation, and optimising the IDW method with GA, elevations could be estimated with high precision.

14. Performance of an Interpolated Stochastic Weather Generator in Czechia and Nebraska

Science.gov (United States)

Dubrovsky, M.; Trnka, M.; Hayes, M. J.; Svoboda, M. D.; Semeradova, D.; Metelka, L.; Hlavinka, P.

2008-12-01

Met&Roll is a WGEN-like parametric four-variate daily weather generator (WG), with an optional extension allowing the user to generate additional variables (i.e. wind and water vapor pressure). It is designed to produce synthetic weather series representing present and/or future climate conditions to be used as an input into various models (e.g. crop growth and rainfall runoff models). The present contribution will summarize recent experiments, in which we tested the performance of the interpolated WG, with the aim to examine whether the WG may be used to produce synthetic weather series even for sites having no meteorological observations. The experiments being discussed include: (1) the comparison of various interpolation methods where the performance of the candidate methods is compared in terms of the accuracy of the interpolation for selected WG parameters; (2) assessing the ability of the interpolated WG in the territories of Czechia and Nebraska to reproduce extreme temperature and precipitation characteristics; (3) indirect validation of the interpolated WG in terms of the modeled crop yields simulated by STICS crop growth model (in Czechia); and (4) indirect validation of interpolated WG in terms of soil climate regime characteristics simulated by the SoilClim model (Czechia and Nebraska). The experiments are based on observed daily weather series from two regions: Czechia (area = 78864 km2, 125 stations available) and Nebraska (area = 200520 km2, 28 stations available). Even though Nebraska exhibits a much lower density of stations, this is offset by the state's relatively flat topography, which is an advantage in using the interpolated WG. Acknowledgements: The present study is supported by the AMVIS-KONTAKT project (ME 844) and the GAAV Grant Agency (project IAA300420806).

15. Improvement of image quality using interpolated projection data estimation method in SPECT

International Nuclear Information System (INIS)

Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Kojima, Akihiro; Asao, Kimie; Kamada, Shinya; Matsumoto, Masanori

2009-01-01

General data acquisition for single photon emission computed tomography (SPECT) is performed in 90 or 60 directions, with a coarse pitch of approximately 4-6 deg for a rotation of 360 deg or 180 deg, using a gamma camera. No data between adjacent projections are sampled under these circumstances. The aim of the study was to develop a method to improve SPECT image quality by generating the lacking projection data through interpolation of data obtained with a coarse pitch such as 6 deg. The projection data set at each individual degree in 360 directions was generated by a weighted average interpolation method from the projection data acquired with a coarse sampling angle (interpolated projection data estimation processing method, IPDE method). The IPDE method was applied to numerical digital phantom data, actual phantom data and clinical brain data with Tc-99m ethyl cysteinate dimer (ECD). All SPECT images were reconstructed by the filtered back-projection method and compared with the original SPECT images. The results confirmed that streak artifacts decreased by apparently increasing the sampling number in SPECT after interpolation, and the signal-to-noise (S/N) ratio in terms of the root mean square uncertainty value also improved. Furthermore, the normalized mean square error values, compared with the standard images, remained similar after interpolation. Moreover, the contrast and concentration ratios improved after interpolation. These results indicate that effective improvement of image quality can be expected with interpolation. Thus, image quality and the ability to depict images can be improved while maintaining the present acquisition time. In addition, image quality better than at present can be achieved even if the acquisition time is reduced. (author)
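The idea of filling in unmeasured projection angles from the two acquired neighbours can be sketched roughly as follows. This uses a plain linear angular weighting over a full 360-degree rotation, which only approximates the authors' weighted-average IPDE scheme:

```python
import numpy as np

def interpolate_projections(proj, step=6):
    """proj: array (n_angles, n_bins) acquired every `step` degrees over 360 deg.
    Returns projections at every degree by linear angular weighting."""
    n = len(proj)
    out = []
    for a in range(n):
        nxt = proj[(a + 1) % n]  # wrap around the full rotation
        for d in range(step):
            w = d / step
            out.append((1 - w) * proj[a] + w * nxt)
    return np.array(out)
```

Acquired angles are returned unchanged; only the in-between angles are synthesized, which is what lets the reconstruction behave as if the sampling number had been increased.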

16. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

Science.gov (United States)

Sidek, Khairul Azami; Khalil, Ibrahim

2013-01-01

Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency have been proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved a higher matching percentage, of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolated ECG data, supports the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
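Upsampling a low-rate recording with a piecewise cubic Hermite scheme can be illustrated with a basic resampler that estimates slopes by central differences. Note this is only a simplified stand-in: real PCHIP uses shape-preserving slope limiting, which is not reproduced here:

```python
import numpy as np

def hermite_upsample(x, factor):
    """Resample uniformly sampled x to `factor` times the rate using
    piecewise cubic Hermite basis functions."""
    x = np.asarray(x, dtype=float)
    m = np.gradient(x)  # slope estimates at the original samples
    t = np.linspace(0, len(x) - 1, (len(x) - 1) * factor + 1)
    i = np.minimum(t.astype(int), len(x) - 2)
    s = t - i           # fractional position inside each interval
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * x[i] + h10 * m[i] + h01 * x[i + 1] + h11 * m[i + 1]
```

A linear input is reproduced exactly, since the Hermite basis with correct slopes interpolates polynomials up to cubic degree.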

17. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

Science.gov (United States)

Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

2016-06-01

This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter three isoelectric baseline points per heartbeat are detected, and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
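The basic baseline construction, a piecewise linear curve through detected isoelectric points, can be approximated with `numpy.interp` (the Letter's segmented per-interval equations are not reproduced here, and the anchor indices are assumed given):

```python
import numpy as np

def remove_baseline(ecg, anchors):
    """Subtract a piecewise-linear baseline passing through the
    isoelectric anchor samples given by the index array `anchors`."""
    t = np.arange(len(ecg))
    baseline = np.interp(t, anchors, ecg[anchors])
    return ecg - baseline
```

If the signal is pure linear drift and the anchors sit on it, the residual after subtraction is zero, which is the intended behaviour of baseline wander removal.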

18. Stepwise effects of the BCR sequential chemical extraction procedure on dissolution and metal release from common ferromagnesian clay minerals: A combined solution chemistry and X-ray powder diffraction study

Energy Technology Data Exchange (ETDEWEB)

Ryan, P.C. [Geology Department, Middlebury College, Middlebury, Vermont 05753 (United States)], E-mail: pryan@middlebury.edu; Hillier, S. [Macaulay Institute, Aberdeen, AB15 8QH UK (United Kingdom); Wall, A.J. [Department of Geosciences, Penn State University, University Park, Pennsylvania, 16802 (United States)

2008-12-15

Sequential extraction procedures (SEPs) are commonly used to determine speciation of trace metals in soils and sediments. However, the non-selectivity of reagents for targeted phases has remained a lingering concern. Furthermore, potentially reactive phases such as phyllosilicate clay minerals often contain trace metals in structural sites, and their reactivity has not been quantified. Accordingly, the objective of this study is to analyze the behavior of trace metal-bearing clay minerals exposed to the revised BCR 3-step plus aqua regia SEP. Mineral quantification based on stoichiometric analysis and quantitative powder X-ray diffraction (XRD) documents progressive dissolution of chlorite (CCa-2 ripidolite) and two varieties of smectite (SapCa-2 saponite and SWa-1 nontronite) during steps 1-3 of the BCR procedure. In total, 8 (±1)% of ripidolite, 19 (±1)% of saponite, and 19 (±3)% of nontronite (% mineral mass) dissolved during extractions assumed by many researchers to release trace metals from exchange sites, carbonates, hydroxides, sulfides and organic matter. For all three reference clays, release of Ni into solution is correlated with clay dissolution. Hydrolysis of relatively weak Mg-O bonds (362 kJ/mol) during all stages, reduction of Fe(III) during hydroxylamine hydrochloride extraction and oxidation of Fe(II) during hydrogen peroxide extraction are the main reasons for clay mineral dissolution. These findings underscore the need for precise mineral quantification when using SEPs to understand the origin/partitioning of trace metals with solid phases.

19. Solution of the linearly anisotropic neutron transport problem in an infinite cylinder combining the decomposition and HTSN methods

International Nuclear Information System (INIS)

Goncalves, Glenio A.; Bodmann, Bardo; Bogado, Sergio; Vilhena, Marco T.

2008-01-01

Analytical solutions for neutron transport in cylindrical geometry are available for isotropic problems but, to the best of our knowledge, not yet for anisotropic problems. In this work, an analytical solution for the neutron transport equation in an infinite cylinder assuming anisotropic scattering is reported. Here we specialize the solution, without loss of generality, to the linearly anisotropic problem using the combined decomposition and HTSN methods. The key feature of this method consists in the application of the decomposition method to the anisotropic problem by virtue of the fact that the inverse of the operator associated with the isotropic problem is well known and determined by the HTSN approach. Following the idea of the decomposition method, we apply this operator to the integral term, assuming that the angular flux appearing in the integrand is equal to the HTSN solution interpolated by a polynomial with only even powers. This leads to the first approximation of the anisotropic solution. Proceeding further, we substitute this solution for the angular flux in the integral and apply again the inverse operator of the isotropic problem to the integral term, obtaining a new approximation for the angular flux. This iterative procedure yields a closed-form solution for the angular flux. The methodology can be generalized, in a straightforward manner, to transport problems with any degree of anisotropy. For the sake of illustration, we report numerical simulations of linearly anisotropic transport problems. (author)

20. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

Science.gov (United States)

Liu, Derek; Sloboda, Ron S

2014-05-01

Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here an interpolation method is described enabling unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as Boyer's method. A FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
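The fractional part of a sample shift performed with a piecewise third-order Lagrange filter, as described above, amounts to a 4-tap fractional-delay FIR. A one-dimensional sketch (the tap count matches third order; the paper's seed-kernel details are not shown):

```python
import numpy as np

def lagrange_frac_shift(x, frac):
    """Delay x by `frac` samples (0 <= frac < 1) using a 4-tap
    third-order Lagrange fractional-delay FIR."""
    d = frac + 1.0  # center the total delay within the 4-tap window
    h = np.ones(4)
    for k in range(4):
        for m in range(4):
            if m != k:
                h[k] *= (d - m) / (k - m)  # Lagrange coefficient formula
    y = np.convolve(x, h)  # full convolution, length len(x) + 3
    return y[1:1 + len(x)]  # discard the integer part of the delay
```

Because a third-order Lagrange filter reproduces polynomials up to cubic degree, shifting a linear ramp gives the exactly shifted ramp away from the boundaries.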

1. Comparison of Spatial Interpolation Schemes for Rainfall Data and Application in Hydrological Modeling

Directory of Open Access Journals (Sweden)

Tao Chen

2017-05-01

Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with the inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, the hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of the rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
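The IDW baseline being compared is simple enough to state directly. A minimal 2D sketch with a generic power parameter (the paper's exact settings are not reproduced):

```python
import math

def idw(points, values, query, power=2.0):
    """Inverse distance weighted estimate at `query` from scattered
    (x, y) `points` with observed `values`."""
    num = den = 0.0
    for (x, y), v in zip(points, values):
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return v  # exact hit on a data point
        w = d ** (-power)
        num += w * v
        den += w
    return num / den
```

Two properties make quick checks easy: a query on a data point returns that observation, and a query equidistant from two stations returns their average.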

2. The effect of interpolation methods in temperature and salinity trends in the Western Mediterranean

Directory of Open Access Journals (Sweden)

M. VARGAS-YANEZ

2012-04-01

Full Text Available Temperature and salinity data in the historical record are scarce and unevenly distributed in space and time, and the estimation of linear trends is sensitive to several factors. In the case of the Western Mediterranean, previous works have studied the sensitivity of these trends to the use of bathythermograph data, the averaging methods, or the way in which gaps in time series are dealt with. In this work, a new factor is analysed: the effect of data interpolation. Temperature and salinity time series are generated by averaging existing data over certain geographical areas and also by means of interpolation. Linear trends from both types of time series are compared. For some layers and geographical areas there are differences between the two estimations, while in other cases the results are consistent. Results that depend neither on the use of interpolated data nor on the data analysis methods can be considered robust; results influenced by the interpolation process or by the factors analysed in previous sensitivity tests are not considered robust.

3. DrawFromDrawings: 2D Drawing Assistance via Stroke Interpolation with a Sketch Database.

Science.gov (United States)

Matsui, Yusuke; Shiratori, Takaaki; Aizawa, Kiyoharu

2017-07-01

We present DrawFromDrawings, an interactive drawing system that provides users with visual feedback to assist 2D drawing using a database of sketch images. Following the traditional imitation and emulation training of art education, DrawFromDrawings enables users to retrieve and refer to a sketch image stored in a database and provides them with various novel strokes as suggestive or deformation feedback. Given regions of interest (ROIs) in the user and reference sketches, DrawFromDrawings detects as-long-as-possible (ALAP) stroke segments and the correspondences between the user and reference sketches that are the key to computing seamless interpolations. The stroke-level interpolations are parametrized by the user strokes, the reference strokes, and new strokes created by warping the reference strokes based on the user and reference ROI shapes; a user study indicated that the interpolation could produce a variety of reasonable strokes varying in shape and complexity. DrawFromDrawings lets users either replace their strokes with interpolated strokes (deformation feedback) or overlay interpolated strokes onto their strokes (suggestive feedback). Further user studies on the feedback modes indicated that suggestive feedback enabled drawers to develop and render their ideas in their own stroke style, whereas deformation feedback enabled them to finish the sketch composition quickly.

4. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

Science.gov (United States)

Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

2018-04-01

Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily with respect to normality, variance, and estimation. Hence, handling the occurrence of outliers requires special attention, and it is important to determine suitable ways of treating them so as to ensure the quality of the analyzed data. This paper therefore discusses an alternative treatment of outliers via the linear interpolation method. Treating an outlier as a missing value allows the interpolation method to be applied to fill it in, enabling comparison of the data series using forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the series improved by linear interpolation gave better forecasting results than the original time series data for both the Box-Jenkins and neural network approaches.
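
The outlier-as-missing-value idea can be sketched as follows. The z-score detection rule and its threshold are hypothetical illustrations added here for a self-contained example; the paper does not tie its treatment to a specific detection step:

```python
import numpy as np

def interpolate_outliers(series, z_thresh=3.0):
    """Flag outliers (hypothetical z-score rule), treat them as
    missing values, and fill them by linear interpolation between
    the surrounding valid observations (edges are clamped)."""
    x = np.asarray(series, dtype=float)
    z = np.abs(x - x.mean()) / x.std()
    bad = z > z_thresh
    idx = np.arange(x.size)
    x[bad] = np.interp(idx[bad], idx[~bad], x[~bad])
    return x

arrivals = [100., 102., 98., 500., 101., 99., 103.]  # 500 is a spike
clean = interpolate_outliers(arrivals, z_thresh=2.0)
```

The spike at position 3 is replaced by the linear interpolant of its valid neighbours (98 and 101), i.e. 99.5, while all other points are left untouched.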

5. The interpolation method based on endpoint coordinate for CT three-dimensional image

International Nuclear Information System (INIS)

Suto, Yasuzo; Ueno, Shigeru.

1997-01-01

Image interpolation is frequently used to bring the slice resolution of CT data up to the in-plane spatial resolution; improved quality of reconstructed three-dimensional images can be attained with this technique as a result. Linear interpolation is a well-known and widely used method. The distance-image method, a non-linear interpolation technique, is also used, converting CT-value images to distance images. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are first converted to binary images by thresholding, and sequences of 1-valued pixels are then arranged in vertical or horizontal directions. A sequence of 1-valued pixels is defined as a line segment with a starting point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)

6. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

Science.gov (United States)

Mei, Gang; Xu, Nengxiong; Xu, Liangliang

2016-01-01

This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on a modern Graphics Processing Unit (GPU). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest-neighbor (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction at the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate its performance, we perform five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the use of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
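
A serial sketch of the AIDW idea follows. The density-to-power mapping below is a hypothetical linear rule chosen for illustration; the paper's actual adaptive rule and its GPU kernels are not reproduced here:

```python
import numpy as np

def aidw_predict(pts, vals, q, k=4, alpha_range=(1.0, 3.0)):
    """Simplified CPU sketch of Adaptive IDW: the distance-decay
    power is picked per query point from the mean k-nearest-neighbour
    distance (a hypothetical linear density-to-power rule)."""
    d = np.linalg.norm(pts - q, axis=1)
    if np.any(d == 0):                  # query coincides with a sample
        return vals[np.argmin(d)]
    knn = np.sort(d)[:k]
    density = knn.mean() / d.mean()     # in (0, 1]: small means dense
    lo, hi = alpha_range
    alpha = lo + (hi - lo) * density    # sparser area -> larger power
    w = d ** -alpha
    return float(np.sum(w * vals) / np.sum(w))

pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
center = aidw_predict(pts, vals, np.array([0.5, 0.5]))
```

At the symmetric centre point all weights are equal, so the prediction is the plain mean of the four sample values regardless of the chosen power.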

7. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

Science.gov (United States)

Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

2014-06-16

Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition, as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imaging sensors employ four different pixelated polarization filters, commonly referred to as division-of-focal-plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) time (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for a division-of-focal-plane polarimeter.

8. Spatial interpolation of fine particulate matter concentrations using the shortest wind-field path distance.

Directory of Open Access Journals (Sweden)

Longxiang Li

Full Text Available Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind-field on the distribution of particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. The proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for assessing the effects of pollutants on human health.
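
The pipeline above (cost surface, shortest paths, IDW with path distances) can be condensed into a small grid sketch. The 4-connected Dijkstra and the uniform "barrier" cost surface below are illustrative assumptions standing in for the paper's Gaussian-dispersion cost model:

```python
import heapq
import numpy as np

def shortest_path_dists(cost, src):
    """Dijkstra over a 4-connected grid: each edge weight is the mean
    of the two cell costs, so high-cost (upwind) cells lengthen the
    effective distance."""
    ny, nx = cost.shape
    dist = np.full((ny, nx), np.inf)
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx:
                nd = d + 0.5 * (cost[i, j] + cost[a, b])
                if nd < dist[a, b]:
                    dist[a, b] = nd
                    heapq.heappush(pq, (nd, (a, b)))
    return dist

def idw_path(stations, vals, cost, target, p=2.0):
    """IDW where the Euclidean distance is replaced by the shortest
    wind-field path distance from each station to the target cell."""
    d = np.array([shortest_path_dists(cost, s)[target] for s in stations])
    w = (d + 1e-12) ** -p
    return float(np.sum(w * vals) / np.sum(w))

cost = np.ones((5, 5))
cost[:, 2] = 10.0                 # a high-cost "barrier" column
pm = idw_path([(0, 0), (0, 4)], np.array([10.0, 50.0]), cost, (4, 0))
```

The station separated from the target by the barrier receives a much larger path distance than its Euclidean distance would suggest, so the estimate is pulled toward the station on the same side.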

9. Efficacy of 1.5% Dish Washing Solution and 95% Lemon Water in Substituting Perilous Xylene as a Deparaffinizing Agent for Routine H and E Staining Procedure: A Short Study

Directory of Open Access Journals (Sweden)

2014-01-01

Full Text Available Aim. To assess the efficacy of dish washing solution and diluted lemon water in deparaffinizing sections during the conventional hematoxylin and eosin (H and E) staining technique. Objective. To utilize an eco-friendly, economical substitute for xylene. Materials and Methods. From twenty paraffin-embedded tissue blocks, three sections each were prepared. One section was stained with the conventional H and E method (Group A) and the other two sections with xylene-free (XF) H and E (Groups B and C). Staining characteristics were compared with xylene and scores were assigned; a total score of 3–5 was regarded as adequate for diagnosis and a lower score as inadequate. Statistical Analysis. Chi-square test, Kruskal-Wallis ANOVA test, and Mann-Whitney U test were used. Results. Adequacy of nuclear staining, crispness, and staining for diagnosis were greater in Groups A and C (100%) than in Group B (95%). Adequacy of cytoplasmic staining was similar in all three groups (100%). Group B showed comparatively superior uniform staining and less retention of wax. Conclusion. Dish washing solution or diluted lemon water can replace xylene as the deparaffinizing agent in the hematoxylin and eosin procedure.

10. Further Studies, About New Elements Production, by Electrolysis of Cathodic Pd Thin–Long Wires, in Alcohol-Water Solutions (H, D) and Th-Hg Salts. New Procedures to Produce Pd Nano-Structures

CERN Document Server

Celani, F; Righi, E; Trenta, G; Catena, C; D’Agostaro, G; Quercia, P; Andreassi, V; Marini, P; Di Stefano, V; Nakamura, M; Mancini, A; Sona, P G; Fontana, F; Gamberale, L; Garbelli, D; Celia, E; Falcioni, F; Marchesini, M; Novaro, E; Mastromatteo, U

2005-01-01

Abstract: The systematic studies on the detection of new elements, some even with an isotopic composition different from the natural one, after prolonged electrolysis of Pd wires, were continued at the National Institute of Nuclear Physics, Frascati National Laboratories, Italy. The electrolytic solution adopted is the unusual one used by our experimental group since 1999. In short, it was a mixture of heavy ethyl alcohol (C2H5OD at 90-95%) and water (D2O at 10-5%), with Th salts at micromolar concentration and Hg at even lower concentration (both of spectroscopic purity). The liquid solutions, before use, were carefully vacuum distilled (and in-line filtered at 100 nm) at low temperatures (30-40°C) and analysed by ICP-MS. The pH was kept mildly acidic (about 3-4). The cathode is Pd (99.9% purity) in the shape of long (60 cm) and thin wires (diameter only 0.05 mm). Before use, it is carefully cleaned and oxidised by Joule heating in air, following a (complex) procedure that we have continuously improved (since 1995...

11. Quantization Procedures

International Nuclear Information System (INIS)

Cabrera, J. A.; Martin, R.

1976-01-01

We present in this work a review of the conventional quantization procedure, the one proposed by I.E. Segal, and a new quantization procedure, similar to the latter, for use in nonlinear problems. We apply these quantization procedures to different potentials and obtain the appropriate equations of motion. It is shown that for the linear case the three procedures are equivalent, but for the nonlinear cases we obtain different equations of motion and different energy spectra. (Author) 16 refs

12. Validation study of an interpolation method for calculating whole lung volumes and masses from reduced numbers of CT-images in ponies.

Science.gov (United States)

Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A

2014-12-01

Quantitative computer tomographic analysis (qCTA) is an accurate but time intensive method used to quantify volume, mass and aeration of the lungs. The aim of this study was to validate a time efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computer tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated; defined based on the attenuation in Hounsfield Units) were determined for the entire lung from all 5 mm thick CT-images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT-images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT-images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT-scan using the interpolation method was 1.5-2 h compared to 8 h for analysis of all images of one complete thoracic CT-scan. The calculation of pulmonary qCTA data by interpolation from 12 CT-images was applicable for equine lung CT-scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.

13. Seismic Experiment at North Arizona To Locate Washington Fault - 3D Data Interpolation

KAUST Repository

Hanafy, Sherif M.

2008-10-01

The recorded data is interpolated using the sinc technique to create the following two data sets. 1. Data Set #1: Here, we interpolated only in the receiver direction to regularize the receiver interval to 1 m; the source locations are the same as in the original data (2 and 4 m source intervals). The data then contains 6 lines, each with 121 receivers, and a total of 240 shot gathers. 2. Data Set #2: Here, we used the result from the previous step and interpolated only in the shot direction to regularize the shot interval to 1 m, so that both shots and receivers have a 1 m interval. The data contains 6 lines, each with 121 receivers, and a total of 726 shot gathers.
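
The sinc (Whittaker-Shannon) regularization step can be sketched for a single line of receivers; the trace values and geometry below are toy assumptions:

```python
import numpy as np

def sinc_resample(x, t_old, t_new):
    """Whittaker-Shannon reconstruction: each output sample is a
    sinc-weighted sum of the uniformly spaced input samples
    (t_old must be uniform; its spacing dt sets the band limit)."""
    dt = t_old[1] - t_old[0]
    # matrix of sinc weights, shape (len(t_new), len(t_old))
    w = np.sinc((t_new[:, None] - t_old[None, :]) / dt)
    return w @ x

t_old = np.arange(0, 32, 2.0)           # 2 m receiver interval
sig = np.cos(2 * np.pi * 0.05 * t_old)  # band-limited trace
t_new = np.arange(0, 31, 1.0)           # regularized to 1 m
fine = sinc_resample(sig, t_old, t_new)
```

At the original receiver positions the sinc weights reduce to a one-hot row, so the resampled trace reproduces the recorded samples exactly while filling in the 1 m positions between them.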

14. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

Directory of Open Access Journals (Sweden)

Hyojin Lee

2015-01-01

Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares the estimation of missing precipitation data through Kth-nearest-neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data than the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
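
A minimal sketch of one of the listed kernels in a Nadaraya-Watson estimator follows; the univariate day-based setup, the values and the bandwidth are simplifying assumptions for illustration:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, else 0."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def kernel_estimate(t_obs, p_obs, t_miss, bandwidth):
    """Nadaraya-Watson estimate of precipitation at a missing day
    from neighbouring observations: a kernel-weighted average."""
    w = epanechnikov((t_miss - t_obs) / bandwidth)
    return float(np.sum(w * p_obs) / np.sum(w))

days = np.array([1., 2., 4., 5.])
precip = np.array([3.0, 5.0, 7.0, 9.0])
est = kernel_estimate(days, precip, 3.0, bandwidth=2.5)
```

With symmetric neighbours the kernel weights pair up, so the estimate at day 3 lands exactly midway between the surrounding trend values.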

15. Comparison of the accuracy of kriging and IDW interpolations in estimating groundwater arsenic concentrations in Texas.

Science.gov (United States)

Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E

2014-04-01

Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level of 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) than with kriging Gaussian, kriging spherical, or cokriging interpolation when analyzing data from wells across the entire state of Texas (p < 0.05). Estimation of groundwater arsenic levels depends on both the interpolation method and the wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into the regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic levels in Texas, with IDW in particular. Published by Elsevier Inc.

16. Reconfiguration of face expressions based on the discrete capture data of radial basis function interpolation

Institute of Scientific and Technical Information of China (English)

ZHENG Guangguo; ZHOU Dongsheng; WEI Xiaopeng; ZHANG Qiang

2012-01-01

Compactly supported radial basis functions give the coefficient matrix of the linear system solved for the weights a sparse banded structure, thereby reducing the complexity of the algorithm. Firstly, starting from compactly supported radial basis functions, the paper transforms the multiquadric (MQ) function and proposes a class of compactly supported MQ functions. Secondly, the paper describes a method that interpolates discrete motion capture data to solve for the motion vectors of the interpolation points, which are used in facial expression reconstruction. Finally, according to the uneven distribution of the face markers, the markers are numbered and grouped in accordance with their density level and then interpolated group by group. The approach not only ensures the accuracy and smoothness of the deformation of local face areas, but also reduces the time complexity of the computation.
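
The weight system and its compactly supported kernel can be sketched as follows. The Wendland C2 function stands in here for the paper's compactly supported MQ, and the marker layout and displacements are toy assumptions:

```python
import numpy as np

def wendland_c2(r, support):
    """Wendland C2 compactly supported RBF: (1-q)^4 (4q+1) for q < 1,
    zero outside the support (a standard stand-in kernel)."""
    q = r / support
    return np.where(q < 1.0, (1 - q) ** 4 * (4 * q + 1), 0.0)

def rbf_fit_eval(centers, values, queries, support=2.0):
    """Solve the weight system A w = values (sparse and banded for
    genuinely compact support) and evaluate the interpolant."""
    r_cc = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    w = np.linalg.solve(wendland_c2(r_cc, support), values)
    r_qc = np.linalg.norm(queries[:, None] - centers[None, :], axis=2)
    return wendland_c2(r_qc, support) @ w

markers = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
dx = np.array([0.1, 0.2, 0.1, 0.2])      # marker displacements
interp = rbf_fit_eval(markers, dx, np.array([[0., 0.], [0.5, 0.5]]))
```

At a marker position the interpolant reproduces the measured displacement exactly; between markers it blends the neighbouring displacements smoothly, which is the property the facial reconstruction relies on.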

17. Correction of Sample-Time Error for Time-Interleaved Sampling System Using Cubic Spline Interpolation

Directory of Open Access Journals (Sweden)

Qin Guo-jie

2014-08-01

Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite impulse response (FIR) filter structure, and the method for correcting the interpolation compensation filter coefficients is deduced. A 4 GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective in attenuating the spurious spurs and improving the dynamic performance of the system.

18. The analysis of decimation and interpolation in the linear canonical transform domain.

Science.gov (United States)

Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

2016-01-01

Decimation and interpolation are the two basic building blocks in multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect-reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain, and can advance filter bank theory in the LCT domain.

19. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

Energy Technology Data Exchange (ETDEWEB)

Brislawn, Christopher M. [Los Alamos National Laboratory

2012-08-13

How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor-product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

20. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

Science.gov (United States)

Peters, Andre; Nehls, Thomas; Wessolek, Gerd

2016-06-01

Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
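
The difference between the step scheme and a smoother scheme can be illustrated on a toy mass series with hypothetical "significant" indices (the spline variant is omitted to keep the sketch dependency-free):

```python
import numpy as np

def fill_between_marks(t, mass, marks, scheme="linear"):
    """Rebuild a continuous mass series from the significant points
    only (indices in `marks`): 'step' holds the last significant
    value, 'linear' interpolates between consecutive ones."""
    tm, mm = t[marks], mass[marks]
    if scheme == "step":
        idx = np.searchsorted(tm, t, side="right") - 1
        return mm[np.clip(idx, 0, len(mm) - 1)]
    return np.interp(t, tm, mm)

t = np.arange(6.0)
mass = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0])
marks = [0, 2, 4]                      # significant mass changes
step = fill_between_marks(t, mass, marks, "step")
lin = fill_between_marks(t, mass, marks, "linear")
```

The step output jumps abruptly at each significant change, which is why its flux rates are only reasonable at coarse resolution, while the linear output distributes the same mass change smoothly over the interval.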

1. Interpolating precipitation and its relation to runoff and non-point source pollution.

Science.gov (United States)

Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

2005-01-01

When rainfall varies spatially, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics, but no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen polygon method, the traditional inverse distance method, and a modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also the differences between the elevation of the region with no rainfall records and the elevations of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method will be quite close to the actual precipitation. When rainfall is heavy at locations with high elevation, the rainfall changes with elevation; in this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, the estimation errors of runoff and NPSP are large regardless of the interpolation method used to yield the rainfall input. Moreover, the correlation between the relative error of the predicted runoff and that of the predicted SS pollutant loading is high. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted SS concentration may be unstable.
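
A minimal sketch of the modified inverse distance idea follows; the elevation-weighting constant `lam` and the station data are hypothetical illustrations, as the record does not give the exact formula:

```python
import numpy as np

def idw_with_elevation(xy, z_elev, rain, q_xy, q_elev, p=2.0, lam=0.5):
    """Modified inverse distance: the effective distance combines the
    horizontal separation with a scaled elevation difference, so
    stations at a similar elevation to the query get more weight."""
    d_h = np.linalg.norm(xy - q_xy, axis=1)
    d_v = np.abs(z_elev - q_elev)
    d = np.sqrt(d_h ** 2 + (lam * d_v) ** 2) + 1e-12
    w = d ** -p
    return float(np.sum(w * rain) / np.sum(w))

stations = np.array([[0., 0.], [10., 0.]])
elev = np.array([100., 900.])          # station elevations, m a.s.l.
rain = np.array([20.0, 60.0])          # observed rainfall, mm
# Query halfway horizontally, but at the low station's elevation:
est = idw_with_elevation(stations, elev, rain, np.array([5., 0.]), 100.0)
```

With `lam=0` the method reduces to plain IDW and the midpoint estimate is the simple average (40 mm); with the elevation term the estimate is pulled strongly toward the station at the matching elevation.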

2. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

Science.gov (United States)

Gorji, Taha; Sertel, Elif; Tanik, Aysegul

2017-12-01

Soil management is an essential concern in protecting soil properties, enhancing appropriate soil quality for plant growth and agricultural productivity, and preventing soil erosion. Soil scientists and decision makers require accurate, well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting, and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion, and soil salinization due to both human-induced activities and natural factors has exacerbated conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data was used for interpolation modelling and the remainder for validation of the predicted results. The relationship between the validation points and their corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps were generated from the two interpolation methods for the soil organic matter, phosphorus, lime and boron parameters

3. A study of interpolation method in diagnosis of carpal tunnel syndrome

Directory of Open Access Journals (Sweden)

Alireza Ashraf

2013-01-01

Full Text Available Context: The low correlation between patients' signs and symptoms of carpal tunnel syndrome (CTS) and the results of electrodiagnostic tests makes the diagnosis challenging in mild cases. Interpolation is a mathematical method for finding the median nerve conduction velocity (NCV) exactly at the carpal tunnel site; therefore, it may be helpful in diagnosing CTS in patients with equivocal test results. Aim: The aim of this study is to evaluate the interpolation method as a CTS diagnostic test. Settings and Design: Patients with two or more clinical symptoms and signs of CTS in the median nerve territory, with 3.5 ms ≤ distal median sensory latency < 4.6 ms, who came to our electrodiagnostic clinics, as well as age-matched healthy control subjects, were recruited into the study. Materials and Methods: Median compound motor action potential and median sensory nerve action potential latencies were measured with a MEDLEC SYNERGY VIASIS electromyography system, and conduction velocities were calculated by both the routine method and the interpolation technique. Statistical Analysis Used: Chi-square and Student's t-test were used for comparing group differences. Cut-off points were calculated using the receiver operating characteristic curve. Results: A sensitivity of 88%, specificity of 67%, positive predictive value (PPV) of 70.8% and negative predictive value (NPV) of 84.7% were obtained for median motor NCV, and a sensitivity of 98.3%, specificity of 91.7%, PPV of 91.9% and NPV of 98.2% were obtained for median sensory NCV with the interpolation technique. Conclusions: The median motor interpolation method is a good technique, but it has lower sensitivity and specificity than the median sensory interpolation method.

4. The twitch interpolation technique for study of fatigue of human quadriceps muscle

DEFF Research Database (Denmark)

Bülow, P M; Nørregaard, J; Mehlsen, J

1995-01-01

The aim of the study was to examine whether the twitch interpolation technique could be used to objectively measure fatigue in the quadriceps muscle in subjects performing submaximally. The 'true' maximum isometric quadriceps torque was determined in 21 healthy subjects using the twitch interpolation technique. An endurance test was then performed in which the subjects made repeated isometric contractions at 50% of the 'true' maximum torque for 4 s, separated by 6 s rest periods. During the test, the force response to single electrical stimulation (twitch amplitude) was measured at 50% and 25... In conclusion, the twitch technique can be used for objectively measuring fatigue of the quadriceps muscle.

5. Angular interpolations and splice options for three-dimensional transport computations

International Nuclear Information System (INIS)

Abu-Shumays, I.K.; Yehnert, C.E.

1996-01-01

New, accurate and mathematically rigorous angular interpolation strategies are presented. These strategies preserve flow and directionality separately over each octant of the unit sphere, and are based on a combination of spherical harmonics expansions and least-squares algorithms. Details of a three-dimensional to three-dimensional (3-D to 3-D) splice method which utilizes the new angular interpolations are summarized. The method has been implemented in a multidimensional discrete ordinates transport computer program. Various features of the splice option are illustrated by several applications to a benchmark Dog-Legged Void Neutron (DLVN) streaming and transport experimental assembly.

6. Interface information transfer between non-matching, nonconforming interfaces using radial basis function interpolation

CSIR Research Space (South Africa)

Bogaers, Alfred EJ

2016-10-01

Full Text Available In other words, g_B = [Φ_BA P_B] [M_AA P_A; P_A^T 0]^{-1} [g_A; 0]. (15) Radial basis function definitions: C0 compactly supported piecewise polynomial (C0): (1 - ||x||/r)^2_+; C2 compactly supported piecewise polynomial (C2): (1 - ||x||/r)^4_+ (4(||x||/r) + 1); thin-plate spline (TPS)... (for a numerical comparison to Kriging and the moving least-squares method, see Krishnamurthy [16]). RBF interpolation is based on fitting a series of splines, or basis functions, to interpolate information from one point cloud to another. Let us assume we...
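The augmented interpolation system quoted in this record, g_B = [Φ_BA P_B][M_AA P_A; P_A^T 0]^{-1}[g_A; 0], can be sketched in one dimension using the C2 compactly supported (Wendland) basis with linear polynomial augmentation. The node coordinates and support radius below are hypothetical; with the polynomial term included, linear fields are transferred exactly between the two interfaces.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def wendland_c2(r, R):
    """C2 compactly supported basis: (1 - r/R)^4_+ (4 r/R + 1)."""
    t = max(0.0, 1.0 - r / R)
    return t ** 4 * (4.0 * r / R + 1.0)

def rbf_transfer(xa, ga, xb, R=2.0):
    """Transfer values ga at interface nodes xa to nodes xb by solving the
    augmented system [M_AA P_A; P_A^T 0][w; c] = [g_A; 0], then evaluating."""
    n = len(xa)
    A = [[wendland_c2(abs(xa[i] - xa[j]), R) for j in range(n)] + [1.0, xa[i]]
         for i in range(n)]
    A.append([1.0] * n + [0.0, 0.0])        # constant polynomial constraint
    A.append(list(xa) + [0.0, 0.0])         # linear polynomial constraint
    coef = solve(A, list(ga) + [0.0, 0.0])
    w, c0, c1 = coef[:n], coef[n], coef[n + 1]
    return [sum(w[j] * wendland_c2(abs(x - xa[j]), R) for j in range(n))
            + c0 + c1 * x for x in xb]

xa = [0.0, 0.5, 1.0, 1.5, 2.0]              # source interface nodes
xb = [0.25, 0.9, 1.7]                       # non-matching target nodes
gb = rbf_transfer(xa, [2 * x + 1 for x in xa], xb)
```

Because of the polynomial augmentation, the linear field 2x + 1 is reproduced exactly at the target nodes.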

7. Energy band structure of Cr by the Slater-Koster interpolation scheme

International Nuclear Information System (INIS)

Seifu, D.; Mikusik, P.

1986-04-01

The matrix elements of the Hamiltonian between nine localized wave functions in the tight-binding formalism are derived. The symmetry-adapted wave functions and the secular equations are formed by the group-theory method for high-symmetry points in the Brillouin zone. A set of interaction integrals is chosen on physical grounds and fitted via the Slater-Koster interpolation scheme to the ab initio band structure of chromium calculated by the Green function method. The energy band structure of chromium is then interpolated and extrapolated in the Brillouin zone. (author)

8. Assessment of interaction-strength interpolation formulas for gold and silver clusters

Science.gov (United States)

Giarrusso, Sara; Gori-Giorgi, Paola; Della Sala, Fabio; Fabiano, Eduardo

2018-04-01

The performance of functionals based on the idea of interpolating between the weak- and strong-interaction limits of the global adiabatic-connection integrand is carefully studied for the challenging case of noble-metal clusters. Different interpolation formulas are considered and various features of this approach are analyzed. It is found that these functionals, when used as a correlation correction to Hartree-Fock, are quite robust for the description of atomization energies, while performing less well for ionization potentials. Future directions that can be envisaged from this study and a previous one on main-group chemistry are discussed.

9. Study of liquid-vapor equilibrium with the help of interpolation equation of state

International Nuclear Information System (INIS)

Vorob'ev, V.S.

1995-01-01

The paper proposes an interpolation equation of state that tends to the ideal-gas limit and in the majority of cases takes the form of the Mie-Grueneisen equation. Its interpolation properties are defined by the dependence of the Grueneisen coefficient on density in the rarefaction region, which contains two arbitrary constants. Density, Debye temperature, Grueneisen coefficient and heat capacity in the solid phase, the statistical atomic sum in the gaseous phase, and the critical density, pressure and temperature are assigned as the initial data of the equation. The equation was used to describe sets of experimental data on the coexistence curves and saturation pressure for Cs and Hg. 19 refs.; 8 figs.; 2 tabs

10. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

Science.gov (United States)

2014-06-01

Facial animation in terms of 3D facial data has solid research support from laser scanning and advanced 3D tools for producing complex facial models. However, the approach still lacks facial expression based on emotional condition. Facial skin colour, which is closely related to human emotion, is required to improve the effect of facial expression. This paper presents techniques for facial animation transformation using facial skin colour, based on linear interpolation and bilinear interpolation. The generated expressions are almost identical to genuine human expressions and also enhance the facial expression of the virtual human.
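A minimal sketch of how skin colour can be blended with linear and bilinear interpolation between expression key-colours; the RGB values below are hypothetical, not taken from the paper.

```python
def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return tuple(a + t * (b - a) for a, b in zip(c0, c1))

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation between four corner colours:
    first along u, then along v."""
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

# Hypothetical expression key-colours: neutral, angry (flushed), pale, happy.
neutral = (224, 190, 170)
angry = (240, 130, 120)
pale = (230, 220, 210)
happy = (235, 180, 160)

half_angry = lerp(neutral, angry, 0.5)               # midway toward flushed
blend = bilerp(neutral, angry, pale, happy, 0.5, 0.5)  # centre of the square
```

Linear interpolation moves along one emotional axis; bilinear interpolation mixes two axes at once, which is what lets a single parameter pair (u, v) span a whole range of expressions.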

11. Effect of quantization and interpolation of projections on the sensitivity of computerized tomography

International Nuclear Information System (INIS)

Vajnberg, Eh.I.; Fajngojz, M.L.

1984-01-01

The sources and forms of manifestation of errors in the quantization and interpolation of projections in X-ray computerized tomography are considered, and quantitative criteria for their evaluation are formulated. The dominating role of the interaction of two successive quantizations of projections, the one-dimensional and the two-dimensional one, is revealed. The necessity of jointly optimizing the two-dimensional quantization range, the extent and form of the interpolation function, and the quantized convolution kernel is substantiated. Experimental results are presented for 256x256 tomograms and 480 projections.

12. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

International Nuclear Information System (INIS)

Song, Nam Kyu; Koh, Kwang Joon

1998-01-01

Image resampling is of particular interest in digital radiology. When resampling an image to a new set of coordinates, blocking artifacts and image changes appear. Interpolation algorithms have been used to enhance image quality. Resampling is used to increase the number of points in an image to improve its appearance for display. The process of interpolation fits a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained by Digora, CDR and scanning of Ektaspeed Plus periapical radiographs of a dry skull and a human subject. The subjects were exposed with an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method provides the better image, seven functions were compared: (1) nearest neighbor, (2) linear, (3) non-linear, (4) facet model, (5) cubic convolution, (6) cubic spline, (7) gray segment expansion. The resampled images were compared in terms of SNR (signal-to-noise ratio) and MTF (modulation transfer function) coefficient values. The obtained results were as follows. 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora and film scan (P 0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. The better edge sharpness was obtained with the gray segment expansion method.
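The effect this study quantifies can be illustrated with a one-dimensional toy comparison: upsample a coarse signal by nearest-neighbour and by linear interpolation, then compare SNR against the dense ground truth. This is a generic sketch of two of the simpler kernels, not the study's radiographic pipeline.

```python
import math

def resample_nearest(sig, factor):
    """Nearest-neighbour upsampling by an integer factor."""
    n = len(sig)
    return [sig[min(n - 1, round(i / factor))] for i in range(n * factor)]

def resample_linear(sig, factor):
    """Linear-interpolation upsampling by an integer factor."""
    n = len(sig)
    out = []
    for i in range(n * factor):
        x = i / factor
        j = min(n - 2, int(x))
        t = x - j
        out.append((1 - t) * sig[j] + t * sig[j + 1])
    return out

def snr_db(ref, approx):
    """Signal-to-noise ratio of approx against ref, in dB."""
    sig = sum(r * r for r in ref)
    err = sum((r - a) ** 2 for r, a in zip(ref, approx))
    return 10.0 * math.log10(sig / err) if err else float("inf")

# Upsample a coarse sine by 4x and compare against the dense ground truth.
fine = [math.sin(2 * math.pi * i / 128) for i in range(128)]
coarse = fine[::4]
snr_nn = snr_db(fine, resample_nearest(coarse, 4))
snr_lin = snr_db(fine, resample_linear(coarse, 4))
```

For a smooth signal the linear kernel gives a noticeably higher SNR than nearest neighbour, mirroring the ranking of smoother kernels in the study.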

13. High precision simple interpolation asynchronous FIFO based on ACEX1K30 for HIRFL-CSRe

International Nuclear Information System (INIS)

Li Guihua; Qiao Weimin; Jing Lan

2008-01-01

A high-precision simple-interpolation asynchronous FIFO for HIRFL-CSRe was developed, based on the ACEX1K30 FPGA and written in the VHDL hardware description language. The FIFO runs in the FPGA of the DSP module of HIRFL-CSRe. The FIFO input data come from the DSP data bus and the output data go to the DAC data bus. Its kernel adopts a double-buffer ping-pong mode and implements simple interpolation inside the FPGA. The module can keep the output data time delay within 40 ns. The experimental results indicate that this module is practical and accurate for HIRFL-CSRe. (authors)

14. Formalizing physical security procedures

NARCIS (Netherlands)

Although the problems of physical security emerged more than 10,000 years before the problems of computer security, no formal methods have been developed for them, and the solutions have been evolving slowly, mostly through social procedures. But as the traffic on physical and social networks is now

15. Fast procedures for coincidence-summing correction in γ-ray spectrometry

International Nuclear Information System (INIS)

De Felice, Pierino; Angelini, Paola; Fazio, Aldo; Biagini, Roberto

2000-01-01

Simplified and fast procedures for coincidence-summing correction in γ-ray spectrometry were investigated. These procedures are based on the usual theoretical expressions for the correction factors, but differ in the determination of the total efficiency curve, which is based on the following approximations: (a) replacement, below the knee efficiency value, of the total efficiency by the full-energy peak efficiency; and (b) use of linear interpolations (in a log-log plot) between only two experimental points above the knee efficiency value; or (c) assumption of a peak-to-total efficiency ratio independent of the counting geometry; or (d) assumption of a constant relation between the peak-to-total efficiency ratios and the photoelectric-to-total cross-section ratios. The above approximations were separately assumed in determining the coincidence-summing correction factors for nuclides with complex decay schemes (133Ba, 134Cs, 152Eu) and for 60Co and 88Y, measured on a 15% relative efficiency p-type coaxial HPGe detector for three source-detector geometries: a point source placed on top of and at 10 cm from the detector window, and a 1 l Marinelli beaker filled with aqueous solution. The results were compared with those based on more accurate experimental determinations of the total efficiency curve from measurements of standard sources of eight different single-γ-ray emitters. The usefulness of each simplified procedure is evaluated with respect to its accuracy and to the reduction in the number of standard sources and in measurement time
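Approximation (b), a linear interpolation in log-log space between two experimental total-efficiency points, can be sketched as follows; the calibration energies and efficiencies below are hypothetical. Log-log linear interpolation is exact whenever the efficiency follows a power law between the two points, which is why two points above the knee can suffice.

```python
import math

def loglog_interp(e, e1, eff1, e2, eff2):
    """Total efficiency at energy e, linearly interpolated in log-log space
    between the calibration points (e1, eff1) and (e2, eff2)."""
    t = (math.log(e) - math.log(e1)) / (math.log(e2) - math.log(e1))
    return math.exp(math.log(eff1) + t * (math.log(eff2) - math.log(eff1)))

# Hypothetical total-efficiency points above the knee (energies in MeV).
eff = loglog_interp(0.662, 0.300, 0.040, 1.330, 0.018)
```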

16. Soil moisture mapping in torrential headwater catchments using a local interpolation method (Draix-Bléone field observatory, South Alps, France)

Science.gov (United States)

Mallet, Florian; Marc, Vincent; Douvinet, Johnny; Rossello, Philippe; Le Bouteiller, Caroline; Malet, Jean-Philippe

2016-04-01

Soil moisture is a key parameter that controls runoff processes at the watershed scale. It is characterized by high spatial and temporal variability, controlled by site properties such as soil texture, topography, vegetation cover and climate. Several recent studies showed that the change in water storage is a key variable for understanding the distribution of water residence times and the shape of flood hydrographs (McDonnell and Beven, 2014; Davies and Beven, 2015). Knowledge of high-frequency soil moisture variation across scales is a prerequisite for better understanding the areal distribution of runoff generation. The present study was carried out in the torrential Draix-Bléone experimental catchments, where water storage processes are expected to occur mainly in the first meter of soil. The 0.86 km2 Laval marly torrential watershed has a peculiar hydrological behavior during flood events, with specific discharges among the highest in the world. To better understand the internal behavior of the Laval and to identify explanatory parameters of runoff generation, additional field equipment has been set up in sub-basins with various land-use and morphological characteristics. From fall 2015 onwards this new instrumentation has supplemented the routine measurements (rainfall rate, streamflow) and developed into a network of high-frequency soil water content sensors (moisture probes, mini lysimeters). Data collected since early May, together with complementary measurement campaigns (itinerant soil moisture measurements, geophysical measurements), now make it possible to propose a soil water content mapping procedure. We use the LISDQS spatial extrapolation model based on a local interpolation method (Joly et al., 2008). The interpolation is carried out from different geographical variables derived from a high-resolution DEM (1 m LIDAR) and a land cover image. Unlike conventional interpolation procedures, this method takes into account local forcing parameters such as slope, aspect
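A minimal sketch of the basic local interpolation step: estimate a value from only the k nearest stations (inverse-distance weighted), the building block on top of which covariate-aware methods like the one cited add DEM-derived predictors such as slope and aspect. Station coordinates and values are hypothetical.

```python
def local_idw(x, y, stations, k=4, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) using only the k
    nearest (xi, yi, value) stations, i.e. a local interpolation window."""
    ranked = sorted(stations, key=lambda s: (x - s[0]) ** 2 + (y - s[1]) ** 2)
    num = den = 0.0
    for xi, yi, v in ranked[:k]:
        d2 = (x - xi) ** 2 + (y - yi) ** 2
        if d2 == 0.0:
            return v                     # exact hit on a station
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

# Hypothetical soil-moisture stations (x, y, volumetric water content in %).
stations = [(0, 0, 10.0), (1, 0, 12.0), (0, 1, 14.0), (1, 1, 16.0), (5, 5, 40.0)]
centre = local_idw(0.5, 0.5, stations, k=4)   # distant station excluded
```

Restricting the estimate to a local window is what keeps a far-away, dissimilar station (here the one at (5, 5)) from contaminating the prediction.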

17. Gridded precipitation dataset for the Rhine basin made with the genRE interpolation method

NARCIS (Netherlands)

Osnabrugge, van B.; Uijlenhoet, R.

2017-01-01

A high resolution (1.2x1.2km) gridded precipitation dataset with hourly time step that covers the whole Rhine basin for the period 1997-2015. Made from gauge data with the genRE interpolation scheme. See "genRE: A method to extend gridded precipitation climatology datasets in near real-time for

18. Improving the visualization of electron-microscopy data through optical flow interpolation

KAUST Repository

Carata, Lucian

2013-01-01

Technical developments in neurobiology have reached a point where the acquisition of high resolution images representing individual neurons and synapses becomes possible. For this, the brain tissue samples are sliced using a diamond knife and imaged with electron microscopy (EM). However, the technique achieves a low resolution in the cutting direction, due to limitations of the mechanical process, making a direct visualization of a dataset difficult. We aim to increase the depth resolution of the volume by adding new image slices interpolated from the existing ones, without requiring modifications to the EM image-capturing method. As classical interpolation methods do not provide satisfactory results on this type of data, the current paper proposes a re-framing of the problem in terms of motion volumes, considering the depth axis as a temporal axis. An optical flow method is adapted to estimate the motion vectors of pixels in the EM images, and this information is used to compute and insert multiple new images at certain depths in the volume. We evaluate the visualization results in comparison with interpolation methods currently used on EM data, transforming the highly anisotropic original dataset into a dataset with a larger depth resolution. The interpolation based on optical flow better reveals neurite structures with realistic undistorted shapes, and makes it easier to map neuronal connections. © 2011 ACM.
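The flow-based insertion idea can be illustrated with a one-dimensional toy: estimate the displacement between two adjacent slices by minimizing the sum of squared differences, then synthesize the midpoint slice by moving each profile half of that displacement. This is a sketch of the principle only (a single integer, even shift with circular boundaries), not the paper's dense optical-flow method.

```python
import math

def best_shift(a, b, max_shift=8):
    """Integer displacement s minimizing sum((a[i] - b[i + s])^2): a toy
    stand-in for a dense optical-flow estimate (circular boundaries)."""
    n = len(a)
    def ssd(s):
        return sum((a[i] - b[(i + s) % n]) ** 2 for i in range(n))
    return min(range(-max_shift, max_shift + 1), key=ssd)

def midpoint_slice(a, b):
    """Motion-compensated middle slice: average a and b after moving each
    by half of the estimated displacement (assumes an even shift)."""
    s = best_shift(a, b)
    h = s // 2
    n = len(a)
    return [0.5 * (a[(i - h) % n] + b[(i + s - h) % n]) for i in range(n)]

# A Gaussian bump at index 10, and the same bump moved to index 14
# in the next slice, as if a neurite drifted between cuts.
n = 32
a = [math.exp(-((i - 10) ** 2) / 4.0) for i in range(n)]
b = [a[(i - 4) % n] for i in range(n)]      # features displaced by +4
mid = midpoint_slice(a, b)                  # bump lands near index 12
```

A plain average of the two slices would show two half-height bumps; the motion-compensated midpoint keeps a single, undistorted bump halfway between them, which is the point of the flow-based approach.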

19. New Method for Mesh Moving Based on Radial Basis Function Interpolation

NARCIS (Netherlands)

De Boer, A.; Van der Schoot, M.S.; Bijl, H.

2006-01-01

A new point-by-point mesh movement algorithm is developed for the deformation of unstructured grids. The method is based on using radial basis functions (RBFs) to interpolate the displacements of the boundary nodes to the whole flow mesh. A small system of equations has to be solved, only involving

20. Response Requirement and Nature of Interpolated Stories in Retroactive Inhibition in Prose.

Science.gov (United States)

Van Mondfrans, Adrian P.; And Others

Retroactive inhibition, a loss of memory due to learning other materials between exposure to the original materials and recall, was investigated in relation to prose. Two variables were manipulated in the study: the similarity of the interpolated stories (dissimilar or similar) and the response requirement (completion-recall or multiple-choice). The 190…