WorldWideScience

Sample records for linear geometry interpolation

  1. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
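The bilinear case discussed above can be sketched in a few lines. A minimal Python illustration (the clamping behaviour at image borders is an assumption of the sketch, not taken from the paper):

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinear interpolation of a 2-D array at real-valued (x, y).

    x indexes columns, y indexes rows; coordinates are clamped to the image.
    """
    h, w = img.shape
    x = min(max(x, 0.0), w - 1.0)
    y = min(max(y, 0.0), h - 1.0)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Separable: interpolate along x on both rows, then along y.
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear(img, 0.5, 0.5))  # centre of the 2x2 patch -> 1.5
```

The two-stage structure makes the separability explicit: the same 1-D linear interpolation is applied along each axis in turn.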

  2. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  3. Quadratic Interpolation and Linear Lifting Design

    Directory of Open Access Journals (Sweden)

    Joel Solé

    2007-03-01

    Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR, and coding results are given for the new update lifting steps.
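The prediction/update lifting steps the abstract refers to can be illustrated with a standard linear lifting stage (a CDF 5/3-style sketch with periodic boundaries, given here as an illustrative example rather than the optimized steps designed in the paper):

```python
import numpy as np

def lifting_forward(x):
    """One linear lifting stage: predict odd samples by linear
    interpolation of their even neighbours (detail), then update the
    evens (approximation). Periodic boundaries are assumed."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - 0.5 * (even + np.roll(even, -1))   # prediction step (detail)
    a = even + 0.25 * (np.roll(d, 1) + d)        # update step (approximation)
    return a, d

def lifting_inverse(a, d):
    """Undo the update and prediction steps exactly."""
    even = a - 0.25 * (np.roll(d, 1) + d)
    odd = d + 0.5 * (even + np.roll(even, -1))
    out = np.empty(2 * len(a))
    out[0::2], out[1::2] = even, odd
    return out

x = np.arange(8.0)
a, d = lifting_forward(x)
print(np.allclose(lifting_inverse(a, d), x))  # lifting steps invert exactly
```

The detail signal d vanishes wherever the signal is locally linear, which is exactly the energy-minimization property the paper's optimized prediction step targets.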

  4. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

    Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085

  5. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII formats and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR replaces each section with a new table of energy versus cross section in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear form by an interval-halving algorithm: each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional-error thinning algorithm to minimize the size of each cross section table
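The interval-halving strategy described in the method of solution can be sketched as follows (a simplified illustration using a relative midpoint test, not the LINEAR code itself):

```python
def linearize(f, a, b, tol=1e-3):
    """Tabulate f on [a, b] so that linear-linear interpolation between
    the returned (x, y) points reproduces f to within relative tol.

    Interval halving: each interval is split until its midpoint is
    matched by the linear estimate (a sketch; production codes also
    thin the resulting table and guard against pathological f)."""
    pts = [(a, f(a))]

    def refine(x0, y0, x1, y1):
        xm = 0.5 * (x0 + x1)
        ym = f(xm)
        ylin = 0.5 * (y0 + y1)        # linear-linear estimate at the midpoint
        if abs(ylin - ym) <= tol * abs(ym):
            pts.append((x1, y1))
        else:
            refine(x0, y0, xm, ym)
            refine(xm, ym, x1, y1)

    refine(a, f(a), b, f(b))
    return pts

# A power law (a straight line on a log-log plot) becomes many
# linear-linear segments when tabulated to 0.1% accuracy.
table = linearize(lambda x: x ** -2, 1.0, 10.0, tol=1e-3)
print(len(table))
```

Tightening tol produces a finer table, which is why a subsequent thinning pass (as in LINEAR) is useful to keep tables small.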

  6. Hybrid vehicle optimal control : Linear interpolation and singular control

    NARCIS (Netherlands)

    Delprat, S.; Hofman, T.

    2015-01-01

    Hybrid vehicle energy management can be formulated as an optimal control problem. Considering that the fuel consumption is often computed using linear interpolation over lookup table data, a rigorous analysis of the necessary conditions provided by the Pontryagin Minimum Principle is conducted. For

  7. Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)

    Directory of Open Access Journals (Sweden)

    Maria Cristina Floreno

    1996-05-01

    Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is in a preliminary stage. What has already been implemented consists of a set of routines for elementary operations, optimized function evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with a graphical user interface allowing the use of such routines.

  8. LINTAB, Linear Interpolable Tables from any Continuous Variable Function

    International Nuclear Information System (INIS)

    1988-01-01

    1 - Description of program or function: LINTAB is designed to construct linearly interpolable tables from any function. The program will start from any function of a single continuous variable... FUNKY(X). By user input the function can be defined, (1) Over 1 to 100 X ranges. (2) Within each X range the function is defined by 0 to 50 constants. (3) At boundaries between X ranges the function may be continuous or discontinuous (depending on the constants used to define the function within each X range). 2 - Method of solution: LINTAB will construct a table of X and Y values where the tabulated (X,Y) pairs will be exactly equal to the function (Y=FUNKY(X)) and linear interpolation between the tabulated pairs will be within any user specified fractional uncertainty of the function for all values of X within the requested X range

  9. Geometries and interpolations for symmetric positive definite matrices

    DEFF Research Database (Denmark)

    Feragen, Aasa; Fuster, Andrea

    2017-01-01

    . In light of the simulation results, we discuss the mathematical and qualitative properties of these new metrics in comparison with the classical ones. Finally, we explore the nonlinear variation of properties such as shape and scale throughout principal geodesics in different metrics, which affects...... the visualization of scale and shape variation in tensorial data. With the paper, we will release a software package with Matlab scripts for computing the interpolations and statistics used for the experiments in the paper (Code is available at https://sites.google.com/site/aasaferagen/home/software)....

  10. A New Interpolation Approach for Linearly Constrained Convex Optimization

    KAUST Repository

    Espinoza, Francisco

    2012-08-01

    In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
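The classical Shepard formula that the thesis generalizes is plain inverse-distance weighting; a minimal sketch (not the thesis's constrained surface):

```python
import numpy as np

def shepard(x, nodes, values, p=2.0):
    """Shepard inverse-distance-weighted interpolation at point x.

    nodes: (n, d) array of data points; values: (n,) array of data values.
    The weight of node i is 1 / ||x - node_i||^p, so the surface passes
    through every node exactly.
    """
    d = np.linalg.norm(nodes - x, axis=1)
    if np.any(d == 0):                  # exactly on a node: return its value
        return float(values[np.argmin(d)])
    w = d ** -p
    return float(w @ values / w.sum())

nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([0.0, 1.0, 2.0])
print(shepard(np.array([0.5, 0.5]), nodes, vals))
```

Away from the nodes the interpolant is a convex combination of the data values, so it stays within their range, one of the properties that makes the formula attractive as a building block.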

  11. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...

  12. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

  13. A comparison of linear interpolation models for iterative CT reconstruction.

    Science.gov (United States)

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects
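Of the three forward projection models compared, Joseph's method is perhaps the easiest to sketch: step along the dominant axis one pixel at a time and linearly interpolate along the other axis. A simplified 2-D illustration (boundary handling and endpoint weighting here are assumptions of the sketch, not the authors' implementation):

```python
import numpy as np

def joseph_ray(img, p0, p1):
    """Approximate line integral of img along the segment p0 -> p1,
    with points given as (x, y) = (column, row) coordinates.

    Joseph-style: one sample per pixel along the dominant axis, with
    linear interpolation between the two rows straddling each sample.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    dx, dy = p1 - p0
    if abs(dy) > abs(dx):                     # make x the dominant axis
        return joseph_ray(img.T, p0[::-1], p1[::-1])
    h, w = img.shape
    n = int(round(abs(dx)))
    step = (p1 - p0) / max(n, 1)
    seg = float(np.hypot(step[0], step[1]))   # ray length per sample
    total = 0.0
    for k in range(n + 1):
        x, y = p0 + k * step
        i, f = int(np.floor(y)), y - np.floor(y)
        ix = int(round(x))
        if 0 <= ix < w and 0 <= i < h - 1:
            # linear interpolation perpendicular to the dominant axis
            total += (1 - f) * img[i, ix] + f * img[i + 1, ix]
    return total * seg

phantom = np.ones((4, 4))
print(joseph_ray(phantom, (0.0, 1.5), (3.0, 1.5)))  # -> 4.0
```

The bilinear model replaces the single perpendicular interpolation with full bilinear interpolation at each sample, which is the "key assumption" distinction the abstract alludes to.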

  14. Interpolation of polytopic control Lyapunov functions for discrete–time linear systems

    NARCIS (Netherlands)

    Nguyen, T.T.; Lazar, M.; Spinu, V.; Boje, E.; Xia, X.

    2014-01-01

    This paper proposes a method for interpolating two (or more) polytopic control Lyapunov functions (CLFs) for discrete-time linear systems subject to polytopic constraints, thereby combining different control objectives. The corresponding interpolated CLF is used for synthesis of a stabilizing

  15. SIGMA1-2007, Doppler Broadening ENDF Format Linear-Linear. Interpolated Point Cross Section

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of problem or function: SIGMA-1 Doppler broadens evaluated cross sections given in the linear-linear interpolation form of the ENDF/B format to one final temperature. The data are Doppler broadened, thinned, and output in the ENDF/B format. IAEA0854/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII formats and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. 2 - Modifications from previous versions: Sigma-1 VERS. 2007-1 (Jan. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 360,000 energy points. 3 - Method of solution: The energy grid is selected to ensure that the broadened data are linear-linear interpolable. SIGMA-1 starts from the free-atom Doppler broadening equations and adds the assumptions of linear data within the table and constant data outside the range of the table. If the original data are not at zero Kelvin, the data are broadened by the effective temperature difference to the final temperature. If the data are already at a temperature higher than the final temperature, Doppler broadening is not performed. 4 - Restrictions on the complexity of the problem: The input to SIGMA-1 must be data which vary linearly in energy and cross section between tabulated points. The LINEAR program provides such data. LINEAR uses only the ENDF/B BCD format tape and copies all sections except File 3 as read. Since File 3 data are in identical format for ENDF/B Versions I through VI, the program can be used with all these versions. The present version Doppler broadens only to one final temperature

  16. Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers

    Directory of Open Access Journals (Sweden)

    E. George Walters III

    2015-11-01

    Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place of the product (ulp) error uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.

  17. KTOE, KEDAK to ENDF/B Format Conversion with Linear Linear Interpolation

    International Nuclear Information System (INIS)

    Panini, Gian Carlo

    1985-01-01

    1 - Nature of physical problem solved: This code performs a fully automated translation from KEDAK into ENDF-4 or -5 format. Output is on tape in card-image format. 2 - Method of solution: Before translation the reactions are sorted into ENDF format order. The linear-linear interpolation rule is preserved. The resonance parameters, both resolved and unresolved, can also be translated, and a background cross section is formed as the difference between the contribution calculated from the parameters and the point-wise data given in the original file. Elastic angular distributions originally given in tabulated form are converted into Legendre polynomial coefficients. Energy distributions are calculated using a simple evaporation model with the temperature expressed as a function of the incident mass. 3 - Restrictions on the complexity of the problem: The existing restrictions on both KEDAK and ENDF have been applied to the array sizes used in the code, except for the number of points in a section, which in the ENDF format is limited to 5000 points. The code translates only one material at a time

  18. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    Science.gov (United States)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  19. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

    NARCIS (Netherlands)

    Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.

    2015-01-01

    A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific

  20. Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations

    Science.gov (United States)

    Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Facial animation in terms of 3D facial data has strong research support from laser scanning and advanced 3D tools for producing complex facial models. However, the approach still lacks facial expression based on emotional condition. Facial skin colour is required to enhance facial expression, as it is closely related to human emotion. This paper presents innovative techniques for facial expression transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are very close to genuine human expressions and also enhance the facial expression of the virtual human.
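The colour transformation can be illustrated with plain linear and bilinear blending of RGB triples (the sample skin tones below are hypothetical, not values from the paper):

```python
def lerp_rgb(c0, c1, t):
    """Linearly interpolate between two RGB colours, t in [0, 1]."""
    return tuple((1 - t) * a + t * b for a, b in zip(c0, c1))

def bilerp_rgb(c00, c10, c01, c11, u, v):
    """Bilinear blend of four corner colours at parameters (u, v),
    e.g. four expression extremes on a 2-D emotion grid."""
    return lerp_rgb(lerp_rgb(c00, c10, u), lerp_rgb(c01, c11, u), v)

neutral = (224, 172, 105)   # hypothetical neutral skin tone
flushed = (255, 120, 100)   # hypothetical "flushed" skin tone
print(lerp_rgb(neutral, flushed, 0.5))
```

Animating t (or the pair u, v) over time gives a smooth transition of skin colour between emotional states.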

  1. The analysis of decimation and interpolation in the linear canonical transform domain.

    Science.gov (United States)

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

    Decimation and interpolation are the two basic building blocks in multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, the perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study are the basis for generalizations of multirate signal processing in the LCT domain, which can advance filter bank theory in the LCT domain.
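For the ordinary Fourier case, the direct and polyphase decimator structures that the paper generalizes to the LCT domain can be sketched and cross-checked as follows:

```python
import numpy as np

def decimate_direct(x, h, M):
    """Direct form: anti-alias filter with FIR h, keep every M-th sample."""
    return np.convolve(x, h)[::M]

def decimate_polyphase(x, h, M):
    """Equivalent polyphase network: split h and x into M sub-sequences,
    filter each at the low rate, and sum. All arithmetic runs at the
    decimated rate, which is the efficiency argument for the structure."""
    parts = []
    for k in range(M):
        hk = h[k::M]                                   # polyphase filter k
        # x_k[n] = x[nM - k]; the n = 0 term is zero for k > 0.
        xk = x[0::M] if k == 0 else np.concatenate(([0.0], x[M - k::M]))
        parts.append(np.convolve(hk, xk))
    n = max(map(len, parts))
    return sum(np.pad(p, (0, n - len(p))) for p in parts)

rng = np.random.default_rng(1)
x, h, M = rng.normal(size=32), rng.normal(size=7), 3
d, p = decimate_direct(x, h, M), decimate_polyphase(x, h, M)
print(np.allclose(d, p[: len(d)]))   # the two structures agree
```

The LCT-domain versions in the paper follow the same decomposition, with the convolution replaced by the LCT equivalent filter.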

  2. Linearly interpolated sub-symbol optical phase noise suppression in CO-OFDM system.

    Science.gov (United States)

    Hong, Xuezhi; Hong, Xiaojian; He, Sailing

    2015-02-23

    An optical phase noise suppression algorithm, LI-SCPEC, based on linear phase interpolation and sub-symbol processing is proposed for CO-OFDM systems. By increasing the temporal resolution of carrier phase tracking through dividing one symbol into several sub-blocks, i.e., sub-symbols, inter-carrier-interference (ICI) mitigation is achieved in the proposed algorithm. Linear interpolation is employed to obtain a reliable temporal reference for sub-symbol phase estimation. The new algorithm, with only a small number of sub-symbols (N_B = 4), can provide a considerably larger laser linewidth tolerance than several other ICI mitigation algorithms, as demonstrated by Monte-Carlo simulations. Numerical analysis verifies that the best performance is achieved with an optimal, moderate number of sub-symbols. Complexity analysis shows that the required number of complex-valued multiplications is independent of the number of sub-symbols used in the proposed algorithm.

  3. Subcellular localization for Gram positive and Gram negative bacterial proteins using linear interpolation smoothing model.

    Science.gov (United States)

    Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2015-12-07

    Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and in drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied for predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram positive and Gram negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
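The linear interpolation smoothing model itself is easy to sketch for a toy alphabet: mix a bigram dependency model with a unigram back-off using a fixed weight (the sequences and weight below are illustrative, not the paper's feature set):

```python
from collections import Counter

def make_li_model(sequences, lam=0.7):
    """Linear interpolation smoothing of bigram and unigram estimates:
    P(b | a) = lam * P_bigram(b | a) + (1 - lam) * P_unigram(b)."""
    uni, bi = Counter(), Counter()
    for seq in sequences:
        uni.update(seq)
        bi.update(zip(seq, seq[1:]))
    total = sum(uni.values())

    def prob(a, b):
        p_uni = uni[b] / total
        p_bi = bi[(a, b)] / uni[a] if uni[a] else 0.0
        return lam * p_bi + (1 - lam) * p_uni

    return prob

# Toy residue sequences standing in for the paper's dependency features.
p = make_li_model([list("MKVL"), list("MKKA")], lam=0.7)
print(p("M", "K"))
```

The unigram term keeps every probability non-zero, which is exactly how the smoothing model copes with sparse, high-dimensional feature counts.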

  4. An Online Method for Interpolating Linear Parametric Reduced-Order Models

    KAUST Repository

    Amsallem, David; Farhat, Charbel

    2011-01-01

    A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.

  5. Restoring the missing features of the corrupted speech using linear interpolation methods

    Science.gov (United States)

    Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.

    2017-10-01

    One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal-to-Noise Ratio (SNR) elements leaves an incomplete spectrogram. In this case, the speech recognizer should modify the spectrogram in order to restore the missing elements, which is one direction; in another direction, the speech recognizer should be able to restore the missing elements caused by deleting low SNR elements before performing the recognition. This can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by some researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along time or along frequency is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction method. The experiments are carried out under different conditions, such as different window lengths and different utterance lengths. The speech corpus consists of 20 male and 20 female speakers, each with two different utterances used in the experiments. As a result, 80% recognition accuracy is achieved at 25% SNR.
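Linear interpolation along time, the basic operation in these reconstruction methods, can be sketched for a toy spectrogram with missing (NaN) elements (edge behaviour is an assumption of the sketch):

```python
import numpy as np

def fill_along_time(spec):
    """Restore missing (NaN) spectrogram elements by linear interpolation
    along the time axis, independently within each frequency bin.
    np.interp holds the edge values constant beyond the last observation."""
    out = spec.copy()
    t = np.arange(spec.shape[1])
    for f in range(spec.shape[0]):
        row = out[f]
        ok = ~np.isnan(row)
        if ok.any():
            row[~ok] = np.interp(t[~ok], t[ok], row[ok])
    return out

# Rows are frequency bins, columns are time frames; NaN marks deleted
# low-SNR elements.
spec = np.array([[1.0, np.nan, 3.0],
                 [2.0, 2.0, np.nan]])
print(fill_along_time(spec))
```

Interpolation along frequency is the same operation applied to the transposed array; the paper's combined method merges the two predictions.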

  6. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

    Energy Technology Data Exchange (ETDEWEB)

    Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)

    2015-01-21

    A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
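A reduced 1-D illustration of the moment-morphing idea: linearly interpolate the first two moments, map each template onto the interpolated moments, and mix with linear weights (the paper's binned, multi-dimensional procedure is more general than this sample-based sketch):

```python
import numpy as np

def morph_1d(sample0, sample1, alpha):
    """Moment-morphing sketch for two 1-D templates given as samples.

    The mean and width are interpolated linearly in alpha, each template
    is transformed onto the interpolated moments, and the transformed
    samples are mixed with weights (1 - alpha, alpha)."""
    m0, s0 = sample0.mean(), sample0.std()
    m1, s1 = sample1.mean(), sample1.std()
    m = (1 - alpha) * m0 + alpha * m1        # interpolated moments
    s = (1 - alpha) * s0 + alpha * s1
    t0 = m + (sample0 - m0) * (s / s0)       # templates moved onto (m, s)
    t1 = m + (sample1 - m1) * (s / s1)
    n0 = int(round((1 - alpha) * len(t0)))   # linear mixing of the samples
    return np.concatenate([t0[:n0], t1[: len(t1) - n0]])

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, 10000)
b = rng.normal(4.0, 2.0, 10000)
mid = morph_1d(a, b, 0.5)
print(mid.mean())   # close to the interpolated mean of 2.0
```

Because the templates are moved before mixing, the interpolated distribution shifts and widens smoothly with alpha instead of becoming bimodal, which is the advantage over naive vertical template mixing.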

  7. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

    International Nuclear Information System (INIS)

    Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.

    2015-01-01

    A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties

  8. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

    CERN Document Server

    Baak, Max; Harrington, Robert; Verkerke, Wouter

    2014-01-01

    A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.

  9. Interpolation between multi-dimensional histograms using a new non-linear moment morphing method

    CERN Document Server

    Baak, Max; Harrington, Robert; Verkerke, Wouter

    2015-01-01

    A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.

  10. Genetic design of interpolated non-linear controllers for linear plants

    International Nuclear Information System (INIS)

    Ajlouni, N.

    2000-01-01

    The techniques of genetic algorithms are proposed as a means of designing non-linear PID control systems. It is shown that the use of genetic algorithms for this purpose results in highly effective non-linear PID control systems. These results are illustrated by using genetic algorithms to design a non-linear PID control system and contrasting the results with an optimally tuned linear PID controller. (author)

  11. Multivariate interpolation

    Directory of Open Access Journals (Sweden)

    Pakhnutov I.A.

    2017-04-01

    Full Text Available. The paper deals with iterative interpolation methods in the form of recursive procedures defined by a set of simple basis functions, not necessarily real-valued. These basis functions are essentially arbitrary, chosen at the user's discretion. The resulting interpolant is remarkably versatile: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis functions is as wide as possible, since it is subject only to inessential restrictions. In particular, the method coincides with traditional polynomial (Lagrange-type) interpolation in the real one-dimensional case, and with rational, exponential, etc. interpolation in other cases. As an iterative process, the interpolation is flexible enough to change the type of interpolant within a single procedure, depending on the node number in a given set. Linear (and possibly some nonlinear) choices of the interpolation basis allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices; the interpolated data can also be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.
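    In the real one-dimensional case, an iterative interpolation built purely from repeated linear interpolations is Neville's classical scheme, which reproduces the Lagrange polynomial. A minimal sketch of that idea (an illustration, not the paper's own construction):

```python
def neville(xs, ys, x):
    """Neville's iterative scheme: each level linearly interpolates
    between two lower-order interpolants; the final value equals the
    Lagrange interpolating polynomial through (xs, ys) evaluated at x."""
    p = [float(y) for y in ys]
    n = len(p)
    for level in range(1, n):
        for i in range(n - level):
            p[i] = ((x - xs[i + level]) * p[i]
                    + (xs[i] - x) * p[i + 1]) / (xs[i] - xs[i + level])
    return p[0]
```

    Through the points (0, 0), (1, 1), (2, 4) the interpolant is the parabola y = x², so evaluation at x = 3 gives 9.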

  12. ZZ POINT-2004, Linearly Interpolable ENDF/B-VI.8 Data for 13 Temperatures

    International Nuclear Information System (INIS)

    Cullen, Dermott E.

    2004-01-01

    A - Description or function: The ENDF/B data library, ENDF/B-VI, Release 8, was processed into the form of temperature-dependent cross sections. The original evaluated data include cross sections represented as a combination of resonance parameters and/or tabulated energy-dependent cross sections, nominally at 0 Kelvin. For use in applications, these ENDF/B-VI, Release 8 data were processed into temperature-dependent cross sections at eight temperatures between 0 and 2100 Kelvin, in steps of 300 Kelvin. The data have also been processed to five astrophysics-like temperatures: 1, 10 and 100 eV, and 1 and 10 keV. At each temperature the cross sections are tabulated and linearly interpolable in energy with a tolerance of 0.1%. POINT2004 contains all of the evaluations in the ENDF/B-VI general purpose library, which covers 328 materials (isotopes or naturally occurring elemental mixtures of isotopes). No special-purpose ENDF/B-VI libraries, such as fission products, thermal scattering, or photon interaction data, are included. The majority of these evaluations are complete, in the sense that they include all cross sections over the energy range 10^-5 eV to at least 20 MeV. B - Methods: The PREPRO2002 code system was used to process the ENDF/B data. Listed below are the steps, including the PREPRO2002 codes, in the order in which the codes were run: 1) Linearly interpolable, tabulated cross sections (LINEAR); 2) Including the resonance contribution (RECENT); 3) Doppler broadening all cross sections to temperature (SIGMA1); 4) Checking the data and defining redundant cross sections by summation (FIXUP).
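    The "linearly interpolable with a tolerance of 0.1%" representation rests on adaptive bisection: an energy interval is subdivided until linear interpolation between its endpoints reproduces the function at the midpoint to within tolerance. A minimal sketch of that idea (using an absolute rather than PREPRO's relative tolerance, and hypothetical names):

```python
def linearize(f, a, b, tol=1e-3, max_depth=30):
    """Build a grid on [a, b] such that linear interpolation between
    adjacent grid points matches f at every interval midpoint to
    within an absolute tolerance tol."""
    pts = [a]

    def refine(x0, f0, x1, f1, depth):
        xm = 0.5 * (x0 + x1)
        fm = f(xm)
        if depth < max_depth and abs(0.5 * (f0 + f1) - fm) > tol:
            refine(x0, f0, xm, fm, depth + 1)  # left half appends xm
            refine(xm, fm, x1, f1, depth + 1)  # right half appends x1
        else:
            pts.append(x1)

    refine(a, f(a), b, f(b), 0)
    return pts
```

    On a smooth curve the grid is denser where the curvature is larger, which is exactly the behavior needed for resonance cross sections.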

  13. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
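    As a minimal illustration of the linear-interpolation ingredient shared by these techniques, the snippet below linearly interpolates a time-geometry pair between two reported polygon boundaries, assuming vertex correspondence has already been established (which, in the paper, is what the clustering, shape-signature, and dynamic-time-warping machinery provides; names are hypothetical):

```python
def interp_polygon(t, t0, poly0, t1, poly1):
    """Linearly interpolate matched polygon vertices between two
    reported times. Assumes both boundaries already contain the same
    number of corresponding (x, y) points, e.g. after resampling."""
    a = (t - t0) / (t1 - t0)
    return [((1 - a) * x0 + a * x1, (1 - a) * y0 + a * y1)
            for (x0, y0), (x1, y1) in zip(poly0, poly1)]
```

    Halfway in time between two positions of a region, each vertex lands at the midpoint of its corresponding pair.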

  14. ZZ POINT-2007, linearly interpolable ENDF/B-VII.0 data for 14 temperatures

    International Nuclear Information System (INIS)

    Cullen, Dermott E.

    2007-01-01

    A - Description or function: The ENDF/B data library, ENDF/B-VII.0, was processed into the form of temperature-dependent cross sections. The original evaluated data include cross sections represented as a combination of resonance parameters and/or tabulated energy-dependent cross sections, nominally at 0 Kelvin. For use in applications, these ENDF/B-VII.0 data were processed into temperature-dependent cross sections at eight temperatures: 0, 300, 600, 900, 1200, 1500, 1800 and 2100 Kelvin. The data have also been processed to six astrophysics-like temperatures: 0.1, 1, 10 and 100 eV, and 1 and 10 keV. At each temperature the cross sections are tabulated and linearly interpolable in energy with a tolerance of 0.1%. POINT 2007 contains all of the evaluations in the ENDF/B-VII general purpose library: 78 new evaluations plus 315 old ones, for a total of 393 nuclides. It also includes 16 new elemental evaluations replaced by isotopic evaluations plus 19 old ones. No special-purpose ENDF/B-VII libraries, such as fission products, thermal scattering, or photon interaction data, are included. These evaluations include all cross sections over the energy range 10^-5 eV to at least 20 MeV. The list of nuclides is indicated. B - Methods: The PREPRO 2007 code system was used to process the ENDF/B data. Listed below are the steps, including the PREPRO 2007 codes, in the order in which the codes were run: 1) Linearly interpolable, tabulated cross sections (LINEAR); 2) Including the resonance contribution (RECENT); 3) Doppler broadening all cross sections to temperature (SIGMA1); 4) Checking the data and defining redundant cross sections by summation (FIXUP); 5) Updating the evaluation dictionary in MF/MT=1/451 (DICTIN). C - Restrictions: Due to recent changes in ENDF-6 Formats and Procedures, only the latest version of the ENDF/B Pre-processing codes, namely PREPRO 2007, can be used to accurately process all current ENDF/B-VII evaluations.

  15. Towards linearization of atmospheric radiative transfer in spherical geometry

    International Nuclear Information System (INIS)

    Walter, Holger H.; Landgraf, Jochen

    2005-01-01

    We present a general approach for the linearization of radiative transfer in a spherical planetary atmosphere. The approach is based on the forward-adjoint perturbation theory. In the first part we develop the theoretical background for a linearization of radiative transfer in spherical geometry. Using an operator formulation of radiative transfer allows one to derive the linearization principles in a universally valid notation. The application of the derived principles is demonstrated for a radiative transfer problem in simplified spherical geometry in the second part of this paper. Here, we calculate the derivatives of the radiance at the top of the atmosphere with respect to the absorption properties of a trace gas species in the case of a nadir-viewing satellite instrument

  16. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    International Nuclear Information System (INIS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-01-01

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L_2 metric yields optimal linear interpolation coefficients in the form of the solution of a linear algebraic system. The choice of reference temperatures (T_j) is then optimized so as to best reconstruct, in the L_∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed against previous temperature interpolation methods by testing them on the isotope ²³⁸U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the ²³⁸U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
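    The core algebraic step, finding L2-optimal coefficients for a linear combination of reference quantities, can be sketched with a generic least-squares solve. The paper derives its coefficients from the Doppler-broadening kernels themselves; this simplified version fits directly to sampled reference curves, and all names are hypothetical:

```python
import numpy as np

def interp_coefficients(target, references):
    """L2-optimal coefficients a_j minimizing
    || target - sum_j a_j * references[j] ||_2,
    obtained here via a least-squares solve."""
    A = np.column_stack([np.asarray(r, dtype=float) for r in references])
    coef, *_ = np.linalg.lstsq(A, np.asarray(target, dtype=float), rcond=None)
    return coef

def reconstruct(coef, references):
    """Rebuild the target quantity from the fitted coefficients."""
    return sum(c * np.asarray(r, dtype=float)
               for c, r in zip(coef, references))
```

    When the target lies exactly in the span of the references, the least-squares solution recovers the combination coefficients exactly, which is the regime the temperature-interpolation scheme aims for.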

  17. Interpolation of final geometry and result fields in process parameter space

    NARCIS (Netherlands)

    Misiun, Grzegorz Stefan; Wang, Chao; Geijselaers, Hubertus J.M.; van den Boogaard, Antonius H.; Saanouni, K.

    2016-01-01

    Different routes to produce a product in a bulk forming process can be described by a limited set of process parameters. The parameters determine the final geometry as well as the distribution of state variables in the final shape. Ring rolling has been simulated using different parameter settings.

  18. Comparison of BiLinearly Interpolated Subpixel Sensitivity Mapping and Pixel-Level Decorrelation

    Science.gov (United States)

    Challener, Ryan C.; Harrington, Joseph; Cubillos, Patricio; Foster, Andrew S.; Deming, Drake; WASP Consortium

    2016-10-01

    Exoplanet eclipse signals are weaker than the systematics present in the Spitzer Space Telescope's Infrared Array Camera (IRAC), and thus the correction method can significantly impact a measurement. BiLinearly Interpolated Subpixel Sensitivity (BLISS) mapping calculates the sensitivity of the detector on a subpixel grid and corrects the photometry for any sensitivity variations. Pixel-Level Decorrelation (PLD) removes the sensitivity variations by considering the relative intensities of the pixels around the source. We applied both methods to WASP-29b, a Saturn-sized planet with a mass of 0.24 ± 0.02 Jupiter masses and a radius of 0.84 ± 0.06 Jupiter radii, which we observed during eclipse twice with the 3.6 µm channel and once with the 4.5 µm channel of IRAC aboard Spitzer in 2010 and 2011 (programs 60003 and 70084, respectively). We compare the results of BLISS and PLD, and comment on each method's ability to remove time-correlated noise. WASP-29b exhibits a strong detection at 3.6 µm and no detection at 4.5 µm. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  19. System theory as applied differential geometry. [linear system

    Science.gov (United States)

    Hermann, R.

    1979-01-01

    The invariants of input-output systems under the action of the feedback group was examined. The approach used the theory of Lie groups and concepts of modern differential geometry, and illustrated how the latter provides a basis for the discussion of the analytic structure of systems. Finite dimensional linear systems in a single independent variable are considered. Lessons of more general situations (e.g., distributed parameter and multidimensional systems) which are increasingly encountered as technology advances are presented.

  20. Pixel-Level Decorrelation and BiLinearly Interpolated Subpixel Sensitivity applied to WASP-29b

    Science.gov (United States)

    Challener, Ryan; Harrington, Joseph; Cubillos, Patricio; Blecic, Jasmina; Deming, Drake

    2017-10-01

    Measured exoplanet transit and eclipse depths can vary significantly depending on the methodology used, especially at the low S/N levels in Spitzer eclipses. BiLinearly Interpolated Subpixel Sensitivity (BLISS) models a physical, spatial effect, which is independent of any astrophysical effects. Pixel-Level Decorrelation (PLD) uses the relative variations in pixels near the target to correct for flux variations due to telescope motion. PLD is being widely applied to all Spitzer data without a thorough understanding of its behavior. It is a mathematical method derived from a Taylor expansion, and many of its parameters do not have a physical basis. PLD also relies heavily on binning the data to remove short time-scale variations, which can artificially smooth the data. We applied both methods to 4 eclipse observations of WASP-29b, a Saturn-sized planet, which was observed twice with the 3.6 µm channel and twice with the 4.5 µm channel of Spitzer's IRAC in 2010, 2011 and 2014 (programs 60003, 70084, and 10054, respectively). We compare the resulting eclipse depths and midpoints from each model, assess each method's ability to remove correlated noise, and discuss how to choose or combine the best data analysis methods. We also refined the orbit from eclipse timings, detecting a significant nonzero eccentricity, and we used our Bayesian Atmospheric Radiative Transfer (BART) code to retrieve the planet's atmosphere, which is consistent with a blackbody. Spitzer is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.

  1. Geometry optimization of linear and annular plasma synthetic jet actuators

    International Nuclear Information System (INIS)

    Neretti, G; Seri, P; Taglioli, M; Borghi, C A; Shaw, A; Iza, F

    2017-01-01

    The electrohydrodynamic (EHD) interaction induced in atmospheric air pressure by a surface dielectric barrier discharge (DBD) actuator has been experimentally investigated. Plasma synthetic jet actuators (PSJAs) are DBD actuators able to induce an air stream perpendicular to the actuator surface. These devices can be used in the field of aerodynamics to prevent or induce flow separation, modify the laminar to turbulent transition inside the boundary layer, and stabilize or mix air flows. They can also be used to enhance indirect plasma treatment effects, increasing the reactive species delivery rate onto surfaces and liquids. This can play a major role in plasma processing and chemical kinetics modelling, where often only diffusive mechanisms are considered. This paper reports on the importance that different electrode geometries can have on the performance of different PSJAs. A series of DBD aerodynamic actuators designed to produce perpendicular jets has been fabricated on two-layer printed circuit boards (PCBs). Both linear and annular geometries were considered, testing different upper electrode distances in the linear case and different diameters in the annular one. An AC voltage supplied at a peak of 11.5 kV and a frequency of 5 kHz was used. Lower electrodes were connected to the ground and buried in epoxy resin to avoid undesired plasma generation on the lower actuator surface. Voltage and current measurements were carried out to evaluate the active power delivered to the discharges. Schlieren imaging allowed the induced jets to be visualized and gave an estimate of their evolution and geometry. Pitot tube measurements were performed to obtain the velocity profiles of the PSJAs and to estimate the mechanical power delivered to the fluid. The optimal values of the inter-electrode distance and diameter were found in order to maximize jet velocity, mechanical power or efficiency. Annular geometries were found to achieve the best performance. (paper)

  2. Comparative analysis of linear motor geometries for Stirling coolers

    Science.gov (United States)

    R, Rajesh V.; Kuzhiveli, Biju T.

    2017-12-01

    Compared to rotary motor driven Stirling coolers, linear motor coolers are characterized by small volume and long life, making them more suitable for space and military applications. The motor design and operational characteristics have a direct effect on the operation of the cooler. In this perspective, ample scope exists in understanding the behavioural description of linear motor systems. In the present work, the authors compare and analyze different moving magnet linear motor geometries to finalize the most favourable one for Stirling coolers. The required axial force in the linear motors is generated by the interaction of magnetic fields of a current carrying coil and that of a permanent magnet. The compact size, commercial availability of permanent magnets and low weight requirement of the system are quite a few constraints for the design. The finite element analysis performed using Maxwell software serves as the basic tool to analyze the magnet movement, flux distribution in the air gap and the magnetic saturation levels on the core. A number of material combinations are investigated for core before finalizing the design. The effect of varying the core geometry on the flux produced in the air gap is also analyzed. The electromagnetic analysis of the motor indicates that the permanent magnet height ought to be taken in such a way that it is under the influence of electromagnetic field of current carrying coil as well as the outer core in the balanced position. This is necessary so that sufficient amount of thrust force is developed by efficient utilisation of the air gap flux density. Also, the outer core ends need to be designed to facilitate enough room for the magnet movement under the operating conditions.

  3. National Scale Rainfall Map Based on Linearly Interpolated Data from Automated Weather Stations and Rain Gauges

    Science.gov (United States)

    Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay

    2014-05-01

    In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines lay the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. Each sensor sends the data to a central server either via GSM network or satellite data transfer for redundancy. The web portal displays the sensors as a placemarks layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data are harvested in batches over a one-hour time frame. The program uses linear interpolation as the methodology to visually represent a near real-time rainfall map. The algorithm allows very fast processing, which is essential in near real-time systems. As more sensors are installed, precision improves. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013 when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila.
Coupled with observations from Doppler imagery and water level sensors along the
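    Linear interpolation of scattered station data, as used for such a rainfall map, typically triangulates the stations and interpolates linearly within each triangle. The barycentric-weight step for a single triangle can be sketched as follows (names hypothetical; a real system would run this inside a Delaunay triangulation over all stations):

```python
def barycentric_interp(p, tri, values):
    """Linear interpolation inside a triangle of three stations:
    compute the barycentric weights of point p with respect to the
    triangle, then return the weighted sum of the station values."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    px, py = p
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det
    w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * values[0] + w2 * values[1] + w3 * values[2]
```

    Inside the triangle all three weights are non-negative and sum to one, so interpolated rainfall never overshoots the surrounding station readings.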

  4. Linear algebra and analytic geometry for physical sciences

    CERN Document Server

    Landi, Giovanni

    2018-01-01

    A self-contained introduction to finite-dimensional vector spaces, matrices, systems of linear equations, spectral analysis on Euclidean and Hermitian spaces, affine Euclidean geometry, quadratic forms and conic sections. The mathematical formalism is motivated and introduced by problems from physics, notably mechanics (including celestial) and electromagnetism, with more than two hundred examples and solved exercises. Topics include: the group of orthogonal transformations on Euclidean spaces, in particular rotations, with Euler angles and angular velocity; the rigid body with its inertia matrix; the unitary group; Lie algebras and the exponential map; Dirac's bra-ket formalism; spectral theory for self-adjoint endomorphisms on Euclidean and Hermitian spaces; the Minkowski spacetime from special relativity and the Maxwell equations; conic sections with the use of eccentricity and Keplerian motions. An appendix collects basic algebraic notions like group, ring and field; and complex numbers and integers m...

  5. Design of a Control System for a Maglev Planar Motor Based on Two-Dimension Linear Interpolation

    Directory of Open Access Journals (Sweden)

    Feng Xing

    2017-08-01

    Full Text Available. In order to realize high-speed and high-precision control of a maglev planar motor, a high-precision electromagnetic model is needed in the first place, which can also contribute to meeting real-time running requirements. Traditionally, the electromagnetic model is based on analytical calculations. However, this neglects model simplification and manufacturing errors, which may introduce certain errors into the model. To handle this inaccuracy, this paper proposes a novel design method for a maglev planar motor control system based on two-dimensional linear interpolation. First, the magnetic field is divided into several regions according to the symmetry of the Halbach magnetic array, and the uniform grid method is adopted to partition one of these regions. Second, targeting this region, the electromagnetic forces and torques are sampled on each node of the grid, and the complete electromagnetic model in this region is obtained through the two-dimensional linear interpolation method. Third, the whole electromagnetic model of the maglev planar motor can be derived according to the symmetry of the magnetic field. Finally, the decoupling method and controller are designed according to this electromagnetic model, and thereafter the control model can be established. The designed control system is demonstrated through simulations and experiments to achieve better accuracy and to meet the requirements of real-time control.
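    The two-dimensional linear interpolation at the heart of this lookup is plain bilinear interpolation on the uniform grid of sampled forces and torques. A minimal sketch for a unit-spaced grid (a hypothetical helper, not the authors' code):

```python
import math

def bilinear(grid, x, y):
    """Bilinear lookup in a uniform, unit-spaced grid with samples
    grid[i][j] at integer coordinates (x=i, y=j): two linear
    interpolations along x, then one along y."""
    i, j = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - i, y - j
    g00, g01 = grid[i][j], grid[i][j + 1]
    g10, g11 = grid[i + 1][j], grid[i + 1][j + 1]
    low = g00 * (1 - fx) + g10 * fx   # interpolate in x at y = j
    high = g01 * (1 - fx) + g11 * fx  # interpolate in x at y = j + 1
    return low * (1 - fy) + high * fy
```

    Because the interpolant is exact on any function that is linear in each coordinate, a finer sampling grid directly bounds the model error between nodes.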

  6. Novel method of interpolation and extrapolation of functions by a linear initial value problem

    CSIR Research Space (South Africa)

    Shatalov, M

    2008-09-01

    Full Text Available. A novel method of function approximation using an initial-value, linear, ordinary differential equation (ODE) is presented. The main advantage of this method is that the approximation expressions are obtained in closed form. This technique can be taught...

  7. Linear theory of a cold relativistic beam in a strongly magnetized finite-geometry plasma

    International Nuclear Information System (INIS)

    Gagne, R.R.J.; Shoucri, M.M.

    1976-01-01

    The linear theory of a finite-geometry cold relativistic beam propagating in a cold homogeneous finite-geometry plasma is investigated in the case of a strongly magnetized plasma. The beam is assumed to propagate parallel to the external magnetic field. It is shown that the instability which takes place at the Cherenkov resonance ω ≈ k_z v_b is of the convective type. The effect of the finite geometry on the instability growth rate is studied and is shown to decrease the growth rate, with respect to the infinite geometry, by a factor depending on the ratio of the beam to plasma radius.

  8. Reconnection Scaling Experiment (RSX): Magnetic Reconnection in Linear Geometry

    Science.gov (United States)

    Intrator, T.; Sovinec, C.; Begay, D.; Wurden, G.; Furno, I.; Werley, C.; Fisher, M.; Vermare, L.; Fienup, W.

    2001-10-01

    The linear Reconnection Scaling Experiment (RSX) at LANL is a new experiment that can create MHD-relevant plasmas to study the physics of magnetic reconnection. This experiment can scale many relevant parameters because the guns that generate the plasma and current channels do not depend on equilibrium or force balance for startup. We describe the experiment and initial electrostatic and magnetic probe data. Two parallel current channels sweep down a long plasma column, and probe data accumulated over many shots give 3D movies of magnetic reconnection. Our first data aim to define an operating regime free from kink instabilities that might otherwise confuse the data and shot repeatability. We compare this with two-fluid MHD NIMROD simulations of the single-current-channel kink stability boundary for a variety of experimental conditions.

  9. Securing Body Sensor Networks with Biometric Methods: A New Key Negotiation Method and a Key Sampling Method for Linear Interpolation Encryption

    OpenAIRE

    Zhao, Huawei; Chen, Chi; Hu, Jiankun; Qin, Jing

    2015-01-01

    We present two approaches that exploit biometric data to address security problems in the body sensor networks: a new key negotiation scheme based on the fuzzy extractor technology and an improved linear interpolation encryption method. The first approach designs two attack games to give the formal definition of fuzzy negotiation that forms a new key negotiation scheme based on fuzzy extractor technology. According to the definition, we further define a concrete structure of fuzzy negotiation...

  10. Optimal bounds for a Lagrange interpolation inequality for piecewise linear continuous finite elements in two space dimensions

    KAUST Repository

    Muhamadiev, È rgash; Nazarov, Murtazo

    2015-01-01

    © 2014 Elsevier Inc. In this paper the interpolation inequality of Szepessy [12, Lemma 4.2] is revisited. The lower bound in the above reference is proven to be proportional to p^-2, where p is the polynomial degree, which goes fast to zero as p increases.

  11. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation of data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial derivative (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated into a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
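    In the univariate (1980) case referenced here, monotonicity is enforced by limiting the tangent slopes of a cubic Hermite interpolant. A Fritsch-Carlson-style sketch of that slope limiting is below; the paper's subject is the bicubic 2-D extension, which additionally constrains the mixed "twist" derivatives, and the function name is hypothetical:

```python
def monotone_slopes(xs, ys):
    """Slope limiting in the style of Fritsch and Carlson for a
    monotone C1 cubic Hermite interpolant in one dimension."""
    n = len(xs)
    d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] <= 0:
            m[i] = 0.0                      # local extremum: flat tangent
        else:
            m[i] = 0.5 * (d[i - 1] + d[i])  # average of secant slopes
    # clamp so the Hermite cubic on each interval stays monotone
    for i in range(n - 1):
        if d[i] == 0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / d[i], m[i + 1] / d[i]
            s = a * a + b * b
            if s > 9.0:
                t = 3.0 / s ** 0.5
                m[i], m[i + 1] = t * a * d[i], t * b * d[i]
    return m
```

    The 2-D algorithm of the abstract plays the analogous game with the linear inequalities mentioned there: the partial and twist derivatives at the mesh points are constrained so each bicubic patch stays monotone.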

  12. Geometries

    CERN Document Server

    Sossinsky, A B

    2012-01-01

    The book is an innovative modern exposition of geometry, or rather, of geometries; it is the first textbook in which Felix Klein's Erlangen Program (the action of transformation groups) is systematically used as the basis for defining various geometries. The course of study presented is dedicated to the proposition that all geometries are created equal--although some, of course, remain more equal than others. The author concentrates on several of the more distinguished and beautiful ones, which include what he terms "toy geometries", the geometries of Platonic bodies, discrete geometries, and classical continuous geometries. The text is based on first-year semester course lectures delivered at the Independent University of Moscow in 2003 and 2006. It is by no means a formal algebraic or analytic treatment of geometric topics, but rather, a highly visual exposition containing upwards of 200 illustrations. The reader is expected to possess a familiarity with elementary Euclidean geometry, albeit those lacking t...

  13. Geometry

    Indian Academy of Sciences (India)

    In the previous article we looked at the origins of synthetic and analytic geometry. More practically minded people, the builders and navigators, were studying two other aspects of geometry: trigonometry and integral calculus. These are actually ...

  14. Interpolation functors and interpolation spaces

    CERN Document Server

    Brudnyi, Yu A

    1991-01-01

    The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...

  15. Optimal bounds for a Lagrange interpolation inequality for piecewise linear continuous finite elements in two space dimensions

    KAUST Repository

    Muhamadiev, Èrgash

    2015-03-01

    © 2014 Elsevier Inc. In this paper the interpolation inequality of Szepessy [12, Lemma 4.2] is revisited. The lower bound in the above reference is proven to be proportional to p^-2, where p is the polynomial degree, which goes fast to zero as p increases. We prove that the lower bound is proportional to ln^2 p, which is an increasing function. Moreover, we prove that this estimate is sharp.

  16. Geometry

    CERN Document Server

    Prasolov, V V

    2015-01-01

    This book provides a systematic introduction to various geometries, including Euclidean, affine, projective, spherical, and hyperbolic geometries. Also included is a chapter on infinite-dimensional generalizations of Euclidean and affine geometries. A uniform approach to different geometries, based on Klein's Erlangen Program is suggested, and similarities of various phenomena in all geometries are traced. An important notion of duality of geometric objects is highlighted throughout the book. The authors also include a detailed presentation of the theory of conics and quadrics, including the theory of conics for non-Euclidean geometries. The book contains many beautiful geometric facts and has plenty of problems, most of them with solutions, which nicely supplement the main text. With more than 150 figures illustrating the arguments, the book can be recommended as a textbook for undergraduate and graduate-level courses in geometry.

  17. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are presented.

  18. Generalization of Asaoka method to linearly anisotropic scattering: benchmark data in cylindrical geometry

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-11-01

    The Integral Transform Method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. Here the method is extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial spherical-harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it is shown that the anisotropic matrix elements can be calculated as a linear combination of a few isotropic matrix elements. In practice this means that the anisotropic problem of order N can be solved with the N+2 isotropic matrix for plane and spherical geometries, and with the N+1 isotropic matrix for cylindrical geometry. A method for solving linearly anisotropic one-dimensional transport problems in homogeneous media is then defined by applying the observations of Mika and Stankiewicz: the isotropic matrix elements are computed by Hembd series and the anisotropic matrix elements are then calculated from recursive relations. The method has been applied to albedo and criticality problems in cylindrical geometry. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks [fr

  19. Contributions to the spectral theory of the linear Boltzmann operator for various geometries

    International Nuclear Information System (INIS)

    Protopopescu, V.

    1975-01-01

    The linear monoenergetic Boltzmann operator with isotropic scattering is studied for various geometries and boundary conditions as the infinitesimal generator of a positivity-preserving contractive semigroup in an appropriate Hilbert space. General results about the existence and the uniqueness of the solutions of the corresponding evolution problems are reviewed. The spectrum of the Boltzmann operator is analyzed for semi-infinite, slab and parallelepipedic geometries with vacuum, periodic, perfectly reflecting, generalized, and diffusely reflecting boundary conditions, respectively. The main features of these spectra, their importance for determining the asymptotic evolution, and possible generalizations to more realistic models are brought together in a final section. (author)

  20. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    Full Text Available There are two synchronization methods for electronic transformers in the IEC60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be achieved using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of electronic transformers are computed; the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are then analyzed and compared, which can serve as a guide for practical applications.
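The three candidate methods named in this abstract are all available in SciPy, so their resampling error can be sketched directly. The sampling rate and the 50 Hz test signal below are invented for the illustration, not taken from the paper:

```python
import numpy as np
from scipy.interpolate import interp1d, CubicSpline

fs = 4000.0                     # assumed sampling rate (Hz); not from the paper
t = np.arange(0, 0.02, 1 / fs)  # 20 ms of a 50 Hz power-frequency test signal
x = np.sin(2 * np.pi * 50 * t)

# Interpolation-based synchronization resamples onto a shifted time grid:
t_sync = t[:-1] + 0.3 / fs

linear = interp1d(t, x, kind="linear")(t_sync)
quadratic = interp1d(t, x, kind="quadratic")(t_sync)
cubic = CubicSpline(t, x)(t_sync)

exact = np.sin(2 * np.pi * 50 * t_sync)
for name, y in [("linear", linear), ("quadratic", quadratic), ("cubic spline", cubic)]:
    print(f"{name:12s} max error: {np.max(np.abs(y - exact)):.2e}")
```

On a smooth signal like this, the error drops by orders of magnitude from linear to cubic spline, which mirrors the precision ranking discussed in the abstract; the trade-off is the higher computational complexity of the spline fit.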

  1. Interpolation in Spaces of Functions

    Directory of Open Access Journals (Sweden)

    K. Mosaleheh

    2006-03-01

    Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces

  2. Image Interpolation with Contour Stencils

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.

  3. Geometry

    CERN Document Server

    Pedoe, Dan

    1988-01-01

    ""A lucid and masterly survey."" - Mathematics Gazette Professor Pedoe is widely known as a fine teacher and a fine geometer. His abilities in both areas are clearly evident in this self-contained, well-written, and lucid introduction to the scope and methods of elementary geometry. It covers the geometry usually included in undergraduate courses in mathematics, except for the theory of convex sets. Based on a course given by the author for several years at the University of Minnesota, the main purpose of the book is to increase geometrical, and therefore mathematical, understanding and to he

  4. Local-metrics error-based Shepard interpolation as surrogate for highly non-linear material models in high dimensions

    Science.gov (United States)

    Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian

    2017-10-01

    Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
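For contrast with the local-metric variant described above, the classic Shepard method it builds on can be sketched as a plain inverse-distance-weighted average. The data set and power parameter below are arbitrary choices, and none of the paper's local-metric or error-weighting machinery is reproduced:

```python
import numpy as np

def shepard(xk, fk, x, p=2.0, eps=1e-12):
    """Classic Shepard interpolation: inverse-distance-weighted average.

    xk: (n, d) sample locations, fk: (n,) sample values, x: (m, d) queries.
    """
    d = np.linalg.norm(x[:, None, :] - xk[None, :, :], axis=-1)   # (m, n)
    w = 1.0 / (d + eps) ** p            # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)   # normalize rows to sum to 1
    return w @ fk

rng = np.random.default_rng(0)
xk = rng.uniform(size=(50, 2))                    # arbitrary 2-D sample sites
fk = np.sin(4 * xk[:, 0]) * np.cos(4 * xk[:, 1])  # arbitrary smooth test function

# At a sample site the weight concentrates on that site, so samples are reproduced:
print(np.allclose(shepard(xk, fk, xk), fk, atol=1e-6))
```

The isotropic Euclidean distance here is precisely what the paper's local metrics replace: in high dimensions, rapid variation along a few directions makes a single global distance a poor measure of "nearness".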

  5. Linear Discontinuous Expansion Method using the Subcell Balances for Unstructured Geometry SN Transport

    International Nuclear Information System (INIS)

    Hong, Ser Gi; Kim, Jong Woon; Lee, Young Ouk; Kim, Kyo Youn

    2010-01-01

    The subcell balance methods have been developed for one- and two-dimensional SN transport calculations. In this paper, a linear discontinuous expansion method using subcell balances (LDEM-SCB) is developed for neutral particle SN transport calculations in 3D unstructured geometrical problems. At present, the method is applied to tetrahedral meshes. As the name suggests, this method assumes a linear distribution of the particle flux in each tetrahedral mesh and uses the balance equations for the four subcells of each tetrahedral mesh to obtain equations for the four unknown subcell average fluxes. The method was implemented in the computer code MUST (Multi-group Unstructured geometry SN Transport). Numerical tests show that this method gives a more robust solution than DFEM (Discontinuous Finite Element Method)

  6. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  7. Linear Analyses of Magnetohydrodynamic Richtmyer-Meshkov Instability in Cylindrical Geometry

    KAUST Repository

    Bakhsh, Abeer

    2018-05-13

    We investigate the Richtmyer-Meshkov instability (RMI) that occurs when an incident shock impulsively accelerates the interface between two different fluids. RMI is important in many technological applications such as Inertial Confinement Fusion (ICF) and astrophysical phenomena such as supernovae. We consider RMI in the presence of a magnetic field in converging geometry, through both simulations and analytical means, in the framework of ideal magnetohydrodynamics (MHD). In this thesis, we perform linear stability analyses via simulations in cylindrical geometry, which is of relevance to ICF. In converging geometry, RMI is usually followed by the Rayleigh-Taylor instability (RTI). We show that the presence of a magnetic field suppresses the instabilities. We study the influence of the strength of the magnetic field, perturbation wavenumbers and other relevant parameters on the evolution of the RM and RT instabilities. First, we perform linear stability simulations for a single interface between two different fluids in which the magnetic field is normal to the direction of the average motion of the density interface. The suppression of the instabilities is most evident for large wavenumbers and relatively strong magnetic field strengths. The mechanism of suppression is the transport of vorticity away from the density interface by two Alfvén fronts. Second, we examine the case of an azimuthal magnetic field at the density interface. The suppression of the instability at the interface is again most evident for large wavenumbers and relatively strong magnetic field strengths. After the shock interacts with the interface, the emerging vorticity breaks up into waves traveling parallel and anti-parallel to the magnetic field. The interference of these waves as they propagate with alternating phase causes the perturbation growth rate of the interface to oscillate in time. Finally, we propose incompressible models for MHD RMI in the presence of normal or azimuthal magnetic

  8. Geometry.

    Science.gov (United States)

    Mahaffey, Michael L.

    One of a series of experimental units for children at the preschool level, this booklet deals with geometric concepts. A unit on volume and a unit on linear measurement are covered; for each unit a discussion of mathematical objectives, a list of materials needed, and a sequence of learning activities are provided. Directions are specified for the…

  9. The linear characteristic method for spatially discretizing the discrete ordinates equations in (x,y)-geometry

    International Nuclear Information System (INIS)

    Larsen, E.W.; Alcouffe, R.E.

    1981-01-01

    In this article a new linear characteristic (LC) spatial differencing scheme for the discrete ordinates equations in (x,y)-geometry is described and numerical comparisons are given with the diamond difference (DD) method. The LC method is more stable with mesh size and is generally much more accurate than the DD method on both fine and coarse meshes, for eigenvalue and deep penetration problems. The LC method is based on computations involving the exact solution of a cell problem which has spatially linear boundary conditions and interior source. The LC method is coupled to the diffusion synthetic acceleration (DSA) algorithm in that the linear variations of the source are determined in part by the results of the DSA calculation from the previous inner iteration. An inexpensive negative-flux fixup is used which has very little effect on the accuracy of the solution. The storage requirements for LC are essentially the same as that for DD, while the computational times for LC are generally less than twice the DD computational times for the same mesh. This increase in computational cost is offset if one computes LC solutions on somewhat coarser meshes than DD; the resulting LC solutions are still generally much more accurate than the DD solutions. (orig.) [de

  10. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of digital images through edge-detect interpolation of direct digital periapical images. The study was performed by image processing of 20 digital periapical images using pixel replication, linear interpolation, and edge-sensitive interpolation. The obtained results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect at edges. 3. Edge-sensitive interpolation overcame the smoothing effect at edges and produced a better image.
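The first two methods compared in this study correspond to standard resampling orders, which can be sketched with `scipy.ndimage.zoom` (order 0 is pixel replication, order 1 is linear; the edge-sensitive method itself is not reproduced, and the test image is synthetic):

```python
import numpy as np
from scipy import ndimage

img = np.zeros((8, 8))
img[:, 4:] = 1.0   # synthetic test image: a sharp vertical edge

for order, name in [(0, "pixel replication"), (1, "bilinear"), (3, "cubic spline")]:
    big = ndimage.zoom(img, 4, order=order)   # 4x upscaling
    levels = np.unique(np.round(big[16], 3))  # gray levels along one row
    print(f"{name:17s} -> {levels.size} gray levels across the edge")
```

Replication preserves only the original two gray levels (hence visible blocks), while the higher orders introduce intermediate levels, which is the smoothing effect at edges noted in the study's results.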

  11. Linear intra-bone geometry dependencies of the radius: Radius length determination by maximum distal width

    International Nuclear Information System (INIS)

    Baumbach, S.F.; Krusche-Mandl, I.; Huf, W.; Mall, G.; Fialka, C.

    2012-01-01

    Purpose: The aim of the study was to investigate possible linear intra-bone geometry dependencies by determining the relation between the maximum radius length and maximum distal width in two independent populations and test for possible gender or age effects. A strong correlation can help develop more representative fracture models and osteosynthetic devices as well as aid gender and height estimation in anthropologic/forensic cases. Methods: First, maximum radius length and distal width of 100 consecutive patients, aged 20–70 years, were digitally measured on standard lower arm radiographs by two independent investigators. Second, the same measurements were performed ex vivo on a second cohort, 135 isolated, formalin fixed radii. Standard descriptive statistics as well as correlations were calculated and possible gender and age influences tested for both populations separately. Results: The radiographic dataset resulted in a correlation of radius length and width of r = 0.753 (adj. R² = 0.563, p 2 = 0.592) and side no influence on the correlation. The radius length–width correlation for the isolated radii was r = 0.621 (adj. R² = 0.381, p 2 = 0.598). Conclusion: A relatively strong radius length–distal width correlation was found in two different populations, indicating that linear body proportions might not only apply to body height and axial length measurements of long bones but also to proportional dependency of bone shapes in general.

  12. Interpolation for de-Dopplerisation

    Science.gov (United States)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
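The performance metric described here, the Fourier transform of the interpolation kernel, can be sketched numerically. The kernels below are the standard linear "tent" and the cubic B-spline basis function; the sampling choices are arbitrary for the illustration:

```python
import numpy as np

def tent(x):                      # linear interpolation kernel
    return np.clip(1 - np.abs(x), 0, None)

def bspline3(x):                  # cubic B-spline kernel
    ax = np.abs(x)
    return np.where(ax < 1, 2 / 3 - ax**2 + ax**3 / 2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

dx = 1 / 64
x = np.arange(-8, 8, dx)
f = np.fft.rfftfreq(x.size, d=dx)   # frequency, in cycles per sample spacing
i = np.argmin(np.abs(f - 0.5))      # half the original sampling rate

for name, kern in [("linear", tent(x)), ("cubic B-spline", bspline3(x))]:
    K = np.abs(np.fft.rfft(kern)) * dx   # magnitude of the kernel's spectrum
    print(f"{name:15s} |K(0)| = {K[0]:.3f}, |K(0.5)| = {K[i]:.3f}")
```

The raw B-spline kernel attenuates high frequencies much more strongly than the tent kernel, which is exactly why B-spline-based interpolation needs the pre-filtering step the abstract mentions: the prefilter compensates this attenuation so the interpolant passes through the samples.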

  13. Laboratory studies of groundwater degassing in replicas of natural fractured rock for linear flow geometry

    International Nuclear Information System (INIS)

    Geller, J.T.

    1998-02-01

    Laboratory experiments to simulate two-phase (gas and water) flow in fractured rock evolving from groundwater degassing were conducted in transparent replicas of natural rock fractures. These experiments extend the work of Geller et al. (1995) and Jarsjo and Geller (1996), which tests the hypothesis that groundwater degassing caused the flow reductions observed in the Stripa Simulated Drift Experiment (SDE). Understanding degassing effects over a range of gas contents is needed because of the uncertainty in the gas contents of the water at the SDE. The main objectives of this study were to: (1) measure the effect of groundwater degassing on liquid flow rates, for lower gas contents than the values used by Geller et al., in the same fracture replicas and linear flow geometry; (2) provide a data set for developing a predictive model of two-phase flow in fractures under conditions of groundwater degassing; and (3) improve the certainty of the experimental gas contents (this effort included modifications to the experimental system used by Geller et al. and separate gas-water equilibration tests). The Stripa site is being considered for a high-level radioactive waste repository

  14. Dynamic graphs, community detection, and Riemannian geometry

    Energy Technology Data Exchange (ETDEWEB)

    Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun

    2018-03-29

    A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations, such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g., the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.

  15. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    ... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

  16. Spinning geometry = Twisted geometry

    International Nuclear Information System (INIS)

    Freidel, Laurent; Ziprick, Jonathan

    2014-01-01

    It is well known that the SU(2)-gauge invariant phase space of loop gravity can be represented in terms of twisted geometries. These are piecewise-linear-flat geometries obtained by gluing together polyhedra, but the resulting geometries are not continuous across the faces. Here we show that this phase space can also be represented by continuous, piecewise-flat three-geometries called spinning geometries. These are composed of metric-flat three-cells glued together consistently. The geometry of each cell and the manner in which they are glued is compatible with the choice of fluxes and holonomies. We first remark that the fluxes provide each edge with an angular momentum. By studying the piecewise-flat geometries which minimize edge lengths, we show that these angular momenta can be literally interpreted as the spin of the edges: the geometries of all edges are necessarily helices. We also show that the compatibility of the gluing maps with the holonomy data results in the same conclusion. This shows that a spinning geometry represents a way to glue together the three-cells of a twisted geometry to form a continuous geometry which represents a point in the loop gravity phase space. (paper)

  17. Calculations of stationary solutions for the non linear viscous resistive MHD equations in slab geometry

    International Nuclear Information System (INIS)

    Edery, D.

    1983-11-01

    The reduced system of the nonlinear resistive MHD equations is used, in the 2-D one-helicity approximation, in numerical computations of stationary tearing modes. The critical magnetic Reynolds number S (S = τ_R/τ_H, where τ_R and τ_H are respectively the characteristic resistive and hydromagnetic times) and the corresponding linear solution are computed as a starting approximation for the full nonlinear equations. These equations are then treated numerically by an iterative procedure which is shown to be rapidly convergent. A numerical application is given in the last part of this paper

  18. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.

  19. Validation of favor code linear elastic fracture solutions for finite-length flaw geometries

    International Nuclear Information System (INIS)

    Dickson, T.L.; Keeney, J.A.; Bryson, J.W.

    1995-01-01

    One of the current tasks within the US Nuclear Regulatory Commission (NRC)-funded Heavy Section Steel Technology (HSST) Program at Oak Ridge National Laboratory (ORNL) is the continuing development of the FAVOR (Fracture Analysis of Vessels: Oak Ridge) computer code. FAVOR performs structural integrity analyses of embrittled nuclear reactor pressure vessels (RPVs) with stainless steel cladding to evaluate compliance with the applicable regulatory criteria. Since the initial release of FAVOR, the HSST program has continued to enhance the capabilities of the code. ABAQUS, a nuclear quality assurance certified (NQA-1) general multidimensional finite element code with fracture mechanics capabilities, was used to generate a database of stress-intensity-factor influence coefficients (SIFICs) for a range of axially and circumferentially oriented semielliptical inner-surface flaw geometries applicable to RPVs with an internal radius (Ri) to wall thickness (w) ratio of 10. This database of SIFICs has been incorporated into a development version of FAVOR, providing it with the capability to perform deterministic and probabilistic fracture analyses of RPVs subjected to transients, such as pressurized thermal shock (PTS), for various flaw geometries. This paper discusses the SIFIC database, comparisons with other investigators, and some of the benchmark verification problem specifications and solutions

  20. The linear stability analysis of MHD models in axisymmetric toroidal geometry

    International Nuclear Information System (INIS)

    Manickam, J.; Grimm, R.C.; Dewar, R.L.

    1981-01-01

    A computational model to analyze the linear stability properties of general toroidal systems in the ideal magnetohydrodynamic limits is presented. This model includes an explicit treatment of the asymptotic singular behaviour at rational surfaces. It is verified through applications to internal kink modes. (orig.)

  1. Rapid Fourier space solution of linear partial integro-differential equations in toroidal magnetic confinement geometries

    International Nuclear Information System (INIS)

    McMillan, B.F.; Jolliet, S.; Tran, T.M.; Villard, L.; Bottino, A.; Angelino, P.

    2010-01-01

    Fluctuating quantities in magnetic confinement geometries often inherit a strong anisotropy along the field lines. One technique for describing these structures is the use of a certain set of Fourier components on the tori of nested flux surfaces. We describe an implementation of this approach for solving partial differential equations, like Poisson's equation, where a different set of Fourier components may be chosen on each surface according to the changing safety factor profile. Allowing the resolved components to change to follow the anisotropy significantly reduces the total number of degrees of freedom in the description. This can permit large gains in computational performance. We describe, in particular, how this approach can be applied to rapidly solve the gyrokinetic Poisson equation in a particle code, ORB5 (Jolliet et al. (2007) [5]), with a regular (non-field-aligned) mesh. (authors)

  2. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to order (2Q-1). The program consists of the following two subprograms: ASPLERQ, the transport-of-relations method for spline interpolation functions, and SPLQ, spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
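The SPLINE code itself is not available here, but the smooth-interpolant property it describes can be illustrated with SciPy's `CubicSpline` (a degree-3 spline, so derivatives are continuous through second order at the knots); the data points below are invented:

```python
import numpy as np
from scipy.interpolate import CubicSpline

xk = np.array([0.0, 1.0, 2.5, 3.0, 4.5])   # invented knots
yk = np.array([1.0, 0.2, 1.7, 2.0, 0.5])   # invented values
s = CubicSpline(xk, yk)

print(np.allclose(s(xk), yk))              # the spline passes through the points

# First and second derivatives match across an interior knot:
eps = 1e-7
for order in (1, 2):
    jump = abs(s(2.5 - eps, order) - s(2.5 + eps, order))
    print(f"derivative order {order}: jump across knot = {jump:.1e}")
```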

  3. Effect of accelerating gap geometry on the beam breakup instability in linear induction accelerators

    International Nuclear Information System (INIS)

    Miller, R.B.; Marder, B.M.; Coleman, P.D.; Clark, R.E.

    1988-01-01

    The electron beam in a linear induction accelerator is generally susceptible to growth of the transverse beam breakup instability. In this paper we analyze a new technique for reducing the transverse coupling between the beam and the accelerating cavities, thereby reducing beam breakup growth. The basic idea is that the most worrisome cavity modes can be cutoff by a short section of coaxial transmission line inserted between the cavity structure and the accelerating gap region. We have used the three-dimensional simulation code SOS to analyze this problem. In brief, we find that the technique works, provided that the lowest TE mode cutoff frequency in the coaxial line is greater than the frequency of the most worrisome TM mode of the accelerating cavity

  4. Solution to the Diffusion equation for multi groups in X Y geometry using Linear Perturbation theory

    International Nuclear Information System (INIS)

    Mugica R, C.A.

    2004-01-01

    Diverse methods exist to solve the multigroup neutron diffusion equation in steady state numerically; among them, finite element methods stand out. In this work the numerical solution of this equation is presented using Raviart-Thomas nodal finite element methods, RT0 and RT1, in combination with iterative techniques that allow the approximate solution to be obtained quickly. Nevertheless, the precision of a method is closely tied to the dimension of the approximation space per cell (5 for RT0 and 12 for RT1) and/or to the mesh refinement, which makes the order of the eigenvalue problem to be solved grow considerably. Therefore, to obtain an acceptable approximation to the effective multiplication factor of the system after it has undergone a small perturbation, linear perturbation theory is applied, which makes it possible to determine that factor from the neutron flux and the effective multiplication factor of the unperturbed case. Results are presented for a reference problem in which a perturbation is introduced in an assembly that simulates changes in the control rod. (Author)

  5. U.S. Army Armament Research, Development and Engineering Center Grain Evaluation Software to Numerically Predict Linear Burn Regression for Solid Propellant Grain Geometries

    Science.gov (United States)

    2017-10-01

    U.S. Army Armament Research, Development and Engineering Center, Munitions Engineering Technology Center, Picatinny: Grain Evaluation Software to Numerically Predict Linear Burn Regression for Solid Propellant Grain Geometries. Brian... Distribution is unlimited.

  6. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

  7. CMB anisotropies interpolation

    NARCIS (Netherlands)

    Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

    2010-01-01

    We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging

  8. Feature displacement interpolation

    DEFF Research Database (Denmark)

    Nielsen, Mads; Andresen, Per Rønsholt

    1998-01-01

    Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Estimation of missing image data may also be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.
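A minimal 1-D simple-kriging interpolant sketches the covariance-based estimator favoured in this abstract. The Gaussian covariance model, its length scale, and the test function are arbitrary choices for the illustration, not the paper's setup:

```python
import numpy as np

def kriging(xk, fk, x, length=0.2, jitter=1e-10):
    """Simple kriging (zero mean) with a Gaussian covariance model."""
    cov = lambda a, b: np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    K = cov(xk, xk) + jitter * np.eye(xk.size)  # small jitter for stability
    w = np.linalg.solve(K, cov(xk, x))          # kriging weights, shape (n, m)
    return w.T @ fk

xk = np.linspace(0, 1, 8)      # sparse sample sites
fk = np.sin(2 * np.pi * xk)    # arbitrary smooth test function
x = np.linspace(0, 1, 101)
est = kriging(xk, fk, x)
print("max error:", np.max(np.abs(est - np.sin(2 * np.pi * x))))
```

The covariance model plays the role the abstract assigns to the data's covariance properties: it encodes how strongly a displacement at one location constrains its neighbours, which is what makes Kriging behave well on very sparse features.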

  9. Extension Of Lagrange Interpolation

    Directory of Open Access Journals (Sweden)

    Mousa Makey Krady

    2015-01-01

    Full Text Available Abstract In this paper we present a generalization of Lagrange interpolation polynomials in higher dimensions by using Cramer's formula. The aim of this paper is to construct polynomials in space whose error tends to zero.
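
    The construction rests on solving the Vandermonde system for the interpolation coefficients via Cramer's formula. A minimal one-dimensional sketch (illustrative only; the function name is ours, and in practice `np.linalg.solve` is preferred over determinant ratios):

```python
import numpy as np

def lagrange_coeffs_cramer(x, y):
    """Polynomial interpolation coefficients via Cramer's rule on the
    Vandermonde system V a = y (illustrative; solving directly is cheaper)."""
    n = len(x)
    V = np.vander(x, n, increasing=True)   # V[i, j] = x[i]**j
    detV = np.linalg.det(V)
    coeffs = np.empty(n)
    for j in range(n):
        Vj = V.copy()
        Vj[:, j] = y                        # replace the j-th column by the data
        coeffs[j] = np.linalg.det(Vj) / detV
    return coeffs                           # a[0] + a[1]*t + a[2]*t**2 + ...

# The interpolant reproduces the data exactly at the nodes.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = x**2 - x + 1.0                          # a cubic fit recovers this quadratic
a = lagrange_coeffs_cramer(x, y)
p = np.polyval(a[::-1], 1.5)
```

    Because the data are exactly quadratic, the recovered coefficients are (1, -1, 1, 0) and the evaluated interpolant matches the underlying function at any point.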

  10. A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY

    Directory of Open Access Journals (Sweden)

    Hussein Atoui

    2011-05-01

    Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method scored similar performance in comparison to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).

  11. Spectral nodal methodology for multigroup slab-geometry discrete ordinates neutron transport problems with linearly anisotropic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Oliva, Amaury M.; Filho, Hermes A.; Silva, Davi M.; Garcia, Carlos R., E-mail: aoliva@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: davijmsilva@yahoo.com.br, E-mail: cgh@instec.cu [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Departamento de Modelagem Computacional; Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)

    2017-07-01

    In this paper, we propose a numerical methodology for the development of a method of the spectral nodal class that generates numerical solutions free from spatial truncation errors. This method, denominated Spectral Deterministic Method (SDM), is tested as an initial study of the solutions (spectral analysis) of neutron transport equations in the discrete ordinates (S{sub N}) formulation, in one-dimensional slab geometry, multigroup approximation, with linearly anisotropic scattering, considering homogeneous and heterogeneous domains with fixed source. The unknowns in the methodology are the cell-edge and cell-average angular fluxes; the numerical values calculated for these quantities coincide with the analytic solution of the equations. These numerical results are shown and compared with the traditional fine-mesh method Diamond Difference (DD) and the coarse-mesh method spectral Green's function (SGF) to illustrate the method's accuracy and stability. The solution algorithms are implemented in a computer simulator written in C++, the same used to generate the results of the reference work. (author)

  12. On some methods of achieving a continuous and differentiated assessment in Linear Algebra and Analytic and Differential Geometry courses and seminars

    Directory of Open Access Journals (Sweden)

    M. A.P. PURCARU

    2017-12-01

    Full Text Available This paper aims at highlighting some aspects related to assessment as regards its use as a differentiated training strategy for Linear Algebra and Analytic and Differential Geometry courses and seminars. Thus, the following methods of continuous differentiated assessment are analyzed and exemplified: the portfolio, the role play, some interactive methods and practical examinations.

  13. Digital time-interpolator

    International Nuclear Information System (INIS)

    Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report presents a description of the design of a digital time meter. This time meter should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated on a computer with a Pascal code. On the basis of this, the best method was chosen and used in the design. In order to test the principal operation of the circuit, a part of the circuit was constructed with which the interpolation could be tested. The remainder of the circuit was simulated with a computer, so no data are available about the operation of the complete circuit in practice. The interpolation part, however, is the most critical part; the remainder of the circuit is more or less simple logic. Besides this, the report also gives a description of the principle of interpolation and the design of the circuit. Finally, the measurement results for the prototype are presented. (author). 3 refs.; 37 figs.; 2 tabs

  14. Mathematical model of geometry and fibrous structure of the heart.

    Science.gov (United States)

    Nielsen, P M; Le Grice, I J; Smaill, B H; Hunter, P J

    1991-04-01

    We developed a mathematical representation of ventricular geometry and muscle fiber organization using three-dimensional finite elements referred to a prolate spheroid coordinate system. Within elements, fields are approximated using basis functions with associated parameters defined at the element nodes. Four parameters per node are used to describe ventricular geometry. The radial coordinate is interpolated using cubic Hermite basis functions that preserve slope continuity, while the angular coordinates are interpolated linearly. Two further nodal parameters describe the orientation of myocardial fibers. The orientation of fibers within coordinate planes bounded by epicardial and endocardial surfaces is interpolated linearly, with transmural variation given by cubic Hermite basis functions. Left and right ventricular geometry and myocardial fiber orientations were characterized for a canine heart arrested in diastole and fixed at zero transmural pressure. The geometry was represented by a 24-element ensemble with 41 nodes. Nodal parameters fitted using least squares provided a realistic description of ventricular epicardial [root mean square (RMS) error less than 0.9 mm] and endocardial (RMS error less than 2.6 mm) surfaces. Measured fiber fields were also fitted (RMS error less than 17 degrees) with a 60-element, 99-node mesh obtained by subdividing the 24-element mesh. These methods provide a compact and accurate anatomic description of the ventricles suitable for use in finite element stress analysis, simulation of cardiac electrical activation, and other cardiac field modeling problems.
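
    The cubic Hermite basis used here for the radial coordinate can be written down directly. A small illustrative sketch of one-dimensional cubic Hermite interpolation on a reference element (the function and variable names are ours, not the paper's):

```python
import numpy as np

def hermite_interp(xi, f0, f1, d0, d1):
    """Cubic Hermite interpolation on [0, 1]: nodal values f0, f1 and
    nodal derivatives d0, d1 -- the four basis functions used per element."""
    H00 = 2 * xi**3 - 3 * xi**2 + 1     # weight of f0
    H10 = xi**3 - 2 * xi**2 + xi        # weight of d0
    H01 = -2 * xi**3 + 3 * xi**2        # weight of f1
    H11 = xi**3 - xi**2                 # weight of d1
    return H00 * f0 + H10 * d0 + H01 * f1 + H11 * d1

# Value and slope are matched at both element ends; sharing nodal
# derivatives between neighbouring elements is what preserves slope
# continuity across element boundaries.
xi = np.linspace(0.0, 1.0, 5)
vals = hermite_interp(xi, f0=1.0, f1=2.0, d0=0.0, d1=3.0)
```

    At xi = 0 the interpolant returns f0 exactly, and at xi = 1 it returns f1, regardless of the derivative parameters.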

  15. Multivariate Birkhoff interpolation

    CERN Document Server

    Lorentz, Rudolph A

    1992-01-01

    The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

  16. A non-linear, finite element, heat conduction code to calculate temperatures in solids of arbitrary geometry

    International Nuclear Information System (INIS)

    Tayal, M.

    1987-01-01

    Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shapes. FEAT solves the classical equation for steady-state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or for three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature-dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For convenience to the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, which are obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile, and enables it to accurately accommodate complex geometries. The optional link to MATPRO makes it convenient for the nuclear fuel industry to use FEAT, without loss of generality. Special numerical techniques make the code inexpensive to run for the type of material non-linearities often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions. The agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. 
FEAT was also found to predict well the axial variations in temperatures in the end-pellets (UO2) of two fuel elements irradiated
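
    The structure of such a code can be illustrated in miniature. A hypothetical 1-D analogue of the nonlinear conduction problem, with a made-up temperature-dependent conductivity and Picard iteration (this is not FEAT's actual algorithm, geometry, or material data):

```python
import numpy as np

def solve_rod(n_el=40, length=1.0, T_left=300.0, T_right=600.0):
    """Steady 1-D conduction with temperature-dependent conductivity
    k(T) = 50 - 0.02*T (hypothetical), linear elements, Picard iteration."""
    n = n_el + 1
    x = np.linspace(0.0, length, n)
    T = np.linspace(T_left, T_right, n)                  # initial guess
    for _ in range(50):
        K = np.zeros((n, n))
        for e in range(n_el):
            h = x[e + 1] - x[e]
            k_e = 50.0 - 0.02 * 0.5 * (T[e] + T[e + 1])  # element conductivity
            K[e:e + 2, e:e + 2] += (k_e / h) * np.array([[1.0, -1.0],
                                                         [-1.0, 1.0]])
        b = np.zeros(n)
        K[0, :] = 0.0;  K[0, 0] = 1.0;   b[0] = T_left    # Dirichlet ends
        K[-1, :] = 0.0; K[-1, -1] = 1.0; b[-1] = T_right
        T_new = np.linalg.solve(K, b)
        if np.max(np.abs(T_new - T)) < 1e-8:             # Picard converged
            T = T_new
            break
        T = T_new
    return x, T

x, T = solve_rod()
```

    At convergence the heat flux is constant along the rod, so the temperature gradient steepens where k(T) is lower, which is the qualitative effect a temperature-dependent conductivity introduces.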

  17. Precipitation interpolation in mountainous areas

    Science.gov (United States)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one station per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
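
    The cross-validation comparison described above can be sketched on synthetic data: a toy leave-one-out experiment contrasting inverse distance weighting with the 'no interpolation' station-mean benchmark. The coordinates, field model, and noise level below are invented for illustration; on this smooth synthetic field IDW wins, unlike the noisy daily data in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'precipitation' at 60 stations: a smooth field plus noise
# (an illustrative stand-in for the gauge data, not the study's data).
n = 60
xy = rng.uniform(0.0, 160.0, size=(n, 2))              # station coords [km]
field = 500.0 + 8.0 * xy[:, 0] + 4.0 * xy[:, 1]
obs = field + rng.normal(0.0, 60.0, size=n)

def idw(xy_s, z_s, xy_t, power=2.0):
    """Inverse distance weighted estimate at target points xy_t."""
    d = np.linalg.norm(xy_s[None, :, :] - xy_t[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w @ z_s) / w.sum(axis=1)

# Leave-one-out cross-validation, the assessment used in the abstract.
err_idw, err_mean = [], []
for i in range(n):
    keep = np.arange(n) != i
    zhat = idw(xy[keep], obs[keep], xy[i:i + 1])[0]
    err_idw.append(obs[i] - zhat)
    err_mean.append(obs[i] - obs[keep].mean())          # 'no interpolation'

rmse_idw = np.sqrt(np.mean(np.square(err_idw)))
rmse_mean = np.sqrt(np.mean(np.square(err_mean)))
```

    The study's striking result is that on real daily precipitation, with its weak spatial correlation, the ranking can reverse and the plain station mean comes out ahead.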

  18. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections and a weighted overlap of signals transformed step-by-step into the spectral domain improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.
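
    The problem setting can be illustrated with the simplest baseline: mapping nonuniformly timed samples onto a synchronous grid by linear interpolation. This is only the naive reference point, not the ACT algorithm itself, and the signal below is synthetic:

```python
import numpy as np

# Nonuniformly spaced sample times, as delivered by an asynchronous ADC
# (synthetic stand-in; the ACT algorithm is far more elaborate than this).
rng = np.random.default_rng(1)
t_async = np.sort(rng.uniform(0.0, 1.0, 200))
x_async = np.sin(2 * np.pi * 5.0 * t_async)

# Baseline reconstruction: linear interpolation onto a synchronous grid,
# which is what the downstream synchronous DSP chain expects.
t_sync = np.linspace(0.05, 0.95, 128)
x_sync = np.interp(t_sync, t_async, x_async)

max_err = np.max(np.abs(x_sync - np.sin(2 * np.pi * 5.0 * t_sync)))
```

    The linear baseline degrades wherever the asynchronous samples leave a large gap; spectral-domain methods like ACT aim to do much better in exactly those stretches.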

  19. Interpolation and sampling in spaces of analytic functions

    CERN Document Server

    Seip, Kristian

    2004-01-01

    The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

  20. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  1. Time-interpolator

    International Nuclear Information System (INIS)

    Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It concerns a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty which accompanies the use of ECL logic is keeping the mutual connections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into a start and a stop signal. The analog part consists of a Time to Amplitude Converter (TAC) and an analog to digital converter. (author). 3 refs.; 30 figs

  2. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by the introduction of gradation in this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame. Consequently, the validity of the model's dynamics is not secured. The aim of this paper is to present the Boolean consistent generalization of Boolean networks, interpolative Boolean networks. The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and it offers greater descriptive power as compared with traditional models. For illustrative purposes, IBN is compared to the models based on existing real-valued approaches. Due to the complexity of most of the systems to be analyzed and the characteristics of interpolative Boolean algebra, software support has been developed to provide graphical and numerical tools for complex system modeling and analysis.

  3. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  4. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    Full Text Available In this paper, the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient interpolation error components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and calculation amount of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in the data synchronization application of electronic transformers.
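
    The error comparison described here is easy to reproduce in outline: sample a unit sine at different rates and reconstruct it with local polynomials of order one, two, and three. A sketch using our own setup, not the paper's exact error expressions:

```python
import numpy as np

def max_interp_error(order, samples_per_period):
    """Maximum reconstruction error over one period of a unit sine,
    sampled uniformly and rebuilt with a local Lagrange polynomial of
    the given order (1 = linear, 2 = quadratic, 3 = cubic)."""
    spp = samples_per_period
    ts = (np.arange(spp + 2 * order + 1) - order) / spp   # padded sample times
    xs = np.sin(2 * np.pi * ts)
    err = 0.0
    for t in np.linspace(0.0, 1.0, 400):
        i = int(np.floor(t * spp)) + order                # left sample (array index)
        i0 = i - (order - 1) // 2                         # interpolation window start
        coeff = np.polyfit(ts[i0:i0 + order + 1], xs[i0:i0 + order + 1], order)
        err = max(err, abs(np.polyval(coeff, t) - np.sin(2 * np.pi * t)))
    return err

e_lin_8 = max_interp_error(1, 8)      # linear, 8 samples per period
e_lin_32 = max_interp_error(1, 32)    # linear, higher sampling rate
e_cub_32 = max_interp_error(3, 32)    # cubic, same rate
```

    As the paper's analysis predicts, the error falls both with the sampling rate and with the interpolation order, and the gap widens for higher harmonics.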

  5. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

  6. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
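
    The key trick behind GPU B-spline texture interpolation is that a cubic B-spline weighting of four samples can be rewritten as a weighted sum of two linear texture fetches. A CPU-side sketch of that decomposition (illustrative; the function names are ours):

```python
import numpy as np

def bspline_weights(a):
    """Uniform cubic B-spline weights for fractional position a in [0, 1)."""
    w0 = (1 - a) ** 3 / 6.0
    w1 = (3 * a**3 - 6 * a**2 + 4) / 6.0
    w2 = (-3 * a**3 + 3 * a**2 + 3 * a + 1) / 6.0
    w3 = a**3 / 6.0
    return w0, w1, w2, w3

def lerp_fetch(tex, p):
    """What the GPU texture unit provides: linear interpolation at p."""
    i = int(np.floor(p))
    f = p - i
    return (1 - f) * tex[i] + f * tex[i + 1]

def cubic_via_two_lerps(tex, x):
    """Cubic B-spline reconstruction built from two linear fetches."""
    i = int(np.floor(x))
    a = x - i
    w0, w1, w2, w3 = bspline_weights(a)
    g0, g1 = w0 + w1, w2 + w3
    p0 = (i - 1) + w1 / g0          # positions where the two lerps read
    p1 = (i + 1) + w3 / g1
    return g0 * lerp_fetch(tex, p0) + g1 * lerp_fetch(tex, p1)

def cubic_direct(tex, x):
    """Reference: the four-sample weighted sum, for comparison."""
    i = int(np.floor(x))
    w = bspline_weights(x - i)
    return sum(wk * tex[i - 1 + k] for k, wk in enumerate(w))

tex = np.sin(np.linspace(0.0, 3.0, 32))
same = all(abs(cubic_via_two_lerps(tex, x) - cubic_direct(tex, x)) < 1e-10
           for x in np.linspace(2.0, 28.0, 97))
```

    In 1-D this halves the fetch count; in 3-D the same idea reduces 64 nearest-neighbour reads to 8 trilinear fetches, which is where the GPU speed-up comes from.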

  7. A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Komareji, Mohammad

    2009-01-01

    This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...

  8. Vector geometry

    CERN Document Server

    Robinson, Gilbert de B

    2011-01-01

    This brief undergraduate-level text by a prominent Cambridge-educated mathematician explores the relationship between algebra and geometry. An elementary course in plane geometry is the sole requirement for Gilbert de B. Robinson's text, which is the result of several years of teaching and learning the most effective methods from discussions with students. Topics include lines and planes, determinants and linear equations, matrices, groups and linear transformations, and vectors and vector spaces. Additional subjects range from conics and quadrics to homogeneous coordinates and projective geom

  9. Particles geometry influence in the thermal stress level in an SiC reinforced aluminum matrix composite considering the material non-linear behavior

    International Nuclear Information System (INIS)

    Miranda, Carlos A. de J.; Libardi, Rosani M.P.; Boari, Zoroastro de M.

    2009-01-01

    An analytical methodology was developed to predict the thermal stress level that occurs in a metallic matrix composite reinforced with SiC particles, when the temperature decreases from 600 deg C to 20 deg C during the fabrication process. This analytical development is based on the Eshelby method, dislocation mechanisms, and the Maxwell-Boltzmann distribution model. The material was assumed to have a linear elastic behavior. The analytical results from this formulation were verified against numerical linear analyses that were performed over a set of random non-uniform distributions of particles covering a wide range of volumetric ratios. To stick with the analytical hypothesis, particles with round geometry were used. Each stress distribution, represented by the isostress curves at ΔT=-580 deg C, was analyzed with an image analyzer. A statistical procedure was applied to obtain the most probable thermal stress level. Analytical and numerical results compared very well. Plastic deformation as well as particle geometry can alter significantly the stress field in the material. To account for these effects, in this work, several numerical analyses were performed considering the non-linear behavior of the aluminum matrix and distinct particle geometries. Two distinct sets of data were used. To allow a direct comparison, the first set has the same models (particle form, size and distribution) as used previously. The second set analyzes quadrilateral particles and presents a very tight range of volumetric ratios, closer to what is found in actual SiC composites. A simple and fast algorithm was developed to analyze the new results. The comparison of these results with the previous ones shows, as expected, the strong influence of the elastic-plastic behavior of the aluminum matrix on the composite thermal stress distribution due to its manufacturing process, and also a small influence of the particle geometry and volumetric ratio. (author)

  10. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
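
    The offline point-selection step of empirical interpolation can be sketched with a DEIM-style greedy algorithm: pick interpolation indices from a snapshot basis, then evaluate new nonlinear functions from only those entries. The basis, function family, and dimensions below are invented for illustration and are not the paper's GMsFEM setting:

```python
import numpy as np

def deim_points(U):
    """Greedy interpolation-point selection (DEIM-style) for basis U (n x m)."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, m):
        c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
        r = U[:, l] - U[:, :l] @ c          # residual vanishes at chosen points
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def eim_approx(U, p, f):
    """Interpolate f using only its values at the selected points."""
    c = np.linalg.solve(U[p, :], f[p])
    return U @ c

# Basis from snapshots of a parametrized nonlinear function (illustrative).
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.exp(-mu * x) * np.sin(4 * x)
                         for mu in np.linspace(0.5, 3.0, 20)])
Uu, s, _ = np.linalg.svd(snaps, full_matrices=False)
U = Uu[:, :8]                               # keep 8 POD modes

p = deim_points(U)
f = np.exp(-1.7 * x) * np.sin(4 * x)        # a parameter value not in the snapshots
err = np.max(np.abs(eim_approx(U, p, f) - f))
```

    This mirrors the cost argument in the abstract: the online stage touches only the selected entries of the nonlinear function, so the work scales with the reduced basis size rather than the fine grid.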

  11. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. 
In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

  12. Interpolating string field theories

    International Nuclear Information System (INIS)

    Zwiebach, B.

    1992-01-01

    This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles

  13. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58 % and 75 %, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
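
    The role of intersample interpolation in delay-and-sum focusing can be shown in a toy setup: a point scatterer, a handful of channels, and per-channel linear interpolation at fractional delays. All parameters and data below are synthetic, and the paper's filter-based scheme replaces the linear step shown here:

```python
import numpy as np

fs = 40e6                     # sampling rate [Hz]
c = 1540.0                    # speed of sound [m/s]
n_ch, n_samp = 8, 2048
elem_x = (np.arange(n_ch) - (n_ch - 1) / 2) * 0.3e-3   # element positions [m]

# Synthetic echoes from a point scatterer at (0, 20 mm): each channel
# records a pulse at its own two-way travel time.
z0 = 20e-3
rf = np.empty((n_ch, n_samp))
for ch in range(n_ch):
    delay = (z0 + np.hypot(elem_x[ch], z0)) / c * fs    # in samples
    rf[ch] = np.sinc(np.arange(n_samp) - delay)

def das(focus_x, focus_z):
    """Delay-and-sum with linear intersample interpolation per channel."""
    out = 0.0
    for ch in range(n_ch):
        d = focus_z + np.hypot(focus_x - elem_x[ch], focus_z)
        idx = d / c * fs                                 # fractional sample index
        i = int(np.floor(idx))
        f = idx - i
        out += (1 - f) * rf[ch, i] + f * rf[ch, i + 1]   # the interpolation step
    return out

on_focus = das(0.0, z0)
off_focus = das(0.0, 26e-3)
```

    Summation is coherent only when the fractional delays are honoured, which is why the quality of the intersample interpolator directly limits focusing quality.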

  14. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy, as well as requires low computation. The underlying idea of this proposed work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map the low-resolution image patch to high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results as NARM while only takes its 0.3% computational time.

  15. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea is extended to shape-preserving interpolation for positive data using the constructed rational cubic spline. The rational cubic spline has three parameters αi, βi, and γi. Sufficient conditions for positivity are derived on the parameter γi, while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use, does not require knot insertion, and achieves C2 continuity by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Detailed comparisons with existing schemes have also been made. In all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis for the case where the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.

  16. Convergence of trajectories in fractal interpolation of stochastic processes

    International Nuclear Information System (INIS)

    Małysz, Robert

    2006-01-01

    The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation.

  17. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi-interpolation framework that attains the optimal approximation order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi-interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore, this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi-interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi-interpolation framework. © 2011 IEEE

  18. Research of Cubic Bezier Curve NC Interpolation Signal Generator

    Directory of Open Access Journals (Sweden)

    Shijun Ji

    2014-08-01

    Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear, or parabolic interpolation. For numerical control (NC) machining of parts with complicated surfaces, a mathematical model must be established to generate the curved line and curved surface outline of the part, which is then discretized into a large number of straight line or arc segments for processing. This creates complex programs with large amounts of code and inevitably introduces approximation error, all of which affects machining accuracy, surface roughness, and machining efficiency. The stepless interpolation of a cubic Bezier curve controlled by an analog signal is studied in this paper. The tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed to the control motor, which completes the precise feeding of the Bezier curve. This method extends the trajectory control ability of the CNC system from simple lines and circular arcs to complex engineering curves, and it provides a new way to machine curved-surface parts economically with high quality and high efficiency.
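
The record above plans tool trajectories along cubic Bezier curves defined by four control points. As a minimal illustration (not the paper's analog-signal generator), the de Casteljau scheme below evaluates such a curve; the control points are hypothetical:

```python
def bezier_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t via the de Casteljau scheme."""
    lerp = lambda a, b: tuple((1 - t) * u + t * v for u, v in zip(a, b))
    # Repeated linear interpolation collapses four control points to one curve point.
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

# Hypothetical control points for a planar tool-path segment
P = ((0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0))
trajectory = [bezier_point(*P, t=i / 10) for i in range(11)]
```

Moving a control point reshapes the whole segment smoothly, which is what lets a CNC adjust the trajectory without re-discretizing it into line/arc segments.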

  19. Pixel Interpolation Methods

    OpenAIRE

    Mintěl, Tomáš

    2009-01-01

    This master's thesis deals with the acceleration of pixel interpolation methods using the GPU and the NVIDIA® CUDA™ architecture. The graphical output is represented by a demonstration application for image or video transformation using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video processing.

  20. Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data

    International Nuclear Information System (INIS)

    Casa, L D C; Krueger, P S

    2013-01-01

    Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
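
A minimal 1-D sketch of the Gaussian RBF interpolation idea described above (the study itself is fully 3-D and folds boundary conditions and continuity constraints into the least-squares fit): weights are found by solving a linear system so the expansion matches the samples exactly. The shape parameter `eps` and the sampled function are hypothetical:

```python
import numpy as np

def rbf_fit(centers, values, eps):
    """Solve for weights so the Gaussian RBF expansion reproduces the samples."""
    dist = np.abs(centers[:, None] - centers[None, :])
    return np.linalg.solve(np.exp(-(eps * dist) ** 2), values)

def rbf_eval(x, centers, weights, eps):
    """Evaluate the fitted RBF expansion at new locations x."""
    dist = np.abs(np.asarray(x)[:, None] - centers[None, :])
    return np.exp(-(eps * dist) ** 2) @ weights

centers = np.linspace(0.0, 1.0, 8)        # hypothetical sample locations
values = np.sin(2 * np.pi * centers)      # hypothetical velocity component
w = rbf_fit(centers, values, eps=4.0)
```

The same construction generalizes to 3-D by replacing the 1-D distances with Euclidean distances between scattered points.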

  1. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We consider the image interpolation problem where, given uniformly-sampled pixel values vm,n and a point spread function h, the goal is to find a function u(x,y) satisfying vm,n = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

  2. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    Science.gov (United States)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the

  3. Fuzzy linguistic model for interpolation

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Adabitabar Firozja, M.

    2007-01-01

    In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolate real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method.

  4. A disposition of interpolation techniques

    NARCIS (Netherlands)

    Knotters, M.; Heuvelink, G.B.M.

    2010-01-01

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

  5. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation.
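
The binary building block described above can be sketched in a few lines: build signed distance maps (positive inside, negative outside, following the convention in the record), interpolate them linearly, and threshold at zero. The disk-shaped test slices are hypothetical:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Positive inside the object, negative outside (the record's convention)."""
    return edt(mask) - edt(~mask)

def shape_based_interp(mask_a, mask_b, t=0.5):
    """Interpolate between two binary slices via their signed distance maps."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d >= 0

# Hypothetical adjacent slices: two concentric disks of different radii
yy, xx = np.mgrid[:33, :33]
r2 = (yy - 16) ** 2 + (xx - 16) ** 2
small, large = r2 <= 4 ** 2, r2 <= 10 ** 2
middle = shape_based_interp(small, large)  # roughly a disk of intermediate radius
```

Because the interpolation acts on distances rather than intensities, the intermediate slice grows the shape smoothly instead of cross-fading it.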

  6. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs, respectively. Therefore, decision bands with variable widths are created on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, 2-D directionless or isotropic filtering is used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  7. The bases for the use of interpolation in helical computed tomography: an explanation for radiologists

    International Nuclear Information System (INIS)

    Garcia-Santos, J. M.; Cejudo, J.

    2002-01-01

    In contrast to conventional computed tomography (CT), helical CT requires the application of interpolators to achieve image reconstruction, because the projections processed by the computer are not situated in the same plane. Since the introduction of helical CT, a number of interpolators have been designed in an attempt to keep the thickness of the reconstructed section as close as possible to the thickness of the X-ray beam. The purpose of this article is to discuss the function of these interpolators, stressing the advantages and considering the possible disadvantages of high-grade curved interpolators with respect to standard linear interpolators. (Author) 7 refs
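
In the simplest (360LI-style) linear interpolator, the projection for a reconstruction plane at table position z is a distance-weighted blend of the two measured projections acquired at the same gantry angle on adjacent helix turns. A schematic sketch, with hypothetical detector readings:

```python
def interpolate_projection(proj_below, proj_above, z_below, z_above, z_slice):
    """Linearly blend two same-angle projections from adjacent helix turns."""
    w = (z_above - z_slice) / (z_above - z_below)  # weight of the lower projection
    return [w * a + (1 - w) * b for a, b in zip(proj_below, proj_above)]

# Hypothetical readings on two consecutive turns (table feed = 5 mm per turn)
p = interpolate_projection([100.0, 120.0], [110.0, 140.0],
                           z_below=0.0, z_above=5.0, z_slice=2.5)
```

Higher-order (curved) interpolators replace the two-point linear weighting with a wider, smoother kernel, which is where the trade-offs discussed in the record arise.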

  8. Surface interpolation with radial basis functions for medical imaging

    International Nuclear Information System (INIS)

    Carr, J.C.; Beatson, R.K.; Fright, W.R.

    1997-01-01

    Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill

  9. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Janusz Konrad

    2009-01-01

    View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  10. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Ince Serdar

    2008-01-01

    View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  11. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data.
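
BIMOND's one-dimensional building block, monotone piecewise cubic Hermite interpolation, is available in SciPy. A sketch (the step-like sample data are hypothetical) contrasting it with an ordinary C2 cubic spline, which is free to overshoot monotone data:

```python
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

# Hypothetical monotone data with an abrupt step
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xs = np.linspace(0.0, 4.0, 401)
monotone = PchipInterpolator(x, y)(xs)  # monotonicity-preserving Hermite cubic
spline = CubicSpline(x, y)(xs)          # C2 spline, oscillates near the step
```

The Hermite construction limits the slopes at the data points so that each cubic piece stays monotone, which is exactly the property BIMOND extends to rectangular meshes.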

  12. Comparing interpolation schemes in dynamic receive ultrasound beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

    2005-01-01

    In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as the simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...

  13. The research on NURBS adaptive interpolation technology

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

    2017-04-01

    NURBS adaptive interpolation faces several problems: long interpolation times, complicated calculations, and NURBS curve step errors that are not easily adjusted. This paper proposes and simulates an adaptive interpolation algorithm for NURBS curves, in which the interpolation points (xi, yi, zi) are computed adaptively. Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.

  14. A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation

    Directory of Open Access Journals (Sweden)

    Mingzhu Li

    2014-01-01

    The main aim of this work is to consider a meshfree algorithm for solving Burgers' equation with quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations, and it overcomes the ill-conditioning problem that results from using the B-spline as a global interpolant. The numerical scheme is presented, using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its time derivative. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement, and the numerical experiments show that it is feasible and valid.

  15. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    G. Garnero

    2014-01-01

    In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, the presence of test sections realized at higher resolutions, and the presence of independent digital models of the same territory allow a series of analyses to be set up, with consequent determination of the best interpolation methodologies. The analysis of the residuals on the test sites allows the descriptive statistics of the computed values to be calculated: all the algorithms furnished interesting results; notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gives the best results in this study case. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may cause a quality reduction of the final output in the interpolation phase.
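
For reference, the IDW estimator highlighted above weights each sample by an inverse power of its distance to the target point. A minimal 1-D sketch (the power parameter and the sampled elevations are hypothetical):

```python
import numpy as np

def idw(x_obs, y_obs, x_new, power=2.0):
    """Inverse Distance Weighting: weights fall off as 1 / distance**power."""
    estimates = []
    for x in x_new:
        d = np.abs(x_obs - x)
        if np.any(d == 0):                 # target coincides with a sample point
            estimates.append(float(y_obs[d == 0][0]))
        else:
            w = 1.0 / d ** power
            estimates.append(float(w @ y_obs / w.sum()))
    return np.array(estimates)

# Hypothetical elevations sampled along a transect
x_obs = np.array([0.0, 1.0, 2.0, 4.0])
y_obs = np.array([10.0, 12.0, 11.0, 15.0])
z = idw(x_obs, y_obs, np.array([1.0, 3.0]))
```

Because the weights are positive and sum to one, IDW estimates always stay within the range of the observed values, which is one reason it behaves well on dense samples.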

  16. Microcrystalline thin-film solar cell deposition on moving substrates using a linear VHF-PECVD reactor and a cross-flow geometry

    International Nuclear Information System (INIS)

    Flikweert, A J; Zimmermann, T; Merdzhanova, T; Weigand, D; Appenzeller, W; Gordijn, A

    2012-01-01

    A concept for high-rate plasma deposition (PECVD) of hydrogenated microcrystalline silicon on moving substrates (dynamic deposition) is developed and evaluated. The chamber allows for substrates up to a size of 40 × 40 cm². The deposition plasma is sustained between linear VHF electrodes (60 MHz) and a moving substrate. Due to the gas flow geometry and the high degree of source gas depletion, from the carrier's point of view the silane concentration varies when passing the electrodes. This is known to lead to different growth conditions which can induce transitions from microcrystalline to amorphous growth. The effect of different silane concentrations is simulated at a standard RF showerhead electrode by intentionally varying the silane concentration during deposition in static mode. This variation may decrease the layer quality of microcrystalline silicon, due to a shift of the crystallinity away from the optimum. However, adapting the input silane concentration, state-of-the-art solar cells are obtained. Microcrystalline cells (ZnO : Al/Ag back contacts) produced by the linear VHF plasma sources show an efficiency of 7.9% and 6.6% for depositions in static and dynamic mode, respectively. (paper)

  17. Permanently calibrated interpolating time counter

    International Nuclear Information System (INIS)

    Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

    2015-01-01

    We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)

  18. Differential geometry

    CERN Document Server

    Ciarlet, Philippe G

    2007-01-01

    This book gives the basic notions of differential geometry, such as the metric tensor, the Riemann curvature tensor, the fundamental forms of a surface, covariant derivatives, and the fundamental theorem of surface theory in a self-contained and accessible manner. Although the field is often considered a classical one, it has recently been rejuvenated, thanks to the manifold applications where it plays an essential role. The book presents some important applications to shells, such as the theory of linearly and nonlinearly elastic shells, the implementation of numerical methods for shells, and

  19. Direct Trajectory Interpolation on the Surface using an Open CNC

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2014-01-01

    Free-form surfaces are used for many industrial applications, from aeronautical parts to molds or biomedical implants. In the common machining process, computer-aided manufacturing (CAM) software generates approximated tool paths because of the limitation induced by the input tool path format of the industrial CNC. Then, during tool path interpolation, marks can appear on finished surfaces, induced by non-smooth feedrate planning. Managing the geometry of the tool p...

  20. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    Science.gov (United States)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation is one class of methods developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods in that they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea, with the primary aim of improving the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  1. Can a polynomial interpolation improve on the Kaplan-Yorke dimension?

    International Nuclear Information System (INIS)

    Richter, Hendrik

    2008-01-01

    The Kaplan-Yorke dimension can be derived using a linear interpolation between an h-dimensional Lyapunov exponent λ (h) >0 and an h+1-dimensional Lyapunov exponent λ (h+1) <0. In this Letter, we use a polynomial interpolation to obtain generalized Lyapunov dimensions and study the relationships among them for higher-dimensional systems
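    The linear interpolation that underlies the standard Kaplan-Yorke dimension can be sketched as follows. This is a generic illustration of the construction described above, not code from the Letter; the example Lyapunov spectrum is illustrative.

```python
import numpy as np

def kaplan_yorke_dimension(lyapunov_exponents):
    """Kaplan-Yorke dimension from a Lyapunov spectrum.

    Finds the largest h such that the partial sum of the h largest
    exponents is non-negative, then interpolates linearly into the
    (h+1)-th (negative) exponent.
    """
    exps = np.sort(np.asarray(lyapunov_exponents, dtype=float))[::-1]
    sums = np.cumsum(exps)
    if sums[0] < 0:
        return 0.0
    if sums[-1] >= 0:
        return float(len(exps))  # the partial sums never turn negative
    h = np.max(np.nonzero(sums >= 0)[0]) + 1  # number of exponents kept
    return h + sums[h - 1] / abs(exps[h])

# Lorenz-like spectrum: D_KY = 2 + 0.9/14.57, roughly 2.062
print(kaplan_yorke_dimension([0.9, 0.0, -14.57]))
```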

  2. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

    2010-01-01

    Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linke...

  3. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

Photoacoustic imaging is an innovative imaging technique to image biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO)-optimized support vector machine (SVM) interpolation is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, providing higher imaging quality while using significantly fewer measurement positions or scanning times.
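    The three baseline interpolators that the proposed SVM method is compared against (nearest neighbor, linear, cubic) can be contrasted with a few lines of SciPy. The test signal here is made up for illustration; it is not the paper's photoacoustic data.

```python
import numpy as np
from scipy.interpolate import interp1d

# Coarsely sampled 1-D signal (illustrative stand-in for sensor data)
t_coarse = np.linspace(0, 1, 9)
y_coarse = np.sin(2 * np.pi * t_coarse)
t_fine = np.linspace(0, 1, 81)

errors = {}
for kind in ("nearest", "linear", "cubic"):
    f = interp1d(t_coarse, y_coarse, kind=kind)
    errors[kind] = np.max(np.abs(f(t_fine) - np.sin(2 * np.pi * t_fine)))

# Smoother kernels track the underlying signal more closely
print(errors)
assert errors["cubic"] < errors["linear"] < errors["nearest"]
```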

  4. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    Science.gov (United States)

    Wininger, Michael; Crane, Barbara

    2014-01-01

Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) nominal difference between interpolation factors of 2, 4, and 8 times (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.

  5. Generation of nuclear data banks through interpolation

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    1999-01-01

Nuclear Data Bank generation is a process that requires a great amount of both computing and human resources. Since it is sometimes necessary to create many such banks, it is convenient to have a reliable tool that generates Data Banks with the fewest resources, in the least possible time, and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two approaches were pursued, in both cases applying the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding system of linear equations, which was solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the system, resulting in a lower triangular matrix whose structure was reduced, by elementary operations, to a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas for the same purpose) were compared with Data Banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a great number of Data Banks must be created. (Author)

  6. Nuclear data banks generation by interpolation

    International Nuclear Information System (INIS)

    Castillo M, J. A.

    1999-01-01

Nuclear Data Bank generation is a process that requires a great amount of both computing and human resources. Since it is sometimes necessary to create many such banks, it is convenient to have a reliable tool that generates Data Banks with the fewest resources, in the least possible time, and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two approaches were pursued, in both cases applying the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding system of linear equations, which was solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the system, resulting in a lower triangular matrix whose structure was reduced, by elementary operations, to a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) were compared with Data Banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a great number of Data Banks must be created

  7. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry

    International Nuclear Information System (INIS)

    Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.

    2011-01-01

Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R^2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
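    The traditional semilogarithmic (exponential) two-point HVL estimate that the study compares against can be sketched as follows. The measurement values are illustrative, not the study's data.

```python
import math

def hvl_semilog(x1, t1, x2, t2):
    """Half-value layer by semilogarithmic (exponential) interpolation
    between two measured (thickness, transmission) points bracketing
    50% transmission. Assumes locally exponential attenuation,
    t = t1 * exp(-mu * (x - x1)).
    """
    mu = math.log(t1 / t2) / (x2 - x1)  # local attenuation coefficient
    return x1 + math.log(t1 / 0.5) / mu

# Illustrative transmission measurements bracketing 0.5 (thickness in mm Al)
print(round(hvl_semilog(3.0, 0.58, 5.0, 0.44), 3))
```

    For a truly monoenergetic (single-exponential) beam this estimate is exact; for polyenergetic beams it depends on the choice of interpolation points, which is the sensitivity the study quantifies.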

  8. An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.

    Science.gov (United States)

    Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D

    2011-08-01

    The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semi-logarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R2 > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).

  9. An integral conservative gridding--algorithm using Hermitian curve interpolation.

    Science.gov (United States)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  10. An integral conservative gridding-algorithm using Hermitian curve interpolation

    International Nuclear Information System (INIS)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-01-01

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
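    The central idea above (interpolate the *integrated* data with a shape-controlled Hermitian curve, then difference it at the new bin edges) can be sketched with SciPy's monotone cubic Hermite interpolant (PCHIP) as a stand-in for the paper's parametrized curve; the histogram values are made up.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def rebin_conservative(edges_in, counts, edges_out):
    """Rebin histogrammed data while conserving the overall integral.

    The cumulative integral is interpolated with a monotone cubic
    Hermite (PCHIP) curve and then differenced at the new bin edges.
    Monotonicity of the cumulative curve keeps the output non-negative.
    """
    cum = np.concatenate(([0.0], np.cumsum(counts)))
    curve = PchipInterpolator(edges_in, cum)
    return np.diff(curve(edges_out))

edges_in = np.linspace(0.0, 8.0, 9)
counts = np.array([1.0, 4.0, 9.0, 7.0, 3.0, 2.0, 1.0, 0.5])
edges_out = np.linspace(0.0, 8.0, 17)        # twice as fine
fine = rebin_conservative(edges_in, counts, edges_out)

assert np.all(fine >= 0.0)                    # positive definite
assert np.isclose(fine.sum(), counts.sum())   # integral conserved
```

    PCHIP has no free overshoot parameter, unlike the algorithm described above, but it illustrates why interpolating the cumulative rather than the data itself makes the rebinning integral-conservative.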

  11. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    Science.gov (United States)

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
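    The simplest of the compared methods, IDW, can be sketched directly; the `power` argument is the exponent the study varies (1, 2, 3). Sample locations and concentrations below are hypothetical.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2):
    """Inverse distance weighting: each estimate is a weighted average of
    all samples with weights 1/d**power; exact at sample locations."""
    est = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < 1e-12:              # query coincides with a sample
            est[i] = values[d.argmin()]
            continue
        w = 1.0 / d**power
        est[i] = np.dot(w, values) / w.sum()
    return est

# Hypothetical soil PTE concentrations (mg/kg) at four sample locations
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([10.0, 20.0, 30.0, 40.0])
# Equidistant centre point -> plain mean; sample point -> exact value
print(idw(pts, vals, np.array([[0.5, 0.5], [0.0, 0.0]]), power=2))
```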

  12. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
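    The matrix-level building block (a randomized interpolative decomposition, which the tensor ID above reduces to) is available in SciPy. This is a generic sketch using `scipy.linalg.interpolative`, not the authors' CTD-ID code.

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(0)
# A 60x40 matrix of exact rank 3, disguised as a dense array
A = rng.standard_normal((60, 3)) @ rng.standard_normal((3, 40))

k = 3
idx, proj = sli.interp_decomp(A, k)    # randomized rank-k column ID
B = A[:, idx[:k]]                      # the k selected "skeleton" columns
A_approx = sli.reconstruct_matrix_from_id(B, idx, proj)

# For a matrix of exact rank k, the rank-k ID is exact up to rounding
rel_err = np.linalg.norm(A - A_approx) / np.linalg.norm(A)
print(rel_err)
```

    The key property mirrors the abstract: a small subset of columns (here, terms of a CTD) is selected, and the remaining ones are expressed as linear combinations of that subset via the `proj` coefficients.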

  13. Size-Dictionary Interpolation for Robot's Adjustment

    Directory of Open Access Journals (Sweden)

    Morteza eDaneshmand

    2015-05-01

This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner, for use in a realistic virtual fitting room in which several mannequin robots of different genders and sizes are simultaneously connected to the same computer and the chosen robot is activated automatically to mimic body shapes and sizes instantly. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure determines which set of positions of the biologically-inspired actuators leads to the closest possible resemblance to the shape of the body of the person who has been scanned. It linearly maps the distances between subsequent size templates to the corresponding position sets of the bioengineered actuators and then calculates control measures that maintain the same distance proportions, with the mathematical criterion given by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. The experimental results of the implementation of the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completion of the whole realistic online fitting package are explained.

  14. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    Science.gov (United States)

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  15. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
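    The classical cubic convolution scheme referred to above (the Keys kernel, with the common parameter choice a = -1/2) can be sketched as follows; the test signal is illustrative.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys cubic convolution kernel; a = -0.5 gives the classic
    (Catmull-Rom) cubic convolution interpolator."""
    s = np.abs(s)
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    out[m2] = a * s[m2]**3 - 5 * a * s[m2]**2 + 8 * a * s[m2] - 4 * a
    return out

def cubic_conv_interp(y, x):
    """Interpolate uniformly sampled y at fractional positions x using
    the 4-tap cubic convolution kernel (interior positions only)."""
    x = np.asarray(x, dtype=float)
    i = np.floor(x).astype(int)
    idx = i[:, None] + np.arange(-1, 3)[None, :]   # 4 nearest samples
    w = keys_kernel(x[:, None] - idx)
    return (w * y[idx]).sum(axis=1)

y = np.sin(np.linspace(0, np.pi, 16))
# The first value reproduces the sample y[4] exactly (interpolation
# property); the second is a smooth estimate between samples.
print(cubic_conv_interp(y, [4.0, 4.5]))
```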

  16. Node insertion in Coalescence Fractal Interpolation Function

    International Nuclear Information System (INIS)

    Prasad, Srijanani Anurag

    2013-01-01

The Iterated Function System (IFS) used in the construction of a Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point into a given set of interpolation data is called the problem of node insertion. In this paper, the effect of inserting a new point on the related IFS and the Coalescence Fractal Interpolation Function is studied. Smoothness and fractal dimension of a CHFIF obtained after node insertion are also discussed

  17. Bayer Demosaicking with Polynomial Interpolation.

    Science.gov (United States)

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

Demosaicking is a digital image process to reconstruct full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable step for many devices incorporating a camera sensor (e.g., mobile phones, tablets, etc.). In this paper, we introduce a new polynomial interpolation-based demosaicking (PID) algorithm. Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to predictors obtained by bilinear or Laplacian interpolation. We show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.

  18. A temporal interpolation approach for dynamic reconstruction in perfusion CT

    International Nuclear Information System (INIS)

    Montes, Pau; Lauritsch, Guenter

    2007-01-01

    This article presents a dynamic CT reconstruction algorithm for objects with time dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain quality in the reconstruction than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slow rotating scanners for perfusion imaging purposes
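    The core idea above (treating the data acquired at a given view angle, once per rotation, as samples of a continuous time signal and interpolating with polynomial splines) can be sketched generically; the time-attenuation curve and rotation timing below are illustrative, not the article's.

```python
import numpy as np
from scipy.interpolate import CubicSpline

rotation_period = 1.0                  # seconds per gantry rotation
n_rotations = 6
# A given view angle is revisited once per rotation:
t_samples = np.arange(n_rotations) * rotation_period

# Hypothetical time-dependent attenuation at that view (perfusion-like)
tac = lambda t: np.exp(-((t - 2.5) / 1.2) ** 2)
samples = tac(t_samples)

# Polynomial-spline interpolation recovers intermediate time points,
# giving temporal resolution limited only by the rotation speed
spline = CubicSpline(t_samples, samples)
t_dense = np.linspace(0, (n_rotations - 1) * rotation_period, 101)
max_err = np.max(np.abs(spline(t_dense) - tac(t_dense)))
print(max_err)
```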

  19. Investigation of Back-off Based Interpolation Between Recurrent Neural Network and N-gram Language Models (Author’s Manuscript)

    Science.gov (United States)

    2016-02-11

experiments were then conducted on the same BABEL task. The acoustic models were trained on 46 hours of speech. Tandem and hybrid DNN systems were...interpolation gave a comparable WER score of 46.9%. A further linear interpolation using equation (11) between the back-off based interpolated LM and the
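    The baseline that such back-off schemes extend, plain linear interpolation of two language-model distributions, can be sketched directly; the vocabularies and probabilities below are toy numbers.

```python
def interpolate_lm(p_rnn, p_ngram, lam):
    """Linear interpolation of two language-model distributions:
    P(w|h) = lam * P_rnn(w|h) + (1 - lam) * P_ngram(w|h)."""
    vocab = set(p_rnn) | set(p_ngram)
    return {w: lam * p_rnn.get(w, 0.0) + (1 - lam) * p_ngram.get(w, 0.0)
            for w in vocab}

# Toy next-word distributions for one history (hypothetical numbers)
p_rnn = {"cat": 0.5, "dog": 0.3, "fish": 0.2}
p_ngram = {"cat": 0.2, "dog": 0.6, "fish": 0.2}
mix = interpolate_lm(p_rnn, p_ngram, lam=0.7)

assert abs(sum(mix.values()) - 1.0) < 1e-12   # still a distribution
print(round(mix["cat"], 2))  # 0.7*0.5 + 0.3*0.2 = 0.41
```

    The back-off based variant investigated in the paper differs in that the interpolation weight is not a single global constant but depends on the back-off behaviour of the n-gram model for each context.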

  20. Application of ordinary kriging for interpolation of micro-structured technical surfaces

    International Nuclear Information System (INIS)

    Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg

    2013-01-01

Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of various effects. These data points need to be interpolated to obtain a complete area before further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model that provides an unbiased, linear estimation with minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface. This comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adapts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
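    A minimal ordinary-kriging sketch: solve the kriging system (with a Lagrange multiplier enforcing that the weights sum to one) for a single query point. The exponential variogram and its sill/range parameters are illustrative; in practice they are fitted to the data, as the abstract describes.

```python
import numpy as np

def ordinary_kriging(pts, vals, query, sill=1.0, var_range=2.0):
    """Ordinary kriging estimate at one query point, using an
    exponential variogram with illustrative (unfitted) parameters."""
    gamma = lambda h: sill * (1.0 - np.exp(-3.0 * h / var_range))
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # Kriging system: variogram block plus unbiasedness constraint row
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(pts - query, axis=1))
    w = np.linalg.solve(A, b)         # weights plus Lagrange multiplier
    return float(np.dot(w[:n], vals)) # weights sum to 1 by construction

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
# Symmetric centre point: all weights equal, estimate = mean of the values
print(round(ordinary_kriging(pts, vals, np.array([0.5, 0.5])), 6))
```

    Kriging is an exact interpolator: querying at a sample location returns the sample value, which is why it suits filling in isolated invalid pixels.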

  1. Potential problems with interpolating fields

    Energy Technology Data Exchange (ETDEWEB)

    Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-11-15

    A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)

  2. The Geometry Conference

    CERN Document Server

    Bárány, Imre; Vilcu, Costin

    2016-01-01

    This volume presents easy-to-understand yet surprising properties obtained using topological, geometric and graph theoretic tools in the areas covered by the Geometry Conference that took place in Mulhouse, France from September 7–11, 2014 in honour of Tudor Zamfirescu on the occasion of his 70th anniversary. The contributions address subjects in convexity and discrete geometry, in distance geometry or with geometrical flavor in combinatorics, graph theory or non-linear analysis. Written by top experts, these papers highlight the close connections between these fields, as well as ties to other domains of geometry and their reciprocal influence. They offer an overview on recent developments in geometry and its border with discrete mathematics, and provide answers to several open questions. The volume addresses a large audience in mathematics, including researchers and graduate students interested in geometry and geometrical problems.

  3. Direct probe of the bent and linear geometries of the core-excited Renner-Teller pair states by means of the triple-ion-coincidence momentum imaging technique

    International Nuclear Information System (INIS)

    Muramatsu, Y.; Ueda, K.; Chiba, H.; Saito, N.; Lavollee, M.; Czasch, A.; Weber, T.; Jagutzki, O.; Schmidt-Boecking, H.; Moshammer, R.; Becker, U.; Kubozuka, K.; Koyano, I.

    2002-01-01

The doubly degenerate core-excited Π state of CO2 splits into two due to the static Renner-Teller effect. Using the triple-ion-coincidence momentum imaging technique and focusing on the dependence of the measured quantities on the polarization of the incident light, we have probed, directly and separately, the linear and bent geometries of the B1 and A1 Renner-Teller pair states, as a direct proof of the static Renner-Teller effect

  4. Two-dimensional meshless solution of the non-linear convection diffusion reaction equation by the Local Hermitian Interpolation method

    Directory of Open Access Journals (Sweden)

    Carlos A Bustamante Chaverra

    2013-03-01

A meshless numerical scheme is developed for solving a generic version of the non-linear convection-diffusion-reaction equation in two-dimensional domains. The Local Hermitian Interpolation (LHI) method is employed for the spatial discretization, and several strategies are implemented for the solution of the resulting non-linear equation system, among them Picard iteration, the Newton-Raphson method and a truncated version of the Homotopy Analysis Method (HAM). The LHI method is a local collocation strategy in which Radial Basis Functions (RBFs) are employed to build the interpolation function. Unlike the Kansa method, the LHI is applied locally, and the differential operators of the boundary conditions and the governing equation are used to construct the interpolation function, yielding a symmetric collocation matrix. The Newton-Raphson method is implemented with analytical and numerical Jacobian matrices, and the derivatives of the governing equation with respect to the homotopy parameter are obtained analytically. The numerical scheme is verified by comparing the results with the analytical solutions of the one-dimensional Burgers equation and the two-dimensional Richards equation. Similar results are obtained for all the solvers tested, but better convergence rates are achieved with the Newton-Raphson method in a double-iteration scheme.
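    A plain, global RBF interpolation example (via SciPy's `RBFInterpolator`) illustrates the scattered-node building block of such collocation methods; note that LHI itself is local and Hermitian (it embeds the differential operators in the interpolant), which this sketch does not implement. The node set and test field are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Scattered 2-D nodes and a smooth test field
nodes = rng.uniform(0, 1, size=(200, 2))
f = lambda xy: np.sin(np.pi * xy[:, 0]) * np.cos(np.pi * xy[:, 1])
interp = RBFInterpolator(nodes, f(nodes), kernel="thin_plate_spline")

# Evaluate on an interior grid and check the reconstruction error
gx, gy = np.meshgrid(np.linspace(0.1, 0.9, 9), np.linspace(0.1, 0.9, 9))
grid = np.column_stack([gx.ravel(), gy.ravel()])
err = np.max(np.abs(interp(grid) - f(grid)))
print(err)
```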

  5. A Hybrid Interpolation Method for Geometric Nonlinear Spatial Beam Elements with Explicit Nodal Force

    Directory of Open Access Journals (Sweden)

    Huiqing Fang

    2016-01-01

Full Text Available Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometrically nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline was used for calculating nodal curvatures at the two ends. Then, internal curvatures of the beam were interpolated with a second interpolation. At this point, C1 continuity was satisfied and nodal strain measures could be consistently derived from nodal displacement and rotation parameters. The explicit expression of nodal force without integration, as a function of global parameters, was derived by using the hybrid interpolation. Furthermore, the proposed beam element can be degenerated into a linear beam element under the condition of small deformation. Objectivity of strain measures and patch tests are also discussed. Finally, four numerical examples are discussed to prove the validity and effectiveness of the proposed beam element.

  6. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

    International Nuclear Information System (INIS)

    Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

    2006-01-01

Numeric control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method ensures smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality obtained with cubic NURBS interpolation is higher than with linear interpolation. The algorithm helps increase the surface form precision of workpieces in ultra-precision machining

  7. Interpolation of rational matrix functions

    CERN Document Server

    Ball, Joseph A; Rodman, Leiba

    1990-01-01

    This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...

  8. Evaluation of various interpolants available in DICE

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non - pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.

  9. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

  10. Data interpolation for vibration diagnostics using two-variable correlations

    International Nuclear Information System (INIS)

    Branagan, L.

    1991-01-01

    This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, are dependent both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options to develop accurate correlations. The examples illustrate the appropriate application of these options
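The linear and staircase options mentioned above differ in how they reconstruct a slowly sampled process variable between acquisitions; a minimal sketch, with function names and sample data that are illustrative rather than taken from the paper:

```python
def staircase_interp(t, ts, ys):
    """Staircase (zero-order hold): carry the last recorded value forward."""
    y = ys[0]
    for ti, yi in zip(ts, ys):
        if ti <= t:
            y = yi
        else:
            break
    return y

def linear_interp(t, ts, ys):
    """Piecewise linear interpolation between recorded samples."""
    for (t0, y0), (t1, y1) in zip(zip(ts, ys), zip(ts[1:], ys[1:])):
        if t0 <= t <= t1:
            return y0 + (y1 - y0) * (t - t0) / (t1 - t0)
    return ys[-1]

# Asynchronously acquired process variable (e.g. a bearing temperature)
ts = [0.0, 10.0, 25.0, 40.0]
ys = [50.0, 54.0, 53.0, 60.0]
print(linear_interp(17.5, ts, ys))     # -> 53.5 (halfway between 54 and 53)
print(staircase_interp(17.5, ts, ys))  # -> 54.0 (holds the last sample)
```

Which of the two reproduces the measured quantity better depends on the slew rate of the variable relative to the storage rate, as the abstract notes.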

  11. Hyperbolic geometry

    CERN Document Server

    Iversen, Birger

    1992-01-01

    Although it arose from purely theoretical considerations of the underlying axioms of geometry, the work of Einstein and Dirac has demonstrated that hyperbolic geometry is a fundamental aspect of modern physics

  12. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly-used filter methods for image interpolation are presented, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations provide certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
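As an illustration of the symmetrical cubic kernels discussed above, the following sketch evaluates Keys' cubic convolution kernel, a standard symmetrical cubic interpolation kernel; the parameter choice a = -0.5 and the helper names are this sketch's assumptions, not details from the paper:

```python
def cubic_kernel(x, a=-0.5):
    """Keys' cubic convolution kernel: symmetrical, support of 4 samples."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5*a * x**2 + 8*a * x - 4*a
    return 0.0

def interp1d_cubic(samples, t):
    """Interpolate a 1-D sample sequence at fractional position t,
    clamping indices at the borders."""
    i = int(t)
    return sum(samples[min(max(i + k, 0), len(samples) - 1)]
               * cubic_kernel(t - (i + k)) for k in (-1, 0, 1, 2))

vals = [0.0, 1.0, 4.0, 9.0, 16.0]   # samples of x**2
print(interp1d_cubic(vals, 2.5))    # -> 6.25, i.e. exact for quadratic data
```

The same separable kernel applied along rows and then columns gives the 2-D version used in image scaling and rotation.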

  13. Twistor geometry

    NARCIS (Netherlands)

    van den Broek, P.M.

    1984-01-01

    The aim of this paper is to give a detailed exposition of the relation between the geometry of twistor space and the geometry of Minkowski space. The paper has a didactical purpose; no use has been made of differential geometry and cohomology.

  14. Intermediate algebra & analytic geometry

    CERN Document Server

    Gondin, William R

    1967-01-01

    Intermediate Algebra & Analytic Geometry Made Simple focuses on the principles, processes, calculations, and methodologies involved in intermediate algebra and analytic geometry. The publication first offers information on linear equations in two unknowns and variables, functions, and graphs. Discussions focus on graphic interpretations, explicit and implicit functions, first quadrant graphs, variables and functions, determinate and indeterminate systems, independent and dependent equations, and defective and redundant systems. The text then examines quadratic equations in one variable, system

  15. Cardinal Basis Piecewise Hermite Interpolation on Fuzzy Data

    Directory of Open Access Journals (Sweden)

    H. Vosoughi

    2016-01-01

Full Text Available A numerical method, along with an explicit construction, is introduced here for the interpolation of fuzzy data by the widely used fuzzy-valued piecewise Hermite polynomial in the general case, obtained through extension-principle results and based on cardinal basis functions that satisfy a vanishing property on the successive intervals. We provide the numerical method in full detail, using linear space notions for its calculation. To illustrate the method with computational examples, we consider three principal cases: linear, cubic, and quintic.
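For crisp (non-fuzzy) data, the cardinal-basis cubic Hermite construction underlying such methods can be sketched as follows; the fuzzy extension-principle machinery of the paper is omitted, and the data are illustrative:

```python
def hermite_cubic(x0, x1, y0, y1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] using the four cardinal
    basis functions h00, h10, h01, h11 (endpoint values y0, y1 and
    endpoint slopes d0, d1)."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2*t**3 - 3*t**2 + 1   # 1 at t=0, 0 at t=1 (value at left end)
    h10 = t**3 - 2*t**2 + t     # matches the slope at t=0
    h01 = -2*t**3 + 3*t**2      # 1 at t=1, 0 at t=0 (value at right end)
    h11 = t**3 - t**2           # matches the slope at t=1
    return y0*h00 + h*d0*h10 + y1*h01 + h*d1*h11

# Reproduces f(x) = x**2 exactly from endpoint values and derivatives
print(hermite_cubic(1.0, 3.0, 1.0, 9.0, 2.0, 6.0, 2.0))  # -> 4.0
```

Each of the four basis functions is "cardinal" in the sense that it carries exactly one of the four endpoint data (a value or a slope) and vanishes on the other three, which is the vanishing property the abstract refers to.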

  16. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    Science.gov (United States)

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately for wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
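The two-step estimation process described above can be sketched as follows; the predictors, coefficients, and threshold are illustrative placeholders, not fitted values from the study:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def estimate_precip(elev_km, dist_km, b_occ=(-0.2, 0.8, -0.05),
                    b_amt=(2.0, 3.5, -0.1), wet_threshold=0.5):
    """Two-step estimate at an ungauged grid point: a logistic model first
    decides precipitation occurrence; a linear model then gives the amount
    on wet days. All coefficients are hypothetical placeholders."""
    p_wet = sigmoid(b_occ[0] + b_occ[1] * elev_km + b_occ[2] * dist_km)
    if p_wet < wet_threshold:
        return 0.0  # classified dry: no amount is estimated
    return max(0.0, b_amt[0] + b_amt[1] * elev_km + b_amt[2] * dist_km)

print(estimate_precip(2.0, 5.0))   # -> 8.5 (wet high-elevation point)
print(estimate_precip(0.1, 60.0))  # -> 0.0 (dry distant low point)
```

Separating occurrence from amount avoids the smearing of drizzle over dry areas that a single regression tends to produce, which is the motivation the abstract gives for the two-step scheme.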

  17. Differential Interpolation Effects in Free Recall

    Science.gov (United States)

    Petrusic, William M.; Jamieson, Donald G.

    1978-01-01

    Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…

  18. Transfinite C2 interpolant over triangles

    International Nuclear Information System (INIS)

    Alfeld, P.; Barnhill, R.E.

    1984-01-01

A transfinite C2 interpolant on a general triangle is created. The required data are essentially C2, no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C2 interpolants in an appendix

  19. Neutron Flux Interpolation with Finite Element Method in the Nuclear Fuel Cell Calculation using Collision Probability Method

    International Nuclear Information System (INIS)

    Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti

    2010-01-01

Nuclear reactor design and analysis of next-generation reactors require comprehensive computing, which is best executed on high-performance computing platforms. The flat flux (FF) approach is a common approach for solving the integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even when the neutron cross section is assumed to be equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region will differ depending on the chosen interpolation model. In this study, linear interpolation using the Finite Element Method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry, because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculation results are in good agreement with the results from the SRAC code. In this study, the effects of the mesh on keff and other parameters are also investigated.

  20. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the parameter of the rational NURBS curve representation. The velocity planning is then matched to the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
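The velocity-planning step rests on a Taylor expansion of the curve parameter with respect to time under a commanded feedrate: du/dt = v/|C'(u)|. A minimal second-order sketch for a generic parametric curve follows; a real interpolator would supply the NURBS derivatives, so the example curve and names here are this sketch's assumptions:

```python
def taylor_param_step(dC, ddC, u, feedrate, T):
    """Second-order Taylor estimate of the next curve parameter for a
    constant-feedrate interpolator with sampling period T.
    dC(u) and ddC(u) return the first and second derivatives of C(u)."""
    dx, dy = dC(u)
    s1 = (dx * dx + dy * dy) ** 0.5          # |C'(u)|
    ddx, ddy = ddC(u)
    ds1 = (dx * ddx + dy * ddy) / s1         # d|C'(u)|/du = (C'.C'')/|C'|
    du_dt = feedrate / s1                    # first-order term
    d2u_dt2 = -(feedrate ** 2) * ds1 / s1**3 # second-order correction
    return u + du_dt * T + 0.5 * d2u_dt2 * T * T

# Example curve C(u) = (u, u**2): dC = (1, 2u), ddC = (0, 2)
u_next = taylor_param_step(lambda u: (1.0, 2 * u), lambda u: (0.0, 2.0),
                           u=0.5, feedrate=10.0, T=0.001)
print(u_next)  # slightly less than the first-order estimate 0.50707...
```

Truncating the expansion is what introduces the interpolation (chord) error that the velocity planner then has to keep below the tolerance.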

  1. An Improved Rotary Interpolation Based on FPGA

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2014-08-01

Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed rotary interpolation algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic terms are conducive to the interpolation operation; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, showing that it is highly suited for real-time applications.

  2. Dynamic Stability Analysis Using High-Order Interpolation

    Directory of Open Access Journals (Sweden)

    Juarez-Toledo C.

    2012-10-01

Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluating the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
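Newton's divided differences, one of the two interpolation schemes named above, can be sketched as follows; the data values are illustrative, not from the power-system study:

```python
def divided_differences(xs, ys):
    """Newton's divided-difference coefficients for the polynomial
    interpolating the points (xs[i], ys[i]), computed in place."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial at x (Horner-like scheme)."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs = [0.0, 1.0, 2.0, 4.0]
ys = [1.0, 2.0, 9.0, 65.0]          # samples of x**3 + 1
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 3.0))   # -> 28.0, exact for cubic data
```

The divided-difference form is convenient for the kind of high-order fitting the abstract describes because adding a new sample only appends one coefficient rather than refitting the whole polynomial.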

  3. An approach to multiobjective optimization of rotational therapy. II. Pareto optimal surfaces and linear combinations of modulated blocked arcs for a prostate geometry.

    Science.gov (United States)

    Pardo-Montero, Juan; Fenwick, John D

    2010-06-01

    The purpose of this work is twofold: To further develop an approach to multiobjective optimization of rotational therapy treatments recently introduced by the authors [J. Pardo-Montero and J. D. Fenwick, "An approach to multiobjective optimization of rotational therapy," Med. Phys. 36, 3292-3303 (2009)], especially regarding its application to realistic geometries, and to study the quality (Pareto optimality) of plans obtained using such an approach by comparing them with Pareto optimal plans obtained through inverse planning. In the previous work of the authors, a methodology is proposed for constructing a large number of plans, with different compromises between the objectives involved, from a small number of geometrically based arcs, each arc prioritizing different objectives. Here, this method has been further developed and studied. Two different techniques for constructing these arcs are investigated, one based on image-reconstruction algorithms and the other based on more common gradient-descent algorithms. The difficulty of dealing with organs abutting the target, briefly reported in previous work of the authors, has been investigated using partial OAR unblocking. Optimality of the solutions has been investigated by comparison with a Pareto front obtained from inverse planning. A relative Euclidean distance has been used to measure the distance of these plans to the Pareto front, and dose volume histogram comparisons have been used to gauge the clinical impact of these distances. A prostate geometry has been used for the study. For geometries where a blocked OAR abuts the target, moderate OAR unblocking can substantially improve target dose distribution and minimize hot spots while not overly compromising dose sparing of the organ. Image-reconstruction type and gradient-descent blocked-arc computations generate similar results. The Pareto front for the prostate geometry, reconstructed using a large number of inverse plans, presents a hockey-stick shape

  4. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employ spectral residue to choose between, and weight, control-grid-interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.

  5. Non-linear analysis of a closure manway using spiral wound gasket with metal-metal contact and a new geometry approach

    International Nuclear Information System (INIS)

    Jesus Miranda, C.A. de.

    1992-01-01

The results of a PWR pressurizer closure manway analysis are presented. The manway geometry differs slightly from the conventional solution, with the goal of reducing the bending stresses in the bolts when the system is pressurized, so that the bolt stress values will also be reduced. The viability of the proposed solution is confirmed by verification of the stresses in the bolts connecting the blind flange to the nozzle per ASME III, subsection NB, and by the level of tightness reached in the spiral wound (type SG) gasket, based on the criteria defined in the references. (author)

  6. Molecular geometry

    CERN Document Server

    Rodger, Alison

    1995-01-01

    Molecular Geometry discusses topics relevant to the arrangement of atoms. The book is comprised of seven chapters that tackle several areas of molecular geometry. Chapter 1 reviews the definition and determination of molecular geometry, while Chapter 2 discusses the unified view of stereochemistry and stereochemical changes. Chapter 3 covers the geometry of molecules of second row atoms, and Chapter 4 deals with the main group elements beyond the second row. The book also talks about the complexes of transition metals and f-block elements, and then covers the organometallic compounds and trans

  7. Feedrate optimization in 5-axis machining based on direct trajectory interpolation on the surface using an open cnc

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2014-01-01

    International audience; In the common machining process of free-form surfaces, CAM software generates approximated tool paths because of the input tool path format of the industrial CNC. Then, marks on finished surfaces may appear due to non smooth feedrate planning during interpolation. The Direct Trajectory Interpolation on the Surface (DTIS) method allows managing the tool path geometry and the kinematical parameters to achieve higher productivity and a better surface quality. Machining ex...

  8. The Diffraction Response Interpolation Method

    DEFF Research Database (Denmark)

    Jespersen, Søren Kragh; Wilhjelm, Jens Erik; Pedersen, Peder C.

    1998-01-01

Computer modeling of the output voltage in a pulse-echo system is computationally very demanding, particularly when considering reflector surfaces of arbitrary geometry. A new, efficient computational tool, the diffraction response interpolation method (DRIM), for modeling of reflectors in a fluid medium, is presented. The DRIM is based on the velocity potential impulse response method, adapted to pulse-echo applications by the use of acoustical reciprocity. Specifically, the DRIM operates by dividing the reflector surface into planar elements, finding the diffraction response at the corners...

  9. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  10. Optical geometry

    International Nuclear Information System (INIS)

    Robinson, I.; Trautman, A.

    1988-01-01

    The geometry of classical physics is Lorentzian; but weaker geometries are often more appropriate: null geodesics and electromagnetic fields, for example, are well known to be objects of conformal geometry. To deal with a single null congruence, or with the radiative electromagnetic fields associated with it, even less is needed: flag geometry for the first, optical geometry, with which this paper is chiefly concerned, for the second. The authors establish a natural one-to-one correspondence between optical geometries, considered locally, and three-dimensional Cauchy-Riemann structures. A number of Lorentzian geometries are shown to be equivalent from the optical point of view. For example the Goedel universe, the Taub-NUT metric and Hauser's twisting null solution have an optical geometry isomorphic to the one underlying the Robinson congruence in Minkowski space. The authors present general results on the problem of lifting a CR structure to a Lorentz manifold and, in particular, to Minkowski space; and exhibit the relevance of the deviation form to this problem

  11. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  12. Introduction to tropical geometry

    CERN Document Server

    Maclagan, Diane

    2015-01-01

    Tropical geometry is a combinatorial shadow of algebraic geometry, offering new polyhedral tools to compute invariants of algebraic varieties. It is based on tropical algebra, where the sum of two numbers is their minimum and the product is their sum. This turns polynomials into piecewise-linear functions, and their zero sets into polyhedral complexes. These tropical varieties retain a surprising amount of information about their classical counterparts. Tropical geometry is a young subject that has undergone a rapid development since the beginning of the 21st century. While establishing itself as an area in its own right, deep connections have been made to many branches of pure and applied mathematics. This book offers a self-contained introduction to tropical geometry, suitable as a course text for beginning graduate students. Proofs are provided for the main results, such as the Fundamental Theorem and the Structure Theorem. Numerous examples and explicit computations illustrate the main concepts. Each of t...

  13. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    Science.gov (United States)

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected, and here they are utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV and 11.6 μV (mean), 7.8 μV and 8.9 μV (median), and 9.8 μV and 9.3 μV (standard deviation) per heartbeat, respectively.
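The piecewise linear baseline idea can be sketched as follows; the Letter's anchor-point detection and interval segmentation are omitted, and the drift signal is synthetic:

```python
def remove_baseline(signal, anchors):
    """Subtract a piecewise-linear baseline drawn through known isoelectric
    anchor points given as (index, value) pairs. A simplified sketch of the
    idea, without the Letter's per-interval segmentation."""
    corrected = list(signal)
    for (i0, v0), (i1, v1) in zip(anchors, anchors[1:]):
        for n in range(i0, i1 + 1):
            baseline = v0 + (v1 - v0) * (n - i0) / (i1 - i0)
            corrected[n] = signal[n] - baseline
    return corrected

# Signal = flat ECG segment (zeros) plus a linear drift of 0.01 per sample
drift = [0.01 * n for n in range(11)]
anchors = [(0, drift[0]), (5, drift[5]), (10, drift[10])]
out = remove_baseline(drift, anchors)
print(max(abs(v) for v in out))  # drift removed: residual is ~0
```

Because only one multiply-add per sample is needed between anchors, this kind of correction is cheap enough for the wearable-device setting the Letter targets.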

  14. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

    Directory of Open Access Journals (Sweden)

    Huaiqing Zhang

    2014-01-01

    Spectral leakage has a harmful effect on the accuracy of harmonic analysis for asynchronous sampling. This paper proposed a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. Firstly, a fundamental period is evaluated by a zero-crossing technique with fourth-order Newton's interpolation; then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters can be calculated by FFT on the synchronized sample data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolations, the RBF is a global approximation method which can acquire more accurate results while its time consumption is about the same as Newton's.
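    The resampling idea can be sketched as follows (an assumed Gaussian kernel with illustrative parameters, not the paper's implementation): fit an RBF interpolant to the non-uniform samples, evaluate it on a synchronous grid, then apply the FFT for harmonic analysis.

```python
import numpy as np

def rbf_resample(t_src, y_src, t_dst, eps=25.0):
    """Gaussian-RBF interpolation of (t_src, y_src) onto the grid t_dst."""
    phi = lambda r: np.exp(-(eps * r) ** 2)           # Gaussian kernel
    A = phi(np.abs(t_src[:, None] - t_src[None, :]))  # interpolation matrix
    w = np.linalg.solve(A, y_src)                     # kernel weights
    return phi(np.abs(t_dst[:, None] - t_src[None, :])) @ w

t_src = np.linspace(0.0, 1.0, 41) ** 1.2        # non-uniform sample times
y_src = np.sin(2 * np.pi * 3 * t_src)           # 3 Hz fundamental
t_dst = np.linspace(0.0, 1.0, 64, endpoint=False)  # synchronous grid
y_dst = rbf_resample(t_src, y_src, t_dst)
spectrum = np.abs(np.fft.rfft(y_dst)) / len(y_dst)  # harmonic amplitudes
```

With the samples synchronized to the fundamental period, the FFT peak lands cleanly in bin 3 instead of leaking across bins.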

  15. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    Science.gov (United States)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With
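    The core DATASPACE idea can be sketched as follows (a rough stand-in: linear interpolation replaces the program's cubic spline routine, and the relaxation data are invented):

```python
import numpy as np

def log_respace(t, y, n):
    """Resample variably spaced (t, y) data so t becomes evenly spaced in log10."""
    log_t = np.log10(t)
    log_t_even = np.linspace(log_t[0], log_t[-1], n)  # even in the log domain
    y_new = np.interp(log_t_even, log_t, y)           # interpolate in log domain
    return 10.0 ** log_t_even, y_new                  # back to the time domain

t = np.array([0.1, 0.15, 0.4, 1.1, 2.0, 7.5, 10.0])  # irregular sample times
E = 1.0 / (1.0 + t)                                   # toy relaxation modulus
t_new, E_new = log_respace(t, E, 5)
# t_new spans 0.1..10 with equal ratios between successive points, i.e.
# closely spaced at short times and widely spaced at long times.
```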

  16. Architectural geometry

    KAUST Repository

    Pottmann, Helmut; Eigensatz, Michael; Vaxman, Amir; Wallner, Johannes

    2014-01-01

    Around 2005 it became apparent in the geometry processing community that freeform architecture contains many problems of a geometric nature to be solved, and many opportunities for optimization which however require geometric understanding. This area of research, which has been called architectural geometry, meanwhile contains a great wealth of individual contributions which are relevant in various fields. For mathematicians, the relation to discrete differential geometry is significant, in particular the integrable system viewpoint. Besides, new application contexts have become available for quite some old-established concepts. Regarding graphics and geometry processing, architectural geometry yields interesting new questions but also new objects, e.g. replacing meshes by other combinatorial arrangements. Numerical optimization plays a major role but in itself would be powerless without geometric understanding. Summing up, architectural geometry has become a rewarding field of study. We here survey the main directions which have been pursued, we show real projects where geometric considerations have played a role, and we outline open problems which we think are significant for the future development of both theory and practice of architectural geometry.

  18. Hydrodynamic description of the long-time tails of the linear and rotational velocity autocorrelation functions of a particle in a confined geometry.

    Science.gov (United States)

    Frydel, Derek; Rice, Stuart A

    2007-12-01

    We report a hydrodynamic analysis of the long-time behavior of the linear and angular velocity autocorrelation functions of an isolated colloid particle constrained to have quasi-two-dimensional motion, and compare the predicted behavior with the results of lattice-Boltzmann simulations. Our analysis uses the singularity method to characterize unsteady linear motion of an incompressible fluid. For bounded fluids we construct an image system with a discrete set of fundamental solutions of the Stokes equation from which we extract the long-time decay of the velocity. For the case that there are free slip boundary conditions at walls separated by H particle diameters, the time evolution of the parallel linear velocity and the perpendicular rotational velocity following impulsive excitation both correspond to the time evolution of a two-dimensional (2D) fluid with effective density ρ_2D = ρH. For the case that there are no slip boundary conditions at the walls, the same types of motion correspond to 2D fluid motions with a coefficient of friction ξ = π²ν/H², modulo a prefactor of order 1, with ν the kinematic viscosity. The linear particle motion perpendicular to the walls also experiences an effective frictional force, but the time dependence is proportional to t⁻², which cannot be related to either pure 3D or pure 2D fluid motion. Our incompressible fluid model predicts correct self-diffusion constants but it does not capture all of the effects of the fluid confinement on the particle motion. In particular, the linear motion of a particle perpendicular to the walls is influenced by coupling between the density flux and the velocity field, which leads to damped velocity oscillations whose frequency is proportional to c_s/H, with c_s the velocity of sound. For particle motion parallel to no slip walls there is a slowing down of a density flux that spreads diffusively, which generates a long-time decay proportional to t⁻¹.

  19. Beautiful geometry

    CERN Document Server

    Maor, Eli

    2014-01-01

    If you've ever thought that mathematics and art don't mix, this stunning visual history of geometry will change your mind. As much a work of art as a book about mathematics, Beautiful Geometry presents more than sixty exquisite color plates illustrating a wide range of geometric patterns and theorems, accompanied by brief accounts of the fascinating history and people behind each. With artwork by Swiss artist Eugen Jost and text by acclaimed math historian Eli Maor, this unique celebration of geometry covers numerous subjects, from straightedge-and-compass constructions to intriguing configur

  20. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  1. Record of the month: Interpol Antics. Records from the Lasering shop

    Index Scriptorium Estoniae

    2005-01-01

    On the records: "Interpol Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"

  2. Revisiting Veerman’s interpolation method

    DEFF Research Database (Denmark)

    Christiansen, Peter; Bay, Niels Oluf

    2016-01-01

    This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation, and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.

  3. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  4. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
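    A frequency-domain integration step of the kind described can be sketched as follows (an assumed textbook form, not Stearns' code: divide the spectrum by jω and zero the DC bin):

```python
import numpy as np

def fft_integrate(x, dt):
    """Integrate a sampled periodic waveform in the frequency domain."""
    n = len(x)
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)      # angular frequency per bin
    with np.errstate(divide="ignore", invalid="ignore"):
        Y = np.where(w != 0, X / (1j * w), 0.0)  # divide by j*omega, zero DC
    return np.fft.ifft(Y).real

dt = 0.01
t = np.arange(0.0, 1.0, dt)
x = np.cos(2 * np.pi * 5 * t)                    # 5 full cycles in the window
y = fft_integrate(x, dt)
expected = np.sin(2 * np.pi * 5 * t) / (2 * np.pi * 5)  # analytic integral
```

For a band-limited signal with an integer number of cycles in the record, this reproduces the analytic integral up to the (removed) constant of integration.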

  5. Wideband DOA Estimation through Projection Matrix Interpolation

    OpenAIRE

    Selva, J.

    2017-01-01

    This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

  6. Interpolation for a subclass of H

    Indian Academy of Sciences (India)

    |g(z_m)| ≤ c |z_m − z*_m| for all m ∈ N. Thus it is natural to pose the following interpolation problem for H∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z*_n| for all n ∈ N, (4) there exists a product fg ∈ H∞.

  7. Reconstruction of reflectance data using an interpolation technique.

    Science.gov (United States)

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of applied color datasets as well as employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space performs better than the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of available samples in the dataset. The resultant spectra that have been reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra as well as CIELAB color differences under the other light source in comparison with those obtained from the standard PCA technique.
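    As a simplified stand-in for the LUT idea (inverse-distance weighting over nearest neighbours in XYZ space replaces the paper's LUT construction and linear-interpolation rule; all data below are invented), a spectrum can be estimated from colorimetric coordinates like this:

```python
import numpy as np

def reconstruct_spectrum(xyz_query, xyz_set, spectra, k=3):
    """Estimate a reflectance spectrum by inverse-distance weighting
    over the k nearest dataset entries in XYZ space."""
    d = np.linalg.norm(xyz_set - xyz_query, axis=1)
    idx = np.argsort(d)[:k]            # k nearest colorimetric neighbours
    w = 1.0 / (d[idx] + 1e-9)          # inverse-distance weights
    w /= w.sum()
    return w @ spectra[idx]            # weighted mix of neighbouring spectra

# Toy dataset: 4 chips, each with XYZ coordinates and a 5-band spectrum.
xyz_set = np.array([[20.0, 20.0, 20.0], [40.0, 30.0, 20.0],
                    [30.0, 50.0, 40.0], [60.0, 55.0, 30.0]])
spectra = np.array([[0.2, 0.3, 0.4, 0.4, 0.3],
                    [0.5, 0.4, 0.3, 0.3, 0.4],
                    [0.3, 0.6, 0.5, 0.2, 0.2],
                    [0.7, 0.6, 0.4, 0.5, 0.6]])
est = reconstruct_spectrum(np.array([35.0, 35.0, 25.0]), xyz_set, spectra)
```

As in the paper, the query must lie inside the gamut of the dataset samples for the weighted mix to be meaningful.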

  8. Image interpolation via graph-based Bayesian label propagation.

    Science.gov (United States)

    Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

    2014-03-01

    In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the problem of interpolation then becomes how to effectively propagate the label information from known points to unknown ones. This process can be posed as a Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term will be minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function to combine together the global loss of the locally linear regression, square error of prediction bias on the available LR samples, and the manifold regularization term. It can be solved with a closed-form solution as a convex optimization problem. Experimental results demonstrate that the proposed method achieves competitive performance with the state-of-the-art image interpolation algorithms.

  9. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material of electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameter of mixed media. In order to accurately predict the electromagnetic parameter of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixture of paraffin base, this paper studied two different interpolation methods: Lagrange interpolation and Hermite interpolation of electromagnetic parameters. The results showed that Hermite interpolation is more accurate than the Lagrange interpolation, and the reflectance calculated with the electromagnetic parameter obtained by interpolation is consistent with that obtained through experiment on the whole. - Highlights: • We use interpolation algorithm on calculation of EM-parameter with limited samples. • Interpolation method can predict EM-parameter well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
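    The qualitative finding, that Hermite interpolation outperforms Lagrange interpolation when derivative information is available, can be reproduced on a toy function (the paper's measured electromagnetic parameters are not available here, so cos(x) stands in):

```python
import numpy as np

def lagrange(xs, ys, x):
    """Classical Lagrange interpolation through the points (xs, ys)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        L = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                L *= (x - xj) / (xi - xj)
        total += yi * L
    return total

def hermite_segment(x0, x1, y0, y1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] from values y and slopes d."""
    h = x1 - x0
    s = (x - x0) / h
    h00 = 2 * s**3 - 3 * s**2 + 1      # Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y0 + h10 * h * d0 + h01 * y1 + h11 * h * d1

f, df = np.cos, lambda x: -np.sin(x)
xs = np.array([0.0, 1.0, 2.0])
err_lag = abs(lagrange(xs, f(xs), 0.5) - f(0.5))
err_her = abs(hermite_segment(0.0, 1.0, f(0.0), f(1.0),
                              df(0.0), df(1.0), 0.5) - f(0.5))
# With slope information available, the Hermite error is smaller here.
```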

  10. Analytische Geometrie

    Science.gov (United States)

    Kemnitz, Arnfried

    The basic idea of analytic geometry is that geometric investigations are carried out by computational means. Geometric objects are described by equations and studied with algebraic methods.

  11. Algebraic geometry

    CERN Document Server

    Lefschetz, Solomon

    2005-01-01

    An introduction to algebraic geometry and a bridge between its analytical-topological and algebraical aspects, this text for advanced undergraduate students is particularly relevant to those more familiar with analysis than algebra. 1953 edition.

  12. Information geometry

    CERN Document Server

    Ay, Nihat; Lê, Hông Vân; Schwachhöfer, Lorenz

    2017-01-01

    The book provides a comprehensive introduction and a novel mathematical foundation of the field of information geometry with complete proofs and detailed background material on measure theory, Riemannian geometry and Banach space theory. Parametrised measure models are defined as fundamental geometric objects, which can be both finite or infinite dimensional. Based on these models, canonical tensor fields are introduced and further studied, including the Fisher metric and the Amari-Chentsov tensor, and embeddings of statistical manifolds are investigated. This novel foundation then leads to application highlights, such as generalizations and extensions of the classical uniqueness result of Chentsov or the Cramér-Rao inequality. Additionally, several new application fields of information geometry are highlighted, for instance hierarchical and graphical models, complexity theory, population genetics, or Markov Chain Monte Carlo. The book will be of interest to mathematicians who are interested in geometry, inf...

  13. A Whirlwind Tour of Computational Geometry.

    Science.gov (United States)

    Graham, Ron; Yao, Frances

    1990-01-01

    Described is computational geometry, which uses concepts and results from classical geometry, topology, and combinatorics, as well as standard algorithmic techniques such as sorting and searching, graph manipulations, and linear programming. Also included are special techniques and paradigms. (KR)

  14. Viscous properties of isotropic fluids composed of linear molecules: departure from the classical Navier-Stokes theory in nano-confined geometries.

    Science.gov (United States)

    Hansen, J S; Daivis, Peter J; Todd, B D

    2009-10-01

    In this paper we present equilibrium molecular-dynamics results for the shear, rotational, and spin viscosities for fluids composed of linear molecules. The density dependence of the shear viscosity follows a stretched exponential function, whereas the rotational viscosity and the spin viscosities show approximately power-law dependencies. The frequency-dependent shear and spin viscosities are also studied. It is found that viscoelastic behavior is first manifested in the shear viscosity and that the real part of the spin viscosities features a maximum for nonzero frequency. The calculated transport coefficients are used together with the extended Navier-Stokes equations to investigate the effect of the coupling between the intrinsic angular momentum and linear momentum for highly confined fluids. Both steady and oscillatory flows are studied. It is shown, for example, that the fluid flow rate for Poiseuille flow is reduced by up to 10% in a 2 nm channel for a buta-triene fluid at density 236 kg m(-3) and temperature 306 K. The coupling effect may, therefore, become very important for nanofluidic applications.

  15. MODIS Snow Cover Recovery Using Variational Interpolation

    Science.gov (United States)

    Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Cloud obscuration is one of the major problems that limit the usage of satellite images in general and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational inefficiency is a main drawback that limits applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking up when applied to much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method, an ability that is obtained by maintaining the distribution of the weight set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images have high accuracy in capturing the dynamical changes of snow, in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which is able to provide an overview of snow trends over CONUS for nearly two decades. ACKNOWLEDGMENTS We would like to acknowledge NASA, NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), Cooperative Institute for Climate and Satellites (CICS), Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.

  16. On the 1/N expansion of the two-dimensional non-linear sigma-model: The vestige of chiral geometry

    International Nuclear Information System (INIS)

    Flume, R.

    1978-11-01

    We investigate the functioning of the O(N)-symmetry of the non-linear two-dimensional sigma-model using the 1/N expansion. The mechanism of O(N)-symmetry restoration is made explicit. We show that the O(N) invariant operators are in a one-to-one correspondence with the (c-number) invariants of the classical model. We observe a phenomenon, important in the context of the symmetry restoration, which might be called 'transmutation of anomalies'. That is, an anomaly of the equations of motion appearing before a summation of graphs contributing to the leading order of 1/N as a short distance effect becomes, after the summation, a long-distance effect. (orig.) [de

  17. Interpolation on the manifold of K component GMMs.

    Science.gov (United States)

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features. (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
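    A naive Euclidean baseline for the closure requirement mentioned above (explicitly NOT the paper's geometry-respecting algorithm): interpolating two K-component 1-D GMMs component-wise in parameter space keeps the result a K-component GMM, though it ignores the manifold structure the authors work with.

```python
import numpy as np

def gmm_lerp(w0, mu0, var0, w1, mu1, var1, t):
    """Component-wise linear interpolation of two K-component 1-D GMMs.
    Naive Euclidean baseline: assumes components are already matched up."""
    w = (1 - t) * w0 + t * w1
    w /= w.sum()                       # keep mixture weights normalised
    mu = (1 - t) * mu0 + t * mu1
    var = (1 - t) * var0 + t * var1
    return w, mu, var

# Midpoint between two 2-component mixtures (invented parameters):
w, mu, var = gmm_lerp(np.array([0.5, 0.5]), np.array([0.0, 3.0]), np.array([1.0, 1.0]),
                      np.array([0.3, 0.7]), np.array([1.0, 4.0]), np.array([1.0, 2.0]),
                      0.5)
# The result is again a valid 2-component GMM.
```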

  18. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009. Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods—e.g., spatio-temporal kriging, spatio-temporal inverse distance weighting, and point estimation model of biased hospitals-based area disease estimation methods.
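    The combination idea can be sketched with deliberately simple estimators (IDW in space, linear interpolation in time, and a fixed blending weight; the paper's heterogeneous covariance functions and correlation-based weighting are not reproduced):

```python
import numpy as np

def spatial_idw(coords, values, target, p=2):
    """Spatial estimate: inverse-distance weighting over neighbouring stations."""
    d = np.linalg.norm(coords - target, axis=1)
    w = 1.0 / (d**p + 1e-12)
    return np.sum(w * values) / np.sum(w)

def temporal_linear(t0, v0, t1, v1, t):
    """Temporal estimate: linear interpolation between two records at one station."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def hybrid(spatial_est, temporal_est, alpha=0.5):
    """Blend the two estimates; alpha is a fixed illustrative weight."""
    return alpha * spatial_est + (1 - alpha) * temporal_est

# Missing temperature at station (1, 0), time t=1 (invented data):
s_est = spatial_idw(np.array([[0.0, 0.0], [2.0, 0.0]]),
                    np.array([10.0, 14.0]), np.array([1.0, 0.0]))
t_est = temporal_linear(0.0, 10.0, 2.0, 14.0, 1.0)
filled = hybrid(s_est, t_est)
```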

  19. Comparison of spatial interpolation techniques to predict soil properties in the colombian piedmont eastern plains

    Directory of Open Access Journals (Sweden)

    Mauricio Castro Franco

    2017-07-01

    Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information about these effects, the soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of terrain indices calculated from a digital elevation model (DEM). The "Random forest" algorithm was used to select the most important terrain index for each soil property. Error metrics based on cross-validation were used to assess the interpolations. Results: The results support the underlying assumption that the conditioned Latin hypercube adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in the prediction of most of the evaluated soil properties. Conclusions: Hybrid interpolation techniques that incorporate auxiliary soil information and terrain indices provided a significant improvement in the prediction of soil properties in comparison with the other techniques.

  20. Interpolated sagittal and coronal reconstruction of CT images in the screening of neck abnormalities

    International Nuclear Information System (INIS)

    Koga, Issei

    1983-01-01

    Reconstructed sagittal and coronal images were analyzed for their usefulness in clinical applications and to determine the correct use of reconstruction techniques. Reconstructed stereoscopic images can be formed by continuous or interrupted image reconstruction using interpolation. This study showed that lesions less than 10 mm in diameter should be scanned continuously and reconstructed with the uninterrupted technique. However, 5 mm interruption distances are acceptable for interpolated reconstruction, except in cases of lesions less than 10 mm in diameter. Clinically, interpolated reconstruction is not adequate for semicircular lesions less than 10 mm. Blood vessels and linear lesions are good candidates for the application of interpolated reconstruction. Reconstruction of images using interrupted interpolation is therefore recommended for screening and for demonstrating correct stereoscopic information, except in cases of small lesions less than 10 mm in diameter. Results of this study underscore the fact that obscure information in transverse CT images should be routinely clarified by interpolating reconstruction techniques if transverse images are not made continuously. Interpolated reconstruction may be helpful in obtaining stereoscopic information. (author)

  1. Analytic geometry

    CERN Document Server

    Burdette, A C

    1971-01-01

    Analytic Geometry covers several fundamental aspects of analytic geometry needed for advanced subjects, including calculus.This book is composed of 12 chapters that review the principles, concepts, and analytic proofs of geometric theorems, families of lines, the normal equation of the line, and related matters. Other chapters highlight the application of graphing, foci, directrices, eccentricity, and conic-related topics. The remaining chapters deal with the concept polar and rectangular coordinates, surfaces and curves, and planes.This book will prove useful to undergraduate trigonometric st

  2. Geometry Revealed

    CERN Document Server

    Berger, Marcel

    2010-01-01

    Both classical geometry and modern differential geometry have been active subjects of research throughout the 20th century and lie at the heart of many recent advances in mathematics and physics. The underlying motivating concept for the present book is that it offers readers the elements of a modern geometric culture by means of a whole series of visually appealing unsolved (or recently solved) problems that require the creation of concepts and tools of varying abstraction. Starting with such natural, classical objects as lines, planes, circles, spheres, polygons, polyhedra, curves, surfaces,

  3. Noncommutative geometry

    CERN Document Server

    Connes, Alain

    1994-01-01

    This English version of the path-breaking French book on this subject gives the definitive treatment of the revolutionary approach to measure theory, geometry, and mathematical physics developed by Alain Connes. Profusely illustrated and invitingly written, this book is ideal for anyone who wants to know what noncommutative geometry is, what it can do, or how it can be used in various areas of mathematics, quantization, and elementary particles and fields.Key Features* First full treatment of the subject and its applications* Written by the pioneer of this field* Broad applications in mathemat

  4. Finite-dimensional linear algebra

    CERN Document Server

    Gockenbach, Mark S

    2010-01-01

    Some Problems Posed on Vector SpacesLinear equationsBest approximationDiagonalizationSummaryFields and Vector SpacesFields Vector spaces Subspaces Linear combinations and spanning sets Linear independence Basis and dimension Properties of bases Polynomial interpolation and the Lagrange basis Continuous piecewise polynomial functionsLinear OperatorsLinear operatorsMore properties of linear operatorsIsomorphic vector spaces Linear operator equations Existence and uniqueness of solutions The fundamental theorem; inverse operatorsGaussian elimination Newton's method Linear ordinary differential eq

  5. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  6. New families of interpolating type IIB backgrounds

    Science.gov (United States)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are mathbb{T}2 fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either mathcal{N}=2 or mathcal{N}=1 supersymmetry. In the mathcal{N}=2 case it can be shown that the solutions are regular.

  7. Interpolation of quasi-Banach spaces

    International Nuclear Information System (INIS)

    Tabacco Vignati, A.M.

    1986-01-01

    This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces

  8. Projective Geometry

    Indian Academy of Sciences (India)

    mathematicians are trained to use very precise language, and so find it hard to simplify and state .... thing. If you take a plane on which there are two such triangles which enjoy the above ... within this geometry to simplify things if needed.

  9. Geometry

    Indian Academy of Sciences (India)

    Parallel: A pair of lines in a plane is said to be parallel if they do not meet. Mathematicians were at war ... Subsequently, Poincare, Klein, Beltrami and others refined non-. Euclidean geometry. ... plane divides the plane into two half planes and.

  10. The effect of interpolation methods in temperature and salinity trends in the Western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. VARGAS-YANEZ

    2012-04-01

    Full Text Available Temperature and salinity data in the historical record are scarce and unevenly distributed in space and time, and the estimation of linear trends is sensitive to different factors. In the case of the Western Mediterranean, previous works have studied the sensitivity of these trends to the use of bathythermograph data, the averaging methods, or the way in which gaps in time series are dealt with. In this work, a new factor is analysed: the effect of data interpolation. Temperature and salinity time series are generated by averaging existing data over certain geographical areas and also by means of interpolation. Linear trends from both types of time series are compared. There are some differences between the two estimations for some layers and geographical areas, while in other cases the results are consistent. Those results which do not depend on the use of interpolated or non-interpolated data, and which are not influenced by the data analysis methods, can be considered robust. Those results influenced by the interpolation process, or by the factors analysed in previous sensitivity tests, are not considered robust.

  11. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    Science.gov (United States)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects. Hence, handling the occurrence of outliers requires special attention. It is therefore important to determine suitable ways of treating outliers so as to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method to treat outliers via a linear interpolation method. In fact, treating an outlier as a missing value in the dataset allows the application of the interpolation method to interpolate the outliers, thus enabling the comparison of data series using forecast accuracy before and after outlier treatment. The monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, which produced an improved time series, displayed better results when compared to the original time series data in forecasting from both Box-Jenkins and neural network approaches.
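    The idea described above, treating a flagged outlier as a missing value and filling it by linear interpolation between its nearest valid neighbours, can be sketched in a few lines of Python. The z-score flagging rule, its threshold, and the sample series below are illustrative assumptions, not taken from the paper; boundary outliers are not handled in this sketch:

    ```python
    def interpolate_outliers(series, z=2.0):
        """Replace outliers (flagged by a simple z-score rule) with
        linearly interpolated values, treating them as missing data."""
        n = len(series)
        mean = sum(series) / n
        std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
        keep = [abs(x - mean) <= z * std for x in series]
        out = list(series)
        for i in range(n):
            if keep[i]:
                continue
            # nearest kept neighbours on each side (assumes interior outliers)
            lo = next(j for j in range(i - 1, -1, -1) if keep[j])
            hi = next(j for j in range(i + 1, n) if keep[j])
            t = (i - lo) / (hi - lo)
            out[i] = series[lo] + t * (series[hi] - series[lo])
        return out

    cleaned = interpolate_outliers([10, 11, 12, 500, 14, 15])
    ```

    In this toy series the value 500 is flagged and replaced by the midpoint of its neighbours, 13.0, while all other points pass through unchanged.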

  12. The interpolation method based on endpoint coordinate for CT three-dimensional image

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Ueno, Shigeru.

    1997-01-01

    Image interpolation is frequently used to improve resolution in the slice direction so that it approaches the in-plane spatial resolution. Improved quality of reconstructed three-dimensional images can be attained with this technique as a result. Linear interpolation is a well-known and widely used method. The distance-image method, which is a non-linear interpolation technique, is also used to convert CT-value images to distance images. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are initially converted to binary images by thresholding them, and then sequences of pixels with value 1 are arranged in vertical or horizontal directions. A sequence of pixels with value 1 is defined as a line segment, which has a start point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
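    The core step of the end-point method, composing an intermediate pixel run by linearly interpolating the start and end points of two corresponding runs on adjacent slices, can be sketched as follows. The representation of a run as a (start, end) column pair and the sample values are illustrative assumptions:

    ```python
    def interpolate_segment(seg_a, seg_b, t):
        """Linearly interpolate the start and end points of two
        corresponding 1-valued pixel runs on adjacent binary slices.
        t = 0 reproduces seg_a, t = 1 reproduces seg_b."""
        (s0, e0), (s1, e1) = seg_a, seg_b
        start = round(s0 + t * (s1 - s0))
        end = round(e0 + t * (e1 - e0))
        return start, end

    # a run spanning columns 10-20 on slice k and 14-28 on slice k+1
    mid = interpolate_segment((10, 20), (14, 28), 0.5)
    ```

    The interpolated slice halfway between the two inputs then contains a run from column 12 to column 24; repeating this for every matched pair of segments yields the composed binary slice.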

  13. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    Full Text Available This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. Sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  14. BV-norm convergence of interpolation approximations for Frobenius-Perron operators

    International Nuclear Information System (INIS)

    Ding, J; Rhee, N

    2008-01-01

    Let S be a chaotic transformation from an interval into itself and let P be the corresponding Frobenius-Perron operator associated with S. In this paper we prove the convergence under the variation norm for a piecewise linear interpolation method that was recently proposed by the authors for computing a stationary density of P

  15. Geometry VI

    Indian Academy of Sciences (India)

    meet in edges (linear segments) that are E in number. The edges terminate in ... V - E = 0 (Exercise). Euler's proof is entirely analogous and quite elementary. .... linear algebra and lies at the basis of all geometrical research. The reader is ...

  16. An introduction to incidence geometry

    CERN Document Server

    De Bruyn, Bart

    2016-01-01

    This book gives an introduction to the field of Incidence Geometry by discussing the basic families of point-line geometries and introducing some of the mathematical techniques that are essential for their study. The families of geometries covered in this book include among others the generalized polygons, near polygons, polar spaces, dual polar spaces and designs. Also the various relationships between these geometries are investigated. Ovals and ovoids of projective spaces are studied and some applications to particular geometries will be given. A separate chapter introduces the necessary mathematical tools and techniques from graph theory. This chapter itself can be regarded as a self-contained introduction to strongly regular and distance-regular graphs. This book is essentially self-contained, only assuming the knowledge of basic notions from (linear) algebra and projective and affine geometry. Almost all theorems are accompanied with proofs and a list of exercises with full solutions is given at the end...

  17. Riemannian geometry

    CERN Document Server

    Petersen, Peter

    2016-01-01

    Intended for a one year course, this text serves as a single source, introducing readers to the important techniques and theorems, while also containing enough background on advanced topics to appeal to those students wishing to specialize in Riemannian geometry. This is one of the few Works to combine both the geometric parts of Riemannian geometry and the analytic aspects of the theory. The book will appeal to a readership that have a basic knowledge of standard manifold theory, including tensors, forms, and Lie groups. Important revisions to the third edition include: a substantial addition of unique and enriching exercises scattered throughout the text; inclusion of an increased number of coordinate calculations of connection and curvature; addition of general formulas for curvature on Lie Groups and submersions; integration of variational calculus into the text allowing for an early treatment of the Sphere theorem using a proof by Berger; incorporation of several recent results about manifolds with posit...

  18. Special geometry

    International Nuclear Information System (INIS)

    Strominger, A.

    1990-01-01

    A special manifold is an allowed target manifold for the vector multiplets of D=4, N=2 supergravity. These manifolds are of interest for string theory because the moduli spaces of Calabi-Yau threefolds and c=9, (2,2) conformal field theories are special. Previous work has given a local, coordinate-dependent characterization of special geometry. A global description of special geometries is given herein, and their properties are studied. A special manifold M of complex dimension n is characterized by the existence of a holomorphic Sp(2n+2,R)xGL(1,C) vector bundle over M with a nowhere-vanishing holomorphic section Ω. The Kaehler potential on M is the logarithm of the Sp(2n+2,R) invariant norm of Ω. (orig.)

  19. Technical note: Improving the AWAT filter with interpolation schemes for advanced processing of high resolution data

    Science.gov (United States)

    Peters, Andre; Nehls, Thomas; Wessolek, Gerd

    2016-06-01

    Weighing lysimeters with appropriate data filtering yield the most precise and unbiased information for precipitation (P) and evapotranspiration (ET). A recently introduced filter scheme for such data is the AWAT (Adaptive Window and Adaptive Threshold) filter (Peters et al., 2014). The filter applies an adaptive threshold to separate significant from insignificant mass changes, guaranteeing that P and ET are not overestimated, and uses a step interpolation between the significant mass changes. In this contribution we show that the step interpolation scheme, which reflects the resolution of the measuring system, can lead to unrealistic prediction of P and ET, especially if they are required in high temporal resolution. We introduce linear and spline interpolation schemes to overcome these problems. To guarantee that medium to strong precipitation events abruptly following low or zero fluxes are not smoothed in an unfavourable way, a simple heuristic selection criterion is used, which attributes such precipitations to the step interpolation. The three interpolation schemes (step, linear and spline) are tested and compared using a data set from a grass-reference lysimeter with 1 min resolution, ranging from 1 January to 5 August 2014. The selected output resolutions for P and ET prediction are 1 day, 1 h and 10 min. As expected, the step scheme yielded reasonable flux rates only for a resolution of 1 day, whereas the other two schemes are well able to yield reasonable results for any resolution. The spline scheme returned slightly better results than the linear scheme concerning the differences between filtered values and raw data. Moreover, this scheme allows continuous differentiability of filtered data so that any output resolution for the fluxes is sound. Since computational burden is not problematic for any of the interpolation schemes, we suggest always using the spline scheme.
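    The difference between the step and linear schemes discussed above can be illustrated with a minimal sketch, assuming the filter output is a series of significant cumulative-mass points (time, mass): the step scheme holds the last significant value, while the linear scheme spreads the mass change evenly between significant points. The data points below are hypothetical:

    ```python
    def step_interp(ts, ys, t):
        """Step interpolation: hold the last significant value."""
        val = ys[0]
        for ti, yi in zip(ts, ys):
            if ti <= t:
                val = yi
            else:
                break
        return val

    def linear_interp(ts, ys, t):
        """Linear interpolation between significant mass changes."""
        if t <= ts[0]:
            return ys[0]
        for i in range(1, len(ts)):
            if t <= ts[i]:
                f = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
                return ys[i - 1] + f * (ys[i] - ys[i - 1])
        return ys[-1]

    ts, ys = [0, 10, 20], [0.0, 2.0, 2.0]  # cumulative mass changes (e.g. mm)
    step_val = step_interp(ts, ys, 5)      # step scheme: no mass change yet
    lin_val = linear_interp(ts, ys, 5)     # linear scheme: half the change
    ```

    At t = 5 the step scheme still reports 0.0 while the linear scheme reports 1.0; this is exactly why fluxes derived from the step scheme become unrealistic at high output resolution.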

  20. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  1. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    . In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  2. Technique for image interpolation using polynomial transforms

    NARCIS (Netherlands)

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

    1993-01-01

    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

  3. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    Image resampling is of particular interest in digital radiology. When an image is resampled to a new set of coordinates, blocking artifacts and image changes appear. To enhance image quality, interpolation algorithms have been used. Resampling is used to increase the number of points in an image to improve its appearance for display. The process of interpolation fits a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained by Digora, CDR and scanning of Ektaspeed Plus periapical radiograms of a dry skull and a human subject. The subjects were exposed with an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method would provide the better image, seven functions were compared: (1) nearest neighbor, (2) linear, (3) non-linear, (4) facet model, (5) cubic convolution, (6) cubic spline, (7) gray segment expansion. The resampled images were compared in terms of SNR (Signal to Noise Ratio) and MTF (Modulation Transfer Function) coefficient values. The obtained results were as follows: 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences in SNR values among CDR, Digora and film scan (P<0.05). 4. There were significant differences in MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. Computation was fastest with the nearest neighbor method and slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. Better edge sharpness was obtained with the gray segment expansion method
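    The second of the compared functions, (bi)linear interpolation for image resampling, can be sketched in pure Python: each output pixel is mapped back to input coordinates and computed as a weighted average of its four nearest input pixels. This is a generic illustration of the technique, not the study's implementation; the tiny 2x2 input image is an assumption:

    ```python
    def bilinear_resample(img, new_h, new_w):
        """Resample a 2-D image (list of rows) to new_h x new_w
        using bilinear interpolation."""
        h, w = len(img), len(img[0])
        out = [[0.0] * new_w for _ in range(new_h)]
        for i in range(new_h):
            for j in range(new_w):
                # map the output pixel back to input coordinates
                y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
                x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
                y0, x0 = int(y), int(x)
                y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
                dy, dx = y - y0, x - x0
                # weighted average of the four surrounding input pixels
                out[i][j] = ((1 - dy) * (1 - dx) * img[y0][x0]
                             + (1 - dy) * dx * img[y0][x1]
                             + dy * (1 - dx) * img[y1][x0]
                             + dy * dx * img[y1][x1])
        return out

    up = bilinear_resample([[0.0, 10.0], [10.0, 20.0]], 3, 3)
    ```

    Nearest-neighbor resampling would instead copy the closest input pixel, which is faster (consistent with result 5 above) but produces the blocking artifacts mentioned at the start of the abstract.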

  4. General Geometry and Geometry of Electromagnetism

    OpenAIRE

    Shahverdiyev, Shervgi S.

    2002-01-01

    It is shown that Electromagnetism creates a geometry different from Riemannian geometry. General geometry, including Riemannian geometry as a special case, is constructed. It is proven that the simplest special case of General Geometry is the geometry underlying Electromagnetism. The action for the electromagnetic field and the Maxwell equations are derived from the curvature function of the geometry underlying Electromagnetism. And it is shown that the equation of motion for a particle interacting with electromagnetic...

  5. Application of an enhanced cross-section interpolation model for highly poisoned LWR core calculations

    International Nuclear Information System (INIS)

    Palau, J.M.; Cathalau, S.; Hudelot, J.P.; Barran, F.; Bellanger, V.; Magnaud, C.; Moreau, F.

    2011-01-01

    Burnable poisons are extensively used by Light Water Reactor designers in order to preserve the fuel reactivity potential and increase the cycle length (without increasing the uranium enrichment). In the industrial two-step (2D assembly transport / 3D core diffusion) calculation schemes, these heterogeneities lead to strong flux and cross-section perturbations that have to be taken into account in the final 3D burn-up calculations. This paper presents the application of an enhanced cross-section interpolation model (implemented in the French CRONOS2 code) to LWR (highly poisoned) depleted core calculations. The principle is to use the absorber (or actinide) concentrations as the new interpolation parameters instead of the standard local burnup/fluence parameters. By comparing the standard (burnup/fluence) and new (concentration) interpolation models, with the lattice transport code APOLLO2 as a numerical reference, it is shown that the reactivity and local reaction rate prediction of a 2x2 LWR assembly configuration (slab geometry) is significantly improved by the concentration interpolation model. Gains on reactivity and local power predictions (respectively more than 1000 pcm and 20% discrepancy reduction compared to the reference APOLLO2 scheme) are obtained with this model. In particular, when epithermal absorbers are inserted close to a thermal poison, the 'shadowing' ('screening') spectral effects occurring during control operations are modelled much more correctly by concentration parameters. Through this outstanding example it is highlighted that attention has to be paid to the choice of cross-section interpolation parameters (burnup 'indicator') in core calculations with few energy groups and variable geometries all along the irradiation cycle. Actually, this new model could be advantageously applied to steady-state and transient LWR heterogeneous core computational analysis dealing with strong spectral-history variations under

  6. Evaluation of several two-step scoring functions based on linear interaction energy, effective ligand size, and empirical pair potentials for prediction of protein-ligand binding geometry and free energy.

    Science.gov (United States)

    Rahaman, Obaidur; Estrada, Trilce P; Doren, Douglas J; Taufer, Michela; Brooks, Charles L; Armen, Roger S

    2011-09-26

    The performances of several two-step scoring approaches for molecular docking were assessed for their ability to predict binding geometries and free energies. Two new scoring functions designed for "step 2 discrimination" were proposed and compared to our CHARMM implementation of the linear interaction energy (LIE) approach using the Generalized-Born with Molecular Volume (GBMV) implicit solvation model. A scoring function S1 was proposed by considering only "interacting" ligand atoms as the "effective size" of the ligand and extended to an empirical regression-based pair potential S2. The S1 and S2 scoring schemes were trained and 5-fold cross-validated on a diverse set of 259 protein-ligand complexes from the Ligand Protein Database (LPDB). The regression-based parameters for S1 and S2 also demonstrated reasonable transferability in the CSARdock 2010 benchmark using a new data set (NRC HiQ) of diverse protein-ligand complexes. The ability of the scoring functions to accurately predict ligand geometry was evaluated by calculating the discriminative power (DP) of the scoring functions to identify native poses. The parameters for the LIE scoring function with the optimal discriminative power (DP) for geometry (step 1 discrimination) were found to be very similar to the best-fit parameters for binding free energy over a large number of protein-ligand complexes (step 2 discrimination). Reasonable performance of the scoring functions in enrichment of active compounds in four different protein target classes established that the parameters for S1 and S2 provided reasonable accuracy and transferability. Additional analysis was performed to definitively separate scoring function performance from molecular weight effects. This analysis included the prediction of ligand binding efficiencies for a subset of the CSARdock NRC HiQ data set where the number of ligand heavy atoms ranged from 17 to 35. 
This range of ligand heavy atoms is where improved accuracy of predicted ligand

  7. Reconfiguration of face expressions based on the discrete capture data of radial basis function interpolation

    Institute of Scientific and Technical Information of China (English)

    ZHENG Guangguo; ZHOU Dongsheng; WEI Xiaopeng; ZHANG Qiang

    2012-01-01

    Compactly supported radial basis functions enable the coefficient matrix of the linear system solved for the weights to have a sparse banded structure, thereby reducing the complexity of the algorithm. Firstly, based on compactly supported radial basis functions, the paper transforms the complex quadratic function (Multiquadric, MQ for short) and proposes a class of compactly supported MQ functions. Secondly, the paper describes a method that interpolates discrete motion capture data to solve for the motion vectors of the interpolation points, which are used in facial expression reconstruction. Finally, according to the characteristic of the uneven distribution of the face markers, the markers are numbered and grouped in accordance with the density level, and then interpolated group by group. The approach not only ensures the accuracy and smoothness of the deformation of local face areas, but also reduces the time complexity of the computation.
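    The general mechanism, solving a linear system for RBF weights, with compact support making the system matrix sparse, can be sketched in 1-D. This example uses the well-known Wendland C2 function rather than the paper's compactly supported MQ variant, and the centers, values, and support radius are illustrative assumptions:

    ```python
    def wendland_c2(r, support=1.0):
        """Wendland C2 compactly supported RBF: exactly zero for r >= support,
        which is what produces zeros (sparsity) in the system matrix."""
        q = r / support
        return (1 - q) ** 4 * (4 * q + 1) if q < 1 else 0.0

    def solve(A, b):
        """Tiny Gaussian elimination with partial pivoting (for the sketch;
        a real implementation would exploit the banded structure)."""
        n = len(A)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for c in range(n):
            p = max(range(c, n), key=lambda r: abs(M[r][c]))
            M[c], M[p] = M[p], M[c]
            for r in range(c + 1, n):
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][k] * x[k]
                                  for k in range(r + 1, n))) / M[r][r]
        return x

    centers = [0.0, 0.5, 1.0]
    values = [1.0, 2.0, 0.5]
    A = [[wendland_c2(abs(a - b)) for b in centers] for a in centers]
    weights = solve(A, values)

    def rbf_eval(x):
        return sum(w * wendland_c2(abs(x - c)) for w, c in zip(weights, centers))
    ```

    Because the support radius (1.0) is smaller than the distance between the first and last centers, A[0][2] is exactly zero; with many markers this zero pattern becomes the sparse banded structure the abstract refers to, while the interpolant still reproduces the data at every center.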

  8. Comparison of Spatial Interpolation Schemes for Rainfall Data and Application in Hydrological Modeling

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2017-05-01

    Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR and is compared with inverse distance weighting (IDW and multiple linear regression (MLR interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, a hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
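    Of the three schemes compared, inverse distance weighting (IDW) is the simplest to state: each ungauged point gets a weighted average of the gauge values, with weights falling off as an inverse power of distance. A minimal sketch follows; the gauge coordinates, rainfall values, and power exponent are hypothetical, not from the Fuhe River study:

    ```python
    def idw(points, values, x, y, power=2.0):
        """Inverse distance weighting: weight each gauge by 1/d^power."""
        num = den = 0.0
        for (px, py), v in zip(points, values):
            d2 = (x - px) ** 2 + (y - py) ** 2
            if d2 == 0.0:
                return v  # exact at a gauge location
            w = d2 ** (-power / 2)
            num += w * v
            den += w
        return num / den

    gauges = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    rain = [2.0, 4.0, 4.0, 6.0]  # hypothetical rainfall (mm) at each gauge
    center = idw(gauges, rain, 0.5, 0.5)  # equidistant -> plain average
    ```

    IDW ignores covariates such as terrain, which is precisely the weakness the paper's regression-based PCRR scheme addresses by regressing rainfall on principal components and correcting the residuals.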

  9. Generation of response functions of a NaI detector by using an interpolation technique

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1983-01-01

    A computer method is developed for generating response functions of a NaI detector to monoenergetic γ-rays. The method is based on an interpolation between measured response curves by a detector. The computer programs are constructed for Heath's response spectral library. The principle of the basic mathematics used for interpolation, which was reported previously by the author, et al., is that response curves can be decomposed into a linear combination of intrinsic-component patterns, and thereby the interpolation of curves is reduced to a simple interpolation of weighting coefficients needed to combine the component patterns. This technique has some advantages of data compression, reduction in computation time, and stability of the solution, in comparison with the usual functional fitting method. The processing method of segmentation of a spectrum is devised to generate useful and precise response curves. A spectral curve, obtained for each γ-ray source, is divided into some regions defined by the physical processes, such as the photopeak area, the Compton continuum area, the backscatter peak area, and so on. Each segment curve then is processed separately for interpolation. Lastly the estimated curves to the respective areas are connected on one channel scale. The generation programs are explained briefly. It is shown that the generated curve represents the overall shape of a response spectrum including not only its photopeak but also the corresponding Compton area, with a sufficient accuracy. (author)
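    The central idea, decomposing response curves into fixed component patterns so that interpolating between energies reduces to interpolating a few weighting coefficients, can be sketched as follows. The two component patterns, the coefficient table, and the energies are toy assumptions standing in for the photopeak/Compton decomposition described above:

    ```python
    def lerp(a, b, t):
        return a + t * (b - a)

    # fixed component patterns shared by all energies (toy 4-channel shapes,
    # e.g. a photopeak-like and a continuum-like component)
    patterns = [[0.0, 0.1, 1.0, 0.1],
                [0.4, 0.3, 0.2, 0.1]]

    # measured responses are stored only as weighting coefficients per energy
    coeff_at = {0.5: [1.0, 0.2], 1.0: [0.6, 0.8]}  # MeV -> weights

    def response(energy, e0=0.5, e1=1.0):
        """Interpolate the coefficients, then recombine the patterns."""
        t = (energy - e0) / (e1 - e0)
        w = [lerp(a, b, t) for a, b in zip(coeff_at[e0], coeff_at[e1])]
        return [sum(wi * p[k] for wi, p in zip(w, patterns))
                for k in range(len(patterns[0]))]

    r = response(0.75)  # halfway: coefficients become [0.8, 0.5]
    ```

    Only two numbers per energy are interpolated instead of a full spectrum, which is the data-compression and stability advantage the abstract claims over direct functional fitting.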

  10. SAR image formation with azimuth interpolation after azimuth transform

    Science.gov (United States)

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  11. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

    African Journals Online (AJOL)

    Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...

  12. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal......Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer...... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...

  13. Program LINEAR (version 79-1): linearize data in the evaluated nuclear data file/version B (ENDF/B) format

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1979-01-01

    Program LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form (i.e., removes points not needed for linear interpolability). The main advantage of the code is that it allows subsequent codes to consider only linear-linear data. A listing of the source deck is available on request
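The linear-linear interpolation and table-thinning described above can be sketched as follows. This is a hypothetical illustration, not Cullen's actual code: the function names, the greedy single-pass strategy and the relative-tolerance test are all invented for the sketch. A tabulated (energy, cross-section) point is dropped whenever lin-lin interpolation through its retained left neighbour and its right neighbour reproduces it within tolerance:

```python
def linlin(x, x0, y0, x1, y1):
    """Linear-linear interpolation between two table points."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def thin(table, rel_tol=1e-3):
    """Remove points reproduced by lin-lin interpolation through their
    neighbours (greedy, single pass). `table` is a sorted list of
    (energy, cross_section) pairs."""
    if len(table) <= 2:
        return list(table)
    kept = [table[0]]
    i = 1
    while i < len(table) - 1:
        x0, y0 = kept[-1]        # last retained point
        x, y = table[i]          # candidate for removal
        x1, y1 = table[i + 1]    # right neighbour
        if abs(linlin(x, x0, y0, x1, y1) - y) <= rel_tol * abs(y):
            i += 1               # point is linearly interpolable: drop it
        else:
            kept.append(table[i])
            i += 1
    kept.append(table[-1])
    return kept

# Interior points on a straight line are thinned away:
pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(thin(pts))  # → [(0.0, 0.0), (3.0, 6.0)]
```

Because ENDF/B tables can hold very many points, removing linearly interpolable points in this spirit shrinks the tables that downstream codes must scan.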

  14. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  16. An algorithm for centerline extraction using natural neighbour interpolation

    DEFF Research Database (Denmark)

    Mioc, Darka; Antón Castro, Francesc/François; Dharmaraj, Girija

    2004-01-01

    ... to the improvement of data capture and conversion in GIS and to develop a software toolkit for automated raster/vector conversion. The approach is based on computing the skeleton from Voronoi diagrams using natural neighbour interpolation. Currently, commercial GIS do not offer completely automatic raster/vector conversion even for simple scanned black-and-white maps, especially due to the lack of explicit topology in commercial GIS systems; indeed, each map update might require batch processing of the whole map. Various commercial raster ... they need user-defined tolerance settings, which causes difficulties in the extraction of complex spatial features, for example road junctions, curved or irregular lines and complex intersections of linear features. The approach we use here is based on image-processing filtering techniques to extract ... In this paper we present the algorithm for skeleton extraction from scanned ...

  17. Differential maps, difference maps, interpolated maps, and long term prediction

    International Nuclear Information System (INIS)

    Talman, R.

    1988-06-01

    Mapping techniques may be thought to be attractive for the long term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solvable, example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of "appreciable" nonlinearity, it is shown to be impractical to achieve "accurate" prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the "leading" deviation from linearity. 41 refs., 6 figs.

  18. Differential geometry curves, surfaces, manifolds

    CERN Document Server

    Kühnel, Wolfgang

    2002-01-01

    This carefully written book is an introduction to the beautiful ideas and results of differential geometry. The first half covers the geometry of curves and surfaces, which provide much of the motivation and intuition for the general theory. Special topics that are explored include Frenet frames, ruled surfaces, minimal surfaces and the Gauss-Bonnet theorem. The second part is an introduction to the geometry of general manifolds, with particular emphasis on connections and curvature. The final two chapters are insightful examinations of the special cases of spaces of constant curvature and Einstein manifolds. The text is illustrated with many figures and examples. The prerequisites are undergraduate analysis and linear algebra.

  19. Quadratic polynomial interpolation on triangular domain

    Science.gov (United States)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the sample points are not always mutually consistent in continuity, and traditional interpolation methods often cannot faithfully reflect the shape information carried by the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Last, the unknown quantities are obtained by minimizing the objective functions, with boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying given accuracy and continuity requirements, without becoming too convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and so on. Results for the new surface are given in experiments.
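As background to the quadratic construction above, the underlying linear interpolation over one triangle can be sketched with barycentric coordinates. This is a minimal illustration of the linear case only, not the authors' C1 quadratic patches; all names are invented:

```python
def barycentric(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def tri_interp(p, tri, values):
    """Linearly interpolate the three vertex values at point p."""
    l1, l2, l3 = barycentric(p, *tri)
    return l1 * values[0] + l2 * values[1] + l3 * values[2]

tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# At the centroid the result is the mean of the vertex values:
print(tri_interp((1 / 3, 1 / 3), tri, [3.0, 6.0, 9.0]))  # ≈ 6.0
```

A quadratic patch per vertex, as in the paper, adds curvature on top of this linear baseline while matching it as closely as possible.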

  20. Trace interpolation by slant-stack migration

    International Nuclear Information System (INIS)

    Novotny, M.

    1990-01-01

    The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip-reject filtration. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period.

  1. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
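The first step of GIE described above — a centroid per species, and an area of influence reaching its farthest occurrence — can be sketched as follows. This is a hypothetical minimal illustration with invented names, assuming planar coordinates for simplicity; the kernel-interpolation step itself is not shown:

```python
import math

def centroid_and_radius(occurrences):
    """Centroid of a species' occurrence points and the radius of its
    area of influence (distance from the centroid to the farthest
    occurrence), in the spirit of GIE. `occurrences` is a list of
    (x, y) pairs in planar coordinates."""
    n = len(occurrences)
    cx = sum(x for x, _ in occurrences) / n
    cy = sum(y for _, y in occurrences) / n
    r = max(math.hypot(x - cx, y - cy) for x, y in occurrences)
    return (cx, cy), r

# Three made-up occurrence points for one species:
pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
print(centroid_and_radius(pts))
```

Real occurrence data would use geographic coordinates and a geodesic distance, but the centroid-plus-radius structure is the same.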

  2. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  3. Multivariate calculus and geometry

    CERN Document Server

    Dineen, Seán

    2014-01-01

    Multivariate calculus can be understood best by combining geometric insight, intuitive arguments, detailed explanations and mathematical reasoning. This textbook has successfully followed this programme. It additionally provides a solid description of the basic concepts, via familiar examples, which are then tested in technically demanding situations. In this new edition the introductory chapter and two of the chapters on the geometry of surfaces have been revised. Some exercises have been replaced and others provided with expanded solutions. Familiarity with partial derivatives and a course in linear algebra are essential prerequisites for readers of this book. Multivariate Calculus and Geometry is aimed primarily at higher level undergraduates in the mathematical sciences. The inclusion of many practical examples involving problems of several variables will appeal to mathematics, science and engineering students.

  4. A meshless scheme for partial differential equations based on multiquadric trigonometric B-spline quasi-interpolation

    International Nuclear Information System (INIS)

    Gao Wen-Wu; Wang Zhi-Gang

    2014-01-01

    Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (the multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of previous schemes based on quasi-interpolation (requiring additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries of the spatial domain). Moreover, the scheme also overcomes a difficulty of meshless collocation methods (namely, a notoriously ill-conditioned linear system of equations for large numbers of collocation points). The numerical examples presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)

  5. Differential Geometry

    CERN Document Server

    Stoker, J J

    2011-01-01

    This classic work is now available in an unabridged paperback edition. Stoker makes this fertile branch of mathematics accessible to the nonspecialist by the use of three different notations: vector algebra and calculus, tensor calculus, and the notation devised by Cartan, which employs invariant differential forms as elements in an algebra due to Grassman, combined with an operation called exterior differentiation. Assumed are a passing acquaintance with linear algebra and the basic elements of analysis.

  6. Evaluation of Teeth and Supporting Structures on Digital Radiograms using Interpolation Methods

    International Nuclear Information System (INIS)

    Koh, Kwang Joon; Chang, Kee Wan

    1999-01-01

    The aim was to determine the effect of interpolation functions when processing digital periapical images. The digital images were obtained with the Digora and CDR systems on a dry skull and a human subject. Three oral radiologists evaluated three portions of each processed image using 7 interpolation methods, and ROC curves were obtained by the trapezoidal method. The highest Az value (0.96) was obtained with the cubic spline method and the lowest Az value (0.03) with the facet model method in the Digora system. The highest Az value (0.79) was obtained with the gray segment expansion method and the lowest Az value (0.07) with the facet model method in the CDR system. There was a significant difference in Az value for the original image between the Digora and CDR systems at the alpha = 0.05 level. There were significant differences in Az values between Digora and CDR images with the cubic spline, facet model, linear interpolation and non-linear interpolation methods at the alpha = 0.1 level.

  7. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which an improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.

  8. Interpolation functions and the Lions-Peetre interpolation construction

    International Nuclear Information System (INIS)

    Ovchinnikov, V I

    2014-01-01

    The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generalization is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X_0, X_1)_{p_0, p_1} considered here have three parameters: two positive numerical parameters p_0 and p_1 of equal standing, and a function parameter φ. For p_0 ≠ p_1 these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X_0, X_1)_{p_0, p_1}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces L_p and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles.

  9. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion vector is reliable enough to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on an analysis of the surrounding motion distribution. As a result, the frames interpolated using the proposed scheme have clearer structure edges, and ghost artifacts are greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.

  10. Research progress and hotspot analysis of spatial interpolation

    Science.gov (United States)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature related to spatial interpolation published between 1982 and 2017 and included in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation research has passed through three stages: slow development, steady development and rapid development. Eleven clustering groups interact, centred on the convergence of spatial interpolation theory, the practical application and case study of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research-system framework; it is strongly interdisciplinary and widely used in various fields.

  11. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    Science.gov (United States)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography (STL) data is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data without any commercially provided CAM system. STL uses a triangular representation of curved surface geometry. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from the STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  12. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, Christopher M. [Los Alamos National Laboratory

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general, is not tied to any particular class of applications, and could be used for more generic smooth interpolation.

  13. Interpolation Approaches for Characterizing Spatial Variability of Soil Properties in Tuz Lake Basin of Turkey

    Science.gov (United States)

    Gorji, Taha; Sertel, Elif; Tanik, Aysegul

    2017-12-01

    Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has exacerbated conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data was used for interpolation modelling and the remainder for validation of the predicted results. The relationship between the validation points and their corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps were generated from the two interpolation methods for the soil organic matter, phosphorus, lime and boron parameters.

  14. FEM-based linear inverse modeling using a 3D source array to image magma chambers with free geometry. Application to InSAR data from Rabaul Caldera (PNG).

    Science.gov (United States)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martí Molist, Joan

    2015-04-01

    In this study, we present a method to fully integrate a family of finite element models (FEMs) into the regularized linear inversion of InSAR data collected at Rabaul caldera (PNG) between February 2007 and December 2010. During this period the caldera experienced a long-term steady subsidence that characterized surface movement both inside the caldera and outside, on its western side. The inversion is based on an array of FEM sources in the sense that the Green's function matrix is a library of forward numerical displacement solutions generated by the sources of an array common to all FEMs. Each entry of the library is the LOS surface displacement generated by injecting a unit mass of fluid, of known density and bulk modulus, into a different source cavity of the array for each FEM. By using FEMs, we take advantage of their capability to include topography and a heterogeneous distribution of elastic material properties. All FEMs of the family share the same mesh, in which only one source is activated at a time by removing the corresponding elements and applying the unit fluid flux. The domain therefore only needs to be discretized once. This precludes remeshing for each activated source, thus reducing computational requirements, often a downside of FEM-based inversions. Without imposing an a priori source, the method allows us to identify, from a least-squares standpoint, a complex distribution of fluid flux (or change in pressure) with a 3D free geometry within the source array, as dictated by the data. The results of applying the proposed inversion to the Rabaul InSAR data show a shallow magmatic system under the caldera made of two interconnected lobes located at the two opposite sides of the caldera. These lobes could be consistent with feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products, on the eastern side, and of the past Vulcan volcano eruptions of more evolved materials, on the western side. The interconnection and ...

  15. Calculation of reactivity without Lagrange interpolation

    International Nuclear Information System (INIS)

    Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.

    2015-09-01

    A new method for numerically solving the inverse equation of point kinetics without using Lagrange interpolating polynomials is formulated; this method uses a polynomial approximation with N points, based on a recurrence process, to simulate different forms of the nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; thanks to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)

  16. Solving the Schroedinger equation using Smolyak interpolants

    International Nuclear Information System (INIS)

    Avila, Gustavo; Carrington, Tucker Jr.

    2013-01-01

    In this paper, we present a new collocation method for solving the Schroedinger equation. Collocation has the advantage that it obviates integrals. All previous collocation methods have, however, the crucial disadvantage that they require solving a generalized eigenvalue problem. By combining Lagrange-like functions with a Smolyak interpolant, we devise a collocation method that does not require solving a generalized eigenvalue problem. We exploit the structure of the grid to develop an efficient algorithm for evaluating the matrix-vector products required to compute energy levels and wavefunctions. Energies systematically converge as the number of points and basis functions is increased.

  17. Topics in multivariate approximation and interpolation

    CERN Document Server

    Jetter, Kurt

    2005-01-01

    This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as Computer Aided Geometric Design, Mathematical Modelling, Signal and Image Processing and Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for graduate students.

  18. Adapting Better Interpolation Methods to Model Amphibious MT Data Along the Cascadian Subduction Zone.

    Science.gov (United States)

    Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.

    2016-12-01

    Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, the conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, sea-floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve the weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric-field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. We are also adapting some of the techniques discussed in Shantsev et al. (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new ...
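The plain tri-linear weighting mentioned above, before any conductivity-ratio modification, can be sketched as follows. This is a hypothetical minimal illustration with invented names, not MOD3DEM code: the weights of the eight corner nodes of a unit grid cell for a point at fractional position (fx, fy, fz):

```python
def trilinear_weights(fx, fy, fz):
    """Weights of the 8 corner nodes of a unit cell for a point at
    fractional position (fx, fy, fz), each coordinate in [0, 1].
    Corner order: (0,0,0), (1,0,0), (0,1,0), (1,1,0), (0,0,1), ..."""
    w = []
    for cz in (0, 1):
        for cy in (0, 1):
            for cx in (0, 1):
                w.append((fx if cx else 1 - fx) *
                         (fy if cy else 1 - fy) *
                         (fz if cz else 1 - fz))
    return w

def interp(corner_values, fx, fy, fz):
    """Weighted average of the 8 corner values (the interpolated field)."""
    return sum(w * v for w, v in zip(trilinear_weights(fx, fy, fz),
                                     corner_values))

# At the cell centre every corner carries weight 1/8:
print(trilinear_weights(0.5, 0.5, 0.5))  # each weight is 0.125
```

The modification described in the abstract would scale these purely geometric weights by cross-boundary conductivity ratios so that the interpolated field respects current continuity.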

  19. Normal forms in Poisson geometry

    NARCIS (Netherlands)

    Marcut, I.T.

    2013-01-01

    The structure of Poisson manifolds is highly nontrivial even locally. The first important result in this direction is Conn's linearization theorem around fixed points. One of the main results of this thesis (Theorem 2) is a normal form theorem in Poisson geometry, which is the Poisson-geometric

  20. Multivariable calculus and differential geometry

    CERN Document Server

    Walschap, Gerard

    2015-01-01

    This text is a modern in-depth study of the subject that includes all the material needed from linear algebra. It then goes on to investigate topics in differential geometry, such as manifolds in Euclidean space, curvature, and the generalization of the fundamental theorem of calculus known as Stokes' theorem.

  1. Complex analysis and CR geometry

    CERN Document Server

    Zampieri, Giuseppe

    2008-01-01

    Cauchy-Riemann (CR) geometry is the study of manifolds equipped with a system of CR-type equations. Compared to the early days when the purpose of CR geometry was to supply tools for the analysis of the existence and regularity of solutions to the \bar\partial-Neumann problem, it has rapidly acquired a life of its own and has become an important topic in differential geometry and the study of non-linear partial differential equations. A full understanding of modern CR geometry requires knowledge of various topics such as real/complex differential and symplectic geometry, foliation theory, the geometric theory of PDE's, and microlocal analysis. Nowadays, the subject of CR geometry is very rich in results, and the amount of material required to reach competence is daunting to graduate students who wish to learn it. However, the present book does not aim at introducing all the topics of current interest in CR geometry. Instead, an attempt is made to be friendly to the novice by moving, in a fairly relaxed way, f...

  2. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The spatial and temporal variations in air quality levels for the Mumbai region were classified. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that the SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of SPM was lower in the monsoon due to rainfall. The findings of this study will help formulate control strategies for the rational management of air pollution and can be used for many other regions.
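
    The IDW scheme weights each monitoring station by the inverse of its distance to the query point raised to a power. A minimal sketch, with hypothetical station coordinates and concentrations:

```python
def idw(stations, query, power=2.0):
    """Inverse Distance Weighting estimate at a query point (x, y).

    stations: iterable of (x, y, value) tuples, e.g. monitoring sites.
    power:    exponent on the distance; 2 is the common choice.
    """
    num = den = 0.0
    for x, y, value in stations:
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return value            # query coincides with a station
        w = d2 ** (-power / 2.0)    # equals 1 / distance**power
        num += w * value
        den += w
    return num / den
```

    Halfway between two stations the weights are equal and the estimate is their mean; at a station the estimate reproduces the observation exactly.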

  3. A General 2D Meshless Interpolating Boundary Node Method Based on the Parameter Space

    Directory of Open Access Journals (Sweden)

    Hongyin Yang

    2017-01-01

    The present study proposes an improved interpolating boundary node method (IIBNM) for 2D potential problems. The improved interpolating moving least-squares (IIMLS) method was applied to construct the shape functions, which possess the delta function property, so that boundary conditions can be imposed directly. In addition, any weight function used in the moving least-squares (MLS) method is also applicable in the IIMLS method. Boundary cells are required in the computation of the boundary integrals, and additional discretization error cannot be avoided if traditional cells are used to approximate the geometry. The present study applies parametric cells created in the parameter space to preserve the exact geometry, so the geometry is maintained regardless of the number of cells. Only the number of nodes on the boundary is required as additional information for boundary node construction. Most importantly, the IIMLS method can be applied in the parameter space to construct shape functions without requiring additional computations for the curve length.

  4. Non-commutative geometry inspired charged black holes

    International Nuclear Information System (INIS)

    Ansoldi, Stefano; Nicolini, Piero; Smailagic, Anais; Spallucci, Euro

    2007-01-01

    We find a new, non-commutative geometry inspired, solution of the coupled Einstein-Maxwell field equations describing a variety of charged, self-gravitating objects, including extremal and non-extremal black holes. The metric smoothly interpolates between de Sitter geometry at short distance and Reissner-Nordstrom geometry far away from the origin. Contrary to the ordinary Reissner-Nordstrom spacetime, there is no curvature singularity at the origin, neither 'naked' nor shielded by horizons. We investigate both the Hawking process and pair creation in this new scenario

  5. An Automated Approach to Very High Order Aeroacoustic Computations in Complex Geometries

    Science.gov (United States)

    Dyson, Rodger W.; Goodrich, John W.

    2000-01-01

    Computational aeroacoustics requires efficient, high-resolution simulation tools, and for smooth problems this is best accomplished with very high order in space and time methods on small stencils. But the complexity of highly accurate numerical methods can inhibit their practical application, especially in irregular geometries. This complexity is reduced by using a special form of Hermite divided-difference spatial interpolation on Cartesian grids, and a Cauchy-Kowalewski recursion procedure for time advancement. In addition, a stencil constraint tree reduces the complexity of interpolating grid points that are located near wall boundaries. These procedures are used to automatically develop and implement very high order methods (>15) for solving the linearized Euler equations that can achieve less than one grid point per wavelength resolution away from boundaries by including spatial derivatives of the primitive variables at each grid point. The accuracy of stable surface treatments is currently limited to 11th order for grid-aligned boundaries and to 2nd order for irregular boundaries.

  6. Optimum and robust 3D facies interpolation strategies in a heterogeneous coal zone (Tertiary As Pontes basin, NW Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Falivene, Oriol; Cabrera, Lluis; Saez, Alberto [Geomodels Institute, Group of Geodynamics and Basin Analysis, Department of Stratigraphy, Paleontology and Marine Geosciences, Universitat de Barcelona, c/ Marti i Franques s/n, Facultat de Geologia, 08028 Barcelona (Spain)

    2007-07-02

    Coal exploration and mining in extensively drilled and sampled coal zones can benefit from 3D statistical facies interpolation. Starting from closely spaced core descriptions, and using interpolation methods, a 3D optimum and robust facies distribution model was obtained for a thick, heterogeneous coal zone deposited in the non-marine As Pontes basin (Oligocene-Early Miocene, NW Spain). Several grid layering styles, interpolation methods (truncated inverse squared distance weighting, truncated kriging, truncated kriging with an areal trend, indicator inverse squared distance weighting, indicator kriging, and indicator kriging with an areal trend) and searching conditions were compared. Facies interpolation strategies were evaluated using visual comparison and cross validation. Moreover, robustness of the resultant facies distribution with respect to variations in interpolation method input parameters was verified by taking into account several scenarios of uncertainty. The resultant 3D facies reconstruction improves the understanding of the distribution and geometry of the coal facies. Furthermore, since some coal quality properties (e.g. calorific value or sulphur percentage) display a good statistical correspondence with facies, predicting the distribution of these properties using the reconstructed facies distribution as a template proved to be a powerful approach, yielding more accurate and realistic reconstructions of these properties in the coal zone. (author)

  7. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  8. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, showing that it is highly suited for real-time applications.
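
    The minimum-error principle, at each step move to whichever candidate pixel minimizes the deviation from the ideal curve, can be illustrated for a circle. This is a generic software sketch of the idea, not the paper's FPGA implementation:

```python
def min_error_circle(r):
    """Minimum-error stepping along the first octant of x^2 + y^2 = r^2.

    Starting at (0, r) and moving clockwise, each step chooses between
    the two candidate moves (+x) and (+x, -y), taking the one that
    minimizes |x^2 + y^2 - r^2|.  Illustrative sketch only.
    """
    x, y = 0, r
    pts = [(x, y)]
    while x < y:
        candidates = [(x + 1, y), (x + 1, y - 1)]
        x, y = min(candidates,
                   key=lambda p: abs(p[0] ** 2 + p[1] ** 2 - r ** 2))
        pts.append((x, y))
    return pts
```

    For the generated points the radial error stays below half a pixel, mirroring the half-step-size error bound claimed for the improved algorithm.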

  9. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  11. Grade Distribution Modeling within the Bauxite Seams of the Wachangping Mine, China, Using a Multi-Step Interpolation Algorithm

    Directory of Open Access Journals (Sweden)

    Shaofeng Wang

    2017-05-01

    Mineral reserve estimation and mining design depend on precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including a 1D biharmonic spline estimator for interpolating floor altitudes; 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for the grade distribution on the two vertical sections at the roadways; and 3D linear interpolation for the grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, this multi-step interpolation using a natural neighbor method shows optimal stability and a minimal difference between interpolated and field data. Using this method, 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, root mean squared errors and relative standard deviations of the errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% for Wa; and 1.761, 1.974, and 67.37% for Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
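
    The final 3D step, linear interpolation of grade between the two roadway sections, reduces to a per-cell linear blend of the two interpolated sections. A minimal sketch with hypothetical grade values:

```python
def interp_between_sections(sec_a, sec_b, t):
    """Linearly blend two 2D grade sections of the same shape.

    sec_a, sec_b: lists of rows of grade values on the two vertical
                  sections (hypothetical data); t in [0, 1] is the
                  fractional position between section A (t=0) and B (t=1).
    """
    return [[(1.0 - t) * a + t * b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(sec_a, sec_b)]
```

    Sampling t at the panel's grid spacing fills the volume between the two sections cell by cell.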

  12. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long-distance interpolation methods with the low-complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long-distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and that in combination with complexity-reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers

  13. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  14. Elements of linear space

    CERN Document Server

    Amir-Moez, A R; Sneddon, I N

    1962-01-01

    Elements of Linear Space is a detailed treatment of the elements of linear spaces, including real spaces with no more than three dimensions and complex n-dimensional spaces. The geometry of conic sections and quadric surfaces is considered, along with algebraic structures, especially vector spaces and transformations. Problems drawn from various branches of geometry are given.Comprised of 12 chapters, this volume begins with an introduction to real Euclidean space, followed by a discussion on linear transformations and matrices. The addition and multiplication of transformations and matrices a

  15. A vector space approach to geometry

    CERN Document Server

    Hausner, Melvin

    2010-01-01

    The effects of geometry and linear algebra on each other receive close attention in this examination of geometry's correlation with other branches of math and science. In-depth discussions include a review of systematic geometric motivations in vector space theory and matrix theory; the use of the center of mass in geometry, with an introduction to barycentric coordinates; axiomatic development of determinants in a chapter dealing with area and volume; and a careful consideration of the particle problem. 1965 edition.

  16. Complex analysis and geometry

    CERN Document Server

    Silva, Alessandro

    1993-01-01

    The papers in this wide-ranging collection report on the results of investigations from a number of linked disciplines, including complex algebraic geometry, complex analytic geometry of manifolds and spaces, and complex differential geometry.

  17. Non-Riemannian geometry

    CERN Document Server

    Eisenhart, Luther Pfahler

    2005-01-01

    This concise text by a prominent mathematician deals chiefly with manifolds dominated by the geometry of paths. Topics include asymmetric and symmetric connections, the projective geometry of paths, and the geometry of sub-spaces. 1927 edition.

  18. Geometry of the Universe

    International Nuclear Information System (INIS)

    Gurevich, L.Eh.; Gliner, Eh.B.

    1978-01-01

    Problems of investigating the space-time geometry of the Universe are described at a popular level. The space-time geometries corresponding to three cosmological models are considered. The space-time geometry of a closed model is the spherical Riemannian geometry; of an open model, the Lobachevskian geometry; and of a flat model, the Euclidean geometry. The real geometry of the Universe in the contemporary epoch of its development is inferred from data testifying that the Universe is expanding infinitely

  19. Finite difference method and algebraic polynomial interpolation for numerically solving Poisson's equation over arbitrary domains

    Directory of Open Access Journals (Sweden)

    Tsugio Fukuchi

    2014-06-01

    The finite difference method (FDM) based on Cartesian coordinate systems can be applied to numerical analyses over any complex domain. A complex domain is usually taken to mean that the geometry of an immersed body in a fluid is complex; here, it means simply an analytical domain of arbitrary configuration. In such an approach, we do not need to treat the outer and inner boundaries differently in numerical calculations; both are treated in the same way. Using a method that adopts algebraic polynomial interpolations in the calculation around near-wall elements, all the calculations over irregular domains reduce to those over regular domains. Discretization of the space differential in the FDM is usually derived using the Taylor series expansion; however, if we use the polynomial interpolation systematically, exceptional advantages are gained in deriving high-order differences. In using the polynomial interpolations, we can numerically solve the Poisson equation freely over any complex domain. Only a particular type of partial differential equation, Poisson's equation, is treated; however, the arguments put forward have wider generality in numerical calculations using the FDM.
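
    A minimal 1D analogue of the FDM approach on a regular domain (without the polynomial near-wall treatment) is a second-order central difference scheme relaxed by Jacobi iteration; the right-hand side below is illustrative:

```python
def solve_poisson_1d(f, n, iters=20000):
    """Solve u'' = f on (0, 1) with u(0) = u(1) = 0.

    Second-order central differences on n interior points:
        (u[i-1] - 2 u[i] + u[i+1]) / h^2 = f(x_i)
    relaxed by Jacobi iteration.  Returns u at the n + 2 grid nodes.
    """
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)
    for _ in range(iters):
        u = ([0.0] +
             [0.5 * (u[i - 1] + u[i + 1] - h * h * f(i * h))
              for i in range(1, n + 1)] +
             [0.0])
    return u
```

    For f(x) = 2 the exact solution is u(x) = x^2 - x, which the second-order scheme reproduces to iteration tolerance since the solution is quadratic.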

  20. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
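
    The polygon-vertex scheme itself is not reproduced here, but the closely related classical form of shape-based interpolation (blend signed distance maps of the two binary slices, then threshold) conveys the idea on small hypothetical slices:

```python
def signed_distance(mask):
    """Brute-force signed Euclidean distance map of a small binary slice:
    positive inside the object, negative outside.  The slice must contain
    both object and background pixels."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = mask[y][x]
            d = min(((x - j) ** 2 + (y - i) ** 2) ** 0.5
                    for i in range(h) for j in range(w)
                    if mask[i][j] != inside)
            out[y][x] = d if inside else -d
    return out

def interp_slice(mask_a, mask_b, t=0.5):
    """Shape-based interpolation between two binary slices: blend the
    signed distance maps, then threshold at zero."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    return [[1 if (1.0 - t) * da[y][x] + t * db[y][x] > 0.0 else 0
             for x in range(len(mask_a[0]))]
            for y in range(len(mask_a))]
```

    At t = 0 and t = 1 the input slices are recovered exactly; intermediate t values morph one shape into the other, which is the behavior the polygon-based method achieves with vertex references instead of distance maps.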

  1. Improved Visualization of Gastrointestinal Slow Wave Propagation Using a Novel Wavefront-Orientation Interpolation Technique.

    Science.gov (United States)

    Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; O'Grady, Gregory; Cheng, Leo K; Angeli, Timothy R

    2018-02-01

    High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust to atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
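
    A heavily simplified sketch of the core rule, interpolating a grid node as the mean activation time (AT) of the adjacent electrode pair lying along the propagation direction, using the dominant AT gradient axis as a stand-in for the full orthogonal wavefront direction computed in the paper:

```python
def wavefront_interp(at, y, x):
    """Interpolate node (y, x) of an activation-time grid `at` as the
    mean AT of the linearly adjacent pair along the dominant propagation
    axis (the larger AT gradient component).  Simplified sketch only;
    the electrode grid and AT values are hypothetical."""
    gx = at[y][x + 1] - at[y][x - 1]   # horizontal AT gradient
    gy = at[y + 1][x] - at[y - 1][x]   # vertical AT gradient
    if abs(gx) >= abs(gy):             # wave travels mainly along x
        return 0.5 * (at[y][x - 1] + at[y][x + 1])
    return 0.5 * (at[y - 1][x] + at[y + 1][x])
```

    For a planar wave the pair along the travel direction brackets the node symmetrically in time, so the mean reproduces the true AT.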

  2. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  3. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed. It can save 19.7 % of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware implementation area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture can support real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.

  4. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition-defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...

  5. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

  6. Direction-of-Arrival Estimation for Coprime Array Using Compressive Sensing Based Array Interpolation

    Directory of Open Access Journals (Sweden)

    Aihua Liu

    2017-01-01

    A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are clearly not fully exploited. To effectively utilize the extent of the DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.

  7. Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter

    Directory of Open Access Journals (Sweden)

    Edson A. Mitishita

    2005-12-01

    Aerial triangulation is a fundamental step in any photogrammetric project. Surveying traditional ground control points still has a high cost, depending on the region to be mapped. The distribution of control points in the block, and their positional quality, directly influence the resulting precision of the aerotriangulation processing. The airborne GPS technique has as key objectives cost reduction and quality improvement of the ground control in modern photogrammetric projects. Nowadays, the largest photogrammetric companies in Brazil are acquiring airborne GPS systems, but those systems often present operational difficulties due to the need for skilled human resources, given the high technology involved. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the instant of the photo shot. Traditionally, low-degree polynomials are used, but recent studies show that the accuracy of those polynomials is reduced in turbulent flights, which are quite common, especially in large-scale flights. This paper presents a solution to that problem through an algorithm based on the Kalman filter, which takes into account the dynamic aspect of the problem. At the end of the paper, results are shown from a comparison between experiments performed with the proposed methodology and a common linear interpolator. These results show a significant accuracy gain over the linear interpolation procedure when the Kalman filter is used.
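
    A toy 1D constant-velocity Kalman filter conveys the idea: filter the GPS fixes up to the exposure epoch, then propagate the filtered state to the photo instant. The noise settings and data are hypothetical, and the real problem is of course multi-dimensional:

```python
def kalman_interpolate(times, positions, t_query, q=1e-3, r=1e-2):
    """1D constant-velocity Kalman filter over GPS antenna fixes, then
    prediction of the filtered state forward to the exposure instant
    t_query.  q, r are toy process/measurement noise variances."""
    x, v = positions[0], 0.0            # state: position and velocity
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    t_prev = times[0]
    for t, z in zip(times[1:], positions[1:]):
        if t > t_query:
            break                        # only epochs up to the photo instant
        dt = t - t_prev
        # predict: x <- x + dt * v (constant velocity)
        x = x + dt * v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # update with the position fix z (measurement H = [1, 0])
        S = P[0][0] + r
        k0, k1 = P[0][0] / S, P[1][0] / S
        resid = z - x
        x, v = x + k0 * resid, v + k1 * resid
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
        t_prev = t
    return x + (t_query - t_prev) * v    # predict forward to t_query
```

    Unlike a fixed low-degree polynomial fit, the filter carries a velocity state, so the prediction to the photo instant adapts to the platform dynamics.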

  8. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

    Directory of Open Access Journals (Sweden)

    Qiang DU

    2018-04-01

    Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, based on Shannon sampling and utilizing Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with the Caratheodory representation. The samples of the differential signal are obtained by the Caratheodory representation from the samples of the original signal only. So, only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
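
    The interpolation core common to all three variants is cubic Hermite interpolation between adjacent samples, with the derivative samples supplied by whichever differentiator (analog, digital, or Caratheodory-based) is in use. A sketch with hypothetical sample values:

```python
def hermite_fractional_delay(x, dx, n, mu):
    """Cubic Hermite interpolation between samples x[n] and x[n+1].

    x:  signal samples; dx: derivative samples (however obtained);
    mu: fractional delay in [0, 1) past sample n.
    """
    # Standard cubic Hermite basis on [0, 1].
    h00 = (1 + 2 * mu) * (1 - mu) ** 2
    h10 = mu * (1 - mu) ** 2
    h01 = mu * mu * (3 - 2 * mu)
    h11 = mu * mu * (mu - 1)
    return h00 * x[n] + h10 * dx[n] + h01 * x[n + 1] + h11 * dx[n + 1]
```

    Given exact values and derivatives, the interpolant reproduces polynomials up to cubic degree exactly, which is the source of its low time-domain error.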

  9. Comparison of LHC collimator beam-based alignment to BPM-Interpolated centers

    CERN Document Server

    Valentino, G; Assmann, R W; Bruce, R; Muller, G J; Redaelli, S; Rossi, A; Lari, L

    2012-01-01

    The beam centers at the Large Hadron Collider collimators are determined by beam-based alignment, where both jaws of a collimator are moved in separately until a loss spike is detected on a Beam Loss Monitor downstream. Orbit drifts of more than a few hundred micrometers cannot be tolerated, as they would compromise the performance of the collimation system. Beam Position Monitors (BPMs) are installed at various locations around the LHC ring, and a linear interpolation of the orbit can be obtained at the collimator positions. In this paper, the results obtained from beam-based alignment are compared with the orbit interpolated from the BPM data throughout the 2011 and 2012 LHC proton runs.
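
    Such an orbit interpolation amounts to linear interpolation between the readings of the two BPMs bracketing the collimator's longitudinal position. A sketch with hypothetical BPM positions and readings:

```python
from bisect import bisect_right

def interpolate_orbit(bpm_s, bpm_x, s_coll):
    """Linearly interpolate the orbit at longitudinal position s_coll.

    bpm_s: BPM longitudinal positions, sorted ascending (hypothetical);
    bpm_x: the corresponding transverse orbit readings.
    """
    i = bisect_right(bpm_s, s_coll) - 1       # index of the BPM at or before s_coll
    i = max(0, min(i, len(bpm_s) - 2))        # clamp to a valid segment
    t = (s_coll - bpm_s[i]) / (bpm_s[i + 1] - bpm_s[i])
    return (1 - t) * bpm_x[i] + t * bpm_x[i + 1]
```

    The interpolated value at a collimator can then be compared directly with the center found by beam-based alignment.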

  10. Ultrahigh Dimensional Variable Selection for Interpolation of Point Referenced Spatial Data: A Digital Soil Mapping Case Study

    Science.gov (United States)

    Lamb, David W.; Mengersen, Kerrie

    2016-01-01

    Modern soil mapping is characterised by the need to interpolate point-referenced (geostatistical) observations and by the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields, such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized multiple linear regression models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135

  11. Computing Diffeomorphic Paths for Large Motion Interpolation.

    Science.gov (United States)

    Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C

    2013-06-01

    In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, and it can be attributed mainly to the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(Ω) to the quotient space Diff(M)/Diff(M)_μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified as the unit sphere in a Hilbert space in the mathematics literature, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms, by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms, and second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-sub-sampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping framework (LDDMM).
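
Geodesics on the unit sphere of a Hilbert space are great circles, so the "geodesic between projected points" step amounts to spherical linear interpolation (slerp). The following is a finite-dimensional sketch of that one step only, not the paper's full projection/lifting pipeline.

```python
import numpy as np

def slerp(u, v, t):
    """Great-circle (geodesic) interpolation between unit vectors u and v."""
    theta = np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return u
    return (np.sin((1 - t) * theta) * u + np.sin(t * theta) * v) / np.sin(theta)

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
mid = slerp(u, v, 0.5)   # stays on the unit sphere, unlike linear blending
```
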

  12. Functions with disconnected spectrum sampling, interpolation, translates

    CERN Document Server

    Olevskii, Alexander M

    2016-01-01

    The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

  13. Rotationally Symmetric Operators for Surface Interpolation

    Science.gov (United States)

    1981-11-01

    Computational Geometry for Design and Manufacture, Ellis Horwood, Chichester UK, 1979. [11] Gladwell, I. and Wait, R. (eds.), A Survey of Numerical... from an image," Computer Graphics and Image Processing 3 (1974), 277-299. [16] Horn, B. K. P., "The curve of least energy," MIT AI Memo 610, 1981. [17]... an object from a single view," Artificial Intelligence 17 (1981), 409-460. [21] Knuth, D. E., "Mathematical typography," Bull. Amer. Math. Soc. (new

  14. Interpolating from Bianchi attractors to Lifshitz and AdS spacetimes

    International Nuclear Information System (INIS)

    Kachru, Shamit; Kundu, Nilay; Saha, Arpan; Samanta, Rickmoy; Trivedi, Sandip P.

    2014-01-01

    We construct classes of smooth metrics which interpolate from Bianchi attractor geometries of Types II, III, VI and IX in the IR to Lifshitz or AdS_2×S^3 geometries in the UV. While we do not obtain these metrics as solutions of Einstein gravity coupled to a simple matter field theory, we show that the matter sector stress-energy required to support these geometries (via the Einstein equations) does satisfy the weak, and therefore also the null, energy condition. Since Lifshitz or AdS_2×S^3 geometries can in turn be connected to AdS_5 spacetime, our results show that there is no barrier, at least at the level of the energy conditions, for solutions to arise connecting these Bianchi attractor geometries to AdS_5 spacetime. The asymptotic AdS_5 spacetime has no non-normalizable metric deformation turned on, which suggests that, furthermore, the Bianchi attractor geometries can be the IR geometries dual to field theories living in flat space, with the breaking of symmetries being either spontaneous or due to sources for other fields. Finally, we show that for a large class of flows which connect two Bianchi attractors, a C-function can be defined which is monotonically decreasing from the UV to the IR as long as the null energy condition is satisfied. However, except for special examples of Bianchi attractors (including AdS space), this function does not attain a finite and non-vanishing constant value at the end points.

  15. Systematics of IIB spinorial geometry

    OpenAIRE

    Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.

    2005-01-01

    We reduce the classification of all supersymmetric backgrounds of IIB supergravity to the evaluation of the Killing spinor equations and their integrability conditions, which contain the field equations, on five types of spinors. This extends the work of [hep-th/0503046] to IIB supergravity. We give the expressions of the Killing spinor equations on all five types of spinors. In this way, the Killing spinor equations become a linear system for the fluxes, geometry and spacetime derivatives of...

  16. Geometry Dependence of Stellarator Turbulence

    International Nuclear Information System (INIS)

    Mynick, H.E.; Xanthopoulos, P.; Boozer, A.H.

    2009-01-01

    Using the nonlinear gyrokinetic code package GENE/GIST, we study the turbulent transport in a broad family of stellarator designs, to understand the geometry-dependence of the microturbulence. By using a set of flux tubes on a given flux surface, we construct a picture of the 2D structure of the microturbulence over that surface, and relate this to relevant geometric quantities, such as the curvature, local shear, and effective potential in the Schrödinger-like equation governing linear drift modes.

  17. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a magnetic resonance imaging technology that has developed rapidly in recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but it does not revise the size of tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed diffusion tensors, with the direction of tensors being represented by quaternions. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on both simulated and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and of the determinant of tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
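
For reference, the Log-Euclidean baseline mentioned above interpolates symmetric positive-definite (SPD) tensors through the matrix logarithm. A minimal sketch (not the paper's improved method; the example tensors are invented):

```python
import numpy as np

def _sym_apply(func, S):
    """Apply a scalar function to the eigenvalues of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(func(w)) @ V.T

def log_euclidean_interp(D1, D2, t):
    """Interpolate SPD tensors as exp((1-t)*log(D1) + t*log(D2))."""
    L = (1 - t) * _sym_apply(np.log, D1) + t * _sym_apply(np.log, D2)
    return _sym_apply(np.exp, L)

D1 = np.diag([3.0, 1.0, 1.0])   # prolate tensor along x
D2 = np.diag([1.0, 1.0, 3.0])   # prolate tensor along z
Dm = log_euclidean_interp(D1, D2, 0.5)
```

Unlike componentwise averaging, the result is guaranteed SPD, and the determinant interpolates geometrically rather than swelling at the midpoint.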

  18. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

    Full Text Available Abstract Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work (New approach to q-Euler, Genocchi numbers and their interpolation functions, "Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009.", Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  19. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
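
The zero-fill baseline can be sketched as: pad the centre of the FFT spectrum with zeros and inverse-transform. A minimal version, assuming an odd-length signal so no Nyquist bin has to be split:

```python
import numpy as np

def zero_fill_interpolate(x, factor):
    """Upsample by padding the centre of the spectrum with zeros (odd len(x))."""
    N = len(x)
    assert N % 2 == 1, "odd length avoids splitting a Nyquist bin"
    X = np.fft.fft(x)
    h = (N + 1) // 2   # number of non-negative-frequency bins
    X_pad = np.concatenate([X[:h], np.zeros(N * (factor - 1)), X[h:]])
    # Scale by factor so the original sample values are preserved
    return np.real(np.fft.ifft(X_pad)) * factor

t = np.arange(15)
x = np.cos(2 * np.pi * 2 * t / 15)     # band-limited test signal
y = zero_fill_interpolate(x, 2)        # twice the spectral/temporal density
```

For a band-limited input the original samples reappear exactly at every `factor`-th output position, with interpolated values in between.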

  20. Steady State Stokes Flow Interpolation for Fluid Control

    DEFF Research Database (Denmark)

    Bhattacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

    2012-01-01

    Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow — suffer from a common problem. They fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...

  1. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

    Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize the given planar data. Constraints are derived on these free shape parameters to generate shape preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is shown to be O(h^3). Numerical experiments demonstrate that our method can construct nice shape preserving interpolation curves efficiently.

  2. Convection in Slab and Spheroidal Geometries

    Science.gov (United States)

    Porter, David H.; Woodward, Paul R.; Jacobs, Michael L.

    2000-01-01

    Three-dimensional numerical simulations of compressible turbulent thermally driven convection, in both slab and spheroidal geometries, are reviewed and analyzed in terms of velocity spectra and mixing-length theory. The same ideal gas model is used in both geometries, and resulting flows are compared. The piecewise-parabolic method (PPM), with either thermal conductivity or photospheric boundary conditions, is used to solve the fluid equations of motion. Fluid motions in both geometries exhibit a Kolmogorov-like k^(-5/3) range in their velocity spectra. The longest wavelength modes are energetically dominant in both geometries, typically leading to one convection cell dominating the flow. In spheroidal geometry, a dipolar flow dominates the largest scale convective motions. Downflows are intensely turbulent and updrafts are relatively laminar in both geometries. In slab geometry, correlations between temperature and velocity fluctuations, which lead to the enthalpy flux, are fairly independent of depth. In spheroidal geometry this same correlation increases linearly with radius over the inner 70 percent by radius, in which the local pressure scale heights are a sizable fraction of the radius. The effects from the impenetrable boundary conditions in the slab geometry models are confused with the effects from non-local convection. In spheroidal geometry nonlocal effects, due to coherent plumes, are seen as far as several pressure scale heights from the lower boundary and are clearly distinguishable from boundary effects.

  3. Chiral properties of baryon interpolating fields

    International Nuclear Information System (INIS)

    Nagata, Keitaro; Hosaka, Atsushi; Dmitrasinovic, V.

    2008-01-01

    We study the chiral transformation properties of all possible local (non-derivative) interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We derive and use the relations/identities among the baryon operators with identical quantum numbers that follow from the combined color, Dirac and isospin Fierz transformations. These relations reduce the number of independent baryon operators with any given spin and isospin. The Fierz identities also effectively restrict the allowed baryon chiral multiplets. It turns out that the non-derivative baryons' chiral multiplets have the same dimensionality as their Lorentz representations. For the two independent nucleon operators the only permissible chiral multiplet is the fundamental one, (1/2,0)+(0,1/2). For the Δ, admissible Lorentz representations are (1,1/2)+(1/2,1) and (3/2,0)+(0,3/2). In the case of the (1,1/2)+(1/2,1) chiral multiplet, the I(J)=3/2(3/2) Δ field has one I(J)=1/2(3/2) chiral partner; otherwise it has none. We also consider the Abelian (U_A(1)) chiral transformation properties of the fields and show that each baryon comes in two varieties: (1) with Abelian axial charge +3; and (2) with Abelian axial charge -1. In case of the nucleon these are the two Ioffe fields; in case of the Δ, the (1,1/2)+(1/2,1) multiplet has an Abelian axial charge -1 and the (3/2,0)+(0,3/2) multiplet has an Abelian axial charge +3. (orig.)

  4. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in the statistical characteristics and autocorrelation features. From the aspect of form features, the simulations of the midpoint displacement method showed a relatively flat surface which appears to have peaks with different heights as the fractal dimension increases. The simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface which appears to have dense and highly similar peaks as the fractal dimension increases. From the aspect of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic with prominent randomness, which is suitable for simulating aperiodic surfaces. While the simulation of the Weierstrass-Mandelbrot fractal function method has
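
The one-dimensional midpoint displacement construction is simple enough to sketch: recursively insert midpoints and perturb them with noise whose amplitude shrinks by 2^(-H) at each level. A minimal version (parameter values are illustrative, not the paper's):

```python
import numpy as np

def midpoint_displacement(levels, H=0.7, scale=1.0, seed=0):
    """1-D fractal profile by recursive midpoint displacement.

    H is the Hurst-like roughness exponent; smaller H gives a rougher
    profile (fractal dimension D = 2 - H for a 1-D trace).
    """
    rng = np.random.default_rng(seed)
    pts = np.array([0.0, 0.0])   # endpoints of the initial segment
    for _ in range(levels):
        mids = 0.5 * (pts[:-1] + pts[1:]) + scale * rng.standard_normal(len(pts) - 1)
        out = np.empty(2 * len(pts) - 1)
        out[0::2] = pts          # keep existing points
        out[1::2] = mids         # insert displaced midpoints
        pts = out
        scale *= 2.0 ** (-H)     # shrink the displacement at each level
    return pts

profile = midpoint_displacement(levels=8)
```
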

  5. Fiber orientation interpolation for the multiscale analysis of short fiber reinforced composite parts

    Science.gov (United States)

    Köbler, Jonathan; Schneider, Matti; Ospald, Felix; Andrä, Heiko; Müller, Ralf

    2018-06-01

    For short fiber reinforced plastic parts the local fiber orientation has a strong influence on the mechanical properties. To enable multiscale computations using surrogate models we advocate a two-step identification strategy. Firstly, for a number of sample orientations an effective model is derived by numerical methods available in the literature. Secondly, to cover a general orientation state, these effective models are interpolated. In this article we develop a novel and effective strategy to carry out this interpolation. First, taking into account symmetry arguments, we reduce the fiber orientation phase space to a triangle in R^2. For an associated triangulation of this triangle we furnish each node with a surrogate model. Then, we use linear interpolation on the fiber orientation triangle to equip each fiber orientation state with an effective stress. The proposed approach is quite general, and works for any physically nonlinear constitutive law on the micro-scale, as long as surrogate models for single fiber orientation states can be extracted. To demonstrate the capabilities of our scheme we study the viscoelastic creep behavior of short glass fiber reinforced PA66, and use Schapery's collocation method together with FFT-based computational homogenization to derive single orientation state effective models. We discuss the efficient implementation of our method, and present results of a component scale computation on a benchmark component by using ABAQUS ®.
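
Linear interpolation over a triangle amounts to barycentric blending of the three node models. The sketch below illustrates this with a scalar "effective modulus" standing in for a full surrogate model; the triangle nodes and moduli are hypothetical, not the paper's data.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p in the triangle (a, b, c)."""
    T = np.column_stack([b - a, c - a])
    w1, w2 = np.linalg.solve(T, p - a)
    return np.array([1.0 - w1 - w2, w1, w2])

# Hypothetical fiber-orientation triangle nodes and per-node effective moduli (GPa)
nodes = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.5, 1.0])]
node_modulus = np.array([5.0, 12.0, 8.0])

p = np.array([0.5, 0.5])                  # query orientation state
w = barycentric_weights(p, *nodes)
modulus_p = w @ node_modulus              # linearly blended effective response
```

In the paper the blended quantity is an effective stress from a nonlinear surrogate, but the weighting scheme is the same.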

  6. Data-adapted moving least squares method for 3-D image interpolation

    International Nuclear Information System (INIS)

    Jang, Sumi; Lee, Yeon Ju; Jeong, Byeongseon; Nam, Haewon; Lee, Rena; Yoon, Jungho

    2013-01-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that improve on the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown and then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons. (paper)
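
The classical (non-data-adapted) moving least squares idea can be sketched in one dimension: at each evaluation point, fit a low-degree polynomial by least squares with weights that decay with distance from that point. This is an illustrative sketch, not the paper's data-adapted 3-D scheme; the bandwidth and degree are arbitrary choices.

```python
import numpy as np

def mls_eval(xs, ys, x0, degree=2, h=0.6):
    """Classical moving least squares: local weighted polynomial fit at x0."""
    w = np.exp(-((xs - x0) / h) ** 2)          # Gaussian weights centred on x0
    # np.polyfit applies w to the residuals, so pass the square root
    coeffs = np.polyfit(xs, ys, degree, w=np.sqrt(w))
    return np.polyval(coeffs, x0)

xs = np.linspace(0.0, 2.0, 11)
ys = 2.0 * xs**2 - xs + 1.0                    # samples of an exact quadratic

# Degree-2 MLS reproduces quadratics exactly (up to round-off)
y_hat = mls_eval(xs, ys, 0.37)
```

Polynomial reproduction of this kind is exactly the property the abstract refers to as "reproducing polynomials of a certain degree".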

  8. Comparative Analysis of Spatial Interpolation Methods in the Mediterranean Area: Application to Temperature in Sicily

    Directory of Open Access Journals (Sweden)

    Annalisa Di Piazza

    2015-04-01

    Full Text Available An exhaustive comparison among different spatial interpolation algorithms was carried out in order to derive annual and monthly air temperature maps for Sicily (Italy). Deterministic, data-driven and geostatistical algorithms were used, in some cases adding the elevation information and other physiographic variables to improve the performance of the interpolation techniques and the reconstruction of the air temperature field. The dataset is given by air temperature data coming from 84 stations spread around the island of Sicily. The interpolation algorithms were optimized by using a subset of the available dataset, while the remaining subset was used to validate the results in terms of the accuracy and bias of the estimates. Validation results indicate that univariate methods, which neglect the information from physiographic variables, entail significantly larger errors, while performances improve when such parameters are taken into account. The best results at the annual scale have been obtained using ordinary kriging of residuals from linear regression and the artificial neural network algorithm, while, at the monthly scale, a Fourier-series algorithm has been used to downscale mean annual temperature to reproduce monthly values in the annual cycle.

  9. A novel efficient coupled polynomial field interpolation scheme for higher order piezoelectric extension mode beam finite elements

    International Nuclear Information System (INIS)

    Sulbhewar, Litesh N; Raveendranath, P

    2014-01-01

    An efficient piezoelectric smart beam finite element based on Reddy’s third-order displacement field and layerwise linear potential is presented here. The present formulation is based on the coupled polynomial field interpolation of variables, unlike conventional piezoelectric beam formulations that use independent polynomials. Governing equations derived using a variational formulation are used to establish the relationship between field variables. The resulting expressions are used to formulate coupled shape functions. Starting with an assumed cubic polynomial for transverse displacement (w) and a linear polynomial for electric potential (φ), coupled polynomials for axial displacement (u) and section rotation (θ) are found. This leads to a coupled quadratic polynomial representation for axial displacement (u) and section rotation (θ). The formulation allows accommodation of extension–bending, shear–bending and electromechanical couplings at the interpolation level itself, in a variationally consistent manner. The proposed interpolation scheme is shown to eliminate the locking effects exhibited by conventional independent polynomial field interpolations and improve the convergence characteristics of HSDT based piezoelectric beam elements. Also, the present coupled formulation uses only three mechanical degrees of freedom per node, one less than the conventional formulations. Results from numerical test problems prove the accuracy and efficiency of the present formulation. (paper)

  10. Rhie-Chow interpolation in strong centrifugal fields

    Science.gov (United States)

    Bogovalov, S. V.; Tronin, I. V.

    2015-10-01

    Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10^6 g) occurring in gas centrifuges.

  11. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

    Directory of Open Access Journals (Sweden)

    Xiaowei Niu

    2014-05-01

    Full Text Available Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. The polynomial-based interpolation filters can be implemented efficiently by using a modified Farrow structure with an arbitrary frequency response; the filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing the weighted mean squared error function, converting the problem to a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maximum (MiniMax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
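
In a Farrow structure, each output sample is a polynomial in the fractional delay μ, evaluated by Horner's rule over a small bank of fixed subfilters. The sketch below uses the standard cubic Lagrange coefficients as a stand-in for the paper's optimized coefficients, so it illustrates the structure rather than the design method.

```python
import numpy as np

def farrow_cubic(x, n, mu):
    """Cubic Lagrange interpolation between x[n] and x[n+1] at fraction mu,
    evaluated in Farrow form (Horner's rule in the fractional delay mu)."""
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    # Fixed subfilter outputs: polynomial coefficients in mu
    c0 = x0
    c1 = -xm1 / 3 - x0 / 2 + x1 - x2 / 6
    c2 = xm1 / 2 - x0 + x1 / 2
    c3 = -xm1 / 6 + x0 / 2 - x1 / 2 + x2 / 6
    return ((c3 * mu + c2) * mu + c1) * mu + c0

ramp = np.arange(8, dtype=float)   # a cubic interpolator is exact on a ramp
y = farrow_cubic(ramp, 3, 0.25)
```

Because only μ changes from sample to sample, the four subfilter outputs can be shared across arbitrary resampling ratios, which is what makes the structure cheap in hardware.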

  12. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, registration is then carried out by using a mutual information algorithm, and finally the improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used for reconstruction to restore the information missed between image slices, which can compensate for the shortcomings of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
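
The mutual information criterion used for the registration step can be estimated from a joint histogram of the two images: aligned images share more information than misaligned ones. A minimal histogram-based estimate (illustrative only, with random data standing in for PET/CT intensities):

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Histogram estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                              # skip empty bins to avoid log(0)
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random(5000)
aligned = mutual_information(img, img)                 # identical images: high MI
shuffled = mutual_information(img, rng.permutation(img))  # destroyed correspondence
```

A registration algorithm searches over transformations for the one maximizing this quantity.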

  13. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  14. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
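
A 2-D thin-plate spline interpolant is a combination of the radial basis r²·log r plus an affine term, with weights obtained from one linear solve. A minimal sketch (synthetic points and targets; the paper's input space is much higher-dimensional):

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate spline radial basis r^2 * log(r), defined as 0 at r = 0."""
    return np.where(r > 0, r**2 * np.log(np.maximum(r, 1e-300)), 0.0)

def tps_fit(pts, vals):
    """Solve for TPS weights interpolating vals at 2-D points pts."""
    n = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    K = tps_kernel(r)
    P = np.column_stack([np.ones(n), pts])          # affine part [1, x, y]
    A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([vals, np.zeros(3)])
    return np.linalg.solve(A, rhs)                  # kernel weights + affine coeffs

def tps_eval(pts, coef, q):
    r = np.linalg.norm(pts - q, axis=-1)
    return tps_kernel(r) @ coef[:len(pts)] + coef[len(pts):] @ np.array([1.0, *q])

rng = np.random.default_rng(1)
pts = rng.random((12, 2))
vals = np.sin(3 * pts[:, 0]) + pts[:, 1]            # arbitrary smooth targets
coef = tps_fit(pts, vals)
```

The spline passes through every training sample exactly while minimizing a bending-energy functional between them, which is what makes it attractive for scattered-data interpolation.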

  15. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
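
The interpolation needed at sub-pixel motion positions is typically bilinear (it is also what GPU texture units provide in hardware). A minimal CPU reference version, purely illustrative:

```python
import numpy as np

def bilinear(img, y, x):
    """Sample img at sub-pixel location (y, x) by bilinear interpolation."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    fy, fx = y - y0, x - x0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1]
    bot = (1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]
    return (1 - fy) * top + fy * bot

yy, xx = np.mgrid[0:8, 0:8]
img = (yy + 2.0 * xx).astype(float)   # bilinear is exact on affine images
val = bilinear(img, 1.5, 2.25)
```

A sub-pixel block-matching search evaluates this at every pixel of every candidate block, which is why offloading it to graphics hardware pays off.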

  16. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the process of display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods, each with a different sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
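
The cubic convolution kernel and its sharpness control parameter can be sketched directly. Below is the standard two-piece kernel with Keys' common choice a = -0.5; the six methods in the paper correspond to different values of this parameter.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Cubic convolution kernel; a is the sharpness control parameter."""
    s = abs(s)
    if s < 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * (s**3 - 5 * s**2 + 8 * s - 4)
    return 0.0

def cubic_interp(samples, x, a=-0.5):
    """Interpolate uniformly spaced samples at a real-valued position x
    using the four nearest neighbours."""
    k0 = int(np.floor(x))
    return sum(cubic_kernel(x - k, a) * samples[k] for k in range(k0 - 1, k0 + 3))

f = [2.0 * k + 1.0 for k in range(8)]   # linear data: reproduced exactly
v = cubic_interp(f, 2.3)
```

The kernel interpolates (value 1 at 0, value 0 at the other integers), so the original samples are always preserved regardless of a.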

  17. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by combining two information sources: a statistical model adopted to mine underlying information, and an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of image visualization and quantitative measures.
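
The Gaussian process regression component can be illustrated in one dimension: condition an RBF-kernel GP on known samples and read off the posterior mean on a finer grid. This is a generic GP sketch, not the paper's energy-driven algorithm; the kernel length scale and noise level are arbitrary choices.

```python
import numpy as np

def rbf(A, B, length=0.2):
    """Squared-exponential kernel between 1-D point sets A and B."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def gp_predict(x_train, y_train, x_test, noise=1e-8):
    """GP posterior mean and variance with an RBF kernel."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = rbf(x_test, x_test).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    return mean, var

x = np.linspace(0.0, 1.0, 8)      # "low-resolution" samples
y = np.sin(2 * np.pi * x)
xs = np.linspace(0.0, 1.0, 31)    # finer grid to interpolate onto
mean, var = gp_predict(x, y, xs)
```

The posterior mean provides the interpolated values, and the posterior variance indicates where the prediction is least certain, which is the kind of statistical side information the algorithm above exploits.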

  18. Geometry and its applications

    CERN Document Server

    Meyer, Walter J

    2006-01-01

Meyer's Geometry and Its Applications, Second Edition, combines traditional geometry with current ideas to present a modern approach that is grounded in real-world applications. It balances the deductive approach with discovery learning, and introduces axiomatic, Euclidean geometry, non-Euclidean geometry, and transformational geometry. The text integrates applications and examples throughout and includes historical notes in many chapters. The Second Edition of Geometry and Its Applications is a significant text for any college or university that focuses on geometry's usefulness in other disciplines. It is especially appropriate for engineering and science majors, as well as future mathematics teachers.* Realistic applications integrated throughout the text, including (but not limited to): - Symmetries of artistic patterns- Physics- Robotics- Computer vision- Computer graphics- Stability of architectural structures- Molecular biology- Medicine- Pattern recognition* Historical notes included in many chapters...

  19. Algebraic geometry in India

    Indian Academy of Sciences (India)

    algebraic geometry but also in related fields like number theory. ... every vector bundle on the affine space is trivial. (equivalently ... les on a compact Riemann surface to unitary rep- ... tial geometry and topology and was generalised in.

  20. Spinorial Geometry and Branes

    International Nuclear Information System (INIS)

    Sloane, Peter

    2007-01-01

We adapt the spinorial geometry method introduced in [J. Gillard, U. Gran and G. Papadopoulos, 'The spinorial geometry of supersymmetric backgrounds,' Class. Quant. Grav. 22 (2005) 1033 (arXiv:hep-th/0410155)].

  1. Spinorial Geometry and Branes

    Energy Technology Data Exchange (ETDEWEB)

Sloane, Peter [Department of Mathematics, King's College, University of London, Strand, London WC2R 2LS (United Kingdom)]

    2007-09-15

We adapt the spinorial geometry method introduced in [J. Gillard, U. Gran and G. Papadopoulos, 'The spinorial geometry of supersymmetric backgrounds,' Class. Quant. Grav. 22 (2005) 1033 (arXiv:hep-th/0410155)].

  2. Metrics for Probabilistic Geometries

    DEFF Research Database (Denmark)

    Tosi, Alessandra; Hauberg, Søren; Vellido, Alfredo

    2014-01-01

    the distribution over mappings is given by a Gaussian process. We treat the corresponding latent variable model as a Riemannian manifold and we use the expectation of the metric under the Gaussian process prior to define interpolating paths and measure distance between latent points. We show how distances...

  3. Spatial interpolation of point velocities in stream cross-section

    Directory of Open Access Journals (Sweden)

    Hasníková Eliška

    2015-03-01

Full Text Available The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. The output of measurement with this instrument is a small set of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of interpolation models.

  4. The Convergence Acceleration of Two-Dimensional Fourier Interpolation

    Directory of Open Access Journals (Sweden)

    Anry Nersessian

    2008-07-01

Full Text Available Hereby, the convergence acceleration of two-dimensional trigonometric interpolation for smooth functions on a uniform mesh is considered. Together with theoretical estimates, some numerical results are presented and discussed that reveal the potential of this method for application in image processing. Experiments show that the suggested algorithm accelerates conventional Fourier interpolation even for sparse meshes, which can lead to efficient image compression/decompression algorithms and also to applications in image zooming procedures.

  5. Computational geometry lectures at the morningside center of mathematics

    CERN Document Server

    Wang, Ren-Hong

    2003-01-01

    Computational geometry is a borderline subject related to pure and applied mathematics, computer science, and engineering. The book contains articles on various topics in computational geometry, which are based on invited lectures and some contributed papers presented by researchers working during the program on Computational Geometry at the Morningside Center of Mathematics of the Chinese Academy of Science. The opening article by R.-H. Wang gives a nice survey of various aspects of computational geometry, many of which are discussed in more detail in other papers in the volume. The topics include problems of optimal triangulation, splines, data interpolation, problems of curve and surface design, problems of shape control, quantum teleportation, and others.

  6. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
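The evaluation protocol described in this record (scale an image down, rescale it back with the same algorithm, and compare with the original) can be sketched as follows; the bilinear resizer and the ramp test image are assumptions for illustration, not reference implementations of the nine surveyed methods:

```python
import numpy as np

def resize_bilinear(img, out_h, out_w):
    """Bilinear resize of a 2-D grayscale array to (out_h, out_w)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def roundtrip_mse(img, method, factor=2):
    """Downscale then upscale with the same method; compare to the original."""
    small = method(img, img.shape[0] // factor, img.shape[1] // factor)
    back = method(small, *img.shape)
    return float(np.mean((img - back) ** 2))
```

A linear ramp is reproduced exactly by bilinear resizing, so its round-trip MSE sits at machine precision; textured images are what separate the methods.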

  7. Monolithic front-end ICs for interpolating cathode pad and strip detectors for GEM

    International Nuclear Information System (INIS)

    O'Connor, P.

    1993-05-01

We are developing CMOS circuits for readout of interpolating cathode strip and pad chambers for the GEM experiment at the SSC. Because these detectors require position resolution of about 1% of the strip pitch, the electronic noise level must be less than 2000 electrons. Several test chips have been fabricated to demonstrate the feasibility of achieving the combination of low noise, speed, and wide dynamic range in CMOS. Results to date show satisfactory noise and linearity performance. Future development will concentrate on radiation-hardening the central tracker ASIC design, optimizing the shaper peaking time and noise contribution, providing more user-configurable output options, and packaging and test issues.

  8. Maximum Feedrate Interpolator for Multi-axis CNC Machining with Jerk Constraints

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2012-01-01

A key role of the CNC is to perform the feedrate interpolation, that is, to generate the setpoints for each machine tool axis. The aim of the VPOp algorithm is to make maximum use of the machine tool while respecting both tangential and axis jerk constraints on rotary and linear axes. The developed algorithm uses an iterative constraint-intersection approach. At each sampling period, all the constraints given by each axis are expressed, and by intersecting all of them the allowable interval for the next point...

  9. Hermite interpolant multiscaling functions for numerical solution of the convection diffusion equations

    Directory of Open Access Journals (Sweden)

    Elmira Ashpazzadeh

    2018-04-01

Full Text Available A numerical technique based on Hermite interpolant multiscaling functions is presented for the solution of convection-diffusion equations. The operational matrices of derivative, integration and product are presented for the multiscaling functions and are utilized to reduce the solution of the linear convection-diffusion equation to the solution of algebraic equations. Because of the sparsity of these matrices, this method is computationally very attractive and reduces the CPU time and computer memory. Illustrative examples are included to demonstrate the validity and applicability of the new technique.

  10. Commutative and Non-commutative Parallelogram Geometry: an Experimental Approach

    OpenAIRE

    Bertram, Wolfgang

    2013-01-01

By "parallelogram geometry" we mean the elementary, "commutative", geometry corresponding to vector addition, and by "trapezoid geometry" a certain "non-commutative deformation" of the former. This text presents an elementary approach via exercises using dynamical software (such as geogebra), hopefully accessible to a wide mathematical audience, from undergraduate students and high school teachers to researchers, proceeding in three steps: (1) experimental geometry, (2) algebra (linear algebra...

  11. Geometry essentials for dummies

    CERN Document Server

    Ryan, Mark

    2011-01-01

Just the critical concepts you need to score high in geometry This practical, friendly guide focuses on critical concepts taught in a typical geometry course, from the properties of triangles, parallelograms, circles, and cylinders, to the skills and strategies you need to write geometry proofs. Geometry Essentials For Dummies is perfect for cramming or doing homework, or as a reference for parents helping kids study for exams. Get down to the basics - get a handle on the basics of geometry, from lines, segments, and angles, to vertices, altitudes, and diagonals Conquer...

  12. 5-D interpolation with wave-front attributes

    Science.gov (United States)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information of subsurface features, such as the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and since the two problems mentioned above are addressed we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is also given.

  13. Stable isogeometric analysis of trimmed geometries

    Science.gov (United States)

    Marussig, Benjamin; Zechner, Jürgen; Beer, Gernot; Fries, Thomas-Peter

    2017-04-01

We explore extended B-splines as a stable basis for isogeometric analysis with trimmed parameter spaces. The stabilization is accomplished by an appropriate substitution of the B-splines that would otherwise lead to ill-conditioned system matrices. The construction for non-uniform knot vectors is presented. The properties of extended B-splines are examined in the context of interpolation, potential, and linear elasticity problems, and excellent results are attained. The analysis is performed by an isogeometric boundary element formulation using collocation. It is argued that extended B-splines provide a flexible and simple stabilization scheme which ideally suits the isogeometric paradigm.

  14. Malvar-He-Cutler Linear Image Demosaicking

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-08-01

Full Text Available Image demosaicking (or demosaicing) is the interpolation problem of estimating complete color information for an image that has been captured through a color filter array (CFA), particularly on the Bayer pattern. In this paper we review a simple linear method using 5 x 5 filters, proposed by Malvar, He, and Cutler in 2004, that shows surprisingly good results.
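For contrast with the 5 x 5 gradient-corrected filters reviewed in this record (whose exact coefficients are not reproduced here), the bilinear baseline that Malvar, He, and Cutler improve upon can be sketched, assuming an RGGB Bayer layout; the kernels and the normalized-convolution border handling are illustration choices:

```python
import numpy as np

G_KERNEL = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
RB_KERNEL = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

def conv2(img, k):
    """Small 'same' convolution with edge padding (no SciPy dependency)."""
    ph, pw = k.shape[0] // 2, k.shape[1] // 2
    p = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape)
    for i in range(k.shape[0]):
        for j in range(k.shape[1]):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def demosaic_bilinear(cfa):
    """Bilinear demosaicking of an RGGB Bayer mosaic (assumed layout)."""
    h, w = cfa.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {
        "r": (yy % 2 == 0) & (xx % 2 == 0),
        "g": (yy % 2) != (xx % 2),
        "b": (yy % 2 == 1) & (xx % 2 == 1),
    }
    out = np.zeros((h, w, 3))
    for c, (name, k) in enumerate([("r", RB_KERNEL), ("g", G_KERNEL),
                                   ("b", RB_KERNEL)]):
        m = masks[name].astype(float)
        # normalized convolution: average the recorded samples of this channel
        out[..., c] = conv2(cfa * m, k) / conv2(m, k)
    return out
```

Each missing color sample is the average of its nearest recorded neighbors of that color; the reviewed method adds a gradient-correction term from the co-located channel on top of this.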

  15. Nonperturbative quantum geometries

    International Nuclear Information System (INIS)

    Jacobson, T.; California Univ., Santa Barbara; Smolin, L.; California Univ., Santa Barbara

    1988-01-01

    Using the self-dual representation of quantum general relativity, based on Ashtekar's new phase space variables, we present an infinite dimensional family of quantum states of the gravitational field which are exactly annihilated by the hamiltonian constraint. These states are constructed from Wilson loops for Ashtekar's connection (which is the spatial part of the left handed spin connection). We propose a new regularization procedure which allows us to evaluate the action of the hamiltonian constraint on these states. Infinite linear combinations of these states which are formally annihilated by the diffeomorphism constraints as well are also described. These are explicit examples of physical states of the gravitational field - and for the compact case are exact zero eigenstates of the hamiltonian of quantum general relativity. Several different approaches to constructing diffeomorphism invariant states in the self dual representation are also described. The physical interpretation of the states described here is discussed. However, as we do not yet know the physical inner product, any interpretation is at this stage speculative. Nevertheless, this work suggests that quantum geometry at Planck scales might be much simpler when explored in terms of the parallel transport of left-handed spinors than when explored in terms of the three metric. (orig.)

  16. Arithmetic noncommutative geometry

    CERN Document Server

    Marcolli, Matilde

    2005-01-01

    Arithmetic noncommutative geometry denotes the use of ideas and tools from the field of noncommutative geometry, to address questions and reinterpret in a new perspective results and constructions from number theory and arithmetic algebraic geometry. This general philosophy is applied to the geometry and arithmetic of modular curves and to the fibers at archimedean places of arithmetic surfaces and varieties. The main reason why noncommutative geometry can be expected to say something about topics of arithmetic interest lies in the fact that it provides the right framework in which the tools of geometry continue to make sense on spaces that are very singular and apparently very far from the world of algebraic varieties. This provides a way of refining the boundary structure of certain classes of spaces that arise in the context of arithmetic geometry, such as moduli spaces (of which modular curves are the simplest case) or arithmetic varieties (completed by suitable "fibers at infinity"), by adding boundaries...

  17. A primer on linear models

    CERN Document Server

    Monahan, John F

    2008-01-01

    Preface Examples of the General Linear Model Introduction One-Sample Problem Simple Linear Regression Multiple Regression One-Way ANOVA First Discussion The Two-Way Nested Model Two-Way Crossed Model Analysis of Covariance Autoregression Discussion The Linear Least Squares Problem The Normal Equations The Geometry of Least Squares Reparameterization Gram-Schmidt Orthonormalization Estimability and Least Squares Estimators Assumptions for the Linear Mean Model Confounding, Identifiability, and Estimability Estimability and Least Squares Estimators F

  18. Basic linear algebra

    CERN Document Server

    Blyth, T S

    2002-01-01

    Basic Linear Algebra is a text for first year students leading from concrete examples to abstract theorems, via tutorial-type exercises. More exercises (of the kind a student may expect in examination papers) are grouped at the end of each section. The book covers the most important basics of any first course on linear algebra, explaining the algebra of matrices with applications to analytic geometry, systems of linear equations, difference equations and complex numbers. Linear equations are treated via Hermite normal forms which provides a successful and concrete explanation of the notion of linear independence. Another important highlight is the connection between linear mappings and matrices leading to the change of basis theorem which opens the door to the notion of similarity. This new and revised edition features additional exercises and coverage of Cramer's rule (omitted from the first edition). However, it is the new, extra chapter on computer assistance that will be of particular interest to readers:...

  19. An introduction to linear algebra and tensors

    CERN Document Server

    Akivis, M A; Silverman, Richard A

    1978-01-01

    Eminently readable, completely elementary treatment begins with linear spaces and ends with analytic geometry, covering multilinear forms, tensors, linear transformation, and more. 250 problems, most with hints and answers. 1972 edition.

  20. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1986-01-01

We investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. We consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems. (author)

  1. Effect of interpolation error in pre-processing codes on calculations of self-shielding factors and their temperature derivatives

    International Nuclear Information System (INIS)

    Ganesan, S.; Gopalakrishnan, V.; Ramanadhan, M.M.; Cullen, D.E.

    1985-01-01

The authors investigate the effect of interpolation error in the pre-processing codes LINEAR, RECENT and SIGMA1 on calculations of self-shielding factors and their temperature derivatives. They consider the 2.0347 to 3.3546 keV energy region for 238U capture, which is the NEACRP benchmark exercise on unresolved parameters. The calculated values of temperature derivatives of self-shielding factors are significantly affected by interpolation error. The sources of problems in both evaluated data and codes are identified and eliminated in the 1985 version of these codes. This paper helps to (1) inform code users to use only 1985 versions of LINEAR, RECENT, and SIGMA1 and (2) inform designers of other code systems where they may have problems and what to do to eliminate their problems

  2. Euclidean distance geometry an introduction

    CERN Document Server

    Liberti, Leo

    2017-01-01

    This textbook, the first of its kind, presents the fundamentals of distance geometry:  theory, useful methodologies for obtaining solutions, and real world applications. Concise proofs are given and step-by-step algorithms for solving fundamental problems efficiently and precisely are presented in Mathematica®, enabling the reader to experiment with concepts and methods as they are introduced. Descriptive graphics, examples, and problems, accompany the real gems of the text, namely the applications in visualization of graphs, localization of sensor networks, protein conformation from distance data, clock synchronization protocols, robotics, and control of unmanned underwater vehicles, to name several.  Aimed at intermediate undergraduates, beginning graduate students, researchers, and practitioners, the reader with a basic knowledge of linear algebra will gain an understanding of the basic theories of distance geometry and why they work in real life.

  3. Differential geometry and mathematical physics

    CERN Document Server

    Rudolph, Gerd

Starting from an undergraduate level, this book systematically develops the basics of • Calculus on manifolds, vector bundles, vector fields and differential forms, • Lie groups and Lie group actions, • Linear symplectic algebra and symplectic geometry, • Hamiltonian systems, symmetries and reduction, integrable systems and Hamilton-Jacobi theory. The topics listed under the first item are relevant for virtually all areas of mathematical physics. The second and third items constitute the link between abstract calculus and the theory of Hamiltonian systems. The last item provides an introduction to various aspects of this theory, including Morse families, the Maslov class and caustics. The book guides the reader from elementary differential geometry to advanced topics in the theory of Hamiltonian systems with the aim of making current research literature accessible. The style is that of a mathematical textbook, with full proofs given in the text or as exercises. The material is illustrated by numerous d...

  4. A Potential Issue Involving the Application of the Unit Base Transformation to the Interpolation of Secondary Energy Distributions

    International Nuclear Information System (INIS)

    T Sutton; T Trumbull

    2005-01-01

Secondary neutron energy spectra used by Monte Carlo codes are often provided in tabular format. Examples are the spectra obtained from ENDF/B-VI File 5 when the LF parameter has the value 1. These secondary spectra are tabulated on an incident energy mesh, and in a Monte Carlo calculation the tabulated spectra are generally interpolated to the energy of the incident neutron. A common method of interpolation involves the use of the unit base transformation. The details of the implementation vary from code to code, so here we will simply focus on the mathematics of the method. Given an incident neutron with energy E, the bracketing points E_i and E_{i+1} on the incident energy mesh are determined. The corresponding secondary energy spectra are transformed to a dimensionless energy coordinate system in which the secondary energies lie between zero and one. A dimensionless secondary energy is then sampled from a spectrum obtained by linearly interpolating the transformed spectra--often using the method of statistical interpolation. Finally, the sampled secondary energy is transformed back into the normal energy coordinate system. For this inverse transformation, the minimum and maximum energies are linearly interpolated from the values given in the non-transformed secondary spectra. The purpose of the unit base transformation is to preserve (as nearly as possible) the physics of the secondary distribution--in particular the minimum and maximum energies possible for the secondary neutron. This method is used by several codes including MCNP and the new MC21 code that is the subject of this paper. In comparing MC21 results to those of MCNP, it was discovered that the nuclear data supplied to MCNP is structured in such a way that the code may not be doing the best possible job of preserving the physics of certain nuclear interactions. In this paper, we describe the problem and explain how it may be avoided.
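The sampling procedure described in this record can be sketched end to end (a hedged sketch: `sample_lo`/`sample_hi` stand for hypothetical samplers of the two bracketing tabulated spectra and `bounds_lo`/`bounds_hi` for their minimum/maximum secondary energies; none of this is MCNP or MC21 code):

```python
import random

def lerp(a, b, t):
    return a + t * (b - a)

def unit_base_sample(E, E_lo, E_hi, sample_lo, sample_hi,
                     bounds_lo, bounds_hi, rng=random):
    """Sample a secondary energy at incident energy E via statistical
    interpolation combined with the unit base transformation."""
    t = (E - E_lo) / (E_hi - E_lo)
    # statistical interpolation: choose one of the bracketing spectra
    if rng.random() < t:
        e2, (lo, hi) = sample_hi(), bounds_hi
    else:
        e2, (lo, hi) = sample_lo(), bounds_lo
    x = (e2 - lo) / (hi - lo)                   # transform to unit base [0, 1]
    lo_E = lerp(bounds_lo[0], bounds_hi[0], t)  # interpolated Emin at E
    hi_E = lerp(bounds_lo[1], bounds_hi[1], t)  # interpolated Emax at E
    return lo_E + x * (hi_E - lo_E)             # inverse transformation at E
```

Because the sampled value is mapped back through the interpolated bounds, a secondary energy can never fall outside the physically allowed range at the incident energy E, which is the point of the unit base transformation.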

  5. Numerical solution of the Neutron Transport Equation using discontinuous nodal methods at X-Y geometry

    International Nuclear Information System (INIS)

    Delfin L, A.

    1996-01-01

The purpose of this work is to solve the neutron transport equation in discrete ordinates and X-Y geometry by developing and using the strong discontinuous and strong modified discontinuous nodal finite element schemes. These schemes use from two to ten interpolation parameters per cell. They are described by giving a set D_c and a polynomial space S_c for each of the schemes BDM0, RT0, BL, BDM1, HdV, BDFM1, RT1, BQ and BDM2. The solution is obtained by solving the moments of the neutron transport equation for each nodal scheme, developing the basis functions defined by the Pascal triangle and the Legendre moments given in the polynomial space S_c, and finally checking the non-singularity of the resulting linear system. The linear system is solved numerically using a computer program for each scheme mentioned. It uses the LU method with forward and backward substitution and partitions the domain into cells. The source terms and angular flux are calculated using the directions and weights associated with the S_N approximation, and the angular flux moments are solved to find the effective multiplication constant. The programs are written in Fortran, using dynamic memory allocation to use the available memory of the computing equipment efficiently. (Author)

  6. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

    International Nuclear Information System (INIS)

    Fan, J Y; Wang, F G; W, Y; Zhang, Y L

    2006-01-01

Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides the interpolation into two parts according to the memory format of the scattered data: one is interpolation of the data on the stripes, and the other is interpolation of the data between the stripes. A trend interpolation method is applied to the data on the stripes, and a Gauss wavelet interpolation method is applied to the data between the stripes. Experiments show that the regional interpolation method is feasible and practical and that it also improves speed and precision.

  7. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows many more types of methods to be used to create shape functions for numerical methods. When PIM shape functions are used, the constructed functions are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By properly introducing the generalized gradient smoothing operation, the requirement on the functions is weakened further beyond the already weakened requirement for functions in an H¹ space, and a G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations.
It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) it is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than that of the overly-stiff FEM model.

  8. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    Science.gov (United States)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall, while the NN method gave the worst results. In terms of the impact on hydrological prediction, the IDW method led to the streamflow predictions most consistent with the observations, according to validation at five streamflow-gauged locations.
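Of the four interpolators compared in this record, inverse distance weighting is the simplest to sketch (a minimal version with a power-2 weight and a small epsilon guarding the zero-distance case; the array shapes are assumptions for illustration):

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2, eps=1e-12):
    """Inverse distance weighting: each query value is a weighted mean of
    the known values, with weights proportional to 1 / distance**power."""
    # pairwise distances, shape (n_query, n_known)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w * values[None, :]).sum(axis=1) / w.sum(axis=1)
```

At a gauge location the weight diverges, so the interpolant honors the measured value; between gauges it returns a smooth distance-weighted blend.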

  9. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation has proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. Fortunately, in natural images the many nonlocal patches similar to a given patch can provide a nonlocal constraint on its local structure. In this paper, we incorporate image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary and consequently makes SRM more effective for image interpolation. Extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct edge structures and suppress jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  10. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    Science.gov (United States)

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods that use mutual information as a similarity measure have been improved substantially in recent decades. Mutual information is a basic concept of information theory that quantifies the dependency of two random variables (or two images). Evaluating the mutual information of two images requires their joint probability distribution, which is estimated with interpolation methods such as Partial Volume (PV) and bilinear interpolation. Both methods, however, introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is due not to their kernel function but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method that uses only four pixels (the same as PV and bilinear interpolation) yet removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
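    The joint-histogram estimate of mutual information that this record builds on can be sketched in a few lines of numpy (an illustrative sketch, not the authors' code; the bin count is an arbitrary choice):

    ```python
    import numpy as np

    def mutual_information(img1, img2, bins=32):
        """Mutual information of two equally sized images, estimated
        from their joint intensity histogram (a common similarity
        measure in intensity-based registration)."""
        joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
        pxy = joint / joint.sum()            # joint probability p(x, y)
        px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
        py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)
        nz = pxy > 0                         # restrict to nonzero bins
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

    # An image is maximally informative about itself: MI(a, a) equals the
    # entropy of a, which upper-bounds MI(a, b) for any independent b.
    rng = np.random.default_rng(0)
    a = rng.random((64, 64))
    b = rng.random((64, 64))
    assert mutual_information(a, a) > mutual_information(a, b)
    ```

    The interpolation artifacts discussed in the abstract enter through how off-grid intensities are distributed into this joint histogram during resampling.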

  11. A numerical calculation method for flow discretisation in complex geometry with body-fitted grids; Rechenverfahren zur Diskretisierung von Stroemungen in komplexer Geometrie mittels koerperangepasster Gitter

    Energy Technology Data Exchange (ETDEWEB)

    Jin, X.

    2001-04-01

    A numerical calculation method based on body-fitted grids is developed in this work for computational fluid dynamics in complex geometry. The method solves the conservation equations in a general nonorthogonal coordinate system that matches the curvilinear boundary. The nonorthogonal, patched grid is generated by a grid generator that solves algebraic equations; through an interface, its geometrical data can be used by this method. The conservation equations are transformed from the Cartesian system to a general curvilinear system while keeping the physical Cartesian velocity components as dependent variables. Using a staggered arrangement of variables, the three Cartesian velocity components are defined on every cell surface. Thus the coupling between pressure and velocity is ensured, and numerical oscillations are avoided. The contravariant velocity used to calculate the mass flux on a cell surface is obtained from the dependent Cartesian velocity components. After discretisation and linear interpolation, a three-dimensional 19-point pressure equation is obtained; with explicit treatment of the cross-derivative terms, it reduces to the usual 7-point equation. Under the same data and process structure, this method is compatible with the code FLUTAN using Cartesian coordinates. To verify this method, several laminar flows are simulated on orthogonal grids at tilted space directions and on nonorthogonal grids with varying cell angles. The simulated flow types include various duct flows, transient heat conduction, natural convection in a chimney, and natural convection in cavities. The results show very good agreement with analytical solutions or empirical data, and convergence for highly nonorthogonal grids is obtained. After the successful validation of this method, it is applied to a reactor safety case: a transient natural convection flow for an optional sump cooling concept, SUCO, is simulated. The numerical result is comparable with the

  12. Using Neural Networks to Improve the Performance of Radiative Transfer Modeling Used for Geometry Dependent Surface Lambertian-Equivalent Reflectivity Calculations

    Science.gov (United States)

    Fasnacht, Zachary; Qin, Wenhan; Haffner, David P.; Loyola, Diego; Joiner, Joanna; Krotkov, Nickolay; Vasilkov, Alexander; Spurr, Robert

    2017-01-01

    Surface Lambertian-equivalent reflectivity (LER) is important for trace gas retrievals in the direct calculation of cloud fractions and the indirect calculation of the air mass factor. Current trace gas retrievals use climatological surface LERs. Surface properties that affect the bidirectional reflectance distribution function (BRDF), as well as varying satellite viewing geometry, can be important for the retrieval of trace gases. Geometry-Dependent LER (GLER) captures these effects through its calculation of sun-normalized radiances (I/F) and can be used in current LER algorithms (Vasilkov et al. 2016). Pixel-by-pixel radiative transfer calculations are computationally expensive for large datasets, and modern satellite missions such as the Tropospheric Monitoring Instrument (TROPOMI) produce very large datasets because they take measurements at much higher spatial and spectral resolutions. Look-up table (LUT) interpolation improves the speed of radiative transfer calculations, but its complexity increases for non-linear functions. Neural networks perform fast calculations and can accurately predict both non-linear and linear functions with little effort.
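    The LUT trade-off described above (fast interpolation whose accuracy degrades for non-linear functions unless the table is refined) can be illustrated with a toy 1-D stand-in for the radiative transfer model; the function and all names below are invented for illustration:

    ```python
    import numpy as np

    # Pretend this is an expensive radiative-transfer quantity; a strongly
    # non-linear stand-in function is used here purely for illustration.
    def expensive_model(x):
        return np.exp(-3.0 * x) * np.sin(8.0 * x)

    # Build a coarse look-up table once...
    nodes = np.linspace(0.0, 1.0, 9)
    table = expensive_model(nodes)

    # ...then answer queries by fast linear interpolation.
    queries = np.linspace(0.0, 1.0, 1001)
    lut_answer = np.interp(queries, nodes, table)
    err_coarse = np.max(np.abs(lut_answer - expensive_model(queries)))

    # The LUT error is dominated by curvature between nodes; refining the
    # grid (or replacing the LUT with a trained network, as the abstract
    # proposes) reduces it, at the cost of more model evaluations.
    nodes_fine = np.linspace(0.0, 1.0, 129)
    lut_fine = np.interp(queries, nodes_fine, expensive_model(nodes_fine))
    err_fine = np.max(np.abs(lut_fine - expensive_model(queries)))
    assert err_fine < err_coarse
    ```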

  13. Three-dimensional reconstruction from cone-beam data using an efficient Fourier technique combined with a special interpolation filter

    International Nuclear Information System (INIS)

    Magnusson Seger, Maria

    1998-01-01

    We here present LINCON-FAST, an exact method for 3D reconstruction from cone-beam projection data. The new method is compared to the LINCON method, which is known to be fast and to give good image quality. Both methods have O(N^3 log N) complexity and are based on Grangeat's result, which states that the derivative of the Radon transform of the object function can be obtained from cone-beam projections. One disadvantage of LINCON is that the rather computationally intensive chirp z-transform is used frequently. In LINCON-FAST, FFT and interpolation in the Fourier domain are used instead, which are less computationally demanding. The computational tools involved in LINCON-FAST are solely the FFT, 1D eight-point interpolation, multiplicative weighting, and tri-linear interpolation. We estimate that LINCON-FAST will be 2-2.5 times faster than LINCON. The interpolation filter belongs to a special class of filters developed by us; it turns out that the filter must be designed very carefully to maintain good image quality. Visual inspection of experimental results shows that the image quality is almost the same for LINCON and the new method LINCON-FAST. However, it should be remembered that LINCON-FAST can never give better image quality than LINCON, since LINCON-FAST is designed to approximate LINCON as closely as possible. (author)
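    Interpolation in the Fourier domain of the kind this record relies on can be illustrated in 1-D by trigonometric interpolation via spectrum zero-padding (a generic sketch, unrelated to the actual eight-point filter design):

    ```python
    import numpy as np

    def fourier_interpolate(samples, factor):
        """Interpolate a periodic, band-limited signal onto a grid
        `factor` times finer by zero-padding its centered spectrum."""
        n = len(samples)
        m = n * factor
        spec = np.fft.fftshift(np.fft.fft(samples))
        pad = (m - n) // 2
        padded = np.pad(spec, (pad, m - n - pad))
        # scale by m/n so amplitudes are preserved after the inverse FFT
        return np.real(np.fft.ifft(np.fft.ifftshift(padded))) * factor

    # For a signal with all energy below the Nyquist bin, the
    # interpolation is exact up to floating-point roundoff.
    n, factor = 16, 4
    t = np.arange(n) / n
    coarse = np.sin(2 * np.pi * 2 * t) + 0.5 * np.cos(2 * np.pi * 3 * t)
    fine = fourier_interpolate(coarse, factor)
    t_fine = np.arange(n * factor) / (n * factor)
    exact = np.sin(2 * np.pi * 2 * t_fine) + 0.5 * np.cos(2 * np.pi * 3 * t_fine)
    assert np.allclose(fine, exact, atol=1e-10)
    ```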

  14. Real-time image-based B-mode ultrasound image simulation of needles using tensor-product interpolation.

    Science.gov (United States)

    Zhu, Mengchen; Salcudean, Septimiu E

    2011-07-01

    In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
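    Tensor-product interpolation builds a multidimensional interpolant by nesting 1-D interpolations along each axis; a minimal 2-D (bilinear) numpy sketch with illustrative names, not the authors' higher-dimensional implementation:

    ```python
    import numpy as np

    def tensor_product_interp(grid_x, grid_y, values, x, y):
        """2-D tensor-product (bilinear) interpolation on a rectilinear
        grid: interpolate along x on the two bracketing rows, then along y."""
        i = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
        j = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
        tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
        ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
        f0 = (1 - tx) * values[i, j] + tx * values[i + 1, j]
        f1 = (1 - tx) * values[i, j + 1] + tx * values[i + 1, j + 1]
        return (1 - ty) * f0 + ty * f1

    # Bilinear interpolation reproduces any bilinear field exactly.
    gx, gy = np.linspace(0, 1, 5), np.linspace(0, 2, 7)
    f = lambda x, y: 2 * x + 3 * y + x * y
    vals = f(gx[:, None], gy[None, :])
    assert np.isclose(tensor_product_interp(gx, gy, vals, 0.37, 1.21), f(0.37, 1.21))
    ```

    The same nesting extends dimension by dimension, which is what makes the method tractable for the needle pose parameters described above.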

  15. Gas proportional detectors with interpolating cathode pad readout for high track multiplicities

    International Nuclear Information System (INIS)

    Yu, Bo.

    1991-12-01

    New techniques for position encoding in very high rate particle and photon detectors will be required in experiments planned for future particle accelerators such as the Superconducting Super Collider and new, high intensity, synchrotron sources. Studies of two interpolating cathode "pad" readout systems are described in this thesis. They are well suited for high multiplicity, two dimensional unambiguous position sensitive detection of minimum ionizing particles and heavy ions as well as detection of x-rays at high counting rates. One of the readout systems uses subdivided rows of pads interconnected by resistive strips as the cathode of a multiwire proportional chamber (MWPC). A position resolution of less than 100 μm rms, for 5.4 keV x-rays, and differential non-linearity of 12% have been achieved. Low mass (∼0.6% of a radiation length) detector construction techniques have been developed. The second readout system uses rows of chevron shaped cathode pads to perform geometrical charge division. Position resolution (FWHM) of about 1% of the readout spacing and differential non-linearity of 10% for 5.4 keV x-rays have been achieved. A review of other interpolating methods is included. Low mass cathode construction techniques are described. In conclusion, applications and future developments are discussed. 54 refs

  16. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry but are now of interest for applications, while others were originally designed for applications and are now of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  17. Revolutions of Geometry

    CERN Document Server

    O'Leary, Michael

    2010-01-01

    Guides readers through the development of geometry and basic proof writing using a historical approach to the topic. In an effort to fully appreciate the logic and structure of geometric proofs, Revolutions of Geometry places proofs into the context of geometry's history, helping readers to understand that proof writing is crucial to the job of a mathematician. Written for students and educators of mathematics alike, the book guides readers through the rich history and influential works, from ancient times to the present, behind the development of geometry. As a result, readers are successfull

  18. Fundamental concepts of geometry

    CERN Document Server

    Meserve, Bruce E

    1983-01-01

    Demonstrates relationships between different types of geometry. Provides excellent overview of the foundations and historical evolution of geometrical concepts. Exercises (no solutions). Includes 98 illustrations.

  19. Developments in special geometry

    International Nuclear Information System (INIS)

    Mohaupt, Thomas; Vaughan, Owen

    2012-01-01

    We review the special geometry of N = 2 supersymmetric vector and hypermultiplets with emphasis on recent developments and applications. A new formulation of the local c-map based on the Hesse potential and special real coordinates is presented. Other recent developments include the Euclidean version of special geometry, and generalizations of special geometry to non-supersymmetric theories. As applications we discuss the proof that the local r-map and c-map preserve geodesic completeness, and the construction of four- and five-dimensional static solutions through dimensional reduction over time. The shared features of the real, complex and quaternionic version of special geometry are stressed throughout.

  20. Gaussian process regression for geometry optimization

    Science.gov (United States)

    Denzel, Alexander; Kästner, Johannes

    2018-03-01

    We implemented a geometry optimizer based on Gaussian process regression (GPR) to find minimum structures on potential energy surfaces. We tested both a two times differentiable form of the Matérn kernel and the squared exponential kernel. The Matérn kernel performs much better. We give a detailed description of the optimization procedures. These include overshooting the step resulting from GPR in order to obtain a higher degree of interpolation vs. extrapolation. In a benchmark against the Limited-memory Broyden-Fletcher-Goldfarb-Shanno optimizer of the DL-FIND library on 26 test systems, we found the new optimizer to generally reduce the number of required optimization steps.

  1. Interpolant tree automata and their application in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2016-01-01

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been applied separately before, but are combined here in a new way. Evaluation on Horn clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  2. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
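    The thin plate spline (TPS) model used above can be sketched for scalar 2-D data (the paper applies it to vector components; this is the generic textbook formulation, not the authors' code):

    ```python
    import numpy as np

    def tps_fit(points, values):
        """Fit a 2-D thin plate spline f(p) = sum_i w_i U(||p - p_i||)
        + a0 + a1*x + a2*y with kernel U(r) = r^2 log r."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        K = np.where(d > 0, d**2 * np.log(d + (d == 0)), 0.0)  # U(0) = 0
        P = np.hstack([np.ones((n, 1)), points])
        A = np.block([[K, P], [P.T, np.zeros((3, 3))]])
        sol = np.linalg.solve(A, np.concatenate([values, np.zeros(3)]))
        return sol[:n], sol[n:]           # kernel weights, affine part

    def tps_eval(points, w, a, query):
        d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
        K = np.where(d > 0, d**2 * np.log(d + (d == 0)), 0.0)
        return K @ w + a[0] + query @ a[1:]

    rng = np.random.default_rng(3)
    pts = rng.random((12, 2))
    vals = np.sin(4 * pts[:, 0]) + pts[:, 1] ** 2
    w, a = tps_fit(pts, vals)
    # a thin plate spline interpolates its fitting data exactly
    assert np.allclose(tps_eval(pts, w, a, pts), vals, atol=1e-8)
    ```

    Fitting a spline per vector component in this way both interpolates and smoothly extends the field, which is why the paper can use the same model for reconstruction and for resolution enhancement.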

  3. Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.

    Science.gov (United States)

    Pansky, Ainat; Tenenboim, Einat

    2011-01-01

    In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.

  4. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
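    A minimal 1-D sketch of the core idea, the posterior variance of a Gaussian process serving as an interpolation-uncertainty estimate (a generic GP regression, not the registration model itself; kernel and noise level are arbitrary choices):

    ```python
    import numpy as np

    def gp_posterior(x_train, y_train, x_query, length=1.0, noise=1e-6):
        """GP regression with a unit-variance squared-exponential kernel.
        Returns posterior mean and per-point variance; the variance is
        the uncertainty of interpolating between the training samples."""
        def k(a, b):
            return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)
        K = k(x_train, x_train) + noise * np.eye(len(x_train))
        Ks = k(x_query, x_train)
        mean = Ks @ np.linalg.solve(K, y_train)
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        return mean, var

    x_train = np.array([0.0, 1.0, 2.0])
    y_train = np.sin(x_train)
    mean, var = gp_posterior(x_train, y_train, np.array([0.0, 1.0, 2.0, 10.0]))
    assert np.allclose(mean[:3], y_train, atol=1e-4)  # reproduces the data
    assert np.all(var[:3] < 1e-4)                     # low uncertainty on-grid
    assert var[3] > 0.99                              # prior variance far away
    ```

    The paper's similarity measure marginalizes over this distribution instead of using only the posterior mean.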

  5. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.

  6. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  7. An adaptive interpolation scheme for molecular potential energy surfaces

    Science.gov (United States)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory is required for the electronic structure calculation. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition-of-unity approach. The adaptive node refinement makes it possible to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
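    The adaptive-refinement idea (split wherever a local error estimate exceeds a tolerance) can be sketched in 1-D with a linear interpolant standing in for the polyharmonic splines; all names and the model function are illustrative:

    ```python
    import numpy as np

    def adaptive_sample(f, a, b, tol=1e-3, max_nodes=2000):
        """Adaptively place sample nodes: split any interval whose linear
        interpolant disagrees with f at the midpoint by more than tol
        (a simple local error estimate)."""
        xs = [a, b]
        i = 0
        while i < len(xs) - 1 and len(xs) < max_nodes:
            mid = 0.5 * (xs[i] + xs[i + 1])
            lin = 0.5 * (f(xs[i]) + f(xs[i + 1]))  # interpolant at midpoint
            if abs(f(mid) - lin) > tol:
                xs.insert(i + 1, mid)              # refine locally, re-check
            else:
                i += 1                             # interval accepted
        return np.array(xs)

    # A model PES-like function with a sharp well: nodes cluster near it.
    f = lambda x: np.exp(-50.0 * (x - 0.3) ** 2) + 0.1 * x
    nodes = adaptive_sample(f, 0.0, 1.0, tol=1e-3)
    near = np.sum(np.abs(nodes - 0.3) < 0.2)
    far = np.sum(np.abs(nodes - 0.3) >= 0.2)
    assert near > far   # refinement concentrates where curvature is high
    ```

    The expensive electronic-structure call plays the role of `f` in the real algorithm; the saving comes from never sampling flat regions densely.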

  8. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point based interpolation to estimate the value of temperature at an unallocated meteorology stations in Peninsular Malaysia using data of year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods which are Inverse Distance Weighted (IDW) and Radial Basis Function (RBF) are considered. The accuracy of the methods is evaluated using Root Mean Square Error (RMSE). The results show that RBF with thin plate spline model is suitable to be used as temperature estimator for the months of January and December, while RBF with multiquadric model is suitable to estimate the temperature for the rest of the months.
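    IDW estimation and the leave-one-out cross-validation error used to compare such methods can be sketched as follows (a generic sketch with synthetic data, not the study's actual workflow; the RMSE here plays the role of the paper's accuracy measure):

    ```python
    import numpy as np

    def idw(stations, temps, query, power=2.0, eps=1e-12):
        """Inverse-distance-weighted estimate at `query` (2-D coords)."""
        d = np.linalg.norm(stations - query, axis=1)
        if np.any(d < eps):               # query coincides with a station
            return temps[np.argmin(d)]
        w = 1.0 / d**power
        return np.sum(w * temps) / np.sum(w)

    def loo_rmse(stations, temps):
        """Leave-one-out cross-validation RMSE over all stations."""
        errs = []
        for k in range(len(stations)):
            mask = np.arange(len(stations)) != k
            est = idw(stations[mask], temps[mask], stations[k])
            errs.append(est - temps[k])
        return float(np.sqrt(np.mean(np.square(errs))))

    rng = np.random.default_rng(1)
    stations = rng.random((20, 2))                      # synthetic network
    temps = 25.0 + 3.0 * stations[:, 0] - 2.0 * stations[:, 1]  # smooth field
    assert loo_rmse(stations, temps) < 3.0
    ```

    RBF estimators slot into the same cross-validation loop, which is how the two method families can be ranked month by month.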

  9. Multi-dimensional cubic interpolation for ICF hydrodynamics simulation

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Yabe, Takashi.

    1991-04-01

    A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) is greatly improved, by assuming the continuities of the second and the third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. In order to evaluate the new method, Zalesak's example is tested, and we obtain successfully good results. (author)

  10. Oversampling of digitized images. [effects on interpolation in signal processing

    Science.gov (United States)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
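    Using the Sampling Theorem to provide interpolated values, as recommended above, amounts to Whittaker-Shannon (sinc) reconstruction; a minimal sketch with an illustrative test signal:

    ```python
    import numpy as np

    def sinc_interpolate(samples, dt, t_query):
        """Whittaker-Shannon (sinc) reconstruction from uniform samples
        with spacing dt; np.sinc(x) = sin(pi x)/(pi x)."""
        n = np.arange(len(samples))
        return np.sum(samples[None, :] *
                      np.sinc((t_query[:, None] - n[None, :] * dt) / dt),
                      axis=1)

    dt = 0.05                                   # 20 Hz sampling of a 1 Hz tone
    t_n = np.arange(0.0, 10.0 + dt / 2, dt)
    samples = np.sin(2 * np.pi * t_n)
    t_query = np.linspace(3.0, 7.0, 50)         # interior, away from edges
    recon = sinc_interpolate(samples, dt, t_query)
    exact = np.sin(2 * np.pi * t_query)
    # residual error comes only from truncating the infinite sinc sum
    assert np.max(np.abs(recon - exact)) < 0.1
    ```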

  11. Scientific data interpolation with low dimensional manifold model

    Science.gov (United States)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  12. Scientific data interpolation with low dimensional manifold model

    International Nuclear Information System (INIS)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

    2017-01-01

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  13. Geometry of multihadron production

    Energy Technology Data Exchange (ETDEWEB)

    Bjorken, J.D.

    1994-10-01

    This summary talk only reviews a small sample of topics featured at this symposium: Introduction; The Geometry and Geography of Phase space; Space-Time Geometry and HBT; Multiplicities, Intermittency, Correlations; Disoriented Chiral Condensate; Deep Inelastic Scattering at HERA; and Other Contributions.

  14. Designs and finite geometries

    CERN Document Server

    1996-01-01

    Designs and Finite Geometries brings together in one place important contributions and up-to-date research results in this important area of mathematics. Designs and Finite Geometries serves as an excellent reference, providing insight into some of the most important research issues in the field.

  15. Geometry of multihadron production

    International Nuclear Information System (INIS)

    Bjorken, J.D.

    1994-10-01

    This summary talk only reviews a small sample of topics featured at this symposium: Introduction; The Geometry and Geography of Phase space; Space-Time Geometry and HBT; Multiplicities, Intermittency, Correlations; Disoriented Chiral Condensate; Deep Inelastic Scattering at HERA; and Other Contributions

  16. The Beauty of Geometry

    Science.gov (United States)

    Morris, Barbara H.

    2004-01-01

    This article describes a geometry project that used the beauty of stained-glass-window designs to teach middle school students about geometric figures and concepts. Three honors prealgebra teachers and a middle school mathematics gifted intervention specialist created a geometry project that covered the curriculum and also assessed students'…

  17. A Lorentzian quantum geometry

    Energy Technology Data Exchange (ETDEWEB)

    Grotz, Andreas

    2011-10-07

    In this thesis, a formulation of a Lorentzian quantum geometry based on the framework of causal fermion systems is proposed. After giving the general definition of causal fermion systems, we deduce space-time as a topological space with an underlying causal structure. Restricting attention to systems of spin dimension two, we derive the objects of our quantum geometry: the spin space, the tangent space endowed with a Lorentzian metric, connection and curvature. In order to get the correspondence to classical differential geometry, we construct examples of causal fermion systems by regularizing Dirac sea configurations in Minkowski space and on a globally hyperbolic Lorentzian manifold. When removing the regularization, the objects of our quantum geometry reduce to the common objects of spin geometry on Lorentzian manifolds, up to higher order curvature corrections.

  18. Methods of information geometry

    CERN Document Server

    Amari, Shun-Ichi

    2000-01-01

    Information geometry provides the mathematical sciences with a new framework of analysis. It has emerged from the investigation of the natural differential geometric structure on manifolds of probability distributions, which consists of a Riemannian metric defined by the Fisher information and a one-parameter family of affine connections called the α-connections. The duality between the α-connection and the (−α)-connection together with the metric play an essential role in this geometry. This kind of duality, having emerged from manifolds of probability distributions, is ubiquitous, appearing in a variety of problems which might have no explicit relation to probability theory. Through the duality, it is possible to analyze various fundamental problems in a unified perspective. The first half of this book is devoted to a comprehensive introduction to the mathematical foundation of information geometry, including preliminaries from differential geometry, the geometry of manifolds or probability d...

  19. A Lorentzian quantum geometry

    International Nuclear Information System (INIS)

    Grotz, Andreas

    2011-01-01

    In this thesis, a formulation of a Lorentzian quantum geometry based on the framework of causal fermion systems is proposed. After giving the general definition of causal fermion systems, we deduce space-time as a topological space with an underlying causal structure. Restricting attention to systems of spin dimension two, we derive the objects of our quantum geometry: the spin space, the tangent space endowed with a Lorentzian metric, connection and curvature. In order to get the correspondence to classical differential geometry, we construct examples of causal fermion systems by regularizing Dirac sea configurations in Minkowski space and on a globally hyperbolic Lorentzian manifold. When removing the regularization, the objects of our quantum geometry reduce to the common objects of spin geometry on Lorentzian manifolds, up to higher order curvature corrections.

  20. Favorable noise uniformity properties of Fourier-based interpolation and reconstruction approaches in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2002-01-01

    Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their root in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to that in conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, also leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches

  1. Donor impurity-related linear and nonlinear optical absorption coefficients in GaAs/Ga{sub 1−x}Al{sub x}As concentric double quantum rings: Effects of geometry, hydrostatic pressure, and aluminum concentration

    Energy Technology Data Exchange (ETDEWEB)

    Baghramyan, H.M.; Barseghyan, M.G.; Kirakosyan, A.A. [Department of Solid State Physics, Yerevan State University, Al. Manookian 1, 0025 Yerevan (Armenia); Restrepo, R.L. [Física Teórica y Aplicada, Escuela de Ingeniería de Antioquia, AA 7516, Medellín (Colombia); Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín (Colombia); Mora-Ramos, M.E. [Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín (Colombia); Facultad de Ciencias, Universidad Autónoma del Estado de Morelos, Av. Universidad 1001, CP 62209, Cuernavaca, Morelos (Mexico); Duque, C.A., E-mail: cduque@fisica.udea.edu.co [Grupo de Materia Condensada-UdeA, Instituto de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Antioquia UdeA, Calle 70 No. 52-21, Medellín (Colombia)

    2014-01-15

    The linear and nonlinear optical absorption associated with the transition between 1s and 2s states corresponding to the electron-donor-impurity complex in GaAs/Ga{sub 1−x}Al{sub x}As three-dimensional concentric double quantum rings are investigated. Taking into account the combined effects of hydrostatic pressure and the variation of the aluminum concentration, the energies of the ground and first excited s-like states of a donor impurity in such a system have been calculated using the effective mass approximation and a variational method. The energies of these states and the corresponding threshold energy of the optical transitions are examined as functions of hydrostatic pressure, aluminum concentration, radial impurity position, as well as the geometrical dimensions of the structure. The dependencies of the linear, nonlinear and total optical absorption coefficients as functions of the incident photon energy are investigated for different values of those mentioned parameters. It is found that the influences mentioned above lead to either redshifts or blueshifts of the resonant peaks of the optical absorption spectrum. In particular, the unusual property of the third-order nonlinear coefficient becoming positive for photon energies below the resonant transition energy is discussed. It is shown that this phenomenon is associated with the particular features of the system under study, which determine the values of the electric dipole moment matrix elements. -- Highlights: • Intra-band optical absorption associated with impurity states in double quantum rings. • Combined effects of hydrostatic pressure and aluminum concentration are studied. • The influences mentioned above lead to shifts of resonant peaks. • An unusual property of the third-order nonlinear absorption is discussed.

  2. Donor impurity-related linear and nonlinear optical absorption coefficients in GaAs/Ga1−xAlxAs concentric double quantum rings: Effects of geometry, hydrostatic pressure, and aluminum concentration

    International Nuclear Information System (INIS)

    Baghramyan, H.M.; Barseghyan, M.G.; Kirakosyan, A.A.; Restrepo, R.L.; Mora-Ramos, M.E.; Duque, C.A.

    2014-01-01

    The linear and nonlinear optical absorption associated with the transition between 1s and 2s states corresponding to the electron-donor-impurity complex in GaAs/Ga 1−x Al x As three-dimensional concentric double quantum rings are investigated. Taking into account the combined effects of hydrostatic pressure and the variation of the aluminum concentration, the energies of the ground and first excited s-like states of a donor impurity in such a system have been calculated using the effective mass approximation and a variational method. The energies of these states and the corresponding threshold energy of the optical transitions are examined as functions of hydrostatic pressure, aluminum concentration, radial impurity position, as well as the geometrical dimensions of the structure. The dependencies of the linear, nonlinear and total optical absorption coefficients as functions of the incident photon energy are investigated for different values of those mentioned parameters. It is found that the influences mentioned above lead to either redshifts or blueshifts of the resonant peaks of the optical absorption spectrum. In particular, the unusual property of the third-order nonlinear coefficient becoming positive for photon energies below the resonant transition energy is discussed. It is shown that this phenomenon is associated with the particular features of the system under study, which determine the values of the electric dipole moment matrix elements. -- Highlights: • Intra-band optical absorption associated with impurity states in double quantum rings. • Combined effects of hydrostatic pressure and aluminum concentration are studied. • The influences mentioned above lead to shifts of resonant peaks. • An unusual property of the third-order nonlinear absorption is discussed.

  3. Geometry on the space of geometries

    International Nuclear Information System (INIS)

    Christodoulakis, T.; Zanelli, J.

    1988-06-01

    We discuss the geometric structure of the configuration space of pure gravity. This is an infinite dimensional manifold, M, where each point represents one spatial geometry g ij (x). The metric on M is dictated by geometrodynamics, and from it, the Christoffel symbols and Riemann tensor can be found. A ''free geometry'' tracing a geodesic on the manifold describes the time evolution of space in the strong gravity limit. In a regularization previously introduced by the authors, it is found that M does not have the same dimensionality, D, everywhere, and that D is not a scalar, although it is covariantly constant. In this regularization, it is seen that the path integral measure can be absorbed in a renormalization of the cosmological constant. (author). 19 refs

  4. Penyelesaian Numerik Persamaan Advection Dengan Radial Point Interpolation Method dan Integrasi Waktu Dengan Discontinuous Galerkin Method

    Directory of Open Access Journals (Sweden)

    Kresno Wikan Sadono

    2016-12-01

    Full Text Available Differential equations are widely used to describe a variety of phenomena in science and engineering, and many complex everyday problems can be modeled with differential equations and solved by numerical methods. One class of numerical methods, the meshfree or meshless methods, has been developing recently; it requires no element construction on the domain. This research combines a meshless method, the radial point interpolation method (RPIM), with the discontinuous Galerkin method (DGM) for time integration; the combined method is called RPIM-DGM and is applied to the one-dimensional advection equation. The RPIM uses the multiquadric function (MQ) as its basis function, and the time integration is derived for both linear-DGM and quadratic-DGM. Simulation results show that the method approximates the analytical solution well: the more nodes and the smaller the time increment, the more accurate the numerical result. The results also show that, for a given time increment and number of nodes, numerical integration with quadratic-DGM improves accuracy compared with linear-DGM. [Title: Numerical solution of advection equation with radial basis interpolation method and discontinuous Galerkin method for time integration]

  5. Biased motion vector interpolation for reduced video artifacts.

    NARCIS (Netherlands)

    2011-01-01

    In a video processing system where motion vectors are estimated for a subset of the blocks of data forming a video frame, and motion vectors are interpolated for the remainder of the blocks of the frame, a method includes determining, for at least one block of the current frame for which a

  6. A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...

    African Journals Online (AJOL)

    Interpolation procedures tailored for gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable case with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...

  7. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

  8. Fast interpolation for Global Positioning System (GPS) satellite orbits

    OpenAIRE

    Clynch, James R.; Sagovac, Christopher Patrick; Danielson, D. A. (Donald A.); Neta, Beny

    1995-01-01

    In this report, we discuss and compare several methods for polynomial interpolation of Global Positioning System ephemeris data. We show that the use of difference tables is more efficient than the method currently in use to construct and evaluate the Lagrange polynomials.
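
The difference-table idea that the report weighs against Lagrange evaluation can be sketched with a generic Newton divided-difference scheme. This is my illustration, not the report's code: the table is built once, and each evaluation is then a cheap nested (Horner-like) pass.

```python
# Newton divided-difference interpolation: build the coefficient table once,
# then evaluate the polynomial in O(n) per query point.

def divided_differences(xs, ys):
    """Build Newton divided-difference coefficients from sample points."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the interpolating polynomial at x via nested (Horner) form."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Example: interpolate y = x**2 through three points; exact for quadratics.
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
c = divided_differences(xs, ys)
print(newton_eval(xs, c, 1.5))  # 2.25
```

The same coefficient table serves every evaluation point, which is the efficiency argument over re-forming Lagrange basis polynomials at each query.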

  9. Interpolation in computing science : the semantics of modularization

    NARCIS (Netherlands)

    Renardel de Lavalette, Gerard R.

    2008-01-01

    The Interpolation Theorem, first formulated and proved by W. Craig fifty years ago for predicate logic, has been extended to many other logical frameworks and is being applied in several areas of computer science. We give a short overview, and focus on the theory of software systems and modules. An

  10. Parallel optimization of IDW interpolation algorithm on multicore platform

    Science.gov (United States)

    Guan, Xuefeng; Wu, Huayi

    2009-10-01

    Due to increasing power consumption, heat dissipation, and other physical issues, the architecture of the central processing unit (CPU) has been turning rapidly to multicore in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance, but also presents significant challenges to application developers. In fact, most current GIS algorithms were implemented serially and cannot fully exploit the parallelism potential of such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize current serial GIS algorithms on a multicore platform in order to maximize the performance speedup. With the help of OpenMP, a threading methodology is introduced to split and share the whole interpolation work among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and a good performance speedup is achieved. For example, the performance speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. An additional output comparison between pre-optimization and post-optimization shows that the parallel optimization does not affect the final interpolation result.
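
The approach above (implemented by the authors in C with OpenMP) can be mirrored in outline: IDW weights each sample by an inverse power of its distance, and the set of output points is split among a pool of workers. The following is a minimal Python analogue with illustrative data, not the paper's code; Python threads will not reproduce the C/OpenMP speedup for pure-Python arithmetic, so the sketch only shows how the interpolation work is partitioned.

```python
# IDW: each unknown point gets the distance-weighted mean of the sample
# values; the query points are divided among worker threads.
from concurrent.futures import ThreadPoolExecutor

def idw(samples, query, power=2.0):
    """Inverse Distance Weighted estimate at one query point (x, y)."""
    num = den = 0.0
    for (sx, sy, sval) in samples:
        d2 = (query[0] - sx) ** 2 + (query[1] - sy) ** 2
        if d2 == 0.0:
            return sval  # query coincides with a sample point
        w = 1.0 / d2 ** (power / 2.0)  # w = 1 / distance**power
        num += w * sval
        den += w
    return num / den

def idw_grid(samples, queries, workers=4):
    """Interpolate many points, splitting the work among threads."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda q: idw(samples, q), queries))

samples = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 3.0)]
print(idw_grid(samples, [(0, 0), (0.5, 0.5)]))  # [1.0, 2.0]
```

Because each query point is independent, the partitioning is embarrassingly parallel, which is why the paper's OpenMP `parallel for` split yields near-linear speedup and identical output to the serial version.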

  11. LIP: The Livermore Interpolation Package, Version 1.6

    Energy Technology Data Exchange (ETDEWEB)

    Fritsch, F. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-04

    This report describes LIP, the Livermore Interpolation Package. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since it is a general-purpose package that need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature).

  12. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, which is introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves generating progressively the decoded image by means of an interpolation iterative procedure with a constant parameter. It is well-known that the majority of image details are added at the first steps of iterations in the conventional fractal decoding; hence the constant parameter for the interpolation decoding method must be set as a smaller value in order to achieve a better progressive decoding. However, it needs to take an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve the goal, this paper proposed an interpolation decoding scheme with variable (iteration-dependent) parameters and proved the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme has really achieved the above-mentioned goal

  13. Functional Commutant Lifting and Interpolation on Generalized Analytic Polyhedra

    Czech Academy of Sciences Publication Activity Database

    Ambrozie, Calin-Grigore

    2008-01-01

    Roč. 34, č. 2 (2008), s. 519-543 ISSN 0362-1588 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : intertwining lifting * interpolation * analytic functions Subject RIV: BA - General Mathematics Impact factor: 0.327, year: 2008

  14. Interpolation solution of the single-impurity Anderson model

    International Nuclear Information System (INIS)

    Kuzemsky, A.L.

    1990-10-01

    The dynamical properties of the single-impurity Anderson model (SIAM) is studied using a novel Irreducible Green's Function method (IGF). The new solution for one-particle GF interpolating between the strong and weak correlation limits is obtained. The unified concept of relevant mean-field renormalizations is indispensable for strong correlation limit. (author). 21 refs

  15. Interpolant Tree Automata and their Application in Horn Clause Verification

    Directory of Open Access Journals (Sweden)

    Bishoksan Kafle

    2016-07-01

    Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  16. Two-dimensional interpolation with experimental data smoothing

    International Nuclear Information System (INIS)

    Trejbal, Z.

    1989-01-01

    A method of two-dimensional interpolation with smoothing of statistically deflected points is developed for the processing of magnetic field measurements at the U-120M cyclotron. Mathematical statement of initial requirements and the final result of relevant algebraic transformations are given. 3 refs

  17. Recent developments in free-viewpoint interpolation for 3DTV

    NARCIS (Netherlands)

    Zinger, S.; Do, Q.L.; With, de P.H.N.

    2012-01-01

    Current development of 3D technologies brings 3DTV within reach for the customers. We discuss in this article the recent advancements in free-viewpoint interpolation for 3D video. This technology is still a research topic and many efforts are dedicated to creation, evaluation and improvement of new

  18. Twitch interpolation technique in testing of maximal muscle strength

    DEFF Research Database (Denmark)

    Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B

    1993-01-01

    The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...

  19. Limiting reiteration for real interpolation with slowly varying functions

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Opic, Bohumír; Trebels, W.

    2005-01-01

    Roč. 278, 1-2 (2005), s. 86-107 ISSN 0025-584X R&D Projects: GA ČR(CZ) GA201/01/0333 Institutional research plan: CEZ:AV0Z10190503 Keywords : real interpolation * K-functional * limiting reiteration Subject RIV: BA - General Mathematics Impact factor: 0.465, year: 2005

  20. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
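
The kind of experiment the article describes is easy to reproduce in outline: build an interpolating polynomial for e^x through a few nodes and measure the error against the true function. A small sketch of my own, not the article's worked example:

```python
# Interpolate e^x through three nodes on [0, 1] and inspect the error.
import math

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [0.0, 0.5, 1.0]
ys = [math.exp(x) for x in xs]
approx = lagrange(xs, ys, 0.25)
print(abs(approx - math.exp(0.25)))  # small error between the nodes
```

The error vanishes at the nodes by construction and is largest between them, which is the behaviour of the error function that such an analysis examines.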

  1. Blind Authentication Using Periodic Properties of Interpolation

    Czech Academy of Sciences Publication Activity Database

    Mahdian, Babak; Saic, Stanislav

    2008-01-01

    Roč. 3, č. 3 (2008), s. 529-538 ISSN 1556-6013 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : image forensics * digital forgery * image tampering * interpolation detection * resampling detection Subject RIV: IN - Informatics, Computer Science Impact factor: 2.230, year: 2008

  2. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    Science.gov (United States)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  3. Nuclear data banks generation by interpolation; Generacion de bancos de datos nucleares mediante interpolacion

    Energy Technology Data Exchange (ETDEWEB)

    Castillo M, J A

    1999-07-01

    Nuclear data bank generation is a process that requires a great amount of resources, both computational and human. Considering that at times it is necessary to create many such banks, it is convenient to have a reliable tool that generates them with the fewest resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two approaches were pursued, applying in both cases the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation systems, which were solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the same systems, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when it is necessary to create a great number of data banks.
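
A 16-node bicubic element of the kind the abstract describes corresponds to tensor-product cubic Lagrange interpolation on a 4x4 grid of nodes: interpolate along one variable for each row of nodes, then along the other variable through the row results. The sketch below uses illustrative names and data, not INTPOLBI's.

```python
# Tensor-product bicubic Lagrange interpolation on a 4x4 node "element".

def lagrange1d(xs, ys, x):
    """1-D Lagrange interpolation through the points (xs[i], ys[i])."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def bicubic(xs, ys, values, x, y):
    """Interpolate values[i][j] = f(xs[i], ys[j]) at (x, y): rows, then column."""
    row_vals = [lagrange1d(ys, row, y) for row in values]
    return lagrange1d(xs, row_vals, x)

# 4x4 grid of f(x, y) = x * y, which the bicubic scheme reproduces exactly.
xs = ys = [0.0, 1.0, 2.0, 3.0]
values = [[xi * yj for yj in ys] for xi in xs]
print(bicubic(xs, ys, values, 1.5, 2.5))  # 3.75
```

In the data-bank setting, x and y would play the role of the uranium and gadolinia percentages and f the tabulated nuclear quantity being interpolated.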

  4. GENIE - Generation of computational geometry-grids for internal-external flow configurations

    Science.gov (United States)

    Soni, B. K.

    1988-01-01

    Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by developing strategies to utilize Bézier curves/surfaces and splines along with a weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.

  5. Complex and symplectic geometry

    CERN Document Server

    Medori, Costantino; Tomassini, Adriano

    2017-01-01

    This book arises from the INdAM Meeting "Complex and Symplectic Geometry", which was held in Cortona in June 2016. Several leading specialists, including young researchers, in the field of complex and symplectic geometry, present the state of the art of their research on topics such as the cohomology of complex manifolds; analytic techniques in Kähler and non-Kähler geometry; almost-complex and symplectic structures; special structures on complex manifolds; and deformations of complex objects. The work is intended for researchers in these areas.

  6. Non-Euclidean geometry

    CERN Document Server

    Kulczycki, Stefan

    2008-01-01

    This accessible approach features two varieties of proofs: stereometric and planimetric, as well as elementary proofs that employ only the simplest properties of the plane. A short history of geometry precedes a systematic exposition of the principles of non-Euclidean geometry.Starting with fundamental assumptions, the author examines the theorems of Hjelmslev, mapping a plane into a circle, the angle of parallelism and area of a polygon, regular polygons, straight lines and planes in space, and the horosphere. Further development of the theory covers hyperbolic functions, the geometry of suff

  7. Solution to the Diffusion equation for multi groups in X Y geometry using Linear Perturbation theory; Solucion a la Ecuacion de Difusion para multigrupos en geometria XY utilizando teoria de perturbacion lineal

    Energy Technology Data Exchange (ETDEWEB)

    Mugica R, C.A. [IPN, ESFM, Depto. de Ingenieria Nuclear, 07738 Mexico D.F. (Mexico)

    2004-07-01

    Diverse methods exist to numerically solve the neutron diffusion equation for several energy groups in the stationary state, among which the finite element methods stand out. In this work the numerical solution of this equation is presented using Raviart-Thomas nodal methods of the finite element type, RT0 and RT1, in combination with iterative techniques that allow the approximate solution to be obtained quickly. Nevertheless, the precision of a method is intimately bound to the dimension of the approximation space per cell, 5 in the RT0 case and 12 for RT1, and/or to the mesh refinement, which makes the order of the eigenvalue problem to be solved grow considerably. Therefore, when an acceptable approximation to the effective multiplication factor of the system is wanted after it has undergone a small perturbation, one can appeal to linear perturbation theory, with which it is possible to determine it starting from the neutron flux and the effective multiplication factor of the unperturbed case. Results are presented for a reference problem in which a perturbation is introduced in an assembly that simulates changes in the control bar. (Author)

  8. Linear algebra

    CERN Document Server

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  9. Leak Isolation in Pressurized Pipelines using an Interpolation Function to approximate the Fitting Losses

    Science.gov (United States)

    Badillo-Olvera, A.; Begovich, O.; Peréz-González, A.

    2017-01-01

    The present paper is motivated by the detection and isolation of a single leak under the Fault Model Approach (FMA), focused on pipelines with changes in their geometry. These changes generate a pressure drop different from that produced by friction, a common scenario in real pipeline systems. The problem arises because the dynamical model of the fluid in a pipeline only considers straight geometries without fittings. In order to address this situation, several papers work with a virtual model of the pipeline that generates an equivalent straight length, so that the friction produced by the fittings is taken into account. However, when this method is applied, the leak is isolated at a virtual length, which for practical reasons does not represent a complete solution. As a solution to the problem of leak isolation at a virtual length, this research proposes the use of a polynomial interpolation function to approximate the conversion of the virtual position to a real-coordinate value. Experimental results on a real prototype are shown, concluding that the proposed methodology performs well.
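
The virtual-to-real conversion can be sketched as follows. The calibration pairs and names here are hypothetical, not the paper's: a few known (virtual length, real position) pairs define a polynomial, and any leak position isolated in the virtual pipeline is then mapped through it to a real coordinate.

```python
# Map a leak position in the "virtual" equivalent-straight-length pipeline
# to the real pipeline coordinate via a polynomial through calibration pairs.

def interp_poly(pairs, v):
    """Evaluate the Lagrange polynomial through (virtual, real) pairs at v."""
    total = 0.0
    for i, (vi, ri) in enumerate(pairs):
        term = ri
        for j, (vj, _) in enumerate(pairs):
            if j != i:
                term *= (v - vj) / (vi - vj)
        total += term
    return total

# Hypothetical calibration: virtual lengths exceed real ones because the
# fittings contribute extra equivalent length.
calibration = [(0.0, 0.0), (60.0, 50.0), (130.0, 100.0)]
print(interp_poly(calibration, 95.0))  # real position for a virtual 95 m leak
```

At the calibration points the mapping is exact by construction; between them the polynomial approximates the fitting-induced distortion that a straight-pipe model cannot represent.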

  10. Lectures on coarse geometry

    CERN Document Server

    Roe, John

    2003-01-01

    Coarse geometry is the study of spaces (particularly metric spaces) from a 'large scale' point of view, so that two spaces that look the same from a great distance are actually equivalent. This point of view is effective because it is often true that the relevant geometric properties of metric spaces are determined by their coarse geometry. Two examples of important uses of coarse geometry are Gromov's beautiful notion of a hyperbolic group and Mostow's proof of his famous rigidity theorem. The first few chapters of the book provide a general perspective on coarse structures. Even when only metric coarse structures are in view, the abstract framework brings the same simplification as does the passage from epsilons and deltas to open sets when speaking of continuity. The middle section reviews notions of negative curvature and rigidity. Modern interest in large scale geometry derives in large part from Mostow's rigidity theorem and from Gromov's subsequent 'large scale' rendition of the crucial properties of n...

  11. Lectures on Symplectic Geometry

    CERN Document Server

    Silva, Ana Cannas

    2001-01-01

    The goal of these notes is to provide a fast introduction to symplectic geometry for graduate students with some knowledge of differential geometry, de Rham theory and classical Lie groups. This text addresses symplectomorphisms, local forms, contact manifolds, compatible almost complex structures, Kaehler manifolds, hamiltonian mechanics, moment maps, symplectic reduction and symplectic toric manifolds. It contains guided problems, called homework, designed to complement the exposition or extend the reader's understanding. There are by now excellent references on symplectic geometry, a subset of which is in the bibliography of this book. However, the most efficient introduction to a subject is often a short elementary treatment, and these notes attempt to serve that purpose. This text provides a taste of areas of current research and will prepare the reader to explore recent papers and extensive books on symplectic geometry where the pace is much faster. For this reprint numerous corrections and cl...

  12. Complex algebraic geometry

    CERN Document Server

    Kollár, János

    1997-01-01

    This volume contains the lectures presented at the third Regional Geometry Institute at Park City in 1993. The lectures provide an introduction to the subject, complex algebraic geometry, making the book suitable as a text for second- and third-year graduate students. The book deals with topics in algebraic geometry where one can reach the level of current research while starting with the basics. Topics covered include the theory of surfaces from the viewpoint of recent higher-dimensional developments, providing an excellent introduction to more advanced topics such as the minimal model program. Also included is an introduction to Hodge theory and intersection homology based on the simple topological ideas of Lefschetz and an overview of the recent interactions between algebraic geometry and theoretical physics, which involve mirror symmetry and string theory.

  13. Geometry and Combinatorics

    DEFF Research Database (Denmark)

    Kokkendorff, Simon Lyngby

    2002-01-01

    The subject of this Ph.D.-thesis is somewhere in between continuous and discrete geometry. Chapter 2 treats the geometry of finite point sets in semi-Riemannian hyperquadrics,using a matrix whose entries are a trigonometric function of relative distances in a given point set. The distance...... to the geometry of a simplex in a semi-Riemannian hyperquadric. In chapter 3 we study which finite metric spaces that are realizable in a hyperbolic space in the limit where curvature goes to -∞. We show that such spaces are the so called leaf spaces, the set of degree 1 vertices of weighted trees. We also...... establish results on the limiting geometry of such an isometrically realized leaf space simplex in hyperbolic space, when curvature goes to -∞. Chapter 4 discusses negative type of metric spaces. We give a measure theoretic treatment of this concept and related invariants. The theory developed...

  14. The geometry of geodesics

    CERN Document Server

    Busemann, Herbert

    2005-01-01

    A comprehensive approach to qualitative problems in intrinsic differential geometry, this text examines Desarguesian spaces, perpendiculars and parallels, covering spaces, the influence of the sign of the curvature on geodesics, more. 1955 edition. Includes 66 figures.

  15. Geometry and billiards

    CERN Document Server

    Tabachnikov, Serge

    2005-01-01

    Mathematical billiards describe the motion of a mass point in a domain with elastic reflections off the boundary or, equivalently, the behavior of rays of light in a domain with ideally reflecting boundary. From the point of view of differential geometry, the billiard flow is the geodesic flow on a manifold with boundary. This book is devoted to billiards in their relation with differential geometry, classical mechanics, and geometrical optics. The topics covered include variational principles of billiard motion, symplectic geometry of rays of light and integral geometry, existence and nonexistence of caustics, optical properties of conics and quadrics and completely integrable billiards, periodic billiard trajectories, polygonal billiards, mechanisms of chaos in billiard dynamics, and the lesser-known subject of dual (or outer) billiards. The book is based on an advanced undergraduate topics course (but contains more material than can be realistically taught in one semester). Although the minimum prerequisit...

  16. Rudiments of algebraic geometry

    CERN Document Server

    Jenner, WE

    2017-01-01

    Aimed at advanced undergraduate students of mathematics, this concise text covers the basics of algebraic geometry. Topics include affine spaces, projective spaces, rational curves, algebraic sets with group structure, more. 1963 edition.

  17. Implosions and hypertoric geometry

    DEFF Research Database (Denmark)

    Dancer, A.; Kirwan, F.; Swann, A.

    2013-01-01

    The geometry of the universal hyperkahler implosion for SU (n) is explored. In particular, we show that the universal hyperkahler implosion naturally contains a hypertoric variety described in terms of quivers. Furthermore, we discuss a gauge theoretic approach to hyperkahler implosion.

  18. d-geometries revisited

    CERN Document Server

    Ceresole, Anna; Gnecchi, Alessandra; Marrani, Alessio

    2013-01-01

    We analyze some properties of the four dimensional supergravity theories which originate from five dimensions upon reduction. They generalize to N>2 extended supersymmetries the d-geometries with cubic prepotentials, familiar from N=2 special Kähler geometry. We emphasize the role of a suitable parametrization of the scalar fields and the corresponding triangular symplectic basis. We also consider applications to the first order flow equations for non-BPS extremal black holes.

  19. High-Dimensional Intrinsic Interpolation Using Gaussian Process Regression and Diffusion Maps

    International Nuclear Information System (INIS)

    Thimmisetty, Charanraj A.; Ghanem, Roger G.; White, Joshua A.; Chen, Xiao

    2017-01-01

    This article considers the challenging task of estimating geologic properties of interest using a suite of proxy measurements. The current work recasts this task as a manifold learning problem. In the process, it introduces a novel regression procedure for intrinsic variables constrained onto a manifold embedded in an ambient space. The procedure is meant to sharpen high-dimensional interpolation by inferring non-linear correlations from the data being interpolated. The proposed approach augments manifold learning procedures with a Gaussian process regression. It first identifies, using diffusion maps, a low-dimensional manifold embedded in an ambient high-dimensional space associated with the data. It relies on the diffusion distance associated with this construction to define a distance function with which the data model is equipped. This distance function is then used to compute the correlation structure of a Gaussian process that describes the statistical dependence of quantities of interest in the high-dimensional ambient space. The proposed method is applicable to arbitrarily high-dimensional data sets. Here, it is applied to subsurface characterization using a suite of well log measurements. The predictions obtained in original, principal component, and diffusion space are compared using both qualitative and quantitative metrics. Considerable improvement in the prediction of the geological structural properties is observed with the proposed method.
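
    The first step of the approach described above, embedding the data with diffusion maps, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the kernel bandwidth `eps` and the choice of two embedding coordinates are assumptions.

```python
import numpy as np

def diffusion_map(X, eps, n_components=2):
    """Minimal diffusion-map embedding of the rows of X.

    eps is the Gaussian kernel bandwidth (a free parameter here).
    Euclidean distance between rows of the returned array approximates
    the diffusion distance used to equip the data model.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-d2 / eps)                    # Gaussian affinity matrix
    P = K / K.sum(axis=1, keepdims=True)     # row-stochastic Markov matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector (eigenvalue 1) and scale
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]
```

    A Gaussian process would then be fitted using distances computed in this embedding rather than in the ambient space.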

  20. Interpolation approach to Hamiltonian-varying quantum systems and the adiabatic theorem

    International Nuclear Information System (INIS)

    Pan, Yu; James, Matthew R.; Miao, Zibo; Amini, Nina H.; Ugrinovskii, Valery

    2015-01-01

    Quantum control could be implemented by varying the system Hamiltonian. According to the adiabatic theorem, a slowly changing Hamiltonian can approximately keep the system in the ground state during the evolution if the initial state is a ground state. In this paper we consider this process as an interpolation between the initial and final Hamiltonians. We use the mean value of a single operator to measure the distance between the final state and the ideal ground state. This measure resembles the excitation energy or excess work performed in thermodynamics, which can be taken as the error of the adiabatic approximation. We prove that under certain conditions, this error can be estimated for an arbitrarily given interpolating function. This error estimation could be used as a guideline to induce adiabatic evolution. According to our calculation, the adiabatic approximation error is not linearly proportional to the average speed of the variation of the system Hamiltonian and the inverse of the energy gaps in many cases. In particular, we apply this analysis to an example in which the applicability of the adiabatic theorem is questionable. (orig.)

  1. Sinusoidal Parameter Estimation Using Quadratic Interpolation around Power-Scaled Magnitude Spectrum Peaks

    Directory of Open Access Journals (Sweden)

    Kurt James Werner

    2016-10-01

    Full Text Available The magnitude of the Discrete Fourier Transform (DFT) of a discrete-time signal has a limited frequency definition. Quadratic interpolation over the three DFT samples surrounding magnitude peaks improves the estimation of parameters (frequency and amplitude) of resolved sinusoids beyond that limit. Interpolating on a rescaled magnitude spectrum using a logarithmic scale has been shown to improve those estimates. In this article, we show how to heuristically tune a power scaling parameter to outperform linear and logarithmic scaling at an equivalent computational cost. Although this power scaling factor is computed heuristically rather than analytically, it is shown to depend in a structured way on window parameters. Invariance properties of this family of estimators are studied and the existence of a bias due to noise is shown. Comparing to two state-of-the-art estimators, we show that an optimized power scaling has a lower systematic bias and lower mean-squared error in noisy conditions for ten out of twelve common windowing functions.
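
    The power-scaled three-sample quadratic interpolation the abstract builds on can be sketched as follows. This is a generic illustration rather than the authors' tuned estimator: the exponent `p = 0.23` is an assumed example value for a Hann window, where the article derives the exponent heuristically per window.

```python
import numpy as np

def qifft_peak(x, p=0.23):
    """Sinusoid frequency/amplitude estimate via quadratic interpolation
    over a power-scaled DFT magnitude peak (Hann-windowed)."""
    n = len(x)
    w = np.hanning(n)
    mag = np.abs(np.fft.rfft(x * w))
    k = int(np.argmax(mag))                         # coarse peak bin
    a, b, c = mag[k - 1] ** p, mag[k] ** p, mag[k + 1] ** p
    delta = 0.5 * (a - c) / (a - 2 * b + c)         # fractional bin offset (parabola vertex)
    peak = (b - 0.25 * (a - c) * delta) ** (1 / p)  # vertex height, power scaling undone
    freq = (k + delta) / n                          # cycles per sample
    amp = 2 * peak / w.sum()                        # undo the window's coherent gain
    return freq, amp
```

    With a 4096-sample Hann window this recovers the frequency of an off-bin sinusoid to a small fraction of a DFT bin.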

  2. CMS geometry through 2020

    International Nuclear Information System (INIS)

    Osborne, I; Brownson, E; Eulisse, G; Jones, C D; Sexton-Kennedy, E; Lange, D J

    2014-01-01

    CMS faces real challenges with the upgrade of the CMS detector through 2020 and beyond. One of the challenges, from the software point of view, is managing upgrade simulations with the same software release as the 2013 scenario. We present the CMS geometry description software model and its integration with the CMS event setup and core software. The CMS geometry configuration and selection are implemented in Python. The tools collect the Python configuration fragments into a script used in the CMS workflow. This flexible and automated geometry configuration allows choosing either the transient or the persistent version of a scenario, as well as a specific version of the same scenario. We describe how the geometries are integrated and validated, and how we define and handle different geometry scenarios in simulation and reconstruction. We discuss how to transparently manage multiple incompatible geometries in the same software release. Several examples are shown based on the current implementation, assuring a consistent choice of scenario conditions. The consequences and implications for multiple/different code algorithms are discussed.

  3. Software Geometry in Simulations

    Science.gov (United States)

    Alion, Tyler; Viren, Brett; Junk, Tom

    2015-04-01

    The Long Baseline Neutrino Experiment (LBNE) involves many detectors. The experiment's near detector (ND) facility may ultimately involve several detectors. The far detector (FD) will be significantly larger than any other Liquid Argon (LAr) detector yet constructed; many prototype detectors are being constructed and studied to motivate a plethora of proposed FD designs. Whether it be a constructed prototype or a proposed ND/FD design, every design must be simulated and analyzed. This presents a considerable challenge to LBNE software experts; each detector geometry must be described to the simulation software in an efficient way which allows multiple authors to easily collaborate. Furthermore, different geometry versions must be tracked throughout their use. We present a framework called General Geometry Description (GGD), written and developed by LBNE software collaborators for managing software to generate geometries. Though GGD is flexible enough to be used by any experiment working with detectors, we present its first use in generating Geometry Description Markup Language (GDML) files to interface with LArSoft, a framework of detector simulations, event reconstruction, and data analyses written for all LAr technology users at Fermilab. Brett is the author of the framework discussed here, the General Geometry Description (GGD).

  4. Introduction to combinatorial geometry

    International Nuclear Information System (INIS)

    Gabriel, T.A.; Emmett, M.B.

    1985-01-01

    The combinatorial geometry package as used in many three-dimensional multimedia Monte Carlo radiation transport codes, such as HETC, MORSE, and EGS, is becoming the preferred way to describe simple and complicated systems. Just about any system can be modeled using the package with relatively few input statements. This can be contrasted with the older style geometry packages, in which the number of required input statements could be large even for relatively simple systems. However, with advancements come some difficulties. The users of combinatorial geometry must be able to visualize more, and, in some instances, all of the system at a time. Errors can be introduced into the modeling which, though slight and at times hard to detect, can have devastating effects on the calculated results. As with all modeling packages, the best way to learn combinatorial geometry is to use it, first on a simple system and then on more complicated systems. The basic technique for the description of the geometry consists of defining the location and shape of the various zones in terms of the intersections and unions of geometric bodies. The geometric bodies which are generally included in most combinatorial geometry packages are: (1) box, (2) right parallelepiped, (3) sphere, (4) right circular cylinder, (5) right elliptic cylinder, (6) ellipsoid, (7) truncated right cone, (8) right angle wedge, and (9) arbitrary polyhedron. The data necessary to describe each of these bodies are given. As can be easily noted, there are some subsets included for simplicity.

  5. An algorithm for treating flat areas and depressions in digital elevation models using linear interpolation

    Science.gov (United States)

    F. Pan; M. Stieglitz; R.B. McKane

    2012-01-01

    Digital elevation model (DEM) data are essential to hydrological applications and have been widely used to calculate a variety of useful topographic characteristics, e.g., slope, flow direction, flow accumulation area, stream channel network, topographic index, and others. Except for slope, none of the other topographic characteristics can be calculated until the flow...

  6. Generalized geometry and partial supersymmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Triendl, Hagen Mathias

    2010-08-15

    This thesis consists of two parts. In the first part we use the formalism of (exceptional) generalized geometry to derive the scalar field space of SU(2) x SU(2)-structure compactifications. We show that in contrast to SU(3) x SU(3) structures, there is no dynamical SU(2) x SU(2) structure interpolating between an SU(2) structure and an identity structure. Furthermore, we derive the scalar manifold of the low-energy effective action for consistent Kaluza-Klein truncations as expected from N = 4 supergravity. In the second part we then determine the general conditions for the existence of stable Minkowski and AdS N = 1 vacua in spontaneously broken gauged N = 2 supergravities and construct the general solution under the assumption that two appropriate commuting isometries exist in the hypermultiplet sector. Furthermore, we derive the low-energy effective action below the scale of partial supersymmetry breaking and show that it satisfies the constraints of N = 1 supergravity. We then apply the discussion to special quaternionic-Kaehler geometries which appear in the low-energy limit of SU(3) x SU(3)-structure compactifications and construct Killing vectors with the right properties. Finally we discuss the string theory realizations for these solutions. (orig.)

  7. Generalized geometry and partial supersymmetry breaking

    International Nuclear Information System (INIS)

    Triendl, Hagen Mathias

    2010-08-01

    This thesis consists of two parts. In the first part we use the formalism of (exceptional) generalized geometry to derive the scalar field space of SU(2) x SU(2)-structure compactifications. We show that in contrast to SU(3) x SU(3) structures, there is no dynamical SU(2) x SU(2) structure interpolating between an SU(2) structure and an identity structure. Furthermore, we derive the scalar manifold of the low-energy effective action for consistent Kaluza-Klein truncations as expected from N = 4 supergravity. In the second part we then determine the general conditions for the existence of stable Minkowski and AdS N = 1 vacua in spontaneously broken gauged N = 2 supergravities and construct the general solution under the assumption that two appropriate commuting isometries exist in the hypermultiplet sector. Furthermore, we derive the low-energy effective action below the scale of partial supersymmetry breaking and show that it satisfies the constraints of N = 1 supergravity. We then apply the discussion to special quaternionic-Kaehler geometries which appear in the low-energy limit of SU(3) x SU(3)-structure compactifications and construct Killing vectors with the right properties. Finally we discuss the string theory realizations for these solutions. (orig.)

  8. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin; Weidendorfer, Josef

    2012-01-01

    bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation

  9. Global aspects of complex geometry

    CERN Document Server

    Catanese, Fabrizio; Huckleberry, Alan T

    2006-01-01

    This book presents an overview of developments in Complex Geometry, covering topics that range from curve and surface theory through special varieties in higher dimensions, moduli theory, Kähler geometry, and group actions to Hodge theory and characteristic p-geometry.

  10. Effect of interpolation on parameters extracted from seating interface pressure arrays

    OpenAIRE

    Michael Wininger, PhD; Barbara Crane, PhD, PT

    2015-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...

  11. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach to the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method, which combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

  12. Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.

    Science.gov (United States)

    Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott

    2009-05-25

    Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
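
    The intensity-plane interpolation step described here can be sketched for a standard 2 x 2 microgrid: each polarizer orientation is bilinearly upsampled to full resolution before the Stokes parameters are estimated. The [[0°, 45°], [135°, 90°]] superpixel layout and the separable linear upsampling are assumptions for illustration; the article compares several such bilinear strategies.

```python
import numpy as np

def demosaic_channel(img, offset):
    """Bilinearly interpolate one polarizer orientation, sampled on a
    2x2-decimated grid starting at `offset` (row, col), back to full size."""
    h, w = img.shape
    rows = np.arange(offset[0], h, 2)
    cols = np.arange(offset[1], w, 2)
    sub = img[np.ix_(rows, cols)]          # known samples of this orientation
    # separable bilinear upsampling: columns first, then rows
    tmp = np.array([np.interp(np.arange(w), cols, sub[i]) for i in range(len(rows))])
    return np.array([np.interp(np.arange(h), rows, tmp[:, j]) for j in range(w)]).T

def stokes_from_microgrid(raw):
    """Estimate S0, S1, S2 from a microgrid frame, assuming the
    [[0, 45], [135, 90]] degree superpixel layout."""
    i0   = demosaic_channel(raw, (0, 0))
    i45  = demosaic_channel(raw, (0, 1))
    i135 = demosaic_channel(raw, (1, 0))
    i90  = demosaic_channel(raw, (1, 1))
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # each polarizer passes S0/2 of unpolarized light
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2
```

    Interpolating each channel to a common grid in this way reduces the IFOV mismatch between the measurements that are differenced to form S1 and S2.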

  13. Bi-local baryon interpolating fields with two flavors

    Energy Technology Data Exchange (ETDEWEB)

    Dmitrasinovic, V. [Belgrade University, Institute of Physics, Pregrevica 118, Zemun, P.O. Box 57, Beograd (RS); Chen, Hua-Xing [Institutos de Investigacion de Paterna, Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Valencia (Spain); Peking University, Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)

    2011-02-15

    We construct bi-local interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We use the restrictions following from the Pauli principle to derive relations/identities among the baryon operators with identical quantum numbers. Such relations that follow from the combined spatial, Dirac, color, and isospin Fierz transformations may be called the (total/complete) Fierz identities. These relations reduce the number of independent baryon operators with any given spin and isospin. We also study the Abelian and non-Abelian chiral transformation properties of these fields and place them into baryon chiral multiplets. Thus we derive the independent baryon interpolating fields with given values of spin (Lorentz group representation), chiral symmetry (U{sub L}(2) x U{sub R}(2) group representation) and isospin appropriate for the first angular excited states of the nucleon. (orig.)

  14. Kriging for interpolation of sparse and irregularly distributed geologic data

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, K.

    1986-12-31

    For many geologic problems, subsurface observations are available only from a small number of irregularly distributed locations, for example from a handful of drill holes in the region of interest. These observations will be interpolated one way or another, for example by hand-drawn stratigraphic cross-sections, by trend-fitting techniques, or by simple averaging which ignores spatial correlation. In this paper we consider an interpolation technique for such situations which provides, in addition to point estimates, the error estimates which are lacking from other ad hoc methods. The proposed estimator is like a kriging estimator in form, but because direct estimation of the spatial covariance function is not possible, the parameters of the estimator are selected by cross-validation. Its use in estimating subsurface stratigraphy at a candidate site for a geologic waste repository provides an example.
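
    An estimator of the kriging form the abstract refers to can be sketched as follows. The Gaussian covariance model and its parameters here are placeholder assumptions; in the paper's setting those parameters would be selected by cross-validation rather than fixed in advance.

```python
import numpy as np

def ordinary_kriging(xy, z, xy_new, range_=1.0, sill=1.0, nugget=1e-10):
    """Ordinary kriging with an assumed Gaussian covariance model.
    Returns point estimates and kriging standard errors at xy_new."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-(d / range_) ** 2)
    n = len(xy)
    # augmented system enforcing the unbiasedness constraint (weights sum to 1)
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = cov(xy, xy) + nugget * np.eye(n)
    K[n, :n] = K[:n, n] = 1.0
    K[n, n] = 0.0
    est, se = [], []
    for p in np.atleast_2d(xy_new):
        k = np.append(cov(xy, p[None, :]).ravel(), 1.0)
        w = np.linalg.solve(K, k)
        est.append(w[:n] @ z)
        var = sill - w @ k              # kriging variance (includes Lagrange term)
        se.append(np.sqrt(max(var, 0.0)))
    return np.array(est), np.array(se)
```

    The standard error is small near the data and grows toward the covariance sill far from them, which is the per-point error estimate the abstract contrasts with ad hoc methods.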

  15. The interpolation method of stochastic functions and the stochastic variational principle

    International Nuclear Information System (INIS)

    Liu Xianbin; Chen Qiu

    1993-01-01

    Uncertainties have been attracting increasing attention in modern engineering structural design. Viewed on an appropriate scale, the inherent physical attributes (material properties) of many structural systems always exhibit some patterns of random variation in space and time; generally, the random variation shows a small parameter fluctuation. For a linear mechanical system, the random variation is modeled as a random variation of a linear partial differential operator and, in the stochastic finite element method, as a random variation of a stiffness matrix. Besides the stochasticity of the structural physical properties, the influence of random loads, which always present themselves as random boundary conditions, brings about much more complexity in structural analysis. Now the stochastic finite element method, or the probabilistic finite element method, is used to study structural systems with random physical parameters, whether or not the loads are random. Differing from general finite element theory, the main difficulty which the stochastic finite element method faces is the inverse operation of stochastic operators and stochastic matrices, since the inverse operators and inverse matrices are statistically correlated to the random parameters and random loads. So far, many efforts have been made to obtain reasonably approximate expressions of the inverse operators and inverse matrices, such as the Perturbation Method, the Neumann Expansion Method, the Galerkin Method (in appropriate Hilbert spaces defined for random functions), and the Orthogonal Expansion Method. Among these methods, the Perturbation Method appears to be the most readily applicable. The advantage of these methods is that fairly accurate response statistics can be obtained under the condition of finite information on the input. However, the second-order statistics obtained by use of the Perturbation Method and the Neumann Expansion Method are not always the appropriate ones, because the relevant second

  16. Sources of hyperbolic geometry

    CERN Document Server

    Stillwell, John

    1996-01-01

    This book presents, for the first time in English, the papers of Beltrami, Klein, and Poincaré that brought hyperbolic geometry into the mainstream of mathematics. A recognition of Beltrami comparable to that given the pioneering works of Bolyai and Lobachevsky seems long overdue, not only because Beltrami rescued hyperbolic geometry from oblivion by proving it to be logically consistent, but because he gave it a concrete meaning (a model) that made hyperbolic geometry part of ordinary mathematics. The models subsequently discovered by Klein and Poincaré brought hyperbolic geometry even further down to earth and paved the way for the current explosion of activity in low-dimensional geometry and topology. By placing the works of these three mathematicians side by side and providing commentaries, this book gives the student, historian, or professional geometer a bird's-eye view of one of the great episodes in mathematics. The unified setting and historical context reveal the insights of Beltrami, Klein, and Po...

  17. Generalizing optical geometry

    International Nuclear Information System (INIS)

    Jonsson, Rickard; Westman, Hans

    2006-01-01

    We show that by employing the standard projected curvature as a measure of spatial curvature, we can make a certain generalization of optical geometry (Abramowicz M A and Lasota J-P 1997 Class. Quantum Grav. A 14 23-30). This generalization applies to any spacetime that admits a hypersurface orthogonal shearfree congruence of worldlines. This is a somewhat larger class of spacetimes than the conformally static spacetimes assumed in standard optical geometry. In the generalized optical geometry, which in the generic case is time dependent, photons move with unit speed along spatial geodesics and the sideways force experienced by a particle following a spatially straight line is independent of the velocity. Also gyroscopes moving along spatial geodesics do not precess (relative to the forward direction). Gyroscopes that follow a curved spatial trajectory precess according to a very simple law of three-rotation. We also present an inertial force formalism in coordinate representation for this generalization. Furthermore, we show that by employing a new sense of spatial curvature (Jonsson R 2006 Class. Quantum Grav. 23 1) closely connected to Fermat's principle, we can make a more extensive generalization of optical geometry that applies to arbitrary spacetimes. In general this optical geometry will be time dependent, but still geodesic photons move with unit speed and follow lines that are spatially straight in the new sense. Also, the sideways (comoving) force experienced by a test particle following a line that is straight in the new sense will be independent of the velocity.

  18. The modal surface interpolation method for damage localization

    Science.gov (United States)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has previously been proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to an unknown input(s) or if the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. We report a comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes).
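
    The smoothness-based damage feature at the heart of the IM can be illustrated with a leave-one-out interpolation error on a single mode shape. This is a simplified sketch of the idea, not the published algorithm (which interpolates the full operational or modal shapes with splines): each sensor value is predicted from its neighbors, and a localized loss of smoothness shows up as a spike in the residual.

```python
import numpy as np

def interpolation_error(x, shape):
    """Leave-one-out interpolation error along a measured shape.

    At each interior point the value is predicted by a cubic fitted to
    its two neighbours on either side; damage-induced loss of smoothness
    appears as a spike in the residual at the damaged location."""
    err = np.zeros_like(shape, dtype=float)
    for i in range(2, len(x) - 2):
        idx = [i - 2, i - 1, i + 1, i + 2]        # leave point i out
        coef = np.polyfit(x[idx], shape[idx], 3)  # exact cubic through 4 points
        err[i] = abs(shape[i] - np.polyval(coef, x[i]))
    return err
```

    For a smooth (healthy) mode shape the residual stays near the interpolation error floor; a local stiffness change that kinks the shape at one sensor dominates the feature there.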

  19. Strip interpolation in silicon and germanium strip detectors

    International Nuclear Information System (INIS)

    Wulf, E. A.; Phlips, B. F.; Johnson, W. N.; Kurfess, J. D.; Lister, C. J.; Kondev, F.; Physics; Naval Research Lab.

    2004-01-01

    The position resolution of double-sided strip detectors is limited by the strip pitch, and a reduction in strip pitch necessitates more electronics. Improved position resolution would improve the imaging capabilities of Compton telescopes and PET detectors. Digitizing the preamplifier waveform yields more information than can be extracted with regular shaping electronics. In addition to the energy, depth of interaction, and which strip was hit, the digitized preamplifier signals can locate the interaction position to less than the strip pitch of the detector by looking at induced signals in neighboring strips. This allows the position of the interaction to be interpolated in three dimensions and improves the imaging capabilities of the system. In a 2 mm thick silicon strip detector with a strip pitch of 0.891 mm, strip interpolation located the interaction of 356 keV gamma rays to 0.3 mm FWHM. In a 2 cm thick germanium detector with a strip pitch of 5 mm, strip interpolation of 356 keV gamma rays yielded a position resolution of 1.5 mm FWHM.

  20. Importance of interpolation and coincidence errors in data fusion

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2018-02-01

    Full Text Available The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  1. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.

    2014-06-11

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1 km² grid covering the whole region. © 2014 Royal Meteorological Society.
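
    As a minimal point of reference for the interpolation task (not one of the three spatiotemporal models in the study), a plain inverse-distance-weighted spatial estimate looks like this; gauge coordinates and rainfall values are hypothetical:

```python
import numpy as np

def idw(obs_xy, obs_vals, target_xy, power=2.0):
    """Inverse-distance-weighted rainfall estimate at an unobserved
    location -- a purely spatial baseline; spatiotemporal models
    additionally exploit the temporal dependence that plain
    spatial interpolation ignores."""
    d = np.linalg.norm(obs_xy - target_xy, axis=1)
    if np.any(d == 0.0):                 # target coincides with a gauge
        return float(obs_vals[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * obs_vals) / np.sum(w))

# Four hypothetical gauges (km coordinates); the target location is
# equidistant from all of them, so the estimate is the plain mean.
gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([4.0, 6.0, 8.0, 2.0])        # daily totals, mm
est = idw(gauges, rain, np.array([5.0, 5.0]))
```

    IDW estimates are always convex combinations of the observations, so they can never exceed the observed range; that is one reason model-based interpolation is preferred when extremes matter.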

  2. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.; Ugarte, M. D.; Goicoa, T.; Genton, Marc G.

    2014-01-01

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1 km² grid covering the whole region. © 2014 Royal Meteorological Society.

  3. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    International Nuclear Information System (INIS)

    Buzzard, Gregery T.

    2012-01-01

    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.
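
    For contrast with the sparse-grid gPC approach, the same first-order (variance-based) Sobol' indices can be estimated by brute-force Monte Carlo with a pick-freeze scheme. The test function and sample size below are illustrative assumptions; the point of the gPC route is precisely to avoid this many function evaluations:

```python
import numpy as np

def first_order_sobol(f, dim, n=100_000, seed=0):
    """Saltelli-style pick-freeze Monte Carlo estimate of the
    first-order Sobol' indices S_i = Var(E[f|x_i]) / Var(f) over
    uniform inputs on [-1, 1]^dim."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-1.0, 1.0, (n, dim))
    B = rng.uniform(-1.0, 1.0, (n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # replace only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Additive test function: x1 contributes 80% of the variance
# (Var(x1) = 1/3, Var(0.5*x2) = 1/12).
S = first_order_sobol(lambda x: x[:, 0] + 0.5 * x[:, 1], dim=2)
```

    For this additive function the exact indices are S = (0.8, 0.2), which the estimator recovers to within Monte Carlo noise.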

  4. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI), which improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These choices are made using a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI to a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI also achieves state-of-the-art performance for the task of multispectral image demosaicking.

  5. On removing interpolation and resampling artifacts in rigid image registration.

    Science.gov (United States)

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  6. Modelo digital do terreno através de diferentes interpolações do programa Surfer 12 | Digital terrain model through different interpolations in the surfer 12 software

    Directory of Open Access Journals (Sweden)

    José Machado

    2016-04-01

    Generating the digital terrain model (DTM) requires interpolation of the measured points. Producing DTMs, 3D surfaces and contour maps quickly with computer programs can create problems, notably the choice of interpolation method. This work analyses the interpolation methods applied to the spot heights of an irregular geometric figure in the Surfer 12 program. The 12 available interpolators (Data Metrics, Inverse Distance, Kriging, Local Polynomial, Minimum Curvature, Modified Shepard's Method, Moving Average, Natural Neighbor, Nearest Neighbor, Polynomial Regression, Radial Function and Triangulation with Linear Interpolation) were applied and the resulting topographic maps analysed. The relief was represented graphically via the DTM, and each representation was rated excellent, great, good, average, regular or bad against the reference geometric image: Data Metrics, Polynomial Regression, Moving Average and Local Polynomial (bad); Moving Average and Modified Shepard's Method (regular); Nearest Neighbor (average); Inverse Distance (good); Kriging and Radial Function (great); Triangulation with Linear Interpolation and Natural Neighbor (excellent).
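
    The kind of comparison the paper performs can be mimicked on a toy surface: sample a known function at scattered points, re-estimate it elsewhere with two interpolators, and score each by RMSE. The nearest-neighbour and inverse-distance estimators below are simplified stand-ins, not Surfer's implementations, and the surface is an arbitrary choice:

```python
import numpy as np

# Scattered "measured" points on a known surface, plus held-out
# evaluation points with known true values.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, (60, 2))
z = np.sin(3 * pts[:, 0]) + np.cos(3 * pts[:, 1])
eval_pts = rng.uniform(0.0, 1.0, (200, 2))
truth = np.sin(3 * eval_pts[:, 0]) + np.cos(3 * eval_pts[:, 1])

def nearest(p):
    """Nearest-neighbour interpolation: copy the closest sample."""
    return z[np.argmin(np.linalg.norm(pts - p, axis=1))]

def inv_dist(p, power=2.0):
    """Inverse-distance weighting over all samples."""
    d = np.maximum(np.linalg.norm(pts - p, axis=1), 1e-12)
    w = d**-power
    return np.sum(w * z) / np.sum(w)

def rmse(est):
    return float(np.sqrt(np.mean((est - truth) ** 2)))

rmse_nn = rmse(np.array([nearest(p) for p in eval_pts]))
rmse_id = rmse(np.array([inv_dist(p) for p in eval_pts]))
```

    Ranking interpolators by such hold-out error is the quantitative counterpart of the visual map ratings used in the study.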

  7. Differential geometry of group lattices

    International Nuclear Information System (INIS)

    Dimakis, Aristophanes; Mueller-Hoissen, Folkert

    2003-01-01

    In a series of publications we developed ''differential geometry'' on discrete sets based on concepts of noncommutative geometry. In particular, it turned out that first-order differential calculi (over the algebra of functions) on a discrete set are in bijective correspondence with digraph structures where the vertices are given by the elements of the set. A particular class of digraphs are Cayley graphs, also known as group lattices. They are determined by a discrete group G and a finite subset S. There is a distinguished subclass of ''bicovariant'' Cayley graphs with the property ad(S)S subset of S. We explore the properties of differential calculi which arise from Cayley graphs via the above correspondence. The first-order calculi extend to higher orders and then allow us to introduce further differential geometric structures. Furthermore, we explore the properties of ''discrete'' vector fields which describe deterministic flows on group lattices. A Lie derivative with respect to a discrete vector field and an inner product with forms are defined. The Lie-Cartan identity then holds on all forms for a certain subclass of discrete vector fields. We develop elements of gauge theory and construct an analog of the lattice gauge theory (Yang-Mills) action on an arbitrary group lattice. Linear connections are also considered and a simple geometric interpretation of the torsion is established. By taking a quotient with respect to some subgroup of the discrete group, generalized differential calculi associated with so-called Schreier diagrams are obtained.

  8. Computational synthetic geometry

    CERN Document Server

    Bokowski, Jürgen

    1989-01-01

    Computational synthetic geometry deals with methods for realizing abstract geometric objects in concrete vector spaces. This research monograph considers a large class of problems from convexity and discrete geometry including constructing convex polytopes from simplicial complexes, vector geometries from incidence structures and hyperplane arrangements from oriented matroids. It turns out that algorithms for these constructions exist if and only if arbitrary polynomial equations are decidable with respect to the underlying field. Besides such complexity theorems a variety of symbolic algorithms are discussed, and the methods are applied to obtain new mathematical results on convex polytopes, projective configurations and the combinatorics of Grassmann varieties. Finally algebraic varieties characterizing matroids and oriented matroids are introduced providing a new basis for applying computer algebra methods in this field. The necessary background knowledge is reviewed briefly. The text is accessible to stud...

  9. Discrete and computational geometry

    CERN Document Server

    Devadoss, Satyan L

    2011-01-01

    Discrete geometry is a relatively new development in pure mathematics, while computational geometry is an emerging area in applications-driven computer science. Their intermingling has yielded exciting advances in recent years, yet what has been lacking until now is an undergraduate textbook that bridges the gap between the two. Discrete and Computational Geometry offers a comprehensive yet accessible introduction to this cutting-edge frontier of mathematics and computer science. This book covers traditional topics such as convex hulls, triangulations, and Voronoi diagrams, as well as more recent subjects like pseudotriangulations, curve reconstruction, and locked chains. It also touches on more advanced material, including Dehn invariants, associahedra, quasigeodesics, Morse theory, and the recent resolution of the Poincaré conjecture. Connections to real-world applications are made throughout, and algorithms are presented independently of any programming language. This richly illustrated textbook also fe...

  10. Geometry and Cloaking Devices

    Science.gov (United States)

    Ochiai, T.; Nacher, J. C.

    2011-09-01

    Recently, the application of geometry and conformal mappings to artificial materials (metamaterials) has attracted attention in various research communities. These materials, characterized by a unique man-made structure, have unusual optical properties that materials found in nature do not exhibit. By applying geometry and conformal-mapping theory to metamaterial science, it may be possible to realize a so-called "Harry Potter cloaking device". Although such a device is still in the realm of science fiction, several works have shown that with such metamaterials it may be possible to control the direction of the electromagnetic field at will, and thus to hide an object inside a cloaking device. Here, we explain how to design an invisibility device using differential geometry and conformal mappings.

  11. Lectures on discrete geometry

    CERN Document Server

    2002-01-01

    Discrete geometry investigates combinatorial properties of configurations of geometric objects. To a working mathematician or computer scientist, it offers sophisticated results and techniques of great diversity and it is a foundation for fields such as computational geometry or combinatorial optimization. This book is primarily a textbook introduction to various areas of discrete geometry. In each area, it explains several key results and methods, in an accessible and concrete manner. It also contains more advanced material in separate sections and thus it can serve as a collection of surveys in several narrower subfields. The main topics include: basics on convex sets, convex polytopes, and hyperplane arrangements; combinatorial complexity of geometric configurations; intersection patterns and transversals of convex sets; geometric Ramsey-type results; polyhedral combinatorics and high-dimensional convexity; and lastly, embeddings of finite metric spaces into normed spaces. Jiri Matousek is Professor of Com...

  12. Complex differential geometry

    CERN Document Server

    Zheng, Fangyang

    2002-01-01

    The theory of complex manifolds overlaps with several branches of mathematics, including differential geometry, algebraic geometry, several complex variables, global analysis, topology, algebraic number theory, and mathematical physics. Complex manifolds provide a rich class of geometric objects, for example the (common) zero locus of any generic set of complex polynomials is always a complex manifold. Yet complex manifolds behave differently than generic smooth manifolds; they are more coherent and fragile. The rich yet restrictive character of complex manifolds makes them a special and interesting object of study. This book is a self-contained graduate textbook that discusses the differential geometric aspects of complex manifolds. The first part contains standard materials from general topology, differentiable manifolds, and basic Riemannian geometry. The second part discusses complex manifolds and analytic varieties, sheaves and holomorphic vector bundles, and gives a brief account of the surface classifi...

  13. Quantization of the Schwarzschild geometry

    International Nuclear Information System (INIS)

    Melas, Evangelos

    2013-01-01

    The conditional symmetries of the reduced Einstein-Hilbert action emerging from a static, spherically symmetric geometry are used as supplementary conditions on the wave function. Based on their integrability conditions, only one of the three existing symmetries can be consistently imposed, while the unique Casimir invariant, being the product of the remaining two symmetries, is calculated as the only possible second condition on the wave function. This quadratic integral of motion is identified with the reparametrization generator, as an implication of the uniqueness of the dynamical evolution, by fixing a suitable parametrization of the r-lapse function. In this parametrization, the determinant of the supermetric plays the role of the measure. The combined Wheeler-DeWitt and linear conditional symmetry equations are analytically solved. The solutions obtained depend on the product of the two ''scale factors''.

  14. Geometry and symmetry

    CERN Document Server

    Yale, Paul B

    2012-01-01

    This book is an introduction to the geometry of Euclidean, affine, and projective spaces with special emphasis on the important groups of symmetries of these spaces. The two major objectives of the text are to introduce the main ideas of affine and projective spaces and to develop facility in handling transformations and groups of transformations. Since there are many good texts on affine and projective planes, the author has concentrated on the n-dimensional cases.Designed to be used in advanced undergraduate mathematics or physics courses, the book focuses on ""practical geometry,"" emphasi

  15. Using ‘snapshot’ measurements of CH4 fluxes from an ombrotrophic peatland to estimate annual budgets: interpolation versus modelling

    Directory of Open Access Journals (Sweden)

    S.M. Green

    2017-03-01

    Full Text Available Flux-chamber measurements of greenhouse gas exchanges between the soil and the atmosphere represent a snapshot of the conditions on a particular site and need to be combined or used in some way to provide integrated fluxes for the longer time periods that are often of interest. In contrast to carbon dioxide (CO2), most studies that have estimated the time-integrated flux of CH4 on ombrotrophic peatlands have not used models. Typically, linear interpolation is used to estimate CH4 fluxes during the time periods between flux-chamber measurements. CH4 fluxes generally show a rise followed by a fall through the growing season that may be captured reasonably well by interpolation, provided there are sufficiently frequent measurements. However, day-to-day and week-to-week variability is also often evident in CH4 flux data, and will not necessarily be properly represented by interpolation. Using flux chamber data from a UK blanket peatland, we compared annualised CH4 fluxes estimated by interpolation with those estimated using linear models and found that the former tended to be higher than the latter. We consider the implications of these results for the calculation of the radiative forcing effect of ombrotrophic peatlands.
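
    The interpolation-based annualisation described above can be sketched as follows; the flux values and campaign dates are hypothetical, chosen only to mimic the seasonal rise-and-fall pattern:

```python
import numpy as np

# Hypothetical snapshot CH4 fluxes (mg CH4 m^-2 d^-1) from six
# chamber campaigns across one year.
days = np.array([15.0, 75.0, 135.0, 195.0, 255.0, 315.0])
flux = np.array([0.5, 1.0, 3.5, 5.0, 2.5, 0.8])

# Annualised budget by linear interpolation between visits:
# resample to daily values (np.interp holds the ends at the
# first/last measurement) and sum over the year.
daily = np.interp(np.arange(1.0, 366.0), days, flux)
annual_interp = float(daily.sum())
```

    A model-based budget would instead fit the fluxes against drivers (e.g. temperature or water-table depth) and integrate the fitted curve; the study found that interpolation-based totals tended to exceed such model-based ones.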

  16. A numerical calculation method for flow discretisation in complex geometry with body-fitted grids

    International Nuclear Information System (INIS)

    Jin, X.

    2001-04-01

    A numerical calculation method based on body-fitted grids is developed in this work for computational fluid dynamics in complex geometry. The method solves the conservation equations in a general nonorthogonal coordinate system which matches the curvilinear boundary. The nonorthogonal, patched grid is generated by a grid generator which solves algebraic equations, and its geometrical data are passed to the method through an interface. The conservation equations are transformed from the Cartesian system to a general curvilinear system, keeping the physical Cartesian velocity components as dependent variables. Using a staggered arrangement of variables, the three Cartesian velocity components are defined on every cell surface. Thus the coupling between pressure and velocity is ensured, and numerical oscillations are avoided. The contravariant velocity used to calculate the mass flux on a cell surface is derived from the dependent Cartesian velocity components. After discretisation and linear interpolation, a three-dimensional 19-point pressure equation is obtained, which reduces to the usual 7-point equation when the cross-derivative terms are treated explicitly. Under the same data and process structure, this method is compatible with the code FLUTAN using Cartesian coordinates. To verify this method, several laminar flows are simulated on orthogonal grids tilted with respect to the space directions and on nonorthogonal grids with varying cell angles. The simulated cases include various duct flows, transient heat conduction, natural convection in a chimney and natural convection in cavities. The results are in very good agreement with analytical solutions or empirical data, and convergence is obtained even for highly nonorthogonal grids. After this successful validation, the method is applied to a reactor safety case: a transient natural convection flow for an optional sump cooling concept SUCO is simulated. The numerical result is comparable with the

  17. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.

    2013-06-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of these geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  18. Ambient Occlusion Effects for Combined Volumes and Tubular Geometry

    KAUST Repository

    Schott, M.; Martin, T.; Grosset, A. V. P.; Smith, S. T.; Hansen, C. D.

    2013-01-01

    This paper details a method for interactive direct volume rendering that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube-shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The algorithm extends the recently presented directional occlusion shading model to allow the rendering of these geometric shapes in combination with a context-providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. Stream tube geometries are computed using an effective spline-based interpolation and approximation scheme that avoids self-intersection and maintains coherent orientation of the stream tube segments to avoid surface-deforming twists. Furthermore, strategies to reduce the geometric and specular aliasing of the stream tubes are discussed.

  19. Geometric Monte Carlo and black Janus geometries

    Energy Technology Data Exchange (ETDEWEB)

    Bak, Dongsu, E-mail: dsbak@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); B.W. Lee Center for Fields, Gravity & Strings, Institute for Basic Sciences, Daejeon 34047 (Korea, Republic of); Kim, Chanju, E-mail: cjkim@ewha.ac.kr [Department of Physics, Ewha Womans University, Seoul 03760 (Korea, Republic of); Kim, Kyung Kiu, E-mail: kimkyungkiu@gmail.com [Department of Physics, Sejong University, Seoul 05006 (Korea, Republic of); Department of Physics, College of Science, Yonsei University, Seoul 03722 (Korea, Republic of); Min, Hyunsoo, E-mail: hsmin@uos.ac.kr [Physics Department, University of Seoul, Seoul 02504 (Korea, Republic of); Song, Jeong-Pil, E-mail: jeong_pil_song@brown.edu [Department of Chemistry, Brown University, Providence, RI 02912 (United States)

    2017-04-10

    We describe an application of the Monte Carlo method to the Janus deformation of the black brane background. We present numerical results for three and five dimensional black Janus geometries with planar and spherical interfaces. In particular, we argue that the 5D geometry with a spherical interface has an application in understanding the finite temperature bag-like QCD model via the AdS/CFT correspondence. The accuracy and convergence of the algorithm are evaluated with respect to the grid spacing. The systematic errors of the method are determined using an exact solution of 3D black Janus. This numerical approach for solving linear problems is unaffected by the initial guess of a trial solution and can handle an arbitrary geometry under various boundary conditions in the presence of source fields.

  20. Towards relativistic quantum geometry

    Energy Technology Data Exchange (ETDEWEB)

    Ridao, Luis Santiago [Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Mar del Plata (Argentina); Bellini, Mauricio, E-mail: mbellini@mdp.edu.ar [Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350, C.P. 7600, Mar del Plata (Argentina); Instituto de Investigaciones Físicas de Mar del Plata (IFIMAR), Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), Mar del Plata (Argentina)

    2015-12-17

    We obtain a gauge-invariant relativistic quantum geometry by using a Weylian-like manifold with a geometric scalar field which provides a gauge-invariant relativistic quantum theory in which the algebra of the Weylian-like field depends on observers. An example for a Reissner–Nordström black-hole is studied.

  1. Multiplicity in difference geometry

    OpenAIRE

    Tomasic, Ivan

    2011-01-01

    We prove a first principle of preservation of multiplicity in difference geometry, paving the way for the development of a more general intersection theory. In particular, the fibres of a σ-finite morphism between difference curves are all of the same size, when counted with correct multiplicities.

  2. Spacetime and Euclidean geometry

    Science.gov (United States)

    Brill, Dieter; Jacobson, Ted

    2006-04-01

    Using only the principle of relativity and Euclidean geometry we show in this pedagogical article that the square of proper time or length in a two-dimensional spacetime diagram is proportional to the Euclidean area of the corresponding causal domain. We use this relation to derive the Minkowski line element by two geometric proofs of the spacetime Pythagoras theorem.

  3. Physics and geometry

    International Nuclear Information System (INIS)

    Konopleva, N.P.

    2009-01-01

    The basic ideas of description methods of physical fields and elementary particle interactions are discussed. One of these ideas is the conception of space-time geometry. In this connection, experimental measurement methods are analyzed. It is shown that measurement procedures are the origin of geometrical axioms. The connection between space symmetry properties and the conservation laws is considered.

  4. Origami, Geometry and Art

    Science.gov (United States)

    Wares, Arsalan; Elstak, Iwan

    2017-01-01

    The purpose of this paper is to describe the mathematics that emanates from the construction of an origami box. We first construct a simple origami box from a rectangular sheet and then discuss some of the mathematical questions that arise in the context of geometry and algebra. The activity can be used as a context for illustrating how algebra…

  5. Gravity is Geometry.

    Science.gov (United States)

    MacKeown, P. K.

    1984-01-01

    Clarifies two concepts of gravity--those of a fictitious force and those of how space and time may have geometry. Reviews the position of Newton's theory of gravity in the context of special relativity and considers why gravity (as distinct from electromagnetics) lends itself to Einstein's revolutionary interpretation. (JN)

  6. Towards a Nano Geometry?

    DEFF Research Database (Denmark)

    Booss-Bavnbek, Bernhelm

    2011-01-01

    This paper applies I.M. Gelfand's distinction between adequate and non-adequate use of mathematical language in different contexts to the newly opened window of model-based measurements of intracellular dynamics. The specifics of geometry and dynamics on the mesoscale of cell physiology are elabo...

  7. Diophantine geometry an introduction

    CERN Document Server

    Hindry, Marc

    2000-01-01

    This is an introduction to diophantine geometry at the advanced graduate level. The book contains a proof of the Mordell conjecture which will make it quite attractive to graduate students and professional mathematicians. In each part of the book, the reader will find numerous exercises.

  8. Sliding vane geometry turbines

    Science.gov (United States)

    Sun, Harold Huimin; Zhang, Jizhong; Hu, Liangjun; Hanna, Dave R

    2014-12-30

    Various systems and methods are described for a variable geometry turbine. In one example, a turbine nozzle comprises a central axis and a nozzle vane. The nozzle vane includes a stationary vane and a sliding vane. The sliding vane is positioned to slide in a direction substantially tangent to an inner circumference of the turbine nozzle and in contact with the stationary vane.

  9. History of analytic geometry

    CERN Document Server

    Boyer, Carl B

    2012-01-01

    Designed as an integrated survey of the development of analytic geometry, this study presents the concepts and contributions from before the Alexandrian Age through the eras of the great French mathematicians Fermat and Descartes, and on through Newton and Euler to the "Golden Age," from 1789 to 1850.

  10. Non-euclidean geometry

    CERN Document Server

    Coxeter, HSM

    1965-01-01

    This textbook introduces non-Euclidean geometry, and the third edition adds a new chapter, including a description of the two families of 'mid-lines' between two given lines and an elementary derivation of the basic formulae of spherical trigonometry and hyperbolic trigonometry, and other new material.

  11. Topics in Riemannian geometry

    International Nuclear Information System (INIS)

    Ezin, J.P.

    1988-08-01

    The lectures given at the ''5th Symposium of Mathematics in Abidjan: Differential Geometry and Mechanics'' are presented. They are divided into four chapters: Riemannian metric on a differential manifold, curvature tensor fields on a Riemannian manifold, some classical functionals on Riemannian manifolds and questions. 11 refs

  12. Geometry Euclid and beyond

    CERN Document Server

    Hartshorne, Robin

    2000-01-01

    In recent years, I have been teaching a junior-senior-level course on the classical geometries. This book has grown out of that teaching experience. I assume only high-school geometry and some abstract algebra. The course begins in Chapter 1 with a critical examination of Euclid's Elements. Students are expected to read concurrently Books I-IV of Euclid's text, which must be obtained separately. The remainder of the book is an exploration of questions that arise naturally from this reading, together with their modern answers. To shore up the foundations we use Hilbert's axioms. The Cartesian plane over a field provides an analytic model of the theory, and conversely, we see that one can introduce coordinates into an abstract geometry. The theory of area is analyzed by cutting figures into triangles. The algebra of field extensions provides a method for deciding which geometrical constructions are possible. The investigation of the parallel postulate leads to the various non-Euclidean geometries. And ...

  13. Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iteration method for NURBS curve interpolation suffers from long interpolation times, complicated calculation, and a step error that is not easily controlled. This paper therefore studies an algorithm for Newton-Raphson iterative interpolation of NURBS curves, together with its simulation: Newton-Raphson iteration is used to calculate each interpolated point (xi, yi, zi). The simulation results show that the algorithm is correct, that it satisfies the requirements of NURBS curve interpolation, and that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
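As a rough illustration of the kind of iteration the abstract describes, the sketch below applies Newton-Raphson to find the curve parameter u at which a cubic Bézier curve (a special case of a NURBS curve with unit weights) reaches a target x-coordinate; the control points, starting guess and tolerance are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: Newton-Raphson parameter solve on a cubic Bezier curve
# (a NURBS special case with unit weights). Solve C_x(u) = x_target for u,
# which locates the interpolated point (x, y) on the curve.

def bezier(ctrl, u):
    """Evaluate a planar cubic Bezier curve and its derivative at u."""
    b = ((1 - u) ** 3, 3 * u * (1 - u) ** 2, 3 * u ** 2 * (1 - u), u ** 3)
    db = (-3 * (1 - u) ** 2,
          3 * (1 - u) ** 2 - 6 * u * (1 - u),
          6 * u * (1 - u) - 3 * u ** 2,
          3 * u ** 2)
    point = tuple(sum(w * p[i] for w, p in zip(b, ctrl)) for i in range(2))
    deriv = tuple(sum(w * p[i] for w, p in zip(db, ctrl)) for i in range(2))
    return point, deriv

def newton_param(ctrl, x_target, u0=0.5, tol=1e-12, max_iter=50):
    """Newton-Raphson on f(u) = C_x(u) - x_target (x monotone in u assumed)."""
    u = u0
    for _ in range(max_iter):
        (x, _y), (dx, _dy) = bezier(ctrl, u)
        if abs(x - x_target) < tol:
            break
        u -= (x - x_target) / dx  # Newton step: u_{k+1} = u_k - f(u_k)/f'(u_k)
    return u
```

A real NURBS interpolator would instead step by arc length per servo cycle, but the core parameter update has the same Newton form.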

  14. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Accurate long-term prediction of time series data (TSD) is a useful research challenge in many fields. Because financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. Two challenges must be met: maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Traditional linear models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy; non-linear models such as ANNs maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model that maintains prediction accuracy while preserving the data trend is proposed, together with a quantitative reasoning analysis justifying its accuracy. The proposed model incorporates a moving-average (MA) filter based pre-processing step and a partitioning and interpolation (PI) technique. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and trend preservation.
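A minimal sketch of the moving-average pre-processing idea mentioned above: smooth the series with an MA filter to extract the trend, leaving a residual for further modelling. The window size and edge handling below are illustrative assumptions; the paper's partitioning and interpolation steps are not reproduced.

```python
# Hedged sketch of MA-filter pre-processing: series = trend + residual.

def moving_average(series, window):
    """Centered moving average; at the edges the window shrinks."""
    n = len(series)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def decompose(series, window):
    """Split a series into a smooth trend and a residual component."""
    trend = moving_average(series, window)
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual
```

In a hybrid scheme, a trend-following model (e.g. ARIMA) would be fitted to the smooth component and a volatility model to the residual.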

  15. Systems of Inhomogeneous Linear Equations

    Science.gov (United States)

    Scherer, Philipp O. J.

    Many problems in physics and especially computational physics involve systems of linear equations which arise e.g. from linearization of a general nonlinear problem or from discretization of differential equations. If the dimension of the system is not too large standard methods like Gaussian elimination or QR decomposition are sufficient. Systems with a tridiagonal matrix are important for cubic spline interpolation and numerical second derivatives. They can be solved very efficiently with a specialized Gaussian elimination method. Practical applications often involve very large dimensions and require iterative methods. Convergence of Jacobi and Gauss-Seidel methods is slow and can be improved by relaxation or over-relaxation. An alternative for large systems is the method of conjugate gradients.
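The specialized Gaussian elimination for tridiagonal systems mentioned above is commonly known as the Thomas algorithm; a minimal sketch follows, assuming a well-conditioned (e.g. diagonally dominant) system so that no pivoting is needed.

```python
# Hedged sketch of the Thomas algorithm: O(n) elimination for a
# tridiagonal system, as arises in cubic spline interpolation.

def thomas(a, b, c, d):
    """Solve a tridiagonal system (n >= 2, no pivoting).
    a: sub-diagonal (len n-1), b: diagonal (len n),
    c: super-diagonal (len n-1), d: right-hand side (len n)."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward elimination.
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = (c[i] / m) if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    # Back substitution.
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

For diagonally dominant matrices (the spline case) the algorithm is stable; otherwise a pivoting solver should be used.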

  16. Cubic-spline interpolation to estimate effects of inbreeding on milk yield in first lactation Holstein cows

    Directory of Open Access Journals (Sweden)

    Makram J. Geha

    2011-01-01

    Milk yield records (305 d, 2X, actual milk yield) of 123,639 registered first-lactation Holstein cows were used to compare linear regression (y = β0 + β1X + e), quadratic regression (y = β0 + β1X + β2X² + e), cubic regression (y = β0 + β1X + β2X² + β3X³ + e) and fixed-factor models with cubic-spline interpolation models for estimating the effects of inbreeding on milk yield. Ten animal models, all with herd-year-season of calving as a fixed effect, were compared using the corrected Akaike Information Criterion (AICc). The cubic-spline interpolation model with seven knots had the lowest AICc, whereas all the models labeled as "traditional" had a higher AICc than the best model. Results from fitting inbreeding using a cubic spline with seven knots were compared to results from fitting inbreeding as a linear covariate or as a fixed factor with seven levels. Estimates of inbreeding effects were not significantly different between the cubic-spline model and the fixed-factor model, but differed significantly from those of the linear regression model. Milk yield decreased significantly at inbreeding levels greater than 9%. Variance component estimates were similar for the three models, and the ranking of the top 100 sires with daughter records was unaffected by the choice of model.
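For reference, the corrected Akaike Information Criterion used above to rank the ten models can be sketched as below; the formula is standard, but the example values are made up and not from the study.

```python
# Hedged sketch of AICc model comparison: lower AICc indicates the
# preferred trade-off between fit and parameter count.

def aicc(log_likelihood, k, n):
    """AICc = -2 lnL + 2k + 2k(k+1)/(n-k-1),
    where k is the parameter count and n the sample size."""
    aic = -2.0 * log_likelihood + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)
```

With equal log-likelihoods, the model with fewer parameters wins; the extra correction term matters most when n is small relative to k.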

  17. Linear gate

    International Nuclear Information System (INIS)

    Suwono.

    1978-01-01

    A linear gate providing a variable gate duration from 0.40 μsec to 4 μsec was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)

  18. Linear Accelerators

    International Nuclear Information System (INIS)

    Vretenar, M

    2014-01-01

    The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.

  19. Phased array ultrasound testing on complex geometry

    International Nuclear Information System (INIS)

    Tuan Arif Tuan Mat; Khazali Mohd Zin

    2009-01-01

    Phased array ultrasonic inspection is used to investigate its response to complex welded-joint geometries. A 5 MHz probe with 64 linear array elements was employed to scan mild-steel T-joint, nozzle and node samples. These samples contain many defects, such as cracks, lack of penetration and lack of fusion. The ultrasonic response is analysed and viewed using the Tomoview software. The results show the actual phased array images for each type of defect. (author)

  20. LEARNING GEOMETRY THROUGH MIMESIS AND DIGITAL CONSTRUCT

    OpenAIRE

    Maria Mion POP; Mihaela GIURGIULESCU

    2015-01-01

    The theme we propose is useful to teachers and students of mathematics in the compulsory school cycle. The issue faced by school teachers/parents is the difficulty with which students read and understand the lessons/examples/syntheses in order to assimilate technical terms. Echoic and iconic memory facilitates students' learning of the specific curriculum of linear, spatial and analytical geometry on the digital platform designed by us; it facilitates the acquiring of the ...