Interpolative Boolean Networks
Directory of Open Access Journals (Sweden)
Vladimir Dobrić
2017-01-01
Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary, and they are suitable for modeling systems with complex switch-like causal interactions. Greater descriptive power can be achieved by introducing gradation into this model. If this is done using conventional fuzzy logics, the generalized model cannot preserve the Boolean frame, and consequently the validity of the model’s dynamics is not guaranteed. The aim of this paper is to present a Boolean-consistent generalization of Boolean networks: interpolative Boolean networks (IBN). The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model adapts to the nature of its input variables and offers greater descriptive power than traditional models. For illustrative purposes, IBN is compared to models based on existing real-valued approaches. Given the complexity of most systems to be analyzed and the characteristics of interpolative Boolean algebra, software support was developed to provide graphical and numerical tools for complex-system modeling and analysis.
Traffic volume estimation using network interpolation techniques.
2013-12-01
Kriging is a frequently used interpolation methodology in geography, which enables estimation of unknown values at certain places with consideration of the distances among locations. When it is used in the transportation field, network distanc...
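The kriging estimation step alluded to above can be sketched in a few lines. The following minimal ordinary-kriging example uses plain Euclidean distances and an assumed exponential variogram (sill and range are made-up parameters, no nugget); the network-distance variant would substitute path lengths for the Euclidean distances:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rnge=10.0):
    # Exponential variogram model gamma(h); parameters are illustrative.
    return sill * (1.0 - np.exp(-h / rnge))

def ordinary_kriging(xy, z, x0, variogram=exp_variogram):
    """Estimate the value at x0 from samples z at coordinates xy.

    Builds and solves the ordinary kriging system from pairwise distances.
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                      # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - x0, axis=1))
    w = np.linalg.solve(A, b)          # kriging weights + multiplier
    return float(w[:n] @ z)
```

With no nugget effect, ordinary kriging is an exact interpolator: querying at a sampled location returns the sample value itself.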
INTERPOL's Surveillance Network in Curbing Transnational Terrorism
Gardeazabal, Javier; Sandler, Todd
2015-01-01
Abstract This paper investigates the role that International Criminal Police Organization (INTERPOL) surveillance—the Mobile INTERPOL Network Database (MIND) and the Fixed INTERPOL Network Database (FIND)—played in the War on Terror since its inception in 2005. MIND/FIND surveillance allows countries to screen people and documents systematically at border crossings against INTERPOL databases on terrorists, fugitives, and stolen and lost travel documents. Such documents have been used in the past by terrorists to transit borders. By applying methods developed in the treatment‐effects literature, this paper establishes that countries adopting MIND/FIND experienced fewer transnational terrorist attacks than they would have had they not adopted MIND/FIND. Our estimates indicate that, on average, from 2008 to 2011, adopting and using MIND/FIND results in 0.5 fewer transnational terrorist incidents each year per 100 million people. Thus, a country like France with a population just above 64 million people in 2008 would have 0.32 fewer transnational terrorist incidents per year owing to its use of INTERPOL surveillance. This amounts to a sizeable average proportional reduction of about 30 percent.
Discrete Orthogonal Transforms and Neural Networks for Image Interpolation
Directory of Open Access Journals (Sweden)
J. Polec
1999-09-01
Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From neural network point of view, we present interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.
Data mining techniques in sensor networks summarization, interpolation and surveillance
Appice, Annalisa; Fumarola, Fabio; Malerba, Donato
2013-01-01
Sensor networks comprise a number of sensors installed across a spatially distributed network, which gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms, and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. But the limited storage capacity of a server restricts the amount of data that can be stored on disk. One solution is to compute summaries of the data as they arrive, and to use these summaries to interpolate the real data.
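The summarize-then-interpolate idea can be sketched roughly as follows. The fixed window size, per-window means as the summary, and linear interpolation for reconstruction are all illustrative assumptions, not the book's specific techniques:

```python
import numpy as np

def summarize(stream, window=10):
    # Keep only one mean per fixed-size window (the stored "summary").
    n = len(stream) // window * window
    return np.asarray(stream[:n]).reshape(-1, window).mean(axis=1)

def reconstruct(means, window=10):
    # Interpolate the summaries back to per-sample resolution,
    # anchoring each mean at the centre of its window.
    centers = np.arange(len(means)) * window + (window - 1) / 2
    t = np.arange(len(means) * window)
    return np.interp(t, centers, means)
```

The server stores `len(stream) / window` values instead of the whole stream, at the cost of losing within-window variation.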
Sparsity-Based Spatial Interpolation in Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Yan Yao
2011-02-01
Full Text Available In wireless sensor networks, due to environmental limitations or bad wireless channel conditions, not all sensor samples can be successfully gathered at the sink. In this paper, we try to recover these missing samples without retransmission. The missing samples estimation problem is mathematically formulated as a 2-D spatial interpolation. Assuming the 2-D sensor data can be sparsely represented by a dictionary, a sparsity-based recovery approach by solving for l1 norm minimization is proposed. It is shown that these missing samples can be reasonably recovered based on the null space property of the dictionary. This property also points out the way to choose an appropriate sparsifying dictionary to further reduce the recovery errors. The simulation results on synthetic and real data demonstrate that the proposed approach can recover the missing data reasonably well and that it outperforms the weighted average interpolation methods when the data change relatively fast or blocks of samples are lost. Besides, there exists a range of missing rates where the proposed approach is robust to missing block sizes.
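A toy 1-D analogue of the described recovery (the paper's setting is 2-D sensor fields) can be sketched as basis pursuit: assume the data are sparse in a DCT dictionary, drop some samples, and solve the l1 minimization as a linear program. The dictionary choice, problem size, and sparsity level here are illustrative assumptions:

```python
import numpy as np
from scipy.fft import idct
from scipy.optimize import linprog

# Signal sparse in a DCT dictionary (columns of D are DCT atoms).
n = 32
D = idct(np.eye(n), axis=0, norm="ortho")
c_true = np.zeros(n)
c_true[[2, 5, 9]] = [1.0, -0.7, 0.4]
x_true = D @ c_true

# Only some samples reach the sink; the rest are "missing".
rng = np.random.default_rng(1)
kept = np.sort(rng.choice(n, size=20, replace=False))
A = D[kept, :]
y = x_true[kept]

# Basis pursuit: min ||c||_1 s.t. A c = y, via the standard LP split c = p - q.
res = linprog(c=np.ones(2 * n),
              A_eq=np.hstack([A, -A]), b_eq=y,
              bounds=[(0, None)] * (2 * n))
c_hat = res.x[:n] - res.x[n:]
x_hat = D @ c_hat          # recovered signal, including the missing samples
```

The equality constraints force the reconstruction to honor the samples that were actually received, while the l1 objective promotes a sparse coefficient vector.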
Hiemstra, P.H.; Pebesma, E.J.; Twenhöfel, C.J.W.; Heuvelink, G.B.M.
2009-01-01
Detection of radiological accidents and monitoring the spread of the contamination is of great importance. Following the Chernobyl accident many European countries have installed monitoring networks to perform this task. Real-time availability of automatically interpolated maps showing the spread of
View-interpolation of sparsely sampled sinogram using convolutional neural network
Lee, Hoyeon; Lee, Jongha; Cho, Suengryong
2017-02-01
Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when combined with advanced iterative image reconstruction, albeit with varying degrees of image artifacts. One of the artifacts that may occur in sparse-view CT is the streak artifact in the reconstructed images. Another approach to sparse-view CT imaging uses interpolation methods to fill in the missing view data and then reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the widely used deep-learning methods, to find missing projection data, and compared its performance with other interpolation techniques.
Measurement and interpolation uncertainties in rainfall maps from cellular communication networks
Rios Gaona, M. F.; Overeem, A.; Leijnse, H.; Uijlenhoet, R.
2015-08-01
Accurate measurements of rainfall are important in many hydrological and meteorological applications, for instance, flash-flood early-warning systems, hydraulic structures design, irrigation, weather forecasting, and climate modelling. Microwave link networks from cellular communication systems measure and store the received power of the electromagnetic signal at regular intervals. The decrease in power is largely due to attenuation by raindrops along the link paths and can be converted to rainfall intensity. Such an alternative technique supports the continuing effort to obtain measurements of rainfall in time and space at higher resolutions, especially in places where traditional rain gauge networks are scarce or poorly maintained. Rainfall maps from microwave link networks have recently been introduced at country-wide scales. Despite their potential for rainfall estimation at high spatiotemporal resolutions, the uncertainties present in rainfall maps from link networks are not yet fully understood. The aim of this work is to identify and quantify the sources of uncertainty present in interpolated rainfall maps from link rainfall depths. In order to disentangle these sources of uncertainty, we classified them into two categories: (1) those associated with the individual microwave link measurements, i.e. the errors involved in link rainfall retrievals, such as wet antenna attenuation, sampling interval of measurements, wet/dry period classification, dry weather baseline attenuation, quantization of the received power, drop size distribution (DSD), and multi-path propagation; and (2) those associated with mapping, i.e. the combined effect of the interpolation methodology and the spatial density of link measurements. We computed ~ 3500 rainfall maps from real and simulated link rainfall depths for 12 days for the land surface of the Netherlands. Simulated link rainfall depths refer to path-averaged rainfall depths obtained from radar data. The ~ 3500 real and simulated rainfall maps were
Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.
2014-05-01
The purpose of this study is to examine the use of Artificial Neural Networks (ANN) combined with the kriging interpolation method in order to simulate the hydraulic head both spatially and temporally. Initially, ANNs are used for the temporal simulation of the hydraulic head change. The results of the most appropriate ANNs, determined through a fuzzy logic system, are used as input for the kriging algorithm, where the spatial simulation is conducted. The proposed algorithm is tested in an area located along the Isar River in Bavaria, Germany, covering approximately 7800 km2. The available data extend over the period from 1/11/2008 to 31/10/2012 (1460 days) and include the hydraulic head at 64 wells, temperature and rainfall at 7 weather stations, and surface water elevation at 5 monitoring stations. One feedforward ANN was trained for each of the 64 wells where hydraulic head data are available, using a backpropagation algorithm. The most appropriate input parameters for each well's ANN are determined considering their proximity to the measuring station, as well as their statistical characteristics. For rainfall, data at two consecutive time lags from the best-correlated weather station, together with a third and fourth input from the second-best-correlated weather station, are used as input. The surface water monitoring stations with the three best correlations with each well are also used in every case. Finally, the temperature from the best-correlated weather station is used. Two different architectures are considered, and the one with the better results is used henceforward. The output of the ANNs corresponds to the hydraulic head change per time step. These predictions are used in the kriging interpolation algorithm. However, not all 64 simulated values should be used. The appropriate neighborhood for each prediction point is constructed based not only on the distance between known and prediction points, but also on the training and testing error of
Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks
1988-03-28
form of the interpolating functions $s_k(\mathbf{x}) = \lambda_{0k} + \sum_{j=1}^{m} \lambda_{jk}\,\phi(\lVert \mathbf{x} - \mathbf{y}_j \rVert)$, $\mathbf{x} \in \mathbb{R}^n$, $k = 1, 2, \ldots, n'$ (8). These coefficients enter the least squares formalism... the $m$ distinct data points in $\mathbb{R}^n$ are associated with $m$ vectors $\mathbf{F}_s \in \mathbb{R}^{n'}$. The interpolation condition of equation (1) thus generalises to $s_k(\mathbf{y}_s) = F_{sk}$, $s = 1, 2, \ldots, m$, $k = 1, 2, \ldots, n'$ (5), which leads to interpolating functions of the form $s_k(\mathbf{x}) = \sum_{j=1}^{m} \lambda_{jk}\,\phi(\lVert \mathbf{x} - \mathbf{y}_j \rVert)$, $\mathbf{x} \in \mathbb{R}^n$ (6). The expansion coefficients $\lambda_{jk}$ are
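A minimal numerical sketch of this interpolation scheme, with an assumed multiquadric basis function phi and hypothetical data, might look like:

```python
import numpy as np

def rbf_fit(Y, F, phi=lambda r: np.sqrt(r**2 + 1.0)):
    """Solve for coefficients so that s(y_s) = F[s] at every data site.

    Y: (m, n) data sites; F: (m, n') target vectors; multiquadric phi assumed.
    Returns (lam0, Lam) for s(x) = lam0 + sum_j Lam[j] * phi(||x - y_j||).
    """
    m = len(Y)
    Phi = phi(np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1))
    A = np.hstack([np.ones((m, 1)), Phi])   # constant term + RBF terms
    # System is m x (m+1) and consistent, so lstsq returns an exact
    # (minimum-norm) solution.
    coef, *_ = np.linalg.lstsq(A, F, rcond=None)
    return coef[0], coef[1:]

def rbf_eval(x, Y, lam0, Lam, phi=lambda r: np.sqrt(r**2 + 1.0)):
    # Evaluate the interpolant at a single point x.
    return lam0 + phi(np.linalg.norm(x - Y, axis=-1)) @ Lam
```

By construction the interpolant reproduces the target vectors exactly at the data sites.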
Pegram, Geoff; Gyasi-Agyei, Yeboah
2014-05-01
designed to give a measure of the hydrological response's sensitivity to the uncertainty of spatial interpolation of gauge network rainfall [observed or simulated] by simulating many conditioned spatial replicates, each of which is plausible
Tokumitsu, Masahiro; Hasegawa, Keisuke; Ishida, Yoshiteru
2016-04-15
This paper constructs a resilient sensor network model, illustrated with an example of space weather forecasting. The proposed model is based on a dynamic relational network. Space weather forecasting is vital for satellite operation, because an operational team needs to make decisions about providing its satellite service. The proposed model is resilient to failures of sensors or to data missing due to satellite operations. In the proposed model, the missing data of a sensor are interpolated from its associated sensors. This paper demonstrates two examples of space weather forecasting that involve missing observations in some test cases. In these examples, the sensor network for space weather forecasting continues diagnosis by replacing faulted sensors with virtual ones. The demonstrations show that the proposed model is resilient against sensor failures caused by hardware faults or suspension for technical reasons.
Neural network interpolation of the magnetic field for the LISA Pathfinder Diagnostics Subsystem
Diaz-Aguilo, Marc; García-Berro, Enrique
2011-01-01
LISA Pathfinder is a science and technology demonstrator of the European Space Agency within the framework of its LISA mission, which aims to be the first space-borne gravitational wave observatory. The payload of LISA Pathfinder is the so-called LISA Technology Package, which is designed to measure relative accelerations between two test masses in nominal free fall. Its disturbances are monitored and dealt with by the diagnostics subsystem. This subsystem consists of several modules, one of which is the magnetic diagnostics system, comprising a set of four tri-axial fluxgate magnetometers intended to measure with high precision the magnetic field at the positions of the test masses. However, since the magnetometers are located far from the positions of the test masses, the magnetic field at their positions must be interpolated. It has been recently shown that, because there are not enough magnetic channels, classical interpolation methods fail to derive reliable measurements at the positions of the test m...
Directory of Open Access Journals (Sweden)
Umut Bulucu
2008-09-01
Full Text Available Wireless communication networks offer subscribers the possibility of free mobility and access to information anywhere at any time. Electromagnetic coverage calculations are therefore important for wireless mobile communication systems, especially in Wireless Local Area Networks (WLANs). Before any propagation computation is performed, modeling of indoor radio wave propagation needs accurate geographical information in order to avoid the interruption of data transmissions. Geographic Information Systems (GIS) and spatial interpolation techniques are very efficient for performing indoor radio wave propagation modeling. This paper describes the spatial interpolation of electromagnetic field measurements using a feed-forward back-propagation neural network programmed as a tool in GIS. The accuracies of Artificial Neural Networks (ANN) and geostatistical kriging were compared. The feed-forward back-propagation ANN provides adequate accuracy for spatial interpolation, but the predictions of kriging interpolation are more accurate than those of the selected ANN. The proposed GIS provides the indoor radio wave propagation model and electromagnetic coverage, the number, position, and transmitter power of access points, and the electromagnetic radiation level. A pollution analysis in a given propagation environment was performed, and it demonstrated that WLAN (2.4 GHz) electromagnetic coverage does not lead to any electromagnetic pollution due to the low power levels used. Example interpolated electromagnetic field values for a WLAN system in a building of Yildiz Technical University, Turkey, were generated using the selected network architectures to illustrate the results with an ANN.
Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao
2017-12-26
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
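The geo-accumulation index used as the pollution index above has a standard closed form, Igeo = log2(Cn / (1.5 Bn)), where Cn is the measured concentration and Bn the geochemical background. A small sketch follows; the class boundaries are the commonly used Mueller scale, and the background values are site-specific inputs not given in the abstract:

```python
import math

def igeo(measured, background):
    """Geo-accumulation index: log2(Cn / (1.5 * Bn)).

    The factor 1.5 absorbs natural fluctuations of the background value.
    """
    return math.log2(measured / (1.5 * background))

def igeo_class(i):
    # Common 7-class scale: 0 = unpolluted ... 6 = extremely polluted.
    bounds = [0, 1, 2, 3, 4, 5]
    return sum(i > b for b in bounds)
```

For example, a measured concentration of three times the background gives Igeo = log2(2) = 1, the upper edge of class 1 (unpolluted to moderately polluted).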
Spatio-temporal interpolation of soil moisture in 3D+T using automated sensor network data
Gasch, C.; Hengl, T.; Magney, T. S.; Brown, D. J.; Gräler, B.
2014-12-01
Soil sensor networks provide frequent in situ measurements of dynamic soil properties at fixed locations, producing data in 2- or 3-dimensions and through time (2D+T and 3D+T). Spatio-temporal interpolation of 3D+T point data produces continuous estimates that can then be used for prediction at unsampled times and locations, as input for process models, and can simply aid in visualization of properties through space and time. Regression-kriging with 3D and 2D+T data has successfully been implemented, but currently the field of geostatistics lacks an analytical framework for modeling 3D+T data. Our objective is to develop robust 3D+T models for mapping dynamic soil data that has been collected with high spatial and temporal resolution. For this analysis, we use data collected from a sensor network installed on the R.J. Cook Agronomy Farm (CAF), a 37-ha Long-Term Agro-Ecosystem Research (LTAR) site in Pullman, WA. For five years, the sensors have collected hourly measurements of soil volumetric water content at 42 locations and five depths. The CAF dataset also includes a digital elevation model and derivatives, a soil unit description map, crop rotations, electromagnetic induction surveys, daily meteorological data, and seasonal satellite imagery. The soil-water sensor data, combined with the spatial and temporal covariates, provide an ideal dataset for developing 3D+T models. The presentation will include preliminary results and address main implementation strategies.
Energy Technology Data Exchange (ETDEWEB)
MACKAY, W.W.; LUCCIO, A.U.
2006-06-23
It is important to have symplectic maps for the various electromagnetic elements in an accelerator ring. For some tracking problems we must consider elements which evolve during a ramp. Rather than performing a computationally intensive numerical integration for every turn, it should be possible to integrate the trajectory for a few sets of parameters, and then interpolate the transport map as a function of one or more parameters, such as energy. We present two methods for interpolation of symplectic matrices as a function of parameters: one method is based on the calculation of a representation in terms of a basis of group generators [2, 3] and the other is based on the related but simpler symplectification method of Healy [1]. Both algorithms guarantee a symplectic result.
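One simple way to guarantee a symplectic interpolant, in the spirit of the symplectification approach mentioned above, is to map each matrix to the linear space of Hamiltonian matrices via the Cayley transform, interpolate there, and map back. The 2x2 example below is an illustrative sketch under that assumption, not the authors' algorithm:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # the 2x2 symplectic form

def cayley(M):
    # Symplectic -> Hamiltonian (valid when -1 is not an eigenvalue of M).
    I = np.eye(len(M))
    return (M - I) @ np.linalg.inv(M + I)

def inv_cayley(W):
    # Hamiltonian -> symplectic.
    I = np.eye(len(W))
    return (I + W) @ np.linalg.inv(I - W)

def interp_symplectic(M0, M1, t):
    """Interpolate in the (linear) Hamiltonian parametrization and map back,
    so the result is symplectic for every t."""
    return inv_cayley((1 - t) * cayley(M0) + t * cayley(M1))

def rot(a):
    # 2x2 rotations are symplectic (determinant 1).
    return np.array([[np.cos(a), np.sin(a)], [-np.sin(a), np.cos(a)]])
```

Because linear combinations of Hamiltonian matrices stay Hamiltonian, the interpolated map satisfies M^T J M = J exactly (up to round-off), unlike entry-wise interpolation of the matrices themselves.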
Huawei Zhao; Chi Chen; Jiankun Hu; Jing Qin
2015-01-01
We present two approaches that exploit biometric data to address security problems in body sensor networks: a new key negotiation scheme based on fuzzy extractor technology and an improved linear interpolation encryption method. The first approach designs two attack games to give a formal definition of fuzzy negotiation, which forms a new key negotiation scheme based on fuzzy extractor technology. According to the definition, we further define a concrete structure of fuzzy negotiation...
Laruelle, Goulven G.; Landschützer, Peter; Gruber, Nicolas; Tison, Jean-Louis; Delille, Bruno; Regnier, Pierre
2017-10-01
In spite of the recent strong increase in the number of measurements of the partial pressure of CO2 in the surface ocean (pCO2), the air-sea CO2 balance of the continental shelf seas remains poorly quantified. This is a consequence of these regions remaining strongly under-sampled in both time and space and of surface pCO2 exhibiting much higher temporal and spatial variability in these regions compared to the open ocean. Here, we use a modified version of a two-step artificial neural network method (SOM-FFN; Landschützer et al., 2013) to interpolate the pCO2 data along the continental margins with a spatial resolution of 0.25° and with monthly resolution from 1998 to 2015. The most important modifications compared to the original SOM-FFN method are (i) the much higher spatial resolution and (ii) the inclusion of sea ice and wind speed as predictors of pCO2. The SOM-FFN is first trained with pCO2 measurements extracted from the SOCATv4 database. Then, the validity of our interpolation, in both space and time, is assessed by comparing the generated pCO2 field with independent data extracted from the LDVEO2015 database. The new coastal pCO2 product confirms a previously suggested general meridional trend of the annual mean pCO2 in all the continental shelves with high values in the tropics and dropping to values beneath those of the atmosphere at higher latitudes. The monthly resolution of our data product permits us to reveal significant differences in the seasonality of pCO2 across the ocean basins. The shelves of the western and northern Pacific, as well as the shelves in the temperate northern Atlantic, display particularly pronounced seasonal variations in pCO2, while the shelves in the southeastern Atlantic and in the southern Pacific reveal a much smaller seasonality. The calculation of temperature normalized pCO2 for several latitudes in different oceanic basins confirms that the seasonality in shelf pCO2 cannot solely be explained by temperature
Interpolation functors and interpolation spaces
Brudnyi, Yu A
1991-01-01
The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...
Deep conversion of black oils with Eni Slurry technology
Energy Technology Data Exchange (ETDEWEB)
Panariti, Nicoletta; Rispoli, Giacomo
2010-09-15
Eni Slurry Technology (EST) represents a significant technological innovation in residue conversion and unconventional-oil upgrading. EST allows almost total conversion of heavy feedstocks into useful products, mainly transportation fuels, with a major impact on the economic and environmental valorization of hydrocarbon resources. The characteristics of EST in terms of yields, product quality, absence of undesired by-products, and feedstock flexibility underpin its economic and environmental attractiveness. The first full-scale industrial plant based on this new technology will be realized in Eni's Sannazzaro refinery (23,000 bpd). Oil-in is scheduled for the fourth quarter of 2012.
Yao, J. G.; Lagrosas, N.; Ampil, L. J. Y.; Lorenzo, G. R. H.; Simpas, J.
2016-12-01
A hybrid piecewise rainfall-value interpolation algorithm was formulated using the commonly known Inverse Distance Weighting (IDW) method and the Gauss-Seidel variant Successive Over-Relaxation (SOR) to interpolate rainfall values over Metro Manila, Philippines. Because SOR requires boundary values for its algorithm to work, the IDW method is used to estimate rainfall values at the boundary. Iterations using SOR were then performed within the defined boundaries to obtain the results corresponding to the lowest RMSE value. The hybrid method was applied to rainfall datasets obtained from a dense network of 30 stations in Metro Manila, which has been collecting meteorological data every 5 minutes since 2012. Implementing the Davis Vantage Pro 2 Plus weather monitoring system, each station sends data to a central server that can be accessed through the website metroweather.com.ph. The stations are spread over an area of approximately 625 sq km, such that each station covers roughly 25 sq km. The locations of the stations, determined by the Metro Manila Development Authority (MMDA), are in critical sections of Metro Manila such as watersheds and flood-prone areas. Three cases were investigated in this study, one for each type of rainfall present in Metro Manila: monsoon-induced (8/20/13), typhoon (6/29/13), and thunderstorm (7/3/15 & 7/4/15). The area where the rainfall stations are located is divided such that large measured rainfall values are used as part of the boundaries for the SOR. Measured station values found inside the area where SOR is implemented are compared with interpolated values. Root mean square error (RMSE) and correlation trends between measured and interpolated results are quantified. RMSE values ranged from 0.25 to 2.46 mm for typhoons, 1.55 to 10.69 mm for monsoon-induced rain, and 0.01 to 6.27 mm for thunderstorms. R2 values, on the other
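The two-stage scheme described above, IDW for boundary values and SOR relaxation inside, can be sketched on a toy grid. The Laplace-equation relaxation, grid size, and relaxation factor below are illustrative assumptions rather than the paper's exact piecewise formulation:

```python
import numpy as np

def idw(pt, stations, values, p=2.0):
    # Classic inverse-distance weighting with power p.
    d = np.linalg.norm(stations - pt, axis=1)
    if d.min() < 1e-12:
        return float(values[int(d.argmin())])
    w = 1.0 / d**p
    return float(w @ values / w.sum())

def hybrid_idw_sor(stations, values, shape=(20, 20), omega=1.5, iters=500):
    """Fill the grid boundary with IDW estimates, then relax the interior
    with SOR on the discrete Laplace equation."""
    ny, nx = shape
    g = np.zeros(shape)
    for i in range(ny):                       # IDW on the boundary ring only
        for j in range(nx):
            if i in (0, ny - 1) or j in (0, nx - 1):
                g[i, j] = idw(np.array([i, j], float), stations, values)
    for _ in range(iters):                    # SOR sweeps over the interior
        for i in range(1, ny - 1):
            for j in range(1, nx - 1):
                avg = 0.25 * (g[i-1, j] + g[i+1, j] + g[i, j-1] + g[i, j+1])
                g[i, j] += omega * (avg - g[i, j])
    return g
```

Note this sketch does not enforce the measured values at interior station locations, which is exactly why the paper compares interior station measurements against the interpolated field.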
Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur
2017-09-01
The aim of this study was to determine spatial risk dispersion of ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performances of methods, make dose estimations for intermediate stations with no previous measurements and create dose rate risk maps of the study area. In order to determine the dose distribution by using artificial neural networks, two main networks and five different network structures were used; feed forward ANN; Multi-layer perceptron (MLP), Radial basis functional neural network (RBFNN), Quantile regression neural network (QRNN) and recurrent ANN; Jordan networks (JN), Elman networks (EN). In the evaluation of estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing distributions of AGDR of the study area were created by all models and results were compared with geological, topological and soil structure. Copyright © 2017 Elsevier Ltd. All rights reserved.
Feature displacement interpolation
DEFF Research Database (Denmark)
Nielsen, Mads; Andresen, Per Rønsholt
1998-01-01
Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Also estimation of missing image data may be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularizations, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.
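The Kriging estimator referred to here can be illustrated with a minimal 1-D simple-kriging sketch; the Gaussian covariance model and its parameters below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def simple_kriging(x_obs, y_obs, x_new, sill=1.0, rng=1.0, mean=0.0):
    """Simple kriging in 1-D with an assumed Gaussian covariance model."""
    cov = lambda h: sill * np.exp(-(h / rng) ** 2)
    # covariance among observations and between observations and targets
    C = cov(np.abs(x_obs[:, None] - x_obs[None, :]))
    c = cov(np.abs(x_obs[:, None] - np.atleast_1d(x_new)[None, :]))
    w = np.linalg.solve(C, c)                 # kriging weights
    return mean + w.T @ (y_obs - mean)
```

Without a nugget effect, the predictor is exact at the observation points, which is easy to verify.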
2013-01-22
... carrier and with which trade is not prohibited by U.S. law or policy. Eni USA Gas Marketing is requesting... law or policy. Eni USA Gas Marketing states that it does not seek authorization to export domestically... USA Gas Marketing LLC; Application for Blanket Authorization To Export Previously Imported Liquefied...
EOS Interpolation and Thermodynamic Consistency
Energy Technology Data Exchange (ETDEWEB)
Gammel, J. Tinka [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-11-16
As discussed in LA-UR-08-05451, the current interpolator used by Grizzly, OpenSesame, EOSPAC, and similar routines is the rational function interpolator from Kerley. While the rational function interpolator is well-suited for interpolation on sparse grids with logarithmic spacing and preserves monotonicity in 1-D, it has some known problems.
Data interpolation beyond aliasing
Volker, A.W.F.; Neer, P.L.M.J. van
2017-01-01
Proper spatial sampling is critical for many applications. If the sampling criterion is not met, artifacts appear for example in images. Last year an iterative approach was presented using wave field extrapolation to interpolate spatially aliased signals. The main idea behind this approach is that
Optimal Sampling and Interpolation
Shekhawat, Hanumant
2012-01-01
The main objective in this thesis is to design optimal samplers, downsamplers and interpolators (holds) which are required in signal processing. The sampled-data system theory is used to fulfill this objective in a generic setup. Signal processing, which includes signal transmission, storage and
A novel computational approach to approximate fuzzy interpolation polynomials.
Jafarian, Ahmad; Jafari, Raheleh; Mohamed Al Qurashi, Maysaa; Baleanu, Dumitru
2016-01-01
This paper builds a fuzzy neural network structure that is sufficient to obtain a fuzzy interpolation polynomial of the form [Formula: see text], where [Formula: see text] is a crisp number (for [Formula: see text]), which interpolates the fuzzy data [Formula: see text]. A gradient descent algorithm is constructed to train the neural network in such a way that the unknown coefficients of the fuzzy polynomial are estimated by the network. The numerical experiments show that the present interpolation methodology is reliable and efficient.
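Stripped of the fuzzy arithmetic, the training idea reduces to estimating polynomial coefficients by gradient descent on a squared-error loss. A crisp-number sketch (the learning rate and epoch count are arbitrary illustrative choices):

```python
import numpy as np

def fit_poly_gd(x, y, degree, lr=0.01, epochs=20000):
    """Estimate polynomial coefficients by gradient descent on squared error."""
    X = np.vander(x, degree + 1, increasing=True)   # columns: 1, x, x^2, ...
    a = np.zeros(degree + 1)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ a - y) / len(x)       # gradient of mean squared error
        a -= lr * grad
    return a
```

With enough iterations this recovers the interpolating coefficients for well-conditioned small problems; the fuzzy version of the paper replaces the crisp targets with fuzzy data.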
Fuzzy Interpolation and Other Interpolation Methods Used in Robot Calibrations
Directory of Open Access Journals (Sweden)
Ying Bai
2012-01-01
Full Text Available A novel interpolation algorithm, fuzzy interpolation, is presented and compared with other popular interpolation methods widely implemented in industrial robot calibrations and manufacturing applications. Different interpolation algorithms have been developed, reported, and implemented in many industrial robot calibrations and manufacturing processes in recent years. Most of them search for optimal interpolation trajectories based on known values at given points around a workspace. However, building optimal interpolation results in the presence of random noise is rare, and this is one of the most popular topics in industrial testing and measurement applications. The fuzzy interpolation algorithm (FIA) reported in this paper provides a convenient and simple way to solve this problem and offers more accurate interpolation results based on given position or orientation errors that are randomly distributed in real time. This method can be implemented in many industrial applications, such as manipulator measurements and calibrations, industrial automation, and semiconductor manufacturing processes.
Samsonov, Vladislav
2017-01-01
This work presents a supervised learning based approach to the computer vision problem of frame interpolation. The presented technique could also be used in cartoon animations, since drawing each individual frame consumes a noticeable amount of time. Most existing solutions to this problem use unsupervised methods and focus only on real life videos with an already high frame rate. However, the experiments show that such methods do not work as well when the frame rate becomes low and objec...
Onozawa, Masakatsu; Nihei, Keiji; Ishikura, Satoshi; Minashi, Keiko; Yano, Tomonori; Muto, Manabu; Ohtsu, Atsushi; Ogino, Takashi
2009-08-01
There are some reports indicating that prophylactic three-field lymph node dissection for esophageal cancer can lead to improved survival, but the benefit of elective nodal irradiation (ENI) in definitive chemoradiotherapy (CRT) for thoracic esophageal cancer remains controversial. The purpose of the present study is to retrospectively evaluate the efficacy of ENI in definitive CRT for thoracic esophageal cancer. Patients with squamous cell carcinoma (SCC) of the thoracic esophagus newly diagnosed between February 1999 and April 2001 in our institution were recruited from our database. Definitive chemoradiotherapy consisted of two cycles of cisplatin/5FU repeated every 5 weeks, with concurrent radiation therapy of 60 Gy in 30 fractions. Up to 40 Gy radiation therapy was delivered to the cervical, periesophageal, mediastinal and perigastric lymph nodes as ENI. One hundred two patients were included in this analysis, and their characteristics were as follows: median age, 65 years; male/female, 85/17; T1/T2/T3/T4, 16/11/61/14; N0/N1, 48/54; M0/M1, 84/18. The median follow-up period for the surviving patients was 41 months. Sixty patients achieved complete response (CR). After achieving CR, only one (1.0%; 95% CI, 0-5.3%) patient experienced elective nodal failure without any other site of recurrence. In CRT for esophageal SCC, ENI is effective for preventing regional nodal failure. Further evaluation of whether ENI leads to an improved overall survival is needed.
Quantum realization of the bilinear interpolation method for NEQR.
Zhou, Ri-Gui; Hu, Wenwen; Fan, Ping; Ian, Hou
2017-05-31
In recent years, quantum image processing has been one of the most active fields in quantum computation and quantum information. Image scaling, as a kind of image geometric transformation, has been widely studied and applied in classical image processing; however, a quantum version does not yet exist. This paper is concerned with the feasibility of classical bilinear interpolation based on the novel enhanced quantum image representation (NEQR). Firstly, the feasibility of bilinear interpolation for NEQR is proven. Then the concrete quantum circuits of bilinear interpolation, including scaling up and scaling down for NEQR, are given by using the multiply Control-Not operation, the special adding one operation, the reverse parallel adder, parallel subtractor, multiplier and division operations. Finally, the complexity of the quantum network circuit is analyzed in terms of basic quantum gates. Simulation results show that an image scaled up using bilinear interpolation is clearer and less distorted than one using nearest-neighbor interpolation.
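For reference, the classical (non-quantum) bilinear interpolation that the quantum circuits emulate can be sketched as a plain NumPy routine for grayscale images; this is a generic textbook version, not the paper's circuit construction.

```python
import numpy as np

def bilinear_resize(img, new_h, new_w):
    """Classical bilinear interpolation for scaling a 2-D grayscale image."""
    h, w = img.shape
    out = np.empty((new_h, new_w))
    for i in range(new_h):
        for j in range(new_w):
            # map each output pixel back to input coordinates
            y = i * (h - 1) / (new_h - 1) if new_h > 1 else 0.0
            x = j * (w - 1) / (new_w - 1) if new_w > 1 else 0.0
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            # weighted average of the four surrounding input pixels
            out[i, j] = (img[y0, x0] * (1 - dy) * (1 - dx)
                         + img[y0, x1] * (1 - dy) * dx
                         + img[y1, x0] * dy * (1 - dx)
                         + img[y1, x1] * dy * dx)
    return out
```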
Modeling Spatiotemporal Precipitation: Effects of Density, Interpolation, and Land Use Distribution
National Research Council Canada - National Science Library
Shope, Christopher L; Maharjan, Ganga Ram
2015-01-01
.... Introduction Rain gauges provide a point estimation of precipitation, which, depending on the network size and density, require interpolation techniques to estimate the spatial distribution thro...
2011-01-12
... Gas Marketing LLC; Application for Blanket Authorization To Export Liquefied Natural Gas AGENCY... November 30, 2010, by Eni USA Gas Marketing LLC (Eni USA), requesting blanket authorization to export..., Louisiana, to any country with the capacity to import LNG via ocean-going carrier and with which trade is...
Directory of Open Access Journals (Sweden)
P. Dames
2012-01-01
Full Text Available Eny2, the mammalian ortholog of yeast Sus1 and Drosophila E(y)2, is a nuclear factor that participates in several steps of gene transcription and in mRNA export. We had previously found that Eny2 expression changes in mouse pancreatic islets during the metabolic adaptation to pregnancy. We therefore hypothesized that the protein contributes to the regulation of islet endocrine cell function and tested this hypothesis in rat INS-1E insulinoma cells. Overexpression of Eny2 had no effect, but siRNA-mediated knockdown of Eny2 resulted in markedly increased glucose- and exendin-4-induced insulin secretion from otherwise poorly glucose-responsive INS-1E cells. Insulin content, cellular viability, and the expression levels of several key components of glucose sensing remained unchanged; however, glucose-dependent cellular metabolism was higher after Eny2 knockdown. Suppression of Eny2 enhanced the intracellular incretin signal downstream of cAMP. The use of specific cAMP analogues and pathway inhibitors primarily implicated the PKA and to a lesser extent the EPAC pathway. In summary, we identified a potential link between the nuclear protein Eny2 and insulin secretion. Suppression of Eny2 resulted in increased glucose- and incretin-induced insulin release from a poorly glucose-responsive INS-1E subline. Whether these findings extend to other experimental conditions or to in vivo physiology needs to be determined in further studies.
A disposition of interpolation techniques
Knotters, M.; Heuvelink, G.B.M.
2010-01-01
A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated
Interpolative Boolean algebra based multicriteria routing algorithm
Directory of Open Access Journals (Sweden)
Jeremić Marina
2015-01-01
Full Text Available In order to improve the quality-of-service of distributed applications, we propose a multi-criteria algorithm based on interpolative Boolean algebra for routing in an overlay network. We use a mesh topology because it can be easily implemented, and it makes addressing of the cores quite simple during routing. In this paper, we consider four criteria: buffer usage, the distance between peers, bandwidth, and remaining battery power. The proposed routing algorithm determines the path which satisfies quality-of-service requirements using interpolative Boolean algebra; the decision at each node is made based on the ranking of available options considering multiple constraints. The simulation shows that the proposed approach provides better results than the standard shortest path routing algorithm.
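The per-node ranking step can be sketched in a drastically simplified form: each candidate next hop carries normalized criteria in [0, 1], and a conjunctive (min) aggregation stands in for the interpolative Boolean expression. The criterion names and scores below are illustrative only, not the paper's formulation.

```python
def rank_next_hops(options):
    """Rank candidate next hops by a conjunctive (min) aggregation of
    normalized criteria in [0, 1]; a higher aggregated score is better."""
    scored = [(min(crit.values()), name) for name, crit in options.items()]
    return [name for _, name in sorted(scored, reverse=True)]
```

A min-aggregation penalizes any hop that is weak on even one criterion, which mirrors the idea of satisfying all quality-of-service constraints at once.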
Flexibility of the exportins Cse1p and Xpot depicted by elastic network model.
Hu, Mingwen; Kim, Byung
2011-07-01
Nucleocytoplasmic transport in eukaryotic cells involves many interactions between macromolecules, and has been an active area for many researchers. However, the precise mechanism still evades us and more efforts are needed to better understand it. In this study, the authors investigated exportins (Cse1p and Xpot) by elastic network interpolation (ENI) and elastic network based normal mode analysis (EN-NMA). Results of the study on Cse1p were in good agreement with the results obtained by molecular dynamics simulation in another study but with the benefit of time-efficiency. First, a formation of ring closure obtained by ENI was observed. Second, HEAT 1 to 3 and HEAT 14 to 17 had the largest values of root mean square deviation (RMSD) which indicated the flexibility of Cse1p during the transition. In the case of Xpot, a possible pathway from nuclear state to cytoplasmic state was shown, and the predicted pathway was also quantitatively analyzed in terms of RMSD. The results suggested two flexible regions of Xpot that might be important to the transporting mechanism. Moreover, the dominant mode of Xpot in the nuclear state obtained by EN-NMA not only showed the tendency to match the predicted pathway to the cytoplasmic state of Xpot, but also displayed the flexible regions of Xpot. A time-efficient computational approach was presented in this paper and the results indicated that the flexibility of tested exportins might be required to perform the biological function of transporting cargos.
Abolition of Trial by Ordeal at Eni-Lake, Uzere, Delta State of Nigeria ...
African Journals Online (AJOL)
The imposition of British colonial rule in Isokoland affected the political and social values of the people. One such area drastically affected was Uzere, where trials at Eni-Lake had become popular even before the coming of the British. The British misunderstanding or lack of knowledge of the workings and psychological ...
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Janusz Konrad
2009-01-01
Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
Interpolation in Spaces of Functions
Directory of Open Access Journals (Sweden)
K. Mosaleheh
2006-03-01
Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite dimensional linear space X. Then we extend this to infinite dimensional linear spaces.
SOUND MORPHING BY FEATURE INTERPOLATION
Freitas Caetano, Marcelo; Rodet, Xavier
2011-01-01
International audience; The goal of sound morphing by feature interpolation is to obtain sounds whose values of features are intermediate between those of the source and target sounds. In order to do this, we should be able to resynthesize sounds that present a set of predefined feature values, a notoriously difficult problem. In this work, we present morphing techniques to obtain hybrid musical instrument sounds whose feature values correspond as close as possible to the ideal interpolated v...
Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian
2010-01-01
Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using, for example, linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linked … geological variability. The North Sea case study demonstrates that kriging in attribute space performs better than linear regression and cokriging.
Interpolating of climate data using R
Reinhardt, Katja
2017-04-01
Interpolation methods are used in many different geoscientific areas, such as soil physics, climatology and meteorology. Thereby, unknown values are calculated by applying statistical approaches to known values. So far, the majority of climatologists have been using computer languages such as FORTRAN or C++, but there is also an increasing number of climate scientists using R for data processing and visualization. Most of them, however, are still working with array- and vector-based data, which is often associated with complex R code structures. For the presented study, I have decided to convert the climate data into geodata and to perform the whole data processing using the raster package, gstat and similar packages, providing a much more comfortable way of data handling. A central goal of my approach is to create an easy-to-use, powerful and fast R script, implementing the entire geodata processing and visualization into a single, fully automated R based procedure, which avoids the necessity of using other software packages such as ArcGIS or QGIS. Thus, large amounts of data with recurrent process sequences can be processed. The aim of the presented study, which is located in western Central Asia, is to interpolate wind data based on the European reanalysis data Era-Interim, which are available as raster data with a resolution of 0.75° x 0.75°, to a finer grid. Therefore, various interpolation methods are used: inverse distance weighting, the geostatistical methods ordinary kriging and regression kriging, a generalized additive model, and the machine learning algorithms support vector machine and neural networks. Apart from the first two mentioned methods, the methods are used with influencing factors, e.g. geopotential and topography.
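For the regridding step alone (ignoring covariates such as geopotential), a separable linear regridding from a coarse raster to a finer grid can be sketched in a few lines; the coordinates and wind field below are synthetic illustrations, not Era-Interim values, and this stands in for the simplest of the methods compared.

```python
import numpy as np

def regrid_linear(lat, lon, field, fine_lat, fine_lon):
    """Separable bilinear regridding of a raster onto a finer grid."""
    # interpolate along longitude for every coarse latitude row
    tmp = np.vstack([np.interp(fine_lon, lon, row) for row in field])
    # then interpolate along latitude for every fine longitude column
    return np.vstack([np.interp(fine_lat, lat, tmp[:, j])
                      for j in range(tmp.shape[1])]).T
```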
Liu, Mina; Zhao, Kuaile; Chen, Yun; Jiang, Guo-Liang
2014-10-25
A retrospective study to compare the failure patterns and effects of elective nodal irradiation (ENI) versus involved field irradiation (IFI) for cervical and upper thoracic esophageal squamous cell carcinoma (SCC) patients. One hundred and sixty-nine patients with cervical and upper thoracic esophageal SCC were analyzed retrospectively; 99 patients (59%) underwent IFI and 70 patients (41%) received ENI. We defined "out-PTVifi in-PTVeni metastasis" as lymph node metastasis occurring in the cervical prophylactic field of PTVeni and thus outside PTVifi. Out-PTVifi in-PTVeni cervical node metastasis occurred in 8% of patients in the IFI group, all within 2 years after treatment. However, it occurred in 10% of patients in the ENI group, and these failures happened gradually from one year after treatment onwards. No difference was found in OS or in the incidences of Grade ≥ 3 treatment-related esophageal and lung toxicities between the two groups. ENI for cervical and upper thoracic esophageal SCC patients did not yield longer OS or better long-term control of cervical lymph nodes. Although ENI might delay cervical node progression in the elective field, it could not decrease the incidence of these failures.
Improving image registration by correspondence interpolation
DEFF Research Database (Denmark)
Ólafsdóttir, Hildur; Pedersen, Henrik; Hansen, Michael Sass
2011-01-01
This paper presents how using a correspondence-based interpolation scheme for 3D image registration improves the registration accuracy. The interpolator takes into account correspondences across slices, which is an advantage, particularly when the volume has thick slices, and where anatomies lie … quantitatively by registering downsampled brain data using two different interpolators and subsequently applying the deformation fields to the original data. The results show that the interpolator provides better gradient images and a sharper cardiac atlas. Moreover, it provides better deformation fields … on downsampled data, increasing the registration accuracy of the original data to 5.8% on average with respect to a standard interpolator.
Polygon interpolation for serial cross sections.
Shiao, Ya-Hui; Chuang, Keh-Shih; Chen, Tzong-Jer; Chen, Chun-Yuan
2007-09-01
In this paper, a new technique for contour interpolation between slices is presented. We assumed that contour interpolation is equivalent to the interpolation of a polygon that approximates the object shape. The location of each polygon vertex is characterized by a set of parameters. Polygon interpolation can be performed on these parameters. These interpolated parameters are then used to reconstruct the vertices of the new polygon. Finally, the contour is approximated from this polygon using a cubic spline interpolation. This new technique takes into account the shape, the translation, the size, and the orientation of the object's contours. A comparison with regular shape-based interpolation is made on several object contours. The preliminary results show that this new method yields a better contour and is computationally more efficient than shape-based interpolation. This technique can be applied to gray-level images as well. The interpolation result of an MR image does not show the intermediate-substance artifact commonly seen in typical linear gray-level interpolation.
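The parameter-based idea can be sketched by blending a few shape parameters between matched polygons. The parameters below (centroid for translation, mean radius for size, and a normalized residual shape) are a simplification chosen for illustration, not the paper's exact vertex parameterization.

```python
import numpy as np

def interp_polygon(pa, pb, t):
    """Interpolate two matched polygons between slices by blending centroid
    (translation), mean radius (size), and residual vertex shape.
    t = 0 returns pa; t = 1 returns pb."""
    pa, pb = np.asarray(pa, float), np.asarray(pb, float)
    ca, cb = pa.mean(axis=0), pb.mean(axis=0)
    ra = np.linalg.norm(pa - ca, axis=1).mean()
    rb = np.linalg.norm(pb - cb, axis=1).mean()
    c = (1 - t) * ca + t * cb                    # interpolated translation
    r = (1 - t) * ra + t * rb                    # interpolated size
    shape = (1 - t) * (pa - ca) / ra + t * (pb - cb) / rb
    return c + r * shape
```

A final contour would then be smoothed through the interpolated vertices, e.g. with a cubic spline, as the abstract describes.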
2011-12-15
... AGENCY Notice of Issuance of Final Air Permits for Eni US Operating Co., Inc. and Port Dolphin Energy..., the EPA issued a final Prevention of Significant Deterioration (PSD) air permit for Port Dolphin Energy, LLC (Port Dolphin), which was issued and became effective on December 1, 2011. The Eni permit...
Interpolation of rational matrix functions
Ball, Joseph A; Rodman, Leiba
1990-01-01
This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...
Analysis of ECT Synchronization Performance Based on Different Interpolation Methods
Directory of Open Access Journals (Sweden)
Yang Zhixin
2014-01-01
Full Text Available There are two synchronization methods for electronic transformers in the IEC60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of electronic transformers can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of electronic transformers are computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
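One of the compared methods, piecewise quadratic interpolation, can be sketched as a three-point Lagrange resampler onto a common time grid; the sampling times and signal in the test are invented for illustration, not IEC60044-8 data.

```python
import numpy as np

def lagrange_quadratic(t_s, y_s, t_new):
    """Piecewise quadratic (3-point Lagrange) interpolation for resampling
    a sampled channel onto new time instants."""
    t_new = np.atleast_1d(t_new)
    out = np.empty(len(t_new))
    for k, t in enumerate(t_new):
        # pick the 3-sample window centered near t
        i = int(np.clip(np.searchsorted(t_s, t) - 1, 1, len(t_s) - 2))
        t0, t1, t2 = t_s[i-1], t_s[i], t_s[i+1]
        y0, y1, y2 = y_s[i-1], y_s[i], y_s[i+1]
        out[k] = (y0 * (t-t1)*(t-t2) / ((t0-t1)*(t0-t2))
                  + y1 * (t-t0)*(t-t2) / ((t1-t0)*(t1-t2))
                  + y2 * (t-t0)*(t-t1) / ((t2-t0)*(t2-t1)))
    return out
```

Piecewise linear interpolation is simply `np.interp`; quadratic interpolation reproduces any quadratic signal exactly, which gives a quick correctness check.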
Tadić, Jovan M.; Ilić, Velibor; Biraud, Sebastien
2015-06-01
Selecting which interpolation method to use significantly affects the results of atmospheric studies. The goal of this study is to examine the performance of several interpolation techniques under typical atmospheric conditions. Several types of kriging and artificial neural networks used as spatial interpolators are here compared and evaluated against ordinary kriging, using real airborne CO2 mixing-ratio data and synthetic data. The real data were measured (on December 26, 2012) between Billings and Lamont, near Oklahoma City, Oklahoma, within and above the planetary boundary layer (PBL). Predictions were made all along the flight trajectory within a total volume of 5000 km3 of atmospheric air (27 × 33 × 5.6 km). We evaluated (a) universal kriging, (b) ensemble neural networks, (c) universal kriging with ensemble neural network outputs used as covariates, and (d) ensemble neural networks with ordinary kriging of the residuals as interpolation tools. We found that in certain cases, when the weaknesses of ordinary kriging interpolation schemes (based on an omnidirectional isotropic variogram presumption) became apparent, more sophisticated interpolation methods were in order. In this study, preservation of the potentially nonlinear relationship between the trend and coordinates (by using neural kriging output as a covariate in a universal kriging scheme) was attempted, with varying degrees of success (it was the best performer in 4 out of 8 cases). The study confirmed the necessity of selecting an interpolation approach that includes a combination of expert understanding and appropriate interpolation tools. The error analysis showed that the uncertainty representations generated by the kriging methods are superior to those of neural networks, but that the actual error varies from case to case.
Temporal interpolation in Meteosat images
DEFF Research Database (Denmark)
Larsen, Rasmus; Hansen, Johan Dore; Ersbøll, Bjarne Kjær
The geostationary weather satellite Meteosat supplies us with a visual and an infrared image of the earth every 30 minutes. However, due to transmission errors some images may be missing. European TV weather reports are often supported by such infrared image sequences. The cloud movements in such animated films are perceived as being jerky due to the low temporal sampling rate in general and missing images in particular. In order to perform a satisfactory temporal interpolation we estimate and use the optical flow corresponding to every image in the sequence. The estimation of the optical flow is based on image sequences where the clouds are segmented from the land/water that might also be visible in the images. Because the pixel values measured correspond directly to temperature and because clouds (normally) are colder than land/water we use an estimated land temperature map to perform …
Automatic Image Interpolation Using Homography
Directory of Open Access Journals (Sweden)
Tang Cheng-Yuan
2010-01-01
Full Text Available While taking photographs, we often face the problem that unwanted foreground objects (e.g., vehicles, signs, and pedestrians) occlude the main subject(s). We propose to apply image interpolation (also known as inpainting) techniques to remove unwanted objects in photographs and to automatically patch the vacancy after the unwanted objects are removed. When only a single image is given, if the information loss after removing the unwanted objects is too great, the patching results are usually unsatisfactory. The proposed inpainting techniques employ homographic constraints in geometry to incorporate multiple images taken from different viewpoints. Our experimental results show that the proposed techniques can effectively reduce the search for potential patches across multiple input images and select the best patches for the missing regions.
Differential Interpolation Effects in Free Recall
Petrusic, William M.; Jamieson, Donald G.
1978-01-01
Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…
Kriging for Interpolation in Random Simulation
van Beers, W.C.M.; Kleijnen, J.P.C.
2001-01-01
Whenever simulation requires much computer time, interpolation is needed. Several interpolation techniques are in use (for example, linear regression), but this paper focuses on Kriging. This technique was originally developed in geostatistics by D. G. Krige, and has recently been widely applied...
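To make the technique concrete, here is a minimal ordinary-kriging sketch in numpy. It is illustrative only, not from the paper: the linear variogram and the toy sample values are assumptions, and the Lagrange-multiplier row enforces that the weights sum to one.

```python
import numpy as np

def ordinary_kriging(x, y, x_star, gamma=lambda h: h):
    """Ordinary kriging prediction at x_star from samples (x, y).
    gamma: variogram as a function of distance (linear by default)."""
    n = len(x)
    # Kriging system bordered with a Lagrange-multiplier row/column
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(np.abs(x[:, None] - x[None, :]))
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.abs(x - x_star))
    sol = np.linalg.solve(A, b)
    w = sol[:n]                       # kriging weights (sum to 1)
    return w @ y, w

x = np.array([0.0, 1.0, 3.0, 4.0])   # toy sample locations
y = np.array([1.0, 2.0, 0.5, 1.5])   # toy observed values
pred, w = ordinary_kriging(x, y, 2.0)
```

Because kriging is an exact interpolator, querying at a sample location returns that sample's value.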
Structure-preserving tangential interpolation for model reduction of port-Hamiltonian Systems
Gugercin, Serkan; Polyuga, Rostyslav V.; Beattie, Christopher; van der Schaft, Arjan
2011-01-01
Port-Hamiltonian systems result from port-based network modeling of physical systems and are an important example of passive state-space systems. In this paper, we develop the framework for model reduction of large-scale multi-input/multi-output port-Hamiltonian systems via tangential rational interpolation. The resulting reduced-order model not only is a rational tangential interpolant but also retains the port-Hamiltonian structure; hence is passive. This reduction methodology is described ...
A Lobatto interpolation grid over the triangle
Blyth, M. G.; Pozrikidis, C.
2006-02-01
A sequence of increasingly refined interpolation grids over the triangle is proposed, with the goal of achieving uniform convergence and ensuring high interpolation accuracy. The number of interpolation nodes, N, corresponds to a complete mth-order polynomial expansion with respect to the triangle barycentric coordinates, which arises by the horizontal truncation of the Pascal triangle. The proposed grid is generated by deploying Lobatto interpolation nodes along the three edges of the triangle, and then computing interior nodes by averaged intersections to achieve three-fold rotational symmetry. Numerical computations show that the Lebesgue constant and interpolation accuracy of the proposed grid compare favorably with those of the best-known grids consisting of the Fekete points. Integration weights corresponding to the set of Lobatto triangle base points are tabulated.
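The edge nodes deployed along the triangle sides are one-dimensional Lobatto nodes, which can be computed from Legendre polynomials. A small numpy sketch (one-dimensional nodes on [-1, 1] only; the triangle construction itself is not reproduced here):

```python
import numpy as np
from numpy.polynomial import legendre as L

def lobatto_nodes(n):
    """n >= 2 Lobatto nodes on [-1, 1]: the two endpoints plus the
    roots of P'_{n-1}, the derivative of the Legendre polynomial."""
    c = np.zeros(n)
    c[-1] = 1.0                      # P_{n-1} in the Legendre basis
    interior = L.legroots(L.legder(c))  # roots of P'_{n-1}
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

nodes = lobatto_nodes(5)  # a 5-point Lobatto grid
```

For n = 5 the interior nodes are 0 and ±sqrt(3/7), the known Gauss-Lobatto points.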
An Improved Rotary Interpolation Based on FPGA
Directory of Open Access Journals (Sweden)
Mingyu Gao
2014-08-01
Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: first, fewer arithmetic terms make the interpolation operation cheaper; second, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have demonstrated the high accuracy and efficiency of the algorithm, showing that it is well suited for real-time applications.
Kriging interpolating cosmic velocity field
Yu, Yu; Zhang, Jun; Jing, Yipeng; Zhang, Pengjie
2015-10-01
Volume-weighted statistics of large-scale peculiar velocity are preferred in peculiar velocity cosmology, since they are free of the uncertainties of galaxy density bias entangled in observed number-density-weighted statistics. However, measuring volume-weighted velocity statistics from galaxy (halo/simulation particle) velocity data is challenging. Therefore, the exploration of velocity assignment methods with well-controlled sampling artifacts is of great importance. For the first time, we apply Kriging interpolation to obtain the volume-weighted velocity field. Kriging is a minimum-variance estimator: it predicts the most likely velocity at each location based on the velocities at other locations. We test the performance of Kriging, quantified by the E-mode velocity power spectrum, using simulations. Dependences on the variogram prior used in Kriging, the number n_k of nearby particles used for interpolation, and the density n_P of the observed sample are investigated. First, we find that Kriging induces 1% and 3% systematics at k ≈ 0.1 h Mpc^-1 when n_P ≈ 6×10^-2 (h^-1 Mpc)^-3 and n_P ≈ 6×10^-3 (h^-1 Mpc)^-3, respectively. The deviation increases for decreasing n_P and increasing k. When n_P ≲ 6×10^-4 (h^-1 Mpc)^-3, a smoothing effect dominates small scales, causing significant underestimation of the velocity power spectrum. Second, increasing n_k helps to recover small-scale power; however, for n_P ≲ 6×10^-4 (h^-1 Mpc)^-3 the recovery is limited. Finally, Kriging is more sensitive to the variogram prior for a lower sample density. The most straightforward application of Kriging to the cosmic velocity field does not show obvious advantages over the nearest-particle method [Y. Zheng, P. Zhang, Y. Jing, W. Lin, and J. Pan, Phys. Rev. D 88, 103510 (2013)] and cannot be directly applied to cosmology so far. However, whether potential improvements may be achieved by more delicate versions of Kriging is worth further investigation.
Elastic Network Model of a Nuclear Transport Complex
Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.
2010-05-01
RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced; the structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org). In this model, a protein structure is simply modeled as an elastic network in which a set of coarse-grained point masses are connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibration and a feasible pathway between the locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus as well as prevents any new cargo from attaching to the Kap95p:RanGTP complex.
Interferometric interpolation of sparse marine data
Hanafy, Sherif M.
2013-10-11
We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.
An interpolation method for stream habitat assessments
Sheehan, Kenneth R.; Welsh, Stuart A.
2015-01-01
Interpolation of stream habitat can be very useful for habitat assessment. Using a small number of habitat samples to predict the habitat of larger areas can reduce time and labor costs as long as it provides accurate estimates of habitat. The spatial correlation of stream habitat variables such as substrate and depth improves the accuracy of interpolated data. Several geographical information system interpolation methods (natural neighbor, inverse distance weighted, ordinary kriging, spline, and universal kriging) were used to predict substrate and depth within a 210.7-m2 section of a second-order stream based on 2.5% and 5.0% sampling of the total area. Depth and substrate were recorded for the entire study site and compared with the interpolated values to determine the accuracy of the predictions. In all instances, the 5% interpolations were more accurate for both depth and substrate than the 2.5% interpolations, achieving accuracies up to 95% and 92%, respectively. Interpolations of depth based on 2.5% sampling attained accuracies of 49–92%, whereas those based on 5% sampling attained accuracies of 57–95%. Natural neighbor interpolation was more accurate than the inverse distance weighted, ordinary kriging, spline, and universal kriging approaches. Our findings demonstrate the effective use of minimal amounts of small-scale data for the interpolation of habitat over large areas of a stream channel. Use of this method will provide time and cost savings in the assessment of large sections of rivers as well as functional maps to aid the habitat-based management of aquatic species.
The role of boric acid in the synthesis of Eni Carbon Silicates.
Zanardi, Stefano; Bellussi, Giuseppe; Parker, Wallace O'Neil; Montanari, Erica; Bellettato, Michela; Cruciani, Giuseppe; Carati, Angela; Guidetti, Stefania; Rizzo, Caterina; Millini, Roberto
2014-07-21
The influence of H3BO3 on the crystallization of hybrid organic-inorganic aluminosilicates denoted as Eni Carbon Silicates (ECS's) was investigated. Syntheses were carried out at 100 °C under different experimental conditions, using bridged silsesquioxanes of general formula (EtO)3Si-R-Si(OEt)3 (R = -C6H4- (BTEB), -C10H6- (BTEN) and -C6H4-C6H4- (BTEBP)), in the presence of equimolar concentrations of NaAlO2 and H3BO3. The study, involving the synthesis of three different but structurally related phases (ECS-14 from BTEB, ECS-13 here described for the first time from BTEN, and ECS-5 from BTEBP), confirmed a catalytic role for H3BO3 which in general increased the crystallization rate and improved the product quality in terms of amount of crystallized phase (crystallinity), size of the crystallites and phase purity, while it was weakly incorporated in trace amounts in the framework of ECS's.
Yamagata, Akira; Kato, Junichi; Hirota, Ryuichi; Kuroda, Akio; Ikeda, Tsukasa; Takiguchi, Noboru; Ohtake, Hisao
1999-01-01
Two plasmids were discovered in the ammonia-oxidizing bacterium Nitrosomonas sp. strain ENI-11, which was isolated from activated sludge. The plasmids, designated pAYS and pAYL, were relatively small, being approximately 1.9 kb long. They were cryptic plasmids, having no detectable plasmid-linked antibiotic resistance or heavy metal resistance markers. The complete nucleotide sequences of pAYS and pAYL were determined, and their physical maps were constructed. There existed two major open reading frames, ORF1 in pAYS and ORF2 in pAYL, each of which was more than 500 bp long. The predicted product of ORF2 was 28% identical to part of the replication protein of a Bacillus plasmid, pBAA1. However, no significant similarity to any known protein sequences was detected with the predicted product of ORF1. pAYS and pAYL had a highly homologous region, designated HHR, of 262 bp. The overall identity was 98% between the two nucleotide sequences. Interestingly, HHR-homologous sequences were also detected in the genomes of ENI-11 and the plasmidless strain Nitrosomonas europaea IFO14298. Deletion analysis of pAYS and pAYL indicated that HHR, together with either ORF1 or ORF2, was essential for plasmid maintenance in ENI-11. To our knowledge, pAYS and pAYL are the first plasmids found in the ammonia-oxidizing autotrophic bacteria. PMID:10348848
Hirota, Ryuichi; Kuroda, Akio; Ikeda, Tsukasa; Takiguchi, Noboru; Ohtake, Hisao; Kato, Junichi
2006-08-01
The nitrifying bacterium Nitrosomonas sp. strain ENI-11 has three copies of the gene encoding hydroxylamine oxidoreductase (hao(1), hao(2), and hao(3)) on its genome. Broad-host-range reporter plasmids containing transcriptional fusions between the hao copies and lacZ were constructed to analyze the expression of each hydroxylamine oxidoreductase gene (hao) copy individually and quantitatively. beta-Galactosidase assays of ENI-11 harboring the reporter plasmids revealed that all hao copies were transcribed in the wild-type strain. Promoter analysis revealed that transcription of hao(3) was the highest among the hao copies; expression levels of hao(1) and hao(2) were 40% and 62% of that of hao(3), respectively. Transcription of hao(1) was negatively regulated, whereas a portion of hao(3) transcription was read-through transcription from the rpsT promoter. When energy-depleted cells were incubated in growth medium, only hao(3) expression increased. This result suggests that hao(3) is responsible for recovery from energy-depleted conditions in Nitrosomonas sp. strain ENI-11.
Interpolation of diffusion weighted imaging datasets
DEFF Research Database (Denmark)
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W
2014-01-01
anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal...... to the voxel size showed that conventional higher-order interpolation methods improved the geometrical representation of white-matter tracts with reduced partial-volume-effect (PVE), except at tract boundaries. Simulations and interpolation of ex-vivo monkey brain DWI datasets revealed that conventional...... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...
NOAA Daily Optimum Interpolation Sea Surface Temperature
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...
Extended Lagrange interpolation in L1 spaces
Occorsio, Donatella; Russo, Maria Grazia
2016-10-01
Let w(x) = e^{-x^β} x^α, w̄(x) = x·w(x), and denote by {p_m(w)}_m, {p_n(w̄)}_n the corresponding sequences of orthonormal polynomials. The zeros of the polynomial Q_{2m+1} = p_{m+1}(w)·p_m(w̄) are simple and sufficiently well separated. Therefore it is possible to construct an interpolation process essentially based on the zeros of Q_{2m+1}, which is called "extended Lagrange interpolation". Here we study the convergence of this interpolation process in suitable weighted L^1 spaces. This study completes the results given by the authors in previous papers in weighted L^p_u((0,+∞)) for 1 ≤ p ≤ ∞. Moreover, an application of the proposed interpolation process to the construction of an efficient product quadrature scheme for weakly singular integrals is given.
Quantum Communication and Quantum Multivariate Polynomial Interpolation
Diep, Do Ngoc; Giang, Do Hoang
2017-09-01
The paper is devoted to the problem of multivariate polynomial interpolation and its application to quantum secret sharing. We show that, using the quantum Fourier transform, one can produce a protocol for quantum secret sharing distribution.
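The link between polynomial interpolation and secret sharing can be illustrated with the classical (non-quantum) Shamir scheme, for which the abstract's quantum protocol is the multivariate analogue. This is a sketch of the classical idea only; the field modulus and secret below are illustrative assumptions.

```python
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares, any k of which reconstruct it:
    evaluate a random degree-(k-1) polynomial with constant term = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
```

Any three of the five shares interpolate the hidden constant term; fewer reveal nothing about it.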
Record of the month: Interpol "Antics". Records from the Lasering store
2005-01-01
On the records: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"
Interpolation algorithm for asynchronous ADC-data
Bramburger, Stefan; Zinke, Benny; Killat, Dirk
2017-09-01
This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. Additional preprocessing of data with constant and linear sections, and a weighted overlap of signals transformed step by step into the spectral domain, improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.
Revisiting Veerman’s interpolation method
DEFF Research Database (Denmark)
Christiansen, Peter; Bay, Niels Oluf
2016-01-01
This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison...
Calculation of electromagnetic parameter based on interpolation algorithm
Energy Technology Data Exchange (ETDEWEB)
Zhang, Wenqiang, E-mail: zwqcau@gmail.com [College of Engineering, China Agricultural University, Beijing 100083 (China); Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China); Yuan, Liming; Zhang, Deyuan [Bionic and Micro/Nano/Bio Manufacturing Technology Research Center, Beihang University, Beijing 100191 (China)
2015-11-01
Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing material, this paper studied two different interpolation methods, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixtures in a paraffin base. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and that the reflectance calculated with the interpolated electromagnetic parameters is broadly consistent with that obtained through experiment. - Highlights: • We use interpolation algorithms to calculate EM-parameters from limited samples. • The interpolation method predicts EM-parameters well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment.
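The two schemes the paper compares can be sketched in plain Python. This is illustrative only: the data are toy values, and the Hermite variant shown is the cubic two-point form (matching values and derivatives at segment ends), a simplification of general Hermite interpolation.

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)   # Lagrange basis polynomial
        total += yi * li
    return total

def hermite_cubic(x0, x1, y0, y1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] matching the endpoint
    values (y0, y1) and endpoint derivatives (d0, d1)."""
    t = (x - x0) / (x1 - x0)
    h = x1 - x0
    return ((2*t**3 - 3*t**2 + 1) * y0 + (t**3 - 2*t**2 + t) * h * d0
            + (-2*t**3 + 3*t**2) * y1 + (t**3 - t**2) * h * d1)

# Degree-2 Lagrange fit through (0,0), (1,1), (2,4) reproduces x**2
val_l = lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)
# Cubic Hermite segment matching f(x) = x**3 data on [0, 1]
val_h = hermite_cubic(0.0, 1.0, 0.0, 1.0, 0.0, 3.0, 0.5)
```

Because Hermite interpolation also matches derivatives, it tends to track measured curves more faithfully between samples, consistent with the paper's finding.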
High-resolution studies of rainfall on Norfolk Island. Part II: Interpolation of rainfall data
Dirks, K. N.; Hay, J. E.; Stow, C. D.; Harris, D.
1998-07-01
Four spatial interpolation methods are compared using rainfall data from a network of thirteen rain gauges on Norfolk Island (area 35 km²). The purpose is to obtain spatially continuous rainfall estimates across the island, from point measurements and for different integration times, by the most effective means. The more computationally demanding method of kriging provided no significant improvement over any of the much simpler inverse-distance, Thiessen, or areal-mean methods. In order to assimilate some of the characteristics of spatially varying rainfall, and based on the comparisons performed, the inverse-distance method is recommended for interpolations using spatially dense networks.
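The recommended inverse-distance method can be sketched in a few lines of numpy. The gauge coordinates, rainfall values, and power parameter are illustrative assumptions, not the Norfolk Island data.

```python
import numpy as np

def idw(points, values, query, power=2.0):
    """Inverse-distance-weighted estimate at `query` from gauge
    locations `points` (n x 2) and observed `values` (n,)."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d == 0):                 # exactly at a gauge: return its value
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([10.0, 12.0, 14.0, 16.0])
est = idw(gauges, rain, np.array([0.5, 0.5]))
```

At the centre of this symmetric toy network all weights are equal, so the estimate is simply the mean of the four gauges.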
Edge-detect interpolation for direct digital periapical images
Energy Technology Data Exchange (ETDEWEB)
Song, Nam Kyu; Koh, Kwang Joon [Dept. of Oral and Maxillofacial Radiology, College of Dentistry, Chonbuk National University, Chonju (Korea, Republic of)
1998-02-15
The purpose of this study was to aid the use of direct digital periapical images through edge-detect interpolation. Twenty digital periapical images were processed with pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The obtained results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect at edges. 3. Edge-sensitive interpolation overcame the smoothing effect at edges and produced better images.
Fernandes, Annemarie T; Shen, Jason; Finlay, Jarod; Mitra, Nandita; Evans, Tracey; Stevenson, James; Langer, Corey; Lin, Lilie; Hahn, Stephen; Glatstein, Eli; Rengan, Ramesh
2010-05-01
Elective nodal irradiation (ENI) and involved field radiotherapy (IFRT) are definitive radiotherapeutic approaches used to treat patients with locally advanced non-small cell lung cancer (NSCLC). ENI delivers prophylactic radiation to clinically uninvolved lymph nodes, while IFRT only targets identifiable gross nodal disease. Because clinically uninvolved nodal stations may harbor microscopic disease, IFRT raises concerns for increased nodal failures. This retrospective cohort analysis evaluates failure rates and treatment-related toxicities in patients treated at a single institution with ENI and IFRT. We assessed all patients with stage III locally advanced or stage IV oligometastatic NSCLC treated with definitive radiotherapy from 2003 to 2008. Each physician consistently treated with either ENI or IFRT, based on their treatment philosophy. Of the 108 consecutive patients assessed (60 ENI vs. 48 IFRT), 10 patients had stage IV disease and 95 patients received chemotherapy. The median follow-up time for survivors was 18.9 months. On multivariable logistic regression analysis, patients treated with IFRT demonstrated a significantly lower risk of high grade esophagitis (Odds ratio: 0.31, p = 0.036). The differences in 2-year local control (39.2% vs. 59.6%), elective nodal control (84.3% vs. 84.3%), distant control (47.7% vs. 52.7%) and overall survival (40.1% vs. 43.7%) rates were not statistically significant between ENI vs. IFRT. Nodal failure rates in clinically uninvolved nodal stations were not increased with IFRT when compared to ENI. IFRT also resulted in significantly decreased esophageal toxicity, suggesting that IFRT may allow for integration of concurrent systemic chemotherapy in a greater proportion of patients. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
Quantum interpolation for high-resolution sensing.
Ajoy, Ashok; Liu, Yi-Xiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Bissbort, Ulf; Cappellaro, Paola
2017-02-28
Recent advances in engineering and control of nanoscale quantum sensors have opened new paradigms in precision metrology. Unfortunately, hardware restrictions often limit the sensor performance. In nanoscale magnetic resonance probes, for instance, finite sampling times greatly limit the achievable sensitivity and spectral resolution. Here we introduce a technique for coherent quantum interpolation that can overcome these problems. Using a quantum sensor associated with the nitrogen vacancy center in diamond, we experimentally demonstrate that quantum interpolation can achieve spectroscopy of classical magnetic fields and individual quantum spins with orders of magnitude finer frequency resolution than conventionally possible. Not only is quantum interpolation an enabling technique to extract structural and chemical information from single biomolecules, but it can be directly applied to other quantum systems for superresolution quantum spectroscopy.
Quadratic Interpolation and Linear Lifting Design
Directory of Open Access Journals (Sweden)
Joel Solé
2007-03-01
Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.
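The predict/update structure that the abstract connects to interpolation can be sketched with a minimal numpy lifting step. This is a generic linear-prediction (5/3-style) pair with periodic boundary handling, illustrative only and not the paper's optimized lifting steps.

```python
import numpy as np

def lifting_forward(x):
    """One level of a linear lifting scheme on an even-length signal:
    predict odd samples from even neighbours, then update the evens."""
    even, odd = x[::2].astype(float).copy(), x[1::2].astype(float).copy()
    # Predict: detail = odd sample minus its linear interpolation
    # (np.roll gives periodic boundary handling)
    d = odd - 0.5 * (even + np.roll(even, -1))
    # Update: smooth the evens with the details to preserve the running mean
    s = even + 0.25 * (d + np.roll(d, 1))
    return s, d

def lifting_inverse(s, d):
    """Exact inverse: undo the update, then the prediction."""
    even = s - 0.25 * (d + np.roll(d, 1))
    odd = d + 0.5 * (even + np.roll(even, -1))
    x = np.empty(2 * len(s))
    x[::2], x[1::2] = even, odd
    return x
```

Because each lifting step is individually invertible, perfect reconstruction holds by construction, whatever prediction is chosen; optimizing that prediction is exactly the design freedom the paper exploits.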
Interpolation of missing data in image sequences.
Kokaram, A C; Morris, R D; Fitzgerald, W J; Rayner, P W
1995-01-01
This paper presents a number of model based interpolation schemes tailored to the problem of interpolating missing regions in image sequences. These missing regions may be of arbitrary size and of random, but known, location. This problem occurs regularly with archived film material. The film is abraded or obscured in patches, giving rise to bright and dark flashes, known as "dirt and sparkle" in the motion picture industry. Both 3-D autoregressive models and 3-D Markov random fields are considered in the formulation of the different reconstruction processes. The models act along motion directions estimated using a multiresolution block matching scheme. It is possible to address this sort of impulsive noise suppression problem with median filters, and comparisons with earlier work using multilevel median filters are performed. These comparisons demonstrate the higher reconstruction fidelity of the new interpolators.
Multiscale empirical interpolation for solving nonlinear PDEs
Calo, Victor M.
2014-12-01
In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's methods and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
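The core empirical-interpolation idea, sampling a nonlinear function at a few greedily chosen points and recovering its basis coefficients from those samples, can be sketched with a DEIM-style selection in numpy. The snapshot family and basis size here are illustrative assumptions, not the paper's multiscale setting.

```python
import numpy as np

def deim_points(U):
    """Greedy DEIM selection of interpolation indices for basis U (n x m)."""
    n, m = U.shape
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for k in range(1, m):
        Uk = U[:, :k]
        c = np.linalg.solve(Uk[idx], U[idx, k])
        r = U[:, k] - Uk @ c            # residual after interpolating column k
        idx.append(int(np.argmax(np.abs(r))))  # next point: largest residual
    return np.array(idx)

# Snapshots of a parameterised nonlinearity, reduced to a 3-vector basis
x = np.linspace(0.0, 1.0, 50)
S = np.array([np.exp(-mu * x) for mu in (1, 2, 3, 4, 5)]).T
U = np.linalg.svd(S, full_matrices=False)[0][:, :3]
idx = deim_points(U)

# Any function in span(U) is recovered exactly from its values at idx
f = U @ np.array([1.0, 2.0, 3.0])
f_hat = U @ np.linalg.solve(U[idx], f[idx])
```

Evaluating the nonlinearity at only len(idx) points, instead of all fine-grid points, is what makes the online cost proportional to the reduced problem size.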
Bugoi, R.; Oanţă-Marghitu, R.; Calligaro, T.
2016-03-01
This paper reports the archaeometric investigations of 418 loose garnets from Pietroasa and Cluj-Someşeni treasures and Apahida II and III princely grave inventories (5th century AD). The chemical composition of the gems was determined by external beam micro-PIXE technique at the AGLAE accelerator of C2RMF, Paris, France. Complementary observations made by Optical Microscopy revealed details on the gemstones cutting and polishing and permitted to identify certain mineral inclusions. The compositional results evidenced several types of garnets from the pyralspite series, suggesting distinct provenances for these Early Medieval gems.
Interpolation for a subclass of H∞
Indian Academy of Sciences (India)
We introduce and characterize two types of interpolating sequences in the unit disc D of the complex plane for the class of all functions being the product of two analytic functions in D , one bounded and another regular up to the boundary of D , concretely in the Lipschitz class, and at least one of them vanishing at some ...
Interpolation and Iteration for Nonlinear Filters
Energy Technology Data Exchange (ETDEWEB)
Chorin, Alexandre J. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Tu, Xuemin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)
2009-10-16
We present a general form of the iteration and interpolation process used in implicit particle filters. Implicit filters are based on a pseudo-Gaussian representation of posterior densities, and are designed to focus the particle paths so as to reduce the number of particles needed in nonlinear data assimilation. Examples are given.
Minimal rational interpolation and Prony's method
Antoulas, A. C.; Willems, J. C.
1990-01-01
A new method is proposed for dealing with the rational interpolation problem. It is based on the reachability of an appropriately defined pair of matrices. This method permits a complete clarification of several issues raised, but not answered, by the so-called Prony method of fitting a linear model
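For reference, the classical Prony fitting that the abstract contrasts with can be sketched in numpy: a linear-prediction step recovers the model poles, and a Vandermonde least-squares step recovers the amplitudes. The two-mode toy signal is an illustrative assumption.

```python
import numpy as np

def prony(y, p):
    """Prony's method: fit y[k] ≈ sum_i c_i * z_i**k using p modes."""
    N = len(y)
    # Linear prediction: y[k] + a_1 y[k-1] + ... + a_p y[k-p] = 0
    A = np.column_stack([y[p - j - 1:N - j - 1] for j in range(p)])
    a, *_ = np.linalg.lstsq(A, -y[p:], rcond=None)
    z = np.roots(np.concatenate(([1.0], a)))      # poles of the linear model
    V = np.vander(z, N, increasing=True).T        # Vandermonde in the poles
    c, *_ = np.linalg.lstsq(V, y.astype(complex), rcond=None)
    return z, c

k = np.arange(20)
y = 2.0 * 0.9**k + 1.0 * 0.5**k   # toy signal with poles 0.9 and 0.5
z, c = prony(y, 2)
```

With exact two-mode data the recovered poles are exactly 0.9 and 0.5; with noisy data the least-squares steps degrade, which is one of the issues the rational-interpolation viewpoint clarifies.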
Interpolation of intermolecular potentials using Gaussian processes
Uteva, Elena; Graham, Richard S.; Wilkinson, Richard D.; Wheatley, Richard J.
2017-10-01
A procedure is proposed to produce intermolecular potential energy surfaces from limited data. The procedure involves generation of geometrical configurations using a Latin hypercube design, with a maximin criterion, based on inverse internuclear distances. Gaussian processes are used to interpolate the data, using over-specified inverse molecular distances as covariates, greatly improving the interpolation. Symmetric covariance functions are specified so that the interpolation surface obeys all relevant symmetries, reducing prediction errors. The interpolation scheme can be applied to many important molecular interactions with trivial modifications. Results are presented for three systems involving CO2, a system with a deep energy minimum (HF-HF), and a system with 48 symmetries (CH4-N2). In each case, the procedure accurately predicts an independent test set. Training this method with high-precision ab initio evaluations of the CO2-CO interaction enables a parameter-free, first-principles prediction of the CO2-CO cross virial coefficient that agrees very well with experiments.
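The core of the procedure, noise-free Gaussian-process regression over inverse-distance covariates, can be sketched in a few lines. The kernel, the length scale, and the one-dimensional Lennard-Jones-like toy energy below are illustrative assumptions, not the paper's symmetric covariance functions or ab initio data:

```python
import numpy as np

def rbf_kernel(X1, X2, length=1.0, sigma=1.0):
    # Squared-exponential covariance between covariate vectors.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

def gp_interpolate(X, y, Xs, length=1.0, sigma=1.0, jitter=1e-12):
    # Noise-free GP regression: the posterior mean interpolates the data.
    K = rbf_kernel(X, X, length, sigma) + jitter * np.eye(len(X))
    Ks = rbf_kernel(Xs, X, length, sigma)
    return Ks @ np.linalg.solve(K, y)

# Toy "potential" as a function of a single inverse-distance covariate u:
# a Lennard-Jones-like curve with its minimum at u = 1.
u = np.linspace(0.2, 1.6, 8)[:, None]
energy = u[:, 0] ** 12 - 2.0 * u[:, 0] ** 6
u_new = np.linspace(0.25, 1.55, 50)[:, None]
pred = gp_interpolate(u, energy, u_new, length=0.15)
```

In the paper's setting the covariates are vectors of all inverse internuclear distances and the kernel is symmetrized over equivalent atom permutations; the sketch keeps only the interpolation mechanics.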
Interpolating atmospheric water vapor delay by incorporating terrain elevation information
Xu, W. B.; Li, Z. W.; Ding, X. L.; Zhu, J. J.
2011-09-01
In radio signal-based observing systems, such as Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR), the water vapor in the atmosphere will cause delays during the signal transmission. Such delays vary significantly with terrain elevation. In the case when atmospheric delays are to be eliminated from the measured raw signals, spatial interpolators may be needed. By taking advantage of available terrain elevation information during spatial interpolation process, the accuracy of the atmospheric delay mapping can be considerably improved. This paper first reviews three elevation-dependent water vapor interpolation models, i.e., the Best Linear Unbiased Estimator in combination with the water vapor Height Scaling Model (BLUE + HSM), the Best Linear Unbiased Estimator coupled with the Elevation-dependent Covariance Model (BLUE + ECM), and the Simple Kriging with varying local means based on the Baby semi-empirical model (SKlm + Baby for short). A revision to the SKlm + Baby model is then presented, where the Onn water vapor delay model is adopted to substitute the inaccurate Baby semi-empirical model (SKlm + Onn for short). Experiments with the zenith wet delays obtained through the GPS observations from the Southern California Integrated GPS Network (SCIGN) demonstrate that the SKlm + Onn model outperforms the other three. The RMS of SKlm + Onn is only 0.55 cm, while those of BLUE + HSM, BLUE + ECM and SKlm + Baby amount to 1.11, 1.49 and 0.77 cm, respectively. The proposed SKlm + Onn model therefore represents an improvement of 29-63% over the other known models.
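The SKlm idea, a varying local mean plus simple kriging of the residuals, can be sketched as follows. The exponential variogram and the plain linear elevation trend (standing in for the Onn delay model) are simplifying assumptions for illustration only:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=20.0):
    # Exponential variogram model (an assumed, illustrative choice).
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def sklm_predict(xy, z, elev, xy0, elev0, sill=1.0, rng=20.0):
    """Simple kriging with varying local means: the local mean is a
    linear function of elevation (a stand-in for the Onn delay model),
    and the residuals are interpolated by simple kriging."""
    # 1) local mean from elevation by least squares: m = a + b * elev
    A = np.column_stack([np.ones_like(elev), elev])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    # 2) simple kriging of the residuals (known zero mean)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = sill - exp_variogram(d, sill, rng)           # covariance matrix
    c0 = sill - exp_variogram(np.linalg.norm(xy - xy0, axis=1), sill, rng)
    w = np.linalg.solve(C + 1e-12 * np.eye(len(z)), c0)
    return coef[0] + coef[1] * elev0 + w @ resid
```

With a zero nugget the predictor is exact at the observation sites, which is the defining property exploited when cross-validating against GPS zenith wet delays.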
Directory of Open Access Journals (Sweden)
Silvestre, Javier
2011-04-01
Full Text Available This article extends the existing literature on the internal migration patterns of the foreign-born in Spain. We analyze the spatial distribution of immigrants and their patterns of mobility at different levels. Socio-demographic characteristics of immigrants and characteristics of places of origin and destination are considered. We also examine repeat migration, duration of residence in each destination, as well as return migration within Spain. To this end, we make use of a new micro database corresponding to the National Immigrant Survey (Encuesta Nacional de Inmigrantes, ENI-2007).
Yamashita, Hideomi; Takenaka, Ryousuke; Omori, Mami; Imae, Toshikazu; Okuma, Kae; Ohtomo, Kuni; Nakagawa, Keiichi
2015-08-14
This retrospective study on early and locally advanced esophageal cancer was conducted to evaluate locoregional failure and its impact on survival by comparing involved field radiotherapy (IFRT) with elective nodal irradiation (ENI) in combination with concurrent chemotherapy. We assessed all patients with esophageal cancer of stages I-IV treated with definitive radiotherapy from June 2000 to March 2014. Between 2000 and 2011, ENI was used for all cases except for elderly patients. After February 2011, a prospective study of IFRT was started, and IFRT was used thereafter for all cases. The concurrent chemotherapy regimen was nedaplatin (80 mg/m(2) at D1 and D29) and 5-fluorouracil (800 mg/m(2) at D1-4 and D29-32). Of the 239 consecutive patients assessed (120 ENI vs. 119 IFRT), 59 patients (24.7%) had stage IV disease and all patients received at least one cycle of chemotherapy. The median follow-up time for survivors was 34.0 months. There were differences in 3-year local control (44.8% vs. 55.5%, p = 0.039), distant control (53.8% vs. 69.9%, p = 0.021) and overall survival (34.8% vs. 51.6%, p = 0.087) rates between ENI and IFRT, respectively. Patients treated with IFRT (8%) demonstrated a significantly lower risk (p = 0.047) of high-grade late toxicities than those treated with ENI (16%). IFRT did not increase the risk of initially uninvolved or isolated nodal failures (27.5% in ENI and 13.4% in IFRT). Nodal failure rates in clinically uninvolved nodal stations were not increased with IFRT when compared to ENI. IFRT also resulted in significantly decreased esophageal toxicity, suggesting that IFRT may allow for the integration of concurrent systemic chemotherapy in a greater proportion of patients. Both the tendency towards improved loco-regional progression-free survival and the significantly increased overall survival rate favored the IFRT arm over the ENI arm in this study.
Directory of Open Access Journals (Sweden)
Longxiang Li
Full Text Available Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind-field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind-field on the distribution of the particulate matter concentration. In this method, the wind-field is incorporated by first interpolating the observed wind-field from a meteorological-station network, then using this continuous wind-field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. This proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind-fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
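The replacement step can be sketched minimally: IDW driven by shortest-path distances computed over a cost grid with Dijkstra's algorithm. The uniform toy cost surface below stands in for the Gaussian-dispersion cost model of the paper:

```python
import heapq

def grid_shortest_paths(cost, src):
    """Dijkstra over a 2-D cost grid (4-neighbour moves); the edge
    weight is the mean cost of the two cells, a simple stand-in for a
    wind-field cost surface."""
    rows, cols = len(cost), len(cost[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    dist[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist[r][c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 0.5 * (cost[r][c] + cost[nr][nc])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

def idw(values, dists, power=2.0):
    # Inverse distance weighting with externally supplied distances.
    num = den = 0.0
    for v, d in zip(values, dists):
        if d == 0.0:
            return v            # exact at a monitoring station
        w = d ** -power
        num += w * v
        den += w
    return num / den
```

With a non-uniform cost surface (e.g. high cost upwind, low cost downwind), the same two functions yield the anisotropic, wind-aware distances the abstract describes.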
Selection of an Appropriate Interpolation Method for Rainfall Data In ...
African Journals Online (AJOL)
Interpolation technique can be used to establish the rainfall data at the location of interest from available data. There are many interpolation methods in use with various limitations and likelihood of errors. This study applied five interpolation methods to existing rainfall data in central Nigeria to determine the most appropriate ...
Interpolating sequences for H∞(BH) | Miralles | Quaestiones ...
African Journals Online (AJOL)
We prove that under the extended Carleson's condition, a sequence (xn) ⊂ BH is linear interpolating for H∞(BH) for an infinite dimensional Hilbert space H. In particular, we construct the interpolating functions for each sequence and find a bound for the constant of interpolation. Keywords: Infinite dimensional holomorphy, ...
A cubic interpolation algorithm for solving non-linear equations ...
African Journals Online (AJOL)
A new algorithm based on cubic interpolation has been developed for solving non-linear algebraic equations. The algorithm is derived from Lagrange's interpolation polynomial. The method discussed here is faster than the regula falsi method, which is based on linear interpolation. Since this new method does not involve ...
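The idea of root-finding by cubic inverse Lagrange interpolation can be sketched as follows; this is a generic reconstruction of the approach, not the paper's exact algorithm:

```python
def inverse_cubic_step(xs, ys):
    """One refinement step by inverse Lagrange interpolation: treat x
    as a cubic polynomial in y through four samples and evaluate it
    at y = 0."""
    x_new = 0.0
    for i in range(4):
        term = xs[i]
        for j in range(4):
            if j != i:
                term *= -ys[j] / (ys[i] - ys[j])  # Lagrange basis at y = 0
        x_new += term
    return x_new

def find_root(f, pts, tol=1e-10, max_iter=50):
    # Keep the four samples with the smallest residuals and iterate.
    pts = sorted(pts, key=lambda p: abs(f(p)))
    for _ in range(max_iter):
        xs = pts[:4]
        x = inverse_cubic_step(xs, [f(p) for p in xs])
        if abs(f(x)) < tol:
            return x
        pts = sorted(set([x] + xs), key=lambda p: abs(f(p)))
        if len(pts) < 4:
            break
    return pts[0]
```

Like regula falsi, each step costs one function evaluation, but fitting a cubic to the inverse function converges considerably faster for smooth monotone problems.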
Holographic interpolation between a and F
Energy Technology Data Exchange (ETDEWEB)
Kawano, Teruhiko [Department of Physics, Faculty of Science, The University of Tokyo,Bunkyo-ku, Tokyo 113-0033 (Japan); Nakaguchi, Yuki [Department of Physics, Faculty of Science, The University of Tokyo,Bunkyo-ku, Tokyo 113-0033 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo,5-1-5 Kashiwa-no-Ha, Kashiwa City, Chiba 277-8568 (Japan); Nishioka, Tatsuma [Department of Physics, Faculty of Science, The University of Tokyo,Bunkyo-ku, Tokyo 113-0033 (Japan)
2014-12-29
An interpolating function F-tilde between the a-anomaly coefficient in even dimensions and the free energy on an odd-dimensional sphere has been proposed recently and is conjectured to monotonically decrease along any renormalization group flow in continuous dimension d. We examine F-tilde in the large-N CFTs in d dimensions holographically described by Einstein-Hilbert gravity in the AdS_{d+1} space. We show that F-tilde is a smooth function of d and correctly interpolates the a coefficients and the free energies. The monotonicity of F-tilde along an RG flow follows from the analytic continuation of the holographic c-theorem to continuous d, which completes the proof of the conjecture.
Some splines produced by smooth interpolation
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2018-01-01
Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub
Image Interpolation with Geometric Contour Stencils
Directory of Open Access Journals (Sweden)
Pascal Getreuer
2011-09-01
Full Text Available We consider the image interpolation problem where, given uniformly-sampled pixel values v_{m,n} and a point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.
Interpolation of climate variables and temperature modeling
Samanta, Sailesh; Pal, Dilip Kumar; Lohar, Debasish; Pal, Babita
2012-01-01
Geographic Information Systems (GIS) and modeling are becoming powerful tools in agricultural research and natural resource management. This study proposes an empirical methodology for modeling and mapping monthly and annual air temperature using remote sensing and GIS techniques. The study area is Gangetic West Bengal and its neighborhood in eastern India, where a number of weather systems occur throughout the year. Gangetic West Bengal is a region of strong surface heterogeneity with several weather disturbances. This paper also examines statistical approaches for interpolating climatic data over large regions, providing different interpolation techniques for climate variables' use in agricultural research. Three interpolation approaches, namely inverse distance weighted averaging, thin-plate smoothing splines, and co-kriging, are evaluated for a 4° × 4° area covering the eastern part of India. The land use/land cover, soil texture, and digital elevation model are used as independent variables for temperature modeling. Multiple regression analysis with the standard method is used to add the dependent variables into the regression equation. Prediction of the mean temperature for the monsoon season is better than for the winter season. Finally, standard deviation errors are evaluated after comparing the predicted and observed temperatures of the area. For further improvement, the distance from the coastline and the seasonal wind pattern should be included as independent variables.
Delimiting Areas of Endemism through Kernel Interpolation
Oliveira, Ubirajara; Brescovit, Antonio D.; Santos, Adalberto J.
2015-01-01
We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units. PMID:25611971
A natural spline interpolation and exponential parameterization
Kozera, R.; Wilkołazka, M.
2016-06-01
We consider here natural spline interpolation based on reduced data and the so-called exponential parameterization (depending on a parameter λ ∈ [0, 1]). In particular, the latter is studied in the context of trajectory approximation in arbitrary Euclidean space. The term reduced data refers to an ordered collection of interpolation points without provision of the corresponding knots. The numerical verification of the intrinsic convergence order α(λ) in approximating γ by the natural spline is conducted here for regular and sufficiently smooth curves γ sampled more or less uniformly. We select the substitutes for the missing knots according to the exponential parameterization. The outcomes of the numerical tests manifest sharp linear convergence orders α(λ) = 1 for all λ ∈ [0, 1). In addition, this results in an unexpected left-hand-side discontinuity at λ = 1, since, as shown again here, a sharp quadratic order α(1) = 2 prevails. Remarkably, the case α(1) = 2 (derived for reduced data) coincides with the well-known asymptotics established for a natural spline fitted to non-reduced data, i.e. the sequence of interpolation points supplemented with the corresponding knots (see e.g. [1]).
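The exponential parameterization itself is simple to state in code: the knot increments are the chord lengths raised to the power λ. A sketch using SciPy's natural-boundary cubic spline as the interpolant:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def exp_param_knots(points, lam):
    """Exponential parameterization: knot increments are chord lengths
    raised to the power lambda (lam = 1 gives cumulative chord length,
    lam = 0 the uniform parameterization)."""
    pts = np.asarray(points, dtype=float)
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1) ** lam
    return np.concatenate([[0.0], np.cumsum(steps)])

def natural_spline(points, lam):
    # Natural cubic spline through the reduced data, with the missing
    # knots substituted by the exponential parameterization.
    pts = np.asarray(points, dtype=float)
    t = exp_param_knots(pts, lam)
    return t, CubicSpline(t, pts, bc_type="natural")
```

Sweeping λ over [0, 1] with this sketch and measuring the error against a densely sampled reference curve is how one would reproduce the convergence orders α(λ) reported in the abstract.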
Interpolation Grid for Local Area of Iasi City
Directory of Open Access Journals (Sweden)
Mihalache Raluca Maria
2014-05-01
Full Text Available The definitive transition to GNSS technology for establishing geodetic networks for cadastre implementation in cities and municipalities requires a unique way of linking current measurements to existing geodetic data, with accuracy sufficient for urban cadastre standards. For the city of Iasi, a different transformation method is presented, consisting of an interpolation grid for the height system. The Romanian national height system is the „Black Sea-1975” normal heights system. Founded in 1945 by Molodenski, this system uses the quasigeoid as reference surface, related to the ellipsoid through the height anomalies at each point. The unitary transformation between the ETRS-89 ellipsoidal height system and the normal one is provided at national level through the „TransdatRo” program developed by NACLR (National Agency for Cadastre and Land Registration).
Valenziano, L.; Zerbi, F.M.; Cimatti, A.; Bianco, A.; Bonoli, C.; Bortoletto, F.; Bulgarelli, A.; Butler, R.C.; Content, R.; Corcione, L.; Rosa, A.de; Franzetti, P.; Garilli, B.; Gianotti, F.; Giro, E.; Grange, R.; Leutenegger, P.; Ligori, S.; Martin, L.; Mandolesi, N.; Morgante, G.; Nicastro, L.; Riva, M.; Robberto, M.; Sharples, R.; Spanó, P.; Talbot, G.; Trifoglio, M.; Wink, R.; Zamkotsian, F.
2010-01-01
The Euclid Near-Infrared Spectrometer (E-NIS) instrument was conceived as the spectroscopic probe on board the ESA Dark Energy Mission Euclid. Together with the Euclid Imaging Channel (EIC) in its Visible (VIS) and Near Infrared (NIP) variants, NIS formed part of the Euclid Mission Concept
Spline interpolations besides wood model widely used in lactation
Korkmaz, Mehmet
2017-04-01
In this study, spline interpolations, an alternative modeling approach that passes exactly through all data points, are discussed for the lactation curve alongside the widely used Wood model applied to lactation data. These models are the linear spline, quadratic spline, and cubic spline. The observed and estimated values according to the spline interpolations and the Wood model are given with their error sums of squares, and the lactation curves of the spline interpolations and the Wood model are shown on the same graph so that the differences can be observed. Estimates for some intermediate values are made using both the spline interpolations and the Wood model. Spline interpolations allow the estimates of intermediate values to be made more precisely. Furthermore, the values predicted by spline interpolation for missing or incorrect observations were very successful compared with those of the Wood model. Spline interpolations thus offer investigators new ideas and interpretations in addition to the information of the well-known classical analysis.
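A sketch of the comparison with hypothetical lactation data: the Wood model y = a·t^b·exp(−c·t) is fitted by linearizing in log space, while a cubic spline reproduces every observation exactly. The yield numbers below are illustrative, not the study's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical weekly milk yields (kg/day).
t = np.array([1, 3, 5, 8, 12, 16, 20, 28, 36, 44], dtype=float)  # weeks
y = np.array([18, 24, 27, 28, 26, 24, 22, 19, 16, 13], dtype=float)

# Wood model y = a * t**b * exp(-c*t), fitted in log space, where
# log y = log(a) + b*log(t) - c*t is linear in the parameters.
A = np.column_stack([np.ones_like(t), np.log(t), -t])
coef, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
a, b, c = np.exp(coef[0]), coef[1], coef[2]
wood = lambda x: a * x**b * np.exp(-c * x)

# A cubic spline reproduces every observation exactly, so estimates at
# intermediate times and gap-filled values follow the data closely.
spline = CubicSpline(t, y)
```

Evaluating `wood` and `spline` at intermediate times makes the trade-off in the abstract concrete: the parametric Wood curve smooths the data (nonzero residual sum of squares), while the spline interpolant has zero residual at the observations.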
Interpolated Sounding Value-Added Product
Energy Technology Data Exchange (ETDEWEB)
Troyan, D [Brookhaven National Laboratory
2013-04-01
The Interpolated Sounding (INTERPSONDE) value-added product (VAP) uses a combination of observations from radiosonde soundings, the microwave radiometer (MWR), and surface meteorological instruments in order to define profiles of the atmospheric thermodynamic state at one-minute temporal intervals and a total of at least 266 altitude levels. This VAP is part of the Merged Sounding (MERGESONDE) suite of VAPs. INTERPSONDE is the profile of the atmospheric thermodynamic state created using the algorithms of MERGESONDE without including the model data from the European Centre for Medium-range Weather Forecasting (ECMWF). More specifically, INTERPSONDE VAP represents an intermediate step within the larger MERGESONDE process.
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims to give comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
The Design of Free Surface Interpolator for CNC Machining
Tseng, Pai-Chung; Hon, Jung-Yong
This article provides a design for the real-time interpolator of the B-Spline free surface. The goal is to increase both the accuracy and the speed of manufacturing production by eliminating the bottleneck caused by the off-line interpolator of CNC machine tools, by reducing manufacturing errors, and by resolving the issue of long numerical control (NC) code. The design of the surface interpolator includes location planning using the real-time cutter contact data, the interpolation of the cutter contact data, and the cutting compensation of the cutter location data. This paper provides a surface interpolation method for reading the surface NC code of a three-dimensional surface and implementing the surface interpolation. To simplify the calculation, one can first use knot insertion to decompose the B-Spline surface into sections of Bezier surface, which are then used as the surface for real-time surface interpolation. Subsequently, the approximation of the second-order Taylor expansion is used to obtain the location interpolation points. Fixed machining parameters are adopted to avoid the low efficiency and low quality of manufacturing caused by variation of the feeding speed. The derived cutting locations are further combined to obtain the cutting locations of the original B-Spline surface. The design is verified experimentally by comparing the manufacturing profile errors and feeding-speed variations derived from computer simulation with the corresponding data obtained from off-line interpolators.
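The decomposition into Bezier patches makes evaluation straightforward via de Casteljau's algorithm. A minimal sketch of tensor-product patch evaluation (the feedrate planning and Taylor-expansion steps of the interpolator are omitted):

```python
def de_casteljau(ctrl, t):
    # One-dimensional de Casteljau reduction of a control polygon.
    pts = [p[:] for p in ctrl]
    n = len(pts)
    for r in range(1, n):
        for i in range(n - r):
            pts[i] = [(1 - t) * u + t * v
                      for u, v in zip(pts[i], pts[i + 1])]
    return pts[0]

def bezier_patch(ctrl_net, u, v):
    """Evaluate a tensor-product Bezier patch by applying de Casteljau
    first along each row (u direction), then along the resulting
    column (v direction)."""
    col = [de_casteljau(row, u) for row in ctrl_net]
    return de_casteljau(col, v)
```

A real-time interpolator would call `bezier_patch` at parameter values chosen so that consecutive cutter locations respect the commanded feedrate and chord-error tolerance.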
Air Quality Assessment Using Interpolation Technique
Directory of Open Access Journals (Sweden)
Awkash Kumar
2016-07-01
Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to the increase in population. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that the SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of SPM was low in the monsoon due to rainfall. The findings of this study will help formulate control strategies for the rational management of air pollution and can be used for many other regions.
Size-Dictionary Interpolation for Robot's Adjustment
Directory of Open Access Journals (Sweden)
Morteza Daneshmand
2015-05-01
Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner, to be used in a realistic virtual fitting room in which the chosen mannequin robot is activated automatically, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, so that it mimics the scanned body shape and size instantly. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure seeks the set of positions of the biologically inspired actuators that makes the mannequin robot resemble the scanned body as closely as possible. It linearly maps the distances between subsequent size templates to the corresponding actuator position sets, and then calculates control measures that maintain the same distance proportions, with the mathematical description determined by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. The experimental results of implementing the proposed method on Fits.me's mannequin robots are illustrated visually, and the remaining steps towards completing the whole realistic online fitting package are explained.
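The size-dictionary interpolation step can be sketched as linear interpolation of actuator position sets between the two nearest size templates. The template vectors, actuator values, and the two-template simplification below are all hypothetical:

```python
import numpy as np

def actuator_targets(body, size_templates, actuator_sets):
    """Interpolate actuator positions between the two size-dictionary
    templates nearest (in Euclidean distance) to the measured body
    vector, weighting each template inversely to its distance."""
    size_templates = np.asarray(size_templates, dtype=float)
    actuator_sets = np.asarray(actuator_sets, dtype=float)
    d = np.linalg.norm(size_templates - np.asarray(body, dtype=float),
                       axis=1)
    i, j = np.argsort(d)[:2]            # two nearest size templates
    total = d[i] + d[j]
    w = d[j] / total if total > 0 else 1.0   # weight of nearer template
    return w * actuator_sets[i] + (1.0 - w) * actuator_sets[j]
```

A scanned body matching a template exactly reproduces that template's actuator set; a body halfway between two templates receives the average of their actuator sets, preserving the distance proportions described in the abstract.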
Directory of Open Access Journals (Sweden)
A. Verworn
2011-02-01
Full Text Available Hydrological modelling of floods relies on precipitation data with a high resolution in space and time. A reliable spatial representation of short-time-step rainfall is often difficult to achieve due to a low network density. In this study, hourly precipitation was spatially interpolated with the multivariate geostatistical method kriging with external drift (KED) using additional information from topography, rainfall data from the denser daily networks and weather radar data. Investigations were carried out for several flood events in the period between 2000 and 2005 caused by different meteorological conditions. The 125 km radius around the radar station Ummendorf in northern Germany covered the overall study region. One objective was to assess the effect of different approaches to semivariogram estimation on the interpolation performance for short-time-step rainfall. Another objective was the refined application of kriging with external drift. Special attention was given not only to finding the most relevant additional information, but also to combining the additional information in the best possible way. A multi-step interpolation procedure was applied to better consider sub-regions without rainfall.
The impact of different semivariogram types on the interpolation performance was low. While it varied over the events, an averaged semivariogram was sufficient overall. Weather radar data were the most valuable additional information for KED for convective summer events. For interpolation of stratiform winter events using daily rainfall as additional information was sufficient. The application of the multi-step procedure significantly helped to improve the representation of fractional precipitation coverage.
Monotonicity preserving splines using rational cubic Timmer interpolation
Zakaria, Wan Zafira Ezza Wan; Alimin, Nur Safiyah; Ali, Jamaludin Md
2017-08-01
In scientific applications and Computer Aided Design (CAD), users often need to generate a spline passing through a given set of data that preserves certain shape properties of the data, such as positivity, monotonicity or convexity. The required curve has to be a smooth shape-preserving interpolant. In this paper, a rational cubic spline in Timmer representation is developed to generate an interpolant that preserves monotonicity with a visually pleasing curve. To control the shape of the interpolant, three parameters are introduced. The shape parameters in the description of the rational cubic interpolant are subject to monotonicity constraints. The necessary and sufficient conditions for monotonicity of the rational cubic interpolant are derived, and visually the proposed rational cubic Timmer interpolant gives very pleasing results.
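No public implementation of the rational cubic Timmer interpolant is assumed here; SciPy's PCHIP (Fritsch-Carlson) interpolant illustrates the same monotonicity-constrained interpolation requirement, as a stand-in:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data with an abrupt jump; an unconstrained cubic spline
# would overshoot here, while a shape-preserving interpolant must not.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 2.0, 2.1])

pchip = PchipInterpolator(x, y)  # monotonicity-preserving cubic Hermite
```

The Timmer scheme plays the analogous role with explicit shape parameters: the derivative conditions in the paper constrain those parameters so that the rational cubic stays monotone between monotone data, just as PCHIP limits its Hermite slopes.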
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-03
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Bivariate Lagrange interpolation at the Padua points: Computational aspects
Caliari, Marco; de Marchi, Stefano; Vianello, Marco
2008-11-01
The so-called "Padua points" give a simple, geometric and explicit construction of bivariate polynomial interpolation in the square. Moreover, the associated Lebesgue constant has minimal order of growth, O((log n)²). Here we show four families of Padua points for interpolation at any even or odd degree n, and we present a stable and efficient implementation of the corresponding Lagrange interpolation formula, based on the representation in a suitable orthogonal basis. We also discuss the extension of (non-polynomial) Padua-like interpolation to other domains, such as triangles and ellipses; we give complexity and error estimates, and several numerical tests.
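One of the four families can be written as a Chebyshev-like subgrid. The sketch below interpolates via a naive monomial Vandermonde solve, replacing the paper's stable orthogonal-basis implementation; it is only sensible at small degree, and the parity convention selects the family:

```python
import numpy as np

def padua_points(n):
    """One family of Padua points for degree n: the subgrid of
    (cos(j*pi/n), cos(k*pi/(n+1))) with j + k odd, which has
    (n+1)(n+2)/2 points, the dimension of the total-degree-n space."""
    return np.array([(np.cos(j * np.pi / n), np.cos(k * np.pi / (n + 1)))
                     for j in range(n + 1)
                     for k in range(n + 2) if (j + k) % 2 == 1])

def padua_interpolate(n, f, targets):
    # Square Vandermonde system in the total-degree-n monomial basis;
    # adequate for small n, whereas an orthogonal (Chebyshev) basis is
    # what one would use in practice for numerical stability.
    pts = padua_points(n)
    basis = [(a, b) for a in range(n + 1) for b in range(n + 1 - a)]
    V = np.array([[x**a * y**b for a, b in basis] for x, y in pts])
    coef = np.linalg.solve(V, f(pts[:, 0], pts[:, 1]))
    T = np.array([[x**a * y**b for a, b in basis] for x, y in targets])
    return T @ coef
```

Because the number of Padua points equals the dimension of the polynomial space and the points are unisolvent, the square system has a unique solution and reproduces any polynomial of total degree at most n exactly.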
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
This paper analyses different types of spatial interpolation for the material-point method. The interpolations include quadratic elements and cubic splines in addition to the standard linear shape functions usually applied. For the small-strain problem of a vibrating bar, the best results are obta...
Vidović, Petra
2017-01-01
Waste lignocellulosic raw materials, in contrast to the sugar and starch feedstocks widely used for human and animal nutrition, represent a sustainable alternative for the production of biochemicals such as lactic acid. In this work, wheat straw was pretreated with 2% sodium hydroxide in a high-pressure reactor at various temperatures (120°C-210°C) and retention times of 1 to 20 minutes. After the pretreatment of the wheat straw, two phases (solid and liquid) were obtained, and ...
Input variable selection for interpolating high-resolution climate ...
African Journals Online (AJOL)
Accurate climate surfaces are vital for applications relating to groundwater recharge modelling, evapotranspiration estimation, sediment yield, stream flow prediction and flood risk mapping. Interpolated climate surface accuracy is determined by the interpolation algorithm employed, the resolution of the generated surfaces, ...
Catmull-Rom Curve Fitting and Interpolation Equations
Jerome, Lawrence
2010-01-01
Computer graphics and animation experts have been using the Catmull-Rom smooth curve interpolation equations since 1974, but the vector and matrix equations can be derived and simplified using basic algebra, resulting in a simple set of linear equations with constant coefficients. A variety of uses of Catmull-Rom interpolation are demonstrated,…
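The simplified polynomial form of the uniform Catmull-Rom segment mentioned above can be sketched directly (an illustrative implementation, not taken from the cited article); the curve passes through p1 at t = 0 and p2 at t = 1:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Uniform Catmull-Rom segment between p1 and p2 for t in [0, 1],
    written as a cubic polynomial with constant coefficients."""
    t = np.asarray(t, dtype=float)
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)
```

The scalar form above applies componentwise to 2-D or 3-D control points, which is how the equations are used in graphics and animation.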
The application of Bayesian interpolation in Monte Carlo simulations
Rajabali Nejad, Mohammadreza; van Gelder, P.H.A.J.M.; van Erp, N.; Martorell, Sebastian; Soares, C. Guedes; Barnett, Julie
2009-01-01
To reduce the cost of Monte Carlo (MC) simulations for time-consuming processes (like Finite Elements), a Bayesian interpolation method is coupled with the Monte Carlo technique. It is, therefore, possible to reduce the number of realizations in MC by interpolation. Besides, there is a possibility
Prony's method in several variables: symbolic solutions by universal interpolation
Sauer, Tomas
2016-01-01
The paper considers a symbolic approach to Prony's method in several variables and its close connection to multivariate polynomial interpolation. Based on the concept of universal interpolation that can be seen as a weak generalization of univariate Chebychev systems, we can give estimates on the minimal number of evaluations needed to solve Prony's problem.
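For orientation, the classical univariate Prony step that the multivariate theory generalizes can be sketched numerically (a noiseless toy helper, not the symbolic approach of the paper): samples of an exponential sum satisfy a linear recurrence whose characteristic roots are the nodes.

```python
import numpy as np

def prony(samples, m):
    """Classical 1-D Prony: recover the m nodes z_j of f_k = sum_j c_j z_j^k
    from noiseless samples (illustrative sketch; needs at least 2m samples)."""
    f = np.asarray(samples, dtype=float)
    n = len(f)
    # Hankel system: f[k+m] + p[m-1] f[k+m-1] + ... + p[0] f[k] = 0
    A = np.column_stack([f[j:n - m + j] for j in range(m)])
    b = -f[m:]
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Nodes are the roots of z^m + p[m-1] z^{m-1} + ... + p[0]
    return np.sort(np.roots(np.concatenate(([1.0], p[::-1]))))
```

Counting how many such evaluations are needed in several variables is exactly the question the universal-interpolation framework addresses.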
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
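The component functions discussed above are easy to compute explicitly; a minimal sketch of the Lagrange basis polynomials and the interpolant as a sum of components (illustrative, not the article's own code) is:

```python
import numpy as np

def lagrange_basis(xs, j, x):
    """j-th Lagrange basis polynomial for nodes xs: equals 1 at xs[j], 0 at the others."""
    x = np.asarray(x, dtype=float)
    L = np.ones_like(x)
    for k, xk in enumerate(xs):
        if k != j:
            L *= (x - xk) / (xs[j] - xk)
    return L

def lagrange_interp(xs, ys, x):
    """Interpolant assembled from its component functions ys[j] * L_j(x)."""
    return sum(ys[j] * lagrange_basis(xs, j, x) for j in range(len(xs)))
```

Plotting each ys[j] * L_j term separately is what reveals how individual data points pull the interpolating polynomial, the graphical point the article makes.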
Compositional kriging : a spatial interpolation method for compositional data
Walvoort, D.J.J.; Gruijter, de J.J.
2001-01-01
Compositional data are very common in the earth sciences. Nevertheless, little attention has been paid to the spatial interpolation of these data sets. Most interpolators do not necessarily satisfy the constant sum and nonnegativity constraints of compositional data, nor take spatial structure into
A FRACTAL-BASED STOCHASTIC INTERPOLATION SCHEME IN SUBSURFACE HYDROLOGY
The need for a realistic and rational method for interpolating sparse data sets is widespread. Real porosity and hydraulic conductivity data do not vary smoothly over space, so an interpolation scheme that preserves irregularity is desirable. Such a scheme based on the properties...
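A minimal analogue of irregularity-preserving interpolation is random midpoint displacement, which fills between two measured values with fractional-Brownian-like roughness (an illustrative sketch only, not the scheme of the cited abstract):

```python
import numpy as np

def midpoint_displacement(y0, y1, levels, H=0.5, sigma=1.0, seed=0):
    """Fill between two data values with fBm-like roughness.
    H is a Hurst-type exponent; smaller H gives a rougher profile."""
    rng = np.random.default_rng(seed)
    y = np.array([y0, y1], dtype=float)
    for lvl in range(levels):
        # perturb each midpoint with noise whose scale shrinks per level
        mid = 0.5 * (y[:-1] + y[1:]) + rng.normal(0.0, sigma * 2.0**(-H * lvl), len(y) - 1)
        out = np.empty(2 * len(y) - 1)
        out[0::2] = y
        out[1::2] = mid
        y = out
    return y
```

Unlike a smooth interpolator, the profile honours the measured endpoints while remaining irregular at every scale, which is the property the abstract argues porosity and conductivity fields require.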
Application Of Laplace Interpolation In The Analysis Of Geopotential ...
African Journals Online (AJOL)
Geophysical data is often collected at irregular intervals along a profile or over a surface area. But most methods for the treatment of geophysical data often require that any data collected at irregular intervals have to be interpolated to obtain values at regular grid. Unlike the common 2-dimensional interpolation procedures, ...
Comparing interpolation schemes in dynamic receive ultrasound beamforming
DEFF Research Database (Denmark)
Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav
2005-01-01
In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional...
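The core problem is evaluating a sampled channel signal at a non-integer focusing delay; a toy sketch with linear interpolation (illustrative parameter values, not from the paper) shows the interpolation error for an oversampled sinusoidal pulse:

```python
import numpy as np

fs = 40e6                      # sampling rate (Hz), illustrative
f0 = 3e6                       # pulse centre frequency (Hz), illustrative
t = np.arange(200) / fs
x = np.sin(2 * np.pi * f0 * t)

tau = 0.37 / fs                # non-integer focusing delay (fraction of a sample)
# Linear-interpolation estimate of x(t - tau) versus the exact value
est = np.interp(t - tau, t, x)
exact = np.sin(2 * np.pi * f0 * (t - tau))
err = np.max(np.abs(est[10:-10] - exact[10:-10]))   # ignore edge samples
```

Higher-order schemes of the kind compared in the paper reduce this error further at the cost of more computation per focused sample.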
Li, Ruijian; Yu, Liang; Lin, Sixiang; Wang, Lina; Dong, Xin; Yu, Lingxia; Li, Weiyi; Li, Baosheng
2016-09-21
The use of involved field radiotherapy (IFRT) has generated concern about the increasing incidence of elective nodal failure (ENF) in contrast to elective nodal irradiation (ENI). This meta-analysis aimed to provide more reliable and up-to-date evidence on the incidence of ENF between IFRT and ENI. We searched three databases for eligible studies where locally advanced non-small cell lung cancer (NSCLC) patients received IFRT or ENI. The outcome of interest was the incidence of ENF. The fixed-effects model was used to pool outcomes across the studies. There were 3 RCTs and 3 cohort studies included with low risk of bias. There was no significant difference in incidence of ENF between IFRT and ENI either among RCTs (RR = 1.38, 95 % CI: 0.59-3.25, p = 0.46) or among cohort studies (RR = 0.99, 95 % CI: 0.46-2.10, p = 0.97). There was also no significant difference in incidence of ENF between IFRT and ENI when RCTs and cohort studies were combined (RR = 1.15, 95 % CI: 0.65-2.01, p = 0.64). The I² test for heterogeneity was 0 %. This meta-analysis provides more reliable and stable evidence that there is no significant difference in incidence of ENF between IFRT and ENI.
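The fixed-effects pooling used above follows the standard inverse-variance method on the log risk ratio; a minimal sketch with entirely hypothetical per-study counts (not the data of the meta-analysis) is:

```python
import math

# Hypothetical per-study counts: (events_ifrt, n_ifrt, events_eni, n_eni)
studies = [(4, 50, 3, 48), (6, 120, 7, 115), (2, 60, 2, 62)]

log_rrs, weights = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)
    var = 1/a - 1/n1 + 1/c - 1/n2        # variance of log RR
    log_rrs.append(math.log(rr))
    weights.append(1 / var)              # inverse-variance weight

pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
rr_pooled = math.exp(pooled_log)
ci = (math.exp(pooled_log - 1.96 * se), math.exp(pooled_log + 1.96 * se))
```

A pooled CI straddling 1, as reported in the abstract, indicates no significant difference between arms.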
Interpolation functions in control volume finite element method
Abbassi, H.; Turki, S.; Nasrallah, S. Ben
The main contribution of this paper is the study of interpolation functions in the control volume finite element method used in equal order and applied to an incompressible two-dimensional fluid flow. In particular, the exponential interpolation function expressed in the elemental local coordinate system is compared to the classic linear interpolation function expressed in the global coordinate system. A quantitative comparison is achieved by applying these two schemes to four flows for which the analytical solutions are known. These flows are classified in two groups: flows with a privileged direction and flows without. The two interpolation functions are applied to a triangular element of the domain; a direct comparison of the results given by each interpolation function to the exact value is then easily realized. The two functions are also compared when used to solve the discretized equations over the entire domain. Stability of the numerical process and accuracy of solutions are compared.
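The one-dimensional idea behind the exponential scheme is that, for steady convection-diffusion between two nodes, the exact profile is exponential in the element Peclet number and reduces to linear interpolation as the Peclet number vanishes (a generic sketch, not the paper's element-level formulation):

```python
import numpy as np

def exponential_profile(phi0, phi1, pe, s):
    """Exact steady 1-D convection-diffusion variation between two nodal
    values, with local coordinate s in [0, 1] and element Peclet number pe."""
    s = np.asarray(s, dtype=float)
    if abs(pe) < 1e-12:
        return phi0 + (phi1 - phi0) * s          # diffusion-dominated: linear
    return phi0 + (phi1 - phi0) * (np.expm1(pe * s) / np.expm1(pe))

def linear_profile(phi0, phi1, s):
    """Classic linear interpolation between the same nodal values."""
    return phi0 + (phi1 - phi0) * np.asarray(s, dtype=float)
```

For large positive pe the exponential profile hugs the upstream value over most of the element, which is why it outperforms linear interpolation for flows with a privileged direction.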
The Use of Wavelets in Image Interpolation: Possibilities and Limitations
Directory of Open Access Journals (Sweden)
M. Grgic
2007-12-01
Full Text Available The discrete wavelet transform (DWT) can be used in various applications, such as image compression and coding. In this paper we examine how DWT can be used in image interpolation. The proposed method is then compared with two other traditional interpolation methods. For the case of a magnified image obtained by interpolation, the original image is unknown and there is no perfect way to judge the magnification quality. A common approach is to start with an original image, generate a lower resolution version of it by downscaling, and then use different interpolation methods to magnify the low resolution image. After that, the original and magnified images are compared to evaluate the difference between them using different picture quality measures. Our results show that the comparison of image interpolation methods depends on the downscaling technique, image contents and quality metric. For a fair comparison all these parameters need to be considered.
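The downscale-then-magnify evaluation loop described above can be sketched with a synthetic image, decimation, nearest-neighbour magnification as a baseline interpolator, and PSNR as the quality measure (illustrative only; the paper uses DWT-based magnification and several metrics):

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio between two images of equal shape."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)

# Smooth synthetic "original" image, 64 x 64, 8-bit range
y, x = np.mgrid[0:64, 0:64]
orig = (127 + 100 * np.sin(x / 9.0) * np.cos(y / 7.0)).astype(np.uint8)

low = orig[::2, ::2]                                 # downscale by decimation
nearest = np.repeat(np.repeat(low, 2, 0), 2, 1)      # nearest-neighbour magnification
score = psnr(orig, nearest)                          # quality of the round trip
```

Swapping the decimation for an averaging filter, or nearest-neighbour for a better interpolator, changes the score, which is precisely the sensitivity the abstract reports.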
Evaluating Interpolation Methods for Velocity and Strain Rate in the Western United States
Rand, D. S.; McCaffrey, R.; Rudolph, M. L.; King, R. W.
2016-12-01
We calculate horizontal strain rates in the Western United States using a geodetic Global Positioning System network of 1,742 stations. Three dimensional velocity vectors in the North American reference frame for GPS stations are based on data beginning in 1993 and reveal, among other features, large-scale clockwise rotation. We explore multiple interpolation techniques (linear, polynomial, and spline methods) to estimate velocity gradients along the Earth's surface. Using these interpolation techniques, we calculate strain rates from the velocity gradients and make a detailed comparison of the strengths and limitations of each method. We analyze the calculated velocity and strain rate fields with detailed attention to ongoing post-seismic deformation related to the 1872 North Cascades earthquake and strain in the fore arc across the Puget Sound area based on GPS observations made there by us in 2016.
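Once a velocity field has been interpolated onto a grid, the horizontal strain-rate components follow from the symmetric part of the velocity gradient; a minimal sketch on a synthetic simple-shear field (illustrative, not the study's GPS data) is:

```python
import numpy as np

# Hypothetical interpolated horizontal velocity field on a regular grid (mm/yr)
x = np.linspace(0, 100, 51)          # km
y = np.linspace(0, 100, 51)
X, Y = np.meshgrid(x, y, indexing='ij')
vx = 0.02 * Y                        # simple shear: vx grows northward
vy = np.zeros_like(vx)

# Velocity gradients along the surface (axis 0 is x, axis 1 is y here)
dvx_dx, dvx_dy = np.gradient(vx, x, y)
dvy_dx, dvy_dy = np.gradient(vy, x, y)

# Symmetric strain-rate tensor: e_ij = 0.5 * (dv_i/dx_j + dv_j/dx_i)
exx = dvx_dx
eyy = dvy_dy
exy = 0.5 * (dvx_dy + dvy_dx)
```

Different interpolators produce different gradient fields from the same stations, which is why the choice of interpolation method directly controls the recovered strain rates.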
Spatiotemporal video deinterlacing using control grid interpolation
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
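The simplest member of the spatial-deinterlacer family is plain line averaging, which interpolates the missing field lines from their vertical neighbours (a baseline sketch only; the paper's methods weight spatial and temporal estimates via control grid interpolation):

```python
import numpy as np

def deinterlace_spatial(field, top=True):
    """Line-averaging spatial deinterlacer: the missing lines of one field
    are interpolated as the mean of the known lines above and below."""
    h, w = field.shape
    frame = np.zeros((2 * h, w), dtype=float)
    known = slice(0, None, 2) if top else slice(1, None, 2)
    frame[known] = field
    if top:
        frame[1:-1:2] = 0.5 * (frame[0:-2:2] + frame[2::2])
        frame[-1] = frame[-2]          # replicate at the bottom boundary
    else:
        frame[2::2] = 0.5 * (frame[1:-1:2] + frame[3::2])
        frame[0] = frame[1]            # replicate at the top boundary
    return frame
```

Purely spatial methods like this blur moving detail, and purely temporal ones ghost it; the abstract's contribution is choosing and weighting between the two per region.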
Functions with disconnected spectrum sampling, interpolation, translates
Olevskii, Alexander M
2016-01-01
The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...
High resolution quantum metrology via quantum interpolation
Ajoy, Ashok; Liu, Yixiang; Saha, Kasturi; Marseglia, Luca; Jaskula, Jean-Christophe; Cappellaro, Paola
2016-05-01
Nitrogen Vacancy (NV) centers in diamond are a promising platform for quantum metrology - in particular for nanoscale magnetic resonance imaging to determine high resolution structures of single molecules placed outside the diamond. The conventional technique for sensing of external nuclear spins involves monitoring the effects of the target nuclear spins on the NV center coherence under dynamical decoupling (the CPMG/XY8 pulse sequence). However, the nuclear spin affects the NV coherence only at precise free evolution times - and finite timing resolution set by hardware often severely limits the sensitivity and resolution of the method. In this work, we overcome this timing resolution barrier by developing a technique to supersample the metrology signal by effectively implementing a quantum interpolation of the spin system dynamics. This method will enable spin sensing at high magnetic fields and high repetition rate, allowing significant improvements in sensitivity and spectral resolution. We experimentally demonstrate a resolution boost by over a factor of 100 for spin sensing and AC magnetometry. The method is shown to be robust, versatile to sensing normal and spurious signal harmonics, and ultimately limited in resolution only by the number of pulses that can be applied.
Clustering metagenomic sequences with interpolated Markov models.
Kelley, David R; Salzberg, Steven L
2010-11-02
Sequencing of environmental DNA (often called metagenomics) has shown tremendous potential to uncover the vast number of unknown microbes that cannot be cultured and sequenced by traditional methods. Because the output from metagenomic sequencing is a large set of reads of unknown origin, clustering reads together that were sequenced from the same species is a crucial analysis step. Many effective approaches to this task rely on sequenced genomes in public databases, but these genomes are a highly biased sample that is not necessarily representative of environments interesting to many metagenomics projects. We present SCIMM (Sequence Clustering with Interpolated Markov Models), an unsupervised sequence clustering method. SCIMM achieves greater clustering accuracy than previous unsupervised approaches. We examine the limitations of unsupervised learning on complex datasets, and suggest a hybrid of SCIMM and supervised learning method Phymm called PHYSCIMM that performs better when evolutionarily close training genomes are available. SCIMM and PHYSCIMM are highly accurate methods to cluster metagenomic sequences. SCIMM operates entirely unsupervised, making it ideal for environments containing mostly novel microbes. PHYSCIMM uses supervised learning to improve clustering in environments containing microbial strains from well-characterized genera. SCIMM and PHYSCIMM are available open source from http://www.cbcb.umd.edu/software/scimm.
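The interpolation in an interpolated Markov model is over model orders: conditional next-base probabilities from several context lengths are mixed with weights. A heavily simplified sketch with fixed weights (SCIMM and Glimmer derive the weights from the data; this is only illustrative) is:

```python
from collections import defaultdict

def train_counts(seqs, order):
    """Count (k-mer context -> next symbol) occurrences for one fixed order."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in seqs:
        for i in range(order, len(s)):
            counts[s[i - order:i]][s[i]] += 1
    return counts

def imm_prob(models, context, symbol, lambdas, alphabet="ACGT"):
    """Interpolated Markov model: mix conditional probabilities of several
    orders with weights lambdas, using additive smoothing per order."""
    p = 0.0
    for order, lam in enumerate(lambdas):
        ctx = context[len(context) - order:] if order else ""
        nexts = models[order].get(ctx, {})
        total = sum(nexts.values())
        p += lam * (nexts.get(symbol, 0) + 1) / (total + len(alphabet))
    return p

seqs = ["ACGTACGTACGT", "ACGACGACGACG"]           # toy training reads
models = {k: train_counts(seqs, k) for k in range(3)}
prob = imm_prob(models, "AC", "G", lambdas=[0.2, 0.3, 0.5])
```

Scoring each read under per-cluster IMMs and reassigning reads to their best-scoring model is the iterative loop that drives the unsupervised clustering.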
Rainfall variation by geostatistical interpolation method
Directory of Open Access Journals (Sweden)
Glauber Epifanio Loureiro
2013-08-01
Full Text Available This article analyses the variation of rainfall in the Tocantins-Araguaia hydrographic region in the last two decades, based upon the rain gauge stations of the ANA (Brazilian National Water Agency) HidroWeb database for the years 1983, 1993 and 2003. The information was systemized and treated with hydrologic methods such as contour mapping and ordinary kriging interpolation. The treatment considered the consistency of the data, the density of the spatial distribution of the stations and the periods of study. The results demonstrated that the total volume of water precipitated annually did not change significantly in the 20 years analyzed. However, a significant variation occurred in its spatial distribution. By analyzing the isohyets it was shown that there is a displacement of the precipitation at Tocantins Baixo (TOB) of approximately 10% of the total precipitated volume. This displacement can be caused by global change, by anthropogenic activities or by regional natural phenomena. However, this paper does not explore possible causes of the displacement.
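Ordinary kriging estimates a value as a weighted sum of station values, with weights obtained from a variogram model under an unbiasedness constraint. A minimal sketch with an assumed exponential variogram (illustrative; in practice the variogram is fitted to the station data) is:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng_param=50.0):
    """Ordinary kriging estimate at xy0 from stations xy with values z,
    using an assumed exponential variogram gamma(h) = sill*(1 - exp(-h/range))."""
    def gamma(h):
        return sill * (1.0 - np.exp(-h / rng_param))

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)               # station-to-station semivariances
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))   # station-to-target
    w = np.linalg.solve(A, b)          # last entry is the Lagrange multiplier
    return float(w[:n] @ z)            # weights sum to 1 by construction
```

Because kriging is an exact interpolator, the estimate at a station location reproduces the observed value there.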
Research of Cubic Bezier Curve NC Interpolation Signal Generator
Directory of Open Access Journals (Sweden)
Shijun Ji
2014-08-01
Full Text Available Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear or parabolic interpolation. For the NC machining of parts with complicated surfaces, however, a mathematical model must be established to generate the curve and surface outline of the parts, and the generated outline is then discretized into a large number of straight lines or arcs for processing. This creates complex programs and a large amount of code, and it inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness and machining efficiency. The stepless interpolation of cubic Bezier curves controlled by analog signals is studied in this paper. The tool motion trajectory of a Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed to the control motor, which completes the precise feeding of the Bezier curve. This method extends the trajectory-control ability of CNC from simple lines and circular arcs to complex curves, and it provides a new way to machine curved-surface parts economically with high quality and high efficiency.
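A cubic Bezier trajectory through four control points can be evaluated by repeated linear interpolation (the de Casteljau recursion), which is a numerically stable sketch of the curve the CNC interpolator must follow (generic illustration, not the signal generator of the paper):

```python
import numpy as np

def de_casteljau(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1] by repeated
    linear interpolation between successive control points."""
    pts = [np.asarray(p, dtype=float) for p in (p0, p1, p2, p3)]
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts[:-1], pts[1:])]
    return pts[0]
```

Adjusting the control points reshapes the whole trajectory without re-deriving any formula, which is the planning flexibility the abstract emphasizes.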
Pattern-oriented memory interpolation of sparse historical rainfall records
Matos, J. P.; Cohen Liechti, T.; Portela, M. M.; Schleiss, A. J.
2014-03-01
The pattern-oriented memory (POM) is a novel historical rainfall interpolation method that explicitly takes the time dimension into account in order to interpolate areal rainfall maps. The method is based on the idea that rainfall patterns exist and can be identified over a certain area by means of non-linear regressions. Having been previously benchmarked against a vast array of interpolation methods using proxy satellite data under different time and space availabilities, in the present contribution POM is applied to rain gauge data in order to produce areal rainfall maps. Tested over the Zambezi River Basin for the period from 1979 to 1997 (accurate satellite rainfall estimates based on spaceborne instruments are not available for dates prior to 1998), the novel pattern-oriented memory historical interpolation method proved a better alternative than Kriging or Inverse Distance Weighting in the light of a Monte Carlo cross-validation procedure. Superior to the other tested interpolation methods in most metrics, the accuracy of POM's historical interpolation results is, in terms of the Pearson correlation coefficient and bias, even comparable with that of recent satellite rainfall products. The new method holds the possibility of calculating detailed, well-performing daily areal rainfall estimates, even in the case of sparse rain gauging grids. Besides their performance, the similarity of POM interpolations to satellite rainfall estimates can contribute to substantially extending the length of the rainfall series used in hydrological models and water availability studies in remote areas.
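One of the benchmarks above, Inverse Distance Weighting, together with a leave-one-out cross-validation loop standing in for the Monte Carlo procedure, can be sketched as follows (illustrative only; POM itself is a regression-based method not shown here):

```python
import numpy as np

def idw(xy, z, xy0, power=2.0, eps=1e-12):
    """Inverse Distance Weighting estimate at xy0 from gauges xy with values z."""
    d = np.linalg.norm(xy - xy0, axis=1)
    if d.min() < eps:
        return float(z[np.argmin(d)])      # coincident gauge: return its value
    w = 1.0 / d**power
    return float(w @ z / w.sum())

def loo_rmse(xy, z, power=2.0):
    """Leave-one-out cross-validation RMSE of the IDW interpolator."""
    errs = [idw(np.delete(xy, i, 0), np.delete(z, i), xy[i], power) - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))
```

Running the same held-out comparison for each candidate interpolator is what allows the per-metric ranking reported in the abstract.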
Verworn, A.; Haberlandt, U.
2009-04-01
The most important input for distributed hydrological modelling of highly dynamic processes like floods is precipitation data with high resolution in space and time. In contrast to the sparse spatial resolution of hourly or shorter time step precipitation data from recording networks, radar-derived precipitation provides a high spatial resolution, but often comes along with a large space-time variable bias in radar rainfall estimates. To provide optimal input for distributed hydrological modelling, the best strategy is probably to combine all available information about rainfall and apply sophisticated interpolation methods. The objective of this research was the investigation of spatial interpolation of hourly precipitation for mesoscale hydrological modelling. The multivariate geostatistical method external drift kriging (EDK) was applied and further developed for interpolation of short time step precipitation using additional information, especially radar data, but also data from denser daily measurement networks and physiographic factors. To address the problem of fractional precipitation coverage, a multi-step interpolation applying binary indicator kriging as a first step was used. Investigations were carried out for fifteen flood events from 2000 to 2005 caused by precipitation with different characteristics. The 125 km radius around the selected radar station, which is located northeast of the Harz Mountains in northern Germany, covers the study area including 22 recording stations. The hydrological modelling was carried out for a subcatchment of the Bode river basin in the southeastern part of the Harz Mountains with a drainage area of about 100 km2. For a first assessment of the interpolation performance of the multivariate methods, cross validations were carried out in comparison with some univariate standard interpolation methods. Subsequently, comparative hydrological simulations using the model WaSiM-ETH were applied for a more specific evaluation.
Study on Control Algorithm for Continuous Segments Trajectory Interpolation
Institute of Scientific and Technical Information of China (English)
SHI Chuan; YE Peiqing; LV Qiang
2006-01-01
In CNC machining, the complexity of the part contour causes a series of problems including repeated start-stop of the motor, low machining efficiency, and poor machining quality. To alleviate these problems, a new interpolation algorithm was put forward to realize interpolation control of continuous-segment trajectories. The relevant error analysis of the algorithm was also studied. The feasibility of the algorithm was proved by a machining experiment using a laser machine to carve the interpolation trajectory in the CNC system GT100. This algorithm effectively improved the machining efficiency and the contour quality.
Geodesic-loxodromes for diffusion tensor interpolation and difference measurement.
Kindlmann, Gordon; Estépar, Raúl San José; Niethammer, Marc; Haker, Steven; Westin, Carl-Fredrik
2007-01-01
In algorithms for processing diffusion tensor images, two common ingredients are interpolating tensors, and measuring the distance between them. We propose a new class of interpolation paths for tensors, termed geodesic-loxodromes, which explicitly preserve clinically important tensor attributes, such as mean diffusivity or fractional anisotropy, while using basic differential geometry to interpolate tensor orientation. This contrasts with previous Riemannian and Log-Euclidean methods that preserve the determinant. Path integrals of tangents of geodesic-loxodromes generate novel measures of over-all difference between two tensors, and of difference in shape and in orientation.
C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization
Directory of Open Access Journals (Sweden)
Shengjun Liu
2015-01-01
Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize the given planar data. Constraints are derived on these free shape parameters to generate shape preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is investigated as O(h³). Numeric experiments demonstrate that our method can construct nice shape preserving interpolation curves efficiently.
Trivariate Local Lagrange Interpolation and Macro Elements of Arbitrary Smoothness
Matt, Michael Andreas
2012-01-01
Michael A. Matt constructs two trivariate local Lagrange interpolation methods which yield optimal approximation order and Cr macro-elements based on the Alfeld and the Worsey-Farin split of a tetrahedral partition. The first interpolation method is based on cubic C1 splines over type-4 cube partitions, for which numerical tests are given. The second is the first trivariate Lagrange interpolation method using C2 splines. It is based on arbitrary tetrahedral partitions using splines of degree nine. The author constructs trivariate macro-elements based on the Alfeld split, where each tetrahedron
Effects on Retroaction of the Learned Strength of Interpolated Material
Greenfield, Daryl; And Others
1974-01-01
Twelve moderately retarded children were trained on 2-choice visual discrimination problems with interpolation of another item between training and retention tests. Results indicated that well-learned items are rehearsed less. (SBT)
On the Universal Interpolating Sequences on H²(β)
Directory of Open Access Journals (Sweden)
B. Yousef
2007-06-01
Full Text Available In this paper we investigate the relation between universal interpolating sequences and the approximate point spectrum of the adjoint multiplication operator acting on Hilbert spaces of formal power series.
Quadratic trigonometric B-spline for image interpolation using GA.
Hussain, Malik Zawwar; Abbas, Samreen; Irshad, Misbah
2017-01-01
In this article, a new quadratic trigonometric B-spline with control parameters is constructed to address the problems related to two dimensional digital image interpolation. The newly constructed spline is then used to design an image interpolation scheme together with one of the soft computing techniques named as Genetic Algorithm (GA). The idea of GA has been formed to optimize the control parameters in the description of newly constructed spline. The Feature SIMilarity (FSIM), Structure SIMilarity (SSIM) and Multi-Scale Structure SIMilarity (MS-SSIM) indices along with traditional Peak Signal-to-Noise Ratio (PSNR) are employed as image quality metrics to analyze and compare the outcomes of approach offered in this work, with three of the present digital image interpolation schemes. The upshots show that the proposed scheme is better choice to deal with the problems associated to image interpolation.
Nonlinear interpolation fractal classifier for multiple cardiac arrhythmias recognition
Energy Technology Data Exchange (ETDEWEB)
Lin, C.-H. [Department of Electrical Engineering, Kao-Yuan University, No. 1821, Jhongshan Rd., Lujhu Township, Kaohsiung County 821, Taiwan (China); Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)], E-mail: eechl53@cc.kyu.edu.tw; Du, Y.-C.; Chen Tainsong [Institute of Biomedical Engineering, National Cheng-Kung University, Tainan 70101, Taiwan (China)
2009-11-30
This paper proposes a method for cardiac arrhythmias recognition using the nonlinear interpolation fractal classifier. A typical electrocardiogram (ECG) consists of the P-wave, QRS-complexes, and T-wave. An iterated function system (IFS) uses nonlinear interpolation in the map and uses similarity maps to construct various data sequences including the fractal patterns of supraventricular ectopic beat, bundle branch ectopic beat, and ventricular ectopic beat. Grey relational analysis (GRA) is proposed to recognize normal heartbeat and cardiac arrhythmias. The nonlinear interpolation terms produce family functions with fractal dimension (FD), the so-called nonlinear interpolation functions (NIF), and make fractal patterns more distinguishable between normal and ill subjects. The proposed QRS classifier is tested using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database. Compared with other methods, the proposed hybrid method demonstrates greater efficiency and higher accuracy in recognizing ECG signals.
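The underlying fractal interpolation construction (Barnsley's affine IFS through a set of data points, rendered here by the chaos game) can be sketched as follows; this is the generic linear-map case, a simplified stand-in for the paper's nonlinear interpolation terms:

```python
import numpy as np

def fractal_interpolation(x, y, d, n_iter=20000, seed=0):
    """Chaos-game rendering of the fractal interpolation function through
    points (x_i, y_i) with vertical scaling factors |d_i| < 1."""
    x = np.asarray(x, float); y = np.asarray(y, float)
    d = np.asarray(d, float)
    N = len(x) - 1
    b = x[-1] - x[0]
    # Affine map coefficients: w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i),
    # chosen so each map sends the endpoints onto consecutive data points.
    a = (x[1:] - x[:-1]) / b
    e = (x[-1] * x[:-1] - x[0] * x[1:]) / b
    c = (y[1:] - y[:-1] - d * (y[-1] - y[0])) / b
    f = (x[-1] * y[:-1] - x[0] * y[1:] - d * (x[-1] * y[0] - x[0] * y[-1])) / b
    rng = np.random.default_rng(seed)
    pt = np.array([x[0], y[0]])
    pts = np.empty((n_iter, 2))
    for k in range(n_iter):
        i = rng.integers(N)
        pt = np.array([a[i] * pt[0] + e[i],
                       c[i] * pt[0] + d[i] * pt[1] + f[i]])
        pts[k] = pt
    return pts
```

The vertical scaling factors d_i control the fractal dimension of the attractor, which is the feature the classifier exploits to separate beat types.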
A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation
Directory of Open Access Journals (Sweden)
Mingzhu Li
2014-01-01
Full Text Available The main aim of this work is to consider a meshfree algorithm for solving Burgers’ equation with the quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations and overcome the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low order forward difference to approximate the time derivative of the dependent variable. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement and the numerical experiments show that it is feasible and valid.
Energy-Driven Image Interpolation Using Gaussian Process Regression
Directory of Open Access Journals (Sweden)
Lingling Zi
2012-01-01
Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: the first is a statistical model adopted to mine underlying information, and the second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of image visualization and quantitative measures.
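The statistical half of such a predictor can be illustrated with a minimal Gaussian process regression in one dimension: an unknown sample between known "pixel" values is predicted as the posterior mean under an RBF kernel. The kernel length scale, noise level, and data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal GP regression sketch: predict an in-between sample from known
# 1-D values using an RBF kernel (hyperparameters are assumed).
def rbf(a, b, length=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.0, 0.8, 0.9, 0.1])   # known pixel intensities
x_test = np.array([1.5])                   # location to interpolate

noise = 1e-6                               # small jitter for stability
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
k_star = rbf(x_train, x_test)
alpha = np.linalg.solve(K, y_train)
y_pred = k_star.T @ alpha                  # GP posterior mean

print(round(float(y_pred[0]), 3))
```

With near-zero noise the posterior mean interpolates the training samples exactly and smoothly blends the neighbouring values at the query point.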
Interpolation Routines Assessment in ALS-Derived Digital Elevation Models for Forestry Applications
Directory of Open Access Journals (Sweden)
Antonio Luis Montealegre
2015-07-01
Full Text Available Airborne Laser Scanning (ALS) is capable of estimating a variety of forest parameters using different metrics extracted from the normalized heights of the point cloud using a Digital Elevation Model (DEM). In this study, six interpolation routines were tested over a range of land cover and terrain roughness in order to generate a collection of DEMs with spatial resolutions of 1 and 2 m. The accuracy of the DEMs was assessed twice: first using a test sample extracted from the ALS point cloud, and second using a set of 55 ground control points collected with a high-precision Global Positioning System (GPS). The effects of terrain slope, land cover, ground point density and pulse penetration on the interpolation error were examined by stratifying the study area with these variables. In addition, a Classification and Regression Tree (CART) analysis allowed the development of a prediction uncertainty map to identify areas in which DEMs and Airborne Light Detection and Ranging (LiDAR) derived products may be of low quality. The Triangulated Irregular Network (TIN) to raster interpolation method produced the best result in the validation process with the training data set, while the Inverse Distance Weighted (IDW) routine was the best in the validation with GPS (RMSE of 2.68 cm and RMSE of 37.10 cm, respectively).
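The IDW routine named above is simple enough to state in full: each unknown elevation is a distance-weighted average of the sampled points. A minimal sketch follows; the power p = 2 is the common default and an assumption here, not the study's exact setting, and the points are invented.

```python
import numpy as np

# Inverse Distance Weighted (IDW) interpolation sketch: estimate ground
# elevation at an unsampled point from scattered samples.
def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if d.min() < eps:                    # query coincides with a sample
        return float(z_known[np.argmin(d)])
    w = 1.0 / d ** p                     # weights decay with distance
    return float(np.sum(w * z_known) / np.sum(w))

pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
elev = np.array([100.0, 110.0, 120.0, 130.0])   # invented elevations, m
z = idw(pts, elev, np.array([5.0, 5.0]))
print(z)
```

At the centre of this square all four samples are equidistant, so the estimate is their plain mean; at a sampled location IDW returns the sample itself.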
Homography Propagation and Optimization for Wide-Baseline Street Image Interpolation.
Nie, Yongwei; Zhang, Zhensong; Sun, Hanqiu; Su, Tan; Li, Guiqing
2017-10-01
Wide-baseline street image interpolation is useful but very challenging. Existing approaches either rely on heavyweight 3D reconstruction or computationally intensive deep networks. We present a lightweight and efficient method which uses simple homography computing and refining operators to estimate piecewise smooth homographies between input views. To achieve this goal, we show how to combine homography fitting and homography propagation based on discriminating between reliable and unreliable superpixels. Such a combination, rather than homography fitting alone, dramatically increases the accuracy and robustness of the estimated homographies. Then, we integrate the concepts of homography and mesh warping, and propose a novel homography-constrained warping formulation which enforces smoothness between neighboring homographies by utilizing the first-order continuity of the warped mesh. This further eliminates small overlapping and stretching artifacts. The proposed method is lightweight and flexible, and allows wide-baseline interpolation. It improves on the state of the art and demonstrates that homography computation suffices for interpolation. Experiments on city and rural datasets validate the efficiency and effectiveness of our method.
Directory of Open Access Journals (Sweden)
Annalisa Di Piazza
2015-04-01
Full Text Available An exhaustive comparison among different spatial interpolation algorithms was carried out in order to derive annual and monthly air temperature maps for Sicily (Italy). Deterministic, data-driven and geostatistical algorithms were used, in some cases adding the elevation information and other physiographic variables to improve the performance of the interpolation techniques and the reconstruction of the air temperature field. The dataset is given by air temperature data from 84 stations spread around the island of Sicily. The interpolation algorithms were optimized using a subset of the available dataset, while the remaining subset was used to validate the results in terms of the accuracy and bias of the estimates. Validation results indicate that univariate methods, which neglect the information from physiographic variables, entail the largest errors by a significant margin, while performance improves when such parameters are taken into account. The best results at the annual scale have been obtained using the ordinary kriging of residuals from linear regression and the artificial neural network algorithm, while, at the monthly scale, a Fourier-series algorithm has been used to downscale mean annual temperature to reproduce monthly values in the annual cycle.
Improved tensor scale computation with application to medical image interpolation.
Xu, Ziyue; Sonka, Milan; Saha, Punam K
2011-01-01
Tensor scale (t-scale) is a parametric representation of local structure morphology that simultaneously describes its orientation, shape and isotropic scale. At any image location, t-scale represents the largest ellipse (an ellipsoid in three dimensions) centered at that location and contained in the same homogeneous region. Here, we present an improved algorithm for t-scale computation and study its application to image interpolation. Specifically, the t-scale computation algorithm is improved by: (1) enhancing the accuracy of identifying the local structure boundary and (2) combining both algebraic and geometric approaches in ellipse fitting. In the context of interpolation, a closed-form solution is presented to determine the interpolation line at each image location in a gray-level image using t-scale information of adjacent slices. At each location on an image slice, the method derives a normal vector from its t-scale that yields the trans-orientation of the local structure and points to the closest edge point. Normal vectors at the matching two-dimensional locations on two adjacent slices are used to compute the interpolation line using a closed-form equation. The method has been applied to BrainWeb data sets and to several other images from clinical applications, and its accuracy and response to noise and other image-degrading factors have been examined and compared with those of current state-of-the-art interpolation methods. Experimental results have established the superiority of the new t-scale based interpolation method over existing interpolation algorithms. Also, a quantitative analysis based on the paired t-test of residual errors has ascertained that the improvements observed using the t-scale based interpolation are statistically significant. Copyright © 2010 Elsevier Ltd. All rights reserved.
Signal simulation in folding and interpolating integrated ADC
Marcinkevičius, Albinas Jonas; Jasonis, Vaidas; Poviliauskas, Darius
2007-01-01
The paper presents the structure of a folding-and-interpolating analog-to-digital converter, which includes signal sample-and-hold, folding and interpolating circuits whose electric circuits are formed on the basis of 0.5 µm silicon bipolar transistors. A methodology was developed for simulating the dynamic characteristics of the created 8-bit converter and for calculating the digital output signal form. The results of simulation of dynamic characteristics of ...
Considerations Related to Interpolation of Experimental Data Using Piecewise Functions
Directory of Open Access Journals (Sweden)
Stelian Alaci
2016-12-01
Full Text Available The paper presents a method for interpolating experimental data by means of a piecewise function, with the points where the form of the function changes being found simultaneously with the other parameters via an optimization criterion. The optimization process is based on defining the interpolation function through a single expression founded on the Heaviside function and on regarding the optimization function as a generalised infinitely derivable function. The methodology is illustrated with a concrete example.
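The single-expression idea can be sketched concretely: two linear pieces joined at a breakpoint c are written with one Heaviside step H(x - c), and c is found together with the slopes by minimising squared error. The data, the two-piece model and the grid-search criterion are illustrative assumptions, not the paper's exact optimization scheme.

```python
import numpy as np

# Piecewise model written as ONE expression via the Heaviside function,
# with the breakpoint found by a simple least-squares grid search.
def H(t):
    return np.where(t >= 0.0, 1.0, 0.0)

def model(x, a1, b1, a2, b2, c):
    return (a1 + b1 * x) * (1 - H(x - c)) + (a2 + b2 * x) * H(x - c)

x = np.linspace(0, 10, 81)
y = np.where(x < 4, 2 * x, 8 + 0.5 * (x - 4))   # true kink at x = 4

best = None
for c in np.linspace(1, 9, 161):
    left, right = x < c, x >= c
    if left.sum() < 2 or right.sum() < 2:
        continue
    b1, a1 = np.polyfit(x[left], y[left], 1)     # fit each piece
    b2, a2 = np.polyfit(x[right], y[right], 1)
    err = np.sum((model(x, a1, b1, a2, b2, c) - y) ** 2)
    if best is None or err < best[0]:
        best = (err, c)

print(round(best[1], 2))
```

The search recovers the breakpoint near x = 4 (any c between the last sample of one piece and the first of the other fits exactly, so the recovered value sits within one grid step of the kink).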
Lossless image compression based on a generalized recursive interpolation
Aiazzi, Bruno; Alba, Pasquale S.; Alparone, Luciano; Baronti, Stefano; Lotti, Franco
1996-09-01
A variety of image compression algorithms exists for applications where reconstruction errors are tolerated. When lossless coding is mandatory, compression ratios greater than 2 or 3 are hard to obtain. DPCM techniques can be implemented in a hierarchical way, thus producing high-quality intermediate versions (tokens) of the input images at increasing spatial resolutions. Data retrieval and transmission can be achieved in a progressive fashion, either by stopping the process at the requested resolution level, or by recognizing that the image being retrieved is no longer of interest. However, progressiveness is usually realized with a certain performance penalty with respect to the reference DPCM (i.e., 4-pel optimum causal AR prediction). A generalized recursive interpolation (GRINT) algorithm is proposed and shown to be the most effective progressive technique for compression of still images. The main advantage of the novel scheme with respect to the standard hierarchical interpolation (HINT) is that interpolation is performed in a separable fashion from all error-free values, thereby reducing the variance of interpolation errors. Moreover, the introduction of a parametric half-band interpolation filter produces further benefits and allows generalized interpolation. An adaptive strategy consists of measuring image correlation both along rows and along columns and interpolating first along the direction of minimum correlation. The statistics of the different subband-like sets of interpolation errors are modeled as generalized Gaussian PDFs, and individual codebooks are fitted for variable-length coding. The estimate of the shape factor of the PDF is based on a novel criterion matching the entropy of the theoretical and actual distributions. Performances are evaluated by comparing GRINT with HINT and a variety of other multiresolution techniques. Optimum 4-pel causal DPCM and lossless JPEG are also considered for completeness of comparisons, although they are not progressive.
Precipitation interpolation and corresponding uncertainty assessment using copulas
Bardossy, A.; Pegram, G. G.
2012-12-01
Spatial interpolation of rainfall over different time and spatial scales is necessary in many applications of hydrometeorology. The specific problems encountered in rainfall interpolation include: (i) the large number of calculations which need to be performed automatically; (ii) the quantification of the influence of topography, usually the most influential of the exogenous variables; (iii) how to use observed zero (dry) values in interpolation, because their proportion increases the shorter the time interval; (iv) the need to estimate a reasonable uncertainty of the modelled point/pixel distributions; (v) the need to separate temporally highly correlated bias from random interpolation errors at different spatial and temporal scales; (vi) the difficulty of estimating the uncertainty of accumulations over a range of spatial scales. The approaches used and described in the presentation employ the variables rainfall and altitude. The methods of interpolation include (i) Ordinary Kriging of the rainfall without altitude, (ii) External Drift Kriging with altitude as an exogenous variable, and, less conventionally, (iii) truncated Gaussian copulas and truncated v-copulas, both omitting and including the altitude of the control stations as well as that of the target, and (iv) truncated Gaussian copulas and truncated v-copulas for a two-step interpolation of precipitation combining temporal and spatial quantiles for bias quantification. It was found that truncated Gaussian copulas, with the target's and all the control stations' altitudes included as exogenous variables, produce the lowest mean square error in cross-validation and, as a bonus, model with the least bias. In contrast, the uncertainty of interpolation is better described by the v-copulas, but the Gaussian copulas have the advantage of lower computational effort (by three orders of magnitude), which justifies their use in practice. It turns out that the uncertainty estimates of the OK and EDK interpolants are not competitive at any time scale, from daily
As-Rigid-As-Possible molecular interpolation paths
Nguyen, Minh Khoa; Jaillet, Léonard; Redon, Stéphane
2017-04-01
This paper proposes a new method to generate interpolation paths between two given molecular conformations. It relies on the As-Rigid-As-Possible (ARAP) paradigm used in Computer Graphics to manipulate complex meshes while preserving their essential structural characteristics. The adaptation of ARAP approaches to the case of molecular systems is presented in this contribution. Experiments conducted on a large set of benchmarks show how such a strategy can efficiently compute relevant interpolation paths with large conformational rearrangements.
Exemplar-Based Interpolation of Sparsely Sampled Images
2009-06-01
A method for interpolating a sparsely sampled image is introduced in this paper. The proposed variational formulation, originally motivated by image inpainting ... classical inpainting problem, no complete patches are available from the sparse image samples, and the patch similarity criterion has to be redefined as ... departures from the variational setting, showing a remarkable ... The terms image inpainting and interpolation refer to the problem of
Survey: interpolation methods for whole slide image processing.
Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T
2017-02-01
Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best for resizing whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Mosquera Lastra, Edison Ramiro
2014-01-01
215 sheets : illustrations, 29 x 21 cm + CD-ROM 5697 The objective of this work was to improve the production process for LPG bottling and cylinder repair at the Pifo bottling plant of Eni Ecuador S.A. To this end, the production process was modelled on the basis of a process-oriented approach, which served to orient the organization towards its customers and its objectives. A study was then carried out to determine the times of each acti...
Goran M. Lazić; Zdenko D. Šiljak; Stevo B. Jovandić
2010-01-01
Anti-tank guided missiles are designed to destroy heavily armoured tanks as well as other armoured vehicles. This paper offers a historical and technical overview of this type of weapon as possessed by the countries of Western Europe, Israel and India, covering the development of the missiles through generations and basic data on their combat and operational use. In addition to the basic data, the prices of some individual missiles are given, as well as development trends in this branch of armament.
Kepka, Lucyna; Bujko, Krzysztof; Zolciak-Siwinska, Agnieszka
2008-01-01
To estimate retrospectively the rate of isolated nodal failures (INF) in NSCLC patients treated with elective nodal irradiation (ENI) using 3D-conformal radiotherapy (3D-CRT). One hundred and eighty-five patients with stage I-IIIB disease treated with 3D-CRT in consecutive clinical trials differing in the extent of the ENI were analyzed. According to the extent of the ENI, two groups were distinguished: extended (n = 124) and limited (n = 61) ENI. INF was defined as regional nodal failure occurring without local progression. The Cumulative Incidence of INF (CIINF) was evaluated by univariate and multivariate analysis with regard to prognostic factors. With a median follow-up of 30 months, the two-year actuarial overall survival was 35%. The two-year CIINF rate was 12%. There were 16 (9%) INF, eight (6%) for extended and eight (13%) for limited ENI. In the univariate analysis, bulky mediastinal disease (BMD), left side, higher N stage, and partial response to RT had a significant negative impact on the CIINF. BMD was the only independent predictor of the risk of incidence of INF (p = 0.001). INF is more likely to occur in cases of more advanced nodal status.
5-D interpolation with wave-front attributes
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information about subsurface features such as the dip and strike of a reflector. The wave-front attributes work on a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call the result wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that
Analysis of Interpolation Methods in the Image Reconstruction Tasks
Directory of Open Access Journals (Sweden)
V. T. Nguyen
2017-01-01
Full Text Available The article studies the interpolation methods used for image reconstruction. These methods were also implemented and tested with several images to estimate their effectiveness. The considered interpolation methods are the nearest-neighbor method, the linear method, the cubic B-spline method, the cubic convolution method, and the Lanczos method. For each method, an interpolation kernel (interpolation function) and a frequency response (Fourier transform) are presented. As a result of the experiments, the following conclusions were drawn: the nearest-neighbor algorithm is very simple and often used, but images reconstructed with it contain artifacts (blurring and haloing); the linear method is quick and easy to perform and reduces some of the visual distortion caused by resizing, yet despite these advantages it still causes a large number of interpolation artifacts such as blurring and haloing; the cubic B-spline method provides smooth reconstructed images and eliminates the apparent ramp phenomenon, but the low-pass filter used in the interpolation process suppresses high-frequency components, leading to fuzzy edges and false artificial traces; the cubic convolution method introduces less interpolation distortion, but its algorithm is more complicated and requires more execution time than the nearest-neighbor and linear methods; the Lanczos method achieves a high-definition image, but in spite of this great advantage it requires more execution time than the other interpolation methods. The results obtained not only compare the considered interpolation methods in various respects, but also enable users to select an appropriate interpolation method for their applications. It is advisable to study the existing methods further and to develop new ones.
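Three of the kernels the article compares can be written out directly; resampling then reduces to a weighted sum of neighbouring samples under each kernel. The 1-D setting and the cubic-convolution parameter a = -0.5 (the usual Keys choice) are assumptions for illustration.

```python
import numpy as np

# 1-D interpolation kernels: nearest-neighbor, linear, cubic convolution.
def nearest(t):
    return np.where((t >= -0.5) & (t < 0.5), 1.0, 0.0)

def linear(t):
    t = np.abs(t)
    return np.where(t < 1, 1 - t, 0.0)

def cubic_convolution(t, a=-0.5):          # Keys kernel, assumed a = -0.5
    t = np.abs(t)
    return np.where(t < 1, (a + 2) * t**3 - (a + 3) * t**2 + 1,
           np.where(t < 2, a * t**3 - 5*a * t**2 + 8*a * t - 4*a, 0.0))

def resample(samples, x, kernel):
    i = np.arange(len(samples))            # samples on the integer grid
    return float(np.sum(kernel(x - i) * samples))

sig = np.array([0.0, 1.0, 4.0, 9.0, 16.0])   # y = x**2 on the grid
print(resample(sig, 2.5, nearest),
      resample(sig, 2.5, linear),
      round(resample(sig, 2.5, cubic_convolution), 3))
# nearest snaps to one sample, linear averages the two neighbours, and the
# cubic kernel reproduces the quadratic exactly (6.25 at x = 2.5).
```

The comparison mirrors the article's conclusions in miniature: the cheaper kernels distort the signal more, while the cubic kernel is exact for smooth (here quadratic) data at extra cost.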
Geospatial Interpolation and Mapping of Tropospheric Ozone Pollution Using Geostatistics
Kethireddy, Swatantra R.; Tchounwou, Paul B.; Ahmad, Hafiz A.; Yerramilli, Anjaneyulu; Young, John H.
2014-01-01
Tropospheric ozone (O3) pollution is a major problem worldwide, including in the United States of America (USA), particularly during the summer months. Ozone oxidative capacity and its impact on human health have attracted the attention of the scientific community. In the USA, sparse spatial observations for O3 may not provide a reliable source of data over a geo-environmental region. Geostatistical Analyst in ArcGIS has the capability to interpolate values in unmonitored geo-spaces of interest. In this study of eastern Texas O3 pollution, hourly episodes for spring and summer 2012 were selectively identified. To visualize the O3 distribution, geostatistical techniques were employed in ArcMap. Using ordinary Kriging, geostatistical layers of O3 for all the studied hours were predicted and mapped at a spatial resolution of 1 kilometer. A decent level of prediction accuracy was achieved and was confirmed from cross-validation results. The mean prediction error was close to 0, the root mean-standardized-prediction error was close to 1, and the root mean square and average standard errors were small. O3 pollution map data can be further used in analysis and modeling studies. Kriging results and O3 decadal trends indicate that the populace in Houston-Sugar Land-Baytown, Dallas-Fort Worth-Arlington, Beaumont-Port Arthur, San Antonio, and Longview are repeatedly exposed to high levels of O3-related pollution, and are prone to the corresponding respiratory and cardiovascular health effects. Optimization of the monitoring network proves to be an added advantage for the accurate prediction of exposure levels. PMID:24434594
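The ordinary kriging step used for these maps can be sketched in a few lines: with a chosen semivariogram, solve the kriging system for weights that sum to one, then form the weighted prediction and its variance. The exponential semivariogram, its sill and range, and the three O3 readings below are illustrative assumptions, not values fitted to the study's data.

```python
import numpy as np

# Ordinary kriging sketch with an assumed exponential semivariogram.
def gamma(h, sill=1.0, rng=5.0):
    return sill * (1.0 - np.exp(-h / rng))

pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])  # station coords
vals = np.array([40.0, 60.0, 50.0])                   # assumed O3, ppb
target = np.array([1.0, 1.0])

n = len(pts)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
A = np.empty((n + 1, n + 1))
A[:n, :n] = gamma(d)
A[n, :n] = A[:n, n] = 1.0          # unbiasedness: weights sum to one
A[n, n] = 0.0
b = np.append(gamma(np.linalg.norm(pts - target, axis=1)), 1.0)
sol = np.linalg.solve(A, b)
w, lagrange = sol[:n], sol[n]
z = float(w @ vals)                # kriged prediction
var = float(w @ b[:n] + lagrange)  # kriging variance
print(round(z, 2), round(var, 3))
```

The prediction leans toward the nearest station, and the kriging variance is exactly the cross-validation-style uncertainty measure the abstract refers to.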
Comparison of two kriging interpolation methods applied to spatiotemporal rainfall
Kebaili Bargaoui, Zoubeida; Chebbi, Afef
2009-02-01
The variogram structure is an effective tool for appraising rainfall spatial variability. In areas with a sparse raingauge network, this paper suggests a 3-D estimation of the variogram as an alternative to the classical 2-D approach for spatiotemporal rainfall analysis. The context deals with the estimation of the spatial variability of the maximum intensity of rainfall for a given duration δ. Hence, a 3-coordinate vector (location - rainfall duration - rainfall intensity) is associated with each monitoring location, rather than the two-coordinate vector based only on the location in relation to intensity for a given duration. A set of averaging time intervals is taken into account (δ ranging from 5 min to 2 h). The advantage of the 3-D approach is that it results in a standardized variogram which uniquely characterizes the rainfall event. In the 2-D approach, by contrast, variograms depend on the intensity duration. Kriging with external drift is performed to make the spatial interpolations and to compute the kriging variance maps. A full comparison of the accuracy of both methods (2-D, 3-D) using a cross-validation scheme shows that 3-D kriging leads to significantly lower prediction errors than classical 2-D kriging. It is further suggested to quantify the effect of 3-D and 2-D kriging on the areal rainfall distribution and on the standard deviation of the kriging error (SDKE). It is noticed that the 3-D SDKE field displays an empirical distribution which represents a median position among the 2-D distributions corresponding to the SDKE(δ) fields. On the other hand, results are compared to those obtained through ordinary kriging. In the 3-D approach, cross-validation performances and SDKE maps are found to be less sensitive to the kriging method.
CSIR Research Space (South Africa)
Van den Bergh, F
2006-01-01
Full Text Available and exponential function for modelling the DTC. The second scheme uses the notion of a Reproducing Kernel Hilbert Space (RKHS) interpolator [1] for interpolating the missing samples. The application of RKHS interpolators to the DTC interpolation problem is novel...
Spatial interpolation methods for monthly rainfalls and temperatures in Basilicata
Directory of Open Access Journals (Sweden)
Ferrara A
2008-12-01
Full Text Available Spatially interpolated climatic data on grids are important as input for forest modeling because climate spatial variability has a direct effect on productivity and forest growth. Maps of climatic variables can be obtained by different interpolation methods, depending on data quality (number of stations, spatial distribution, missing data, etc.) and on the topographic and climatic features of the study area. In this paper, four methods are compared for interpolating monthly rainfall at the regional scale: (1) inverse distance weighting (IDW); (2) regularized spline with tension (RST); (3) ordinary kriging (OK); (4) universal kriging (UK). In addition, an approach to generate monthly surfaces of temperatures over regions of complex terrain and with a limited number of stations is presented. Daily data were gathered for the 1976-2006 period and gaps in the time series were then filled in order to obtain monthly mean temperatures and cumulative precipitation. Basic statistics of the monthly dataset and an analysis of the relationship of temperature and precipitation to elevation were performed. A linear relationship was found between temperature and altitude, while no relationship was found between rainfall and elevation. Precipitation was therefore interpolated without taking elevation into account. The methods were ranked based on the root mean squared error for each month. Results showed that universal kriging (UK) is the best method for the spatial interpolation of rainfall in the study area. Cross-validation was then used to compare the prediction performance of three different variogram models (circular, spherical, exponential) using the UK algorithm in order to produce final maps of monthly precipitation. Before interpolation, temperatures were referred to sea level using the calculated lapse rate and a digital elevation model (DEM). The result of the interpolation with RST was then brought back to the original elevation with an inverse procedure. To evaluate the quality of the interpolated surfaces, a comparison between interpolated and
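The temperature workflow described above (fit a lapse rate, reduce to sea level, interpolate, restore elevation) is easy to sketch end to end. Plain IDW stands in for RST here, and the three stations, their elevations and temperatures are invented for illustration.

```python
import numpy as np

# Lapse-rate temperature interpolation sketch: reduce to sea level,
# interpolate, then restore the target's elevation.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
elev = np.array([100.0, 800.0, 1500.0])       # m a.s.l. (invented)
temp = np.array([15.4, 10.8, 6.3])            # deg C (invented)

# 1) fit the lapse rate (deg C per m) by linear regression on elevation
lapse, t0 = np.polyfit(elev, temp, 1)

# 2) reduce observations to sea level, then interpolate with IDW
t_sl = temp - lapse * elev
target_xy, target_elev = np.array([5.0, 5.0]), 600.0
d = np.linalg.norm(stations - target_xy, axis=1)
w = 1.0 / d**2
t_sl_target = np.sum(w * t_sl) / np.sum(w)

# 3) restore the target elevation with the same lapse rate
t_target = float(t_sl_target + lapse * target_elev)
print(round(lapse * 1000, 2), round(t_target, 2))
```

The fitted lapse rate for these invented stations is -6.5 °C per 1000 m; reducing to sea level makes the field nearly flat, so the interpolation step is well behaved before the elevation correction is added back.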
Defining random and systematic error in precipitation interpolation
Lebrenz, H.; Bárdossy, A.
2012-04-01
Variogram-based interpolation methods are widely applied in hydrology. Kriging estimates an expectation value and an associated distribution, while simulations provide a distribution of possible realizations of the random function at the unknown location. The associated error in both cases is random and characterized by the convergence of its sum over time to zero, which is convenient for subsequent hydrological modelling. This study addresses the quantification of a random and a systematic error for the mentioned interpolation methods. Firstly, monthly precipitation observations are fitted to a two-parameter theoretical distribution at each observation point. Prior to interpolation, the observations are decomposed into two distribution parameters and their corresponding quantiles. The distribution parameters and their quantiles are interpolated to the unknown location and finally recomposed into precipitation amounts. This method makes it possible to address two types of errors: a random error, defined by simulating the quantiles and the expectation value of the parameters, and a systematic error, defined by simulating the parameters and the expectation value of the quantiles. The defined random error converges over time to zero while the systematic error does not, but creates a bias. With a view to subsequent hydrological modelling, the input uncertainty of the interpolated (areal) precipitation is thus described by a random and a systematic error.
Minimal energy interpolation of repeat orbit ground-track gaps
Keller, Wolfgang
2017-04-01
If the satellites of gravity-field missions are in a repeat orbit, their ground tracks do not sample the surface of the Earth uniformly but leave large gaps. Usually, these gaps are interpolated by representing the gravitational field with surface spherical harmonics. Since surface spherical harmonics are algebraic/trigonometric polynomials, this interpolation tends to oscillate. This contribution starts from the observation that the gravitational field is best known along the ground tracks. Therefore, a reasonable interpolation strategy should fulfill two requirements: i) reproduce the measured values along the satellite tracks; ii) be as smooth as possible between the satellite tracks. The concept of smoothness is understood as the bending energy of an elastic membrane attached to the measured values along the satellite tracks. It will be shown that such an interpolation is the solution of a boundary value problem for the biharmonic equation. A finite difference approximation for the biharmonic equation is developed and numerically tested. The biharmonic interpolation turns out to be more reasonable than the Gaussian-smoothed spherical harmonics solution.
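A 1-D analogue of the membrane idea makes the mechanics concrete: among all sequences passing through the known samples, find the one minimising a discrete bending energy (sum of squared second differences), which is the discrete counterpart of the biharmonic/minimal-bending formulation. The grid size and the "track" indices where values are known are invented for illustration.

```python
import numpy as np

# Minimal-bending-energy interpolation in 1-D: minimise ||D2 x||^2
# subject to fixed values at the "measured" indices.
n = 21
known = {0: 0.0, 10: 1.0, 20: 0.0}       # invented track samples
free = [i for i in range(n) if i not in known]

# second-difference operator D2 ((n-2) x n)
D2 = np.zeros((n - 2, n))
for i in range(n - 2):
    D2[i, i:i + 3] = [1.0, -2.0, 1.0]

# normal equations on the free entries only
Q = D2.T @ D2
x = np.zeros(n)
for i, v in known.items():
    x[i] = v
known_idx = list(known)
A = Q[np.ix_(free, free)]
b = -Q[np.ix_(free, known_idx)] @ np.array([known[i] for i in known_idx])
x[free] = np.linalg.solve(A, b)
print(round(float(x[5]), 3))
```

The minimiser is the discrete analogue of a natural cubic spline: it passes exactly through the fixed samples, stays symmetric for this symmetric data, and bulges smoothly above the straight-line interpolant between the track points, with no polynomial-style oscillation.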
The Importance of Interpolation in Computerized Growth Charting.
Kiger, James R; Taylor, Sarah N
2016-01-01
Computer growth charting is increasingly available for clinical and research applications. The LMS method is used to define the growth curves on the charts most commonly used in practice today. The data points for any given chart are tabulated at discrete ages, and computer programs may simply round to the closest LMS data point when calculating growth centiles. We sought to determine whether applying an interpolation algorithm to the LMS data for commonly used growth charts may reduce the inherent errors that occur with rounding to the nearest data point. We developed a simple, easily implemented interpolation algorithm to use with LMS data. Using published growth charts, we compared predicted growth centiles using our interpolation algorithm versus a standard rounding approach. In a test scenario of a patient at the 50th centile in weight, simply rounding to the nearest data point, compared with our interpolation algorithm, produced maximal z-score errors in weight of 2.02 standard deviations for the World Health Organization 0-to-23-month growth chart, 1.07 standard deviations for the Fenton preterm growth chart, 0.71 standard deviations for the Olsen preterm growth chart, and 0.11 standard deviations for the CDC 2-to-18-year growth chart. Failure to include an interpolation algorithm when designing computerized growth charts can lead to large errors in centile and z-score calculations.
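A minimal sketch of the LMS z-score calculation with and without interpolation; the LMS values below are invented for illustration and are not taken from any published chart. The LMS transformation z = ((X/M)^L - 1)/(L*S) is standard, and np.interp supplies the linear interpolation of L, M and S between tabulated ages.

```python
import numpy as np

# Hypothetical LMS values at two monthly ages (not real WHO/CDC data).
ages = np.array([12.0, 13.0])          # months
L = np.array([0.30, 0.29])
M = np.array([9.60, 9.90])             # median weight, kg
S = np.array([0.110, 0.112])

def z_score(weight, age, interpolate=True):
    """z = ((X/M)^L - 1) / (L*S), with L, M, S taken at the given age."""
    if interpolate:
        l = np.interp(age, ages, L)
        m = np.interp(age, ages, M)
        s = np.interp(age, ages, S)
    else:  # round to the nearest tabulated age, as naive implementations do
        i = int(np.argmin(np.abs(ages - age)))
        l, m, s = L[i], M[i], S[i]
    return ((weight / m) ** l - 1.0) / (l * s)

# At a mid-month age, rounding and interpolation disagree: this patient is
# exactly at the interpolated median, yet rounding reports a nonzero z-score.
z_round = z_score(9.75, 12.5, interpolate=False)
z_interp = z_score(9.75, 12.5, interpolate=True)
```

The gap between z_round and z_interp is the kind of rounding error the abstract quantifies.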
A new interpolation method based on satellite physical character in using IGS precise ephemeris
Directory of Open Access Journals (Sweden)
Liu Weiping
2014-08-01
Full Text Available Due to the deficiencies of sliding Lagrange polynomial interpolation, the author proposes a new interpolation method that takes into account the physical character of satellite motion in the coordinate transformation and in the choice of interpolation function. The precision of the two methods is compared in a numerical example. The result shows that the new method is superior to sliding Lagrange polynomial interpolation in both interpolation and extrapolation, especially for extrapolation over short time spans.
Research on the DDA Precision Interpolation Algorithm for Continuity of Speed and Acceleration
Directory of Open Access Journals (Sweden)
Kai Sun
2014-05-01
Full Text Available Interpolation technology is critical to the performance of CNC machines and industrial robots. This paper proposes a new precision interpolation algorithm based on an analysis of the root causes of speed and acceleration discontinuities. To guarantee continuity of speed and acceleration during the interpolation process, the paper describes variable-acceleration precision interpolation with two stages and three sections, respectively. Testing shows that CNC system performance can be enhanced significantly by the new fine-interpolation algorithm.
DEM interpolation weight calculation modulus based on maximum entropy
Chen, Tian-wei; Yang, Xia
2015-12-01
Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyze the model system, which depends on the modulus of the spatial weights. The negative-weight problem of DEM interpolation is investigated by building a maximum-entropy model; by adding non-negativity constraints and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated against a genetic algorithm implemented in a MATLAB program. The method is compared with Yang Chizhong interpolation and quadratic programming. The comparison shows that the maximum-entropy weights fit the spatial relations, and their accuracy is superior to the latter two methods.
Lossless image compression based on recursive nonlinear interpolation
Aiazzi, Bruno; Alparone, Luciano; Baronti, Stefano; Lotti, Franco
1997-10-01
The generalized recursive interpolation (GRINT) algorithm was recently proposed and shown to be the most effective progressive technique for the decorrelation of still images. A nonlinear version of GRINT (MRINT) employs median filtering in a nonseparable fashion on a quincunx grid. The main advantage of both schemes is that interpolation is performed from all error-free values, thereby reducing the variance of interpolation errors. MRINT is embedded in a simplified version of the context-based encoder by Said and Pearlman. The coding performance of the novel context-based coder is evaluated by comparison with GRINT and a variety of other multiresolution lossless methods, including the original scheme by Said and Pearlman. The modified scheme outperforms all the other algorithms, especially when dealing with medical images.
Interpolation techniques in robust constrained model predictive control
Kheawhom, Soorathep; Bumroongsri, Pornchai
2017-05-01
This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control problems. A corresponding sequence of nested robustly positive invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with it is applied. Otherwise, the feedback gain is determined by linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
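A toy sketch of the on-line gain-selection step described above, assuming nested ellipsoidal invariant sets {x : x'P_i x <= 1} with made-up P_i and K_i. The boundary-blending rule for the interpolation weight alpha is one common choice and is an assumption here, not necessarily the rule used by the authors.

```python
import numpy as np

# Hypothetical nested invariant ellipsoids E_i = {x : x' P_i x <= 1}
# (P[0] innermost) with pre-computed feedback gains K_i.
P = [np.diag([4.0, 4.0]), np.diag([1.0, 1.0]), np.diag([0.25, 0.25])]
K = [np.array([[-2.0, -1.0]]), np.array([[-1.5, -0.8]]), np.array([[-1.0, -0.5]])]

def control(x):
    # Find the smallest ellipsoid containing the current state.
    i = next(j for j, Pj in enumerate(P) if x @ Pj @ x <= 1.0)
    if i == 0:
        return K[0] @ x  # innermost set: apply its fixed gain
    # Otherwise blend the two neighbouring gains: pick alpha in [0, 1]
    # so that x lies on the boundary of the interpolated ellipsoid.
    a, b = x @ P[i - 1] @ x, x @ P[i] @ x
    alpha = (1.0 - b) / (a - b)          # solves alpha*a + (1-alpha)*b = 1
    K_blend = alpha * K[i - 1] + (1.0 - alpha) * K[i]
    return K_blend @ x

u_outer = control(np.array([0.8, 0.0]))   # between the first two ellipsoids
u_inner = control(np.array([0.1, 0.0]))   # inside the innermost set
```

Only the membership test and one scalar interpolation run on-line; all gains and sets are pre-computed off-line, which is the point of the approach.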
Gaussian process interpolation for uncertainty estimation in image registration.
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods.
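The core computation, the posterior variance of a Gaussian process conditioned on grid samples, can be sketched with plain NumPy. The RBF kernel, its length scale, and the 1-D toy signal are illustrative assumptions rather than the registration model from the paper; the point is that uncertainty is near zero at base-grid points and grows between them.

```python
import numpy as np

# 1-D intensity profile sampled on a base grid; the GP posterior variance
# quantifies interpolation uncertainty at off-grid resampling points.
def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x_grid = np.arange(0.0, 5.0)                 # base grid
y_grid = np.sin(x_grid)                      # observed intensities
x_new = np.array([0.0, 0.5, 2.5])            # resampled points

noise = 1e-6
K = rbf(x_grid, x_grid) + noise * np.eye(len(x_grid))
Ks = rbf(x_new, x_grid)
Kss = rbf(x_new, x_new)

# GP posterior mean and covariance at the resampled points.
alpha = np.linalg.solve(K, y_grid)
mean = Ks @ alpha
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
var = np.diag(cov)
# var[0] (on a grid point) is near zero; var[1] (between grid points) is larger.
```

In the paper this posterior covariance is marginalized inside the similarity measure rather than used directly, but the variance pattern is the same.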
Comparison of Three Interpolation Schemes for Six Parameters
Kolbe, Christine; Rehfeldt, Kira; Ziese, Markus; Rustemeier, Elke; Krähenmann, Stefan; Becker, Andreas
2017-04-01
The European Commission set up the Copernicus Emergency Management Service (EMS), which up to now includes the European Flood Awareness System (EFAS) and the European Forest Fire Information System (EFFIS). Within this framework, the Meteorological Data Collection Center (Copernicus MDCC) collects data from European data providers and regularly supplies gridded and station-related analyses as input data for the EMS's EFAS and EFFIS. To identify the optimum interpolation scheme for the six EMS-relevant parameters (precipitation total, maximum temperature, minimum temperature, mean vapor pressure, daily mean wind speed, daily total radiation), a comparison of three different interpolation methods was conducted using daily European station observations covering May 2014. This month featured high precipitation amounts in some areas of Europe, especially in the Balkan states and Italy. Such periods of high precipitation across topographically structured terrain challenge interpolation schemes to represent the full variability actually taking place, making the month well suited for the comparison. We compared inverse distance weighting (Ntegeka et al., 2013), Spheremap (Willmott et al., 1985) and ordinary kriging (Krige, 1966). Furthermore, uncertainty information for the gridded product is provided. A leave-one-out cross validation was used to assess the quality of the interpolation schemes, and different error metrics were calculated, as they focus on different aspects of uncertainty. Yamamoto's approach was used to determine the uncertainty of the gridded fields in order to find the best interpolation scheme (Yamamoto, 2000). This analysis revealed that IDW is the best-performing scheme with regard to computational effort. However, Spheremap is more robust against locally higher densities of input data, grids generated by Spheremap are more reliable, and the overall uncertainty is lower than in the other tested interpolation schemes.
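Of the three schemes compared, inverse distance weighting is simple enough to sketch in a few lines; the station coordinates and precipitation values below are made up for illustration.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights proportional to 1/d^power."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)          # eps keeps on-station points finite
    return (w @ z_obs) / w.sum(axis=1)

# Hypothetical daily precipitation (mm) at four stations.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 20.0, 30.0, 40.0])
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(xy, z, grid)
# est[0]: equidistant from all stations, so it is the plain mean;
# est[1]: coincides with station 0, so it reproduces that value.
```

Spheremap and kriging additionally account for directional clustering and spatial correlation structure, respectively, which is why they behave differently where station density varies.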
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time consuming task, especially when a high level of theory is required for the electronic structure calculation. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition-of-unity approach. The adaptive node refinement greatly reduces the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared with the non-adaptive version.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Gribov ambiguities at the Landau-maximal Abelian interpolating gauge
Energy Technology Data Exchange (ETDEWEB)
Pereira, Antonio D.; Sobreiro, Rodrigo F. [UFF-Universidade Federal Fluminense, Instituto de Fisica, Niteroi, RJ (Brazil)
2014-08-15
In a previous work, we presented a new method to account for the Gribov ambiguities in non-Abelian gauge theories. The method consists in the introduction of an extra constraint that directly eliminates the infinitesimal Gribov copies without the usual geometric approach. This strategy allows one to treat gauges with a non-Hermitian Faddeev-Popov operator. In this work, we apply the method to a gauge which interpolates between the Landau and maximal Abelian gauges. The result is a local and power-counting renormalizable action, free of infinitesimal Gribov copies. Moreover, the interpolating tree-level gluon propagator is derived. (orig.)
Fractional Calculus of Coalescence Hidden-Variable Fractal Interpolation Functions
Prasad, Srijanani Anurag
The Riemann-Liouville fractional calculus of the Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) is studied in this paper. It is shown that the fractional integral of order ν of a CHFIF defined on any interval [a,b] is also a CHFIF, albeit passing through different interpolation points. Further, conditions for the fractional derivative of order ν of a CHFIF are derived. It is shown that, under these conditions on the free parameters, the fractional derivative of order ν of a CHFIF defined on any interval [a,b] is also a CHFIF.
Interpolation and approximation by rational functions in the complex domain
Walsh, J L
1935-01-01
The present work is restricted to the representation of functions in the complex domain, particularly analytic functions, by sequences of polynomials or of more general rational functions whose poles are preassigned, the sequences being defined either by interpolation or by extremal properties (i.e. best approximation). Taylor's series plays a central role in this entire study, for it has properties of both interpolation and best approximation, and serves as a guide throughout the whole treatise. Indeed, almost every result given on the representation of functions is concerned with a generaliz
Geometries and interpolations for symmetric positive definite matrices
DEFF Research Database (Denmark)
Feragen, Aasa; Fuster, Andrea
2017-01-01
In this survey we review classical and recently proposed Riemannian metrics and interpolation schemes on the space of symmetric positive definite (SPD) matrices. We perform simulations that illustrate the problem of tensor fattening not only in the usually avoided Frobenius metric, but also...... the visualization of scale and shape variation in tensorial data. With the paper, we will release a software package with Matlab scripts for computing the interpolations and statistics used for the experiments in the paper (Code is available at https://sites.google.com/site/aasaferagen/home/software)....
Partitioning and interpolation based hybrid ARIMA–ANN model for ...
Indian Academy of Sciences (India)
Time series forecasting; ARIMA; ANN; partitioning and interpolation; Box–Jenkins methodology ... Further, on different experimental TSD like sunspots TSD and electricity price TSD, the proposed hybrid model is applied along with four existing state-of-the-art models and it is found that the proposed model outperforms all ...
Impact of gridpoint statistical interpolation scheme over Indian region
Indian Academy of Sciences (India)
A Global Data Assimilation and Forecasting (GDAF) system for providing medium range weather forecasts over the Indian region has been operational at the National Centre for Medium Range Weather Forecasting (NCMRWF), India since 1994. Its analysis system was based on the global Spectral Statistical Interpolation (SSI) scheme ...
Geostatistical interpolation for modelling SPT data in northern Izmir
Indian Academy of Sciences (India)
uncertainty related to 'data scatter' stems from the natural randomness of the system under consideration and ... Carvalho & Cavalheiro (2005) employed geostatistical methods and Fourier analyses to model the cross-hole ... ing grain size distribution, plasticity, strength parameters and water content, for interpolation and.
Improved Interpolation Kernels for Super-resolution Algorithms
DEFF Research Database (Denmark)
Rasti, Pejman; Orlova, Olga; Tamberg, Gert
2016-01-01
Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...
The Grand Tour via Geodesic Interpolation of 2-frames
Asimov, Daniel; Buja, Andreas
1994-01-01
Grand tours are a class of methods for visualizing multivariate data, or any finite set of points in n-space. The idea is to create an animation of data projections by moving a 2-dimensional projection plane through n-space. The path of planes used in the animation is chosen so that it becomes dense, that is, it comes arbitrarily close to any plane. One of the original inspirations for the grand tour was the experience of trying to comprehend an abstract sculpture in a museum. One tends to walk around the sculpture, viewing it from many different angles. A useful class of grand tours is based on the idea of continuously interpolating an infinite sequence of randomly chosen planes. Visiting randomly (more precisely: uniformly) distributed planes guarantees denseness of the interpolating path. In computer implementations, 2-dimensional orthogonal projections are specified by two 1-dimensional projections which map to the horizontal and vertical screen dimensions, respectively. Hence, a grand tour is specified by a path of pairs of orthonormal projection vectors. This paper describes an interpolation scheme for smoothly connecting two pairs of orthonormal vectors, and thus for constructing interpolating grand tours. The scheme is optimal in the sense that connecting paths are geodesics in a natural Riemannian geometry.
Scalable Intersample Interpolation Architecture for High-channel-count Beamformers
DEFF Research Database (Denmark)
Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt
2011-01-01
Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo ...
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
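The kind of experiment this article describes can be reproduced in a few lines: fit an interpolating cubic to e^x at four equally spaced nodes and examine the error function, which vanishes at the nodes and peaks between them. The interval, node placement, and degree are illustrative choices, not the article's specific examples.

```python
import numpy as np

# Cubic through four equally spaced samples of e^x on [0, 1].
nodes = np.linspace(0.0, 1.0, 4)
coeffs = np.polyfit(nodes, np.exp(nodes), deg=3)   # 4 points, degree 3: exact interpolation

# Worst-case error over a fine grid.
x = np.linspace(0.0, 1.0, 1001)
max_err = np.abs(np.polyval(coeffs, x) - np.exp(x)).max()

# The error vanishes (to rounding) at the interpolation nodes themselves.
node_err = np.abs(np.polyval(coeffs, nodes) - np.exp(nodes)).max()
```

The classical remainder bound |f(x) - p(x)| <= max|f''''|/4! * max|prod(x - x_i)| predicts the small but nonzero error between nodes.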
LIP: The Livermore Interpolation Package, Version 1.6
Energy Technology Data Exchange (ETDEWEB)
Fritsch, F. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-01-04
This report describes LIP, the Livermore Interpolation Package. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since it is a general-purpose package that need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature).
A generalised interpolating post–processing method for integral ...
African Journals Online (AJOL)
The interpolating post-processing method for integral equations has been demonstrated to be superior to the iteration method by Qun Lin, Shechua Zhang and Ningning Yan. They demonstrated that it is of order O(h^(2r+2)). This paper describes the generalization in the choice of h, the mesh size, which leads to a higher order of O ...
Upsilon-quaternion splines for the smooth interpolation of orientations.
Nielson, Gregory M
2004-01-01
We present a new method for smoothly interpolating orientation matrices. It is based upon quaternions and a particular construction of upsilon-spline curves. The new method has tension parameters and variable knot (time) spacing which both prove to be effective in designing and controlling key frame animations.
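Quaternion splines of this kind build on spherical linear interpolation (slerp) between key orientations; the upsilon-spline construction with tension parameters and variable knot spacing is beyond a short sketch, but the underlying slerp primitive is standard:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0, q1 = q0 / np.linalg.norm(q0), q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:            # take the shorter arc
        q1, dot = -q1, -dot
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    if theta < 1e-8:         # nearly identical orientations
        return q0
    s = np.sin(theta)
    return (np.sin((1 - t) * theta) / s) * q0 + (np.sin(t * theta) / s) * q1

# Identity vs. a 90-degree rotation about z: halfway should be 45 degrees.
q_id = np.array([1.0, 0.0, 0.0, 0.0])                       # (w, x, y, z)
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_half = slerp(q_id, q_z90, 0.5)
```

Slerp gives constant angular velocity between two key frames; spline constructions such as the one in this paper chain such segments while controlling continuity and tension across the knots.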
Space-Mapping-Based Interpolation for Engineering Optimization
DEFF Research Database (Denmark)
Koziel, Slawomir; Bandler, John W.; Madsen, Kaj
2006-01-01
We consider a simple and efficient space-mapping (SM) based interpolation scheme to work in conjunction with SM optimization algorithms. The technique is useful if the fine model (the one that is supposed to be optimized) is available only on a structured grid. It allows us to estimate the respon...
Voluntary activation of trapezius measured with twitch interpolation
DEFF Research Database (Denmark)
Taylor, Janet L; Olsen, Henrik Baare; Sjøgaard, Gisela
2009-01-01
This study investigated the feasibility of measuring voluntary activation of the trapezius muscle with twitch interpolation. Subjects (n=8) lifted the right shoulder or both shoulders against fixed force transducers. Stimulation of the accessory nerve in the neck was used to evoke maximal twitches...
Multivariable operator-valued Nevanlinna-Pick interpolation: a survey
Ball, J.A.; ter Horst, S.|info:eu-repo/dai/nl/298809877
2010-01-01
The theory of Nevanlinna-Pick and Carathéodory-Fejér interpolation for matrix- and operator-valued Schur class functions on the unit disk is now well established. Recent work has produced extensions of the theory to a variety of multivariable settings, including the ball and the polydisk (both
Interpolation on sparse Gauss-Chebyshev grids in higher dimensions
F. Sprengel
1998-01-01
In this paper, we give a unified approach to error estimates for interpolation on sparse Gauss-Chebyshev grids for multivariate functions from Besov-type spaces with dominating mixed smoothness properties. The error bounds obtained for this method are almost optimal for the considered
Interpolant Tree Automata and their Application in Horn Clause Verification
Directory of Open Access Journals (Sweden)
Bishoksan Kafle
2016-07-01
Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.
Interpolation Methods for Dunn Logics and Their Extensions
S. Wintein (Stefan); Muskens, R. (Reinhard)
2017-01-01
The semantic valuations of classical logic, strong Kleene logic, the logic of paradox and the logic of first-degree entailment all respect the Dunn conditions: we call them Dunn logics. In this paper, we study the interpolation properties of the Dunn logics and extensions of these
Some observations on interpolating gauges and non-covariant gauges
Indian Academy of Sciences (India)
tion that are not normally taken into account in the BRST formalism that ignores the ε-term, and that they are characteristic of the way the singularities in propagators are handled. We argue that a prescription, in general, will require renormalization; if at all it is to be viable. Keywords. Non-covariant gauges; interpolating ...
Fractional Calculus of Fractal Interpolation Function on [0,b] (b>0)
Directory of Open Access Journals (Sweden)
XueZai Pan
2014-01-01
Full Text Available The paper studies the continuity of the fractional-order integral of a fractal interpolation function on [0,+∞) and judges whether the fractional-order integral of a fractal interpolation function is still a fractal interpolation function on [0,b] (b>0) or not. Relevant theorems of iterated function systems and Riemann-Liouville fractional-order calculus are used to prove the above content. The conclusion indicates that the fractional-order integral of a fractal interpolation function is a continuous function on [0,+∞) and is still a fractal interpolation function on the interval [0,b].
Interpolation with prediction-error filters and training data
Curry, William
With finite capital and logistical means, interpolation is a necessary part of seismic processing, especially for such data-dependent methods as surface-related multiple elimination. One such method is to first estimate a prediction-error filter (PEF) on a training data set, and then use that PEF to estimate missing data, where each step is a least-squares problem. This approach is useful because it can interpolate multiple simultaneous, aliased slopes, but requires regularly-sampled data. I adapt this approach to interpolate irregularly-sampled data, marine data with a large near-offset gap, and 3D prestack marine data with many dimensions. I estimate a PEF from irregularly-sampled data in order to interpolate these data onto a regular grid. I do this by regridding the data onto multiple different grids and estimate a PEF simultaneously on all of the regridded data. I use this approach to interpolate both irregularly-sampled 3D synthetic data and 2D prestack land data using nonstationary PEFs. Marine data typically contains a near-offset gap of several traces, which can be larger when surface obstacles are present, such as offshore platforms. Most methods that depend on lower-frequency information from the data fail for these large gaps. I estimate nonstationary PEFs from pseudoprimary data, which is generated by cross-correlating data within each shot, so that the correlation of multiples with primaries creates data at the near offsets that were not originally recorded. I use this approach in t-x-y and f-x-y, on both the Sigsbee2B 2D prestack synthetic dataset, and a 2D prestack field data set. I also explore the feasibility of this approach for 3D data. Finally, I estimate nonstationary PEFs in many dimensions using the approximation that slope is constant as a function of frequency, and interpolate data in two, three, four, and five dimensions simultaneously by using nonstationary PEFs on frequency slices. I interpolate both prestack 3D synthetic as well as
Concentration and microheterogeneity of acute-phase proteins in patients with systemic sclerosis
Izabela Domysławska; Klimiuk, Piotr A.; Agnieszka Sulik; Stanisław Sierakowski
2010-01-01
The concentration and microheterogeneity of acute-phase proteins (APPs) change in acute and chronic inflammatory states. Qualitative changes of some acute-phase proteins are referred to as major microheterogeneity. Two-directional affinity electrophoresis with concanavalin A (ConA) as the ligand has been successfully applied to assess the microheterogeneity of acute-phase glycoproteins. Determining the concentration and microheterogeneity of APPs may be useful in the early diagnosis and prognosis of chronic inflammatory...
Accuracy of stream habitat interpolations across spatial scales
Sheehan, Kenneth R.; Welsh, Stuart A.
2013-01-01
Stream habitat data are often collected across spatial scales because relationships among habitat, species occurrence, and management plans are linked at multiple spatial scales. Unfortunately, scale is often a factor limiting the insight gained from spatial analysis of stream habitat data. Considerable cost is often expended to collect data at several spatial scales to provide accurate evaluation of spatial relationships in streams. To assess the utility of a single-scale stream habitat dataset used at varying scales, we examined the influence of data scaling on the accuracy of natural neighbor predictions of depth, flow, and benthic substrate. To achieve this goal, we measured two streams at a gridded resolution of 0.33 × 0.33 meter cell size over a combined area of 934 m2 to create a baseline for natural neighbor interpolated maps at 12 incremental scales ranging from a raster cell size of 0.11 m2 to 16 m2. Analysis of the predictive maps showed a logarithmic linear decay pattern in RMSE values of interpolation accuracy as the resolution of the data used to interpolate the study areas became coarser. The proportional accuracy of the interpolated models (r2) decreased, but remained as high as 78% as the interpolation scale moved from 0.11 m2 to 16 m2. The results indicated that accuracy retention was suitable for assessment and management purposes at various scales different from the data collection scale. Our study is relevant to spatial modeling, fish habitat assessment, and stream habitat management because it highlights the potential of using a single dataset to fulfill analysis needs rather than investing considerable cost to develop several scaled datasets.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observation points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately on wet days. This process effectively captured precipitation occurrence, amount, and spatial correlation. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error than the directly interpolated inputs. © 2011 Springer-Verlag.
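The proposed two-step scheme can be sketched on synthetic data: a logistic model (fitted here by plain gradient descent) for wet/dry occurrence, then a least-squares amount model fitted on wet days only. The predictors, coefficients, and data are invented for illustration and do not reproduce the authors' basins or regressors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stations: predictors (x, y, elevation); both the chance of a wet
# day and the wet-day amount increase with elevation (illustrative only).
X = rng.uniform(0.0, 1.0, size=(200, 3))
wet = (rng.uniform(size=200) < 0.3 + 0.5 * X[:, 2]).astype(float)
amount = np.where(wet == 1, 5.0 + 20.0 * X[:, 2] + rng.normal(0.0, 1.0, 200), 0.0)

def fit_logistic(X, y, iters=2000, lr=0.5):
    """Plain gradient-descent logistic regression for the occurrence step."""
    A = np.c_[np.ones(len(X)), X]
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w -= lr * A.T @ (p - y) / len(y)
    return w

# Step 1: occurrence model. Step 2: amount model, wet days only.
w_occ = fit_logistic(X, wet)
A_wet = np.c_[np.ones(int(wet.sum())), X[wet == 1]]
beta, *_ = np.linalg.lstsq(A_wet, amount[wet == 1], rcond=None)

# Expected precipitation at an ungauged point = P(wet) * E[amount | wet].
x_new = np.array([0.5, 0.5, 0.8])
p_wet = 1.0 / (1.0 + np.exp(-(np.r_[1.0, x_new] @ w_occ)))
est = p_wet * (np.r_[1.0, x_new] @ beta)
```

Separating occurrence from amount is what lets the scheme reproduce the intermittency of daily precipitation that a single regression smears out.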
Jing, Wang; Zhu, Hui; Guo, Hongbo; Zhang, Yan; Shi, Fang; Han, Anqin; Li, Minghuan; Kong, Li; Yu, Jinming
2015-01-01
We conducted a retrospective analysis to assess the feasibility of involved field irradiation (IFI) in elderly patients with esophageal squamous cell cancer (ESCC). We performed a retrospective review of the records of elderly patients (≥ 70 years) with unresectable ESCC and no distant metastases who received treatment with radiotherapy between January 2009 and March 2013. According to the irradiation volume, patients were allocated to either the elective nodal irradiation (ENI) group or the IFI group. Overall survival (OS), progression-free survival (PFS) and treatment-related toxicities were compared between the two groups. A total of 137 patients were enrolled. Fifty-four patients (39.4%) were allocated to the ENI group and 83 patients (60.6%) to the IFI group; the median doses in the two groups were 60 Gy and 59.4 Gy, respectively. For the entire cohort, the median survival time (MST) and PFS were 16 months and 12 months, respectively. The median PFS and 3-year PFS rate in the ENI group were 13 months and 20.6%, compared to 11 months and 21.0% in the IFI group (p = 0.61). The MST and 3-year OS rate in the ENI and IFI groups were 17 months and 26.4% and 15.5 months and 21.7%, respectively (p = 0.25). The rate of grade ≥ 3 acute irradiation esophagitis in the ENI group was significantly higher than that in the IFI group (18.5% vs. 6.0%; p = 0.027). Other grade ≥ 3 treatment-related toxicities did not significantly differ between the two groups. IFI resulted in decreased irradiation toxicities without sacrificing OS in elderly patients with ESCC.
Directory of Open Access Journals (Sweden)
S. Ly
2011-07-01
Full Text Available Spatial interpolation of precipitation data is of great importance for hydrological modelling. Geostatistical methods (kriging) are widely applied in spatial interpolation from point measurements to continuous surfaces. The first step in kriging computation is semi-variogram modelling, which usually uses only one variogram model for all the data. The objective of this paper was to develop different algorithms of spatial interpolation for daily rainfall on 1 km² regular grids in the catchment area and to compare the results of geostatistical and deterministic approaches. This study relied on 30 years of daily rainfall data from 70 raingages in the hilly landscape of the Ourthe and Ambleve catchments in Belgium (2908 km²). This area lies between 35 and 693 m in elevation and consists of river networks which are tributaries of the Meuse River. For the geostatistical algorithms, seven semi-variogram models (logarithmic, power, exponential, Gaussian, rational quadratic, spherical and penta-spherical) were fitted to the sample semi-variogram on a daily basis. These seven variogram models were also adopted to avoid negative interpolated rainfall. The elevation, extracted from a digital elevation model, was incorporated into multivariate geostatistics. Seven validation raingages and cross validation were used to compare the interpolation performance of these algorithms applied to different densities of raingages. We found that, among the seven variogram models used, the Gaussian model most frequently provided the best fit. Using seven variogram models can avoid negative daily rainfall in ordinary kriging. Negative kriging estimates were observed more for convective than for stratiform rain. The performance of the different methods varied slightly according to the density of raingages, particularly between 8 and 70 raingages, but it was much different for interpolation using 4 raingages. Spatial interpolation with the geostatistical and
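For readers unfamiliar with the kriging computation described above, a minimal ordinary kriging sketch with one of the seven variogram models (Gaussian) follows. The gauge layout, rainfall values, and variogram parameters are invented for illustration; a small nugget is kept on the diagonal, which also stabilizes the otherwise ill-conditioned Gaussian-variogram system.

```python
import numpy as np

def gaussian_variogram(h, nugget, sill, rng_):
    """Gaussian semi-variogram model, one of the seven fitted in the paper."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-(h / rng_) ** 2))

def ordinary_kriging(xy, z, xy0, vario):
    """Ordinary kriging estimate at point xy0 from observations (xy, z)."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # Kriging system: variogram matrix bordered with ones (unbiasedness),
    # plus a Lagrange multiplier in the last row/column.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vario(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = vario(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z        # weights sum to 1 by construction

rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, size=(25, 2))                    # gauge coordinates (km)
z = np.sin(xy[:, 0] / 3.0) + 0.1 * rng.normal(size=25)   # daily rainfall proxy
vario = lambda h: gaussian_variogram(h, nugget=0.01, sill=1.0, rng_=3.0)
z0 = ordinary_kriging(xy, z, np.array([5.0, 5.0]), vario)
```

Fitting a variogram model per day, as the paper does, would amount to re-estimating `nugget`, `sill`, and `rng_` from each day's sample semi-variogram before solving this system.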
PM10 data assimilation over Europe with the optimal interpolation method
Directory of Open Access Journals (Sweden)
B. Sportisse
2009-01-01
Full Text Available This paper presents experiments of PM10 data assimilation with the optimal interpolation method. The observations are provided by BDQA (Base de Données sur la Qualité de l'Air), whose monitoring network covers France. Two other databases (EMEP and AirBase) are used to evaluate the improvements in the analyzed state over January 2001 and for several outputs (PM10, PM2.5 and chemical composition). The method is then applied in operational-forecast conditions. It is found that the assimilation of PM10 observations significantly improves the one-day forecast of total mass (PM10 and PM2.5), whereas the improvement is not significant for the two-day forecast. The errors on aerosol chemical composition are sometimes amplified by the assimilation procedure, which shows the need for chemical data. Since the observations cover a limited part of the domain (France versus Europe) and since the method used for assimilation is sequential, we focus on the horizontal and temporal impacts of the assimilation and we study how several parameters of the assimilation system modify these impacts. The strategy followed in this paper, with the optimal interpolation, could be useful for operational forecasts. Meanwhile, considering the weak temporal impact of the approach (about one day), the method has to be improved or other methods have to be considered.
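The optimal interpolation analysis used in the paper follows the standard BLUE update x_a = x_b + K(y − H x_b). The sketch below applies it on a toy one-dimensional PM10 grid; the grid size, covariance shapes, and all numbers are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

# Background (model forecast) PM10 field on a 1-D grid of 8 cells,
# with two station observations.
nx = 8
x_b = np.full(nx, 20.0)                           # background PM10 (µg/m³)
H = np.zeros((2, nx))
H[0, 2] = H[1, 5] = 1.0                           # stations at cells 2 and 5
y = np.array([30.0, 26.0])                        # observed PM10 (µg/m³)

# Background error covariance with exponential spatial correlation;
# observation errors assumed uncorrelated.
i = np.arange(nx)
B = 16.0 * np.exp(-np.abs(i[:, None] - i[None, :]) / 2.0)
R = 4.0 * np.eye(2)

# Optimal interpolation (BLUE) update.
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)      # gain matrix
x_a = x_b + K @ (y - H @ x_b)                     # analysis
```

The spatial correlation in `B` is what spreads each station's innovation to neighbouring cells, which is exactly the horizontal impact of the assimilation studied in the paper.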
An improved algorithm of three B-spline curve interpolation and simulation
Zhang, Wanjun; Xu, Dongmei; Meng, Xinhong; Zhang, Feng
2017-03-01
As a key interpolation technique in CNC machine tool systems, the three B-spline curve interpolator has been proposed to overcome the drawbacks of linear and circular interpolators, such as long interpolation times and step errors that are difficult to control. This paper proposes an improved algorithm for three B-spline curve interpolation and its simulation. The proposed modification of the three B-spline curve interpolation algorithm was implemented and verified experimentally in MATLAB 7.0. The simulation results show that the algorithm is correct and meets the requirements of three B-spline curve interpolation.
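A B-spline curve interpolator ultimately has to step along the curve by evaluating it at successive parameter values. The sketch below evaluates a uniform cubic B-spline from its control points using the standard basis matrix; the control polygon and sampling are invented for illustration, and no CNC-specific step-error control is attempted.

```python
import numpy as np

# Uniform cubic B-spline basis matrix (standard result).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]], dtype=float) / 6.0

def bspline_point(ctrl, u):
    """Evaluate a uniform cubic B-spline curve at u in [0, 1),
    with u spanning the n-3 segments of the whole curve."""
    n = len(ctrl)
    t = u * (n - 3)                     # global parameter -> segment + local s
    seg = min(int(t), n - 4)
    s = t - seg
    S = np.array([s ** 3, s ** 2, s, 1.0])
    return S @ M @ ctrl[seg:seg + 4]    # blend 4 local control points

ctrl = np.array([[0, 0], [1, 2], [2, 3], [3, 3], [4, 2], [5, 0]], float)
# Sample the curve densely, as an interpolator stepping along it would.
pts = np.array([bspline_point(ctrl, u) for u in np.linspace(0, 0.999, 50)])
```

A real interpolator would choose the parameter increment adaptively from the programmed feed rate and the local curve speed, which is where the step-error control discussed in the abstract comes in.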
Muñoz, Randy; Paredes, Javier; Huggel, Christian; Drenkhan, Fabian; García, Javier
2017-04-01
The availability and consistency of data is a determining factor for the reliability of any hydrological model and its simulated results. Unfortunately, there are many regions worldwide where data are not available in the desired quantity and quality. The Santa River basin (SRB), located within a complex topographic and climatic setting in the tropical Andes of Peru, is a clear example of this challenging situation. A monitoring network of in-situ stations in the SRB recorded series of hydro-meteorological variables but finally ceased to operate in 1999. In the following years, several researchers evaluated and completed many of these series. This database was used by multiple research and policy-oriented projects in the SRB. However, hydroclimatic information remains limited, making it difficult to perform research, especially when dealing with the assessment of current and future water resources. In this context, we present an evaluation of different methodologies to interpolate temperature and precipitation data at a monthly time step, as well as ice volume data, in glacierized basins with limited data. The methodologies were evaluated for the Quillcay River, a tributary of the SRB, where hydro-meteorological data have been available from nearby monitoring stations since 1983. The study period was 1983-1999, with a validation period from 1993 to 1999. For the temperature series the aim was to extend the observed data and interpolate them. Data from the NCEP reanalysis were used to extend the observed series: 1) using a simple correlation with multiple field stations, or 2) applying the altitudinal correction proposed in previous studies. The interpolation was then applied as a function of altitude. Both methodologies provide very similar results; by parsimony, the simple correlation is a viable choice. For the precipitation series, the aim was to interpolate observed data. Two methodologies were evaluated: 1) Inverse Distance Weighting, whose results underestimate the amount
Reproducing Kernel Hilbert space method for optimal interpolation of potential field data.
Maltz, J; De Mello Koch, R; Willis, A
1998-01-01
The RKHS-based optimal image interpolation method, presented by Chen and de Figueiredo (1993), is applied to scattered potential field measurements. The RKHS that admits only interpolants consistent with Laplace's equation is defined, and its kernel is derived. The algorithm is compared to bicubic spline interpolation and is found to yield vastly superior results.
Digital interpolators for polar format processing. [of synthetic aperture radar images
Adams, John W.; Hudson, Ralph E.; Bayma, Robert W.; Nelson, Jeffrey E.
1989-01-01
The polar format approach to SAR image formation requires data to be interpolated from a warped grid onto a Cartesian lattice. In general, this requires that data be interpolated between varying sampling rates. In this paper, frequency-domain optimality criteria for polar format interpolators are defined and justified, and an approach to designing the corresponding digital filters is described.
On the efficiency and accuracy of interpolation methods for spectral codes
van Hinsberg, M.A.T.; ten Thije Boonkkamp, J.H.M.; Toschi, F.; Clercx, H.J.H.
2012-01-01
In this paper a general theory for interpolation methods on a rectangular grid is introduced. By the use of this theory an efficient B-spline-based interpolation method for spectral codes is presented. The theory links the order of the interpolation method with its spectral properties. In this way
Runoff Interpolation and Budyko Framework over 300 Catchments across China
Qiu, Ning
2017-04-01
The Budyko hypothesis states that mean annual evapotranspiration is largely determined by precipitation and potential evapotranspiration, and it can be adopted to estimate mean annual actual evapotranspiration. In this study, Fu's equation, derived from the Budyko hypothesis, is first tested using mean annual streamflow and meteorological data from over 300 hydrological stations in ten main basins in China. Results show that significant differences arise in the application of Fu's equation among basins. Secondly, the relationship between the single parameter ω in Fu's equation and climatic and human factors was built to reveal its temporal variation. Meanwhile, the spatial structure of the regionalized variable ω, including spatial autocorrelation and locality, was analyzed. Then a stochastic interpolation scheme based on geostatistical interpolation, with an added constraint of global water balance in the river system, is developed to map ω and runoff, with the aim of predicting runoff in target partitions of the main basins and comparing the results with those computed using the Budyko hypothesis.
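Fu's equation referred to above expresses the evaporation ratio E/P as a function of the aridity index φ = PET/P and the single parameter ω. A minimal numeric sketch, with illustrative catchment values rather than data from the 300 stations, is:

```python
import numpy as np

def fu_evaporation_ratio(phi, omega):
    """Fu's equation: E/P = 1 + phi - (1 + phi**omega)**(1/omega),
    where phi = PET/P is the aridity index."""
    return 1.0 + phi - (1.0 + phi ** omega) ** (1.0 / omega)

# Mean annual values for a hypothetical catchment (illustrative numbers).
P, PET, omega = 800.0, 1200.0, 2.6       # mm/yr, mm/yr, fitted parameter
E = P * fu_evaporation_ratio(PET / P, omega)
Q = P - E                                # long-term water balance: runoff
```

Mapping ω spatially, as the paper proposes, lets runoff at ungauged locations be predicted through exactly this water-balance step.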
The Nonlocal p-Laplacian Evolution for Image Interpolation
Directory of Open Access Journals (Sweden)
Yi Zhan
2011-01-01
Full Text Available This paper presents an image interpolation model with nonlocal p-Laplacian regularization. The nonlocal p-Laplacian regularization overcomes the drawback of the partial differential equation (PDE) proposed by Belahmidi and Guichard (2004), in which image density diffuses in the directions pointed to by the local gradient. Under the control of the proposed model, the grey values of images diffuse along the image feature direction rather than the gradient direction, that is, with minimal smoothing in the directions across the image features and maximal smoothing in the directions along the image features. The total regularizer combines the advantages of nonlocal p-Laplacian regularization and total variation (TV) regularization (preserving discontinuities and 1D image structures). The derived model efficiently reconstructs the real image, leading to a natural interpolation with reduced blurring and staircase artifacts. We present experimental results that prove the potential and efficacy of the method.
Evaluating the Power of GPU Acceleration for IDW Interpolation Algorithm
Directory of Open Access Journals (Sweden)
Gang Mei
2014-01-01
Full Text Available We first present two GPU implementations of the standard Inverse Distance Weighting (IDW) interpolation algorithm: the tiled version, which takes advantage of shared memory, and the CDP version, which is implemented using CUDA Dynamic Parallelism (CDP). Then we evaluate the power of GPU acceleration for the IDW interpolation algorithm by comparing the performance of the CPU implementation with three GPU implementations, that is, the naive version, the tiled version, and the CDP version. Experimental results show that the tiled version achieves speedups of 120x and 670x over the CPU version when the power parameter p is set to 2 and 3.0, respectively. In addition, compared to the naive GPU implementation, the tiled version is about two times faster. However, the CDP version is 4.8x∼6.0x slower than the naive GPU version, and therefore does not have any potential advantages in practical applications.
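For reference, the standard IDW estimator that all four implementations compute is straightforward. A CPU baseline in NumPy might look like the following; the point set and test field are invented for illustration:

```python
import numpy as np

def idw(xy, z, xy0, p=2.0, eps=1e-12):
    """Standard Inverse Distance Weighting estimate at query points xy0.

    Weight of data point i at query q is 1 / d(q, i)**p; eps guards
    against division by zero when a query coincides with a data point.
    """
    d = np.linalg.norm(xy0[:, None, :] - xy[None, :, :], axis=2)
    w = 1.0 / (d ** p + eps)
    return (w @ z) / w.sum(axis=1)      # weighted average per query point

rng = np.random.default_rng(2)
xy = rng.uniform(0, 100, size=(50, 2))       # scattered data locations
z = xy[:, 0] * 0.5 + xy[:, 1] * 0.2          # a smooth test field
query = np.array([[50.0, 50.0], [10.0, 90.0]])
est = idw(xy, z, query, p=2.0)
```

Since every query point is independent, the loop over queries is trivially parallel, which is what makes IDW such a natural fit for the GPU implementations benchmarked in the paper.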
Reconstruction from Sparsely Sampled Data by ART with Interpolated Rays.
Kouris, K; Tuy, H; Lent, A; Herman, G T; Lewitt, R M
1982-01-01
After a brief discussion of the algebraic reconstruction techniques (ART), we introduce the attenuation problem in positron emission tomography (PET). We anticipate that a generalization of ART, the so-called cyclic subgradient projection (CSP) method, may be useful for solving this problem. This, however, has not been successfully realized, due to the fact that data collected by our proposed stationary PET detector ring are too sparsely sampled. That this is, in fact, a major problem is demonstrated by showing that ordinary ART produces reconstructions with unacceptably strong artifacts even on perfect (no attenuation) data collected according to the PET geometry. We demonstrate that the source of this artifact is the sparse sampling, and we propose the use of interpolated rays to overcome the problem. This approach is successful, as is illustrated by showing reconstructions from sparsely sampled data by ART with interpolated rays.
Interpolation of missing wind data based on ANFIS
Energy Technology Data Exchange (ETDEWEB)
Yang, Zhiling [Energy and Power Engineering School, North China Electric Power University, Beijing 102206 (China); Liu, Yongqian [Renewable Energy School, North China Electric Power University, Beijing 102206 (China); Li, Chengrong [Electrical and Electronic Engineering School, North China Electric Power University, Beijing 102206 (China)
2011-03-15
Measured wind data is one of the key inputs for wind farm planning and design. There are always some missing and invalid data in wind measurements, which poses one of the main challenges for wind energy resource assessment. In this paper, the rules of integrity checking and reasonableness checking are introduced, and an adaptive neuro-fuzzy inference system (ANFIS) model is proposed in which fuzzy inference algorithms are used to interpolate the missing and invalid wind data. A comparison and analysis between the calculated results and the measured data is given. Using the wind shear coefficient method and ANFIS, 12 measured wind data sets from a wind farm in North China are interpolated and analyzed, respectively. The results prove the effectiveness of ANFIS. (author)
Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.
Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott
2009-05-25
Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
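The interpolation strategies studied in the paper estimate each polarizer channel on the full grid before Stokes estimation. A minimal bilinear (tent-kernel) fill of one sparsely sampled channel might look like the following; the 2x2 microgrid layout and the synthetic intensity ramp are assumptions made for the demonstration.

```python
import numpy as np

def upsample_channel(img, mask):
    """Bilinearly fill a sparsely sampled micro-polarizer channel.

    img: full-resolution intensity frame; mask: True where this
    polarizer orientation was actually measured.
    """
    filled = img * mask
    k = np.array([0.5, 1.0, 0.5])                      # 1-D tent kernel
    def conv1(a, axis):
        return np.apply_along_axis(lambda v: np.convolve(v, k, "same"), axis, a)
    # Separable tent convolution of values and of sample weights,
    # then normalize so partially covered pixels are handled correctly.
    num = conv1(conv1(filled, 0), 1)
    den = conv1(conv1(mask.astype(float), 0), 1)
    return num / np.maximum(den, 1e-12)

# Synthetic 8x8 frame with a 2x2 microgrid: the 0-degree orientation
# occupies the even-row/even-column sites.
frame = np.fromfunction(lambda i, j: 10 + i + 2 * j, (8, 8))
mask0 = np.zeros((8, 8), bool)
mask0[0::2, 0::2] = True
chan0 = upsample_channel(frame, mask0)
```

On this linear ramp the bilinear fill is exact away from the last row and column; the residual IFOV error that the paper quantifies appears only where the scene varies faster than the microgrid pitch.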
Dynamic Stability Analysis Using High-Order Interpolation
Directory of Open Access Journals (Sweden)
Juarez-Toledo C.
2012-10-01
Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
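Newton's divided differences, one of the two interpolation schemes named above, can be implemented in a few lines. The trajectory samples below are illustrative, not output from the 45-machine model:

```python
import numpy as np

def newton_coeffs(x, y):
    """Coefficients of Newton's divided-difference interpolating polynomial."""
    c = np.array(y, dtype=float)
    for j in range(1, len(x)):
        # In-place divided-difference table update (standard vectorized form).
        c[j:] = (c[j:] - c[j - 1:-1]) / (x[j:] - x[:-j])
    return c

def newton_eval(x, c, t):
    """Evaluate the Newton-form polynomial at t via Horner's scheme."""
    r = c[-1]
    for k in range(len(c) - 2, -1, -1):
        r = r * (t - x[k]) + c[k]
    return r

# A rotor-angle-like trajectory sampled at a few instants (illustrative).
t_s = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
delta = np.sin(2 * np.pi * t_s)
c = newton_coeffs(t_s, delta)
```

The Newton form is convenient here because adding a new sample point only appends one coefficient, rather than refitting the whole polynomial.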
Accuracy Analysis of DEM Generated from Cokriging Interpolators
Setiyoko, A.; Arymurthy, A. M.
2017-10-01
DEM as a representation of the earth's surface has many functions for spatial analysis. A DEM can be produced by several kinds of techniques, such as stereo optical satellite technology or radar technology. A problem when using optical stereo data is that the height points, although dense, are not evenly distributed: in regions with a homogeneous character, the height points become sparse. This affects DEM accuracy. To address this problem, a fusion technique using the cokriging interpolation method was applied, combining ALOS PRISM height points with SRTM data. The sparse height points derived from ALOS PRISM over some objects are expected to be enhanced by using the SRTM data. Several aspects influence the accuracy of the DEM derived from this process: the character of the topography, land cover types, the density of the height points in the data, and the particular type of interpolation method used.
Interpolated Sounding and Gridded Sounding Value-Added Products
Energy Technology Data Exchange (ETDEWEB)
Jensen, M. P. [Brookhaven National Laboratory (BNL), Upton, NY (United States); Toto, T. [Brookhaven National Laboratory (BNL), Upton, NY (United States)
2016-03-01
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data is provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations.
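Between soundings the VAP interpolates each state variable linearly in time at every height level. The sketch below reproduces that idea for two synthetic temperature profiles; the height grid, launch spacing, and lapse-rate profile are illustrative assumptions, not ARM data.

```python
import numpy as np

# Two sonde temperature profiles on a common height grid, launched
# 6 hours apart (illustrative values).
heights = np.linspace(0, 10_000, 21)                 # m
t0, t1 = 0.0, 360.0                                  # launch times (minutes)
temp0 = 15.0 - 6.5e-3 * heights                      # °C, standard lapse rate
temp1 = temp0 + 2.0                                  # uniform 2 °C warming

# 1-minute output grid, matching the VAP's time resolution.
minutes = np.arange(t0, t1 + 1.0)
frac = (minutes - t0) / (t1 - t0)
# Linear interpolation in time, independently at each height level.
temp = temp0[None, :] + frac[:, None] * (temp1 - temp0)[None, :]
```

The real product does this for every atmospheric state variable and stitches consecutive sonde pairs into continuous daily files on the fixed 332-level grid.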
Interpolated pressure laws in two-fluid simulations and hyperbolicity
Helluy, Philippe; Jung, Jonathan
2014-01-01
We consider a two-fluid compressible flow. Each fluid obeys a stiffened gas pressure law. The continuous model is well defined without considering mixture regions. However, for numerical applications it is often necessary to consider artificial mixtures, because the two-fluid interface is diffused by the numerical scheme. We show that classic pressure law interpolations lead to a non-convex hyperbolicity domain and failure of well-known numerical schemes. We propose a physically relevant pres...
Steady State Stokes Flow Interpolation for Fluid Control
DEFF Research Database (Denmark)
Bhattacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert
2012-01-01
…suffer from a common problem: they fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures the rotational components and is suitable for controlling liquid animations where tangential motion is pronounced, such as in a breaking wave…
Does better rainfall interpolation improve hydrological model performance?
Bárdossy, András; Kilsby, Chris; Lewis, Elisabeth
2017-04-01
High spatial variability of precipitation is one of the main sources of uncertainty in rainfall/runoff modelling. Spatially distributed models require detailed space-time information on precipitation as input. In the past decades considerable effort was spent on improving precipitation interpolation from point observations. Different geostatistical methods such as Ordinary Kriging, External Drift Kriging or Copula-based interpolation can be used to find the best estimators for unsampled locations. The purpose of this work is to investigate to what extent more sophisticated precipitation estimation methods can improve model performance. For this purpose the Wye catchment in Wales was selected. The physically-based spatially-distributed hydrological model SHETRAN is used to describe the hydrological processes in the catchment. 31 raingauges with hourly temporal resolution are available for a time period of 6 years. In order to avoid the effect of model uncertainty, model parameters were not altered in this study. Instead, 100 random subsets consisting of 14 stations each were selected. For each of the configurations precipitation was interpolated for each time step using nearest neighbor (NN), inverse distance (ID) and Ordinary Kriging (OK). The variogram was obtained using the temporal correlation of the time series measured at different locations. The interpolated data were used as input for the spatially distributed model. Performance was evaluated for daily mean discharges using the Nash-Sutcliffe coefficient, temporal correlations, flow volumes and flow duration curves. The results show that the simplest NN and the sophisticated OK performances are practically equally good, while ID performed worse. NN was often better for high flows. The reason for this is that NN does not reduce the variance, while OK and ID yield smooth precipitation fields. The study points out the importance of precipitation variability and suggests the use of conditional spatial simulation as
A Direct Coarray Interpolation Approach for Direction Finding.
Chen, Tao; Guo, Muran; Guo, Limin
2017-09-19
Sparse arrays have gained considerable attention in recent years because they can resolve more sources than the number of sensors. The coprime array can resolve O(MN) sources with only O(M+N) sensors, and is a popular sparse array structure due to its closed-form expressions for the array configuration and the reduction of the mutual coupling effect. However, because of the existence of holes in its coarray, the performance of subspace-based direction of arrival (DOA) estimation algorithms such as MUSIC and ESPRIT is limited. Several coarray interpolation approaches have been proposed to address this issue. In this paper, a novel DOA estimation approach via direct coarray interpolation is proposed. By using direct coarray interpolation, the reshaping and spatial smoothing operations in coarray-based DOA estimation are not needed. Compared with existing approaches, the proposed approach achieves better accuracy with lower complexity. In addition, an improved angular resolution capability is obtained by using the proposed approach. Numerical simulations are conducted to validate the effectiveness of the proposed approach.
Reconstruction of surfaces from planar contours through contour interpolation
Sunderland, Kyle; Woo, Boyeong; Pinter, Csaba; Fichtinger, Gabor
2015-03-01
Segmented structures such as targets or organs at risk are typically stored as 2D contours contained on evenly spaced cross-sectional images (slices). Contour interpolation algorithms are implemented in radiation oncology treatment planning software to turn 2D contours into a 3D surface; however, the results differ between algorithms, causing discrepancies in analysis. Our goal was to create an accurate and consistent contour interpolation algorithm that can handle issues such as keyhole contours, rapid changes, and branching. This was primarily motivated by radiation therapy research using the open source SlicerRT extension for the 3D Slicer platform. The implemented algorithm triangulates the mesh by minimizing the length of edges spanning the contours with dynamic programming. The first step in the algorithm is removing keyholes from contours. Correspondence is then found between contour layers and branching patterns are determined. The final step is triangulating the contours and sealing the external contours. The algorithm was tested on contours segmented on computed tomography (CT) images. Cases such as inner contours, rapid changes in contour size, and branching were handled well by the algorithm when encountered individually. There were some special cases in which the simultaneous occurrence of several of these problems in the same location caused the algorithm to produce a suboptimal mesh. An open source contour interpolation algorithm was implemented in SlicerRT for reconstructing surfaces from planar contours. The implemented algorithm was able to generate qualitatively good 3D meshes from the sets of 2D contours for most tested structures.
Importance of interpolation and coincidence errors in data fusion
Directory of Open Access Journals (Sweden)
S. Ceccherini
2018-02-01
Full Text Available The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
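At its core, fusing two retrievals with known error covariances is an inverse-covariance weighted average; the full CDF method additionally handles averaging kernels, a priori terms, and the interpolation/coincidence errors discussed above, all of which this sketch omits. The profile values and covariances below are invented for illustration:

```python
import numpy as np

# Two retrieved ozone profiles on a common 5-level grid, with diagonal
# error covariances for simplicity (values illustrative).
x1 = np.array([30.0, 45.0, 60.0, 50.0, 35.0])    # e.g. ppbv, UV retrieval
x2 = np.array([32.0, 43.0, 62.0, 49.0, 33.0])    # e.g. ppbv, TIR retrieval
S1 = np.diag([4.0, 4.0, 9.0, 9.0, 4.0])
S2 = np.diag([9.0, 4.0, 4.0, 9.0, 9.0])

# Inverse-covariance (maximum-likelihood) fusion.
S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
S_f = np.linalg.inv(S1i + S2i)                   # fused error covariance
x_f = S_f @ (S1i @ x1 + S2i @ x2)                # fused profile
```

The fused covariance is always smaller than either input covariance, which is the gain the paper measures; mismatched grids or true profiles violate the assumptions behind this average, motivating the generalized CDF.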
Interpolation of daily rainfall using spatiotemporal models and clustering
Militino, A. F.
2014-06-11
Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitation and standard errors on a 1 km² grid over the whole region. © 2014 Royal Meteorological Society.
Economical Interpolator in a ΣΔ D/A Converter
Directory of Open Access Journals (Sweden)
Vytenis Puidokas
2011-03-01
Full Text Available The role of the interpolator in ΣΔ DACs is briefly discussed, and the general structure of the most common interpolators is summarized. More suitable interpolator structures are suggested and analyzed in comparison with a similar one. By changing the structure of the incomplete interpolator and optimizing its stages, it was possible to improve the amplitude transfer characteristic by 17 dB with fewer non-zero coefficients and far fewer FPGA resources. Experimental research on the full converter system (interpolator, modulator and output filter) showed that the designed interpolator (including the 17 dB gain) suits only a very limited set of modulators. Another version of the interpolator was therefore offered for the system, ensuring suppression of the additional frequency band across the whole system above 99 dB instead of the previous 66 dB (or 49 dB in the similar version of the interpolator). Article in Lithuanian
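The basic operation of any DAC interpolator stage is zero-stuffing followed by low-pass filtering to suppress the spectral images. The sketch below shows a generic 2x stage with a windowed-sinc halfband filter; it is an illustration of the principle, not the optimized multi-stage FPGA design discussed in the article.

```python
import numpy as np

def interpolate_2x(x, taps):
    """2x interpolation: zero-stuff, then low-pass FIR filter.

    The factor 2 restores the amplitude lost by inserting zeros.
    """
    up = np.zeros(2 * len(x))
    up[::2] = x
    return 2.0 * np.convolve(up, taps, mode="same")

# Windowed-sinc halfband low-pass (crude illustrative design): cutoff at
# a quarter of the output rate, so every second tap except the centre is 0.
n = np.arange(-15, 16)
h = np.sinc(n / 2.0) / 2.0 * np.hamming(31)

t = np.arange(64)
x = np.sin(2 * np.pi * t / 16.0)          # tone well below Nyquist
y = interpolate_2x(x, h)
```

A halfband design is popular here because its zero-valued taps halve the number of multiplies, which is exactly the kind of coefficient economy the article pursues on the FPGA.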
Fourier Interpolation of Sparsely and Irregularly Sampled Potential Field Data
Saleh, R.; Bailey, R. C.
2011-12-01
Sparsely and irregularly sampled values of potential fields on the Earth's surface need to be interpolated to be presented as maps. For display purposes, the choice of an interpolation method may be largely an aesthetic choice. However, if derived quantities such as spatial derivatives of the field are also required for display, it is important that interpolation respect the physics of Laplace's equation. Examples would be the derivation of equivalent surface currents for interpretation purposes from a magnetotelluric hypothetical event map of the horizontal magnetic fields, or the derivation of tensor gravity gradients from ground data for comparison with an airborne survey. Various methods for interpolating while respecting Laplace's equation date back nearly fifty years, to Dampney's 1969 equivalent source technique. In that and comparable methods, a set of effective sources below the Earth's surface is found which is consistent with the data, and used to calculate the field away from data locations. Because the interpolation is not unique, the source depth can be used as a parameter to maximally suppress the indeterminate high frequency part of the resulting map while retaining consistency with the data. Here, we take advantage of modern computing power to recast the interpolation problem as an inverse problem: that of determining the Fourier transform of the field at the Earth's surface subject to the constraints of fitting the data while minimizing a model norm which measures the high-frequency content of the resulting map. User decisions about the number of equivalent sources or their depths are not required. The approach is not fundamentally different from that used to determine planetary gravity or magnetic fields from satellite measurements, except that our application is designed for extremely under-sampled situations and is formulated in Cartesian coordinates. To avoid artificially constraining the frequency content of the resulting transform, we choose
Movable chain jacks and winches: case study of PETROBRAS' P58/P62 and ENI's Goliat
Energy Technology Data Exchange (ETDEWEB)
Grindheim, Reidar [Aker Pusnes AS, Arendal (Norway)
2012-07-01
Recently, Aker Solutions delivered a movable chain jack system for PETROBRAS's P58/P62 FPSOs and a movable windlass system for ENI's Goliat FPSO. This paper highlights the main differences between the two systems and when it is beneficial to employ movable systems. Many parameters must be considered in determining which system to use; in many cases a traditional system with a single winch or chain jack per mooring line may still be preferred. The movable chain jack concept is designed to operate multiple mooring lines within the same cluster: a single chain jack is lifted by a skidding gantry and moved from one mooring line to the next. Installation and messenger chains are moved using a large sliding chain locker, allowing for later offloading of the surplus chain. The movable windlass system is likewise designed to operate multiple mooring lines within the same cluster; in this case, however, the winch is rotary and can be driven by electric or hydraulic power. One of the main considerations is to move the windlass while keeping the mooring lines intact, without cutting them. (author)
Hirota, Ryuichi; Yamagata, Akira; Kato, Junichi; Kuroda, Akio; Ikeda, Tsukasa; Takiguchi, Noboru; Ohtake, Hisao
2000-01-01
Pulsed-field gel electrophoresis of PmeI digests of the Nitrosomonas sp. strain ENI-11 chromosome produced four bands ranging from 1,200 to 480 kb in size. Southern hybridizations suggested that a 487-kb PmeI fragment contained two copies of the amoCAB genes, coding for ammonia monooxygenase (designated amoCAB1 and amoCAB2), and three copies of the hao gene, coding for hydroxylamine oxidoreductase (hao1, hao2, and hao3). In this DNA fragment, amoCAB1 and amoCAB2 were about 390 kb apart, while hao1, hao2, and hao3 were separated by at least about 100 kb from each other. Interestingly, hao1 and hao2 were located relatively close to amoCAB1 and amoCAB2, respectively. DNA sequence analysis revealed that hao1 and hao2 shared 160 identical nucleotides immediately upstream of each translation initiation codon. However, hao3 showed only 30% nucleotide identity in the 160-bp corresponding region. PMID:10633121
Directory of Open Access Journals (Sweden)
Zhiwei Pan
2016-05-01
Full Text Available The recently proposed global look-up table strategy has been proven to be an efficient way to accelerate interpolation, the most time-consuming part of iterative sub-pixel digital image correlation (DIC) algorithms. In this paper, a global look-up table strategy with cubic B-spline interpolation is developed for the DIC method based on the inverse compositional Gauss-Newton (IC-GN) algorithm. The performance of this strategy, including accuracy, precision, and computation efficiency, is evaluated through a theoretical and experimental study, using the widely employed bicubic interpolation as a benchmark. The global look-up table strategy with cubic B-spline interpolation significantly improves the accuracy of the IC-GN algorithm-based DIC method compared with bicubic interpolation, at a trivial cost in computation efficiency.
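The cubic B-spline interpolation underlying this strategy can be sketched with SciPy (a minimal illustration, not the paper's look-up-table implementation): the B-spline coefficients are precomputed once, which is exactly the expensive step a global table amortizes, and sub-pixel intensities are then evaluated cheaply. The image here is synthetic.

```python
import numpy as np
from scipy import ndimage

# Synthetic smooth speckle-like image.
rng = np.random.default_rng(1)
img = ndimage.gaussian_filter(rng.random((64, 64)), sigma=2)

# Precompute cubic B-spline coefficients once (the expensive step that a
# global look-up table amortizes across all sub-pixel queries).
coeffs = ndimage.spline_filter(img, order=3)

def sample(y, x):
    """Cubic B-spline interpolation at sub-pixel (y, x) positions."""
    return ndimage.map_coordinates(coeffs,
                                   [np.atleast_1d(y), np.atleast_1d(x)],
                                   order=3, prefilter=False)

# Query between integer pixels, as needed at every IC-GN iteration.
val = sample(10.3, 20.7)
```

At integer pixel positions the spline reproduces the original intensities exactly, which is a quick sanity check on the precomputed coefficients.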
A Bidirectional Flow Joint Sobolev Gradient for Image Interpolation
Directory of Open Access Journals (Sweden)
Yi Zhan
2013-01-01
Full Text Available An energy functional with bidirectional flow is presented to sharpen an image by reducing its edge width: it performs forward diffusion on the brighter side of an edge ramp and backward diffusion on the darker side. We first consider the diffusion equations as L2 gradient flows on integral functionals and then modify the inner product from L2 to a Sobolev inner product. The experimental results demonstrate that our model efficiently reconstructs the real image, leading to a natural interpolation with reduced blurring and staircase artifacts, while better preserving the texture features of the image.
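The key modification, replacing the L2 inner product by a Sobolev one, can be sketched independently of the paper's particular functional. Under periodic boundary assumptions, the Sobolev (H1) gradient is obtained from the L2 gradient by solving (I - lam*Laplacian) g_s = g_l2, done here spectrally; `lam` is an illustrative parameter.

```python
import numpy as np

def sobolev_gradient(g_l2, lam=1.0):
    """Precondition an L2 gradient field into a Sobolev (H1) gradient by
    solving (I - lam*Laplacian) g_s = g_l2 via FFT (periodic BCs).
    High frequencies are damped, giving smoother descent directions."""
    ky = np.fft.fftfreq(g_l2.shape[0]) * 2 * np.pi
    kx = np.fft.fftfreq(g_l2.shape[1]) * 2 * np.pi
    k2 = ky[:, None]**2 + kx[None, :]**2
    return np.real(np.fft.ifft2(np.fft.fft2(g_l2) / (1 + lam * k2)))
```

The zero-frequency component is untouched, so the mean of the gradient field is preserved while its high-frequency content is attenuated.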
Gravity Aided Navigation Precise Algorithm with Gauss Spline Interpolation
Directory of Open Access Journals (Sweden)
WEN Chaobin
2015-01-01
Full Text Available The compensation of gravity errors in the error equation should be thoroughly solved before high-precision gravity aided navigation can be studied. A model construction algorithm for gravity aided navigation is proposed, which approximates the local grid gravity anomaly field with 2D Gauss spline interpolation. The gravity disturbance vector, the standard gravity value error, and the Eotvos effect are all compensated in this precision model. The experimental results show that positioning accuracy is roughly doubled, attitude and velocity accuracy is improved two- to three-fold, and the positional error is kept within 100-200 m.
Twitch interpolation technique in testing of maximal muscle strength
DEFF Research Database (Denmark)
Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B
1993-01-01
The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...... of the preload was reduced. The relationship between twitch size and force was only linear for force levels greater than 25% of maximum. It was concluded that to achieve an accurate estimate of true maximal force of muscle contraction, it would be necessary for the subject to be able to perform at least 75...
Trends in Continuity and Interpolation for Computer Graphics.
Gonzalez Garcia, Francisco
2015-01-01
In every computer graphics oriented application today, it is common practice to texture 3D models to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields. It presents techniques that improve on existing state-of-the-art approaches to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).
A case of multivariate Birkhoff interpolation using high order derivatives
Goldman, Gil
2016-01-01
We consider a specific scheme of multivariate Birkhoff polynomial interpolation. Our samples are derivatives of various orders $k_j$ at fixed points $v_j$ along fixed straight lines through $v_j$ in directions $u_j$, under the following assumption: the total number of sampled derivatives of order $k,\ k=0,1,\ldots$ is equal to the dimension of the space of homogeneous polynomials of degree $k$. We show that this scheme is regular for general directions. Specifically this scheme is regular indep...
A New Interpolation Approach for Linearly Constrained Convex Optimization
Espinoza, Francisco
2012-08-01
In this thesis we propose a new class of linearly constrained convex optimization methods based on a generalization of Shepard's interpolation formula. We prove properties of the surface, such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques, such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement several versions of the method in the Matlab language, particularly for the case of quadratic programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and against implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
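For context, classical Shepard interpolation (the formula the thesis generalizes, not the thesis's own generalization) can be sketched in a few lines: values are blended with inverse-distance weights, which makes the interpolant exact at the sample sites and bounded by the sampled values.

```python
import numpy as np

def shepard(points, values, query, p=2.0):
    """Classical Shepard inverse-distance-weighted interpolation.
    points: (n, d) sample sites, values: (n,), query: (m, d)."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    out = np.empty(len(query))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():                 # exact interpolation at the nodes
            out[i] = values[hit][0]
        else:
            w = 1.0 / di**p
            out[i] = w @ values / w.sum()
    return out
```

Because the weights are nonnegative and sum to one, every interpolated value is a convex combination of the data, which is the property exploited when the formula is adapted to a feasible region.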
Construction of Large Period Symplectic Maps by Interpolative Methods
Energy Technology Data Exchange (ETDEWEB)
Warnock, Robert; Cai, Yunhai; /SLAC; Ellison, James A.; /New Mexico U.
2009-12-17
The goal is to construct a symplectic evolution map for a large section of an accelerator, say a full turn of a large ring or a long wiggler. We start with an accurate tracking algorithm for single particles, which is allowed to be slightly non-symplectic. By tracking many particles for a distance S one acquires sufficient data to construct the mixed-variable generator of a symplectic map for evolution over S, given in terms of interpolatory functions. Two ways to find the generator are considered: (1) Find its gradient from tracking data, then the generator itself as a line integral. (2) Compute the action integral on many orbits. A test of method (1) has been made in a difficult example: a full turn map for an electron ring with strong nonlinearity near the dynamic aperture. The method succeeds at fairly large amplitudes, but there are technical difficulties near the dynamic aperture due to oddly shaped interpolation domains. For a generally applicable algorithm we propose method (2), realized with meshless interpolation methods.
3D Interpolation Method for CT Images of the Lung
Directory of Open Access Journals (Sweden)
Noriaki Asada
2003-06-01
Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart; thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating deformation synchronized to the beating of the heart. If no special techniques are used in taking the CT images, there are discontinuities among neighboring CT images due to the beating of the heart. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, its envelope is fit to the shape of a standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting best to the standard heart. Since correct transformation of images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image without discontinuity by a series of such operations is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.
THE EFFECT OF STIMULUS ANTICIPATION ON THE INTERPOLATED TWITCH TECHNIQUE
Directory of Open Access Journals (Sweden)
Duane C. Button
2008-12-01
Full Text Available The objective of this study was to investigate the effect of expected and unexpected interpolated stimuli (IT) during a maximum voluntary contraction on quadriceps force output and activation. Two groups of male subjects who were either inexperienced (MI: no prior experience with IT tests) or experienced (ME: previously experienced 10 or more series of IT tests) received an expected or unexpected IT while performing quadriceps isometric maximal voluntary contractions (MVCs). Measurements included MVC force, quadriceps and hamstrings electromyographic (EMG) activity, and quadriceps inactivation as measured by the interpolated twitch technique (ITT). When performing MVCs with the expectation of an IT, the knowledge or lack of knowledge of an impending IT occurring during a contraction did not result in significant overall differences in force, ITT inactivation, or quadriceps or hamstrings EMG activity. However, the expectation of an IT significantly (p < 0.0001) reduced MVC force (9.5%) and quadriceps EMG activity (14.9%) when compared to performing MVCs with prior knowledge that stimulation would not occur. While ME exhibited non-significant decreases when expecting an IT during a MVC, MI force and EMG activity significantly decreased by 12.4% and 20.9%, respectively. Overall, ME had significantly (p < 0.0001) higher force (14.5%) and less ITT inactivation (10.4%) than MI. The expectation of the noxious stimuli may account for the significant decrements in force and activation during the ITT.
MATHEMATICAL BASIS FOR THREE DIMENSIONAL CIRCULAR INTERPOLATION ON CNC MACHINES
Directory of Open Access Journals (Sweden)
A.J. Lubbe
2012-01-01
Full Text Available
ENGLISH ABSTRACT: The control units of numerically controlled manufacturing machines allow the programmer only a limited number of mathematical functions with which programmes can be written. Despite these limitations it is now possible to write programmes with which three-dimensional (3D) circular interpolation can be performed directly on the machines. The necessary mathematical techniques to perform 3D circular interpolation directly on the machines are deduced, albeit in a somewhat roundabout way, to overcome the programming limitations. The main programming limitations as well as cutter speed limitations are indicated.
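The abstract does not reproduce the deduced formulas, but the geometry underlying 3D circular interpolation can be sketched. The following is an illustrative construction, not the paper's derivation: given a center, a plane normal, and a radius, an orthonormal in-plane basis (u, v) is built and the circle is traced as P(t) = c + r(cos t * u + sin t * v).

```python
import numpy as np

def circle_points_3d(center, normal, radius, n=100):
    """Points on a 3-D circle: build an orthonormal basis (u, v) of the
    plane with the given normal, then P(t) = c + r(cos t * u + sin t * v)."""
    c = np.asarray(center, float)
    w = np.asarray(normal, float)
    w = w / np.linalg.norm(w)
    # Any vector not parallel to w seeds the in-plane basis (Gram-Schmidt).
    seed = np.array([1.0, 0.0, 0.0]) if abs(w[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = seed - (seed @ w) * w
    u /= np.linalg.norm(u)
    v = np.cross(w, u)
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return c + radius * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))

pts = circle_points_3d([1, 2, 3], [1, 1, 1], radius=2.0, n=360)
```

A CNC controller lacking such a primitive would approximate this curve by many short linear moves through the generated points.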
Interpolated Sounding and Gridded Sounding Value-Added Products
Energy Technology Data Exchange (ETDEWEB)
Toto, T. [Brookhaven National Lab. (BNL), Upton, NY (United States); Jensen, M. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2016-03-01
Standard Atmospheric Radiation Measurement (ARM) Climate Research Facility sounding files provide atmospheric state data in one dimension of increasing time and height per sonde launch. Many applications require a quick estimate of the atmospheric state at higher time resolution. The INTERPOLATEDSONDE (i.e., Interpolated Sounding) Value-Added Product (VAP) transforms sounding data into continuous daily files on a fixed time-height grid, at 1-minute time resolution, on 332 levels, from the surface up to a limit of approximately 40 km. The grid extends that high so the full height of soundings can be captured; however, most soundings terminate at an altitude between 25 and 30 km, above which no data are provided. Between soundings, the VAP linearly interpolates atmospheric state variables in time for each height level. In addition, INTERPOLATEDSONDE provides relative humidity scaled to microwave radiometer (MWR) observations. The INTERPOLATEDSONDE VAP, a continuous time-height grid of relative-humidity-corrected sounding data, is intended to provide input to higher-order products, such as the Merged Soundings (MERGESONDE; Troyan 2012) VAP, which extends INTERPOLATEDSONDE by incorporating model data. The INTERPOLATEDSONDE VAP is also used to correct gaseous attenuation of radar reflectivity in products such as the KAZRCOR VAP.
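The per-level linear interpolation in time can be sketched as follows (a minimal illustration of the described gridding step, not ARM's production code):

```python
import numpy as np

def interpolate_soundings(t_sondes, profiles, t_grid):
    """Linearly interpolate atmospheric state in time, independently at
    each height level.  profiles has shape (n_sondes, n_levels)."""
    profiles = np.asarray(profiles, float)
    out = np.empty((len(t_grid), profiles.shape[1]))
    for lev in range(profiles.shape[1]):
        out[:, lev] = np.interp(t_grid, t_sondes, profiles[:, lev])
    return out
```

For instance, a grid time halfway between two launches receives the average of the two profiles at every level.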
Chen, Xiangdong; He, Liwen; Jeon, Gwanggil; Jeong, Jechang
2014-05-01
In this paper, we present a novel color image demosaicking algorithm based on a directional weighted interpolation method and a gradient inverse-weighted filter-based refinement method. By applying the directional weighted interpolation method, the missing center pixel is interpolated; then, using the nearest neighboring pixels of the pre-interpolated pixel within the same color channel, the accuracy of interpolation is refined using the five-point gradient inverse-weighted filtering method we propose. The refined interpolated pixel values can be used to estimate the other missing pixel values successively according to the inter-channel correlation. Experimental analysis of images revealed that our proposed algorithm provided superior performance in terms of both objective and subjective image quality compared to conventional state-of-the-art demosaicking algorithms. Our implementation has very low complexity and is therefore well suited for real-time applications.
Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation
Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan
2018-01-01
It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.
Ratan, Rajeev; Sharma, Sanjay; Kohli, Amit K.
2013-12-01
In this article, the performance of quadrature amplitude modulation (QAM)-based single- and double-stage digital interpolators is compared. The basic interpolator for up-sampling can be a combination of an expander unit with an interpolation lowpass filter in cascade; more elaborate implementations connect multiple expander and lowpass filter pairs in cascade. This article presents the efficient and effective implementation of digital interpolation systems for up-sampling with single- and double-stage digital interpolators. The comparison covers the spectrum of the generated signal, envelope power, modulated signal trajectory, input and output constellations, and noise performance. The proposed interpolation filters have been simulated in Agilent's Advanced Design System (ADS).
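The expander-plus-lowpass structure can be sketched directly (an illustrative baseband model in SciPy, not the article's ADS implementation; the filter length is an assumption): zero-stuffing raises the rate by L, and an FIR lowpass with cutoff pi/L and gain L removes the spectral images. Interpolation by 4 can then be done in a single stage or as a cascade of two stages of 2.

```python
import numpy as np
from scipy import signal

def interpolate_stage(x, L, numtaps=63):
    """One interpolation stage: zero-stuffing expander followed by a
    lowpass FIR filter with cutoff pi/L and passband gain L."""
    up = np.zeros(len(x) * L)
    up[::L] = x                                # expander
    h = signal.firwin(numtaps, 1.0 / L) * L    # image-rejection lowpass
    return signal.lfilter(h, 1.0, up)

# Interpolation by 4: double-stage (2 x 2) versus single-stage.
x = np.sin(2 * np.pi * 0.02 * np.arange(200))
y2 = interpolate_stage(interpolate_stage(x, 2), 2)
y4 = interpolate_stage(x, 4)
```

The double-stage cascade is often cheaper in practice because each stage's filter can be shorter for the same image attenuation.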
[Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].
Chen, Hao; Yu, Haizhong
2014-04-01
Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application to image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate edge distortion, a natural suture algorithm is utilized in overlapping regions while adopting a data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method has considerable reference value for image interpolation applications.
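The GRBF interpolation itself, before any CUDA acceleration, amounts to solving a dense kernel system. A minimal CPU sketch (illustrative, not the paper's implementation; the shape parameter `eps` is an assumption):

```python
import numpy as np

def grbf_interpolate(centers, values, query, eps=1.0):
    """Gaussian radial basis function interpolation: solve K w = f with
    K_ij = exp(-(eps * ||x_i - x_j||)^2), then evaluate at query points."""
    def kernel(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
        return np.exp(-(eps * d) ** 2)
    w = np.linalg.solve(kernel(centers, centers), values)
    return kernel(query, centers) @ w
```

The O(n^3) solve and the O(nm) evaluation are precisely the steps that motivate block decomposition and GPU parallelization in the paper.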
Digital image correlation with self-adaptive scheme for interpolation bias reduction
Tu, Peihan
2017-07-01
In digital image correlation (DIC), the systematic error caused by intensity interpolation at sub-pixel positions, namely the overall interpolation bias, includes both interpolation bias and noise-induced bias. The overall interpolation bias is especially significant when the noise level is high or the image contrast is low, so there is a pressing need to reduce it to improve the accuracy of DIC. However, existing approaches such as using a low-pass filter or a high-order interpolation require manually selected algorithm parameters and cannot reduce the bias automatically. It is known that the overall interpolation bias is highly correlated with the image gradient (and thus the contrast of the speckle image). This provides an opportunity to reduce the bias simply by adjusting the gradients. Inspired by image enhancement techniques that alter image gradients (and thus image contrast) by nonlinearly transforming intensities (RGB, gray value, etc.), a DIC algorithm called gray-level adaptive DIC (GA-DIC), based on a new correlation criterion with an additional adjustable parameter that controls the gradients, is proposed to reduce the overall interpolation bias. Both numerical and real experiments are performed to verify the feasibility and effectiveness of GA-DIC. The results show that the proposed algorithm can reduce the overall interpolation bias without empirically selecting algorithm hyperparameters; its effect is more significant in cases with higher image noise and poorer image quality.
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the time reversal algorithms based on nearest neighbor, linear, and cubic convolution interpolation, providing higher imaging quality with significantly fewer measurement positions or scanning times.
Directory of Open Access Journals (Sweden)
Mathieu Lepot
2017-10-01
Full Text Available A thorough review has been performed of interpolation methods to fill gaps in time series, efficiency criteria, and uncertainty quantification. On the one hand, numerous methods are available: interpolation, regression, autoregressive and machine learning methods, among others. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when uncertainties are estimated according to standard methods, the prediction uncertainty is not taken into account; a discussion is thus presented on the uncertainty estimation of interpolated and extrapolated data. Finally, some suggestions for further research and a new method are proposed.
Concentration and microheterogeneity of acute phase proteins in patients with systemic sclerosis
Directory of Open Access Journals (Sweden)
Izabela Domysławska
2010-12-01
Full Text Available The concentration and microheterogeneity of acute phase proteins (APPs) change in acute and chronic inflammatory states. Qualitative changes in some acute phase proteins are termed major microheterogeneity. Two-directional affinity electrophoresis with concanavalin A (ConA) as a ligand has been successfully used to assess the microheterogeneity of acute phase glycoproteins. Determining the concentration and microheterogeneity of APPs may be useful in the early diagnosis and prognosis of chronic inflammatory processes, including systemic sclerosis (SSc). Forty-five patients with SSc, of mean age 46.2 years, were enrolled in the study. All patients fulfilled the ARA classification criteria for systemic sclerosis. The control group consisted of 15 healthy volunteers (mean age 42.3 years). Serum concentrations of acid glycoprotein (AGP), antichymotrypsin (ACT) and ceruloplasmin (CP) were determined by electroimmunophoresis using anti-AGP, anti-ACT and anti-CP antibodies. The concentration of C-reactive protein (CRP) was determined by radial immunodiffusion using anti-CRP antibodies. The microheterogeneity of APPs was assessed by two-directional affinity electrophoresis with ConA on agarose gel, as described by Bøg-Hansen. In the SSc group, increased concentrations of several of the studied acute phase proteins (AGP, CRP, CP) were observed. Moderately increased concentrations of CRP, AGP and CP were observed in the 50% of SSc patients in whom arthritis and skin ulcerations were found. Very large increases in acute phase protein concentrations occurred in the group of patients with heart and lung involvement. The microheterogeneity of APPs was altered in the studied patients and showed variable, ambiguous patterns. The results confirm the presence of changes in the acute phase response in patients with systemic sclerosis.
Roset-Salla, Margarita; Ramon-Cabot, Joana; Salabarnada-Torras, Jordi; Pera, Guillem; Dalmau, Albert
2016-04-01
The objective of the present study was to evaluate the effectiveness of an educational programme on healthy eating, carried out in day-care centres and aimed at the parents of children from 1 to 2 years of age, regarding the acquisition of healthy eating habits by themselves and their children. We performed a multicentre, multidisciplinary, randomized controlled study in a community setting. The EniM study (nutritional intervention study among children from Mataró) was performed in twelve day-care centres in Mataró (Spain). Centres were randomized into a control group (CG) and an intervention group (IG); the IG received four or five educational workshops on diet, while the CG had no workshops. Participants were children from 1 to 2 years of age, not exclusively breast-fed, in the participating day-care centres, together with the persons responsible for their feeding (mother or father). Thirty-five per cent of the IG did not attend the minimum of three workshops and were excluded. The CG included seventy-four children and seventy-two parents and the IG seventy-five children and sixty-seven parents; both groups were comparable at baseline. Baseline adherence to the Mediterranean diet was 56·4 % in parents (Gerber index) and 7·7 points in children (Kidmed test). At 8 months, Mediterranean diet adherence had improved in the IG by 5·8 points on the Gerber index (P=0·01) and 0·6 points on the Kidmed test (P=0·02) compared with the CG. This educational intervention, performed in parents at the key period of incorporation of a 1-2-year-old child to the family table, showed significant increases in the parents' adherence to the Mediterranean diet, suggesting future improvement in different indicators of health and an expected influence on the diet of their children.
Finite element analysis of rotating beams physics based interpolation
Ganguli, Ranjan
2017-01-01
This book addresses the solution of rotating beam free-vibration problems using the finite element method. It provides an introduction to the governing equation of a rotating beam, before outlining the solution procedures using Rayleigh-Ritz, Galerkin and finite element methods. The possibility of improving the convergence of finite element methods through a judicious selection of interpolation functions, which are closer to the problem physics, is also addressed. The book offers a valuable guide for students and researchers working on rotating beam problems – important engineering structures used in helicopter rotors, wind turbines, gas turbines, steam turbines and propellers – and their applications. It can also be used as a textbook for specialized graduate and professional courses on advanced applications of finite element analysis.
Determining Parameters for Images Amplification by Pulses Interpolation
Directory of Open Access Journals (Sweden)
Morera-Delfín Leandro
2015-01-01
Full Text Available This paper presents the implementation of a method for interpolating image samples based on a physical scanning model. The theory of digital image sampling is used to implement this mechanism in software, which allows us to obtain the appropriate parameters for image amplification using a truncated sampler arrangement. The process mimics the physical model of image acquisition in order to generate the samples required for amplification, and is useful for reconstructing details in low-resolution images and for image compression. The proposed method studies the conservation of high frequencies in the high-resolution plane for the generation of the amplification kernel. A new way of directly applying the physical scanning model in analytic form is presented.
Multiresolution analysis over triangles, based on quadratic Hermite interpolation
Dæhlen, M.; Lyche, T.; Mørken, K.; Schneider, R.; Seidel, H.-P.
2000-07-01
Given a triangulation T of a domain, a recipe to build a spline space over this triangulation, and a recipe to refine the triangulation T into a triangulation T', the question arises whether the spline space over T is contained in the spline space over T', i.e., whether any spline surface over the original triangulation T can also be represented as a spline surface over the refined triangulation T'. In this paper we discuss how to construct such a nested sequence of spaces based on Powell-Sabin 6-splits for a regular triangulation. The resulting spline space consists of piecewise C1-quadratics, and refinement is obtained by subdividing every triangle into four subtriangles at the edge midpoints. We develop explicit formulas for wavelet transformations based on quadratic Hermite interpolation, and give a stability result with respect to a natural norm.
Interpolating discrete advection-diffusion propagators at Leja sequences
Caliari, M.; Vianello, M.; Bergamaschi, L.
2004-11-01
We propose and analyze the ReLPM (Real Leja Points Method) for evaluating the propagator φ(ΔtB)v via matrix interpolation polynomials at spectral Leja sequences. Here B is the large, sparse, nonsymmetric matrix arising from stable 2D or 3D finite-difference discretization of linear advection-diffusion equations, and φ(z) is the entire function φ(z) = (e^z - 1)/z. The corresponding stiff differential system y' = By + g, y(0) = y_0, is solved by the exact time-marching scheme y_{i+1} = y_i + Δt_i φ(Δt_i B)(B y_i + g), i = 0, 1, ..., where the time step is controlled simply via the variation percentage of the solution, and can be large. Numerical tests show substantial speed-ups (up to one order of magnitude) with respect to a classical variable step-size Crank-Nicolson solver.
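The exact time-marching scheme can be verified on a small dense system. The sketch below evaluates φ densely via the matrix exponential, purely for illustration; the point of the ReLPM is precisely to avoid this and compute φ(ΔtB)v matrix-free by Leja-point interpolation for large sparse B.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi(A):
    """phi(A) = (e^A - I) A^{-1}, evaluated densely for illustration only
    (ReLPM computes phi(dt*B)v matrix-free via Leja interpolation)."""
    return solve(A, expm(A) - np.eye(A.shape[0]))

def step(B, y, g, dt):
    """One exact time-marching step: y_{i+1} = y_i + dt*phi(dt*B)(B y_i + g)."""
    return y + dt * phi(dt * B) @ (B @ y + g)
```

For constant g, one step reproduces the closed-form solution y(t) = e^{tB} y_0 + (e^{tB} - I) B^{-1} g exactly, regardless of step size, which is why the scheme tolerates large Δt.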
Environmental time series interpolation based on Spartan random processes
Žukovič, Milan; Hristopulos, D. T.
In many environmental applications, time series are either incomplete or irregularly spaced. We investigate the application of the Spartan random process to missing data prediction. We employ a novel modified method of moments (MMoM) and the established method of maximum likelihood (ML) for parameter inference. The CPU time of MMoM is shown to be much faster than that of ML estimation and almost independent of the data size. We formulate an explicit Spartan interpolator for estimating missing data. The model validation is performed on both synthetic data and real time series of atmospheric aerosol concentrations. The prediction performance is shown to be comparable with that attained by means of the best linear unbiased (Kolmogorov-Wiener) predictor at reduced computational cost.
Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme
Energy Technology Data Exchange (ETDEWEB)
Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
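The core CIP idea, advecting both the profile and its derivative with a cell-wise cubic, can be sketched for the simplest case of constant-velocity 1D advection on a periodic grid (an illustrative reduction, not the paper's Boltzmann solver):

```python
import numpy as np

def cip_step(f, g, u, dx, dt):
    """One CIP step for f_t + u f_x = 0 (constant u > 0, periodic grid).
    A cubic in each upwind cell matches f and g = f_x at both ends; f and
    g are then advected by evaluating the cubic at x - u*dt."""
    fup, gup = np.roll(f, 1), np.roll(g, 1)    # upwind neighbors (u > 0)
    D = -dx                                    # signed cell width
    xi = -u * dt                               # departure-point offset
    a = (g + gup) / D**2 + 2.0 * (f - fup) / D**3
    b = 3.0 * (fup - f) / D**2 - (2.0 * g + gup) / D
    f_new = f + g * xi + b * xi**2 + a * xi**3
    g_new = g + 2.0 * b * xi + 3.0 * a * xi**2
    return f_new, g_new
```

Carrying the derivative as an independent advected quantity is what gives the scheme its low dispersion inside a single grid cell.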
Interpolation function for approximating knee joint behavior in human gait
Toth-Taşcǎu, Mirela; Pater, Flavius; Stoia, Dan Ioan
2013-10-01
Starting from the importance of analyzing the kinematic data of the lower limb in gait movement, especially the angular variation of the knee joint, this paper proposes an approximation function that can be used for processing the correlation among a multitude of knee cycles. The approximation of the raw knee data was done by Lagrange polynomial interpolation on a signal acquired using the Zebris Gait Analysis System. The signal used in the approximation belongs to a typical subject drawn from a group of ten investigated subjects, but the domain of definition of the function covers the entire group. The study of knee joint kinematics plays an important role in understanding the kinematics of gait, this joint having the largest range of motion of all joints involved in gait. The study does not attempt to find an approximation function for the adduction-abduction movement of the knee, this being considered a residual movement compared with flexion-extension.
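Lagrange interpolation of sampled joint-angle data can be sketched as follows (the knee flexion values below are invented placeholders, not the Zebris measurements):

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    total = 0.0
    for i in range(len(xs)):
        basis = 1.0
        for j in range(len(xs)):
            if j != i:
                basis *= (x - xs[j]) / (xs[i] - xs[j])  # Lagrange basis L_i(x)
        total += ys[i] * basis
    return total

# Hypothetical knee flexion angles (degrees) at gait-cycle percentages.
t = [0, 25, 50, 75, 100]
angle = [5.0, 20.0, 8.0, 55.0, 5.0]

print(lagrange_eval(t, angle, 25))            # reproduces the sample: 20.0
print(round(lagrange_eval(t, angle, 60), 2))  # interpolated mid-cycle value
```

By construction the polynomial passes exactly through every sample, which is why it suits correlating complete cycles but can oscillate if many nodes are used.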
An Improved Method for River Discharge Calculation Using Cubic Spline Interpolation
Directory of Open Access Journals (Sweden)
Budi I. Setiawan
2007-09-01
Full Text Available This paper presents an improved method for measuring river discharge using cubic spline interpolation. The spline is used to describe the river cross-section profile continuously, built from measurements of distance across the river and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly, and accurately. Likewise, the inverse function is available via the Newton-Raphson method, which simplifies computing the area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. This paper presents an example discharge measurement for the Rudeng River, Aceh. The river is about 120 m wide and 7 m deep and, at the time of measurement, had a discharge of 41.3 m3/s, with a rating curve following the formula Q = 0.1649 x H^2.884, where Q is the discharge (m3/s) and H is the water level above the river bed (m).
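The inverse step, recovering the stage H from a known discharge Q by Newton-Raphson, can be sketched with the rating curve quoted above (the curve Q = 0.1649 x H^2.884 is from the abstract; the starting guess and tolerance are arbitrary choices):

```python
def rating_Q(H, a=0.1649, b=2.884):
    """Rating curve: discharge (m^3/s) as a function of stage H (m)."""
    return a * H**b

def stage_from_Q(Q, a=0.1649, b=2.884, H0=1.0, tol=1e-10):
    """Invert the rating curve for H with Newton-Raphson."""
    H = H0
    for _ in range(100):
        f = a * H**b - Q
        if abs(f) < tol:
            break
        H -= f / (a * b * H ** (b - 1.0))  # f'(H) = a*b*H^(b-1)
    return H

H = stage_from_Q(41.3)
print(round(H, 2))            # stage consistent with the ~7 m deep section
print(round(rating_Q(H), 1))  # -> 41.3
```

Because Q(H) is smooth, increasing, and convex for H > 0, Newton's iteration converges from any positive starting guess.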
Diabat Interpolation for Polymorph Free-Energy Differences.
Kamat, Kartik; Peters, Baron
2017-02-02
Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
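When both diabats are well modeled as parabolas of equal curvature, the free-energy difference reduces to the average of the mean energy gaps sampled in the two basins, a linear-response estimate in the spirit of the diabat-interpolation scheme. A sketch with synthetic stand-ins for the MD energy-gap samples (the means and widths below are invented):

```python
import random

random.seed(0)
# Synthetic energy-gap samples dE = E_B - E_A from the two unbiased runs.
gaps_in_A = [random.gauss(8.0, 2.0) for _ in range(20000)]  # sampled in basin A
gaps_in_B = [random.gauss(2.0, 2.0) for _ in range(20000)]  # sampled in basin B

mean_A = sum(gaps_in_A) / len(gaps_in_A)
mean_B = sum(gaps_in_B) / len(gaps_in_B)
# Equal-curvature parabolic diabats => dF = (mean gap in A + mean gap in B) / 2.
dF = 0.5 * (mean_A + mean_B)
print(round(dF, 1))  # close to (8.0 + 2.0) / 2 = 5.0
```

The full diabat-interpolation treatment also uses the gap variances to relax the equal-curvature assumption; only the simplest two-point estimate is shown here.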
Generalized synchronization in complex dynamical networks via adaptive couplings
Liu, Hui; Chen, Juan; Lu, Jun-an; Cao, Ming
2010-01-01
This paper investigates generalized synchronization of three typical classes of complex dynamical networks: scale-free networks, small-world networks, and interpolating networks. The proposed synchronization strategy is to adaptively adjust a node's coupling strength based on the node's local
Balabin, Roman M; Smirnov, Sergey V
2012-04-07
Modern analytical chemistry of industrial products is in need of rapid, robust, and cheap analytical methods to continuously monitor product quality parameters. For this reason, spectroscopic methods are often used to control the quality of industrial products in an on-line/in-line regime. Vibrational spectroscopy, including mid-infrared (MIR), Raman, and near-infrared (NIR), is one of the best ways to obtain information about the chemical structures and the quality coefficients of multicomponent mixtures. Together with chemometric algorithms and multivariate data analysis (MDA) methods, which were especially created for the analysis of complicated, noisy, and overlapping signals, NIR spectroscopy shows great results in terms of its accuracy, including classical prediction error, RMSEP. However, it is unclear whether the combined NIR + MDA methods are capable of dealing with much more complex interpolation or extrapolation problems that are inevitably present in real-world applications. In the current study, we try to make a rather general comparison of linear, such as partial least squares or projection to latent structures (PLS); "quasi-nonlinear", such as the polynomial version of PLS (Poly-PLS); and intrinsically non-linear, such as artificial neural networks (ANNs), support vector regression (SVR), and least-squares support vector machines (LS-SVM/LSSVM), regression methods in terms of their robustness. As a measure of robustness, we will try to estimate their accuracy when solving interpolation and extrapolation problems. Petroleum and biofuel (biodiesel) systems were chosen as representative examples of real-world samples. Six very different chemical systems that differed in complexity, composition, structure, and properties were studied; these systems were gasoline, ethanol-gasoline biofuel, diesel fuel, aromatic solutions of petroleum macromolecules, petroleum resins in benzene, and biodiesel. Eighteen different sample sets were used in total. General
Zhou, Tao
2008-05-01
In this article, we propose a mixing navigation mechanism, which interpolates between random-walk and shortest-path protocol. The navigation efficiency can be remarkably enhanced via a few routers. Some advanced strategies are also designed: For non-geographical scale-free networks, the targeted strategy with a tiny fraction of routers can guarantee an efficient navigation with low and stable delivery time almost independent of network size. For geographical localized networks, the clustering strategy can simultaneously increase efficiency and reduce the communication cost. The present mixing navigation mechanism is of significance especially for information organization of wireless sensor networks and distributed autonomous robotic systems.
Adaptive manifold-mapping using multiquadric interpolation applied to linear actuator design
D.J.P. Lahaye (Domenico); A. Canova; G. Gruosso; M. Repetto
2006-01-01
In this work a multilevel optimization strategy based on manifold-mapping combined with multiquadric interpolation for the coarse model construction is presented. In the proposed approach the coarse model is obtained by interpolating the fine model using multiquadrics in a small
A combination of parabolic and grid slope interpolation for 2D tissue displacement estimations.
Albinsson, John; Ahlgren, Åsa Rydén; Jansson, Tomas; Cinthio, Magnus
2017-08-01
Parabolic sub-sample interpolation for 2D block-matching motion estimation is computationally efficient. However, it is well known that the parabolic interpolation gives a biased motion estimate for displacements greater than |y.2| samples (y = 0, 1, …). Grid slope sub-sample interpolation is less biased, but it shows large variability for displacements close to y.0. We therefore propose to combine these sub-sample methods into one method (GS15PI) using a threshold to determine when to use which method. The proposed method was evaluated on simulated, phantom, and in vivo ultrasound cine loops and was compared to three sub-sample interpolation methods. On average, GS15PI reduced the absolute sub-sample estimation errors in the simulated and phantom cine loops by 14, 8, and 24% compared to sub-sample interpolation of the image, parabolic sub-sample interpolation, and grid slope sub-sample interpolation, respectively. The limited in vivo evaluation of estimations of the longitudinal movement of the common carotid artery using parabolic and grid slope sub-sample interpolation and GS15PI resulted in coefficient of variation (CV) values of 6.9, 7.5, and 6.8%, respectively. The proposed method is computationally efficient and has low bias and variance. The method is another step toward a fast and reliable method for clinical investigations of longitudinal movement of the arterial wall.
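The parabolic sub-sample estimator that the authors start from fits a parabola through the matching-metric values at the best integer lag and its two neighbours (a generic sketch; the threshold logic of GS15PI itself is not reproduced here):

```python
def parabolic_subsample(y_m1, y_0, y_p1):
    """Sub-sample peak offset from three metric values around the best lag.

    Returns delta in (-0.5, 0.5); the refined peak sits at lag + delta.
    """
    denom = y_m1 - 2.0 * y_0 + y_p1
    if denom == 0.0:
        return 0.0  # flat triple: no refinement possible
    return 0.5 * (y_m1 - y_p1) / denom

# Exact for a true parabola peaking at +0.3 samples:
true_peak = 0.3
samples = [-(x - true_peak) ** 2 for x in (-1, 0, 1)]
print(parabolic_subsample(*samples))  # 0.3 up to rounding
```

For real block-matching metrics the triple is not exactly parabolic, which is the source of the bias the paper addresses by switching to grid-slope interpolation beyond a threshold.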
The Neville-Aitken formula for rational interpolants with prescribed poles
Carstensen, C.; Mühlbach, G.
1992-12-01
Using a polynomial description of rational interpolation with prescribed poles, a simple, purely algebraic proof of a Neville-Aitken recurrence formula for rational interpolants with prescribed poles is presented. It is used to compute the general Cauchy-Vandermonde determinant explicitly in terms of the nodes and poles involved.
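For the purely polynomial case (all poles at infinity) the Neville-Aitken recurrence takes the familiar form below; the prescribed-pole rational version in the paper modifies the recurrence weights and is not reproduced here:

```python
def neville(xs, ys, x):
    """Neville-Aitken evaluation of the interpolating polynomial at x."""
    p = list(map(float, ys))
    n = len(p)
    for k in range(1, n):          # order of the partial interpolants
        for i in range(n - k):
            # p[i] becomes the value at x of the polynomial through
            # nodes i .. i+k, built from two order-(k-1) interpolants.
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]

# A cubic is reproduced exactly from four nodes:
xs = [0.0, 1.0, 2.0, 4.0]
ys = [v**3 - 2.0 * v + 1.0 for v in xs]
print(neville(xs, ys, 3.0))  # -> 22.0  (3^3 - 6 + 1)
```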
The twitch interpolation technique for study of fatigue of human quadriceps muscle
DEFF Research Database (Denmark)
Bülow, P M; Nørregaard, J; Mehlsen, J
1995-01-01
The aim of the study was to examine if the twitch interpolation technique could be used to objectively measure fatigue in the quadriceps muscle in subjects performing submaximally. The 'true' maximum isometric quadriceps torque was determined in 21 healthy subjects using the twitch interpolation......). In conclusion, the twitch technique can be used for objectively measuring fatigue of the quadriceps muscle....
Abstract interpolation in vector-valued de Branges-Rovnyak spaces
Ball, J.A.; Bolotnikov, V.; ter Horst, S.
2011-01-01
Following ideas from the Abstract Interpolation Problem of Katsnelson et al. (Operators in spaces of functions and problems in function theory, vol 146, pp 83–96, Naukova Dumka, Kiev, 1987) for Schur class functions, we study a general metric constrained interpolation problem for functions from a
Directory of Open Access Journals (Sweden)
Mathieu Raux
2016-11-01
Full Text Available In humans, inspiratory constraints engage cortical networks involving the supplementary motor area. Functional magnetic resonance imaging (fMRI) shows that the spread and intensity of the corresponding respiratory-related cortical activation dramatically decrease when a discrete load becomes sustained. This has been interpreted as reflecting motor cortical reorganisation and automatisation, but could proceed from sensory and/or affective habituation. To corroborate the existence of motor reorganisation between single-breath and sustained inspiratory loading (namely changes in motor neurone recruitment), we conducted a diaphragm twitch interpolation study based on the hypothesis that motor reorganisation should result in changes in the twitch interpolation slope. Fourteen healthy subjects (age: 21 - 40 years) were studied. Bilateral phrenic stimulation was delivered at rest, upon prepared and targeted voluntary inspiratory efforts (vol), upon unprepared inspiratory efforts against a single-breath inspiratory threshold load (single-breath), and upon sustained inspiratory efforts against the same type of load (continuous). The slope of the relationship between diaphragm twitch transdiaphragmatic pressure and the underlying transdiaphragmatic pressure was -1.1 ± 0.2 during vol, -1.5 ± 0.7 during single-breath, and -0.6 ± 0.4 during continuous (all slopes expressed in percent of baseline per percent of baseline; all comparisons significant at the 5% level). The contribution of the diaphragm to inspiration, as assessed by the gastric pressure to transdiaphragmatic pressure ratio, was 31 ± 17% during vol, 22 ± 16% during single-breath (p = 0.13), and 19 ± 9% during continuous (p = 0.0015 vs. vol). This study shows that the relationship between the amplitude of the transdiaphragmatic pressure produced by a diaphragm twitch and its counterpart produced by the underlying diaphragm contraction is not unequivocal. If twitch interpolation is interpreted as
Directory of Open Access Journals (Sweden)
Huiqing Fang
2016-01-01
Full Text Available Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometrically nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline is used to calculate the nodal curvatures at the two ends. Then, the internal curvatures of the beam are interpolated with a second interpolation. At this point, C1 continuity is satisfied and the nodal strain measures can be consistently derived from the nodal displacement and rotation parameters. An explicit expression for the nodal force, as a function of the global parameters and requiring no integration, was found by using the hybrid interpolation. Furthermore, the proposed beam element degenerates into a linear beam element under the condition of small deformation. Objectivity of the strain measures and patch tests are also discussed. Finally, four numerical examples are discussed to demonstrate the validity and effectiveness of the proposed beam element.
Identification method for digital image forgery and filtering region through interpolation.
Hwang, Min Gu; Har, Dong Hwan
2014-09-01
Because of the rapidly increasing use of digital composite images, recent studies have identified digital forgery and filtering regions. This research has shown that interpolation, which is used to edit digital images, is an effective way to analyze digital images for composite regions. Interpolation is widely used to adjust the size of the image of a composite target, making the composite image seem natural when it is rotated or deformed. As a result, many algorithms have been developed to identify composite regions by detecting traces of interpolation. However, many limitations have been found in the detection maps developed to identify composite regions. In this study, we analyze the pixel patterns of interpolated and non-interpolated regions. We propose a detection map algorithm to separate the two regions. To identify composite regions, we have developed an improved algorithm using minimum filters, the Laplacian operation, and maximum filters. Finally, filtering regions that used the interpolation operation are analyzed using the proposed algorithm. © 2014 American Academy of Forensic Sciences.
International co-operation through the Interpol system to counter illicit drug trafficking.
Leamy, W J
1983-01-01
The International Criminal Police Organization (ICPO/Interpol), whose main aim is the prevention and suppression of ordinary crime, has 135 member countries. The Government of each of these countries has designated an Interpol National Central Bureau to co-operate and liaise within the framework of Interpol. The Drugs Sub-Division of Interpol's General Secretariat monitors and responds to incoming communications on drug enforcement matters, conducts intelligence analysis of information and produces tactical and strategic intelligence reports as well as statistical and other specialized reports. It received 33,181 and dispatched 6,741 drug-enforcement-related communications in 1982, which was over 60 per cent of the entire communications of the General Secretariat. The Drugs Sub-Division participates in drug training and drug strategy seminars world-wide. Interpol also carries out drug liaison officer programmes in five regions of the world.
Yamashita, Hideomi; Okuma, Kae; Wakui, Reiko; Kobayashi-Shibata, Shino; Ohtomo, Kuni; Nakagawa, Keiichi
2011-02-01
To describe patterns of recurrence of elective nodal irradiation (ENI) in definitive chemoradiotherapy (CRT) for thoracic esophageal squamous cell carcinoma (SqCC) using 3D-conformal radiotherapy. One hundred and twenty-six consecutive patients with stages I-IVB thoracic esophageal SqCC newly diagnosed between June 2000 and July 2009 and treated with 3D-CRT in our institution were recruited from our database. Definitive CRT consisted of two cycles of nedaplatin/5FU repeated every 4 weeks, with concurrent radiation therapy of 50-50.4 Gy in 25-28 fractions. Until completion, radiotherapy was delivered to the N1 and M1a lymph nodes as ENI in addition to gross tumor volume. All 126 patients were included in this analysis, and their tumors were staged as follows: T1/T2/T3/T4, 28/18/54/26; N0/N1, 50/76; M0/M1a/M1b, 91/5/30. The mean follow-up period for the 63 surviving patients was 28.3 (±22.8) months. Eighty-seven patients (69%) achieved complete response (CR) without any residual tumor at least once after completion of CRT. After achieving CR, each of 40 patients experienced failures (local=20 and distant=20) and no patient experienced elective nodal failure without having any other site of recurrence. The upper thoracic esophageal carcinoma showed significantly more (34%) relapses at the local site than the middle (9%) or lower thoracic (11%) carcinomas. The 2-year and 3-year overall survival was 56% and 43%, respectively. The 1-year, 2-year and 3-year disease-free survival was 46%, 38% and 33%, respectively. In CRT for esophageal SqCC, ENI was effective for preventing regional nodal failure. The upper thoracic esophageal carcinomas had significantly more local recurrences than the middle or lower thoracic sites. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Mucino G, O.
2015-07-01
Most BWR type reactors have internal support components, which need to be attached to the inner surface of the vessel by welding. Specifically, in these joints two materials interact, such as stainless steel and nickel base alloys. Nickel base alloys such as alloy 82 (ERNiCr-3) and alloy 182 (ENiCrFe-3) are used for joining these dissimilar materials. For joints made with both nickel base alloys, alloy 182 is prone to stress corrosion cracking (SCC), so it is essential to carry out studies of this filler material. In the nuclear industry any study related to this alloy is important, because experience is gained about its behavior as part of a system of an operating reactor. This work presents the characterization of the weld deposit of a stainless steel coating (with electrodes E309L and E308L) on an A36 carbon steel plate and its joining to an Inconel 600 plate, simulating the joint between the internal coating of the vessel and the heel of the support leg of the shroud of a BWR reactor. The mechanical and micro-structural characterization of the alloy 182 deposit was performed. (Author)
Directory of Open Access Journals (Sweden)
Moslem Imani
2013-01-01
Full Text Available Polynomial interpolation and Holt-Winters exponential smoothing (HWES) are used to analyze and forecast Caspian Sea level anomalies derived from 15 years of Topex/Poseidon (T/P) and Jason-1 (J-1) altimetry covering 1993 to 2008. Because along-track altimetric products may contain temporal and spatial data gaps, a least squares polynomial interpolation is performed to fill the gaps in the along-track sea surface heights used. The modeling results over a 3-year forecasting time span (2005 - 2008) derived using HWES agree well with the observed time series, with a correlation coefficient of 0.86. Finally, the 3-year forecasted Caspian Sea level anomalies are compared with those obtained using an artificial neural network method, with reasonable agreement found.
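The least squares polynomial gap-filling step can be sketched as follows (the degree and the synthetic track values are arbitrary stand-ins for the along-track heights):

```python
import numpy as np

def fill_gaps_poly(t, y, deg=2):
    """Fill NaN gaps in y(t) with a least squares polynomial fit."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    ok = ~np.isnan(y)
    coef = np.polyfit(t[ok], y[ok], deg)   # fit only the observed samples
    filled = y.copy()
    filled[~ok] = np.polyval(coef, t[~ok]) # evaluate the fit at the gaps
    return filled

t = np.arange(10.0)
y = 0.5 * t**2 - 3.0 * t + 2.0   # synthetic "sea level" track
y[[3, 7]] = np.nan               # along-track gaps
filled = fill_gaps_poly(t, y, deg=2)
print(filled[3], filled[7])      # gaps recovered from the quadratic fit
```

Because the fit is least squares over all observed points, it also smooths noise rather than chasing individual samples, which matters for noisy altimetry.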
Ionosphere Model for European Region Based on Multi-GNSS Data and TPS Interpolation
Directory of Open Access Journals (Sweden)
Anna Krypiak-Gregorczyk
2017-11-01
Full Text Available The ionosphere is still considered one of the most significant error sources in precise Global Navigation Satellite Systems (GNSS positioning. On the other hand, new satellite signals and data processing methods allow for a continuous increase in the accuracy of the available ionosphere models derived from GNSS observables. Therefore, many research groups around the world are conducting research on the development of precise ionosphere products. This is also reflected in the establishment of several ionosphere-related working groups by the International Association of Geodesy. Whilst a number of available global ionosphere maps exist today, dense regional GNSS networks often offer the possibility of higher accuracy regional solutions. In this contribution, we propose an approach for regional ionosphere modelling based on un-differenced multi-GNSS carrier phase data for total electron content (TEC estimation, and thin plate splines for TEC interpolation. In addition, we propose a methodology for ionospheric products self-consistency analysis based on calibrated slant TEC. The results of the presented approach are compared to well-established global ionosphere maps during varied ionospheric conditions. The initial results show that the accuracy of our regional ionospheric vertical TEC maps is well below 1 TEC unit, and that it is at least a factor of 2 better than the global products.
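A thin-plate-spline interpolation of scattered vertical-TEC samples can be sketched with SciPy's RBFInterpolator (the station coordinates and TEC values below are synthetic; this is not the authors' estimation chain):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Synthetic "station" locations (lon, lat) and a smooth vertical-TEC field.
pts = rng.uniform(0.0, 10.0, size=(60, 2))
tec = 10.0 + 2.0 * np.sin(pts[:, 0] / 3.0) + 1.5 * np.cos(pts[:, 1] / 4.0)

# Thin plate spline kernel with no smoothing: exact at the sample points.
tps = RBFInterpolator(pts, tec, kernel="thin_plate_spline")

print(float(np.max(np.abs(tps(pts) - tec))))  # ~0 at the data points
grid = np.array([[5.0, 5.0], [2.5, 7.5]])
print(tps(grid))                              # interpolated map values
```

With `smoothing=0` (the default) the TPS reproduces every sample exactly while minimizing bending energy between them, which is why it suits dense regional networks.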
Spatial interpolation and estimation of solar irradiation by cumulative semivariograms
Energy Technology Data Exchange (ETDEWEB)
Şen, Zekai [Istanbul Technical Univ., Hydraulics Div., Istanbul (Turkey)]; Şahin, Ahmet D. [Istanbul Technical Univ., Meteorology Dept., Istanbul (Turkey)]
2001-07-01
The main purpose of this paper is to find a regional procedure for estimating the solar irradiation value at any point from sites where measurements of global solar irradiation already exist. The spatial weights are deduced through the regionalised variables theory and the cumulative semivariogram (CSV) approach. The CSV helps to find the change of spatial variability with distance from a set of given solar irradiation data. It is then employed in the estimation of the solar irradiation value at any desired point through a weighted average procedure. The number of adjacent sites considered in this weighting scheme is based on the least squares technique, which is applied spatially by incrementing the number of nearest sites successively from one up to the total number of sites. The validity of the methodology is first checked with the cross-validation technique prior to its application to sites with no solar irradiation records. Hence, after cross-validation each site will have a different number of nearest adjacent sites for spatial interpolation. The application is carried out for monthly solar irradiation records over Turkey, considering 29 measurement stations. It is shown that the procedure presented in this paper is better than classical techniques such as the inverse distance or inverse distance square approaches. (Author)
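The classical inverse-distance-square weighting that the CSV approach is compared against can be sketched as (site coordinates and irradiation values are invented):

```python
def idw(sites, values, target, power=2.0):
    """Inverse-distance-weighted estimate at `target` from measured sites."""
    num = den = 0.0
    for (x, y), v in zip(sites, values):
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return v                      # exactly at a measurement site
        w = 1.0 / d2 ** (power / 2.0)     # weight ~ 1 / distance^power
        num += w * v
        den += w
    return num / den

# Two equidistant sites -> plain average of their irradiation values.
sites = [(0.0, 0.0), (2.0, 0.0)]
values = [14.0, 18.0]                     # e.g. MJ/m^2 per day, invented
print(idw(sites, values, (1.0, 0.0)))     # -> 16.0
```

Unlike the CSV weights, these weights depend only on distance, not on the observed spatial variability of the irradiation field, which is the shortcoming the paper addresses.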
Combining the Hanning windowed interpolated FFT in both directions
Chen, Kui Fu; Li, Yan Feng
2008-06-01
The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on the extent to which the sampling deviates from the coherent condition, the best case being a variance reduction of 2/7. However, it is also shown that the estimation variance of the Hanning-windowed IFFT is significantly higher than that without windowing.
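The two-spectral-line Hanning-window estimator that serves as the baseline here has a closed form: with lam the ratio of the two highest line magnitudes, the fractional bin offset is delta = (2*lam - 1)/(lam + 1). A sketch on a synthetic tone (the paper's three-line, optimally weighted variant is not reproduced):

```python
import numpy as np

N = 512
k_true, delta_true = 40, 0.3  # true frequency: 40.3 bins
n = np.arange(N)
x = np.cos(2.0 * np.pi * (k_true + delta_true) * n / N)

w = 0.5 - 0.5 * np.cos(2.0 * np.pi * n / N)  # periodic Hann window
X = np.abs(np.fft.rfft(w * x))
k = int(np.argmax(X))                        # highest spectral line
lam = X[k + 1] / X[k]                        # 2nd highest is the right neighbour here
delta = (2.0 * lam - 1.0) / (lam + 1.0)      # two-line Hann interpolation
print(k + delta)                             # close to 40.3
```

For a tone with a negative fractional offset the second highest line is the left neighbour instead; a robust implementation compares `X[k-1]` and `X[k+1]` before choosing the ratio.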
Statistical analysis and interpolation of compositional data in materials science.
Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M
2015-02-09
Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
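A minimal illustration of the point that compositions should be interpolated in log-ratio coordinates rather than directly on the simplex (a generic centered log-ratio transform, not the authors' thin-film pipeline; the compositions are invented):

```python
import math

def clr(x):
    """Centered log-ratio transform of a composition (all parts > 0)."""
    logs = [math.log(v) for v in x]
    m = sum(logs) / len(logs)
    return [l - m for l in logs]

def clr_inv(z):
    """Inverse clr: exponentiate and renormalise back onto the simplex."""
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

def interp_composition(a, b, t):
    """Interpolate two compositions at parameter t, linearly in clr space."""
    za, zb = clr(a), clr(b)
    z = [(1.0 - t) * u + t * v for u, v in zip(za, zb)]
    return clr_inv(z)

a = [0.20, 0.30, 0.50]
b = [0.50, 0.30, 0.20]
mid = interp_composition(a, b, 0.5)
print(mid, sum(mid))  # a valid composition: positive parts summing to 1
```

Interpolating the raw fractions componentwise would also sum to 1 here, but clr-space interpolation additionally guarantees positivity and invariance under subcomposition rescaling, which is the requirement discussed above.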
Formalization of Human Categorization Process Using Interpolative Boolean Algebra
Directory of Open Access Journals (Sweden)
Vladimir Dobrić
2015-01-01
Full Text Available Since ancient times, it has been assumed that categorization has the basic form of classical sets. This implies that the categorization process rests on the Boolean laws. In the second half of the twentieth century, the classical theory was challenged in cognitive science. According to prototype theory, objects belong to categories with intensities, while humans categorize objects by comparing them to prototypes of relevant categories. This categorization process is governed by the principles of perceived world structure and cognitive economy. Approaching prototype theory with truth-functional fuzzy logic has been harshly criticized for not satisfying the complementation laws. In this paper, prototype theory is approached using structure-functional fuzzy logic, the interpolative Boolean algebra. The proposed formalism is within the Boolean frame. Categories are represented as fuzzy sets of objects, while comparisons between objects and prototypes are formalized using Boolean consistent fuzzy relations. Such relations are constructed directly from a Boolean consistent fuzzy partial order relation, which is treated by Boolean implication. The introduced formalism secures the principles of categorization, showing that Boolean laws are fundamental to the categorization process. For illustration purposes, an artificial cognitive system which mimics human categorization activity is proposed.
Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers
Directory of Open Access Journals (Sweden)
E. George Walters III
2015-11-01
Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/sqrt(x), log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place (ulp) error uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
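The structure of such a linear interpolator, a coefficient table indexed by the high-order input bits plus one multiply-add on the low-order bits, can be sketched in software (the table size and float arithmetic are illustrative; the paper's hardware designs use truncated fixed-point multipliers and Chebyshev-tuned coefficients):

```python
# Linear interpolator for f(x) = 1/x on [1, 2) with a 64-entry table.
SEGS = 64
H = 1.0 / SEGS

# Per-segment (intercept, slope) pairs; segment endpoints give a chord fit.
table = []
for i in range(SEGS):
    x0 = 1.0 + i * H
    y0, y1 = 1.0 / x0, 1.0 / (x0 + H)
    table.append((y0, (y1 - y0) / H))

def recip_interp(x):
    """Approximate 1/x for x in [1, 2) by table lookup + one multiply-add."""
    i = min(int((x - 1.0) / H), SEGS - 1)    # "high bits" select the segment
    y0, slope = table[i]
    return y0 + slope * (x - (1.0 + i * H))  # "low bits" drive the multiply

# Max chord error for 1/x is about H^2 * max|f''| / 8 ~= 6.1e-5 here.
errs = [abs(recip_interp(1.0 + k / 4096.0) - 1.0 / (1.0 + k / 4096.0))
        for k in range(4096)]
print(max(errs))
```

A quadratic interpolator adds a squarer on the low-order bits and a third table column, trading one more multiply for far fewer table entries at the same precision.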
Pearce, Mark A
2015-08-01
EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
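The filling order described, most-neighbours first with a decreasing neighbour threshold, can be sketched on a toy grid (a simplified illustration; the band-contrast threshold restriction and the MATLAB parallelism of EBSDinterp are omitted, and NaN marks nonindexed points):

```python
import numpy as np

def fill_by_neighbors(grid, min_start=4):
    """Fill NaN cells, most-constrained first, with the mean of their
    indexed 4-neighbours; the required neighbour count drops each round."""
    g = grid.astype(float).copy()
    for need in range(min_start, 0, -1):
        changed = True
        while changed:                      # repeat until this round stalls
            changed = False
            for i in range(g.shape[0]):
                for j in range(g.shape[1]):
                    if np.isnan(g[i, j]):
                        nb = [g[i2, j2]
                              for i2, j2 in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                              if 0 <= i2 < g.shape[0] and 0 <= j2 < g.shape[1]
                              and not np.isnan(g[i2, j2])]
                        if len(nb) >= need:
                            g[i, j] = sum(nb) / len(nb)
                            changed = True
    return g

grid = np.array([[1.0, 1.0, 1.0],
                 [1.0, np.nan, 3.0],
                 [3.0, 3.0, 3.0]])
print(fill_by_neighbors(grid))  # centre filled with the mean of its 4 neighbours
```

Requiring many neighbours first keeps the fill inside grain interiors; only once those are exhausted do points at grain boundaries, with few indexed neighbours, get filled.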
STUDY OF BLOCKING EFFECT ELIMINATION METHODS BY MEANS OF INTRAFRAME VIDEO SEQUENCE INTERPOLATION
Directory of Open Access Journals (Sweden)
I. S. Rubina
2015-01-01
Full Text Available The paper deals with image interpolation methods and their applicability to the elimination of certain artifacts related both to the dynamic properties of objects in video sequences and to the algorithms used in the encoding pipeline. The main drawback of existing methods is their high computational complexity, which is unacceptable in video processing. Interpolation of signal samples for blocking-effect elimination at the output of transform coding is proposed as part of the study. The goal was to develop methods that improve the compression ratio and the quality of the reconstructed video data by eliminating the blocking effect on segment borders through intraframe interpolation of video sequence segments. The core of the developed methods is the application of an adaptive recursive algorithm with an adaptively sized interpolation kernel, both with and without consideration of the brightness gradient at the boundaries of objects and video sequence blocks. The theoretical part of the research draws on information theory (rate-distortion theory and data redundancy elimination), pattern recognition and digital signal processing, and probability theory. In the experimental part, the compression algorithms were implemented in software and compared with existing ones, namely a simple averaging algorithm and an adaptive central-sample interpolation algorithm. The algorithm based on adaptive selection of the interpolation kernel size increases the compression ratio by 30%, and its modified version increases the compression ratio by 35% compared with existing algorithms, while improving the quality of the reconstructed video sequence by 3% compared with compression without interpolation. The findings will be
IABP Drifting Buoy Pressure, Temperature, Position, and Interpolated Ice Velocity
National Oceanic and Atmospheric Administration, Department of Commerce — The International Arctic Buoy Programme (IABP) maintains a network of drifting buoys to provide meteorological and oceanographic data for real-time operational...
Blend Shape Interpolation and FACS for Realistic Avatar
Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila
2015-03-01
The quest to develop realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans, and advanced 3D tools has given further impetus to the rapid advancement of complex virtual human facial models. Since face-to-face communication is the most natural form of human interaction, facial animation systems have become attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors remains a challenging problem. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, etc. The mood of a particular person in the midst of a large group can be identified immediately via very subtle changes in facial expressions. Facial expressions, being a complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach that integrates blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face, while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness, and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions based on facial skin color and texture may contribute to the development of virtual reality and game environments in computer-aided graphics animation systems.
A Parallel Strategy for High-speed Interpolation of CNC Using Data Space Constraint Method
Directory of Open Access Journals (Sweden)
Shuan-qiang Yang
2013-12-01
Full Text Available A high-speed interpolation scheme using parallel computing is proposed in this paper. The interpolation method is divided into two tasks, namely, a rough task executing on the PC and a fine task on the I/O card. During the interpolation procedure, double buffers are constructed to exchange the interpolation data between the two tasks. A data space constraint method is then adopted to ensure reliable and continuous data communication between the two buffers. The proposed scheme can therefore be realized on common operating systems without real-time capability, while still achieving high-speed and high-precision motion control. Finally, an experiment is conducted on a self-developed CNC platform, and the test results verify the proposed method.
NOAA Optimum Interpolation 1/4 Degree Daily Sea Surface Temperature (OISST) Analysis, Version 2
National Oceanic and Atmospheric Administration, Department of Commerce — This high-resolution sea surface temperature (SST) analysis product was developed using an optimum interpolation (OI) technique. The SST analysis has a spatial grid...
Directory of Open Access Journals (Sweden)
Pengyun Chen
2014-01-01
Full Text Available The interpolation-based reconstruction of local underwater terrain from an underwater digital terrain map (UDTM) is an important step in building an underwater terrain matching unit and directly affects the accuracy of underwater terrain matching navigation. The Kriging method is often used for terrain interpolation, but with this method local terrain features are often lost, so the accuracy cannot meet the requirements of practical applications. We analyze the geographical features on the basis of the randomness and self-similarity of underwater terrain, extract the fractal features of local underwater terrain with the fractional Brownian motion model, and compensate for the possible errors of the Kriging method with fractal theory. We then put forward an improved Kriging interpolation method based on this fractal compensation. Interpolation-reconstruction tests show that the method simulates real underwater terrain features well and has good usability.
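The fractal compensation is specific to the paper, but the Kriging baseline it builds on is standard. Below is a minimal ordinary-kriging sketch in NumPy; the spherical variogram and its sill/range parameters are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=2.0):
    """Ordinary kriging prediction at a single point xy0.

    Uses a spherical variogram; the Lagrange-multiplier row of the
    system enforces that the weights sum to one (unbiasedness)."""
    def gamma(h):
        h = np.minimum(h / rng, 1.0)
        return sill * (1.5 * h - 0.5 * h**3)

    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)[:n]          # kriging weights
    return w @ z

# Predict the centre of a unit square from its four corners
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([3.0, 3.0, 3.0, 3.0])
pred = ordinary_kriging(xy, z, np.array([0.5, 0.5]))  # ≈ 3.0 for a constant field
```

Because the weights sum to one, a constant field is reproduced exactly, which is a quick sanity check for any kriging implementation.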
Image Interpolation via Scanning Line Algorithm and Discontinuous B-Spline
Directory of Open Access Journals (Sweden)
Cheng-ming Liu
2017-05-01
Full Text Available Image interpolation is a basic operation in image processing. Many methods have been proposed, including convolution-based methods, edge modeling methods, point spread function (PSF)-based methods, and learning-based methods. Most of them, however, have a high computational complexity and are not suitable for real-time applications, while fast methods are unable to provide artifact-free images. In this paper we describe a new image interpolation method using a scanning line algorithm that can generate C^(-1) curves or surfaces. The C^(-1) interpolation can truncate the interpolation curve at large jumps; hence, image edges can be preserved. Numerical experiments illustrate the efficiency of the novel method.
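The paper's scanning line construction is not reproduced here, but the edge-preserving idea behind C^(-1) interpolation — cutting the interpolant at a large jump instead of blending across it — can be sketched in 1-D as follows (the jump threshold is an assumed parameter):

```python
import numpy as np

def edge_preserving_interp(x, y, x_new, jump=1.0):
    """Linear interpolation that is truncated at large jumps (C^(-1) behaviour):
    inside an interval containing a jump, the nearer sample's value is held
    instead of being blended, so the edge stays sharp."""
    y_new = np.interp(x_new, x, y)
    idx = np.searchsorted(x, x_new, side='right') - 1
    idx = np.clip(idx, 0, len(x) - 2)
    big = np.abs(y[idx + 1] - y[idx]) > jump          # interval contains a jump
    left = (x_new - x[idx]) < (x[idx + 1] - x_new)    # nearer to left sample?
    return np.where(big, np.where(left, y[idx], y[idx + 1]), y_new)

# A step edge between x = 1 and x = 2 is held, not smeared
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 10.0, 10.0])
out = edge_preserving_interp(x, y, np.array([0.5, 1.2, 1.8]))  # edge kept sharp
```

Plain linear interpolation would return 2.0 and 8.0 at 1.2 and 1.8, blurring the edge over the whole interval.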
Interpolation of the discrete logarithm in a finite field of characteristic two by Boolean functions
DEFF Research Database (Denmark)
Brandstaetter, Nina; Lange, Tanja; Winterhof, Arne
2005-01-01
We obtain bounds on degree, weight, and the maximal Fourier coefficient of Boolean functions interpolating the discrete logarithm in finite fields of characteristic two. These bounds complement earlier results for finite fields of odd characteristic....
The analysis of decimation and interpolation in the linear canonical transform domain
National Research Council Canada - National Science Library
Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li
2016-01-01
.... As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile and interesting to analyze the decimation and interpolation in the LCT domain...
National Oceanic and Atmospheric Administration, Department of Commerce — This feature dataset contains the control points used to validate the accuracies of the interpolated water density rasters for the Gulf of Maine. These control...
Wu, G.; Skidmore, A.K.; Leeuw, de J.; Liu, X.; Prins, H.H.T.
2010-01-01
Measurements of photosynthetically active radiation (PAR), which are indispensable for simulating plant growth and productivity, are generally very scarce. This study aimed to compare two extrapolation and one interpolation methods for estimating daily PAR reaching the earth surface within the
CSIR Research Space (South Africa)
Bogaers, Alfred EJ
2016-10-01
Full Text Available , transferring information across a non-matching interface presents itself as a nontrivial problem. RBF interpolation, which requires no global connectivity information, provides an elegant means by which to negate any geometric discrepancies along the interface...
Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating
Chen, Liangji; Guo, Guangsong; Li, Huiying
2017-07-01
NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (computer-aided design / computer-aided manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our in-house CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. According to the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolation cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced, together with how to integrate the interpolator into our CNC system; the servo-controlling structure of the CNC system is also described. The illustrative example indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for 5-axis CNC systems.
Flores Padilla, Deyanira; Jimenez-Hernández, Hugo; Reynosa Canseco, Jaqueline
2016-09-01
Interpolation of data samples is required in many image processing methods, for example in the estimation of displacements smaller than one pixel. When calculating displacement in an image sequence with the Newton-Raphson numerical method, it is very common to use a linear interpolator because of its simplicity and speed. However, this interpolator generates functions with discontinuous derivatives, so in theory it should not be combined with Newton-Raphson, which relies on the derivative. This work presents a comparative analysis of different interpolators, along with a comparison between "real world" displacements and image displacements and their relationship, with the purpose of identifying which interpolator offers the most exact approximation in displacement estimation.
Chen, Shyi-Ming; Hsin, Wen-Chyuan
2015-07-01
In this paper, we propose a new weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on the slopes of fuzzy sets. We also propose a particle swarm optimization (PSO)-based weights-learning algorithm to automatically learn the optimal weights of the antecedent variables of fuzzy rules for weighted fuzzy interpolative reasoning. We apply the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm to deal with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems. The experimental results show that the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm outperforms the existing methods for dealing with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems.
Directory of Open Access Journals (Sweden)
Huaiqing Zhang
2014-01-01
Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis under asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. First, the fundamental period is estimated by a zero-crossing technique with fourth-order Newton interpolation; then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters are calculated by applying the FFT to the synchronized sampling data. Simulation results show that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared with local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, the RBF is a global approximation method that acquires more accurate results while consuming about the same time as Newton's method.
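As a rough illustration of the resampling step (not the paper's implementation), a Gaussian-kernel RBF interpolator can rebuild a synchronous sequence from irregular samples; the shape parameter and test signal below are arbitrary assumptions:

```python
import numpy as np

def rbf_resample(t, x, t_new, eps=16.0):
    """Gaussian RBF interpolation: fit weights on the irregular samples,
    then evaluate the interpolant on a synchronous time grid."""
    A = np.exp(-(eps * (t[:, None] - t[None, :])) ** 2)
    w = np.linalg.solve(A, x)                        # RBF weights
    B = np.exp(-(eps * (t_new[:, None] - t[None, :])) ** 2)
    return B @ w

# One period of a sine, sampled slightly irregularly, resampled uniformly
gen = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16) + gen.uniform(-0.01, 0.01, 16)
x = np.sin(2 * np.pi * t)
t_sync = np.linspace(0.05, 0.95, 32)
x_sync = rbf_resample(t, x, t_sync)
```

The RBF system is global — every sample influences every interpolated point — which is what distinguishes it from the local Newton schemes the paper compares against.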
An application of gain-scheduled control using state-space interpolation to hydroactive gas bearings
DEFF Research Database (Denmark)
Theisen, Lukas Roy Svane; Camino, Juan F.; Niemann, Hans Henrik
2016-01-01
, it is possible to design a gain-scheduled controller using multiple controllers optimised for a single frequency. Gain-scheduling strategies using the Youla parametrisation can guarantee stability at the cost of increased controller order and performance loss in the interpolation region. This paper contributes...... with a gain-scheduling strategy using state-space interpolation, which avoids both the performance loss and the increase of controller order associated with the Youla parametrisation. The proposed state-space interpolation for gain-scheduling is applied to mass imbalance rejection for a controllable gas...... bearing scheduled in two parameters. Comparisons against the Youla-based scheduling demonstrate the superiority of the state-space interpolation....
Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation
Murarasu, Alin
2012-12-01
The well-known power wall that led to multi-cores requires special techniques for speeding up applications. In this sense, parallelization plays a crucial role. Besides standard serial optimizations, techniques such as input specialization can also contribute substantially to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed, for example, in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated into fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.
Shen, J.; Han, W. L.; Ge, J.; Zhang, L. B.; Tan, H.
2017-09-01
Interpolation methods have a significant impact on the accuracy of a digital elevation model (DEM) built from contours, which are among the most frequently employed data sources. In this paper, an interpolation method is presented that builds a DEM from contour lines by fusing morphological reconstruction with distance transformation with obstacles. In particular, morphological reconstruction is used to obtain the elevation values of the higher and lower contour lines bounding any spatial point between two contour lines, and distance transformation with obstacles is used to obtain the geodesic distances from that point to the higher and lower contour lines respectively. Finally, linear interpolation along the water flow line is used to obtain the elevation values of the pixels to be interpolated. The experiment demonstrates the feasibility of the proposed method.
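The final linear-interpolation step described in the record reduces to a one-line formula; a sketch with illustrative numbers (the geodesic distances would come from the distance transform with obstacles):

```python
def contour_elevation(z_low, z_high, d_low, d_high):
    """Elevation of a point between two contours, linearly interpolated
    along the water flow line using geodesic distances to each contour."""
    return z_low + d_low / (d_low + d_high) * (z_high - z_low)

# A point 3 m (geodesic) from the 100 m contour and 7 m from the 110 m contour
print(contour_elevation(100.0, 110.0, 3.0, 7.0))  # → 103.0
```

The point sits 30% of the geodesic way from the lower to the higher contour, so it receives 30% of the 10 m elevation difference.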
Interpolation Filter Design for Hearing-Aid Audio Class-D Output Stage Application
DEFF Research Database (Denmark)
Pracný, Peter; Bruun, Erik; Llimos Muntal, Pere
2012-01-01
This paper deals with the design of a digital interpolation filter for a 3rd-order multi-bit ΣΔ modulator with an over-sampling ratio OSR = 64. The interpolation filter and the ΣΔ modulator are part of the back-end of an audio signal processing system in a hearing-aid application. The aim of this paper...... is to compare this design to designs presented in other state-of-the-art works, ranging from hi-fi audio to hearing aids. Through this comparison, trends and tradeoffs in interpolation filter design are identified and hearing-aid specifications are derived. The possibilities for hardware reduction...... in the interpolation filter are investigated. The proposed design simplifications result in the least hardware-demanding combination of oversampling ratio, number of stages, and number of filter taps among a number of filters reported for audio applications....
Lee, Seung-Jae; Serre, Marc L; van Donkelaar, Aaron; Martin, Randall V; Burnett, Richard T; Jerrett, Michael
2012-12-01
A better understanding of the adverse health effects of chronic exposure to fine particulate matter (PM2.5) requires accurate estimates of PM2.5 variation at fine spatial scales. Remote sensing has emerged as an important means of estimating PM2.5 exposures, but relatively few studies have compared remote-sensing estimates to those derived from monitor-based data. We evaluated and compared the predictive capabilities of remote sensing and geostatistical interpolation. We developed a space-time geostatistical kriging model to predict PM2.5 over the continental United States and compared resulting predictions to estimates derived from satellite retrievals. The kriging estimate was more accurate for locations that were about 100 km from a monitoring station, whereas the remote sensing estimate was more accurate for locations that were > 100 km from a monitoring station. Based on this finding, we developed a hybrid map that combines the kriging and satellite-based PM2.5 estimates. We found that for most of the populated areas of the continental United States, geostatistical interpolation produced more accurate estimates than remote sensing. The differences between the estimates resulting from the two methods, however, were relatively small. In areas with extensive monitoring networks, the interpolation may provide more accurate estimates, but in the many areas of the world without such monitoring, remote sensing can provide useful exposure estimates that perform nearly as well.
An interpolation method of b-spline surface for hull form design
Directory of Open Access Journals (Sweden)
Hyung-Bae Jung
2010-12-01
Full Text Available This paper addresses the problem of B-spline surface interpolation of scattered points for hull form design. The points are not arbitrarily scattered, but can be arranged in a series of contours permitting a variable number of points per contour. A new approach that allows a different parameter value for each point on the same contour has been adopted. The usefulness and quality of the interpolation are demonstrated with experimental results.
Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul
2013-01-01
Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, where a covariate such as elevation (from a Digital Elevation Model, DEM) is often used to improve interpolation accuracy. One key question that little research has addressed is which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude), and we examined the uncertainty of the interpolated climate surface. Thin Plate Spline (TPS) was used as the interpolation method, since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). Tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability in high-elevation regions, such as in the Sierra
Directory of Open Access Journals (Sweden)
Peilu Liu
2017-10-01
Full Text Available In order to improve the time delay accuracy of ultrasonic phased array focusing, we analyzed the original interpolation Cascaded Integrator-Comb (CIC) filter and propose a parallel algorithm for an 8× interpolation CIC filter, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the parallel algorithm for arbitrary-factor interpolation CIC filters and establish an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while computation remains very fast. Considering the existing problems of the CIC filter, we also compensated it: the compensated CIC filter's pass band is flatter, its transition band is steeper, and its stop band attenuation is larger. Finally, we verified the feasibility of the algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load, and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
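The parallel decomposition is the paper's contribution, but the underlying CIC interpolator (comb stages at the low rate, zero-stuffing, integrator stages at the high rate) is a standard structure and can be sketched as follows; the stage count and factor below mirror the paper's 8× example but are otherwise illustrative:

```python
import numpy as np

def cic_interpolate(x, R=8, N=3):
    """N-stage CIC interpolation by factor R (differential delay M = 1):
    comb stages at the low rate, zero-stuffing, integrators at the high rate."""
    y = np.asarray(x, dtype=float)
    for _ in range(N):                       # comb stages: y[n] - y[n-1]
        y = np.concatenate(([y[0]], np.diff(y)))
    up = np.zeros(len(y) * R)
    up[::R] = y                              # zero-stuffing upsampler
    for _ in range(N):                       # integrator stages (running sums)
        up = np.cumsum(up)
    return up / R ** (N - 1)                 # normalise the DC gain R**(N-1)

# A single comb/integrator pair (N = 1) reduces to sample-and-hold
assert np.array_equal(cic_interpolate(np.arange(4.0), R=8, N=1),
                      np.repeat(np.arange(4.0), 8))
```

With N = 3 the filter is equivalent to convolving the zero-stuffed signal with a triple boxcar, so a constant input settles to a constant output once the transient (of length N·(R−1)) has passed.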
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation.
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and Fourier transform techniques, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering these factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software was developed and a series of validation experiments was carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
Teegavarapu, Ramesh S. V.; Meskele, Tadesse; Pathak, Chandra S.
2012-03-01
Geo-spatial interpolation methods are often necessary when precipitation estimates available from multisensor source data on a specific spatial grid need to be transformed to another grid with a different spatial resolution or orientation. This study involves the development and evaluation of spatial interpolation (weighting) methods for transforming hourly multisensor precipitation estimates (MPE) available on the 4×4 km2 HRAP (hydrologic rainfall analysis project) grid to a Cartesian 2×2 km2 radar (NEXRAD: NEXt generation RADar) grid. Six spatial interpolation weighting methods are developed and evaluated to assess their suitability for transforming precipitation estimates in space and time. The methods use distances and areal extents of intersection segments of the grids as weights in the interpolation schemes. They were applied to transform precipitation estimates from HRAP to NEXRAD grids in the South Florida Water Management District (SFWMD) region in South Florida, United States. A total of 192 rain gauges were used as ground truth to assess the quality of the precipitation estimates obtained from these interpolation methods; the rain gauge data in the SFWMD region were also used for radar bias correction. To aid the assessment, several error measures were calculated and appropriate weighting functions were developed to select the most accurate method for the transformation. Three of the six local interpolation methods were found to be competitive, and inverse distance weighting based on the four nearest neighbors (grids) was found to be the best for the transformation of the data.
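The best-performing scheme, inverse distance weighting over the four nearest grids, can be sketched generically as below (the coordinates and the distance power are illustrative assumptions, not the study's exact configuration):

```python
import numpy as np

def idw_4nn(src_xy, src_val, dst_xy, power=2.0):
    """Inverse-distance-weighted value at each destination point, computed
    from its 4 nearest source grid centres."""
    d = np.linalg.norm(dst_xy[:, None, :] - src_xy[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :4]               # 4 nearest neighbours
    dn = np.take_along_axis(d, nn, axis=1)
    w = 1.0 / np.maximum(dn, 1e-12) ** power        # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)               # weights sum to one
    return (w * src_val[nn]).sum(axis=1)

# 3x3 source grid holding a constant rainfall of 2.5 mm
gx, gy = np.meshgrid(np.arange(3.0), np.arange(3.0))
src_xy = np.column_stack([gx.ravel(), gy.ravel()])
src_val = np.full(9, 2.5)
out = idw_4nn(src_xy, src_val, np.array([[0.7, 1.3]]))  # ≈ 2.5, field preserved
```

Because the weights are normalized, a spatially constant field passes through unchanged — a useful mass-consistency check when regridding precipitation.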
Directory of Open Access Journals (Sweden)
Mauricio Castro Franco
2017-07-01
Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information on these effects, soil data are often difficult to predict using conventional spatial interpolation techniques. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using conditioned Latin hypercube sampling as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indexes calculated from a digital elevation model (DEM). The random forest algorithm was used to select the most important terrain index for each soil property. Error metrics were used to validate the interpolations against cross-validation. Results: The results support the underlying assumption that conditioned Latin hypercube sampling adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in predicting most of the evaluated soil properties. Conclusions: Mixed interpolation techniques incorporating auxiliary soil information and terrain indexes provided a significant improvement in the prediction of soil properties in comparison with the other techniques.
Zheng, Jingjing; Frisch, Michael J
2017-12-12
An efficient geometry optimization algorithm based on interpolated potential energy surfaces with iteratively updated Hessians is presented in this work. At each step of geometry optimization (including both minimization and transition structure search), an interpolated potential energy surface is properly constructed by using the previously calculated information (energies, gradients, and Hessians/updated Hessians), and Hessians of the two latest geometries are updated in an iterative manner. The optimized minimum or transition structure on the interpolated surface is used for the starting geometry of the next geometry optimization step. The cost of searching the minimum or transition structure on the interpolated surface and iteratively updating Hessians is usually negligible compared with most electronic structure single gradient calculations. These interpolated potential energy surfaces are often better representations of the true potential energy surface in a broader range than a local quadratic approximation that is usually used in most geometry optimization algorithms. Tests on a series of large and floppy molecules and transition structures both in gas phase and in solutions show that the new algorithm can significantly improve the optimization efficiency by using the iteratively updated Hessians and optimizations on interpolated surfaces.
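The record does not give the Hessian update formula; one widely used iterative update in geometry optimizers is BFGS, sketched here as an illustrative stand-in for whatever scheme the paper actually uses:

```python
import numpy as np

def bfgs_hessian_update(H, s, y):
    """BFGS update of an approximate Hessian H from a geometry step s
    and the corresponding gradient change y = g_new - g_old."""
    Hs = H @ s
    return (H
            - np.outer(Hs, Hs) / (s @ Hs)   # remove old curvature along s
            + np.outer(y, y) / (y @ s))     # insert observed curvature

# The update enforces the secant condition H_new @ s == y by construction
H = np.eye(2)
s = np.array([0.3, -0.1])
y = np.array([0.6, -0.4])                   # gradient change along the step
H_new = bfgs_hessian_update(H, s, y)
```

Satisfying the secant condition is exactly what lets an interpolated surface built from such Hessians reproduce the most recent energy/gradient information without a new second-derivative calculation.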
Emergent Public Spaces: Generative Activities on Function Interpolation
Carmona, Guadalupe; Dominguez, Angeles; Krause, Gladys; Duran, Pablo
2011-01-01
This study highlights ways in which generative activities may be coupled with network-based technologies in the context of teacher preparation to enhance preservice teachers' cognizance of how their own experience as students provides a blueprint for the learning environments they may need to generate in their future classrooms. In this study, the…
Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh
2011-12-01
Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
Gandevia, S C; McNeil, C J; Carroll, T J; Taylor, J L
2013-03-01
The assessment of voluntary activation of human muscles usually depends on measurement of the size of the twitch produced by an interpolated nerve or cortical stimulus. In many forms of fatiguing exercise the superimposed twitch increases and thus voluntary activation appears to decline. This is termed 'central' fatigue. Recent studies on isolated mouse muscle suggest that a peripheral mechanism related to intracellular calcium sensitivity increases interpolated twitches. To test whether this problem developed with human voluntary contractions we delivered maximal tetanic stimulation to the ulnar nerve (≥60 s at physiological motoneuronal frequencies, 30 and 15 Hz). During the tetani (at 30 Hz) in which the force declined by 42%, the absolute size of the twitches evoked by interpolated stimuli (delivered regularly or only in the last second of the tetanus) diminished progressively to less than 1%. With stimulation at 30 Hz, there was also a marked reduction in size and area of the interpolated compound muscle action potential (M wave). With a 15 Hz tetanus, a progressive decline in the interpolated twitch force also occurred (to ∼10%) but did so before the area of the interpolated M wave diminished. These results indicate that the increase in interpolated twitch size predicted from the mouse studies does not occur. Diminution in superimposed twitches occurred whether or not the M wave indicated marked impairment at sarcolemmal/t-tubular levels. Consequently, the increase in superimposed twitch, which is used to denote central fatigue in human fatiguing exercise, is likely to reflect low volitional drive to high-threshold motor units, which stop firing or are discharging at low frequencies.
On the Quality of Velocity Interpolation Schemes for Marker-in-Cell Method and Staggered Grids
Pusok, Adina E.; Kaus, Boris J. P.; Popov, Anton A.
2017-03-01
The marker-in-cell method is generally considered a flexible and robust method to model the advection of heterogeneous non-diffusive properties (i.e., rock type or composition) in geodynamic problems. In this method, Lagrangian points carrying compositional information are advected with the ambient velocity field on an Eulerian grid. However, velocity interpolation from grid points to marker locations is often performed without considering the divergence of the velocity field at the interpolated locations (i.e., it is non-conservative). Such interpolation schemes can induce non-physical clustering of markers when strong velocity gradients are present (Journal of Computational Physics 166:218-252, 2001), and this may eventually result in empty grid cells, a serious numerical violation of the marker-in-cell method. To remedy this at low computational cost, Jenny et al. (Journal of Computational Physics 166:218-252, 2001) and Meyer and Jenny (Proceedings in Applied Mathematics and Mechanics 4:466-467, 2004) proposed a simple, conservative velocity interpolation scheme for 2-D staggered grids, while Wang et al. (Geochemistry, Geophysics, Geosystems 16(6):2015-2023, 2015) extended the formulation to 3-D finite element methods. Here, we adapt this formulation for 3-D staggered grids (correction interpolation) and we report on the quality of various velocity interpolation methods for 2-D and 3-D staggered grids. We test the interpolation schemes in combination with different advection schemes on incompressible Stokes problems with strong velocity gradients, which are discretized using a finite difference method. Our results suggest that a conservative formulation reduces the dispersion and clustering of markers, minimizing the need for unphysical marker control in geodynamic models.
Hirota, Ryuichi; Kato, Junichi; Morita, Hiromu; Kuroda, Akio; Ikeda, Tsukasa; Takiguchi, Noboru; Ohtake, Hisao
2002-03-01
The cbbL and cbbS genes encoding form I ribulose-1,5-bisphosphate carboxylase/oxygenase (RubisCO) large and small subunits in the ammonia-oxidizing bacterium Nitrosomonas sp. strain ENI-11 were cloned and sequenced. The deduced gene products, CbbL and CbbS, had 93 and 87% identity with Thiobacillus intermedius CbbL and Nitrobacter winogradskyi CbbS, respectively. Expression of cbbL and cbbS in Escherichia coli led to the detection of RubisCO activity in the presence of 0.1 mM isopropyl-beta-D-thiogalactopyranoside (IPTG). To our knowledge, this is the first paper to report the genes involved in the carbon fixation reaction in chemolithotrophic ammonia-oxidizing bacteria.
Directory of Open Access Journals (Sweden)
Goran M. Lazić
2010-10-01
Full Text Available Anti-tank guided missiles are designed to hit and destroy heavily armoured tanks and other armoured fighting vehicles. This review offers a historical and technical overview (development of the missiles through generations) and basic data about the combat and operational use of this type of weapon as possessed by the countries of Western Europe, Israel and India. The review also lists the prices of some individual missiles and the development trends in this branch of armament. Anti-tank guided systems differ in size, from small shoulder-launched missile weapons carried by a single person to complex weapon systems (crew-served, vehicle-mounted and airborne systems). The first generation of anti-tank guided missiles are manually guided MCLOS (Manual Command to Line of Sight) projectiles requiring an operator to guide and steer them to a target with a joystick. Vickers Vigilant is a British wire-guided anti-tank missile produced in 1956. The Bantam (Bofors Anti-Tank Missile), or Robot 53 (RB 53), is a Swedish wire-guided anti-tank missile produced in 1963. Cobra is a German-Swiss product which entered operational use in 1956. It was replaced by the Cobra 2000 and Mamba systems, which are first-generation anti-tank guided missiles with improved guidance and electronics. ENTAC (Engin téléguidé anti-char), or MGM-32A, is a French wire-guided anti-tank missile, widely deployed and still in operational use in many
Directory of Open Access Journals (Sweden)
Labant Slavomír
2013-03-01
Full Text Available In current geodetic practice it is essential to use modern geodetic instruments and various CAD (Computer Aided Design) software for processing and visualizing spatial data. This paper deals with the geodetic surveying of the Kecerovce open-pit quarry in order to determine the volume of unexploited andesite reserves for reopening the quarry and resuming andesite extraction. The quarry is situated at the foot of the Slanské vrchy mountains. Auxiliary geodetic points and the quarry surroundings were determined using GNSS technology with the RTK method. Detailed surveying of the quarry was carried out with a total station. The spatial data obtained from the measurements were processed in the respective proprietary software. The resulting spatial coordinates were then imported into graphical-computational software for further processing and visualization. Besides 3D surface modelling and visualization, this graphical-computational software also offers analysis, in particular the determination of volume data representing various aspects for assessing activities in the given industries, with possible further development.
Lung function interpolation by means of neural-network-supported analysis of respiration sounds
Oud, M
Respiration sounds of individual asthmatic patients were analysed as part of the development of a method for computerised recognition of the degree of airways obstruction. Respiration sounds were recorded during laboratory sessions of allergen-provoked airways obstruction, during several stages
1980-04-01
dubbed their whole body of techniques Kriging; see Delfiner (1975) and further references given there. In another paper (Cabannes 1979b), meant to … predictors and related predictors. Dept. of Mathematics, M.I.T., report #16. (3) Delfiner, P. (1975): Linear estimation of nonstationary spatial
The Interpolation Method for Estimating the Above-Ground Biomass Using Terrestrial-Based Inventory
Directory of Open Access Journals (Sweden)
I Nengah Surati Jaya
2014-08-01
Full Text Available This paper examined several methods for interpolating biomass in logged-over dry land forest using terrestrial-based forest inventory in Labanan, East Kalimantan and Lamandau, Kotawaringin Barat, Central Kalimantan. The plot distances examined were 1,000−1,050 m for Labanan and 1,000−899 m for Lamandau. The main objective of this study was to obtain the interpolation method giving the most accurate prediction of the spatial distribution of forest biomass for dry land forest. Two main interpolation methods were examined: (1) a deterministic approach using the IDW method and (2) a geo-statistical approach using Kriging with spherical, circular, linear, exponential, and Gaussian models. The study results at both sites consistently showed that the IDW method was better than the Kriging method for estimating the spatial distribution of biomass. The validation results using a chi-square test showed that the IDW interpolation provided accurate biomass estimation. Using the percentage of mean deviation value (MD, %), it was also recognized that the IDW with power parameter (p) of 2 provided a relatively low value, i.e., only 15% for Labanan, East Kalimantan Province and 17% for Lamandau, Kotawaringin Barat, Central Kalimantan Province. In general, the IDW interpolation method provided better results than Kriging, with the Kriging method giving MD (%) of about 27% and 21% for the Lamandau and Labanan sites, respectively.
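The winning deterministic approach, inverse distance weighting with p = 2, has a compact generic form. The sketch below is a textbook IDW estimator, not the authors' inventory pipeline; the `eps` snap-to-sample rule is an implementation assumption.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, p=2.0, eps=1e-12):
    """Inverse distance weighting: each prediction is a distance-weighted
    mean of the observations, with power parameter p (p=2 as in the study)."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                  # query coincides with a sample
            out[i] = z_known[d.argmin()]
            continue
        w = 1.0 / d**p
        out[i] = np.dot(w, z_known) / w.sum()
    return out
```

A point midway between two samples receives equal weights, so the estimate is their mean; larger p makes the estimate more local.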
Directory of Open Access Journals (Sweden)
Wei Liu
Full Text Available One important method to obtain continuous surfaces of soil properties from point samples is spatial interpolation. In this paper, we propose a method that combines ensemble learning with ancillary environmental information for improved interpolation of soil properties (hereafter, EL-SP). First, we calculated the trend value for soil potassium contents at the Qinghai Lake region in China based on measured values. Then, based on soil type, geology type, land use type, and slope data, the remaining residual was simulated with the ensemble learning model. Next, the EL-SP method was applied to interpolate soil potassium contents at the study site. To evaluate the utility of the EL-SP method, we compared its performance with other interpolation methods including universal kriging, inverse distance weighting, ordinary kriging, and ordinary kriging combined with geographic information. Results show that EL-SP had a lower mean absolute error and root mean square error than the data produced by the other models tested in this paper. Notably, the EL-SP maps can describe more locally detailed information and more accurate spatial patterns for soil potassium content than the other methods because of the combined use of different types of environmental information; these maps are capable of showing abrupt boundary information for soil potassium content. Furthermore, the EL-SP method not only reduces prediction errors, but also complements other environmental information, which makes the spatial interpolation of soil potassium content more reasonable and useful.
Liu, Wei; Du, Peijun; Wang, Dongchen
2015-01-01
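The two-stage structure of EL-SP (a fitted trend plus an interpolated residual) can be sketched generically. For brevity the ensemble learner is replaced here by an ordinary least-squares trend on a single ancillary covariate, and the residuals are interpolated with IDW; both substitutions are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trend_plus_residual(xy, cov, z, xy_q, cov_q, p=2.0):
    """Two-stage interpolation sketch: (1) fit a linear trend on an
    ancillary covariate (standing in for the ensemble learner),
    (2) interpolate the residuals with IDW and add them back."""
    A = np.column_stack([np.ones(len(cov)), cov])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ beta
    out = np.empty(len(xy_q))
    for i, q in enumerate(np.asarray(xy_q, float)):
        d = np.linalg.norm(np.asarray(xy, float) - q, axis=1)
        if d.min() < 1e-12:
            r = resid[d.argmin()]
        else:
            w = 1.0 / d**p
            r = np.dot(w, resid) / w.sum()
        out[i] = beta[0] + beta[1] * cov_q[i] + r
    return out
```

When the data are exactly linear in the covariate, the residuals vanish and the prediction reduces to the trend alone.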
DrawFromDrawings: 2D Drawing Assistance via Stroke Interpolation with a Sketch Database.
Matsui, Yusuke; Shiratori, Takaaki; Aizawa, Kiyoharu
2017-07-01
We present DrawFromDrawings, an interactive drawing system that provides users with visual feedback for assistance in 2D drawing using a database of sketch images. Following the traditional imitation and emulation training from art education, DrawFromDrawings enables users to retrieve and refer to a sketch image stored in a database and provides them with various novel strokes as suggestive or deformation feedback. Given regions of interest (ROIs) in the user and reference sketches, DrawFromDrawings detects as-long-as-possible (ALAP) stroke segments and the correspondences between the user and reference sketches that are the key to computing seamless interpolations. The stroke-level interpolations are parametrized with the user strokes, the reference strokes, and new strokes created by warping the reference strokes based on the user and reference ROI shapes; a user study indicated that the interpolation could produce various reasonable strokes varying in shape and complexity. DrawFromDrawings allows users either to replace their strokes with interpolated strokes (deformation feedback) or to overlay interpolated strokes onto their strokes (suggestive feedback). Further user studies on the feedback modes indicated that the suggestive feedback enabled drawers to develop and render their ideas using their own stroke style, whereas the deformation feedback enabled them to finish the sketch composition quickly.
The effect of interpolation methods in temperature and salinity trends in the Western Mediterranean
Directory of Open Access Journals (Sweden)
M. VARGAS-YANEZ
2012-04-01
Full Text Available Temperature and salinity data in the historical record are scarce and unevenly distributed in space and time, and the estimation of linear trends is sensitive to different factors. In the case of the Western Mediterranean, previous works have studied the sensitivity of these trends to the use of bathythermograph data, the averaging methods, or the way in which gaps in time series are dealt with. In this work, a new factor is analysed: the effect of data interpolation. Temperature and salinity time series are generated by averaging existing data over certain geographical areas and also by means of interpolation. Linear trends from both types of time series are compared. There are some differences between the two estimations for some layers and geographical areas, while in other cases the results are consistent. Those results which neither depend on the use of interpolated or non-interpolated data nor are influenced by the data analysis methods can be considered robust. Those results influenced by the interpolation process or by the factors analysed in previous sensitivity tests are not considered robust.
Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.
Mei, Gang; Xu, Nengxiong; Xu, Liangliang
2016-01-01
This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on the modern Graphics Processing Unit (GPU). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, achieved by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points must be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, an evenly spaced grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of a kNN search stage and a weighted interpolation stage. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
Directory of Open Access Journals (Sweden)
Tao Chen
2017-05-01
Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with the inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, the HEC-HMS hydrological model was used to calculate streamflow and to evaluate the impact of the rainfall interpolation methods on the model results. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and the highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
Terzer, S.; Araguas, L.; Aggarwal, P. K.
2012-12-01
Spatial interpolation of point-based precipitation isotope measurements is a common task required to generate 'isoscapes' which are used for various applications in hydrology, climatology, ecology and forensics. While various prediction methods have been explored (employing interpolation and/or multiple regression), one of their basic objectives is to identify a globally suitable parameterization. On the other hand, regional models have been developed to improve interpolation on a limited spatial extent. We have developed a new approach based on climate zones to 'regionalize' regression parameters, building a global prediction model based on a set of regionally adjusted multiple regression/interpolation procedures. A climate zone boundary fuzzification technique was used to smooth out climate zone transitions. Evaluation of the new model in comparison with a globally fitted one shows that the regionalized model has a lower model uncertainty at similar confidence intervals. The resulting global interpolation thus provides an improved and reliable map of precipitation isotopes with significant differences in predicted values in most parts of the world.
2014-11-19
The Generalized Empirical Interpolation Method: stability theory on Hilbert spaces with an application to the Stokes equation. Maday, Y.; Mula, … interpolant (the Lebesgue constant) by relating it to an inf-sup problem in the case of Hilbert spaces. In the second part of the paper, it will be explained …
On analysis-based two-step interpolation methods for randomly sampled seismic data
Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao
2013-02-01
Interpolating the missing traces of regularly or irregularly sampled seismic records is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform-domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms: the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm, derived from the IST (iterative shrinkage-thresholding) and POCS (projection onto convex sets) algorithms, respectively. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of the algorithms using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
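The plain (unaccelerated) POCS iteration that such schemes speed up alternates a sparsity projection in a transform domain with re-insertion of the observed traces. The sketch below uses a 2D FFT and a hard threshold relative to the current spectral maximum; both choices are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=100, thresh=0.1):
    """Basic POCS trace interpolation: alternate between sparsifying
    in the Fourier domain (hard threshold) and re-inserting the
    observed samples. mask is 1 where data were recorded, 0 where
    traces are missing."""
    x = data * mask
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X[np.abs(X) < thresh * np.abs(X).max()] = 0.0  # hard threshold
        x = np.real(np.fft.ifft2(X))
        x = data * mask + x * (1 - mask)               # enforce observed samples
    return x
```

For a record dominated by a few plane-wave events (a sparse spectrum), the missing traces are filled in while the observed traces are reproduced exactly.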
Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling
Directory of Open Access Journals (Sweden)
Hyojin Lee
2015-01-01
Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through K-nearest neighborhood (KNN) regression to the five different kernel estimations, and their performance in simulating streamflow using the Soil and Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
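A kernel estimate of a missing gauge value is a kernel-weighted mean of neighbouring gauges. The sketch below uses the Epanechnikov kernel, one of the five compared in the study; the distance inputs and the bandwidth are assumed, illustrative parameters rather than the paper's setup.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75*(1 - u^2) on [-1, 1], zero outside."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - np.asarray(u, float) ** 2), 0.0)

def kernel_estimate(d, z, bandwidth):
    """Estimate a missing gauge value as a kernel-weighted mean of
    neighbouring gauges, where d holds the distances to the neighbours.
    The distance metric and bandwidth choice are illustrative."""
    w = epanechnikov(np.asarray(d, float) / bandwidth)
    if w.sum() == 0:
        raise ValueError("no neighbour inside the bandwidth")
    return float(np.dot(w, z) / w.sum())
```

Neighbours at equal distance get equal weights, recovering the plain mean; neighbours beyond the bandwidth contribute nothing.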
Hosseini, Vahid Reza; Shivanian, Elyas; Chen, Wen
2015-02-01
In this article, a general type of two-dimensional time-fractional telegraph equation, defined in the Caputo derivative sense for 1 < α ≤ 2, is considered and analyzed by a method based on the Galerkin weak form and local radial point interpolant (LRPI) approximation, subject to given appropriate initial and Dirichlet boundary conditions. In the proposed method, the so-called meshless local radial point interpolation (MLRPI) method, a meshless Galerkin weak form is applied to the interior nodes while the meshless collocation method is used for the nodes on the boundary, so the Dirichlet boundary condition is imposed directly. The point interpolation method is used to construct shape functions from radial basis functions. The MLRPI method requires no background integration cells, so all integrations are carried out locally over small quadrature domains of regular shapes, such as circles or squares. Two numerical examples are presented and satisfactory agreement is achieved.
The Interpolating Element-Free Galerkin Method for 2D Transient Heat Conduction Problems
Directory of Open Access Journals (Sweden)
Na Zhao
2014-01-01
Full Text Available An interpolating element-free Galerkin (IEFG) method is presented for transient heat conduction problems. The shape function in the moving least-squares (MLS) approximation does not satisfy the Kronecker delta property, so an interpolating moving least-squares (IMLS) method is discussed first. Combining the shape function constructed by the IMLS method with the Galerkin weak form of the 2D transient heat conduction problem then yields the interpolating element-free Galerkin (IEFG) method for transient heat conduction problems, and the corresponding formulae are obtained. The main advantage of this approach over the conventional meshless method is that essential boundary conditions can be applied directly. Numerical results show that the IEFG method has high computational accuracy.
A study of interpolation method in diagnosis of carpal tunnel syndrome
Directory of Open Access Journals (Sweden)
Alireza Ashraf
2013-01-01
Full Text Available Context: The low correlation between patients' signs and symptoms of carpal tunnel syndrome (CTS) and the results of electrodiagnostic tests makes the diagnosis challenging in mild cases. Interpolation is a mathematical method for finding the median nerve conduction velocity (NCV) exactly at the carpal tunnel site. Therefore, it may be helpful in the diagnosis of CTS in patients with equivocal test results. Aim: The aim of this study is to evaluate the interpolation method as a CTS diagnostic test. Settings and Design: Patients with two or more clinical symptoms and signs of CTS in the median nerve territory, with 3.5 ms ≤ distal median sensory latency < 4.6 ms, recruited from those attending our electrodiagnostic clinics, and age-matched healthy control subjects were enrolled in the study. Materials and Methods: Median compound motor action potential and median sensory nerve action potential latencies were measured with a MEDLEC SYNERGY VIASIS electromyography system, and conduction velocities were calculated by both the routine method and the interpolation technique. Statistical Analysis Used: Chi-square and Student's t-test were used for comparing group differences. Cut-off points were calculated using the receiver operating characteristic curve. Results: A sensitivity of 88%, specificity of 67%, and positive predictive value (PPV) and negative predictive value (NPV) of 70.8% and 84.7% were obtained for median motor NCV, and a sensitivity of 98.3%, specificity of 91.7%, and PPV and NPV of 91.9% and 98.2% were obtained for median sensory NCV with the interpolation technique. Conclusions: The median motor interpolation method is a good technique, but it has lower sensitivity and specificity than the median sensory interpolation method.
Rainfall Interpolation and Uncertainty Assessment at different Temporal and Spatial Scales
Bárdossy, A.; Pegram, G.
2012-04-01
Spatial interpolation of rainfall over different time and spatial scales is necessary in many applications of hydrometeorology, including (i) catchment modelling, (ii) blending/conditioning of radar-rainfall images and (iii) correction of remote sensing estimates of rainfall (for example using TRMM), which are known to be biased, to name three. The specific problems encountered in rainfall interpolation include:
• the large number of calculations which need to be performed automatically
• the quantification of the influence of topography, usually the most influential of the exogenous variables
• how to use observed zero (dry) values in interpolation, because their proportion increases at shorter time scales
• the need to estimate a reasonable uncertainty of the modelled point/pixel distributions
• the difficulty of estimating the uncertainty of accumulations over a range of spatial scales
The approaches used and described in the presentation employ the variables rainfall and altitude. The methods of interpolation, restricted to the 10 controls neighbouring the target, include (i) Ordinary Kriging of the rainfall without altitude, (ii) External Drift Kriging with altitude as an exogenous variable, and, less conventionally, (iii) truncated Gaussian copulas and v-copulas, both omitting and including the altitudes of the control stations as well as that of the target. It is found that truncated Gaussian copulas, with the target's and all control stations' altitudes included as exogenous variables, produce the lowest mean square error in cross-validation and, as a bonus, model with the least bias. In contrast, the uncertainty of interpolation is better described by the v-copulas, but the Gaussian copulas have a computational advantage (by three orders of magnitude) which justifies their use in practice. It turns out that the uncertainty estimates of the OK and EDK interpolants are not competitive at any time scale, from daily to annual.
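Ordinary Kriging, method (i) above, solves a small linear system for weights that sum to one. A minimal single-target sketch is shown below, assuming a spherical variogram with made-up nugget/sill/range parameters (the copula-based methods are beyond a short example).

```python
import numpy as np

def spherical(h, nugget=0.0, sill=1.0, rng=10.0):
    """Spherical variogram model (an assumed parameterization)."""
    h = np.asarray(h, float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0, 0.0, g))

def ordinary_kriging(xy, z, q, gamma=spherical):
    """Ordinary kriging at one target point: solve for weights that
    minimize the estimation variance subject to sum(w) = 1
    (unbiasedness), via the standard system with a Lagrange multiplier."""
    xy = np.asarray(xy, float)
    n = len(xy)
    D = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(D)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - np.asarray(q, float), axis=1))
    w = np.linalg.solve(A, b)[:n]
    return float(np.dot(w, z)), w
```

For a target midway between two stations, symmetry forces equal weights of 0.5, and the estimate is the station mean.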
Gorji, Taha; Sertel, Elif; Tanik, Aysegul
2017-12-01
Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate and well-distributed, spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which bears ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. Besides, soil salinization due to both human-induced activities and natural factors has exacerbated conditions for agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both the LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data was used for interpolation modelling and the remainder for validation of the predicted results. The relationship between the validation points and their corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps were generated from the two interpolation methods for the soil organic matter, phosphorus, lime and boron parameters
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
The facial animation in terms of 3D facial data is well supported by laser scanning and advanced 3D tools for complex facial model production. However, this approach still lacks facial expression based on emotional condition. Facial skin colour is needed to enhance the effect of facial expression, as it is closely related to human emotion. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are close to genuine human expressions and also enhance the facial expression of the virtual human.
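Linear and bilinear interpolation, the two techniques named in the title, reduce to one and three `lerp` calls respectively. A minimal sketch follows; the idea of the four corners standing in for colour values of emotional key states is an illustrative setup, not the paper's data.

```python
def lerp(a, b, t):
    """Linear interpolation between two values for t in [0, 1]."""
    return a + (b - a) * t

def bilerp(c00, c10, c01, c11, tx, ty):
    """Bilinear interpolation: two lerps along x, then one along y.
    The four corners could be per-channel skin-colour values for
    four key expression states (assumed example)."""
    return lerp(lerp(c00, c10, tx), lerp(c01, c11, tx), ty)
```

Setting tx = ty = 0 reproduces the first corner exactly, so the interpolation passes through all key states; intermediate parameters blend smoothly between them.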
Barycentric Interpolation and Exact Integration Formulas for the Finite Volume Element Method
Voitovich, Tatiana V.; Vandewalle, Stefan
2008-09-01
This contribution concerns the construction of a simple and effective technology for the exact integration of interpolation polynomials arising when discretizing partial differential equations by the finite volume element method on simplicial meshes. It is based on the element-wise representation of the local shape functions through barycentric coordinates (barycentric interpolation) and the introduction of classes of integration formulas for the exact integration of generic monomials of barycentric coordinates over the geometrical shapes defined by a barycentric dual mesh. Numerical examples are presented that illustrate the validity of the technology.
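The exact integration of monomials of barycentric coordinates mentioned above has a classical closed form over a triangle; dual-mesh shapes can be handled by decomposing them into such simplices. A sketch of the triangle-level formula (the decomposition step is not shown):

```python
from math import factorial

def barycentric_monomial_integral(a, b, c, area):
    """Exact integral of l1^a * l2^b * l3^c over a triangle of the
    given area, using the classical closed form
        a! * b! * c! / (a + b + c + 2)! * 2 * area.
    Closed forms like this underlie exact integration of barycentric
    shape functions."""
    return factorial(a) * factorial(b) * factorial(c) \
        / factorial(a + b + c + 2) * 2.0 * area
```

Sanity checks: the constant monomial integrates to the area, a single coordinate integrates to area/3 (its mean over the triangle is 1/3), and l1*l2 integrates to area/12.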
Zhang, Chen; Li, Dahai; Li, Mengyang; E, Kewei
2017-10-01
A novel wavefront reconstruction algorithm for the radial shearing interferometer (RSI) is proposed in this paper. Based on the shearing relationship of the RSI, an interpolation coefficient matrix is established from the radial shearing ratio and the number of discrete points of the test wavefront. Accordingly, the expanded wavefront is characterized by the interpolation coefficient matrix and the test wavefront; consequently, the test wavefront can be calculated from the phase-difference wavefront. A numerical simulation is conducted to confirm the correctness of the proposed algorithm. Compared with previous wavefront reconstruction methods, the proposed algorithm is more accurate and stable.
Marek Ławreszuk
2014-01-01
The aim of this paper is to show the interpolation in the Epiclesis of Anaphora prayers in the Liturgies of St. John Chrysostom and St. Basil the Great. The analysis will cover the changes made in the second millennium, in particular the emergence of the troparion of the third hour service, and interpolation of the words “changing it by your Holy Spirit”. The text shows the genesis of adding additional words and explains what their effect was on the structure of the Anaphora prayers. As a res...
GA Based Rational cubic B-Spline Representation for Still Image Interpolation
Directory of Open Access Journals (Sweden)
Samreen Abbas
2016-12-01
Full Text Available In this paper, an image interpolation scheme is designed for 2D natural images. A local-support rational cubic spline with control parameters is used as the interpolatory function and optimized using a Genetic Algorithm (GA). The GA is applied to determine appropriate values of the control parameters used in the description of the rational cubic spline. Three state-of-the-art Image Quality Assessment (IQA) models, along with a traditional one, are employed to compare the proposed scheme with existing image interpolation schemes and to check the perceptual quality of the resulting images. The results show that the proposed scheme outperforms the existing ones.
Workload Balancing on Heterogeneous Systems: A Case Study of Sparse Grid Interpolation
Muraraşu, Alin
2012-01-01
Multi-core parallelism and accelerators are becoming common features of today's computer systems, as they allow for computational power without sacrificing energy efficiency. Due to this heterogeneity, tuning for each type of compute unit and adequate load balancing are essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression. Its performance-critical part is the interpolation routine used for decompression. Results show that our load balancing scheme allows for an efficient acceleration of interpolation on heterogeneous systems containing multi-core CPUs and GPUs.
Shivanian, Elyas; Reza Khodabandehlo, Hamid
2014-11-01
In this paper, the meshless local radial point interpolation (MLRPI) method is applied to the one-dimensional telegraph equation with purely integral conditions. MLRPI requires no background integration cells; instead, all integrations are carried out locally over small quadrature domains of regular shapes, such as lines in one dimension, circles or squares in two dimensions, and spheres or cubes in three dimensions. A technique is proposed to construct shape functions using the point interpolation method augmented with radial basis functions. The time derivatives are approximated by the finite difference method. Some numerical experiments for the mentioned problem are carried out as well.
Interpolation of meteorological data by kriging method for use in forestry
Directory of Open Access Journals (Sweden)
Ivetić Vladan
2010-01-01
Full Text Available Interpolation is a suitable method for computing the values of a spatial variable at locations where measurement is impossible, based on data obtained by measuring the same variable at predetermined locations (e.g. weather stations). In this paper, temperature and rainfall values at 39 weather stations in Serbia and neighbouring countries were interpolated for use in forestry research. The study results are presented in the form of an interactive map of Serbia, which allows fast and simple determination of the analyzed variable at any point within its territory, as illustrated by the example of 27 forest sites.
Tian, Ye; Erb, Kay Condie; Adluru, Ganesh; Likhite, Devavrat; Pedgaonkar, Apoorva; Blatt, Michael; Kamesh Iyer, Srikant; Roberts, John; DiBella, Edward
2017-08-01
To evaluate the use of three different pre-reconstruction interpolation methods to convert non-Cartesian k-space data to Cartesian samples such that iterative reconstructions can be performed more simply and more rapidly. Phantom as well as cardiac perfusion radial datasets were reconstructed by four different methods. Three of the methods used pre-reconstruction interpolation once followed by a fast Fourier transform (FFT) at each iteration. The methods were: bilinear interpolation of nearest-neighbor points (BINN), 3-point interpolation, and a multi-coil interpolator called GRAPPA Operator Gridding (GROG). The fourth method performed a full non-Uniform FFT (NUFFT) at each iteration. An iterative reconstruction with spatiotemporal total variation constraints was used with each method. Differences in the images were quantified and compared. The GROG multicoil interpolation, the 3-point interpolation, and the NUFFT-at-each-iteration approaches produced high quality images compared to BINN, with the GROG-derived images having the fewest streaks among the three preinterpolation approaches. However, all reconstruction methods produced approximately equal results when applied to perfusion quantitation tasks. Pre-reconstruction interpolation gave approximately an 83% reduction in reconstruction time. Image quality suffers little from using a pre-reconstruction interpolation approach compared to the more accurate NUFFT-based approach. GROG-based pre-reconstruction interpolation appears to offer the best compromise by using multicoil information to perform the interpolation to Cartesian sample points prior to image reconstruction. Speed gains depend on the implementation and relatively standard optimizations on a MATLAB platform result in preinterpolation speedups of ~ 6 compared to using NUFFT at every iteration, reducing the reconstruction time from around 42 min to 7 min. © 2017 American Association of Physicists in Medicine.
Alconis, Jenalyn; Eco, Rodrigo; Mahar Francisco Lagmay, Alfredo; Lester Saddi, Ivan; Mongaya, Candeze; Figueroa, Kathleen Gay
2014-05-01
In response to the slew of disasters that devastate the Philippines on a regular basis, the national government put in place a program to address this problem. The Nationwide Operational Assessment of Hazards, or Project NOAH, consolidates the diverse scientific research being done and pushes the knowledge gained to the forefront of disaster risk reduction and management. Current activities of the project include installing rain gauges and water level sensors, conducting LIDAR surveys of critical river basins, geo-hazard mapping, and running information education campaigns. Approximately 700 automated weather stations and rain gauges installed in strategic locations in the Philippines lay the groundwork for the rainfall visualization system in the Project NOAH web portal at http://noah.dost.gov.ph. The system uses near real-time data from these stations installed in critical river basins. The sensors record the amount of rainfall in a particular area as point data updated every 10 to 15 minutes. Each sensor sends its data to a central server either via the GSM network or via satellite data transfer for redundancy. The web portal displays the sensors as a placemark layer on a map. When a placemark is clicked, it displays a graph of the rainfall data for the past 24 hours. The rainfall data is harvested in batches determined by a one-hour time frame. The program uses linear interpolation to visually represent a near real-time rainfall map. The algorithm allows very fast processing, which is essential in near real-time systems. As more sensors are installed, precision improves. This visualized dataset enables users to quickly discern where heavy rainfall is concentrated. It has proven invaluable on numerous occasions, such as in August 2013 when intense to torrential rains brought about by the enhanced Southwest Monsoon caused massive flooding in Metro Manila. Coupled with observations from Doppler imagery and water level sensors along the
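The per-station resampling step can be sketched as simple linear interpolation over the one-hour harvesting window; the timestamps and readings below are made up for illustration, not taken from the NOAH system:

```python
import numpy as np

# Hypothetical rain-gauge readings (mm) arriving every 10-15 minutes,
# timestamped in minutes since the start of the hour.
t_obs = np.array([0.0, 12.0, 27.0, 41.0, 55.0])
rain = np.array([0.2, 1.4, 3.1, 2.0, 0.5])

# Resample onto a regular 5-minute grid by linear interpolation,
# as a near real-time map layer would require.
t_grid = np.arange(0.0, 60.0, 5.0)
rain_grid = np.interp(t_grid, t_obs, rain)
print(rain_grid)
```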
Superpixel-based depth map enhancement and hole filling for view interpolation
Yang, Xiaohui; Feng, Zhiquan; Xu, Tao; Jiang, Yan; Tang, Haokui
2017-07-01
In view interpolation, missing information often exists in the initial depth map; moreover, disocclusion regions usually occur along foreground object boundaries after 3D warping. Generally, the initial depth map and the warped depth map have a strong influence on the performance of view interpolation. However, most existing view interpolation algorithms only emphasize hole filling of the warped color image. In this paper, a superpixel-based method is proposed for initial depth map enhancement and warped depth map hole filling. Firstly, the color image is segmented using the simple linear iterative clustering (SLIC) algorithm, and the associated depth map is then segmented with the same labels. Then, the depth-missing pixels are recovered by jointly considering color and depth superpixel information. Additionally, holes in the disocclusion regions of the warped depth map can also be filled efficiently via superpixel-based segmentation. Experimental results show that the proposed method significantly improves the quality of the interpolated view in terms of both subjective and objective evaluations.
Feng, Shaodong; Wang, Mingjun; Wu, Jigang
2017-11-01
In a compact lensless in-line holographic microscope, the imaging resolution is generally limited by the sensor pixel size because of the short sample-to-sensor distance. To overcome this problem, we propose to use data interpolation based on iteration with only two intensity measurements to enhance the resolution in holographic reconstruction. We performed numerical simulations using the U.S. Air Force target as the sample and showed that data interpolation in the acquired in-line hologram can be used to enhance the reconstruction resolution. The imaging resolution and contrast can be further improved by combining data interpolation with iterative holographic reconstruction using only two hologram measurements, acquired by slightly changing the sample-to-sensor distance while recording the in-line holograms. The two in-line hologram intensity measurements were used as a priori constraints in the iteration process according to the Gerchberg-Saxton algorithm for phase retrieval. The iterative reconstruction results showed that iterating between the sample plane and the sensor planes can refine the interpolated data and thus further improve the resolution as well as the imaging contrast. Besides numerical simulation, we also experimentally demonstrated the enhancement of imaging resolution and contrast by imaging the U.S. Air Force target and a microscope slide of filamentous algae.
DEFF Research Database (Denmark)
Hurkmans, R.T.W.L.; Bamber, J.L.; Sørensen, Louise Sandberg
2012-01-01
Estimation of ice sheet mass balance from satellite altimetry requires interpolation of point-scale elevation change (dH/dt) data over the area of interest. The largest dH/dt values occur over narrow, fast-flowing outlet glaciers, where data coverage of current satellite altimetry is poorest...
On Rational Interpolation-Based List-Decoding and List-Decoding Binary Goppa Codes
DEFF Research Database (Denmark)
Beelen, Peter; Høholdt, Tom; Nielsen, Johan Sebastian Rosenkilde
2013-01-01
We derive the Wu list-decoding algorithm for generalized Reed–Solomon (GRS) codes by using Gröbner bases over modules and the Euclidean algorithm as the initial algorithm instead of the Berlekamp–Massey algorithm. We present a novel method for constructing the interpolation polynomial fast. We give...
Improving the visualization of electron-microscopy data through optical flow interpolation
Carata, Lucian
2013-01-01
Technical developments in neurobiology have reached a point where the acquisition of high resolution images representing individual neurons and synapses becomes possible. For this, the brain tissue samples are sliced using a diamond knife and imaged with electron-microscopy (EM). However, the technique achieves a low resolution in the cutting direction, due to limitations of the mechanical process, making a direct visualization of a dataset difficult. We aim to increase the depth resolution of the volume by adding new image slices interpolated from the existing ones, without requiring modifications to the EM image-capturing method. As classical interpolation methods do not provide satisfactory results on this type of data, the current paper proposes re-framing the problem in terms of motion volumes, considering the depth axis as a temporal axis. An optical flow method is adapted to estimate the motion vectors of pixels in the EM images, and this information is used to compute and insert multiple new images at certain depths in the volume. We evaluate the visualization results in comparison with interpolation methods currently used on EM data, transforming the highly anisotropic original dataset into a dataset with a larger depth resolution. The interpolation based on optical flow better reveals neurite structures with realistic undistorted shapes, and makes it easier to map neuronal connections. © 2011 ACM.
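The insertion of an intermediate slice from an estimated flow field can be sketched as a half-step backward warp; the flow itself would come from an optical-flow estimator, and `midway_slice` is an illustrative name, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def midway_slice(img0, flow):
    """Synthesize a slice halfway between img0 and the next EM slice by
    backward-warping img0 along half of the estimated flow.
    flow[0] holds row displacements, flow[1] column displacements."""
    rows, cols = np.indices(img0.shape, dtype=float)
    coords = np.array([rows - 0.5 * flow[0], cols - 0.5 * flow[1]])
    # Bilinear sampling (order=1) at the displaced coordinates.
    return map_coordinates(img0, coords, order=1, mode='nearest')

# With zero flow, the midway slice is identical to the input slice.
img = np.arange(16, dtype=float).reshape(4, 4)
print(midway_slice(img, np.zeros((2, 4, 4))))
```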
Applications of operational calculus: trigonometric interpolating equation for the eight-point cube
Energy Technology Data Exchange (ETDEWEB)
Silver, Gary L [Los Alamos National Laboratory
2009-01-01
A general method for obtaining a trigonometric-type interpolating equation for the eight-point cubical array is illustrated. It can often be used to reproduce a ninth datum at an arbitrary point near the center of the array by adjusting a variable exponent. The new method complements operational polynomial and exponential methods for the same design.
Effects of interpolation and data resolution on methane emission estimates from rice paddies
Bodegom, van P.M.; Verburg, P.H.; Stein, A.; Adiningsih, S.; Denier van der Gon, H.A.C.
2002-01-01
Rice paddies are an important source of the greenhouse gas methane (CH4). Global methane emission estimates are highly uncertain and do not account for effects of interpolation or data resolution errors. This paper determines such scaling effects for the influence of soil properties on calculated
Vegter, H.; van den Boogaard, Antonius H.
2006-01-01
An anisotropic plane stress yield function based on interpolation by second order Bézier curves is proposed. The parameters for the model are readily derived by four mechanical tests: a uniaxial, an equi-biaxial and a plane strain tensile test and a shear test. In case of planar anisotropy, this set
Alpay, Daniel; Dijksma, Aad; Langer, Heinz; Wanjala, Gerald
2006-01-01
We define and solve a boundary interpolation problem for generalized Schur functions s(z) on the open unit disk D which have preassigned asymptotics when z from D tends nontangentially to a boundary point z1 ∈ T. The solutions are characterized via a fractional linear parametrization formula. We
INTAMAP: The design and implementation of an interoperable automated interpolation web service
Pebesma, Edzer; Cornford, Dan; Dubois, Gregoire; Heuvelink, Gerard B. M.; Hristopulos, Dionisis; Pilz, Jürgen; Stöhlker, Ulrich; Morin, Gary; Skøien, Jon O.
2011-03-01
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open source solution. The system couples an open-source Web Processing Service (developed by 52°North), accepting data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard) with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
Walvoort, D.J.J.; Knotters, M.; Hoogland, T.; Wijnen, van H.; Dijk, van T.A.; Schöll, van L.; Groenenberg, J.E.
2013-01-01
This report documents a decision support system (DSS) that has been developed to assist environmental researchers in selecting interpolation, aggregation, and disaggregation methods. The DSS has been implemented as a web-application. This facilitates updating and makes the DSS generally accessible.
A multiparametric method of interpolation using WOA05 applied to anthropogenic CO2 in the Atlantic
Directory of Open Access Journals (Sweden)
Anton Velo
2010-11-01
Full Text Available This paper describes the development of a multiparametric interpolation method and its application to anthropogenic carbon (CANT) in the Atlantic, calculated by two estimation methods using the CARINA database. The proposed multiparametric interpolation uses potential temperature (θ), salinity, and conservative ‘NO’ and ‘PO’ as conservative parameters for the gridding, and the World Ocean Atlas (WOA05) as a reference for the grid structure and the indicated parameters. We thus complement CARINA data with the WOA05 database in an attempt to obtain better gridded values by keeping the physical-biogeochemical sea structures. The algorithms developed here also have the prerequisite of being simple and easy to implement. To test the improvements achieved, a comparison between the proposed multiparametric method and a pure spatial interpolation for an independent parameter (O2) was made. As an application case study, CANT estimations by two methods (φCT° and TrOCA) were performed on the CARINA database and then gridded by both interpolation methods (spatial and multiparametric). Finally, a calculation of CANT inventories for the whole Atlantic Ocean was performed with the gridded values, using ETOPO2v2 as the sea bottom. The inventories were between 55.1 and 55.2 Pg-C with the φCT° method and between 57.6 and 57.9 Pg-C with the TrOCA method.
New Method for Mesh Moving Based on Radial Basis Function Interpolation
De Boer, A.; Van der Schoot, M.S.; Bijl, H.
2006-01-01
A new point-by-point mesh movement algorithm is developed for the deformation of unstructured grids. The method is based on using radial basis function, RBFs, to interpolate the displacements of the boundary nodes to the whole flow mesh. A small system of equations has to be solved, only involving
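The point-by-point RBF mesh-movement idea can be sketched as solving a small dense system for the boundary nodes and then evaluating the interpolant at interior nodes. The cubic kernel r³ below is a common choice but an assumption here, not necessarily the kernel used in the paper:

```python
import numpy as np

def rbf_weights(centers, values, phi=lambda r: r ** 3):
    """Solve the small dense system Phi w = values over the boundary nodes.
    phi is the radial kernel evaluated on pairwise distances."""
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    return np.linalg.solve(phi(r), values)

def rbf_eval(points, centers, w, phi=lambda r: r ** 3):
    """Evaluate the RBF interpolant at arbitrary (interior) mesh points."""
    r = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return phi(r) @ w

# Boundary nodes of a tiny 2D mesh and their prescribed x-displacements.
bnd = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dx = np.array([0.0, 0.1, 0.0, 0.1])
w = rbf_weights(bnd, dx)
interior = np.array([[0.5, 0.5]])
print(rbf_eval(interior, bnd, w))
```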
Shen, Y.; Tauritz, J.L.
2005-01-01
Traditionally, Taylor series models are only used under small signal or mildly nonlinear regimes. In this paper, a new behavioral model for microwave power amplifiers (PAs) based on first order Taylor expansion of multivariable nonlinearities and interpolation is proposed. The model is tailored for
Optimal Alternative to the Akima's Method of Smooth Interpolation Applied in Diabetology
Directory of Open Access Journals (Sweden)
Emanuel Paul
2006-12-01
Full Text Available A new method of cubic piecewise smooth interpolation, applied to experimental data obtained from glycemic profiles of diabetics, is presented. The method is used to create software useful in clinical diabetology. It gives an alternative to Akima's procedure for computing the derivatives at the knots [Akima, J. Assoc. Comput. Mach., 1970] and has an optimality property.
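For comparison, Akima's original method (the baseline the paper offers an alternative to) is available off the shelf in SciPy; a minimal sketch on a hypothetical glycemic profile, with sample values that are illustrative only:

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator

# Hypothetical glycemic-profile samples: time (hours) vs glucose (mg/dL).
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 9.0, 12.0])
g = np.array([95.0, 140.0, 180.0, 120.0, 100.0, 110.0, 98.0])

# Akima's 1970 procedure for the derivatives at the knots.
akima = Akima1DInterpolator(t, g)
print(akima(3.0))  # interpolated glucose at t = 3 h
```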
An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA
Directory of Open Access Journals (Sweden)
Jiye HUANG
2014-05-01
Full Text Available This paper presents an improved minimum-error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm strikes a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptic and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which make it highly suited for real-time applications.
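The flavour of a by-point-comparison interpolator (the textbook baseline the paper improves on) can be sketched for a circular arc; this is an illustrative sketch of the classical technique, not the paper's FPGA algorithm:

```python
def step_circle(x, y, r2):
    """One step of a by-point-comparison interpolator tracing the first
    quadrant of x^2 + y^2 = r2 clockwise from (0, r): step +x while the
    current point is inside (or on) the circle, step -y while outside."""
    if x * x + y * y <= r2:
        return x + 1, y
    return x, y - 1

# Trace a radius-5 arc in unit steps from (0, 5) to (5, 0).
x, y, path = 0, 5, []
while y > 0:
    x, y = step_circle(x, y, 25)
    path.append((x, y))
print(path)
```

Each emitted point stays within one step of the true circle, which is why the per-step error bound of such interpolators is stated in fractions of the step size.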
Energy Technology Data Exchange (ETDEWEB)
Davenport, C. M.
1977-02-01
The mathematical basis for an ultraprecise digital differential analyzer circuit for use as a parabolic interpolator on numerically controlled machines has been established, and scaling and other error-reduction techniques have been developed. An exact computer model is included, along with typical results showing tracking to within an accuracy of one part per million.
Scattering Amplitudes Interpolating Between Instant Form and Front Form of Relativistic Dynamics
Ji, C.R.; Bakker, B.L.G.; Li, Z.
2014-01-01
Among the three forms of relativistic Hamiltonian dynamics proposed by Dirac in 1949, the front form has the largest number of kinematic generators. This distinction provides useful consequences in the analysis of physical observables in hadron physics. Using the method of interpolation between the
Design of interpolation functions for subpixel-accuracy stereo-vision systems.
Haller, Istvan; Nedevschi, Sergiu
2012-02-01
Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation-function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as closely as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE.
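The traditional fixed-shape baseline that such adaptive methodologies replace is the parabola fit over three matching costs around the integer-disparity minimum:

```python
def parabolic_subpixel(c_prev, c_min, c_next):
    """Classic parabola-fit subpixel offset, in [-0.5, 0.5], from the
    matching costs at disparities d-1, d and d+1, where d is the integer
    minimum. Positive offsets shift toward the cheaper neighbour."""
    denom = c_prev - 2.0 * c_min + c_next
    if denom == 0:
        return 0.0
    return 0.5 * (c_prev - c_next) / denom

print(parabolic_subpixel(4.0, 1.0, 2.0))  # -> 0.25
```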
Directory of Open Access Journals (Sweden)
COJOCARU ŞTEFANA
2014-03-01
Full Text Available Spatial interpolation, in the context of spatial analysis, can be defined as the derivation of new data from already known information, a technique frequently used to predict and quantify the spatial variation of a certain property or parameter. In this study we compared the performance of the Inverse Distance Weighted (IDW), Ordinary Kriging and Natural Neighbor techniques, applied to the spatial interpolation of precipitation parameters (pH, electrical conductivity and total dissolved solids). These techniques are often used when the area of interest is relatively small and the sampled locations are regularly spaced. The methods were tested on data collected in Iasi city (Romania) between March and May 2013. Spatial modeling was performed on a small dataset, consisting of 7 sample locations and 13 different known values of each analyzed parameter. The precision of the techniques used is directly dependent on sample density as well as data variation; greater fluctuations in values between locations cause a decrease in the accuracy of the methods used. To validate the results and reveal the best method of interpolating rainfall characteristics, a leave-one-out cross-validation approach was used. Comparing the residuals between the known and estimated values of pH, electrical conductivity and total dissolved solids, it was revealed that Natural Neighbor generates the smallest residuals for pH and electrical conductivity, whereas IDW presents the smallest error in interpolating total dissolved solids (the parameter with the highest fluctuations in value).
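The IDW prediction and the leave-one-out validation used to rank the methods can be sketched in a few lines (a generic sketch of the standard technique, not the study's exact implementation):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighted prediction at a single location xy_new."""
    d = np.linalg.norm(xy_obs - xy_new, axis=1)
    if np.any(d == 0):          # exact hit on a sample location
        return z_obs[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z_obs) / np.sum(w)

def loocv_rmse(xy, z, power=2.0):
    """Leave-one-out cross-validation RMSE for the IDW predictor."""
    resid = [z[i] - idw(np.delete(xy, i, 0), np.delete(z, i), xy[i], power)
             for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(resid))))

# Hypothetical sample locations and pH readings, for illustration only.
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
ph = np.array([6.8, 7.1, 6.9, 7.0, 7.3])
print(loocv_rmse(xy, ph, power=2.0))
```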
An Energy Conservative Ray-Tracing Method With a Time Interpolation of the Force Field
Energy Technology Data Exchange (ETDEWEB)
Yao, Jin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2015-02-10
A new algorithm that constructs a continuous force field interpolated in time is proposed for resolving existing difficulties in numerical methods for ray tracing. The new method has improved accuracy while retaining the same degree of algebraic complexity as Kaiser's method.
Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei
2017-12-01
Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods. However, few of them apply these methods in the field of LIBS technology, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness characteristic. Experiments on background correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) over polynomial fitting, Lorentz fitting and the model-free method after background correction. These background correction methods all acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods exhibit improved quantitative results for Cu compared with those acquired before background correction (the linear correlation coefficient value before background correction is 0.9776; moreover, the linear correlation coefficient values after background correction using spline interpolation, polynomial fitting, Lorentz
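One minimal way to realize spline-based background estimation is to anchor a cubic spline at the local minima of spectral segments and subtract it; this is a sketch of the general idea, with a made-up spectrum, not the paper's exact procedure:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def spline_background(wavelength, intensity, n_seg=10):
    """Estimate a smooth continuous background by interpolating a cubic
    spline through the local minimum of each of n_seg spectral segments,
    then subtract it from the spectrum."""
    segments = np.array_split(np.arange(len(intensity)), n_seg)
    knots_x = np.array([wavelength[s[np.argmin(intensity[s])]] for s in segments])
    knots_y = np.array([intensity[s].min() for s in segments])
    background = CubicSpline(knots_x, knots_y)(wavelength)
    return intensity - background, background

# Synthetic spectrum: flat baseline of 5 plus one narrow emission line.
wl = np.linspace(200.0, 800.0, 300)
spectrum = 5.0 + 50.0 * np.exp(-((wl - 400.0) / 2.0) ** 2)
corrected, bg = spline_background(wl, spectrum)
```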
Directory of Open Access Journals (Sweden)
Humair Ahmed
2017-05-01
Full Text Available Soil pH is considered a core indicator of nutrient bioavailability. The prevailing alkaline pH due to calcareousness in Pakistan is considered one of the limiting factors for nutrient availability to plants. Exploring the spatial variability of soil variables serves as a scientific basis for the generation of soil management strategies. Selection of the best interpolation method to predict soil properties at unsampled locations is an important issue in site-specific investigations. This article evaluates inverse distance weighting, global and local polynomial interpolation, radial basis functions and kriging to determine the optimal interpolation method for mapping soil pH. Performance of the interpolation methods was analyzed using soil test (pH) data from 180 surface soil samples collected from 30 representative orchards grown in tehsil Murree. For inverse distance weighting, powers of 1, 2 and 3 were used, and the number of neighbors for all methods ranged from 15 to 25. Our study suggests that increasing the power of inverse distance weighting resulted in an increase in prediction accuracy. The local polynomial interpolation method was more suitable than global polynomial interpolation. Radial basis functions with the regularized spline and the spline with tension gave equivalent prediction accuracy. Higher errors (mean and mean absolute errors) were observed for ordinary kriging than for the other interpolation methods. Digital maps generated by the higher powers of inverse distance weighting, local polynomial interpolation, and radial basis functions were of higher accuracy.
Directory of Open Access Journals (Sweden)
A. A. Pichkhadze
2015-01-01
Full Text Available This article deals with the Slavonic sources of three interpolations in the Old Russian version of Flavius Iosephus’ The Jewish War. The text of the first interpolation (about the Adoration of the Magi) is preserved in the interpolated redaction of The Tale of Aphroditianus in a more authentic form; it may be a modified version of the apocryphon about the Magi (known, for instance, from the Velikiye Chet’yi-Minei), which is close in content but not in wording to the interpolation of The Jewish War and The Tale of Aphroditianus. In the second interpolation the apostles are named kaližnici ‘shoemakers’, in accordance with some Slavonic sources in which St. Paul is referred to as usmošvec ‘currier; shoemaker’. The third interpolation mentions piyavicy solomon’skiya ‘the leeches of Solomon’ (an allusion to Proverbs XXX 15–16); this utterance derives presumably from the translation of the 13 Orationes of St. Gregory of Nazianzus made in the 10th century in Bulgaria. In the light of these facts, together with the previously identified borrowings from different Slavonic sources in other interpolations within the Old Russian version of The Jewish War, the use of Slavonic sources may be considered a common feature of the interpolations in the Old Russian version.
Directory of Open Access Journals (Sweden)
Lixin Li
2014-09-01
Full Text Available Epidemiological studies have identified associations between mortality and changes in the concentration of particulate matter. These studies have highlighted public concerns about the health effects of particulate air pollution. Modeling fine particulate matter (PM2.5) exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year 2009, at both the census block group level and the county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor, under the assumption that the spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during the implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named the k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate
DESIGN OF BEZIER SPLINE SURFACES OVER BIVARIATE NETWORKS OF CURVES
Directory of Open Access Journals (Sweden)
A. P. Pobegailo
2014-01-01
Full Text Available The paper presents an approach to constructing interpolating spline surfaces over a bivariate network of curves with rectangular patches. Patches of the interpolating spline surface are constructed by blending their boundaries with special polynomials. In order to ensure the necessary parametric continuity of the designed surface, polynomials of the corresponding degree must be used. The constructed interpolating spline surfaces have local shape control. If the surface frame is determined by means of Bezier curves, then the patches of the interpolating spline surface are Bezier surfaces. The presented approach to surface modeling can be used in such applications as computer graphics and geometric design.
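Blending patch boundaries can be illustrated with the simplest case, a bilinearly blended Coons patch. The paper uses special higher-degree blending polynomials to achieve the required parametric continuity, which this linear sketch does not attempt.

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch interpolating four boundary curves:
    c0(u)=S(u,0), c1(u)=S(u,1), d0(v)=S(0,v), d1(v)=S(1,v).
    Linear blending is the simplest case of the boundary-blending idea."""
    P00, P01 = np.asarray(d0(0.0)), np.asarray(d0(1.0))   # corners on u=0
    P10, P11 = np.asarray(d1(0.0)), np.asarray(d1(1.0))   # corners on u=1
    ruled_u = (1 - v) * np.asarray(c0(u)) + v * np.asarray(c1(u))
    ruled_v = (1 - u) * np.asarray(d0(v)) + u * np.asarray(d1(v))
    bilin = ((1 - u) * (1 - v) * P00 + (1 - u) * v * P01
             + u * (1 - v) * P10 + u * v * P11)
    # Boolean sum: both ruled surfaces minus the doubly counted corners.
    return ruled_u + ruled_v - bilin
```

By construction the patch reproduces its boundary curves exactly, which is the interpolation property the abstract describes.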
AN INTERPOLATION METHOD FOR DETERMINING THE FREQUENCIES OF PARAMETERIZED LARGE-SCALE STRUCTURES
Directory of Open Access Journals (Sweden)
Salvatore Nasisi
2015-12-01
Full Text Available Parametric Model Order Reduction (pMOR) is an emerging category of models developed with the aim of describing reduced first- and second-order dynamical systems. The use of a pROM turns out to be useful in a variety of applications, spanning from the analysis of Micro-Electro-Mechanical Systems (MEMS) to the optimization of complex mechanical systems, because it allows predicting the dynamical behavior at any values of the quantities of interest within the design space, e.g. material properties, geometric features or loading conditions. The process underlying the construction of a pROM using an SVD-based method [18] consists of three basic phases: (a) construction of several local ROMs (Reduced Order Models); (b) projection of the state-space vector onto a common subspace spanned by several transformation matrices derived in the first step; (c) use of an interpolation method capable of capturing, for one or more parameters, the values of the quantity of interest. One of the major difficulties encountered in this process has been identified at the level of the interpolation method and can be encapsulated in the following contradiction: if the number of detailed finite element analyses is high, then an interpolation method can better describe the system for a given choice of a parameter, but the time of computation is higher. In this paper a method is proposed for removing the above contradiction by introducing a new interpolation method (RSDM). This method makes it possible to restore, and make available to the interpolation tool, certain natural components belonging to the matrices of the full FE model that are related on one side to the process of reduction and on the other side to the characteristics of a solid in FE theory. This approach shows higher accuracy than methods used for the assessment of the system's eigenbehavior. To confirm the usefulness of the RSDM, a Hexapod will be analyzed.
An Immersed Boundary method with divergence-free velocity interpolation and force spreading
Bao, Yuanxun; Donev, Aleksandar; Griffith, Boyce E.; McQueen, David M.; Peskin, Charles S.
2017-10-01
The Immersed Boundary (IB) method is a mathematical framework for constructing robust numerical methods to study fluid-structure interaction in problems involving an elastic structure immersed in a viscous fluid. The IB formulation uses an Eulerian representation of the fluid and a Lagrangian representation of the structure. The Lagrangian and Eulerian frames are coupled by integral transforms with delta function kernels. The discretized IB equations use approximations to these transforms with regularized delta function kernels to interpolate the fluid velocity to the structure, and to spread structural forces to the fluid. It is well-known that the conventional IB method can suffer from poor volume conservation since the interpolated Lagrangian velocity field is not generally divergence-free, and so this can cause spurious volume changes. In practice, the lack of volume conservation is especially pronounced for cases where there are large pressure differences across thin structural boundaries. The aim of this paper is to greatly reduce the volume error of the IB method by introducing velocity-interpolation and force-spreading schemes with the properties that the interpolated velocity field in which the structure moves is at least C1 and satisfies a continuous divergence-free condition, and that the force-spreading operator is the adjoint of the velocity-interpolation operator. We confirm through numerical experiments in two and three spatial dimensions that this new IB method is able to achieve substantial improvement in volume conservation compared to other existing IB methods, at the expense of a modest increase in the computational cost. Further, the new method provides smoother Lagrangian forces (tractions) than traditional IB methods. The method presented here is restricted to periodic computational domains. Its generalization to non-periodic domains is important future work.
The Interpolation Method for Estimating the Above-Ground Biomass Using Terrestrial-Based Inventory
Directory of Open Access Journals (Sweden)
I Nengah Surati Jaya
2014-09-01
Full Text Available This paper examined several methods for interpolating biomass on logged-over dry land forest using terrestrial-based forest inventory in Labanan, East Kalimantan and Lamandau, Kotawaringin Barat, Central Kalimantan. The plot distances examined were 1,000−1,050 m for Labanan and 1,000−899 m for Lamandau. The main objective of this study was to obtain the interpolation method giving the most accurate prediction of the spatial distribution of forest biomass for dry land forest. Two main interpolation methods were examined: (1) a deterministic approach using the IDW method and (2) a geostatistical approach using Kriging with spherical, circular, linear, exponential, and Gaussian models. The study results at both sites consistently showed that the IDW method was better than the Kriging method for estimating the spatial distribution of biomass. The validation results using a chi-square test showed that the IDW interpolation provided accurate biomass estimation. Using the percentage mean deviation value (MD(%)), it was also recognized that IDW with a power parameter (p) of 2 provided relatively low values, i.e., only 15% for Labanan, East Kalimantan Province and 17% for Lamandau, Kotawaringin Barat, Central Kalimantan Province. In general, the IDW interpolation method provided better results than Kriging, the Kriging method giving MD(%) values of about 27% and 21% for the Lamandau and Labanan sites, respectively. Keywords: deterministic, geostatistics, IDW, Kriging, above-ground biomass
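A minimal sketch of the two quantities compared in the study: IDW with a power parameter, and the percentage mean deviation. The MD(%) formula shown is a common definition assumed here, not necessarily the paper's exact one.

```python
import numpy as np

def idw(xy_obs, v_obs, xy_query, p=2):
    """Inverse distance weighting with power parameter p
    (p = 2 performed best in the study)."""
    xy_obs = np.asarray(xy_obs, dtype=float)
    v_obs = np.asarray(v_obs, dtype=float)
    d = np.linalg.norm(xy_obs - np.asarray(xy_query, dtype=float), axis=1)
    if np.any(d == 0):
        return float(v_obs[np.argmin(d)])
    w = d ** -p
    return float(w @ v_obs / w.sum())

def mean_deviation_pct(observed, predicted):
    """Percentage mean deviation, MD(%): mean absolute relative error
    times 100 (assumed definition)."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(predicted - observed) / observed)
```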
Interpolation techniques to reduce error in measurement of toe clearance during obstacle avoidance.
Heijnen, Michel J H; Muir, Brittney C; Rietdyk, Shirley
2012-01-03
Foot and toe clearance (TC) are used regularly to describe locomotor control for both clinical and basic research. However, accuracy of TC during obstacle crossing can be compromised by typical sample frequencies, which do not capture the frame when the foot is over the obstacle due to high limb velocities. The purpose of this study was to decrease the error of TC measures by increasing the spatial resolution of the toe trajectory with interpolation. Five young subjects stepped over an obstacle in the middle of an 8 m walkway. Position data were captured at 600 Hz as a gold standard signal (GS-600-Hz). The GS-600-Hz signal was downsampled to 60 Hz (DS-60-Hz). The DS-60-Hz was then interpolated by either upsampling or an algorithm. Error was calculated as the absolute difference in TC between GS-600-Hz and each of the remaining signals, for both the leading limb and the trailing limb. All interpolation methods reduced the TC error to a similar extent. Interpolation reduced the median error of trail TC from 5.4 to 1.1 mm; the maximum error was reduced from 23.4 to 4.2 mm (16.6-3.8%). The median lead TC error improved from 1.6 to 0.5 mm, and the maximum error improved from 9.1 to 1.8 mm (5.3-0.9%). Therefore, interpolating a 60 Hz signal is a valid technique to decrease the error of TC during obstacle crossing. Published by Elsevier Ltd.
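A hedged sketch of the upsampling idea: interpolate the coarsely sampled toe trajectory onto a finer grid before taking the minimum over the obstacle. SciPy's `CubicSpline` stands in here for the interpolation methods evaluated in the study.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def upsample_min(t_coarse, y_coarse, n_fine=101):
    """Estimate the trajectory minimum (e.g. toe clearance) by cubic-spline
    upsampling of a coarsely sampled signal onto a finer time grid."""
    cs = CubicSpline(t_coarse, y_coarse)
    t_fine = np.linspace(t_coarse[0], t_coarse[-1], n_fine)
    return float(cs(t_fine).min())
```

With a smooth trajectory whose true minimum falls between coarse samples, the upsampled minimum is closer to the true value than the minimum of the raw samples, which is the error reduction the study reports.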
Energy Technology Data Exchange (ETDEWEB)
Palau, J.M.; Cathalau, S.; Hudelot, J.P.; Barran, F.; Bellanger, V., E-mail: jean-marc.palau@cea.fr [CEA, DEN, Departement d' Etudes des Reacteurs, Service de Physique des Reacteurs et du Cycle Laboratoire de Projets Nucleaires, Cadarache, Saint-Paul-lez-Durance (France); Magnaud, C.; Moreau, F., E-mail: Christine.magnaud@cea.fr [Departement de Modelisation des Systemes et des Structures Service d' Etudes de Reacteurs et Mathematiques Appliquees, Saclay (France)
2011-07-01
Burnable poisons are extensively used by Light Water Reactor designers in order to preserve the fuel reactivity potential and increase the cycle length (without increasing the uranium enrichment). In the industrial two-step (2D assembly transport / 3D core diffusion) calculation schemes these heterogeneities lead to strong flux and cross-section perturbations that have to be taken into account in the final 3D burn-up calculations. This paper presents the application of an enhanced cross-section interpolation model (implemented in the French CRONOS2 code) to LWR (highly poisoned) depleted core calculations. The principle is to use the absorber (or actinide) concentrations as the new interpolation parameters instead of the standard local burnup/fluence parameters. By comparing the standard (burnup/fluence) and new (concentration) interpolation models, and using the lattice transport code APOLLO2 as a numerical reference, it is shown that the reactivity and local reaction rate prediction of a 2x2 LWR assembly configuration (slab geometry) is significantly improved with the concentration interpolation model. Gains on reactivity and local power predictions (resp. more than 1000 pcm and a 20% discrepancy reduction compared to the reference APOLLO2 scheme) are obtained by using this model. In particular, when epithermal absorbers are inserted close to thermal poisons, the 'shadowing' ('screening') spectral effects occurring during control operations are modeled much more correctly by concentration parameters. Through this outstanding example it is highlighted that attention has to be paid to the choice of cross-section interpolation parameters (burnup 'indicators') in core calculations with few energy groups and variable geometries all along the irradiation cycle. Actually, this new model could be advantageously applied to steady-state and transient LWR heterogeneous core computational analysis dealing with strong spectral-history variations under
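The concentration-interpolation model amounts to tabulating cross-sections against the local absorber concentration and interpolating in that parameter instead of burnup. The grid and values below are purely hypothetical illustrative numbers.

```python
import numpy as np

# Hypothetical table: macroscopic absorption cross-section tabulated
# against burnable-poison concentration rather than against burnup,
# as in the concentration-interpolation model (values are illustrative).
conc_grid = np.array([0.0, 0.5, 1.0, 2.0])       # 10^20 at/cm^3 (assumed)
sigma_grid = np.array([0.30, 0.42, 0.50, 0.58])  # cm^-1 (assumed)

def sigma_of_concentration(c):
    """Linear interpolation of the cross-section in the local absorber
    concentration, the new interpolation parameter described above."""
    return float(np.interp(c, conc_grid, sigma_grid))
```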
Directory of Open Access Journals (Sweden)
Mateusz Szcześniak
2015-02-01
Full Text Available Ground-based precipitation data are still the dominant input type for hydrological models. Spatial variability in precipitation can be represented by spatially interpolating gauge data using various techniques. In this study, the effect of daily precipitation interpolation methods on discharge simulations using the semi-distributed SWAT (Soil and Water Assessment Tool) model over a 30-year period is examined. The study was carried out in 11 meso-scale (119–3935 km2) sub-catchments lying in the Sulejów reservoir catchment in central Poland. Four methods were tested: the default SWAT method (Def) based on the Nearest Neighbour technique, Thiessen Polygons (TP), Inverse Distance Weighted (IDW) and Ordinary Kriging (OK). The evaluation of methods was performed using the semi-automated calibration program SUFI-2 (Sequential Uncertainty Fitting Procedure Version 2) with two objective functions: Nash-Sutcliffe Efficiency (NSE) and the adjusted R2 coefficient (bR2). The results show that: (1) the most complex OK method outperformed the other methods in terms of NSE; and (2) OK, IDW, and TP outperformed Def in terms of bR2. The median difference in daily/monthly NSE between OK and Def/TP/IDW calculated across all catchments ranged between 0.05 and 0.15, while the median difference between TP/IDW/OK and Def ranged between 0.05 and 0.07. The differences between pairs of interpolation methods were, however, spatially variable, and part of this variability was attributed to catchment properties: catchments characterised by low station density and a low coefficient of variation of daily flows experienced more pronounced improvement from using interpolation methods. Methods providing higher precipitation estimates often resulted in better model performance. The implication from this study is that appropriate consideration of spatial precipitation variability (often neglected by model users), which can be achieved using relatively simple interpolation methods, can
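Two of the building blocks named above, the NSE objective function and Thiessen-polygon (nearest-gauge) precipitation assignment, can be sketched as:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of the sum of squared
    errors to the variance of the observations (1 is a perfect fit)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def thiessen_precip(gauge_xy, gauge_p, cell_xy):
    """Thiessen-polygon precipitation for one grid cell: the value of the
    nearest gauge (equivalent to nearest-neighbour interpolation)."""
    gauge_xy = np.asarray(gauge_xy, dtype=float)
    d = np.linalg.norm(gauge_xy - np.asarray(cell_xy, dtype=float), axis=1)
    return float(np.asarray(gauge_p, dtype=float)[np.argmin(d)])
```

Predicting the observed mean everywhere yields NSE = 0, the usual benchmark against which simulated discharge is judged.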
Directory of Open Access Journals (Sweden)
Ly, S.
2013-01-01
Full Text Available Watershed management and hydrological modeling require precipitation data, often measured using rain gauges or weather stations. Hydrological models often require a preliminary spatial interpolation as part of the modeling process. The success of spatial interpolation varies according to the type of model chosen, its mode of geographical management and the resolution used. The quality of a result is determined by the quality of the continuous spatial rainfall field, which ensues from the interpolation method used. The objective of this article is to review the existing methods for interpolation of rainfall data that are usually required in hydrological modeling. We review the basis for the application of certain common methods and geostatistical approaches used in interpolation of rainfall. Previous studies have highlighted the need for new research to investigate ways of improving the quality of rainfall data and, ultimately, the quality of hydrological modeling.
Subsidence monitoring network: an Italian example aimed at a sustainable hydrocarbon E&P activity
Dacome, M. C.; Miandro, R.; Vettorel, M.; Roncari, G.
2015-11-01
According to Italian law, in order to start up any new hydrocarbon exploitation activity an Environmental Impact Assessment study has to be presented, including a monitoring plan designed to foresee, measure and analyze in real time any possible impact of the project on coastal areas and on nearby inland areas. The occurrence of subsidence, which could be partly related to hydrocarbon production both on-shore and off-shore, can generate great concern in those areas where it may have impacts on the local environment. ENI, following the recommendations of the international scientific community on the matter, has since the early 1990s implemented a cutting-edge monitoring network with the aim of preventing, mitigating and controlling geodynamic phenomena generated in the activity areas, with particular attention to the conservation and protection of environmental and territorial equilibrium, in line with what is known as "sustainable development". The monitoring surveys currently implemented by ENI can be divided into: - Shallow monitoring: spirit levelling surveys, continuous GPS surveys at permanent stations, SAR surveys, assestimeter subsurface compaction monitoring, groundwater level monitoring, LiDAR surveys, bathymetric surveys. - Deep monitoring: reservoir deep compaction through radioactive markers, reservoir static (bottom hole) pressure monitoring. All the information gathered through the monitoring network makes it possible: 1. to verify whether the produced subsidence is evolving in accordance with the simulated forecast; 2. to provide data to revise and adjust the compaction prediction models; 3. to put in place remedial actions if the impact exceeds the threshold magnitude originally agreed among the involved parties. ENI's monitoring plan to measure and monitor the subsidence process, during field production and also after field closure, is therefore intended to support sustainable field development and an acceptable exploitation
Mallet, Florian; Marc, Vincent; Douvinet, Johnny; Rossello, Philippe; Le Bouteiller, Caroline; Malet, Jean-Philippe
2016-04-01
Soil moisture is a key parameter that controls runoff processes at the watershed scale. It is characterized by high spatial and temporal variability, controlled by site properties such as soil texture, topography, vegetation cover and climate. Several recent studies have shown that change in water storage is a key variable for understanding the distribution of water residence times and the shape of flood hydrographs (McDonnell and Beven, 2014; Davies and Beven, 2015). Knowledge of high-frequency soil moisture variation across scales is a prerequisite for better understanding the areal distribution of runoff generation. The present study has been carried out in the torrential Draix-Bléone experimental catchments, where water storage processes are expected to occur mainly in the first meter of soil. The 0.86 km2 Laval marly torrential watershed has a peculiar hydrological behavior during flood events, with specific discharges among the highest in the world. To better understand the Laval's internal behavior and to identify explanatory parameters of runoff generation, additional field equipment has been set up in sub-basins with various land use and morphological characteristics. From fall 2015 onwards this new instrumentation has helped to supplement the routine measurements (rainfall rate, streamflow) and to develop a network of high-frequency soil water content sensors (moisture probes, mini lysimeters). Data collected since early May and complementary measurement campaigns (itinerant soil moisture measurements, geophysical measurements) now make it possible to propose a soil water content mapping procedure. We use the LISDQS spatial extrapolation model based on a local interpolation method (Joly et al., 2008). The interpolation is carried out from different geographical variables derived from a high-resolution DEM (1 m LIDAR) and a land cover image. Unlike conventional interpolation procedures, this method takes into account local forcing parameters such as slope, aspect
Faye, Emile; Herrera, Mario; Bellomo, Lucio; Silvain, Jean-François; Dangles, Olivier
2014-01-01
Bridging the gap between the predictions of coarse-scale climate models and the fine-scale climatic reality of species is a key issue of climate change biology research. While it is now well known that most organisms do not experience the climatic conditions recorded at weather stations, there is little information on the discrepancies between microclimates and the global interpolated temperatures used in species distribution models, and on their consequences for organisms' performance. To address this issue, we examined the fine-scale spatiotemporal heterogeneity in air, crop canopy and soil temperatures of agricultural landscapes in the Ecuadorian Andes and compared them to predictions of global interpolated climatic grids. Temperature time-series were measured in air, canopy and soil for 108 localities at three altitudes and analysed using the Fourier transform. Discrepancies between local temperatures and global interpolated grids, and their implications for pest performance, were then mapped and analysed using a GIS statistical toolbox. Our results showed that global interpolated predictions over-estimate local minimum air temperatures by 77.5 ± 10% and under-estimate local maximum air temperatures by 82.1 ± 12% in the studied grid. Additional modifications of local air temperatures were due to the thermal buffering of plant canopies (from -2.7 K during daytime to 1.3 K during night-time) and soils (from -4.9 K during daytime to 6.7 K during night-time), with a significant effect of crop phenology on the buffering. These discrepancies between interpolated and local temperatures strongly affected predictions of the performance of an ectothermic crop pest, as interpolated temperatures predicted pest growth rates 2.3-4.3 times lower than those predicted by local temperatures. This study provides quantitative information on the limitations of coarse-scale climate data in capturing the reality of the climatic environment experienced by living organisms. In highly heterogeneous
Directional sinogram interpolation for motion weighted 4D cone-beam CT reconstruction
Zhang, Hua; Kruis, Matthijs; Sonke, Jan-Jakob
2017-03-01
The image quality of respiratory-sorted four-dimensional (4D) cone-beam (CB) computed tomography (CT) is often limited by streak artifacts due to insufficient projections. A motion weighted reconstruction (MWR) method is proposed to decrease streak artifacts and improve image quality. Firstly, respiratory-correlated CBCT projections were interpolated by directional sinogram interpolation (DSI) to generate additional CB projections for each phase and subsequently reconstructed. Secondly, local motion was estimated by deformable image registration of the interpolated 4D CBCT. Thirdly, a regular 3D FDK CBCT was reconstructed from the non-interpolated projections. Finally, weights were assigned to each voxel based on the local motion and used to combine the 3D FDK CBCT and the interpolated 4D CBCT into the final 4D image. The MWR method was compared with regular 4D CBCT scans as well as McKinnon and Bates (MKB) based reconstructions. Comparisons were made in terms of (1) the steepness of a profile extracted from the boundary of the region-of-interest (ROI), (2) the contrast-to-noise ratio (CNR) inside certain ROIs, and (3) the root-mean-square-error (RMSE) between the planning CT and the CBCT inside a homogeneous moving region, for both a phantom and four patient scans. In a 4D phantom, RMSE was reduced by 24.7% and 38.7% for MKB and MWR respectively, compared to conventional 4D CBCT. Meanwhile, interpolation-induced blur was minimal in static regions for MWR-based reconstructions. In regions with considerable respiratory motion, image blur using MWR is less than with the MKB and 3D Feldkamp (FDK) methods. In the lung cancer patients, average CNRs of MKB, DSI and MWR improved by factors of 1.7, 2.8 and 3.5 respectively relative to 4D FDK. MWR effectively reduces RMSE in 4D cone-beam CT and improves the image quality in both static and respiratory-moving regions compared to the 4D FDK and MKB methods.
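The final weighting step can be sketched as a voxel-wise blend of the two reconstructions. The linear weight map used below is an assumed form; the paper derives weights from locally estimated motion.

```python
import numpy as np

def motion_weighted_combine(cbct3d, cbct4d_phase, motion, m_max=10.0):
    """Voxel-wise blend: static voxels (small motion) take the sharp 3D FDK
    value, moving voxels take the interpolated 4D value. The linear ramp
    w = clip(motion / m_max, 0, 1) is an illustrative assumption."""
    w = np.clip(np.asarray(motion, dtype=float) / m_max, 0.0, 1.0)
    return (1.0 - w) * cbct3d + w * cbct4d_phase
```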
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce
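First-order DFI reduces to linear blending between adjacent frames; the higher interpolation orders evaluated in the paper would draw on more neighbouring frames.

```python
import numpy as np

def interpolate_frames(f0, f1, n_mid):
    """Directly interpolate n_mid intermediate B-mode frames between two
    adjacent frames by linear blending (first-order direct frame
    interpolation; higher orders would use more than two frames)."""
    ts = np.linspace(0.0, 1.0, n_mid + 2)[1:-1]   # interior time fractions
    return [(1.0 - t) * f0 + t * f1 for t in ts]
```

The original frames plus the constructed intermediates then fill the target volume directly, with no separate hole-filling pass.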
Sanders, Brett F.; Chrysikopoulos, Constantinos V.
Channel geometry often is described by a set of longitudinally varying parameters measured at a set of survey stations. To support flow modeling at arbitrary resolution, three methods of parameter interpolation are described including piece-wise linear interpolation, monotone piece-wise-cubic Hermitian interpolation, and universal kriging. The latter gives parameter estimates that minimize the mean square error of the interpolator, and therefore can be used as a standard against which the accuracy of polynomial methods can be assessed. Based on the application of these methods to a dataset describing cross-sectional properties at 283 stations, piece-wise linear interpolation gives parameter estimates that closely track universal kriging estimates and therefore this method is recommended for routine modeling purposes. Piece-wise-cubic interpolation gives parameter estimates that do not track as well. Differences between cubic and kriging estimates were found to be 2-10 times larger than differences between linear and kriging parameter estimates. In the context of one-dimensional flow modeling, the sensitivity of steady state water level predictions to the channel bed interpolator is comparable to a 5% change in the Manning coefficient.
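The recommended piece-wise linear interpolation of a channel parameter between survey stations is a single call to `np.interp`; the stations and widths below are made-up illustrative data, not values from the study.

```python
import numpy as np

# Hypothetical survey stations (m downstream) and channel widths (m).
stations = np.array([0.0, 120.0, 310.0, 480.0])
width = np.array([14.0, 16.5, 15.0, 18.2])

def width_at(x):
    """Piece-wise linear interpolation of a cross-section parameter at an
    arbitrary location, the method recommended for routine modeling."""
    return float(np.interp(x, stations, width))
```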
The twitch interpolation technique for study of fatigue of human quadriceps muscle
DEFF Research Database (Denmark)
Bülow, P M; Nørregaard, J; Mehlsen, J
1995-01-01
The aim of the study was to examine if the twitch interpolation technique could be used to objectively measure fatigue in the quadriceps muscle in subjects performing submaximally. The 'true' maximum isometric quadriceps torque was determined in 21 healthy subjects using the twitch interpolation technique. Then an endurance test was performed in which the subjects made repeated isometric contractions at 50% of the 'true' maximum torque for 4 s, separated by 6 s rest periods. During the test, the force response to single electrical stimulation (twitch amplitude) was measured at 50% and 25% of the estimated maximum torque. In 10 subjects, the test was repeated 2-4 weeks later. Twitch amplitudes at 50% of maximum torque declined exponentially with time in 20 of 21 subjects. The distribution of the exponential rate constant was skewed, with a mean of 4.6 h-1 and a range of 0.3-21.5 h-1. After...
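The exponential rate constant reported above can be estimated from a twitch-amplitude series by a log-linear least-squares fit; this fitting choice is an assumption for illustration, not the paper's stated procedure.

```python
import numpy as np

def twitch_rate_constant(t_hours, amplitude):
    """Estimate the rate constant k (h^-1) of an exponential decline
    A(t) = A0 * exp(-k t) by fitting a line to log(A) versus t."""
    slope, _ = np.polyfit(np.asarray(t_hours, dtype=float),
                          np.log(np.asarray(amplitude, dtype=float)), 1)
    return -slope
```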
Zhang, Fang; Wang, Danyu; Xiao, Zhitao; Geng, Lei; Wu, Jun; Xu, Zhenbei; Sun, Jiao; Wang, Jinjiang; Xi, Jiangtao
2015-11-16
A novel phase extraction method for single electronic speckle pattern interferometry (ESPI) fringes is proposed. The partial differential equations (PDEs) are used to extract the skeletons of the gray-scale fringe and to interpolate the whole-field phase values based on skeleton map. Firstly, the gradient vector field (GVF) of the initial fringe is adjusted by an anisotropic PDE. Secondly, the skeletons of the fringe are extracted combining the divergence property of the adjusted GVF. After assigning skeleton orders, the whole-field phase information is interpolated by the heat conduction equation. The validity of the proposed method is verified by computer-simulated and experimentally obtained poor-quality ESPI fringe patterns.
Interpolated memory tests reduce mind wandering and improve learning of online lectures.
Szpunar, Karl K; Khan, Novall Y; Schacter, Daniel L
2013-04-16
The recent emergence and popularity of online educational resources brings with it challenges for educators to optimize the dissemination of online content. Here we provide evidence that points toward a solution for the difficulty that students frequently report in sustaining attention to online lectures over extended periods. In two experiments, we demonstrate that the simple act of interpolating online lectures with memory tests can help students sustain attention to lecture content in a manner that discourages task-irrelevant mind wandering activities, encourages task-relevant note-taking activities, and improves learning. Importantly, frequent testing was associated with reduced anxiety toward a final cumulative test and also with reductions in subjective estimates of cognitive demand. Our findings suggest a potentially key role for interpolated testing in the development and dissemination of online educational content.
Zhou, Ri-Gui; Tan, Canyun; Fan, Ping
2017-06-01
Reviewing past research on quantum image scaling, only 2D images have been studied. In a quantum system, processing speed increases exponentially compared with a classical computer, since parallel computation can be realized with superposition states. Consequently, this paper proposes quantum multidimensional color image scaling based on nearest-neighbor interpolation for the first time. Firstly, the flexible representation of quantum images (FRQI) is extended to a multidimensional color model. Meanwhile, nearest-neighbor interpolation is extended to multidimensional color images, and a cycle translation operation is designed to perform the scaling-up operation. Then, circuits are designed for quantum multidimensional color image scaling, including scaling up and scaling down, based on the extension of FRQI. In addition, complexity analysis shows that the circuits in the paper have lower complexity. Examples and simulation experiments are given to elaborate the procedure of quantum multidimensional scaling.
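Setting the quantum circuit construction aside, the underlying interpolation rule is classical nearest-neighbor scaling: each output pixel copies the closest source pixel. A minimal NumPy sketch (my own illustration, not the paper's method; the 2x2 test image is invented):

```python
import numpy as np

def nn_scale(img, sy, sx):
    """Scale an image by factors (sy, sx) using nearest-neighbor interpolation."""
    h, w = img.shape[:2]
    new_h, new_w = int(round(h * sy)), int(round(w * sx))
    # Map each output pixel back to its nearest source pixel.
    rows = np.minimum((np.arange(new_h) / sy).astype(int), h - 1)
    cols = np.minimum((np.arange(new_w) / sx).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

img = np.arange(4).reshape(2, 2)   # [[0, 1], [2, 3]]
print(nn_scale(img, 2, 2))         # each pixel becomes a 2x2 block
```

Because the same index arrays are used for every channel, the sketch also works unchanged on H x W x 3 color arrays.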
Fabiano, E; Seidl, M; Della Sala, F
2016-01-01
We have tested the original interaction-strength-interpolation (ISI) exchange-correlation functional for main group chemistry. The ISI functional is based on an interpolation between the weak and strong coupling limits and includes exact exchange as well as the Görling-Levy second-order energy. We have analyzed in detail the basis-set dependence of the ISI functional, its dependence on the ground-state orbitals, and the influence of the size-consistency problem. We show and explain some of the expected limitations of the ISI functional (i.e. for atomization energies), but also unexpected results, such as the good performance for the interaction energy of dispersion-bonded complexes when the ISI correlation is used as a correction to Hartree-Fock.
Energy Technology Data Exchange (ETDEWEB)
Henderson, B.G.; Borel, C.C.; Theiler, J.P.; Smith, B.W.
1996-04-01
Full utilization of multispectral data acquired by whiskbroom and pushbroom imagers requires that the individual channels be registered accurately. Poor registration introduces errors which can be significant, especially in high contrast areas such as boundaries between regions. We simulate the acquisition of multispectral imagery in order to estimate the errors that are introduced by co-registration of different channels and interpolation within the images. We compute the Modulation Transfer Function (MTF) and image quality degradation brought about by fractional pixel shifting and calculate errors in retrieved quantities (surface temperature and water vapor) that occur as a result of interpolation. We also present a method which might be used to estimate sensor platform motion for accurate registration of images acquired by a pushbroom scanner.
A multivariate fast discrete Walsh transform with an application to function interpolation
Liu, Kwong-Ip; Dick, Josef; Hickernell, Fred J.
2009-09-01
For high dimensional problems, such as approximation and integration, one cannot afford to sample on a grid because of the curse of dimensionality. An attractive alternative is to sample on a low discrepancy set, such as an integration lattice or a digital net. This article introduces a multivariate fast discrete Walsh transform for data sampled on a digital net that requires only O(N log N) operations, where N is the number of data points. This algorithm and its inverse are digital analogs of multivariate fast Fourier transforms. This fast discrete Walsh transform and its inverse may be used to approximate the Walsh coefficients of a function and then construct a spline interpolant of the function. This interpolant may then be used to estimate the function's effective dimension, an important concept in the theory of numerical multivariate integration. Numerical results for various functions are presented.
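For intuition about the O(N log N) butterfly structure, here is the one-dimensional radix-2 fast Walsh-Hadamard transform in natural (Hadamard) ordering, a classical analogue of the multivariate digital-net transform described above (a sketch assuming N is a power of two, not the paper's algorithm):

```python
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform, O(N log N) for N a power of two."""
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        # Butterfly stage: combine elements h apart, like an FFT stage
        # but with +/- 1 twiddles only.
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

v = np.array([1.0, 0.0, 1.0, 0.0])
print(fwht(v))            # Walsh-Hadamard coefficients: [2. 2. 0. 0.]
print(fwht(fwht(v)) / 4)  # the transform is self-inverse up to a factor of N
```

The self-inverse property (apply twice, divide by N) is what makes the interpolation step cheap: Walsh coefficients and function values convert back and forth at O(N log N) cost.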
Dupuis, L. R.; Scoggins, J. R.
1979-01-01
Results of the analyses revealed that nonlinear changes or differences formed centers or systems that were mesosynoptic in nature. These systems correlated well in space with upper-level short waves, frontal zones, and radar-observed convection, and were very systematic in time and space. Many of the centers of differences were well established in the vertical, extending up to the tropopause. Statistical analysis showed that, on average, nonlinear changes were larger in convective areas than in nonconvective regions. Errors often exceeding 100 percent were made by assuming variables to change linearly through a 12-h period in areas of thunderstorms, indicating that these nonlinear changes are important in the development of severe weather. Linear changes, however, accounted for more and more of an observed change as the time interval (within the 12-h interpolation period) increased, implying that the accuracy of linear interpolation increased over larger time intervals.
COMPARISON OF DETERMINISTIC INTERPOLATION METHODS FOR THE ESTIMATION OF GROUNDWATER LEVEL
Directory of Open Access Journals (Sweden)
Agnieszka Kamińska
2014-10-01
Full Text Available This paper compares two spatial interpolation techniques, Radial Basis Functions (RBF) and Inverse Distance Weighting (IDW), with the goal of determining which method creates the best representation of reality for measured groundwater levels in a catchment area. The study used the results of research and field observations from the year 2011 in Sosnowica (West Polesie). The data set consists of groundwater levels measured at 15 points in three series of tests. A surface was generated for each method. The water prediction maps showed spatial variation in the groundwater level in the study area, and they are quite different; the RBF method resulted in a smoother map. Analysis of the interpolation methods with the help of cross-validation statistics and plots showed that Radial Basis Functions create the better representation of reality for the measured groundwater levels.
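The IDW estimator compared in the paper is simple enough to sketch: each prediction is a weighted mean of the samples, with weights proportional to an inverse power of distance. A minimal NumPy sketch (the RBF variant and cross-validation are omitted; the four well coordinates and levels below are invented for illustration):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights = 1 / d**power, normalized per query."""
    # Pairwise distances, shape (n_query, n_known).
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power          # eps keeps exact hits finite
    w /= w.sum(axis=1, keepdims=True)     # weights sum to 1 per query point
    return w @ z_known

# Four hypothetical wells at the corners of a unit square, levels in metres:
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
lvl = np.array([10.0, 12.0, 10.0, 12.0])
print(idw(pts, lvl, np.array([[0.5, 0.5]])))  # symmetric point -> mean, [11.]
```

At a sampled location the eps-regularized weight dominates, so IDW honors the data exactly, which is one reason its surfaces look less smooth than RBF's.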
Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo
2017-06-01
The authors earlier developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot, without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from stereolithography data (STL data) is first proposed for robotic machining. The preprocessor makes it possible to control the machining robot directly from STL data without using any commercially provided CAM system. The STL format uses a triangular representation of curved surface geometry. The preprocessor allows machining robots to be controlled along a zigzag or spiral path calculated directly from the STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.
Qian, Fang; Wu, Yihui; Hao, Peng
2017-11-01
Baseline correction is a very important part of pre-processing. The baseline in a spectrum signal can induce uneven amplitude shifts across different wavenumbers and lead to poor results, so these amplitude shifts should be compensated before further analysis. Many algorithms are used to remove the baseline; however, fully automated baseline correction is more convenient in practical applications. A fully automated algorithm based on wavelet feature points and segment interpolation (AWFPSI) is proposed. The algorithm finds feature points through the continuous wavelet transform and estimates the baseline through segment interpolation. AWFPSI is compared with three commonly used fully automated and semi-automated algorithms on a simulated spectrum signal, a visible spectrum signal, and a Raman spectrum signal. The results show that AWFPSI gives better accuracy and has the advantage of easy use.
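The segment-interpolation step can be illustrated independently of the wavelet feature detection: given anchor indices on the baseline (chosen by hand below, not by AWFPSI's wavelet criterion), the baseline is interpolated through them and subtracted. A minimal sketch with an invented synthetic spectrum:

```python
import numpy as np

def baseline_from_anchors(y, anchor_idx):
    """Estimate a baseline by piecewise-linear interpolation through anchor
    (feature) points, then subtract it from the signal."""
    x = np.arange(len(y))
    base = np.interp(x, anchor_idx, y[np.asarray(anchor_idx)])
    return y - base, base

# Synthetic spectrum: linear drift plus one narrow peak at channel 50.
x = np.arange(100)
signal = 0.05 * x
signal[50] += 5.0
corrected, base = baseline_from_anchors(signal, [0, 25, 75, 99])
print(round(corrected[50], 2))  # peak height ~5.0 after drift removal
print(round(corrected[10], 2))  # flat region ~0.0
```

The quality of the correction hinges entirely on picking anchors that sit on the baseline rather than on peaks, which is exactly the job AWFPSI delegates to the continuous wavelet transform.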
Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation
DEFF Research Database (Denmark)
Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt
2015-01-01
We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-negative amplitude parameters to arbitrary complex ones, and (ii) we allow for mismatch between the manifold described by the parameters and its polar approximation. To quantify the improvements afforded by the proposed extensions, we evaluate six algorithms for estimation of parameters in sparse translation…-resolution algorithm. The algorithms studied here provide various tradeoffs between computational complexity, estimation precision, and necessary sampling rate. The work shows that compressive sensing for the class of sparse translation-invariant signals allows for a decrease in sampling rate and that the use of polar…
The analysis of composite laminated beams using a 2D interpolating meshless technique
Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.
2017-09-01
Laminated composite materials are widely used in engineering construction. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, modelling accuracy is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature, the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations ruling the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with results obtained from commercial finite element software. The results show the efficiency and accuracy of the proposed numerical technique.
An Online Method for Interpolating Linear Parametric Reduced-Order Models
Amsallem, David
2011-01-01
A two-step online method is proposed for interpolating projection-based linear parametric reduced-order models (ROMs) in order to construct a new ROM for a new set of parameter values. The first step of this method transforms each precomputed ROM into a consistent set of generalized coordinates. The second step interpolates the associated linear operators on their appropriate matrix manifold. Real-time performance is achieved by precomputing inner products between the reduced-order bases underlying the precomputed ROMs. The proposed method is illustrated by applications in mechanical and aeronautical engineering. In particular, its robustness is demonstrated by its ability to handle the case where the sampled parameter set values exhibit a mode veering phenomenon. © 2011 Society for Industrial and Applied Mathematics.
Transmit Array Interpolation for DOA Estimation via Tensor Decomposition in 2-D MIMO Radar
Cao, Ming-Yang; Vorobyov, Sergiy A.; Hassanien, Aboulnasr
2017-10-01
In this paper, we propose a two-dimensional (2D) joint transmit array interpolation and beamspace design for planar-array monostatic multiple-input multiple-output (MIMO) radar direction-of-arrival (DOA) estimation via tensor modeling. Our underlying idea is to map the transmit array to a desired array and suppress the transmit power outside the spatial sector of interest. In doing so, the signal-to-noise ratio at the receive array is improved. Then, we fold the received data along each dimension into a tensorial structure and apply tensor-based methods to obtain DOA estimates. In addition, we derive a closed-form expression for the DOA estimation bias caused by interpolation errors and argue for using a specially designed look-up table to compensate for the bias. The corresponding Cramer-Rao bound (CRB) is also derived. Simulation results are provided to show the performance of the proposed method and compare it to the CRB.
Directory of Open Access Journals (Sweden)
Marek Ławreszuk
2014-11-01
Full Text Available The aim of this paper is to examine the interpolations in the Epiclesis of the Anaphora prayers in the Liturgies of St. John Chrysostom and St. Basil the Great. The analysis covers the changes made in the second millennium, in particular the emergence of the troparion of the third hour service and the interpolation of the words "changing it by your Holy Spirit". The text shows the genesis of these added words and explains their effect on the structure of the Anaphora prayers. As a result, proposals to solve the problems arising from historical changes in the structure of the Epiclesis and in the whole prayer of the Anaphora are presented.
Hoarau, Charlotte; Christophe, Sidonie
2017-05-01
Graphic interfaces of geoportals allow visualizing and overlaying various (visually) heterogeneous geographical data, often by image blending: vector data, maps, aerial imagery, Digital Terrain Models, etc. Map design and geo-visualization may benefit from methods and tools to hybridize, i.e. visually integrate, heterogeneous geographical data and cartographic representations. In this paper, we aim at designing continuous hybrid visualizations between ortho-imagery and symbolized vector data, in order to control a particular visual property, the perception of photo-realism. The natural appearance (colors, textures) and various texture effects are used to control the photo-realism level of the visualization: color and texture interpolation blocks have been developed. We present a global design method that allows manipulating the behavior of those interpolation blocks on each type of geographical layer, in various ways, in order to provide various cartographic continua.
Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series
Directory of Open Access Journals (Sweden)
Kanadpriya Basu
2015-08-01
Full Text Available This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods, and second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and discuss the relative advantages of each approach for fitting such data. We conclude that mathematical interpolation techniques and statistical curve fitting techniques complement each other and can add value to the study of one-dimensional seismographic time series: they can be used to add more data to the system in case the data set is not large enough to perform standard statistical tests.
Soft tissue artifact compensation by linear 3D interpolation and approximation methods.
Dumas, R; Cheze, L
2009-09-18
Several compensation methods estimate bone pose from a cluster of skin-mounted markers, each influenced by soft tissue artifact (STA). In this study, linear 3D interpolation and approximation methods (affine mapping, Kriging, and radial basis functions (RBF)) and the conventional singular value decomposition (SVD) method were examined to determine their suitability for STA compensation. The ability of these four methods to estimate knee angles and displacements was compared using simulated gait data with and without added STA. The knee angle and displacement estimates of all four methods were similar, with root-mean-square errors (RMSEs) near 1.5 degrees and 4 mm, respectively. The 3D interpolation and approximation methods were more complicated to implement than the conventional SVD method. However, these non-standard methods provided additional geometric (homothety, stretch) and time functions that model the deformation of the cluster of markers. This additional information may be useful to model and compensate for the STA.
Robust super-resolution by fusion of interpolated frames for color and grayscale images
Directory of Open Access Journals (Sweden)
Barry eKarch
2015-04-01
Full Text Available Multi-frame super-resolution (SR) processing seeks to overcome undersampling issues that can lead to undesirable aliasing artifacts. The key to effective multi-frame SR is accurate subpixel inter-frame registration. This accurate registration is challenging when the motion does not obey a simple global translational model and may include local motion. SR processing is further complicated when the camera uses a division-of-focal-plane (DoFP) sensor, such as the Bayer color filter array. Various aspects of these SR challenges have been previously investigated. Fast SR algorithms tend to have difficulty accommodating complex motion and DoFP sensors. Furthermore, methods that can tolerate these complexities tend to be iterative in nature and may not be amenable to real-time processing. In this paper, we present a new fast approach for performing SR in the presence of these challenging imaging conditions. We refer to the new approach as Fusion of Interpolated Frames (FIF) SR. The FIF SR method decouples the demosaicing, interpolation, and restoration steps to simplify the algorithm. Frames are first individually demosaiced and interpolated to the desired resolution. Next, FIF uses a novel weighted sum of the interpolated frames to fuse them into an improved resolution estimate. Finally, restoration is applied to deconvolve the modeled system PSF. The proposed FIF approach has a lower computational complexity than most iterative methods, making it a candidate for real-time implementation. We provide a detailed description of the FIF SR method and show experimental results using synthetic and real datasets in both constrained and complex imaging scenarios. The experiments include airborne grayscale imagery and Bayer color array images with affine background motion plus local motion.
Interpolation and Inversion - New Features in the Matlab Seismic Anisotropy Toolbox
Walker, A.; Wookey, J. M.
2015-12-01
A key step in studies of seismic anisotropy in the mantle is often the creation of models designed to explain its physical origin. We previously released MSAT (the Matlab Seismic Anisotropy Toolbox), which includes a range of functions that can be used together to build these models and provide geological or geophysical insight given measurements of, for example, shear-wave splitting. Here we describe some of the new features of MSAT that will be included in a new release timed to coincide with the 2015 Fall Meeting. A critical step in testing models of the origin of seismic anisotropy is the determination of the misfit between shear-wave splitting parameters predicted from a model and measured from seismic observations. Is a model that correctly reproduces the delay time "better" than a model that correctly reproduces the fast polarization? We have introduced several new methods that use both parameters to calculate the misfit in a meaningful way and these can be used as part of an inversion scheme in order to find a model that best matches measured shear wave splitting. Our preferred approach involves the creation, "splitting", and "unsplitting" of a test wavelet. A measure of the misfit is then provided by the normalized second eigenvalue of the covariance matrix of particle motion for the two wavelets in a way similar to that used to find splitting parameters from data. This can be used as part of an inverse scheme to find a model that can reproduce a set of shear-wave splitting observations. A second challenge is the interpolation of elastic constants between two known points. Naive element-by-element interpolation can result in anomalous seismic velocities from the interpolated tensor. We introduce an interpolation technique involving both the orientation (defined in terms of the eigenvectors of the dilatational or Voigt stiffness tensor) and magnitude of the two end-member elastic tensors. This permits changes in symmetry between the end-members and removes
Atmospheric PSF Interpolation for Weak Lensing in Short Exposure Imaging Data
Energy Technology Data Exchange (ETDEWEB)
Chang, C.; Marshall, P.J.; Jernigan, J.G.; Peterson, J.R.; Kahn, S.M.; Gull, S.F.; AlSayyad, Y.; Ahmad, Z.; Bankert, J.; Bard, D.; Connolly, A.; Gibson, R.R.; Gilmore, K.; Grace, E.; Hannel, M.; Hodge, M.A.; Jones, L.; Krughoff, S.; Lorenz, S.; Marshall, S.; Meert, A.
2012-09-19
A main science goal for the Large Synoptic Survey Telescope (LSST) is to measure the cosmic shear signal from weak lensing to extreme accuracy. One difficulty, however, is that with the short exposure time (≈15 seconds) proposed, the spatial variation of the Point Spread Function (PSF) shapes may be dominated by the atmosphere, in addition to optics errors. While optics errors mainly cause the PSF to vary on angular scales similar to or larger than a single CCD sensor, the atmosphere generates stochastic structures on a wide range of angular scales. It thus becomes a challenge to infer the multi-scale, complex atmospheric PSF patterns by interpolating the sparsely sampled stars in the field. In this paper we present a new method, psfent, for interpolating the PSF shape parameters, based on reconstructing underlying shape parameter maps with a multi-scale maximum entropy algorithm. We demonstrate, using images from the LSST Photon Simulator, the performance of our approach relative to a 5th-order polynomial fit (representing the current standard) and a simple boxcar smoothing technique. Quantitatively, psfent predicts more accurate PSF models in all scenarios, and the residual PSF errors are spatially less correlated. This improvement in PSF interpolation leads to a factor of 3.5 lower systematic errors in the shear power spectrum on scales smaller than ≈13, compared to polynomial fitting. We estimate that with psfent and for stellar densities greater than ≈1/arcmin², the spurious shear correlation from PSF interpolation, after combining a complete 10-year dataset from LSST, is lower than the corresponding statistical uncertainties on the cosmic shear power spectrum, even under a conservative scenario.
Time-Frequency Signal Representations Using Interpolations in Joint-Variable Domains
2016-06-14
Time-frequency (TF) representations are a powerful tool for analyzing Doppler and micro-Doppler signals. These signals are … applied in the instantaneous autocorrelation domain over the time variable, the low-pass filter characteristic underlying linear interpolators lends…
Joint interpolation of data and parameter filtration of a multibeam communications channel
Shpylka, A. A.; Zhuk, S. Ya.
2010-01-01
Optimal and quasi-optimal algorithms for joint data interpolation and parameter filtration of a multibeam communications channel have been synthesized using the tools of mixed Markov processes in discrete time. The quasi-optimal algorithm was analyzed and compared with an adaptive filter based on a training sequence by computer statistical simulation of a model example.
Directory of Open Access Journals (Sweden)
MILIVOJEVIC, Z. N.
2010-02-01
Full Text Available In this paper, the fundamental frequency estimation results for an MP3-modeled speech signal are analyzed. The fundamental frequency was estimated by the Picking-Peaks algorithm with Parametric Cubic Convolution (PCC) interpolation. The efficiency of PCC was tested for the Catmull-Rom, Greville, and two-parametric Greville kernels. Based on the MSE, a window that gives optimal results was chosen.
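The Catmull-Rom case of parametric cubic convolution corresponds to the standard cubic convolution kernel with parameter a = -0.5. A minimal one-dimensional sketch (my own illustration; the Greville kernels and the peak-picking stage are omitted):

```python
import math

def catmull_rom_kernel(s, a=-0.5):
    """Cubic convolution kernel; a = -0.5 gives the Catmull-Rom spline."""
    s = abs(s)
    if s <= 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * (s**3 - 5 * s**2 + 8 * s - 4)
    return 0.0

def interp1d_cubic(samples, t):
    """Interpolate uniformly spaced samples at fractional position t,
    using the four nearest samples (edges are clamped)."""
    i = math.floor(t)
    return sum(samples[min(max(i + k, 0), len(samples) - 1)]
               * catmull_rom_kernel(t - (i + k)) for k in (-1, 0, 1, 2))

print(interp1d_cubic([0.0, 1.0, 2.0, 3.0], 1.5))  # linear data -> 1.5
```

Because the kernel reproduces linear data exactly and passes through the samples (kernel(0) = 1, kernel(±1) = 0), it refines a coarsely sampled spectrum peak without shifting the sample values themselves, which is what makes it attractive for sub-bin peak location.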
Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander
2010-01-01
Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analysis by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single layer theories often results in inaccuracies near the boundaries and when the laminae have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to the piecewise continuous displacement field, which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of the displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as with the refined zigzag theory.
A General Class of Derivative Free Optimal Root Finding Methods Based on Rational Interpolation
Directory of Open Access Journals (Sweden)
Fiza Zafar
2015-01-01
Full Text Available We construct a new general class of derivative-free n-point iterative methods of optimal order of convergence 2^(n-1) using rational interpolants. The special cases of this class are obtained. These methods do not need Newton's iterate in the first step of their iterative schemes. Numerical computations are presented to show that the new methods are efficient and can be seen as better alternatives.
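The paper's general class is not reproduced here, but the simplest classical derivative-free iteration of this flavor, Steffensen's method, illustrates the core trick: a divided difference built from extra function evaluations replaces the derivative in Newton's step (a hedged sketch of the standard textbook method, not the authors' schemes):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration: the slope f'(x) is replaced
    by the divided difference (f(x + f(x)) - f(x)) / f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        g = (f(x + fx) - fx) / fx   # derivative-free slope estimate
        x = x - fx / g              # Newton-like update without f'
    return x

root = steffensen(lambda x: x * x - 2.0, 1.0)
print(round(root, 10))  # ~1.4142135624
```

Like the methods in the paper, this iteration achieves second-order convergence from two function evaluations per step without ever evaluating a derivative; the rational-interpolation constructions generalize the same idea to n points.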
Random Model Sampling: Making Craig Interpolation Work When It Should Not
Directory of Open Access Journals (Sweden)
Marat Akhin
2014-01-01
Full Text Available One of the most serious problems when doing program analyses is dealing with function calls. While function inlining is the traditional approach to this problem, it nonetheless suffers from an increase in analysis complexity due to state space explosion. Craig interpolation has been successfully used in recent years in the context of bounded model checking to do function summarization, which allows one to replace a complete function body with its succinct summary and, therefore, reduce the complexity. Unfortunately, this technique can be applied only to a pair of unsatisfiable formulae. In this work-in-progress paper we present an approach to function summarization based on Craig interpolation that overcomes this limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Preliminary experiments show the applicability of this approach; in our future work we plan to do a full evaluation on real-world examples.
A seismic interpolation and denoising method with curvelet transform matching filter
Yang, Hongyuan; Long, Yun; Lin, Jun; Zhang, Fengjiao; Chen, Zubin
2017-10-01
A new seismic interpolation and denoising method with a curvelet transform matching filter, employing the fast iterative shrinkage thresholding algorithm (FISTA), is proposed. The approach treats the matching filter, seismic interpolation, and denoising as a single inverse problem solved with an inversion iteration algorithm. The curvelet transform has high sparseness and is useful for separating signal from noise, meaning that it can accurately solve the matching problem using FISTA. When applying the new method to a synthetic noisy data set and a data set with missing traces, the optimum matching result is obtained: noise is greatly suppressed, missing seismic data are filled in by interpolation, and the waveform is highly consistent. We then verified the method by applying it to real data, yielding satisfactory results. The results show that the method can reconstruct missing traces in the case of low SNR (signal-to-noise ratio). The above three problems can be solved simultaneously via the FISTA algorithm, which not only increases processing efficiency but also improves the SNR of the seismic data.
Directory of Open Access Journals (Sweden)
Aihua Liu
2017-01-01
Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for a coprime array configuration with holes in its virtual array. The virtual symmetric nonuniform linear array (VSNLA) of the coprime array signal model is introduced; with the conventional MUSIC with spatial smoothing algorithm (SS-MUSIC) applied to the continuous lags in the VSNLA, the degrees of freedom (DoFs) for DOA estimation are obviously not fully exploited. To effectively utilize the extent of DoFs offered by the coarray configuration, a compressive sensing based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial-DOA-estimation-based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.
Directory of Open Access Journals (Sweden)
Tsugio Fukuchi
2014-06-01
Full Text Available The finite difference method (FDM) based on Cartesian coordinate systems can be applied to numerical analyses over any complex domain. A complex domain is usually taken to mean that the geometry of an immersed body in a fluid is complex; here, it means simply an analytical domain of arbitrary configuration. In such an approach, we do not need to treat the outer and inner boundaries differently in numerical calculations; both are treated in the same way. Using a method that adopts algebraic polynomial interpolations in the calculation around near-wall elements, all the calculations over irregular domains reduce to those over regular domains. Discretization of the space differential in the FDM is usually derived using the Taylor series expansion; however, if we use polynomial interpolation systematically, exceptional advantages are gained in deriving high-order differences. Using polynomial interpolations, we can numerically solve the Poisson equation freely over any complex domain. Only a particular type of partial differential equation, the Poisson equation, is treated; however, the arguments put forward have wider generality in numerical calculations using the FDM.
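The central point, that systematic polynomial interpolation yields high-order difference formulas, can be illustrated with a small sketch (my own, not the paper's code): the weights of a finite-difference stencil follow from requiring exactness on monomials, which is equivalent to differentiating the Lagrange interpolant through the nodes.

```python
import numpy as np
from math import factorial

def fd_weights(nodes, x0, m):
    """Finite-difference weights for the m-th derivative at x0, obtained by
    differentiating the polynomial interpolant through the given nodes."""
    nodes = np.asarray(nodes, dtype=float)
    n = len(nodes)
    # Enforce exactness on monomials x**k, k = 0..n-1 (a Vandermonde system):
    #   sum_j w_j * nodes[j]**k  ==  d^m/dx^m (x**k) evaluated at x0.
    A = np.vander(nodes, n, increasing=True).T   # A[k, j] = nodes[j]**k
    b = np.array([factorial(k) / factorial(k - m) * x0 ** (k - m)
                  if k >= m else 0.0 for k in range(n)])
    return np.linalg.solve(A, b)

# Recovers the classical central stencil for the second derivative:
print(fd_weights([-1.0, 0.0, 1.0], 0.0, 2))  # [ 1. -2.  1.]
```

The same routine works for arbitrarily placed nodes, which is exactly what is needed near irregular (non-grid-aligned) boundaries: the near-wall nodes go in, and a consistent high-order stencil comes out.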
High-temperature behavior of a deformed Fermi gas obeying interpolating statistics.
Algin, Abdullah; Senay, Mustafa
2012-04-01
An outstanding idea originally introduced by Greenberg is to investigate whether there is equivalence between intermediate statistics, which may be different from anyonic statistics, and q-deformed particle algebra. Also, a model to be studied for addressing such an idea could possibly provide us some new consequences about the interactions of particles as well as their internal structures. Motivated mainly by this idea, in this work, we consider a q-deformed Fermi gas model whose statistical properties enable us to effectively study interpolating statistics. Starting with a generalized Fermi-Dirac distribution function, we derive several thermostatistical functions of a gas of these deformed fermions in the thermodynamical limit. We study the high-temperature behavior of the system by analyzing the effects of q deformation on the most important thermostatistical characteristics of the system such as the entropy, specific heat, and equation of state. It is shown that such a deformed fermion model in two and three spatial dimensions exhibits the interpolating statistics in a specific interval of the model deformation parameter 0 < q < 1. In particular, for two and three spatial dimensions, it is found from the behavior of the third virial coefficient of the model that the deformation parameter q interpolates completely between attractive and repulsive systems, including the free boson and fermion cases. From the results obtained in this work, we conclude that such a model could provide much physical insight into some interacting theories of fermions, and could be useful to further study the particle systems with intermediate statistics.
Shi, Yan; Wang, Hao Gang; Li, Long; Chan, Chi Hou
2008-10-01
A multilevel Green's function interpolation method based on two kinds of multilevel partitioning schemes, the quasi-2D and the hybrid partitioning scheme, is proposed for analyzing electromagnetic scattering from objects comprising both conducting and dielectric parts. The problem is formulated using the surface integral equation for homogeneous dielectric and conducting bodies. A quasi-2D multilevel partitioning scheme is devised to improve the efficiency of the Green's function interpolation. In contrast to previous multilevel partitioning schemes, noncubic groups are introduced to discretize the whole EM structure in this quasi-2D multilevel partitioning scheme. Based on the detailed analysis of the dimension of the group in this partitioning scheme, a hybrid quasi-2D/3D multilevel partitioning scheme is proposed to effectively handle objects with fine local structures. Selection criteria for some key parameters relating to the interpolation technique are given. The proposed algorithm is ideal for the solution of problems involving objects such as missiles, microstrip antenna arrays, photonic bandgap structures, etc. Numerical examples are presented to show that CPU time is between O(N) and O(N log N) while the computer memory requirement is O(N).
Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter
Directory of Open Access Journals (Sweden)
Edson A. Mitishita
2005-12-01
Full Text Available Aerial triangulation is a fundamental step in any photogrammetric project. Surveying traditional control points still has a high cost, depending on the region to be mapped. The distribution of control points in the block, and their positional quality, directly influence the resulting precision of the aerial triangulation processing. The airborne GPS technique has as key objectives cost reduction and quality improvement of the ground control in modern photogrammetric projects. Nowadays, in Brazil, the largest photogrammetric companies are acquiring airborne GPS systems, but these systems usually present operational difficulties, due to the skilled human resources demanded by the high technology involved. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the photo-shot instant. Traditionally, low-degree polynomials are used, but recent studies show that the accuracy of those polynomials is reduced in turbulent flights, which are quite common, especially in large-scale flights. The objective of this paper is to present a solution to that problem, through an algorithm based on the Kalman Filter, which takes into account the dynamic aspect of the problem. At the end of the paper, the results of a comparison between experiments done with the proposed methodology and a common linear interpolator are shown. These results show a significant accuracy gain over linear interpolation when the Kalman Filter is used.
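A minimal 1-D sketch of the idea (constant-velocity state model; the noise values are illustrative, not the paper's tuning): filter the GPS fixes up to the exposure time, then predict the state forward to the photo-shot instant.

```python
import numpy as np

def kalman_interpolate(times, obs, t_query, q=1.0, r=0.01):
    """Interpolate a GPS antenna coordinate at the photo-shot instant t_query
    with a 1-D constant-velocity Kalman filter over the fixes before t_query.
    q (process-noise intensity) and r (GPS variance) are illustrative only."""
    x = np.array([obs[0], 0.0])              # state: [position, velocity]
    P = np.eye(2) * 10.0                     # diffuse initial covariance
    H = np.array([[1.0, 0.0]])               # we observe position only
    t_prev = times[0]
    for t, z in zip(times[1:], obs[1:]):
        if t > t_query:
            break
        dt = t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x, P = F @ x, F @ P @ F.T + Q        # predict to the next fix
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T) / S                    # Kalman gain (2x1)
        x = x + K[:, 0] * (z - x[0])         # update with the GPS fix
        P = (np.eye(2) - K @ H) @ P
        t_prev = t
    return x[0] + x[1] * (t_query - t_prev)  # short prediction to t_query

# Hypothetical 1 Hz fixes on a uniform track; query between epochs:
est = kalman_interpolate([0.0, 1.0, 2.0, 3.0], [0.0, 2.0, 4.0, 6.0], 2.5)
print(est)
```

Unlike a fixed low-degree polynomial, the velocity state and process noise let the filter adapt to flight dynamics between epochs.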
Determination of Tangent Vectors in Construction of Ferguson Interpolation Curves and Surfaces
Directory of Open Access Journals (Sweden)
I. Linkeová
2000-01-01
Full Text Available In technical practice we often need to find an interpolation curve that must pass through given base points. The basis for calculating the piecewise interpolation curve is the Ferguson cubic curve, whose final shape is significantly influenced by the magnitude and direction of the tangent vectors at the startpoints and endpoints of the individual segments. This article describes a method for calculating tangent vectors at every definition point, which ensures a perfect adaptation of the shape of Ferguson cubic curves to the given configuration of the definition points. This method of determining tangent vectors shows minimal undesirable oscillation between given points, overshooting in the vicinity of given points is considerably limited, and first-degree continuity is ensured between individual parts of the Ferguson cubic curve. The results are used to create a mathematical model of a given surface, formed by connecting Ferguson 12-vector patches. A spherical surface was selected as the test surface, because it is easy to judge the accuracy of the method by comparing the coordinates of points on the calculated interpolation surface with the exact analytically calculated values.
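The Ferguson segment is the cubic Hermite form; below is a sketch with one common tangent-estimation rule (central differences at interior points, which is an assumption here, not necessarily the article's method).

```python
import numpy as np

def hermite_segment(p0, p1, t0, t1, t):
    """Ferguson (cubic Hermite) segment: endpoints p0, p1 with tangents t0, t1,
    evaluated at parameter t in [0, 1]."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*t0 + h01*p1 + h11*t1

def central_tangents(pts):
    """One common tangent choice (not necessarily the article's): central
    differences at interior points, one-sided differences at the ends.
    Sharing tangents across segment joints gives first-degree continuity."""
    pts = np.asarray(pts, float)
    t = np.empty_like(pts)
    t[1:-1] = 0.5 * (pts[2:] - pts[:-2])
    t[0] = pts[1] - pts[0]
    t[-1] = pts[-1] - pts[-2]
    return t

pts = np.array([[0, 0], [1, 1], [2, 0], [3, 1]], float)
tg = central_tangents(pts)
print(hermite_segment(pts[1], pts[2], tg[1], tg[2], 0.5))  # → [1.5 0.5]
```

Because adjacent segments reuse the same tangent at their shared point, the piecewise curve passes through every base point with C1 continuity.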
Hodam, Sanayanbi; Sarkar, Sajal; Marak, Areor G. R.; Bandyopadhyay, A.; Bhadra, A.
2017-12-01
In the present study, to understand the spatial distribution characteristics of the ETo over India, spatial interpolation was performed on the means of 32 years (1971-2002) monthly data of 131 India Meteorological Department stations uniformly distributed over the country by two methods, namely, inverse distance weighted (IDW) interpolation and kriging. Kriging was found to be better while developing the monthly surfaces during cross-validation. However, in station-wise validation, IDW performed better than kriging in almost all the cases, hence is recommended for spatial interpolation of ETo and its governing meteorological parameters. This study also checked if direct kriging of FAO-56 Penman-Monteith (PM) (Allen et al. in Crop evapotranspiration—guidelines for computing crop water requirements, Irrigation and drainage paper 56, Food and Agriculture Organization of the United Nations (FAO), Rome, 1998) point ETo produced comparable results against ETo estimated with individually kriged weather parameters (indirect kriging). Indirect kriging performed marginally well compared to direct kriging. Point ETo values were extended to areal ETo values by IDW and FAO-56 PM mean ETo maps for India were developed to obtain sufficiently accurate ETo estimates at unknown locations.
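A minimal sketch of the IDW interpolation the study compares against kriging (the power parameter and station data here are illustrative, not the study's):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0, eps=1e-12):
    """Inverse distance weighted interpolation: a weighted mean of station
    values with weights 1/d**power. power=2 is a common default."""
    xy_obs = np.asarray(xy_obs, float)
    z_obs = np.asarray(z_obs, float)
    out = []
    for q in np.atleast_2d(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_obs - q, axis=1)
        if d.min() < eps:                    # query coincides with a station
            out.append(z_obs[d.argmin()])
            continue
        w = 1.0 / d**power
        out.append(np.sum(w * z_obs) / np.sum(w))
    return np.array(out)

stations = [(0, 0), (0, 1), (1, 0), (1, 1)]
eto = [4.0, 5.0, 5.0, 6.0]                   # hypothetical monthly ETo (mm/day)
print(idw(stations, eto, [(0.5, 0.5)]))      # → [5.]
```

Unlike kriging, IDW needs no variogram model, which is one reason it remains attractive for station-wise validation.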
Restoring the missing features of the corrupted speech using linear interpolation methods
Rassem, Taha H.; Makbol, Nasrin M.; Hasan, Ali Muttaleb; Zaki, Siti Syazni Mohd; Girija, P. N.
2017-10-01
One of the main challenges in Automatic Speech Recognition (ASR) is noise. The performance of an ASR system degrades significantly if the speech is corrupted by noise. In the spectrogram representation of a speech signal, deleting low Signal to Noise Ratio (SNR) elements leaves an incomplete spectrogram. The speech recognizer should therefore restore the missing elements before performing recognition, which can be done using different spectrogram reconstruction methods. In this paper, the geometrical spectrogram reconstruction methods suggested by several researchers are implemented as a toolbox. In these geometrical reconstruction methods, linear interpolation along the time or frequency axis is used to predict the missing elements between adjacent observed elements in the spectrogram. Moreover, a new linear interpolation method using time and frequency together is presented. The CMU Sphinx III software is used in the experiments to test the performance of the linear interpolation reconstruction methods. The experiments are conducted under different conditions, such as different window lengths and different utterance lengths. The speech corpus consists of 20 male and 20 female speakers, each with two different utterances. As a result, 80% recognition accuracy is achieved at 25% SNR.
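The along-time linear reconstruction can be sketched per frequency bin (a generic sketch, not the toolbox code):

```python
import numpy as np

def interp_missing(spec, missing):
    """Restore missing spectrogram cells (missing == True) by linear
    interpolation along the time axis within each frequency bin, mirroring
    the along-time geometrical reconstruction; edges take the nearest
    observed value (np.interp's boundary behavior)."""
    out = np.array(spec, float)
    frames = np.arange(out.shape[1])
    for f in range(out.shape[0]):           # one frequency bin per row
        known = ~missing[f]
        if known.any() and missing[f].any():
            out[f, missing[f]] = np.interp(frames[missing[f]],
                                           frames[known], out[f, known])
    return out

# Toy spectrogram: rows are frequency bins, columns are time frames.
spec = np.array([[1.0, 0.0, 3.0],
                 [2.0, 0.0, 0.0]])
miss = np.array([[False, True, False],
                 [False, True, True]])
print(interp_missing(spec, miss))
```

Interpolating along frequency instead is the same loop over columns; the paper's combined time-frequency variant would merge both predictions.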
A novel polar format algorithm for SAR images utilizing post azimuth transform interpolation.
Energy Technology Data Exchange (ETDEWEB)
Holzrichter, Michael Warren; Martin, Grant D.; Doerry, Armin Walter
2005-09-01
SAR phase history data represents a polar array in the Fourier space of a scene being imaged. Polar Format processing is about reformatting the collected SAR data to a Cartesian data location array for efficient processing and image formation. In a real-time system, this reformatting or "re-gridding" operation is the most processing intensive, consuming the majority of the processing time; it is also a source of error in the final image. Therefore, any effort to reduce processing time while not degrading image quality is valued. What is proposed in this document is a new way of implementing real-time polar-format processing through a variation on the traditional interpolation/2-D Fast Fourier Transform (FFT) algorithm. The proposed change is based upon the frequency scaling property of the Fourier Transform, which allows a post azimuth FFT interpolation. A post azimuth processing interpolation provides overall benefits to image quality and a potentially more efficient implementation of the polar format image formation process.
Multi-Block Computation by Characteristic Interface Conditions with High-Order Interpolation
Sumi, Takahiro; Kurotaki, Takuji; Hiyama, Jun
In a previous study, the authors proposed the generalized characteristic interface conditions (GCIC) for high-order finite difference multi-block computation in a structured grid system. The GCIC realize single-point connection between adjacent blocks and allow metric discontinuities on the block interface; however, the grid points of the adjacent blocks have to be collocated correspondingly on the block interface. In this study, in order to enhance the flexibility of the GCIC, the GCIC+I (GCIC with Interpolation) are newly developed by incorporating a high-order interpolation method such as Lagrange or B-spline interpolation. The GCIC+I can solve multi-block problems with non-uniform staggered grid connections on the block interface, and the grid resolution can be changed arbitrarily in each block. In this article, the theoretical concept is briefly presented, and numerical test analyses of inviscid and viscous flows are conducted in order to validate the proposed theory. As a result, the successful functioning of the GCIC+I is confirmed.
Interpolation of groundwater quality parameters with some values below the detection limit
Directory of Open Access Journals (Sweden)
A. Bárdossy
2011-09-01
Full Text Available For many environmental variables, measurements cannot deliver exact observation values because the concentration is below the sensitivity of the measuring device (the detection limit). These observations provide useful information but cannot be treated in the same manner as the other measurements. In this paper a methodology for the spatial interpolation of these values is described. The method is based on spatial copulas; two copula models are used here: the Gaussian and a non-Gaussian v-copula. First a mixed maximum likelihood approach is used to estimate the marginal distributions of the parameters. After removal of the marginal distributions, the next step is the maximum likelihood estimation of the parameters of the spatial dependence, taking the values below the detection limit into account. Interpolation using copulas yields full conditional distributions for the unobserved sites, can be used to estimate confidence intervals, and provides a good basis for spatial simulation. The methodology is demonstrated on three different groundwater quality parameters, i.e. arsenic, chloride and deethylatrazin, measured at more than 2000 locations in South-West Germany. The chloride values are artificially censored at different levels in order to evaluate the procedures on a complete dataset by progressive decimation. Interpolation results are evaluated using a cross validation approach. The method is compared with ordinary kriging and indicator kriging, and the uncertainty measures of the different approaches are also compared.
Directory of Open Access Journals (Sweden)
Jinyang Song
2018-01-01
Full Text Available Many modulated signals exhibit a cyclostationarity property, which can be exploited in direction-of-arrival (DOA) estimation to effectively eliminate interference and noise. In this paper, our aim is to integrate the cyclostationarity with the spatial domain and enable the algorithm to estimate more sources than sensors. However, DOA estimation with a sparse array is performed in the coarray domain, and the holes within the coarray limit the usage of the complete coarray information. In order to use the complete coarray information to increase the degrees-of-freedom (DOFs), sparsity-aware-based methods and difference coarray interpolation methods have been proposed. In this paper, the coarray interpolation technique is further explored with cyclostationary signals. Besides the difference coarray model and its corresponding Toeplitz completion formulation, we build up a sum coarray model and formulate a Hankel completion problem. In order to further improve the performance of the structured matrix completion, we define the spatial spectrum sampling operations and the derivative (conjugate) correlation subspaces, which can be exploited to construct orthogonal constraints for the autocorrelation vectors in the coarray interpolation problem. Prior knowledge of the source interval can also be incorporated into the problem. Simulation results demonstrate that the additional constraints contribute to a remarkable performance improvement.
Hasegawa, Hideo
2009-07-01
Generalized Bose-Einstein and Fermi-Dirac distributions in nonextensive quantum statistics have been discussed by the maximum-entropy method (MEM) with the optimum Lagrange multiplier based on the exact integral representation [A. K. Rajagopal, R. S. Mendes, and E. K. Lenzi, Phys. Rev. Lett. 80, 3907 (1998)]. It has been shown that the (q-1) expansion in the exact approach agrees with the result obtained by the asymptotic approach valid for O(q-1). Model calculations have been made with a uniform density of states for electrons and with the Debye model for phonons. Based on the result of the exact approach, we have proposed the interpolation approximation to the generalized distributions, which yields results in agreement with the exact approach within O(q-1) and in high- and low-temperature limits. By using the four methods of the exact, interpolation, factorization, and superstatistical approaches, we have calculated coefficients in the generalized Sommerfeld expansion and electronic and phonon specific heats at low temperatures. A comparison among the four methods has shown that the interpolation approximation is potentially useful in the nonextensive quantum statistics. Supplementary discussions have been made on the (q-1) expansion of the generalized distributions based on the exact approach with the use of the un-normalized MEM, whose results also agree with those of the asymptotic approach.
Interpolation of Superconducting Gravity Observations Using Least-Squares Collocation Method
Habel, Branislav; Janak, Juraj
2014-05-01
Pre-processing of the gravity data measured by a superconducting gravimeter involves removing spikes, offsets and gaps. Their presence in the observations can limit the data analysis and degrade the quality of the obtained results. Short data gaps are filled with a theoretical signal in order to obtain continuous gravity records; this requires an accurate tidal model and possibly the atmospheric pressure at the observed site. The poster presents the design of an algorithm for the interpolation of gravity observations with a sampling rate of 1 min. The novel approach is based on least-squares collocation, which combines adjustment of trend parameters, filtering of noise and prediction. It allows the interpolation of missing data up to a few hours without the necessity of any other information. Appropriate parameters for the covariance function are found using Bayes' theorem in a modified optimization process. The accuracy of the method is improved by rejecting outliers before interpolation. For filling longer gaps, the collocation model is combined with the theoretical tidal signal for the rigid Earth. Finally, the proposed method was tested on superconducting gravity observations at several selected stations of the Global Geodynamics Project. Testing demonstrates its reliability and offers results comparable with the standard approach implemented in the ETERNA software package, without the necessity of an accurate tidal model.
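In spirit, the collocation prediction is a covariance-weighted estimate built from neighboring samples. A minimal sketch with a Gaussian covariance model follows; the covariance form and its parameters are illustrative assumptions, not the Bayes-optimized ones of the poster.

```python
import numpy as np

def collocation_predict(t_obs, y_obs, t_new, var=1.0, corr_len=10.0, noise=0.01):
    """Least-squares collocation prediction of a 1-min gravity series at
    times t_new, using a Gaussian covariance C(dt) = var*exp(-(dt/corr_len)**2).
    noise is the assumed observation noise variance (illustrative values)."""
    t_obs = np.asarray(t_obs, float)[:, None]
    t_new = np.asarray(t_new, float)[:, None]
    C = lambda a, b: var * np.exp(-((a - b.T) / corr_len) ** 2)
    Cpp = C(t_obs, t_obs) + noise * np.eye(len(t_obs))   # signal + noise covariance
    Csp = C(t_new, t_obs)                                # cross-covariance
    return Csp @ np.linalg.solve(Cpp, np.asarray(y_obs, float))

t_obs = np.array(list(range(0, 20)) + list(range(31, 61)), float)  # gap of ~11 min
y_obs = np.sin(t_obs / 20.0)                                       # stand-in signal
print(collocation_predict(t_obs, y_obs, [25.0]))                   # fills inside the gap
```

The same machinery also returns a prediction variance from the Schur complement, which is how the method supports rejection of outliers and quality assessment.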
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng
2018-03-01
Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, after a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is performed for multiresolution analysis of the wavefront phase spectrum, during which soft threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is performed to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.
First-principles calculation of nonlinear optical responses by Wannier interpolation
Wang, Chong; Liu, Xiaoyu; Kang, Lei; Gu, Bing-Lin; Xu, Yong; Duan, Wenhui
2017-09-01
Various nonlinear optical (NLO) responses, like shift current and second harmonic generation (SHG), are revealed to be closely related to topological quantities involving the Berry connection and Berry curvature. First-principles prediction of NLO responses is of great importance to fundamental research and device design, but efficient computational methods are still lacking. The main challenge is that the calculations require a very dense k-point sampling, which is computationally expensive, and a proper treatment of the gauge problem for topological quantities. Here we present a Wannier interpolation method for first-principles calculation of NLO responses, which overcomes these challenges. This method interpolates physical quantities accurately for any desired k point with little computational cost and constructs a smooth gauge by perturbation theory. To demonstrate the method, we study the shift current of monolayer GeS and WS2 as well as the SHG of bulk GaAs, obtaining good agreement with previous results. We show that the traditional sum rule method converges slowly with the number of bands, whereas the perturbation approach does not. Moreover, our method is easily adapted to build tight-binding models for subsequent theoretical investigations. Last but not least, the method is compatible with most first-principles approaches, including density functional theory and beyond. With these advantages, Wannier interpolation is a promising method for first-principles studies of NLO phenomena.
Energy Technology Data Exchange (ETDEWEB)
Pimentel, David A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sheppard, Daniel G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-01
It was recently demonstrated that EOSPAC 6 continued to incorrectly create and interpolate pre-inverted SESAME data tables after the release of version 6.3.2beta.2. Significant interpolation pathologies were discovered to occur when EOSPAC 6's host software enabled pre-inversion with the EOS_INVERT_AT_SETUP option. This document describes a solution that uses data transformations found in EOSPAC 5 and its predecessors. The numerical results and performance characteristics of both the default and pre-inverted interpolation modes in both EOSPAC 6.3.2beta.2 and the fixed logic of EOSPAC 6.4.0beta.1 are presented herein, and the latter software release is shown to produce significantly improved numerical results for the pre-inverted interpolation mode.
Directory of Open Access Journals (Sweden)
Silvio Jorge Coelho Simões
2012-08-01
Full Text Available Reference evapotranspiration is an important hydrometeorological variable; its measurement is scarce in large portions of the Brazilian territory, which demands the search for alternative methods and techniques for its quantification. In this sense, the present work investigated a method for the spatialization of reference evapotranspiration using the geostatistical method of kriging in regions with limited data and hydrometeorological stations. The monthly average reference evapotranspiration was calculated by the Penman-Monteith-FAO equation, based on data from three weather stations located in southern Minas Gerais (Itajubá, Lavras and Poços de Caldas), and subsequently interpolated by ordinary point kriging using the approach "calculate and interpolate". The meteorological data for a fourth station (Três Corações), located within the area of interpolation, were used to validate the spatially interpolated reference evapotranspiration. Due to the reduced number of stations and the consequent impossibility of carrying out variographic analyses, the correlation coefficient (r), index of agreement (d), mean bias error (MBE), root mean square error (RMSE) and t-test were used for comparison between the calculated and interpolated reference evapotranspiration at the Três Corações station. The results of this comparison indicated that the kriging procedure, even using few stations, interpolates the reference evapotranspiration satisfactorily, and is therefore an important tool for agricultural and hydrological applications in regions lacking data.
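The comparison statistics named in the abstract can be computed as follows (a generic sketch; d here is Willmott's index of agreement, and the sample values are made up):

```python
import numpy as np

def agreement_stats(obs, est):
    """Validation statistics for interpolated vs. calculated ETo:
    Pearson r, Willmott's index of agreement d, mean bias error (MBE),
    and root mean square error (RMSE)."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    r = np.corrcoef(obs, est)[0, 1]
    mbe = np.mean(est - obs)
    rmse = np.sqrt(np.mean((est - obs) ** 2))
    denom = np.sum((np.abs(est - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    d = 1.0 - np.sum((est - obs) ** 2) / denom
    return r, d, mbe, rmse

obs = [3.1, 4.0, 4.8, 5.5]   # hypothetical station ETo (mm/day)
est = [3.3, 3.9, 5.0, 5.2]   # hypothetical kriged estimates
print(agreement_stats(obs, est))
```

MBE exposes systematic over- or under-estimation, while d and RMSE summarize overall agreement; using them together guards against a high r masking a constant bias.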
Small-scale health-related indicator acquisition using secondary data spatial interpolation
Directory of Open Access Journals (Sweden)
Thompson Mary E
2010-10-01
Full Text Available Abstract Background Due to the lack of small-scale neighbourhood-level health-related indicators, analyses of the social and spatial determinants of health often encounter difficulties in assessing the interrelations of neighbourhood and health. Although secondary data sources are now becoming increasingly available, they usually cannot be directly utilized for analysis in studies other than the one for which they were designed, due to sampling issues. This paper aims to develop data handling and spatial interpolation procedures to obtain small-area-level variables using the Canadian Community Health Survey (CCHS) data, so that meaningful small-scale neighbourhood-level health-related indicators can be obtained for community health research and health geographical analysis. Results Through the analysis of spatial autocorrelation, cross validation comparison, and modeled effect comparison with census data, kriging is identified as the most appropriate spatial interpolation method for obtaining predicted values of CCHS variables at unknown locations. Based on the spatial structures of the CCHS data, kriging parameters are suggested and potential small-area-level health-related indicators are derived. An empirical study is conducted to demonstrate the effective use of derived neighbourhood variables in spatial statistical modeling. Suggestions are also given on the accuracy, reliability and usage of the obtained small-area-level indicators, as well as further improvements of the interpolation procedures. Conclusions CCHS variables are moderately spatially autocorrelated, making kriging a valid method for predicting values at unsampled locations. The derived variables are reliable but somewhat smoother, with smaller variations than the real values. As potential neighbourhood exposures in spatial statistical modeling, these variables are more suitable for exploring potential associations than for testing the significance of these associations, especially for associations
Energy Technology Data Exchange (ETDEWEB)
Castillo M, J. A
1999-07-01
Nuclear data bank generation is a process that requires a great amount of both computational and human resources. Since it is sometimes necessary to create a great number of data banks, it is convenient to have a reliable tool that generates them with the fewest resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two proposals were developed, both applying the finite element method with a single 16-node element to carry out the interpolation. In the first proposal, the canonical basis was employed to obtain the interpolating polynomial and the corresponding linear equation systems, which were solved by Gaussian elimination with partial pivoting. In the second case, the Newton basis was used to obtain the system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation which, even though it does not wholly replace the conventional process, is helpful when a great number of data banks must be created.
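A minimal sketch of bicubic interpolation on one 16-node (4x4) element as a tensor product of cubic Lagrange polynomials; the node layout, variable names, and sample function are assumptions for illustration, not the INTPOLBI implementation.

```python
import numpy as np

def lagrange_basis(xs, x):
    """Values of the 1-D Lagrange basis polynomials for nodes xs at point x."""
    xs = np.asarray(xs, float)
    L = np.ones(len(xs))
    for i in range(len(xs)):
        for j in range(len(xs)):
            if i != j:
                L[i] *= (x - xs[j]) / (xs[i] - xs[j])
    return L

def bicubic_16node(u_nodes, g_nodes, table, u, g):
    """Bicubic interpolation on a single 16-node element: a tensor product
    of cubic Lagrange polynomials in the two independent variables
    (here standing in for the uranium and gadolinia percentages)."""
    return lagrange_basis(u_nodes, u) @ table @ lagrange_basis(g_nodes, g)

u_nodes = g_nodes = [0.0, 1.0, 2.0, 3.0]
# Tabulated values on the 4x4 node grid (made-up smooth function):
table = np.array([[u**3 + g**2 for g in g_nodes] for u in u_nodes])
print(bicubic_16node(u_nodes, g_nodes, table, 1.5, 2.5))  # 1.5**3 + 2.5**2, up to rounding
```

Because the element is exact for polynomials up to degree 3 in each variable, smooth tabulated data is reproduced to high accuracy between nodes.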
Directory of Open Access Journals (Sweden)
Tao ZHU
2017-04-01
Full Text Available Objective To explore the anti-HBV activity of anodonta polysaccharides (AP) and the dose-effect relationship in vitro. Methods HepG2.2.15 cells were cultured in vitro and incubated at 37℃ for nine days with AP at a dilution ratio of 1:10. The expression levels of HBsAg and HBeAg were detected using ELISA, and HBV-DNA copies were detected by real-time fluorescent quantitative PCR. Based on the Thiele-type continued-fraction interpolation method, the anti-HBV activity of AP was studied, and the IC50 and the maximum inhibition rate were calculated. Results AP had a significant inhibitory effect on the expression of HBsAg and HBeAg in HepG2.2.15 cells in vitro, as well as on HBV-DNA replication. By Thiele-type continued-fraction interpolation, the equations of the dose-effect relationship were obtained, determining the maximum inhibition rates of AP on HBeAg and HBsAg secretion to be 47.7% and 56.4%, respectively, and the IC50 for inhibiting HBeAg expression to be 143.7 mg/L. AP was also able to inhibit HBV-DNA replication, with a maximum inhibition rate of 17.8% by the same method. Conclusion Anodonta polysaccharides have anti-HBV activity. The Thiele-type continued-fraction interpolation method is simple and practical and could be used as a new method for the analysis of drug activity. DOI: 10.11855/j.issn.0577-7402.2017.03.02
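The Thiele-type continued-fraction interpolation used for the dose-effect equations can be sketched generically: inverse differences give the coefficients, and the continued fraction is evaluated bottom-up (the demonstration data below is a made-up rational function, not the study's assay data).

```python
def thiele(xs, ys, x):
    """Thiele-type continued-fraction (rational) interpolation through the
    points (xs, ys), evaluated at x. Coefficients are inverse differences."""
    n = len(xs)
    a = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            a[i] = (xs[i] - xs[k - 1]) / (a[i] - a[k - 1])
    val = a[n - 1]
    for k in range(n - 2, -1, -1):      # evaluate the fraction bottom-up
        val = a[k] + (x - xs[k]) / val
    return val

# A rational function is reproduced exactly from three samples,
# which is why this fits saturating dose-response curves well:
xs = [0.0, 1.0, 2.0]
ys = [1.0 / (1.0 + x) for x in xs]
print(thiele(xs, ys, 0.5))   # exactly 1/1.5 = 0.666...
```

Unlike polynomial interpolation, the rational form can plateau, which suits dose-effect curves that approach a maximum inhibition rate.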
Zhang, Yongqiang; Vaze, Jai; Chiew, Francis H. S.; Teng, Jin; Li, Ming
2014-09-01
Understanding a catchment's behaviours in terms of its underlying hydrological signatures is a fundamental task in surface water hydrology. It can help in water resource management, catchment classification, and prediction of runoff time series. This study investigated three approaches for predicting six hydrological signatures in southeastern Australia. These approaches were (1) spatial interpolation with three weighting schemes, (2) index model that estimates hydrological signatures using catchment characteristics, and (3) classical rainfall-runoff modelling. The six hydrological signatures fell into two categories: (1) long-term aggregated signatures - annual runoff coefficient, mean of log-transformed daily runoff, and zero flow ratio, and (2) signatures obtained from daily flow metrics - concavity index, seasonality ratio of runoff, and standard deviation of log-transformed daily flow. A total of 228 unregulated catchments were selected, with half the catchments randomly selected as gauged (or donors) for model building and the rest considered as ungauged (or receivers) to evaluate performance of the three approaches. The results showed that for two long-term aggregated signatures - the log-transformed daily runoff and runoff coefficient, the index model and rainfall-runoff modelling performed similarly, and were better than the spatial interpolation methods. For the zero flow ratio, the index model was best and the rainfall-runoff modelling performed worst. The other three signatures, derived from daily flow metrics and considered to be salient flow characteristics, were best predicted by the spatial interpolation methods of inverse distance weighting (IDW) and kriging. Comparison of flow duration curves predicted by the three approaches showed that the IDW method was best. The results found here provide guidelines for choosing the most appropriate approach for predicting hydrological behaviours at large scales.
Mohammadi, Seyedeh Atefeh; Azadi, Majid; Rahmani, Morteza
2017-08-01
All numerical weather prediction (NWP) models inherently have substantial biases, especially in the forecast of near-surface weather variables. Statistical methods can be used to remove the systematic error based on historical bias data at observation stations. However, many end users of weather forecasts need bias corrected forecasts at locations that scarcely have any historical bias data. To circumvent this limitation, the bias of surface temperature forecasts on a regular grid covering Iran is removed, by using the information available at observation stations in the vicinity of any given grid point. To this end, the running mean error method is first used to correct the forecasts at observation stations, then four interpolation methods including inverse distance squared weighting with constant lapse rate (IDSW-CLR), Kriging with constant lapse rate (Kriging-CLR), gradient inverse distance squared with linear lapse rate (GIDS-LR), and gradient inverse distance squared with lapse rate determined by classification and regression tree (GIDS-CART), are employed to interpolate the bias corrected forecasts at neighboring observation stations to any given location. The results show that all four interpolation methods used do reduce the model error significantly, but Kriging-CLR has better performance than the other methods. For Kriging-CLR, root mean square error (RMSE) and mean absolute error (MAE) were decreased by 26% and 29%, respectively, as compared to the raw forecasts. It is also found that, after applying any of the proposed methods, unlike the raw forecasts, the bias corrected forecasts do not show spatial or temporal dependency.
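The station-level running mean error correction applied before interpolation can be sketched as follows (the window length is illustrative, not the paper's setting):

```python
import numpy as np

def running_mean_error_correction(forecasts, observations, window=7):
    """Remove systematic forecast bias at a station by subtracting the
    trailing running-mean error of the previous `window` days."""
    f = np.asarray(forecasts, float)
    o = np.asarray(observations, float)
    corrected = f.copy()
    for t in range(1, len(f)):
        lo = max(0, t - window)
        bias = np.mean(f[lo:t] - o[lo:t])   # mean error over the trailing window
        corrected[t] = f[t] - bias
    return corrected

obs = [10.0, 11.0, 12.0, 13.0]
fc = [12.0, 13.0, 14.0, 15.0]                # forecasts with a constant +2 bias
print(running_mean_error_correction(fc, obs))  # → [12. 11. 12. 13.]
```

The corrected station series then feed any of the four interpolation schemes, which spread the correction to grid points without historical bias data.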
Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation
Directory of Open Access Journals (Sweden)
Ahmed Patel
2012-11-01
Full Text Available This paper presents a novel, real-time defect detection system, based on a best-fit polynomial interpolation, that inspects the condition of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been implemented, tested, and validated on numerous pipes and ceramic tiles. The results show that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously.
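In one dimension, the core idea can be sketched as fitting a best-fit polynomial to a measured surface profile and flagging points whose residuals exceed a tolerance. The degree, tolerance, and profile values below are hypothetical; the actual system operates on images of pipes and tiles.

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via normal equations (Gaussian elimination)."""
    n = deg + 1
    # build the normal-equation matrix A and right-hand side b
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n                        # back substitution
    for r in range(n - 1, -1, -1):
        s = b[r] - sum(A[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = s / A[r][r]
    return coeffs                             # lowest order first

def find_defects(profile, deg=2, tol=1.0):
    """Indices where the measured profile deviates from its best-fit polynomial."""
    xs = list(range(len(profile)))
    c = polyfit(xs, profile, deg)
    fit = [sum(ck * x ** k for k, ck in enumerate(c)) for x in xs]
    return [i for i, (y, f) in enumerate(zip(profile, fit)) if abs(y - f) > tol]

# a nominally flat surface with a popped-up blob at index 4 (values hypothetical)
profile = [0.0, 0.1, 0.0, 0.1, 3.0, 0.1, 0.0, 0.1, 0.0]
print(find_defects(profile))   # -> [4]
```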
Franco, Ademir; Thevissen, Patrick; Coudyzer, Walter; Develter, Wim; Van de Voorde, Wim; Oyen, Raymond; Vandermeulen, Dirk; Jacobs, Reinhilde; Willems, Guy
2013-05-01
Virtual autopsy is a medical imaging technique, using full-body computed tomography (CT), allowing for a noninvasive and permanent observation of all body parts. For dental identification, clinically and radiologically observed ante-mortem (AM) and post-mortem (PM) oral identifiers are compared. The study aimed to verify whether a PM dental charting can be performed on virtual reconstructions of full-body CTs using the Interpol dental codes. A sample of 103 PM full-body CTs was collected from the forensic autopsy files of the Department of Forensic Medicine, University Hospitals, KU Leuven, Belgium. For validation purposes, 3 of these bodies underwent a complete dental autopsy, a dental radiological examination and a full-body CT examination. The bodies were scanned in a Siemens Definition Flash CT scanner (Siemens Medical Solutions, Germany). The images were examined at 8- and 12-bit screen resolution as three-dimensional (3D) reconstructions and as axial, coronal and sagittal slices. InSpace(®) (Siemens Medical Solutions, Germany) software was used for 3D reconstruction. The dental identifiers were charted on pink PM Interpol forms (F1, F2), using the related dental codes. Optimal dental charting was obtained by combining observations on 3D reconstructions and CT slices. It was not feasible to differentiate between different kinds of dental restoration materials. The 12-bit resolution enabled collection of more detailed evidence, mainly related to positions within a tooth. Oral identifiers not implemented in the Interpol dental coding were observed. Amongst these, the observed 3D morphological features of dental and maxillofacial structures are important identifiers. The latter can become particularly more relevant in the future, not only because of their inherent spatial features, but also because of the increasing preventive dental treatment and the decreasing application of dental restorations. In conclusion, PM full-body CT examinations need to be implemented in the
Xu, Jing; Liu, Xiaofei; Wang, Yutian
2016-08-01
Parallel factor analysis is a widely used method for extracting qualitative and quantitative information about the analyte of interest from a fluorescence excitation-emission matrix containing unknown components. Large-amplitude scattering influences the results of parallel factor analysis. Many methods of eliminating scattering have been proposed, each with its own advantages and disadvantages. The combination of symmetrical subtraction and interpolated values has been discussed; the combination refers both to combining results and to combining methods. Nine methods were used for comparison. The results show that the combination of results gives better concentration predictions for all the components.
Energy Technology Data Exchange (ETDEWEB)
Suescun D, D.; Figueroa J, J. H. [Pontificia Universidad Javeriana Cali, Departamento de Ciencias Naturales y Matematicas, Calle 18 No. 118-250, Cali, Valle del Cauca (Colombia); Rodriguez R, K. C.; Villada P, J. P., E-mail: dsuescun@javerianacali.edu.co [Universidad del Valle, Departamento de Fisica, Calle 13 No. 100-00, Cali, Valle del Cauca (Colombia)
2015-09-15
A new method to solve the inverse point kinetics equation numerically without using the Lagrange interpolating polynomial is formulated; this method uses an N-point polynomial approximation based on a recurrence process to simulate different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)
A Weight-Averaged Interpolation Method for Coupling Time-Accurate Rarefied and Continuum Flows
Diaz, Steven William
A novel approach to coupling rarefied and continuum flow regimes as a single, hybrid model is introduced. The method borrows from techniques used in the simulation of spray flows to interpolate Lagrangian point-particles onto an Eulerian grid in a weight-averaged sense. A brief overview of traditional methods for modeling both rarefied and continuum domains is given, and a review of the literature regarding rarefied/continuum flow coupling is presented. Details of the theoretical development of the method of weighted interpolation are then described. The method evaluates macroscopic properties at the nodes of a CFD grid via the weighted interpolation of all simulated molecules in a set surrounding the node. The weight factor applied to each simulated molecule is the inverse of the linear distance between it and the given node. During development, the method was applied to several preliminary cases, including supersonic flow over an airfoil, subsonic flow over tandem airfoils, and supersonic flow over a backward-facing step, all at low Knudsen numbers. The main thrust of the research centered on the time-accurate expansion of a rocket plume into a near-vacuum. The method proves flexible enough to be used with various flow solvers, demonstrated by the use of Fluent as the continuum solver for the preliminary cases and a NASA-developed Large Eddy Simulation research code, WRLES, for the full lunar model. The method is applicable to a wide range of Mach numbers and is completely grid independent, allowing the rarefied and continuum solvers to be optimized for their respective domains without consideration of the other. The work presented demonstrates the validity and flexibility of the method of weighted interpolation as a novel concept in the field of hybrid flow coupling. The method marks a significant divergence from current practices in the coupling of rarefied and continuum flow domains and offers a kernel on which to base an ongoing field of research. It has the
Directory of Open Access Journals (Sweden)
Elmira Ashpazzadeh
2018-04-01
Full Text Available A numerical technique based on the Hermite interpolant multiscaling functions is presented for the solution of convection-diffusion equations. The operational matrices of derivative, integration and product are presented for multiscaling functions and are utilized to reduce the solution of the linear convection-diffusion equation to the solution of algebraic equations. Because of the sparsity of these matrices, this method is computationally very attractive and reduces CPU time and computer memory. Illustrative examples are included to demonstrate the validity and applicability of the new technique.
Interpolation of text from the Castilian Macer Floridus in ms. II-3063 of the Real Biblioteca
Directory of Open Access Journals (Sweden)
Jesús Pensado Figueiras
2012-12-01
Full Text Available Textual analysis of codex II-3063 of the Real Biblioteca (Royal Palace, Madrid has located a new Castilian version of several passages from the Latin work on herbalism De viribus herbarum, better known as Macer Floridus. The text of codex II-3063 contains interpolated descriptions of the virtues of six plants outlined in the Latin text and confirms the hypothesis of a single translation from Latin into a peninsular Romance language, from which later versions would be made until the current number of texts of the Macer Floridus in Catalan, Aragonese, and Castilian was reached.
On Interpolation Functions of the Generalized Twisted (h,q)-Euler Polynomials
Directory of Open Access Journals (Sweden)
Kyoung Ho Park
2009-01-01
Full Text Available The aim of this paper is to construct p-adic twisted two-variable Euler (h,q)-L-functions, which interpolate generalized twisted (h,q)-Euler polynomials at negative integers. In this paper, we treat twisted (h,q)-Euler numbers and polynomials associated with the p-adic invariant integral on ℤp. We will construct the two-variable twisted (h,q)-Euler zeta function and the two-variable (h,q)-L-function in the complex s-plane.
Vnukov, A. A.; Shershnev, M. B.
2018-01-01
The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to scale images. The result of the work is a program for scaling images by different methods. Comparison of the quality of scaling by different methods is given.
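The abstract does not name the three interpolation methods studied, but bilinear interpolation is a typical candidate for such a comparison. A serial sketch follows (assuming source and target sizes of at least 2x2); each output row is independent of the others, which is what makes per-row parallelization of the scaling natural.

```python
def bilinear_resize(img, new_w, new_h):
    """Scale a grayscale image (list of rows) by bilinear interpolation.

    Assumes both the source and the target are at least 2x2.
    """
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        y = j * (h - 1) / (new_h - 1)            # source row coordinate
        y0 = min(int(y), h - 2)
        fy = y - y0
        row = []
        for i in range(new_w):
            x = i * (w - 1) / (new_w - 1)        # source column coordinate
            x0 = min(int(x), w - 2)
            fx = x - x0
            # blend the four surrounding source pixels
            top = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
            bot = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

src = [[0, 100],
       [100, 200]]
for row in bilinear_resize(src, 3, 3):
    print(row)   # a 2x2 intensity ramp upscaled to 3x3
```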
Novel method of interpolation and extrapolation of functions by a linear initial value problem
CSIR Research Space (South Africa)
Shatalov, M
2008-09-01
Full Text Available Buffelspoort TIME2008 Peer-reviewed Conference Proceedings, 22–26 September 2008. ... periodic functions, periodic functions and exponents, and polynomials, exponents and periodic functions. It is well suited for the purposes of interpolation...
Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.
Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar
2014-01-01
We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. Comparisons are given with some other newly developed sixteenth-order methods. The interval Newton's method is also used for finding sufficiently accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval. Basins of attraction show the effectiveness of the method.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Directory of Open Access Journals (Sweden)
Shaofeng Wang
2017-05-01
Full Text Available Mineral reserve estimation and mining design depend on a precise modeling of the mineralized deposit. A multi-step interpolation algorithm, including 1D biharmonic spline estimator for interpolating floor altitudes, 2D nearest neighbor, linear, natural neighbor, cubic, biharmonic spline, inverse distance weighted, simple kriging, and ordinary kriging interpolations for grade distribution on the two vertical sections at roadways, and 3D linear interpolation for grade distribution between sections, was proposed to build a 3D grade distribution model of the mineralized seam in a longwall mining panel with a U-shaped layout having two roadways at both sides. Compared to field data from exploratory boreholes, this multi-step interpolation using a natural neighbor method shows an optimal stability and a minimal difference between interpolation and field data. Using this method, the 97,576 m3 of bauxite, in which the mass fraction of Al2O3 (Wa) and the mass ratio of Al2O3 to SiO2 (Wa/s) are 61.68% and 27.72, respectively, was delimited from the 189,260 m3 mineralized deposit in the 1102 longwall mining panel in the Wachangping mine, Southwest China. The mean absolute errors, the root mean squared errors and the relative standard deviations of errors between interpolated data and exploratory grade data at six boreholes are 2.544, 2.674, and 32.37% of Wa; and 1.761, 1.974, and 67.37% of Wa/s, respectively. The proposed method can be used for characterizing the grade distribution in a mineralized seam between two roadways at both sides of a longwall mining panel.
Directory of Open Access Journals (Sweden)
Nikesh S. Dattani
2012-03-01
Full Text Available One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, which can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as little as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.
An Improved DOA Estimation Approach Using Coarray Interpolation and Matrix Denoising.
Guo, Muran; Chen, Tao; Wang, Ben
2017-05-16
Co-prime arrays can estimate the directions of arrival (DOAs) of O(MN) sources with O(M+N) sensors, and are convenient to analyze due to their closed-form expression for the locations of virtual lags. However, the number of degrees of freedom is limited due to the existence of holes in difference coarrays if subspace-based algorithms such as the spatial smoothing multiple signal classification (MUSIC) algorithm are utilized. To address this issue, techniques such as positive definite Toeplitz completion and array interpolation have been proposed in the literature. Another factor that compromises the accuracy of DOA estimation is the limited number of snapshots. Coarray-based processing is particularly sensitive to the discrepancy between the sample covariance matrix and the ideal covariance matrix due to the finite number of snapshots. In this paper, coarray interpolation based on matrix completion (MC) followed by a denoising operation is proposed to detect more sources with a higher accuracy. The effectiveness of the proposed method rests on the capability of MC to fill in holes in the virtual sensors and on that of the denoising operation to reduce the perturbation in the sample covariance matrix. The results of numerical simulations verify the superiority of the proposed approach.
Shortcut in DIC error assessment induced by image interpolation used for subpixel shifting
Bornert, Michel; Doumalin, Pascal; Dupré, Jean-Christophe; Poilane, Christophe; Robert, Laurent; Toussaint, Evelyne; Wattrisse, Bertrand
2017-04-01
In order to characterize errors of Digital Image Correlation (DIC) algorithms, sets of virtual images are often generated from a reference image by in-plane sub-pixel translations. This leads to the determination of the well-known S-shaped bias error curves and their corresponding random error curves. As images are usually shifted by using interpolation schemes similar to those used in DIC algorithms, the question of a possible bias in the quantification of the measurement uncertainties of DIC software packages arises; this question is the main subject of this paper. In this collaborative work, synthetic numerically shifted images are built with two methods: one based on interpolations of the reference image and the other based on the transformation of an analytic texture function. Images are analyzed using an in-house subset-based DIC software and the results are compared and discussed. The effect of image noise is also highlighted. The main result is that the a priori choices made to numerically shift the reference image modify the DIC results and may lead to wrong conclusions in terms of DIC error assessment.
Memory-efficient optimization of Gyrokinetic particle-to-grid interpolation for multicore processors
Energy Technology Data Exchange (ETDEWEB)
Madduri, Kamesh [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ethier, Stephane [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Strohmaier, Erich [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, Katherine [Univ. of California, Berkeley, CA (United States)
2009-01-01
We present multicore parallelization strategies for the particle-to-grid interpolation step in the Gyrokinetic Toroidal Code (GTC), a 3D particle-in-cell (PIC) application to study turbulent transport in magnetic-confinement fusion devices. Particle-grid interpolation is a known performance bottleneck in several PIC applications. In GTC, this step involves particles depositing charges to a 3D toroidal mesh, and multiple particles may contribute to the charge at a grid point. We design new parallel algorithms for the GTC charge deposition kernel, and analyze their performance on three leading multicore platforms. We implement thirteen different variants for this kernel and identify the best-performing ones given typical PIC parameters such as the grid size, number of particles per cell, and the GTC-specific particle Larmor radius variation. We find that our best strategies can be 2x faster than the reference optimized MPI implementation, and our analysis provides insight into desirable architectural features for high-performance PIC simulation codes.
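The deposition step being optimized can be illustrated in one dimension with linear ("cloud-in-cell") weights, where each particle splits its charge between its two nearest grid points. This is a simplified analogue, not GTC's 3D toroidal kernel; the grid size and charges below are invented.

```python
def deposit_charges(positions, charges, n_grid, length):
    """1D linear (cloud-in-cell) charge deposition onto a periodic grid.

    Each particle splits its charge between the two nearest grid points,
    weighted by linear distance; a simplified analogue of the 3D toroidal
    deposition in GTC (grid shape and periodicity here are assumptions).
    """
    grid = [0.0] * n_grid
    dx = length / n_grid
    for x, q in zip(positions, charges):
        s = x / dx
        i = int(s) % n_grid
        frac = s - int(s)
        grid[i] += q * (1.0 - frac)          # share to the left node
        grid[(i + 1) % n_grid] += q * frac   # share to the right node
    return grid

# two unit charges: one on a node, one exactly between nodes 2 and 3
print(deposit_charges([0.0, 2.5], [1.0, 1.0], n_grid=4, length=4.0))
# -> [1.0, 0.0, 0.5, 0.5]
```

The two `+=` updates are the scatter-add that causes write conflicts when particles are processed concurrently; the parallel variants analyzed in the paper differ mainly in how they synchronize or replicate the grid to handle exactly this.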
Directory of Open Access Journals (Sweden)
Kurt James Werner
2016-10-01
Full Text Available The magnitude of the Discrete Fourier Transform (DFT of a discrete-time signal has a limited frequency definition. Quadratic interpolation over the three DFT samples surrounding magnitude peaks improves the estimation of parameters (frequency and amplitude of resolved sinusoids beyond that limit. Interpolating on a rescaled magnitude spectrum using a logarithmic scale has been shown to improve those estimates. In this article, we show how to heuristically tune a power scaling parameter to outperform linear and logarithmic scaling at an equivalent computational cost. Although this power scaling factor is computed heuristically rather than analytically, it is shown to depend in a structured way on window parameters. Invariance properties of this family of estimators are studied and the existence of a bias due to noise is shown. Comparing to two state-of-the-art estimators, we show that an optimized power scaling has a lower systematic bias and lower mean-squared-error in noisy conditions for ten out of twelve common windowing functions.
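The underlying estimator fits a parabola through the peak bin and its two neighbours. A minimal sketch follows; the article's contribution is the choice of scaling applied to the magnitudes before this step (a tuned power |X|^p rather than the linear scale assumed here).

```python
def quadratic_peak(alpha, beta, gamma):
    """Parabolic interpolation through three samples around a magnitude peak.

    alpha, beta, gamma: (scaled) magnitudes at bins k-1, k, k+1, with beta
    the local maximum. Returns (peak offset p in bins, peak height).
    """
    p = 0.5 * (alpha - gamma) / (alpha - 2.0 * beta + gamma)
    height = beta - 0.25 * (alpha - gamma) * p
    return p, height

# samples of an exact parabola peaking at bin offset +0.3 with height 1.0
parab = lambda x: 1.0 - (x - 0.3) ** 2
p, h = quadratic_peak(parab(-1.0), parab(0.0), parab(1.0))
print(round(p, 6), round(h, 6))   # -> 0.3 1.0 (exact for a true parabola)
```

For a sinusoid's DFT magnitude the three samples are not an exact parabola, which is why the pre-interpolation scaling (log, or the power scaling studied here) matters for bias.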
Interpolation by Hankel translates of a basis function: inversion formulas and polynomial bounds.
Arteaga, Cristian; Marrero, Isabel
2014-01-01
For μ ≥ -1/2, the authors have developed elsewhere a scheme for interpolation by Hankel translates of a basis function Φ in certain spaces of continuous functions Yn (n ∈ ℕ) depending on a weight w. The functions Φ and w are connected through the distributional identity t^(4n)(h'_μΦ)(t) = 1/w(t), where h'_μ denotes the generalized Hankel transform of order μ. In this paper, we use the projection operators associated with an appropriate direct sum decomposition of the Zemanian space ℋ_μ in order to derive explicit representations of the derivatives S^m_μΦ and their Hankel transforms, the former ones being valid when m ∈ ℤ+ is restricted to a suitable interval for which S^m_μΦ is continuous. Here, S^m_μ denotes the mth iterate of the Bessel differential operator S_μ if m ∈ ℕ, while S^0_μ is the identity operator. These formulas, which can be regarded as inverses of generalizations of the equation (h'_μΦ)(t) = 1/(t^(4n)w(t)), will allow us to get some polynomial bounds for such derivatives. Corresponding results are obtained for the members of the interpolation space Yn.
Seismic data interpolation and denoising by learning a tensor tight frame
Liu, Lina; Plonka, Gerlind; Ma, Jianwei
2017-10-01
Seismic data interpolation and denoising plays a key role in seismic data processing. These problems can be understood as sparse inverse problems, where the desired data are assumed to be sparsely representable within a suitable dictionary. In this paper, we present a new method based on a data-driven tight frame (DDTF) of Kronecker type (KronTF) that avoids the vectorization step and considers the multidimensional structure of data in a tensor-product way. It takes advantage of the structure contained in all different modes (dimensions) simultaneously. In order to overcome the limitations of a usual tensor-product approach we also incorporate data-driven directionality. The complete method is formulated as a sparsity-promoting minimization problem. It includes two main steps. In the first step, a hard thresholding algorithm is used to update the frame coefficients of the data in the dictionary; in the second step, an iterative alternating method is used to update the tight frame (dictionary) in each different mode. The dictionary that is learned in this way contains the principal components in each mode. Furthermore, we apply the proposed KronTF to seismic interpolation and denoising. Examples with synthetic and real seismic data show that the proposed method achieves better results than the traditional projection onto convex sets method based on the Fourier transform and the previous vectorized DDTF methods. In particular, the simple structure of the new frame construction makes it essentially more efficient.
Ferragina, V.; Frassone, A.; Ghittori, N.; Malcovati, P.; Vigna, A.
2005-06-01
The behavioral analysis and the design in a 0.13 μm CMOS technology of a digital interpolator filter for wireless applications are presented. The proposed block is designed to be embedded in the baseband part of a reconfigurable transmitter (WLAN 802.11a, UMTS) to operate as a sampling frequency boost between the digital signal processor (DSP) and the digital-to-analog converter (DAC). In recent designs the DAC of such transmitters usually operates at high conversion frequencies (to allow a relaxed implementation of the following analog reconstruction filter), while the DSP output flows at low frequencies (typically the Nyquist rate). Thus a block able to increase the digital data rate, like the one proposed, is needed before the DAC. For example, in the WLAN case, an interpolation factor of 4 has been used, allowing the digital data frequency to rise from 20 MHz to 80 MHz. Using a time-domain model of the TX chain, a behavioral analysis has been performed to determine the impact of the filter performance on the quality of the signal at the antenna. This study has led to the evaluation of the z-domain filter transfer function, together with the specifications concerning a finite-precision implementation. A VHDL description has allowed an automatic synthesis of the circuit in a 0.13 μm CMOS technology (with a supply voltage of 1.2 V). Post-synthesis simulations have confirmed the effectiveness of the proposed study.
Segmental interpolating spectra for solar particle events and in situ validation
Hu, S.; Zeitlin, C.; Atwell, W.; Fry, D.; Barzilla, J. E.; Semones, E.
2016-10-01
It is a delicate task to accurately assess the impact of solar particle events (SPEs) on future long-duration human exploration missions. In the past, researchers have used several functional forms to fit satellite data for radiation exposure estimation. In this work we present a segmental power law interpolating algorithm to stream satellite data and get time series of proton spectra, which can be used to derive dosimetric quantities for any short period during which a single SPE or multiple SPEs occur. Directly using the corrected High Energy Proton and Alpha Detector fluxes of GOES, this method interpolates the intensity spectrum of a typical SPE to hundreds of MeV and extrapolates to the GeV level as long as sufficient particles are recorded in the high-energy sensors. The high-energy branch of the May 2012 SPE is consistent with the Band functional fitting, which is calibrated with ground level measurement. Modeling simulations indicate that the input spectrum of an SPE beyond 100 MeV is the major contributor for dose estimation behind the normal shielding thickness of spacecraft. Applying this method to the three SPEs that occurred in 2012 generates results consistent with two sets of in situ measurements, demonstrating that this approach could be a way to perform real-time dose estimation. This work also indicates that the galactic cosmic ray dose rate is important for accurately modeling the temporal profile of radiation exposure during an SPE.
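Between two adjacent energy channels, a segmental power-law interpolation amounts to fitting f(E) = A·E^(−k) through the two channel values, i.e. linear interpolation in log-log space. The channel energies and fluxes below are invented, not GOES values.

```python
import math

def powerlaw_interp(e, e1, f1, e2, f2):
    """Flux at energy e from two (energy, flux) channel values, assuming a
    local power law f(E) = A * E**(-k) between the channels.
    """
    k = math.log(f1 / f2) / math.log(e2 / e1)   # local spectral index
    a = f1 * e1 ** k
    return a * e ** (-k)

# hypothetical channels: flux 100 at 10 MeV falling to 1 at 100 MeV (k = 2)
print(powerlaw_interp(30.0, 10.0, 100.0, 100.0, 1.0))   # ~11.11
```

The segmental scheme applies this between each pair of adjacent channels; extrapolation toward GeV energies works the same way, using the two highest channels that still record sufficient particles.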
Voice Morphing Using 3D Waveform Interpolation Surfaces and Lossless Tube Area Functions
Directory of Open Access Journals (Sweden)
Lavner Yizhar
2005-01-01
Full Text Available Voice morphing is the process of producing intermediate or hybrid voices between the utterances of two speakers. It can also be defined as the process of gradually transforming the voice of one speaker to that of another. The ability to change the speaker's individual characteristics and to produce high-quality voices can be used in many applications. Examples include multimedia and video entertainment, as well as enrichment of speech databases in text-to-speech systems. In this study we present a new technique which enables production of a given number of intermediate voices or of utterances which gradually change from one voice to another. This technique is based on two components: (1) creation of a 3D prototype waveform interpolation (PWI) surface from the LPC residual signal, to produce an intermediate excitation signal; (2) representation of the vocal tract by a lossless tube area function, and interpolation of the parameters of the two speakers. The resulting synthesized signal sounds like a natural voice lying between the two original voices.
Interpolated conjunctival pedicle flaps for the treatment of exposed glaucoma drainage devices.
Godfrey, David G; Merritt, James H; Fellman, Ronald L; Starita, Richard J
2003-12-01
To describe an alternative method for repairing exposed glaucoma drainage devices (GDDs) when conventional attempts have failed. Four eyes, from 3 patients, with severe ocular surface disease were included in the study. All eyes had previously received a Baerveldt GDD for uncontrollable intraocular pressure and postoperatively had exposed GDDs. The conjunctival defects were not repairable with a scleral or pericardium patch, conjunctival advancement, or a conjunctival patch graft. Two eyes had chemical burns, one eye had extensive scarring from multiple surgical procedures, and one patient had rheumatoid arthritis. Each patient provided informed consent and was given the option of removing the GDD and undergoing diode cyclophotocoagulation, or of attempting to save the GDD with a conjunctival pedicle flap. An interpolated conjunctival pedicle flap was taken from the cul-de-sac (fornix). The conjunctiva and Tenon capsule were incised radially to the tube, rotated 90 degrees from the fornix, and sutured to the remaining healthy conjunctiva to cover the exposed tube. Postoperatively, all eyes had vascularized flaps that showed viable tissue. All eyes retained the GDDs, and the intraocular pressure has remained under control during follow-up (7, 13, 25, and 27 months). Interpolated conjunctival pedicle flaps seem to be a viable alternative for repairing exposed GDDs when other methods are impractical or impossible.
A Web-Based Tool to Interpolate Nitrogen Loading Using a Genetic Algorithm
Directory of Open Access Journals (Sweden)
Youn Shik Park
2014-09-01
Full Text Available Water quality data may not be collected at a high frequency, nor over the full range of streamflow data. For instance, water quality data are often collected monthly, biweekly, or weekly, since collecting and analyzing water quality samples is costly compared to streamflow data. Regression models are often used to interpolate pollutant loads from measurements made intermittently. The Web-based Load Interpolation Tool (LOADIN) was developed to provide user-friendly interfaces and to allow use of streamflow and water quality data from the U.S. Geological Survey (USGS) via web access. LOADIN has a regression model assuming that the instantaneous load is comprised of the pollutant load based on streamflow and the pollutant load variation within the period. The regression model has eight coefficients determined by a genetic algorithm with measured water quality data. LOADIN was applied to eleven water quality datasets from USGS gage stations located in Illinois, Indiana, Michigan, Minnesota, and Wisconsin, with drainage areas from 44 km2 to 1,847,170 km2. Measured loads were calculated by multiplying nitrogen data by the streamflow data associated with the measured nitrogen data. The estimated nitrogen loads and measured loads were evaluated using Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R2). NSE ranged from 0.45 to 0.91, and R2 ranged from 0.51 to 0.91 for nitrogen load estimation.
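LOADIN's regression uses eight coefficients calibrated by a genetic algorithm; as a deliberately simplified stand-in, a two-coefficient log-log rating curve shows the basic idea of estimating loads from sparse measurements plus continuous streamflow (all values below are synthetic).

```python
import math

def fit_rating_curve(flows, loads):
    """Least-squares fit of log(load) = a + b*log(flow).

    A two-coefficient stand-in for LOADIN's eight-coefficient regression
    (which is calibrated by a genetic algorithm); purely illustrative.
    """
    xs = [math.log(q) for q in flows]
    ys = [math.log(l) for l in loads]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def estimate_load(a, b, flow):
    return math.exp(a + b * math.log(flow))

# sparse measurements where load grows as 2 * flow**1.5 (synthetic values)
flows = [1.0, 4.0, 16.0]
loads = [2.0, 16.0, 128.0]
a, b = fit_rating_curve(flows, loads)
print(round(b, 3), round(estimate_load(a, b, 8.0), 3))   # -> 1.5 45.255
```

Fitted once from the intermittent samples, the curve can then be evaluated against the daily streamflow record to interpolate a continuous load series.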
Nguyen, Hoai-Nam
2014-01-01
A comprehensive development of interpolating control, this monograph demonstrates the reduced computational complexity of a ground-breaking technique compared with the established model predictive control. The text deals with the regulation problem for linear, time-invariant, discrete-time uncertain dynamical systems having polyhedral state and control constraints, with and without disturbances, and under state or output feedback. For output feedback, a non-minimal state-space representation is used, with old inputs and outputs as state variables. Constrained Control of Uncertain, Time-Varying, Discrete-time Systems details interpolating control in both its implicit and explicit forms. In the former, at most two linear-programming problems or one quadratic-programming problem are solved on-line at each sampling instant to yield the value of the control variable. In the latter, the control law is shown to be piecewise affine in the state, and so the state space is partitioned into polyhedral cells so that at each sampling ...
Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment
Energy Technology Data Exchange (ETDEWEB)
Bernauer, J.C. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Diefenbach, J. [Hampton University, Hampton, VA (United States); Elbakian, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Gavrilov, G. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); Goerrissen, N. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Hasell, D.K.; Henderson, B.S. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Holler, Y. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Karyan, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Ludwig, J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Marukyan, H. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Naryshkin, Y. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); O' Connor, C.; Russell, R.L.; Schmidt, A. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Schneekloth, U. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Suvorov, K.; Veretennikov, D. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation)
2016-07-01
The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
Lin, Chi-Kun; Wu, Yi-Hsien; Yang, Jar-Ferr; Liu, Bin-Da
2015-12-01
For super-resolution (4K × 2K) displays, super-resolution technologies, which can upsample videos to higher resolution and achieve better visual quality, are becoming increasingly important. In this paper, an iterative enhanced super-resolution (IESR) system based on two-pass edge-dominated interpolation, adaptive enhancement, and adaptive dithering techniques is proposed. The two-pass edge-dominated interpolation with a simple and regular kernel can sharpen visual quality, while the adaptive enhancement provides high-frequency perfection and the adaptive dithering conveys naturalization enhancement, such that the proposed IESR system achieves a better peak signal-to-noise ratio (PSNR) and exhibits better visual quality. Experimental results indicate that the proposed IESR system, which improves PSNR up to 28.748 dB and the structural similarity index measurement (SSIM) up to 0.917611 on average, is better than the other existing methods. Simulations also show that the proposed IESR system requires lower computational complexity than the methods that achieve similar visual quality.
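The PSNR figure quoted above is a standard fidelity measure; a minimal sketch of its computation (over flat pixel lists, with invented values, for an 8-bit peak of 255):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)
```

Higher PSNR means the upsampled frame is closer to the reference; a uniform error of 10 gray levels, for example, lands near the 28 dB range reported in the abstract.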
Roy, Subrata P.
2014-01-28
The method of moments with interpolative closure (MOMIC) for soot formation and growth provides a detailed modeling framework maintaining a good balance in generality, accuracy, robustness, and computational efficiency. This study presents several computational issues in the development and implementation of the MOMIC-based soot modeling for direct numerical simulations (DNS). The issues of concern include a wide dynamic range of numbers, choice of normalization, high effective Schmidt number of soot particles, and realizability of the soot particle size distribution function (PSDF). These problems are not unique to DNS, but they are often exacerbated by the high-order numerical schemes used in DNS. Four specific issues are discussed in this article: the treatment of soot diffusion, choice of interpolation scheme for MOMIC, an approach to deal with strongly oxidizing environments, and realizability of the PSDF. General, robust, and stable approaches are sought to address these issues, minimizing the use of ad hoc treatments such as clipping. The solutions proposed and demonstrated here are being applied to generate new physical insight into complex turbulence-chemistry-soot-radiation interactions in turbulent reacting flows using DNS. © 2014 Copyright Taylor and Francis Group, LLC.
Batch orographic interpolation of monthly precipitation based on free-of-charge geostatistical tools
Ledvinka, Ondrej
2017-11-01
The effects of possible climate change on water resources in prescribed areas (e.g. river basins) are intensively studied in hydrology. These resources are highly dependent on precipitation totals. When focusing on long-term changes in climate variables, one has to rely on station measurements. However, hydrologists need information on the spatial distribution of precipitation over the areas. For this purpose, spatial interpolation techniques must be employed. In Czechia, where the addition of elevation co-variables proved to be a good choice, several GIS tools exist that are able to produce the time series necessary for climate change analyses. Nevertheless, these tools are exclusively based on commercial software, and there is a lack of free-of-charge tools that could be used by everyone. Here, selected free-of-charge geostatistical tools were utilized to produce monthly precipitation time series representing six river basins in the Ore Mountains located in NW Bohemia, Czechia and SE Saxony, Germany. The produced series span from January 1961 to December 2012. Rain-gauge data from both Czechia and Germany were used. The universal kriging technique was employed, where the residuals of a multiple linear regression (based on elevation and coordinates) were interpolated. The final time series seem to be homogeneous.
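The regression-plus-residual-interpolation idea above can be sketched in a few lines. The sketch fits a linear trend of precipitation on elevation and interpolates the residuals; IDW stands in here for the kriging of residuals, and all station coordinates, elevations, and totals are invented for illustration:

```python
def fit_linear(elev, precip):
    """Least-squares fit precip = a + b * elev; returns (a, b)."""
    n = len(elev)
    me, mp = sum(elev) / n, sum(precip) / n
    b = sum((e - me) * (p - mp) for e, p in zip(elev, precip)) \
        / sum((e - me) ** 2 for e in elev)
    return mp - b * me, b

def idw(points, values, q, power=2.0):
    """Inverse-distance weighting of station values at query point q."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - q[0]) ** 2 + (py - q[1]) ** 2
        if d2 == 0.0:
            return v
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

def predict(points, elev, precip, q, q_elev):
    """Elevation trend at the query point plus interpolated residual."""
    a, b = fit_linear(elev, precip)
    residuals = [p - (a + b * e) for p, e in zip(precip, elev)]
    return a + b * q_elev + idw(points, residuals, q)

# Invented stations: coordinates, elevations (m), monthly totals (mm).
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
elev = [100.0, 200.0, 300.0, 400.0]
precip = [555.0, 595.0, 655.0, 695.0]
```

Universal kriging handles the trend and the residual covariance jointly; the two-step sketch only conveys why elevation co-variables improve the interpolated field.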
Directory of Open Access Journals (Sweden)
Xihua Yang
2015-01-01
Full Text Available This paper presents spatial interpolation techniques to produce finer-scale daily rainfall data from regional climate modeling. Four common interpolation techniques (ANUDEM, Spline, IDW, and Kriging) were compared and assessed against station rainfall data and modeled rainfall. The performance was assessed by the mean absolute error (MAE), mean relative error (MRE), root mean squared error (RMSE), and the spatial and temporal distributions. The results indicate that the Inverse Distance Weighting (IDW) method is slightly better than the other three methods and is also easy to implement in a geographic information system (GIS). The IDW method was then used to produce forty-year (1990–2009 and 2040–2059) time series rainfall data at daily, monthly, and annual time scales at a ground resolution of 100 m for the Greater Sydney Region (GSR). The downscaled daily rainfall data have been further utilized to predict rainfall erosivity and soil erosion risk and their future changes in the GSR to support assessments and planning of climate change impact and adaptation at the local scale.
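The three scalar error metrics used in that comparison are easy to state precisely; a minimal sketch (the observed/estimated rainfall values below are invented):

```python
def error_metrics(obs, est):
    """MAE, MRE, and RMSE used to score an interpolation method
    against station observations."""
    n = len(obs)
    mae = sum(abs(o - e) for o, e in zip(obs, est)) / n
    mre = sum(abs(o - e) / o for o, e in zip(obs, est)) / n
    rmse = (sum((o - e) ** 2 for o, e in zip(obs, est)) / n) ** 0.5
    return mae, mre, rmse
```

Comparing the methods then amounts to computing these three numbers for each technique over the same held-out stations and picking the smallest.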
Joint seismic data denoising and interpolation with double-sparsity dictionary learning
Zhu, Lingchen; Liu, Entao; McClellan, James H.
2017-08-01
Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
Late-Pleistocene precipitation δ18O interpolated across the global landmass
Jasechko, Scott
2016-08-01
Global water cycles, ecosystem assemblages, and weathering rates were impacted by the ~4°C of global warming that took place over the course of the last glacial termination. Fossil groundwaters can be useful indicators of late-Pleistocene precipitation isotope compositions, which, in turn, can help to test hypotheses about the drivers and impacts of glacial-interglacial climate changes. Here, a global catalog of 126 fossil groundwater records is used to interpolate late-Pleistocene precipitation δ18O across the global landmass. The interpolated data show that extratropical late-Pleistocene terrestrial precipitation was near uniformly depleted in 18O relative to the late Holocene. By contrast, tropical δ18O responses to deglacial warming diverged; late-Pleistocene δ18O was higher-than-modern across India and South China but lower-than-modern throughout much of northern and southern Africa. Groundwaters that recharged beneath large northern hemisphere ice sheets have different Holocene-Pleistocene δ18O relationships than paleowaters that recharged subaerially, potentially aiding reconstructions of englacial transport in paleo ice sheets. Global terrestrial late-Pleistocene precipitation δ18O maps may help to determine 3-D groundwater age distributions, constrain Pleistocene mammal movements, and better understand glacial climate dynamics.
Spatial and temporal interpolation of DInSAR data at different scales
Tessitore, Serena; Fiaschi, Simone; Achilli, Vladimiro; Ahmed, Ahmed; Calcaterra, Domenico; Di Martire, Diego; Guardiola-Albert, Carolina; Meisina, Claudia; Ramondini, Massimo; Floris, Mario
2015-04-01
The present study concerns the application of multi-pass DInSAR algorithms to ground-displacement monitoring at small and large scales. An integration of in situ and DInSAR data for the elaboration of 2D deformation maps is proposed. A geo-statistical method for "radar-gauge combination" called Ordinary Kriging of Radar Errors (OKRE) has been used. This algorithm uses the punctual values of a primary variable, represented by measurements of true deformations, whereas the radar data serve as auxiliary information on the spatial distribution (Erdin, 2013). According to this method, it is possible to obtain the interpolated map of deformations by subtracting a radar error map from the original interpolated radar map. In particular, the radar error map is produced by interpolating the differences between radar and in situ data with the OK interpolator. To this aim, the available standard spirit levelling and GPS data have been used in the present work. Moreover, DInSAR data obtained through two different approaches have been taken into account for the spatial analysis and the error map computation at different scales. Specifically, the Persistent Scatterer Technique (PS-InSAR) and the Small BAseline Subset approach (SBAS) have been used to process the ENVISAT SAR images acquired in the period 2002-2010. In the SBAS processing chain, it is possible to activate the Disconnected Blocks tool and perform a "temporal interpolation" of the SAR data. Since the estimation of the results in the processing takes into account the coherence threshold on the input image stack and their connection criteria, only the pixels above the threshold that are fully connected in all the images are solved. By activating the Disconnected Blocks tool, the results are estimated also for those pixels that respect the threshold criteria in at least 60% of the images, even in a not fully connected stack. In this way, the spatial coverage is higher, but the reliability of the results has to
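The OKRE correction step (interpolate the radar-minus-gauge errors, then subtract that error field from the radar map) can be sketched pointwise. IDW stands in below for the ordinary kriging of the errors, and the gauge positions and values are invented for illustration:

```python
def idw_error(points, errors, q):
    """Interpolate radar-minus-gauge errors at query point q.
    IDW is used here as a simple stand-in for ordinary kriging."""
    num = den = 0.0
    for (px, py), e in zip(points, errors):
        d2 = (px - q[0]) ** 2 + (py - q[1]) ** 2
        if d2 == 0.0:
            return e
        num += e / d2
        den += 1.0 / d2
    return num / den

def okre_correct(radar_at_q, points, gauge_vals, radar_at_gauges, q):
    """Corrected deformation = radar value minus interpolated radar error."""
    errors = [r - g for r, g in zip(radar_at_gauges, gauge_vals)]
    return radar_at_q - idw_error(points, errors, q)
```

Applied over a grid of query points, this yields the corrected interpolated deformation map described above.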
Hybrid digital signal processing and neural networks applications in PWRs
Energy Technology Data Exchange (ETDEWEB)
Eryurek, E.; Upadhyaya, B.R.; Kavaklioglu, K.
1991-12-31
Signal validation and plant subsystem tracking in power and process industries require the prediction of one or more state variables. Both heteroassociative and autoassociative neural networks were applied for characterizing relationships among sets of signals. A multi-layer neural network paradigm was applied for sensor and process monitoring in a Pressurized Water Reactor (PWR). This nonlinear interpolation technique was found to be very effective for these applications.
Liu, Yilong; Fischer, Achim; Eberhard, Peter; Wu, Baohai
2015-06-01
A high-order full-discretization method (FDM) using Hermite interpolation (HFDM) is proposed and implemented for periodic systems with time delay. Both Lagrange interpolation and Hermite interpolation are used to approximate state values and delayed state values in each discretization step. The transition matrix over a single period is determined and used for stability analysis. The proposed method increases the approximation order of the semidiscretization method and the FDM without increasing the computational time. The convergence, precision, and efficiency of the proposed method are investigated using several Mathieu equations and a complex turning model as examples. Comparison shows that the proposed HFDM converges faster and uses less computational time than existing methods.
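The Hermite interpolation that the HFDM uses to approximate state values within a discretization step can be illustrated with the standard cubic Hermite basis on one interval (a generic textbook form, not the paper's turning model):

```python
def hermite(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] from the function values
    (f0, f1) and derivatives (d0, d1) at the two endpoints."""
    h = x1 - x0
    t = (x - x0) / h
    # Standard cubic Hermite basis polynomials in t.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1
```

Because the derivative information is reused rather than recomputed, raising the approximation order this way costs essentially nothing extra per step, which is the efficiency argument made in the abstract.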
Park, Jae Woo; Rhee, Young Min
2014-10-20
Understanding photochemical processes often requires accurate descriptions of the nonadiabatic events involved. The cost of accurate quantum chemical simulations of the nonadiabatic dynamics of complex systems is typically high. Here, we discuss the use of interpolated quasi-diabatic potential-energy matrices, which aims to reduce the computational cost with minimal sacrifices in accuracy. It is shown that interpolation reproduces the reference ab initio information satisfactorily for a sizeable chromophore in terms of its adiabatic energies and derivative coupling vectors. Actual nonadiabatic simulation results of the chromophore in the gas phase and in aqueous solution are presented, and it is demonstrated that the interpolated quasi-diabatic Hamiltonian can be applied to studying nonadiabatic events of a complex system in an ensemble manner at a much-reduced cost. Limitations, and how they can be overcome in future studies, are also discussed. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Energy Technology Data Exchange (ETDEWEB)
Penteado, Miguel Suarez Xavier [Pos-Graduacao em Agronomia - Energia na Agricultura, FCA UNESP - Botucatu, SP (Brazil), Dept. de Recursos Naturais], e-mail: miguel_penteado@fca.unesp.br; Escobedo, Joao Francisco [Dept. de Recursos Naturais, FCA/UNESP, Botucatu, SP (Brazil)], e-mail: escobedo@fca.unesp.br; Dal Pai, Alexandre [Faculdade de Tecnologia de Botucatu - FATEC, Botucatu, SP (Brazil)], e-mail: adalpai@fatecbt.edu.br
2011-07-01
This work explores the suitability of the Lagrange interpolating polynomial as a tool to estimate and correct solar databases. From the known irradiance distribution over a day, a portion was removed and then reconstructed by applying the Lagrange interpolating polynomial. After the estimates were generated by interpolation, the assessment was made with the MBE and RMS statistical indicators. The application of Lagrange interpolation produced an underestimation of 0.27% (MBE = -1.83 W/m²) and a scattering of 0.51% (RMS = 3.48 W/m²). (author)
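Both pieces of that procedure (evaluating the Lagrange polynomial through the surviving samples, and scoring the reconstruction with MBE) are compact; a minimal sketch with invented sample points:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def mbe(obs, est):
    """Mean bias error; negative values indicate underestimation,
    matching the sign convention in the abstract."""
    return sum(e - o for o, e in zip(obs, est)) / len(obs)
```

The polynomial passes exactly through every retained irradiance sample, so only the removed portion contributes to the MBE and RMS reported above.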
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater-level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation, and universal Kriging interpolation) were used to interpolate the groundwater level between 2001 and 2013. Cross-validation, the absolute error, and the coefficient of determination (R2) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation; the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial, and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
Validation of Binary, Fractional and Interpolated Snow Maps at Multiple Resolutions
Rittger, K.; McKenzie, C.; Painter, T.; Dozier, J.
2008-12-01
Mapping snow cover from multispectral sensors began with a simple normalized index using visible and near-infrared wavelengths to classify pixels as either snow covered or snow free, a "binary" classification. Using a canopy reflectance model and incorporating a vegetation index improved the binary algorithm. Although binary snow mapping methods are computationally simple, they are in practice flawed because sensors with fine spatial resolution usually have coarse temporal resolution, and vice versa. For sensors with fine enough temporal resolution to track the dynamic seasonal snow environment, few pixels are either completely snow covered or completely snow free. Fractional methods instead estimate the fraction of each pixel covered with snow; they include decision tree classifiers, relationships of snow cover to a snow index developed using regressions with finer-resolution data, and spectral un-mixing. Finally, daily data can be interpolated to produce a best estimate of snow cover. Here, we compare snow cover retrievals from binary and fractional snow cover algorithms using various satellites at fine and moderate resolution: AVHRR (1 km), MODIS (500 m), Landsat (30 m), ASTER (15 m), AVIRIS (2 m), and 1 m data from degraded classified imagery. For binary snow cover we use both the NDSI and the NDSI with vegetation correction. For fractional snow cover we use a currently implemented operational algorithm, MOD10A1, and our own estimates from MODSCAG spectral un-mixing. For smoothed estimates of snow cover we use another operational algorithm, MOD10A2, and our own reanalysis of MODSCAG fractional snow cover. The main study area is the Sierra Nevada of California, along with scenes in the Upper Rio Grande, Colorado Rocky Mountains, and the Annapurna and Khumbal Himal. We find that fractional methods are superior to binary methods. Moreover, we find that linear spectral un-mixing gives the best estimates of snow cover at moderate resolution over
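The "simple normalized index" behind binary snow mapping is the NDSI, computed per pixel from visible (green) and shortwave-infrared reflectance. A minimal sketch; the 0.4 threshold below is the value commonly used for MODIS-style products, though operational algorithms adjust it:

```python
def ndsi(green, swir):
    """Normalized-difference snow index from green and shortwave-IR reflectance.
    Snow is bright in the visible but dark in the SWIR, so snow pixels
    have high NDSI."""
    return (green - swir) / (green + swir)

def binary_snow(green, swir, threshold=0.4):
    """Binary classification: a pixel is 'snow' if its NDSI exceeds the
    threshold (0.4 is a commonly used default)."""
    return ndsi(green, swir) > threshold
```

Fractional methods such as MODSCAG replace this hard threshold with a per-pixel estimate of snow fraction, which is why they outperform the binary approach for mixed pixels.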
A high-resolution time interpolator based on a delay locked loop and an RC delay line
Mota, M
1999-01-01
An architecture for a time interpolation circuit with an rms error of ~25 ps has been developed in a 0.7-μm CMOS technology. It is based on a delay locked loop (DLL) driven by a 160-MHz reference clock and a passive RC delay line controlled by an autocalibration circuit. Start-up calibration of the RC delay line is performed using code density tests (CDT). The very small temperature/voltage dependence of the R and C parameters and the self-calibrating DLL result in a low-power, high-resolution time interpolation circuit in a standard digital CMOS technology. (11 refs).
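The start-up code-density test works on a simple principle: feed the interpolator events uniformly distributed in time, and each output code's hit count is proportional to the actual width of its bin. A minimal sketch of the bin-width estimation (the code counts and full-scale range below are invented):

```python
def code_density_widths(codes, n_bins, full_range):
    """Estimate each interpolator bin's width from a code-density test:
    with uniformly distributed input events, a bin's share of the hits
    equals its share of the full-scale range."""
    counts = [0] * n_bins
    for c in codes:
        counts[c] += 1
    total = len(codes)
    return [full_range * cnt / total for cnt in counts]
```

The calibration circuit then uses these estimated widths to correct the nonlinearity of the RC delay line.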
Directory of Open Access Journals (Sweden)
Shulun Liu
2018-01-01
Full Text Available Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. Unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and Bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall, while NN led to the worst result. In terms of the impact on hydrological prediction, the IDW led to the most consistent streamflow predictions with the observations, according to the validation at five streamflow-gauged locations
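The gauge-screening step above reduces to a correlation filter: compute the Pearson correlation between each gauge's daily accumulations and the gridded reference, and drop gauges below a cutoff. A minimal sketch (the 0.8 cutoff is an illustrative choice, not the study's):

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def keep_gauge(gauge_daily, reference_daily, r_min=0.8):
    """Retain a gauge only if it tracks the gridded reference well;
    r_min = 0.8 is an invented illustrative cutoff."""
    return pearson(gauge_daily, reference_daily) >= r_min
```

Gauges failing the cutoff are excluded before the interpolation step, which is what drove the NSE and Bias improvements reported above.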