Stein, A.
1991-01-01
The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are
Directory of Open Access Journals (Sweden)
Mauricio Castro Franco
2017-07-01
Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and the effects of soil, land use and management. While interpolation techniques are being adapted to include auxiliary information on these effects, soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using conditioned Latin hypercube sampling (HCLc) as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of terrain indices calculated from a digital elevation model (DEM). The random forest algorithm was used to select the most important terrain index for each soil property. Cross-validation error metrics were used to evaluate the interpolations. Results: The results support the underlying assumption that HCLc adequately captured the full distribution of the ancillary variables under Colombian piedmont eastern plains conditions. They also suggest that Ckg and REML-EBLUP performed best in predicting most of the evaluated soil properties. Conclusions: Mixed interpolation techniques that use auxiliary soil information and terrain indices provided a significant improvement in the prediction of soil properties in comparison with the other techniques.
DEFF Research Database (Denmark)
Shekarchi, Sayedali; Christensen-Dalsgaard, Jakob; Hallam, John
2015-01-01
A head-related transfer function (HRTF) model employing Legendre polynomials (LPs) is evaluated as an HRTF spatial complexity indicator and interpolation technique in the azimuth plane. LPs are a set of orthogonal functions derived on the sphere which can be used to compress an HRTF dataset...
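A minimal sketch of the general idea of compressing and interpolating an azimuth-plane response with a truncated Legendre series, using NumPy's Legendre routines on synthetic magnitude values. The linear azimuth-to-[-1, 1] mapping, the series order and all numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Synthetic example: magnitude of an HRTF (one frequency bin) sampled at coarse azimuths.
az_deg = np.arange(0, 360, 15)
x = (az_deg - 180.0) / 180.0                 # map azimuth linearly onto [-1, 1)
hrtf_mag = 1.0 + 0.5 * np.cos(np.radians(az_deg)) + 0.1 * np.cos(3 * np.radians(az_deg))

# Fit a truncated Legendre series; the number of coefficients needed for a given
# reconstruction error can serve as a spatial-complexity indicator.
order = 6
coeffs = np.polynomial.legendre.legfit(x, hrtf_mag, order)

# Interpolate to a finer azimuth grid by evaluating the series.
az_fine = np.arange(0, 360, 1)
hrtf_interp = np.polynomial.legendre.legval((az_fine - 180.0) / 180.0, coeffs)

rmse = np.sqrt(np.mean((np.polynomial.legendre.legval(x, coeffs) - hrtf_mag) ** 2))
print(f"fit RMSE with {order + 1} coefficients: {rmse:.4f}")
```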
A disposition of interpolation techniques
Knotters, M.; Heuvelink, G.B.M.
2010-01-01
A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated
COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES
Directory of Open Access Journals (Sweden)
G. Garnero
2014-01-01
In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LiDAR, together with test sections acquired at higher resolution and independent digital models of the same territory, makes it possible to set up a series of analyses and determine the best interpolation methodologies. The analysis of the residuals on the test sites allows the descriptive statistics of the computed values to be calculated: all the algorithms furnished interesting results and, notably for dense models, the IDW (Inverse Distance Weighting) algorithm gave the best results in this case study. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may affect the quality of the final output of the interpolation phase.
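For reference, the IDW estimator that performed best here is simple to implement; below is a minimal NumPy sketch with made-up survey points and an illustrative power parameter (the study itself relied on GIS implementations).

```python
import numpy as np

def idw(xy_obs, z_obs, xy_pred, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: weights proportional to 1 / distance**power.

    xy_obs  : (n, 2) coordinates of observations
    z_obs   : (n,)   observed values
    xy_pred : (m, 2) prediction locations
    """
    d = np.linalg.norm(xy_pred[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # eps avoids division by zero at data points
    w /= w.sum(axis=1, keepdims=True)     # normalise weights per prediction point
    return w @ z_obs

# Toy example: elevations at five survey points, predicted on a small grid.
obs_xy = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
obs_z = np.array([100.0, 105.0, 102.0, 110.0, 104.0])
gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 10, 5))
pred = idw(obs_xy, obs_z, np.column_stack([gx.ravel(), gy.ravel()]))
print(pred.reshape(gx.shape).round(1))
```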
Air Quality Assessment Using Interpolation Technique
Directory of Open Access Journals (Sweden)
Awkash Kumar
2016-07-01
Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies and reduce pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to interpolate air quality concentration data from three locations in Mumbai for the year 2008. The spatial and temporal variations in air quality levels for the Mumbai region were classified. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentrations always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal SPM levels were low in the monsoon due to rainfall. The findings of this study will help formulate control strategies for rational management of air pollution and can be applied to many other regions.
Research progress and hotspot analysis of spatial interpolation
Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li
2018-02-01
In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country, co-category, co-citation and keyword co-occurrence networks. It is found that spatial interpolation has experienced three stages: slow development, steady development and rapid development. The 11 clustering groups interact with one another and converge mainly on spatial interpolation theory, practical applications and case studies of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research system framework, is strongly interdisciplinary, and is widely used in various fields.
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the model grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately for wet days. This process reproduced precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulations. Multiple simulations suggested noticeable differences between the input alternatives generated by the three interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
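The two-step idea (occurrence first, then amount on wet days) can be sketched with scikit-learn on synthetic data. The predictors, the log-linear amount model and the 0.5 threshold below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)

# Synthetic station data: predictors could be coordinates, elevation, etc.
X = rng.normal(size=(300, 3))
wet = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300)) > 0
amount = np.where(wet, np.exp(0.8 * X[:, 0] + rng.normal(scale=0.3, size=300)), 0.0)

# Step 1: model precipitation occurrence.
occ_model = LogisticRegression().fit(X, wet)

# Step 2: model precipitation amount on wet days only (log scale).
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))

# Estimation at new (ungauged) locations: occurrence first, then amount.
X_new = rng.normal(size=(5, 3))
p_wet = occ_model.predict_proba(X_new)[:, 1]
est = np.where(p_wet > 0.5, np.exp(amt_model.predict(X_new)), 0.0)
print(np.round(est, 2))
```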
Technique for image interpolation using polynomial transforms
Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.
1993-01-01
We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is
Spatial interpolation of point velocities in stream cross-section
Directory of Open Access Journals (Sweden)
Hasníková Eliška
2015-03-01
The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. Measurements with this instrument yield only a small number of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of interpolation models.
Directory of Open Access Journals (Sweden)
A. Jo
2018-04-01
The purpose of this study is to create a new dataset of spatially interpolated monthly climate data for South Korea at high spatial resolution (approximately 30 m) by performing various spatio-statistical interpolations and comparing the results with the forecast LDAPS gridded climate data provided by the Korea Meteorological Administration (KMA). Automatic Weather System (AWS) and Automated Synoptic Observing System (ASOS) data for 2017 obtained from KMA were included for the spatial mapping of temperature and rainfall: instantaneous temperature and 1-hour accumulated precipitation at 09:00 am on 31 March, 21 June, 23 September, and 24 December. Of the observation data, 80 percent of the points (478) were used for interpolation and the remaining 120 points for validation. With the training data and a digital elevation model (DEM) with 30 m resolution, inverse distance weighting (IDW), co-kriging, and kriging were performed using ArcGIS 10.3.1 software and Python 3.6.4. Bias and root mean square error were computed to compare prediction performance quantitatively. When statistical analysis was performed for each cluster using the 20% validation data, co-kriging was more suitable for spatialization of instantaneous temperature than the other interpolation methods. On the other hand, the IDW technique was appropriate for spatialization of precipitation.
Twitch interpolation technique in testing of maximal muscle strength
DEFF Research Database (Denmark)
Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B
1993-01-01
The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...
A comparison of spatial rainfall estimation techniques: A case study ...
African Journals Online (AJOL)
Two geostatistical interpolation techniques (kriging and cokriging) were evaluated against inverse distance weighted (IDW) and global polynomial interpolation (GPI). Of the four spatial interpolators, kriging and cokriging produced results with the least root mean square error (RMSE). A digital elevation model (DEM) was ...
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level). When spatial interpolation is used to fill in missing values in non-sampled areas, accuracy is improved remarkably. This applies especially to low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).
Reconstruction of reflectance data using an interpolation technique.
Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh
2009-03-01
A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent on interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as the source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light source, in comparison with those obtained from the standard PCA technique.
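The LUT idea can be sketched as a piecewise-linear map from a colorimetric source space to the spectral destination space using SciPy's LinearNDInterpolator, which supports vector-valued outputs; the data below are synthetic stand-ins for measured Munsell or ColorChecker samples.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)

# Synthetic training set: 200 samples with XYZ tristimulus values (source space)
# and 31-band reflectance spectra (destination space); real work would use
# measured chart data instead of random values.
xyz_train = rng.uniform(0.05, 0.95, size=(200, 3))
refl_train = rng.uniform(0.0, 1.0, size=(200, 31))

# The lookup table is a piecewise-linear map from XYZ to the spectral space.
lut = LinearNDInterpolator(xyz_train, refl_train)

# Reconstruct spectra for new samples; queries outside the convex hull of the
# training colors return NaN, mirroring the gamut restriction noted above.
xyz_query = rng.uniform(0.2, 0.8, size=(4, 3))
refl_est = lut(xyz_query)
print(refl_est.shape)          # (4, 31)
```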
Directory of Open Access Journals (Sweden)
C. Berndt
2018-02-01
New hydrological insights: Geostatistical techniques perform better than simple methods for all climate variables. Radar data improve the estimation of rainfall at hourly temporal resolution, while topography is useful for weekly to yearly values and for temperature in general. No helpful additional information was found for cloudiness, sunshine duration, and wind speed, while interpolation of humidity benefited from additional temperature data. The influences of temporal resolution, spatial variability, and additional information appear to be stronger than station density effects. The high spatial variability of hourly precipitation causes the highest error, followed by wind speed, cloud coverage and sunshine duration. The lowest errors occur for temperature and humidity.
Data mining techniques in sensor networks summarization, interpolation and surveillance
Appice, Annalisa; Fumarola, Fabio; Malerba, Donato
2013-01-01
Sensor networks comprise a number of sensors installed across a spatially distributed network, which gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. However, the limited storage capacity of a server constrains the amount of data that can be stored on disk. One solution is to compute summaries of the data as they arrive, and to use these summaries to interpolate the real data.
Ding, Qian; Wang, Yong; Zhuang, Dafang
2018-04-15
Appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis functions: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram models: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
THE EFFECT OF STIMULUS ANTICIPATION ON THE INTERPOLATED TWITCH TECHNIQUE
Directory of Open Access Journals (Sweden)
Duane C. Button
2008-12-01
The objective of this study was to investigate the effect of expected and unexpected interpolated stimuli (IT) during a maximal voluntary contraction on quadriceps force output and activation. Two groups of male subjects who were either inexperienced (MI: no prior experience with IT tests) or experienced (ME: had previously experienced 10 or more series of IT tests) received an expected or unexpected IT while performing quadriceps isometric maximal voluntary contractions (MVCs). Measurements included MVC force, quadriceps and hamstrings electromyographic (EMG) activity, and quadriceps inactivation as measured by the interpolated twitch technique (ITT). When performing MVCs with the expectation of an IT, knowledge or lack of knowledge of an impending IT during a contraction did not result in significant overall differences in force, ITT inactivation, or quadriceps or hamstrings EMG activity. However, the expectation of an IT significantly (p < 0.0001) reduced MVC force (9.5%) and quadriceps EMG activity (14.9%) compared with performing MVCs with prior knowledge that stimulation would not occur. While ME exhibited non-significant decreases when expecting an IT during an MVC, MI force and EMG activity significantly decreased by 12.4% and 20.9%, respectively. Overall, ME had significantly (p < 0.0001) higher force (14.5%) and less ITT inactivation (10.4%) than MI. The expectation of the noxious stimuli may account for the significant decrements in force and activation during the ITT.
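For reference, the interpolated twitch technique conventionally derives voluntary activation (and hence inactivation) from the ratio of the superimposed twitch to the resting twitch; a minimal sketch with illustrative numbers, not data from the study above:

```python
def voluntary_activation(superimposed_twitch, resting_twitch):
    """Classic twitch-interpolation estimate of voluntary activation (%).

    superimposed_twitch : extra force evoked by stimulation during the MVC
    resting_twitch      : twitch force evoked at rest (often potentiated)
    """
    return (1.0 - superimposed_twitch / resting_twitch) * 100.0

# Illustrative torque values (N*m), chosen only to show the calculation.
va = voluntary_activation(superimposed_twitch=4.0, resting_twitch=40.0)
print(f"voluntary activation: {va:.1f}%  (inactivation: {100 - va:.1f}%)")
```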
Directory of Open Access Journals (Sweden)
Annalisa Di Piazza
2015-04-01
An exhaustive comparison among different spatial interpolation algorithms was carried out in order to derive annual and monthly air temperature maps for Sicily (Italy). Deterministic, data-driven and geostatistical algorithms were used, in some cases adding elevation and other physiographic variables to improve the performance of the interpolation techniques and the reconstruction of the air temperature field. The dataset consists of air temperature data from 84 stations spread across the island of Sicily. The interpolation algorithms were optimized using a subset of the available dataset, while the remaining subset was used to validate the results in terms of the accuracy and bias of the estimates. Validation results indicate that univariate methods, which neglect the information from physiographic variables, entail the largest errors, while performance improves when such parameters are taken into account. The best results at the annual scale were obtained using ordinary kriging of residuals from linear regression and the artificial neural network algorithm, while, at the monthly scale, a Fourier-series algorithm was used to downscale mean annual temperature and reproduce monthly values in the annual cycle.
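The best-performing annual-scale approach, ordinary kriging of residuals from a linear regression on a physiographic variable, can be sketched as follows. The stations are synthetic, the lapse-rate trend is invented, and the pykrige dependency is an assumption for brevity; this is not the study's code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging   # assumes the pykrige package is available

rng = np.random.default_rng(2)

# Synthetic stations: coordinates (km), elevation (m) and air temperature (deg C).
x, y = rng.uniform(0, 100, 80), rng.uniform(0, 100, 80)
elev = rng.uniform(0, 1500, 80)
temp = 18.0 - 0.0065 * elev + rng.normal(scale=0.8, size=80)   # lapse rate + noise

# Step 1: linear regression of temperature on the physiographic variable.
reg = LinearRegression().fit(elev.reshape(-1, 1), temp)
resid = temp - reg.predict(elev.reshape(-1, 1))

# Step 2: ordinary kriging of the regression residuals.
ok = OrdinaryKriging(x, y, resid, variogram_model="exponential")

# Prediction at a new location = regression trend + kriged residual.
xq, yq, elev_q = np.array([50.0]), np.array([50.0]), np.array([[800.0]])
resid_q, _ = ok.execute("points", xq, yq)
print((reg.predict(elev_q) + resid_q)[0])
```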
Spatial and spectral interpolation of ground-motion intensity measure observations
Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David
2018-01-01
Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
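The conditional multivariate normal update at the heart of this framework is compact to write down; the sketch below uses NumPy with illustrative partitioned means and covariances standing in for values that would come from a ground-motion model and empirical correlation functions.

```python
import numpy as np

def conditional_mvn(mu1, mu2, cov11, cov12, cov22, observed):
    """Mean and covariance of x1 | x2 = observed for a joint multivariate normal.

    mu1, cov11 : prior mean/covariance of the unobserved IMs
    mu2, cov22 : prior mean/covariance of the observed IMs
    cov12      : cross-covariance between unobserved and observed IMs
    """
    cond_mean = mu1 + cov12 @ np.linalg.solve(cov22, observed - mu2)
    cond_cov = cov11 - cov12 @ np.linalg.solve(cov22, cov12.T)
    return cond_mean, cond_cov

# Toy example in log-IM space: 2 unobserved sites conditioned on 3 observations.
mu1, mu2 = np.zeros(2), np.zeros(3)
cov11 = np.array([[0.40, 0.10], [0.10, 0.40]])
cov12 = np.array([[0.20, 0.15, 0.05], [0.05, 0.15, 0.20]])
cov22 = 0.40 * np.eye(3) + 0.10 * (np.ones((3, 3)) - np.eye(3))
obs = np.array([0.3, -0.1, 0.2])

mean, cov = conditional_mvn(mu1, mu2, cov11, cov12, cov22, obs)
print(mean.round(3), np.sqrt(np.diag(cov)).round(3))
```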
Directory of Open Access Journals (Sweden)
Xihua Yang
2015-01-01
This paper presents spatial interpolation techniques to produce finer-scale daily rainfall data from regional climate modeling. Four common interpolation techniques (ANUDEM, Spline, IDW, and Kriging) were compared and assessed against station rainfall data and modeled rainfall. The performance was assessed by the mean absolute error (MAE), mean relative error (MRE), root mean squared error (RMSE), and the spatial and temporal distributions. The results indicate that the Inverse Distance Weighting (IDW) method is slightly better than the other three methods and is also easy to implement in a geographic information system (GIS). The IDW method was then used to produce forty-year (1990–2009 and 2040–2059) time series rainfall data at daily, monthly, and annual time scales at a ground resolution of 100 m for the Greater Sydney Region (GSR). The downscaled daily rainfall data were further utilized to predict rainfall erosivity and soil erosion risk and their future changes in the GSR, to support assessment and planning of climate change impact and adaptation at the local scale.
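The error metrics used in these comparisons are straightforward to compute; a small sketch with illustrative station values, not the study's data:

```python
import numpy as np

def error_metrics(observed, predicted):
    """MAE, MRE and RMSE, as used above to compare interpolation methods."""
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    err = predicted - observed
    mae = np.mean(np.abs(err))
    mre = np.mean(np.abs(err) / np.abs(observed))   # assumes no zero observations
    rmse = np.sqrt(np.mean(err ** 2))
    return mae, mre, rmse

obs = [12.0, 3.5, 8.2, 20.1]       # station rainfall (mm), illustrative values
pred = [11.2, 4.0, 7.5, 22.3]      # interpolated rainfall at the same stations
print("MAE=%.2f  MRE=%.2f  RMSE=%.2f" % error_metrics(obs, pred))
```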
Directory of Open Access Journals (Sweden)
Meiling Cheng
2017-11-01
Accurate assessment of spatial and temporal precipitation is crucial for simulating hydrological processes in basins, but is challenging due to insufficient rain gauges. Our study aims to analyze different precipitation interpolation schemes and their performance in runoff simulation during light and heavy rain periods. In particular, combinations of different interpolation estimates are explored and their performance in runoff simulation is discussed. The study was carried out in the Pengxi River basin of the Three Gorges Basin. Precipitation data from 16 rain gauges were interpolated using the Thiessen Polygon (TP), Inverse Distance Weighted (IDW), and Co-Kriging (CK) methods. Results showed that streamflow predictions employing CK inputs demonstrated the best performance over the whole process, in terms of the Nash–Sutcliffe Coefficient (NSE), the coefficient of determination (R2), and the Root Mean Square Error (RMSE) indices. The TP, IDW, and CK methods showed good performance in the heavy rain period but poor performance in the light rain period compared with the default method (the least sophisticated nearest neighbor technique in the Soil and Water Assessment Tool, SWAT). Furthermore, the correlation between the dynamic weight of one method and its performance during runoff simulation followed a parabolic function. The combination of CK and TP achieved a better performance in decreasing the largest and lowest absolute errors compared to any single method, but the IDW method outperformed all methods in terms of the median absolute error. However, it is clear from our findings that interpolation methods should be chosen depending on the amount of precipitation, the adaptability of the method, and the accuracy of the estimate in different rain periods.
Analysis of Spatial Interpolation in the Material-Point Method
DEFF Research Database (Denmark)
Andersen, Søren; Andersen, Lars
2010-01-01
are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...
Kashchuk, A; Sagidova, Nailia
1998-01-01
A study of the factors limiting the spatial resolution of the MCSC caused by the nonlinearity of the cathode-charge interpolation technique has been carried out using a special test arrangement that imitates the charge distribution on the cathode strips of a real MCSC and allows high-precision comparison of the coordinates determined by the charge interpolation technique with the known values. We considered an MCSC with a 0.6 mm gap between the anode and the cathode strip planes and with a strip pitch of 0.9 mm. Various charge interpolation algorithms have been tested. It was demonstrated that systematic errors in the coordinate measurements as low as 5 microns can be achieved, after applying some simple corrections, even with rather coarse sampling, when the coordinate is determined by only 2 or 3 adjacent strips. These results were obtained with readout electronics specially designed for fast operation of the MCSCs, with a signal peaking time of 20 ns. The equivalent noise charge is about 1600 e (r.m.s....
Directory of Open Access Journals (Sweden)
Huiqing Fang
2016-01-01
Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometrically nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline is used to calculate the nodal curvatures at the two ends. Then, the internal curvatures of the beam are interpolated with a second interpolation. At this point, C1 continuity is satisfied and nodal strain measures can be consistently derived from the nodal displacement and rotation parameters. An explicit expression for the nodal force without integration, as a function of the global parameters, is obtained by using the hybrid interpolation. Furthermore, the proposed beam element can be degenerated into a linear beam element under the condition of small deformation. The objectivity of the strain measures and patch tests are also discussed. Finally, four numerical examples are discussed to demonstrate the validity and effectiveness of the proposed beam element.
Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia
Hassan, Zulkarnain; Haidir, Ahmad; Saad, Farah Naemah Mohd; Ayob, Afizah; Rahim, Mustaqqim Abdul; Ghazaly, Zuhayr Md.
2018-03-01
The inconsistency in inter-seasonal rainfall due to climate change will cause a different pattern in rainfall characteristics and distribution. Peninsular Malaysia is no exception to this inconsistency, which results in extreme events such as floods and water scarcity. This study evaluates the seasonal patterns in rainfall indices such as the total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. Forty years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the formation of rainfall characteristics is significant during the Northeast monsoon (NEM), as compared to the Southwest monsoon (SWM). Also, there is high rainfall intensity and frequency related to extremes over the eastern coast of the Peninsula during the NEM season.
Estimating monthly temperature using point based interpolation techniques
Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi
2013-04-01
This paper discusses the use of point-based interpolation to estimate the temperature at locations without meteorological stations in Peninsular Malaysia, using data for the year 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for estimating the temperature for the rest of the months.
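A sketch of the two RBF variants compared above, using SciPy's Rbf interpolator on synthetic station temperatures; station locations and values are made up for illustration rather than taken from the 2010 observations.

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.default_rng(3)

# Synthetic monthly-mean temperatures at 30 stations (coordinates in degrees).
lon = rng.uniform(100.0, 104.5, 30)
lat = rng.uniform(1.5, 6.5, 30)
temp = 27.0 + 0.8 * np.sin(lon) + 0.5 * np.cos(lat) + rng.normal(scale=0.3, size=30)

# Two RBF models, matching the kernels compared in the study above.
rbf_tps = Rbf(lon, lat, temp, function="thin_plate")
rbf_mq = Rbf(lon, lat, temp, function="multiquadric")

# Predict at an unmonitored location and compare the two kernels.
print(float(rbf_tps(102.0, 3.0)), float(rbf_mq(102.0, 3.0)))
```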
Directory of Open Access Journals (Sweden)
Tao Chen
2017-05-01
The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR) and is compared with the inverse distance weighting (IDW) and multiple linear regression (MLR) interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, the hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of the rainfall interpolation methods on the model results. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics in the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and the highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
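One plausible reading of the PCRR scheme, sketched with scikit-learn: regress rainfall on the leading principal components of station covariates, then correct the trend with an inverse-distance interpolation of the station residuals. The covariates, the use of IDW for the residual step and all numbers are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# Synthetic gauges: coordinates plus correlated covariates (e.g. elevation, slope).
xy = rng.uniform(0, 50, size=(60, 2))
covars = np.column_stack([xy, xy.sum(axis=1), rng.normal(size=60)])
rain = 5.0 + 0.1 * covars[:, 2] + rng.normal(scale=1.0, size=60)

# Principal component regression: regress rainfall on the leading components.
pca = PCA(n_components=2).fit(covars)
reg = LinearRegression().fit(pca.transform(covars), rain)
resid = rain - reg.predict(pca.transform(covars))

def idw_residual(target_xy, power=2.0, eps=1e-9):
    # Residual correction via inverse-distance weighting of station residuals.
    d = np.linalg.norm(xy - target_xy, axis=1)
    w = 1.0 / (d + eps) ** power
    return np.sum(w * resid) / np.sum(w)

# Residual-corrected prediction at an ungauged point with known covariates.
target_xy = np.array([25.0, 25.0])
target_cov = np.array([[25.0, 25.0, 50.0, 0.0]])
trend = reg.predict(pca.transform(target_cov))[0]
print(trend + idw_residual(target_xy))
```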
Directory of Open Access Journals (Sweden)
Fang Huang
2016-06-01
In some digital Earth engineering applications, spatial interpolation algorithms are required to process and analyze large amounts of data. Due to its powerful computing capacity, heterogeneous computing has been used for data processing in many fields. In this study, we explore the design and implementation of a parallel universal kriging spatial interpolation algorithm using the OpenCL programming model on heterogeneous computing platforms for massive geospatial data processing. This study focuses primarily on transforming the hotspot of the serial algorithm, i.e., the universal kriging interpolation function, into the corresponding kernel function in OpenCL. We also employ parallelization and optimization techniques in our implementation to improve code performance. Finally, based on the results of experiments performed on two different high-performance heterogeneous platforms, i.e., an NVIDIA graphics processing unit system and an Intel Xeon Phi system (MIC), we show that the parallel universal kriging algorithm can achieve a speedup of up to 40× with a single computing device and up to 80× with multiple devices.
International Nuclear Information System (INIS)
Joseph, John; Sharif, Hatim O.; Sunil, Thankam; Alamgir, Hasanat
2013-01-01
The adverse health effects of high concentrations of ground-level ozone are well known, but estimating exposure is difficult due to the sparseness of urban monitoring networks. This sparseness discourages the reservation of a portion of the monitoring stations for validation of interpolation techniques precisely when the risk of overfitting is greatest. In this study, we test a variety of simple spatial interpolation techniques for 8-h ozone with thousands of randomly selected subsets of data from two urban areas with monitoring stations sufficiently numerous to allow for true validation. Results indicate that ordinary kriging with only the range parameter calibrated in an exponential variogram is generally the superior method and yields reliable confidence intervals. Sparse data sets may contain sufficient information for calibration of the range parameter even if the Moran I p-value is close to unity. An R script is made available to apply the methodology to other sparsely monitored constituents. Highlights: spatial interpolation methods were tested for thousands of sparse ozone data sets; a particular single-parameter ordinary kriging was found to be generally superior; a Moran I p-value in the training set is not helpful in selecting the method; the sum of the squares of the residuals is helpful in selecting the method; R script is available for application to other sites and constituents.
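A compact from-scratch sketch of ordinary kriging with an exponential variogram, echoing the single-parameter model found superior above; here the sill is fixed to the sample variance, the nugget is assumed zero, and the data are synthetic, so this is only an illustration of the mechanics, not the study's R implementation.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng_param=10.0):
    """Covariance implied by an exponential variogram (zero nugget assumed)."""
    return sill * np.exp(-h / rng_param)

def ordinary_kriging(xy_obs, z_obs, xy0, sill=1.0, rng_param=10.0):
    """Ordinary kriging prediction and variance at a single location xy0."""
    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=2)
    d0 = np.linalg.norm(xy_obs - xy0, axis=1)

    # Kriging system with a Lagrange multiplier enforcing unbiasedness.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_cov(d_obs, sill, rng_param)
    A[n, n] = 0.0
    b = np.append(exp_cov(d0, sill, rng_param), 1.0)
    sol = np.linalg.solve(A, b)
    weights, mu = sol[:n], sol[n]

    pred = weights @ z_obs
    var = sill - weights @ exp_cov(d0, sill, rng_param) - mu
    return pred, var

rng = np.random.default_rng(5)
xy = rng.uniform(0, 50, size=(40, 2))                          # monitor locations (km), synthetic
z = np.sin(xy[:, 0] / 10) + rng.normal(scale=0.1, size=40)     # stand-in for 8-h ozone
print(ordinary_kriging(xy, z, np.array([25.0, 25.0]), sill=np.var(z), rng_param=15.0))
```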
Gorji, Taha; Sertel, Elif; Tanik, Aysegul
2017-12-01
Soil management is an essential concern in protecting soil properties, in enhancing appropriate soil quality for plant growth and agricultural productivity, and in preventing soil erosion. Soil scientists and decision makers require accurate, well-distributed and spatially continuous soil data across a region for risk assessment and for effectively monitoring and managing soils. Recently, spatial interpolation approaches have been utilized in various disciplines, including soil science, for analysing, predicting and mapping the distribution and surface modelling of environmental factors such as soil properties. The study area selected in this research is the Tuz Lake Basin in Turkey, which has ecological and economic importance. Fertile soil plays a significant role in agricultural activities, one of the main industries with a great impact on the economy of the region. Loss of trees and bushes due to intense agricultural activities in some parts of the basin leads to soil erosion. In addition, soil salinization due to both human-induced activities and natural factors has further constrained agricultural land development. This study aims to compare the capability of Local Polynomial Interpolation (LPI) and Radial Basis Functions (RBF) as two interpolation methods for mapping the spatial pattern of soil properties including organic matter, phosphorus, lime and boron. Both LPI and RBF methods demonstrated promising results for predicting lime, organic matter, phosphorus and boron. Soil samples collected in the field were used for the interpolation analysis, in which approximately 80% of the data was used for interpolation modelling whereas the remainder was used for validation of the predicted results. The relationship between validation points and the corresponding estimated values at the same locations was examined by linear regression analysis. Eight prediction maps generated from the two different interpolation methods for soil organic matter, phosphorus, lime and boron parameters
Directory of Open Access Journals (Sweden)
Longxiang Li
Effective assessments of air-pollution exposure depend on the ability to accurately predict pollutant concentrations at unmonitored locations, which can be achieved through spatial interpolation. However, most interpolation approaches currently in use are based on the Euclidean distance, which cannot account for the complex nonlinear features displayed by air-pollution distributions in the wind field. In this study, an interpolation method based on the shortest path distance is developed to characterize the impact of the complex urban wind field on the distribution of the particulate matter concentration. In this method, the wind field is incorporated by first interpolating the observed wind field from a meteorological-station network, then using this continuous wind field to construct a cost surface based on a Gaussian dispersion model and calculating the shortest wind-field path distances between locations, and finally replacing the Euclidean distances typically used in Inverse Distance Weighting (IDW) with the shortest wind-field path distances. The proposed methodology is used to generate daily and hourly estimation surfaces for the particulate matter concentration in the urban area of Beijing in May 2013. This study demonstrates that wind fields can be incorporated into an interpolation framework using the shortest wind-field path distance, which leads to a remarkable improvement in both the prediction accuracy and the visual reproduction of the wind-flow effect, both of which are of great importance for the assessment of the effects of pollutants on human health.
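A minimal sketch of the final step, assuming a cost surface has already been derived from the wind field: shortest-path (cost) distances are computed over a grid graph with SciPy's Dijkstra routine and substituted for Euclidean distances in IDW. The grid size, cost values, station cells and observations below are all synthetic.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Small grid over the study area; cost[i, j] is the "resistance" to transport
# through a cell (in the paper this would come from the interpolated wind field
# and a Gaussian dispersion model; here it is a synthetic surface).
ny, nx = 20, 20
rng = np.random.default_rng(6)
cost = 1.0 + 2.0 * rng.random((ny, nx))

def node(i, j):
    return i * nx + j

# Build a 4-neighbour graph whose edge weights average the two cell costs.
graph = lil_matrix((ny * nx, ny * nx))
for i in range(ny):
    for j in range(nx):
        for di, dj in ((1, 0), (0, 1)):
            ii, jj = i + di, j + dj
            if ii < ny and jj < nx:
                ew = 0.5 * (cost[i, j] + cost[ii, jj])
                graph[node(i, j), node(ii, jj)] = ew
                graph[node(ii, jj), node(i, j)] = ew

# Monitoring stations (grid cells) with observed PM concentrations.
stations = [(2, 3), (15, 5), (8, 16)]
obs = np.array([80.0, 120.0, 95.0])
dist = dijkstra(graph.tocsr(), indices=[node(i, j) for i, j in stations])

# IDW using shortest-path (cost) distances instead of Euclidean ones.
w = 1.0 / (dist + 1e-9) ** 2
field = (w * obs[:, None]).sum(axis=0) / w.sum(axis=0)
print(field.reshape(ny, nx)[10, 10].round(1))
```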
Directory of Open Access Journals (Sweden)
A. Verworn
2011-02-01
Hydrological modelling of floods relies on precipitation data with a high resolution in space and time. A reliable spatial representation of short time step rainfall is often difficult to achieve due to a low network density. In this study, hourly precipitation was spatially interpolated with the multivariate geostatistical method kriging with external drift (KED), using additional information from topography, rainfall data from the denser daily networks, and weather radar data. Investigations were carried out for several flood events in the period between 2000 and 2005 caused by different meteorological conditions. The 125 km radius around the radar station Ummendorf in northern Germany covered the overall study region. One objective was to assess the effect of different approaches for estimating semivariograms on the interpolation performance of short time step rainfall. Another objective was the refined application of the method kriging with external drift. Special attention was given not only to finding the most relevant additional information, but also to combining the additional information in the best possible way. A multi-step interpolation procedure was applied to better account for sub-regions without rainfall.
The impact of different semivariogram types on the interpolation performance was low. While it varied over the events, an averaged semivariogram was sufficient overall. Weather radar data were the most valuable additional information for KED for convective summer events. For interpolation of stratiform winter events, using daily rainfall as additional information was sufficient. The application of the multi-step procedure significantly helped to improve the representation of fractional precipitation coverage.
Directory of Open Access Journals (Sweden)
Ly, S.
2013-01-01
Watershed management and hydrological modeling require precipitation data, which are often measured using rain gauges or weather stations. Hydrological models often require a preliminary spatial interpolation as part of the modeling process. The success of spatial interpolation varies according to the type of model chosen, its mode of geographical management and the resolution used. The quality of a result is determined by the quality of the continuous spatial rainfall field, which ensues from the interpolation method used. The objective of this article is to review the existing methods for interpolation of rainfall data that are usually required in hydrological modeling. We review the basis for the application of certain common methods and geostatistical approaches used in the interpolation of rainfall. Previous studies have highlighted the need for new research to investigate ways of improving the quality of rainfall data and, ultimately, the quality of hydrological modeling.
Hodam, Sanayanbi; Sarkar, Sajal; Marak, Areor G. R.; Bandyopadhyay, A.; Bhadra, A.
2017-12-01
In the present study, to understand the spatial distribution characteristics of ETo over India, spatial interpolation was performed on the means of 32 years (1971-2002) of monthly data from 131 India Meteorological Department stations uniformly distributed over the country, using two methods, namely, inverse distance weighted (IDW) interpolation and kriging. Kriging was found to be better for developing the monthly surfaces during cross-validation. However, in station-wise validation, IDW performed better than kriging in almost all cases and is hence recommended for spatial interpolation of ETo and its governing meteorological parameters. This study also checked whether direct kriging of FAO-56 Penman-Monteith (PM) (Allen et al. in Crop evapotranspiration—guidelines for computing crop water requirements, Irrigation and drainage paper 56, Food and Agriculture Organization of the United Nations (FAO), Rome, 1998) point ETo produced comparable results against ETo estimated with individually kriged weather parameters (indirect kriging). Indirect kriging performed marginally better than direct kriging. Point ETo values were extended to areal ETo values by IDW, and FAO-56 PM mean ETo maps for India were developed to obtain sufficiently accurate ETo estimates at unknown locations.
The twitch interpolation technique for study of fatigue of human quadriceps muscle
DEFF Research Database (Denmark)
Bülow, P M; Nørregaard, J; Mehlsen, J
1995-01-01
The aim of the study was to examine if the twitch interpolation technique could be used to objectively measure fatigue in the quadriceps muscle in subjects performing submaximally. The 'true' maximum isometric quadriceps torque was determined in 21 healthy subjects using the twitch interpolation ... technique. Then an endurance test was performed in which the subjects made repeated isometric contractions at 50% of the 'true' maximum torque for 4 s, separated by 6 s rest periods. During the test, the force response to single electrical stimulation (twitch amplitude) was measured at 50% and 25 ...). In conclusion, the twitch technique can be used for objectively measuring fatigue of the quadriceps muscle.
Parallel Landscape Driven Data Reduction & Spatial Interpolation Algorithm for Big LiDAR Data
Directory of Open Access Journals (Sweden)
Rahil Sharma
2016-06-01
Airborne Light Detection and Ranging (LiDAR) topographic data provide highly accurate digital terrain information, which is used widely in applications such as creating flood insurance rate maps, forest and tree studies, coastal change mapping, soil and landscape classification, 3D urban modeling, river bank management, and agricultural crop studies. In this paper, we focus mainly on the use of LiDAR data in terrain modeling/Digital Elevation Model (DEM) generation. Technological advancements in LiDAR sensors have enabled highly accurate and highly dense LiDAR point clouds, which have made high-resolution modeling of terrain surfaces possible. However, high-density data result in massive data volumes, which pose computing issues. The computational time required for dissemination, processing and storage of these data is directly proportional to the volume of the data. We describe a novel technique based on the slope map of the terrain, which addresses the challenging problem, in the area of spatial data analysis, of reducing this dense LiDAR data without sacrificing its accuracy. To the best of our knowledge, this is the first landscape-driven data reduction algorithm. We also perform an empirical study, which shows that there is no significant loss in accuracy for the DEM generated from a 52% reduced LiDAR dataset produced by our algorithm, compared to the DEM generated from the original, complete LiDAR dataset. For the accuracy of our statistical analysis, we compute the Root Mean Square Error (RMSE) comparing all of the grid points of the original DEM to the DEM generated from the reduced data, instead of comparing a few random control points. In addition, our multi-core data reduction algorithm is highly scalable. We also describe a modified parallel Inverse Distance Weighted (IDW) spatial interpolation method and show that the DEMs it generates are time-efficient and more accurate than the ones generated by the traditional IDW method.
Suparta, Wayan; Rahman, Rosnani
2016-02-01
Global Positioning System (GPS) receivers are widely installed throughout Peninsular Malaysia, but their implementation for monitoring weather hazards such as flash floods is still not optimal. To increase the benefit for meteorological applications, the GPS system should be installed in collocation with meteorological sensors so that the precipitable water vapor (PWV) can be measured. The distribution of PWV is a key element of the Earth's climate, for quantitative precipitation improvement as well as flash flood forecasts. The accuracy of this parameter depends to a large extent on the number of GPS receivers and meteorological sensors installed in the targeted area. Due to cost constraints, a spatial interpolation method is proposed to address these issues. In this paper, we investigated the spatial distribution of GPS PWV and meteorological variables (surface temperature, relative humidity, and rainfall) by using thin plate spline (tps) and ordinary kriging (Krig) interpolation techniques over the Klang Valley in Peninsular Malaysia (longitude: 99.5°-102.5°E and latitude: 2.0°-6.5°N). Three flash flood cases in September, October, and December 2013 were studied. The analysis was performed using the mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R2) to determine the accuracy and reliability of the interpolation techniques. Results evaluated at different phases (pre, onset, and post) showed that the tps interpolation technique is more accurate, reliable, and highly correlated in estimating GPS PWV and relative humidity, whereas Krig is more reliable for predicting temperature and rainfall during pre-flash-flood events. During the onset of flash flood events, both methods showed good interpolation in estimating all meteorological parameters with high accuracy and reliability. The findings suggest that the proposed spatial interpolation techniques are capable of handling limited data sources with high
A Spatial Interpolation Framework for Efficient Valuation of Large Portfolios of Variable Annuities
Directory of Open Access Journals (Sweden)
Seyed Amir Hejazi
2017-07-01
Variable Annuity (VA) products expose insurance companies to considerable risk because of the guarantees they provide to buyers of these products. Managing and hedging these risks requires insurers to find the values of key risk metrics for a large portfolio of VA products. In practice, many companies rely on nested Monte Carlo (MC) simulations to find key risk metrics. MC simulations are computationally demanding, forcing insurance companies to invest hundreds of thousands of dollars in computational infrastructure per year. Moreover, existing academic methodologies are focused on fair valuation of a single VA contract, exploiting ideas in option theory and regression. In most cases, the computational complexity of these methods surpasses the computational requirements of MC simulations. Therefore, academic methodologies cannot scale well to large portfolios of VA contracts. In this paper, we present a framework for valuing such portfolios based on spatial interpolation. We provide a comprehensive study of this framework and compare existing interpolation schemes. Our numerical results show superior performance, in terms of both computational efficiency and accuracy, for these methods compared to nested MC simulations. We also present insights into the challenge of finding an effective interpolation scheme in this framework, and suggest guidelines that help us build a fully automated scheme that is efficient and accurate.
Congdon, Peter
2014-04-01
Health data may be collected across one spatial framework (e.g. health provider agencies), but contrasts in health over another spatial framework (neighbourhoods) may be of policy interest. In the UK, population prevalence totals for chronic diseases are provided for populations served by general practitioner practices, but not for neighbourhoods (small areas of circa 1500 people), raising the question whether data for one framework can be used to provide spatially interpolated estimates of disease prevalence for the other. A discrete process convolution is applied to this end and has advantages when there are a relatively large number of area units in one or other framework. Additionally, the interpolation is modified to take account of the observed neighbourhood indicators (e.g. hospitalisation rates) of neighbourhood disease prevalence. These are reflective indicators of neighbourhood prevalence viewed as a latent construct. An illustrative application is to prevalence of psychosis in northeast London, containing 190 general practitioner practices and 562 neighbourhoods, including an assessment of sensitivity to kernel choice (e.g. normal vs exponential). This application illustrates how a zero-inflated Poisson can be used as the likelihood model for a reflective indicator.
Generation of response functions of a NaI detector by using an interpolation technique
International Nuclear Information System (INIS)
Tominaga, Shoji
1983-01-01
A computer method is developed for generating response functions of a NaI detector to monoenergetic γ-rays. The method is based on interpolation between response curves measured with a detector. The computer programs are constructed for Heath's response spectral library. The principle of the basic mathematics used for interpolation, which was reported previously by the author et al., is that response curves can be decomposed into a linear combination of intrinsic component patterns, so that the interpolation of curves reduces to a simple interpolation of the weighting coefficients needed to combine the component patterns. This technique has advantages in data compression, reduction in computation time, and stability of the solution, in comparison with the usual functional fitting method. A processing method based on segmentation of a spectrum is devised to generate useful and precise response curves. The spectral curve obtained for each γ-ray source is divided into regions defined by the physical processes, such as the photopeak area, the Compton continuum area, the backscatter peak area, and so on. Each segment curve is then processed separately for interpolation. Lastly, the curves estimated for the respective areas are connected on a single channel scale. The generation programs are explained briefly. It is shown that the generated curve represents the overall shape of a response spectrum, including not only its photopeak but also the corresponding Compton area, with sufficient accuracy. (author)
International Nuclear Information System (INIS)
Park, Jinyong; Balasingham, P.; McKenna, Sean Andrew; Kulatilake, Pinnaduwa H. S. W.
2004-01-01
Sandia National Laboratories, under contract to the Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geostatistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques to the geologic data provided by NUMO. A workshop report produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) on volcanism lists a set of the most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region in Japan from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables. In other words, the geologic variable vectors were found to be spatially incomplete. However, it is necessary to have complete geologic variable vectors to perform multivariate statistical analyses. As a first step towards constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to the aforementioned grid system. The recorded data on volcanic activity for the Sengan region were also produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism. In the regionalized classification procedure, this step is known as the variable selection step. The following variables were determined as most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater
Farr, W. M.; Mandel, I.; Stevens, D.
2015-01-01
Selection among alternative theoretical models given an observed dataset is an important challenge in many areas of physics and astronomy. Reversible-jump Markov chain Monte Carlo (RJMCMC) is an extremely powerful technique for performing Bayesian model selection, but it suffers from a fundamental difficulty: it requires jumps between model parameter spaces, yet it cannot efficiently explore both parameter spaces at once. Thus, a naive jump between parameter spaces is unlikely to be accepted in the Markov chain Monte Carlo (MCMC) algorithm and convergence is correspondingly slow. Here, we demonstrate an interpolation technique that uses samples from single-model MCMCs to propose intermodel jumps from an approximation to the single-model posterior of the target parameter space. The interpolation technique, based on a kD-tree data structure, is adaptive and efficient in modest dimensionality. We show that our technique leads to improved convergence over naive jumps in an RJMCMC, and compare it to other proposals in the literature to improve the convergence of RJMCMCs. We also demonstrate the use of the same interpolation technique as a way to construct efficient ‘global’ proposal distributions for single-model MCMCs without prior knowledge of the structure of the posterior distribution, and discuss improvements that permit the method to be used in higher dimensional spaces efficiently. PMID:26543580
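A toy sketch of the general idea, not the authors' algorithm: stored single-model posterior samples are indexed with a kD-tree, and an inter-model jump is proposed by picking a stored sample and drawing from the box spanned by its nearest neighbours. The neighbourhood size and the uniform-box density are crude illustrative assumptions; a correct RJMCMC proposal density would have to account for the full mixture over neighbourhoods.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
# Stored posterior samples from a previously run single-model MCMC (illustrative).
samples = rng.normal(size=(5000, 3))
tree = cKDTree(samples)
k = 16                                        # neighbourhood size (assumed)

def propose_jump():
    """Propose a point in the target model's parameter space by picking a stored
    sample and jittering it within the box spanned by its k nearest neighbours,
    a rough local approximation to the single-model posterior."""
    i = rng.integers(len(samples))
    _, idx = tree.query(samples[i], k=k)
    nbrs = samples[idx]
    lo, hi = nbrs.min(axis=0), nbrs.max(axis=0)
    theta = rng.uniform(lo, hi)
    # Illustrative proposal log-density: uniform over this neighbourhood box only.
    logq = -float(np.sum(np.log(hi - lo)))
    return theta, logq

theta_prop, logq_prop = propose_jump()
```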
Estimating changes in urban land and urban population using refined areal interpolation techniques
Zoraghein, Hamidreza; Leyk, Stefan
2018-05-01
The analysis of changes in urban land and population is important because the majority of future population growth will take place in urban areas. U.S. Census historically classifies urban land using population density and various land-use criteria. This study analyzes the reliability of census-defined urban lands for delineating the spatial distribution of urban population and estimating its changes over time. To overcome the problem of incompatible enumeration units between censuses, regular areal interpolation methods including Areal Weighting (AW) and Target Density Weighting (TDW), with and without spatial refinement, are implemented. The goal in this study is to estimate urban population in Massachusetts in 1990 and 2000 (source zones), within tract boundaries of the 2010 census (target zones), respectively, to create a consistent time series of comparable urban population estimates from 1990 to 2010. Spatial refinement is done using ancillary variables such as census-defined urban areas, the National Land Cover Database (NLCD) and the Global Human Settlement Layer (GHSL) as well as different combinations of them. The study results suggest that census-defined urban areas alone are not necessarily the most meaningful delineation of urban land. Instead, it appears that alternative combinations of the above-mentioned ancillary variables can better depict the spatial distribution of urban land, and thus make it possible to reduce the estimation error in transferring the urban population from source zones to target zones when running spatially-refined temporal areal interpolation.
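A minimal sketch of plain Areal Weighting (AW), the simplest of the methods mentioned above, using geopandas: source-zone population is transferred to target zones in proportion to intersected area. The shapefile names, the 'pop' column, and the 'GEOID' identifier are assumptions for illustration only, and no spatial refinement with ancillary layers is included.

```python
import geopandas as gpd

# Hypothetical layers: source tracts with a 'pop' column, and 2010 target tracts.
source = gpd.read_file("tracts_1990.shp")     # assumed file name
target = gpd.read_file("tracts_2010.shp")     # assumed file name

source = source.to_crs(target.crs)            # work in a common projected CRS
source["src_area"] = source.geometry.area

# Intersect source and target zones and transfer population proportionally to area.
pieces = gpd.overlay(source, target[["GEOID", "geometry"]], how="intersection")
pieces["share"] = pieces.geometry.area / pieces["src_area"]
pieces["pop_piece"] = pieces["pop"] * pieces["share"]

# Areal-weighted population estimate per 2010 target zone.
aw_estimate = pieces.groupby("GEOID")["pop_piece"].sum()
```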
Spatiotemporal Interpolation Methods for Solar Event Trajectories
Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe
2018-05-01
This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
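A minimal sketch of the simplest of the four techniques named above, MBR-Interpolation: the minimum bounding rectangles of two observed polygons are linearly interpolated to produce an intermediate time-geometry pair. The shapely geometries are illustrative stand-ins for solar event polygons, and the other, more elaborate techniques (CP, FI, Areal) are not shown.

```python
from shapely.geometry import Polygon, box

def mbr_interpolate(poly_t0: Polygon, poly_t1: Polygon, alpha: float) -> Polygon:
    """Return the MBR interpolated a fraction alpha of the way from t0 to t1."""
    x0a, y0a, x1a, y1a = poly_t0.bounds        # (minx, miny, maxx, maxy) at t0
    x0b, y0b, x1b, y1b = poly_t1.bounds        # bounds at t1
    lerp = lambda a, b: (1.0 - alpha) * a + alpha * b
    return box(lerp(x0a, x0b), lerp(y0a, y0b), lerp(x1a, x1b), lerp(y1a, y1b))

# Illustrative trajectory step: an event polygon observed at two report times.
p0 = Polygon([(0, 0), (4, 0), (4, 2), (0, 2)])
p1 = Polygon([(2, 1), (8, 1), (8, 4), (2, 4)])
midpoint_geometry = mbr_interpolate(p0, p1, 0.5)   # geometry halfway in time
```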
The analysis of composite laminated beams using a 2D interpolating meshless technique
Sadek, S. H. M.; Belinha, J.; Parente, M. P. L.; Natal Jorge, R. M.; de Sá, J. M. A. César; Ferreira, A. J. M.
2018-02-01
Laminated composite materials are widely used in engineering constructions. Owing to their relatively light weight, these materials are suitable for aerospace, military, marine, and automotive structural applications. To obtain safe and economical structures, the accuracy of the modelling analysis is highly relevant. Since meshless methods have achieved remarkable progress in computational mechanics in recent years, the present work uses one of the most flexible and stable interpolating meshless techniques available in the literature—the Radial Point Interpolation Method (RPIM). Here, a 2D approach is considered to numerically analyse composite laminated beams. Both the meshless formulation and the equilibrium equations governing the studied physical phenomenon are presented in detail. Several benchmark beam examples are studied, and the results are compared with exact solutions available in the literature and with results obtained from commercial finite element software. The results show the efficiency and accuracy of the proposed numerical technique.
Garcia, Matthew; Peters-Lidard, Christa D.; Goodrich, David C.
2008-05-01
Inaccuracy in spatially distributed precipitation fields can contribute significantly to the uncertainty of hydrological states and fluxes estimated from land surface models. This paper examines the results of selected interpolation methods for both convective and mixed/stratiform events that occurred during the North American monsoon season over a dense gauge network at the U.S. Department of Agriculture Agricultural Research Service Walnut Gulch Experimental Watershed in the southwestern United States. The spatial coefficient of variation for the precipitation field is employed as an indicator of event morphology, and a gauge clustering factor CF is formulated as a new, scale-independent measure of network organization. We consider that CF > 0 (clustering in the gauge network) will produce errors because of reduced areal representation of the precipitation field. Spatial interpolation is performed using both inverse-distance-weighted (IDW) and multiquadric-biharmonic (MQB) methods. We employ ensembles of randomly selected network subsets for the statistical evaluation of interpolation errors in comparison with the observed precipitation. The magnitude of interpolation errors and differences in accuracy between interpolation methods depend on both the density and the geometrical organization of the gauge network. Generally, MQB methods outperform IDW methods in terms of interpolation accuracy under all conditions, but it is found that the order of the IDW method is important to the results and that IDW may, under some conditions, be just as accurate as the MQB method. In almost all results it is demonstrated that the inverse-distance-squared method for spatial interpolation, commonly employed in operational analyses and for engineering assessments, is inferior to the ID-cubed method, which is also more computationally efficient than the MQB method in studies of large networks.
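A hedged sketch contrasting IDW with a selectable power (squared versus cubed, as discussed above) against a multiquadric radial basis interpolant, which is a close relative of, but not identical to, the MQB scheme used in the study. Gauge locations, precipitation totals, and query points are synthetic placeholders.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(3)
gauges = rng.uniform(0, 10, size=(25, 2))     # gauge coordinates (km), illustrative
precip = rng.gamma(2.0, 3.0, size=25)         # event totals (mm), illustrative

def idw(points, values, query, power=3):
    """Inverse-distance-weighted estimate at the query locations."""
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)                    # guard against zero distance
    w = 1.0 / d**power
    return (w @ values) / w.sum(axis=1)

query = np.array([[5.0, 5.0], [2.5, 7.5]])
est_id2 = idw(gauges, precip, query, power=2)  # inverse-distance-squared
est_id3 = idw(gauges, precip, query, power=3)  # ID-cubed

# Multiquadric RBF interpolant (a relative of the multiquadric-biharmonic method).
mq = RBFInterpolator(gauges, precip, kernel="multiquadric", epsilon=1.0)
est_mq = mq(query)
```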
Mayne, Terence P; Paskaranandavadivel, Niranchan; Erickson, Jonathan C; OGrady, Gregory; Cheng, Leo K; Angeli, Timothy R
2018-02-01
High-resolution mapping of gastrointestinal (GI) slow waves is a valuable technique for research and clinical applications. Interpretation of high-resolution GI mapping data relies on animations of slow wave propagation, but current methods remain rudimentary, pixelated electrode-activation animations. This study aimed to develop improved methods of visualizing high-resolution slow wave recordings that increase ease of interpretation. The novel method of "wavefront-orientation" interpolation was created to account for the planar movement of the slow wave wavefront, negate any need for distance calculations, remain robust for atypical wavefronts (i.e., dysrhythmias), and produce an appropriate interpolation boundary. The wavefront-orientation method determines the orthogonal wavefront direction and calculates interpolated values as the mean slow wave activation-time (AT) of the pair of linearly adjacent electrodes along that direction. Stairstep upsampling increased smoothness and clarity. Animation accuracy of 17 human high-resolution slow wave recordings (64-256 electrodes) was verified by visual comparison to the prior method, showing a clear improvement in wave smoothness that enabled more accurate interpretation of propagation, as confirmed by an assessment of clinical applicability performed by eight GI clinicians. Quantitatively, the new method produced accurate interpolation values compared to experimental data (mean difference 0.02 ± 0.05 s) and was accurate when applied solely to dysrhythmic data (0.02 ± 0.06 s), both within the error in manual AT marking (mean 0.2 s). Mean interpolation processing time was 6.0 s per wave. These novel methods provide a validated visualization platform that will improve analysis of high-resolution GI mapping in research and clinical translation.
Garen, D. C.; Kahl, A.; Marks, D. G.; Winstral, A. H.
2012-12-01
In mountainous catchments, it is well known that meteorological inputs, such as precipitation, air temperature, and humidity, vary greatly with elevation, spatial location, and time. Understanding and monitoring catchment inputs is necessary in characterizing and predicting hydrologic response to these inputs. This is true at all times, but it is most critical during large storms, when the input to the stream system from rain and snowmelt creates the potential for flooding. Besides such crisis events, however, proper estimation of catchment inputs and their spatial distribution is also needed in more prosaic but no less important water and related resource management activities. The first objective of this study is to apply a geostatistical spatial interpolation technique (elevationally detrended kriging) to precipitation and dew point temperature on an hourly basis and explore its characteristics, accuracy, and other issues. The second objective is to use these spatial fields to determine precipitation phase (rain or snow) during a large, dynamic winter storm. The catchment studied is the data-rich Reynolds Creek Experimental Watershed near Boise, Idaho. As part of this analysis, precipitation-elevation lapse rates are examined for spatial and temporal consistency. A clear dependence of lapse rate on precipitation amount exists. Certain stations, however, are outliers from these relationships, showing that significant local effects can be present and raising the question of whether such stations should be used for spatial interpolation. Experiments with selecting subsets of stations demonstrate the importance of elevation range and spatial placement on the interpolated fields. Hourly spatial fields of precipitation and dew point temperature are used to distinguish precipitation phase during a large rain-on-snow storm in December 2005. This application demonstrates the feasibility of producing hourly spatial fields and the importance of doing
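A hedged sketch of the elevationally detrended kriging workflow named above: fit a precipitation-elevation lapse rate, krige the residuals in map space (pykrige's OrdinaryKriging is used here as one possible tool), then add the trend back via a DEM. Station data, the variogram model, and the DEM surface are synthetic placeholders, not the Reynolds Creek data.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(4)
x, y = rng.uniform(0, 30, 40), rng.uniform(0, 30, 40)   # station coords (km)
elev = rng.uniform(1100, 2200, 40)                      # station elevations (m)
precip = 1.0 + 0.002 * elev + rng.normal(0, 0.5, 40)    # hourly precip (mm), synthetic

# 1. Fit the precipitation-elevation lapse rate (linear trend on elevation).
slope, intercept = np.polyfit(elev, precip, 1)
residuals = precip - (slope * elev + intercept)

# 2. Krige the detrended residuals in x-y space.
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
grid_x = np.linspace(0, 30, 61)
grid_y = np.linspace(0, 30, 61)
res_grid, _ = ok.execute("grid", grid_x, grid_y)

# 3. Re-apply the elevation trend using a gridded DEM (placeholder surface here).
dem = np.full(res_grid.shape, 1600.0)                   # assumed DEM grid (m)
precip_grid = res_grid + slope * dem + intercept
```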
Kolyaie, S.; Yaghooti, M.; Majidi, G.
2011-12-01
This paper is part of ongoing research to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables: first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours and, second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For the result comparison, we have used check points coming from the same drive test data. Prediction values for the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps find an optimised and accurate model for coverage prediction.
Directory of Open Access Journals (Sweden)
Michel Castro Moreira
Full Text Available ABSTRACT Water erosion is the process of disaggregation and transport of sediments, and rainfall erosivity is a numerical value that expresses the erosive capacity of rain. The scarcity of information on rainfall erosivity makes it difficult or impossible to estimate losses caused by the erosive process. The objective of this study was to develop Artificial Neural Networks (ANNs) for spatial interpolation of the monthly and annual values of rainfall erosivity at any location in the state of Rio Grande do Sul, and software that enables the use of these networks in a simple and fast manner. This experiment used 103 rainfall stations in Rio Grande do Sul and their surrounding area to generate synthetic rainfall series with the software ClimaBR 2.0. Rainfall erosivity was determined by summing the values of the EI30 and KE >25 indexes, considering two methodologies for obtaining the kinetic energy of rainfall. With these values of rainfall erosivity and the latitude, longitude, and altitude of the stations, the ANNs were trained and tested for spatialization of rainfall erosivity. To facilitate the use of the ANNs, a computer program was generated, called netErosividade RS, which makes it feasible to use the ANNs to estimate the values of rainfall erosivity for any location in the state of Rio Grande do Sul.
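A hedged sketch, in the spirit of the approach above but not the netErosividade RS program itself, of training a small neural network to interpolate erosivity from latitude, longitude, and altitude; the station coordinates, erosivity values, and network size are synthetic assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Synthetic stations: latitude, longitude, altitude (stand-ins for the 103 stations).
X = np.column_stack([rng.uniform(-34, -27, 103),
                     rng.uniform(-58, -49, 103),
                     rng.uniform(0, 1200, 103)])
y = rng.uniform(5000, 12000, 103)             # annual erosivity (illustrative units)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20, 10), max_iter=5000,
                                   random_state=0))
model.fit(X, y)

# Estimate erosivity at an arbitrary location (lat, lon, altitude).
print(model.predict([[-30.03, -51.23, 10.0]]))
```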
Wu, Wei; Tang, Xiao-Ping; Ma, Xue-Qing; Liu, Hong-Bin
2016-08-01
Soil temperature variability data provide valuable information for understanding land-surface ecosystem processes and climate change. This study developed and analyzed a spatial dataset of monthly mean soil temperature at a depth of 10 cm over a complex topographical region in southwestern China. The records were measured at 83 stations during the period 1961-2000. Nine approaches were compared for interpolating soil temperature. The accuracy indicators were root mean square error (RMSE), modelling efficiency (ME), and coefficient of residual mass (CRM). The results indicated that thin plate spline with latitude, longitude, and elevation gave the best performance, with RMSE varying between 0.425 and 0.592 °C, ME between 0.895 and 0.947, and CRM between -0.007 and 0.001. A spatial database was developed based on the best model. The dataset showed that the largest seasonal changes in soil temperature over the region occurred from autumn to winter. The northern and eastern areas, with hills and low-to-middle mountains, experienced larger seasonal changes.
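A minimal sketch of a trivariate thin plate spline on latitude, longitude, and elevation, the best-performing configuration reported above, using scipy's RBFInterpolator; the station values, the smoothing level, and the prediction point are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(6)
# Synthetic stations: (latitude, longitude, elevation) and mean soil temperature at 10 cm.
sites = np.column_stack([rng.uniform(28, 32, 83),
                         rng.uniform(105, 110, 83),
                         rng.uniform(200, 2500, 83)])
soil_t = 24.0 - 0.005 * sites[:, 2] + rng.normal(0, 0.4, 83)   # degrees C, synthetic

# Thin plate spline in the three predictors; smoothing > 0 relaxes exact interpolation.
tps = RBFInterpolator(sites, soil_t, kernel="thin_plate_spline", smoothing=1.0)

# Predict at an unsampled location (lat, lon, elevation).
print(tps(np.array([[29.5, 107.2, 800.0]])))
```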
A new method for reducing DNL in nuclear ADCs using an interpolation technique
International Nuclear Information System (INIS)
Vaidya, P.P.; Gopalakrishnan, K.R.; Pethe, V.A.; Anjaneyulu, T.
1986-01-01
The paper describes a new method for reducing the DNL associated with nuclear ADCs. The method, named the ''interpolation technique'', is utilized to derive the quantisation steps corresponding to the last n bits of the digital code by dividing the quantisation steps due to the higher significant bits of the DAC, using a chain of resistors. Using comparators, these quantisation steps are compared with the analog voltage to be digitized, which is applied as a voltage shift at both ends of this chain. The output states of the comparators define the n-bit code. The errors due to offset voltages and bias currents of the comparators are statistically neutralized by changing the polarity of the quantisation steps as well as the polarity of the analog voltage (corresponding to the last n bits) for alternate A/D conversions. The effect of averaging on the channel profile can be minimized. A 12-bit ADC was constructed using this technique, which gives a DNL of less than ±1% over most of the channels for a conversion time of nearly 4.5 μs. Gatti's sliding scale technique can be implemented for further reduction of the DNL. The interpolation technique has promising potential for improving the resolution of existing 12-bit ADCs to 16 bits without degrading the percentage DNL significantly. (orig.)
International Nuclear Information System (INIS)
Yamada, Yoshifumi; Liu, Na; Ito, Satoshi
2006-01-01
The signal in the Fresnel transform technique corresponds to a blurred version of the spin density image. Because the amplitudes of adjacent sampled signals are highly correlated, the signal amplitude at a point between sampled points can be estimated with a high degree of accuracy even if the sampling is so coarse as to generate aliasing in the reconstructed images. In this report, we describe a new aliasless image reconstruction technique for the phase scrambling Fourier transform (PSFT) imaging technique, in which the PSFT signals are converted to Fresnel transform signals by multiplying them by a quadratic phase term and are then interpolated using polynomial expressions to generate fully encoded signals. Numerical simulation using MR images showed that almost completely aliasless images are reconstructed by this technique. Experiments using ultra-low-field PSFT MRI were conducted, and aliasless images were reconstructed from coarsely sampled PSFT signals. (author)
Safarpour, S.; Abdullah, K.; Lim, H. S.; Dadras, M.
2017-09-01
Air pollution is a growing problem arising from domestic heating, high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal aerosol optical depth (AOD) values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellites, for the 10 years period of 2000 - 2010 were used to test 7 different spatial interpolation methods in the present study. The accuracy of estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as root mean square error (RMSE) and correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer and winter and ordinary kriging yielded the best results for fall.
Chen, Hui; Fan, Li; Wu, Wei; Liu, Hong-Bin
2017-09-26
Soil moisture data can reflect valuable information on soil properties, terrain features, and drought condition. The current study compared and assessed the performance of different interpolation methods for estimating soil moisture in an area with complex topography in southwest China. The approaches were inverse distance weighting, multifarious forms of kriging, regularized spline with tension, and thin plate spline. The 5-day soil moisture observed at 167 stations and daily temperature recorded at 33 stations during the period 2010-2014 were used in the current work. Model performance was tested with the accuracy indicators coefficient of determination (R²), mean absolute percentage error (MAPE), root mean square error (RMSE), relative root mean square error (RRMSE), and modelling efficiency (ME). The results indicated that inverse distance weighting had the best performance, with R², MAPE, RMSE, RRMSE, and ME of 0.32, 14.37, 13.02%, 0.16, and 0.30, respectively. Based on the best method, a spatial database of soil moisture was developed and used to investigate drought condition over the study area. The results showed that the distribution of drought was characterized by evident regional differences. In addition, drought mainly occurred in August and September in the 5 years and was more likely to occur in the western and central parts than in the northeastern and southeastern areas.
Directory of Open Access Journals (Sweden)
S. Safarpour
2017-09-01
Full Text Available Air pollution is a growing problem arising from domestic heating, high density of vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal aerosol optical depth (AOD) values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellites, for the 10 years period of 2000 - 2010 were used to test 7 different spatial interpolation methods in the present study. The accuracy of estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as root mean square error (RMSE) and correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer and winter and ordinary kriging yielded the best results for fall.
Directory of Open Access Journals (Sweden)
Mahacine Amrani
2008-06-01
Full Text Available Several methods are currently used to optimize edges and contours of geophysical data maps. A resistivity map was expected to allow the electrical resistivity signal to be imaged in 2D for a Moroccan resistivity survey in the phosphate mining domain. Anomalous zones of phosphate deposit "disturbances" correspond to resistivity anomalies. The resistivity measurements were taken at 5151 discrete locations. Much of the geophysical spatial analysis requires a continuous data set, and this study is designed to create that surface. This paper identifies the best spatial interpolation method to use for the creation of continuous data from the Moroccan resistivity data of phosphate "disturbance" zones. The effectiveness of our approach in reducing noise has been demonstrated in the analysis of stationary geophysical data such as resistivity data. The interpolation filtering approach applied to modelling surface phosphate "disturbances" was found to be consistently useful.
Elumalai, Vetrimurugan; Brindha, K; Sithole, Bongani; Lakshmanan, Elango
2017-04-01
Mapping groundwater contaminants and identifying the sources are the initial steps in pollution control and mitigation. Due to the availability of different mapping methods and the large number of emerging pollutants, these methods need to be used together in decision making. The present study aims to map the contaminated areas in Richards Bay, South Africa and compare the results of ordinary kriging (OK) and inverse distance weighted (IDW) interpolation techniques. Statistical methods were also used for identifying contamination sources. Na-Cl groundwater type was dominant, followed by Ca-Mg-Cl. Data analysis indicates that silicate weathering, ion exchange and fresh water-seawater mixing are the major geochemical processes controlling the presence of major ions in groundwater. Factor analysis also helped to confirm the results. Overlay analysis by OK and IDW gave different results. Areas where groundwater was unsuitable as a drinking source were 419 and 116 km² for OK and IDW, respectively. Such diverse results make decision making difficult if only one method were to be used. Three highly contaminated zones within the study area were more accurately identified by OK. If large areas are identified as being contaminated, as by IDW in this study, the mitigation measures will be expensive. If these areas are underestimated, then even though management measures are taken, they will not be effective over the longer term. The use of multiple techniques, as in this study, will help to avoid taking harsh decisions. Overall, the groundwater quality in this area was poor, and it is essential to identify an alternative drinking water source or treat the groundwater before ingestion.
GIS-based spatial interpolation of air temperature in Xinjiang
Institute of Scientific and Technical Information of China (English)
陈鹏翔; 毛炜峰
2012-01-01
Taking the 1971-2010 annual mean air temperature of 99 meteorological stations in Xinjiang as the data source, multiple regression combined with spatial interpolation was used to rasterize air temperature data for the Xinjiang region. A multiple regression model relating annual mean temperature to station longitude, latitude, and altitude was established, and the residuals were interpolated with three widely used spatial interpolation methods: inverse distance weighting (IDW), ordinary kriging, and spline. Cross-validation and comparative analysis of the three methods based on MAE and RMSIE showed that, in the GIS interpolation scheme for annual mean air temperature in Xinjiang, the accuracy of the IDW method is generally higher than that of the other two interpolation methods. When platforms such as Surfer or GrADS are used for direct spatial interpolation of air temperature data, the interpolation technique (spline, IDW, Lagrangian, Hermite interpolation, etc.) does not take into account the effects of topography on the air temperature distribution. In recent years, with the expansion of GIS technology applications, regression models on geographic factors (elevation, longitude, latitude, etc.) combined with spatial interpolation have been used to grid regional climate variables, with good results. In this paper, regression analysis combined with GIS spatial interpolation was used to rasterize the 1971-2010 mean annual air temperature in the Xinjiang region; the 99 meteorological stations with complete observations (10 of them reserved for verification) were involved in the calculation. The following method is used for air temperature data rasterization in the Xinjiang region. First, establish the multiple regression model of mean temperature with the air temperature measured by the weather stations (excluding the verification stations) as the output variable and the longitude, latitude, and altitude grid data of the meteorological stations as the input variables, obtaining the regression equation and the temperature residuals for each weather station; secondly, calculate the air
International Nuclear Information System (INIS)
Oh, Se-Jin; Kim, Young-Chul; Chung, Chin-Wook
2011-01-01
An interpolation algorithm for the evaluation of the spatial profile of plasma densities in a cylindrical reactor was developed for low gas pressures. The algorithm is based on a collisionless two-dimensional fluid model. Contrary to the collisional case, i.e., diffusion fluid model, the fitting algorithm depends on the aspect ratio of the cylindrical reactor. The spatial density profile of the collisionless fitting algorithm is presented in two-dimensional images and compared with the results of the diffusion fluid model.
Prasetyo, S. Y. J.; Hartomo, K. D.
2018-01-01
The Spatial Plan of the Province of Central Java 2009-2029 identifies that most regencies and cities in Central Java Province are very vulnerable to landslide disaster. This is supported by data from the 2013 Indonesian Disaster Risk Index (in Indonesian, Indeks Risiko Bencana Indonesia), which suggest that some areas in Central Java Province exhibit a high risk of natural disasters. This research aims to develop an application architecture and analysis methodology in GIS to predict and to map rainfall distribution. We propose our GIS application architecture of “Multiplatform Architectural Spatiotemporal” and the data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” as our main scientific contribution. This research consists of two parts, namely attribute data prediction using the TES method and spatial data prediction using the Inverse Distance Weighting (IDW) method. We conduct our research in 19 subdistricts of the Boyolali Regency, Central Java Province, Indonesia. Our main research data are the biweekly rainfall data for 2000-2016 from the Climatology, Meteorology, and Geophysics Agency (in Indonesian, Badan Meteorologi, Klimatologi, dan Geofisika) of Central Java Province and the Laboratory of Plant Disease Observations Region V Surakarta, Central Java. The application architecture and analytical methodology of “Multiplatform Architectural Spatiotemporal” and the spatial data analysis methods of “Triple Exponential Smoothing” and “Spatial Interpolation” can be developed into a GIS application framework for rainfall distribution in various applied fields. The comparison between the TES and IDW methods shows that, relative to the time series prediction, spatial interpolation yields values closer to the actual data. Spatial interpolation is closer to the actual data because the computed values are the rainfall data of the nearest locations, or the neighbours of the sample values. However, the IDW’s main weakness is that some
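A hedged sketch of the two building blocks described above: Holt-Winters triple exponential smoothing of one station's biweekly rainfall series (statsmodels), followed by IDW interpolation of station values onto a prediction point. The series, the station layout, the seasonal period of 24, and the power of 2 are illustrative assumptions, not the study's data or settings.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
# Synthetic biweekly rainfall series for one station (24 periods per year, 10 years).
series = np.clip(rng.normal(80, 30, 240) +
                 40 * np.sin(np.arange(240) * 2 * np.pi / 24), 0, None)

# Triple exponential smoothing (additive trend and seasonality), one-year forecast.
tes = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=24).fit()
forecast = tes.forecast(24)

# IDW of station forecasts onto a prediction point (one forecast period shown).
stations = rng.uniform(0, 40, size=(19, 2))     # sub-district centroids (km), illustrative
station_fc = rng.normal(90, 20, 19)             # forecast rainfall at each station
target = np.array([20.0, 20.0])
d = np.maximum(np.linalg.norm(stations - target, axis=1), 1e-9)
w = 1.0 / d**2
idw_value = float(w @ station_fc / w.sum())
```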
Pfister, Nicolas; O'Neill, Norman T.; Aube, Martin; Nguyen, Minh-Nghia; Bechamp-Laganiere, Xavier; Besnier, Albert; Corriveau, Louis; Gasse, Geremie; Levert, Etienne; Plante, Danick
2005-08-01
Satellite-based measurements of aerosol optical depth (AOD) over land are obtained from an inversion procedure applied to dense dark vegetation pixels of remotely sensed images. The limited number of pixels over which the inversion procedure can be applied leaves many areas with little or no AOD data. Moreover, satellite coverage by sensors such as MODIS yields only daily images of a given region with four sequential overpasses required to straddle mid-latitude North America. Ground based AOD data from AERONET sun photometers are available on a more continuous basis but only at approximately fifty locations throughout North America. The object of this work is to produce a complete and coherent mapping of AOD over North America with a spatial resolution of 0.1 degree and a frequency of three hours by interpolating MODIS satellite-based data together with available AERONET ground based measurements. Before being interpolated, the MODIS AOD data extracted from different passes are synchronized to the mapping time using analyzed wind fields from the Global Multiscale Model (Meteorological Service of Canada). This approach amounts to a trajectory type of simplified atmospheric dynamics correction method. The spatial interpolation is performed using a weighted least squares method applied to bicubic B-spline functions defined on a rectangular grid. The least squares method enables one to weight the data accordingly to the measurement errors while the B-splines properties of local support and C2 continuity offer a good approximation of AOD behaviour viewed as a function of time and space.
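A hedged sketch of a weighted least-squares bicubic B-spline fit on a rectangular knot grid, the fitting step described above, using scipy's LSQBivariateSpline with weights taken as inverse measurement variance. The AOD samples, assumed errors, knot placement, and output grid are invented; the trajectory-based time synchronization step is not shown.

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

rng = np.random.default_rng(8)
lon = rng.uniform(-110, -70, 400)            # sample longitudes (deg), illustrative
lat = rng.uniform(30, 55, 400)               # sample latitudes (deg)
aod = 0.2 + 0.05 * np.sin(lon / 10) + rng.normal(0, 0.02, 400)
sigma = np.full(400, 0.02)                   # assumed per-sample measurement error

# Interior knots of the rectangular grid (bicubic: kx = ky = 3 by default).
tx = np.linspace(-105, -75, 8)
ty = np.linspace(33, 52, 6)
spline = LSQBivariateSpline(lon, lat, aod, tx, ty, w=1.0 / sigma**2)

# Evaluate the fitted AOD surface on a 0.1-degree grid.
glon = np.arange(-110, -70, 0.1)
glat = np.arange(30, 55, 0.1)
aod_grid = spline(glon, glat)                # shape (len(glon), len(glat))
```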
International Nuclear Information System (INIS)
Dubcova, Lenka; Solin, Pavel; Hansen, Glen; Park, HyeongKae
2011-01-01
Multiphysics solution challenges are legion within the field of nuclear reactor design and analysis. One major issue concerns the coupling between heat and neutron flow (neutronics) within the reactor assembly. These phenomena are usually very tightly interdependent, as large amounts of heat are quickly produced with an increase in fission events within the fuel, which raises the temperature that affects the neutron cross section of the fuel. Furthermore, there typically is a large diversity of time and spatial scales between mathematical models of heat and neutronics. Indeed, the different spatial resolution requirements often lead to the use of very different meshes for the two phenomena. As the equations are coupled, one must take care in exchanging solution data between them, or significant error can be introduced into the coupled problem. We propose a novel approach to the discretization of the coupled problem on different meshes based on an adaptive multimesh higher-order finite element method (hp-FEM), and compare it to popular interpolation and projection methods. We show that the multimesh hp-FEM method is significantly more accurate than the interpolation and projection approaches considered in this study.
Spatial interpolation and simulation of post-burn duff thickness after prescribed fire
Peter R. Robichaud; S. M. Miller
1999-01-01
Prescribed fire is used as a site treatment after timber harvesting. These fires result in spatial patterns with some portions consuming all of the forest floor material (duff) and others consuming little. Prior to the burn, spatial sampling of duff thickness and duff water content can be used to generate geostatistical spatial simulations of these characteristics....
Directory of Open Access Journals (Sweden)
Zhenyi Jia
2017-12-01
Full Text Available Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
Jia, Zhenyi; Zhou, Shenglu; Su, Quanlong; Yi, Haomin; Wang, Junxiao
2017-12-26
Soil pollution by metal(loid)s resulting from rapid economic development is a major concern. Accurately estimating the spatial distribution of soil metal(loid) pollution has great significance in preventing and controlling soil pollution. In this study, 126 topsoil samples were collected in Kunshan City and the geo-accumulation index was selected as a pollution index. We used Kriging interpolation and BP neural network methods to estimate the spatial distribution of arsenic (As) and cadmium (Cd) pollution in the study area. Additionally, we introduced a cross-validation method to measure the errors of the estimation results by the two interpolation methods and discussed the accuracy of the information contained in the estimation results. The conclusions are as follows: data distribution characteristics, spatial variability, and mean square errors (MSE) of the different methods showed large differences. Estimation results from BP neural network models have a higher accuracy, the MSE of As and Cd are 0.0661 and 0.1743, respectively. However, the interpolation results show significant skewed distribution, and spatial autocorrelation is strong. Using Kriging interpolation, the MSE of As and Cd are 0.0804 and 0.2983, respectively. The estimation results have poorer accuracy. Combining the two methods can improve the accuracy of the Kriging interpolation and more comprehensively represent the spatial distribution characteristics of metal(loid)s in regional soil. The study may provide a scientific basis and technical support for the regulation of soil metal(loid) pollution.
Congdon, Peter
2013-01-01
This paper considers estimation of disease prevalence for small areas (neighbourhoods) when the available observations on prevalence are for an alternative partition of a region, such as service areas. Interpolation to neighbourhoods uses a kernel method extended to take account of two types of collateral information. The first is morbidity and service use data, such as hospital admissions, observed for neighbourhoods. Variations in morbidity and service use are expected to reflect prevalence. The second type of collateral information is ecological risk factors (e.g., pollution indices) that are expected to explain variability in prevalence in service areas, but are typically observed only for neighbourhoods. An application involves estimating neighbourhood asthma prevalence in a London health region involving 562 neighbourhoods and 189 service (primary care) areas. PMID:24129116
Directory of Open Access Journals (Sweden)
Peter Congdon
2013-10-01
Full Text Available This paper considers estimation of disease prevalence for small areas (neighbourhoods) when the available observations on prevalence are for an alternative partition of a region, such as service areas. Interpolation to neighbourhoods uses a kernel method extended to take account of two types of collateral information. The first is morbidity and service use data, such as hospital admissions, observed for neighbourhoods. Variations in morbidity and service use are expected to reflect prevalence. The second type of collateral information is ecological risk factors (e.g., pollution indices) that are expected to explain variability in prevalence in service areas, but are typically observed only for neighbourhoods. An application involves estimating neighbourhood asthma prevalence in a London health region involving 562 neighbourhoods and 189 service (primary care) areas.
Congdon, Peter
2013-10-14
This paper considers estimation of disease prevalence for small areas (neighbourhoods) when the available observations on prevalence are for an alternative partition of a region, such as service areas. Interpolation to neighbourhoods uses a kernel method extended to take account of two types of collateral information. The first is morbidity and service use data, such as hospital admissions, observed for neighbourhoods. Variations in morbidity and service use are expected to reflect prevalence. The second type of collateral information is ecological risk factors (e.g., pollution indices) that are expected to explain variability in prevalence in service areas, but are typically observed only for neighbourhoods. An application involves estimating neighbourhood asthma prevalence in a London health region involving 562 neighbourhoods and 189 service (primary care) areas.
CMB anisotropies interpolation
Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri
2010-01-01
We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging
Directory of Open Access Journals (Sweden)
Benvindo Sirtoli Gardiman
2012-04-01
values observed and estimated by kriging; in fact, it was lower than for the other four methods used, indicating that this interpolation method best represents the spatial distribution of monthly rainfall for the database.
MAGNETIC VERSUS ELECTRICAL STIMULATION IN THE INTERPOLATION TWITCH TECHNIQUE OF ELBOW FLEXORS
Directory of Open Access Journals (Sweden)
Sofia I. Lampropoulou
2012-12-01
Full Text Available The study compared peripheral magnetic with electrical stimulation of the biceps brachii m. (BB) in the single-pulse Interpolation Twitch Technique (ITT). 14 healthy participants (31±7 years) participated in a within-subjects repeated-measures design study. Single, constant-current electrical and magnetic stimuli were delivered over the motor point of BB with supramaximal intensity (20% above maximum) at rest and at various levels of voluntary contraction. Force measurements from right elbow isometric flexion and muscle electromyograms (EMG) from the BB, the triceps brachii m. (TB) and the abductor pollicis brevis m. (APB) were obtained. The twitch forces at rest and maximal contractions, the twitch force-voluntary force relationship, the M-waves and the voluntary activation (VA) of BB between magnetic and electrical stimulation were compared. The mean amplitude of the twitches evoked at MVC was not significantly different between electrical (0.62 ± 0.49 N) and magnetic (0.81 ± 0.49 N) stimulation (p > 0.05), and the maximum VA of BB was comparable between electrical (95%) and magnetic (93%) stimulation (p > 0.05). No differences (p > 0.05) were revealed in the BB M-waves between electrical (13.47 ± 0.49 mV·ms) and magnetic (12.61 ± 0.58 mV·ms) stimulation. The TB M-waves were also similar (p > 0.05), but electrically evoked APB M-waves were significantly larger than those evoked by magnetic stimulation (p < 0.05). The twitch-voluntary force relationship over the range of MVCs was best described by non-linear functions for both electrical and magnetic stimulation. The electrically evoked resting twitches were consistently larger in amplitude than the magnetically evoked ones (mean difference 3.1 ± 3.34 N, p < 0.05). Reduction of the inter-electrode distance reduced the twitch amplitude by 6.5 ± 6.2 N (p < 0.05). The fundamental similarities in voluntary activation assessment of BB with peripheral electrical and magnetic stimulation point towards a promising
Directory of Open Access Journals (Sweden)
J Swain
2017-12-01
Full Text Available The Indian Space Research Organization launched Oceansat-2 on 23 September 2009, and the scatterometer onboard was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. Observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and are useful for driving state-of-the-art numerical models for simulating ocean state if assimilated/blended with weather prediction model products. In this study, an efficient interpolation technique using inverse distance and time is demonstrated with the Oceansat-2 wind measurements alone for a selected month, June 2010, to generate gridded outputs. As the data are available only along the satellite tracks and there are obvious data gaps for various other reasons, the Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hour wind fields for the global oceans were generated at 1 × 1 degree grid resolution. Such interpolated wind fields can be used to drive state-of-the-art numerical models to predict/hindcast ocean state, so as to test the utility/performance of satellite measurements alone in the absence of blended fields. The technique can be tested for other satellites that provide both wind speed and direction data. However, the accuracy of the input winds is expected to have a perceptible influence on the predicted ocean-state parameters. Here, attempts are also made to compare the interpolated Oceansat-2 winds with available buoy measurements; they were found to be in reasonably good agreement, with a correlation coefficient R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
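A hedged sketch of an inverse-distance-and-time weighting scheme for gridding along-track wind observations to one analysis time, in the spirit of the technique above. The distance and time exponents, search radii, and the synthetic observations are assumptions, not the study's settings.

```python
import numpy as np

rng = np.random.default_rng(9)
# Synthetic along-track observations: lon, lat, time offset from analysis (hours), speed.
obs_lon = rng.uniform(60, 90, 500)
obs_lat = rng.uniform(-10, 20, 500)
obs_dt = rng.uniform(-12, 12, 500)
obs_spd = rng.uniform(2, 15, 500)

def idt_weighting(lon0, lat0, p=2, q=1, max_km=500.0, max_hr=12.0):
    """Inverse distance-and-time weighted wind speed at one grid node."""
    km_per_deg = 111.0                                    # rough conversion factor
    dist = km_per_deg * np.hypot((obs_lon - lon0) * np.cos(np.radians(lat0)),
                                 obs_lat - lat0)
    mask = (dist <= max_km) & (np.abs(obs_dt) <= max_hr)
    if not mask.any():
        return np.nan
    w = 1.0 / (np.maximum(dist[mask], 1e-3) ** p *
               np.maximum(np.abs(obs_dt[mask]), 0.1) ** q)
    return float(w @ obs_spd[mask] / w.sum())

grid_value = idt_weighting(75.0, 5.0)         # one node of the 1 x 1 degree grid
```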
Directory of Open Access Journals (Sweden)
Pakhnutov I.A.
2017-04-01
Full Text Available The paper deals with iterative interpolation methods in the form of recursive procedures defined by simple basis functions (the interpolation basis), which are not necessarily real valued. These basis functions are of essentially arbitrary type, chosen according to the needs and considerations of the user. The studied interpolant construction is versatile: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis interpolation functions is as wide as possible, since it is subject only to nonessential restrictions. In particular, the interpolation method considered coincides with traditional polynomial interpolation (mimicking the Lagrange method) in the real one-dimensional case, or with rational, exponential, etc. interpolation in other cases. The interpolation, as an iterative process, is fairly flexible and allows the procedure to change the type of interpolation depending on the node number in a given set. Linear interpolation basis options (and perhaps some nonlinear ones) allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices; the interpolated data can also be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.
Martinez, G.; Vanderlinden, K.; Ordóñez, R.; Muriel, J. L.
2009-04-01
Soil organic carbon (SOC) spatial characterization is necessary to evaluate under what circumstances soil acts as a source or sink of carbon dioxide. However, at the field or catchment scale it is hard to accurately characterize its spatial distribution since large numbers of soil samples are necessary. As an alternative, near-surface geophysical sensor-based information can improve the spatial estimation of soil properties at these scales. Electromagnetic induction (EMI) sensors provide non-invasive and non-destructive measurements of the soil apparent electrical conductivity (ECa), which depends under non-saline conditions on clay content, water content or SOC, among other properties that determine the electromagnetic behavior of the soil. This study deals with the possible use of ECa-derived maps to improve SOC spatial estimation by Simple Kriging with varying local means (SKlm). Field work was carried out in a vertisol in SW Spain. The field is part of a long-term tillage experiment set up in 1982 with three replicates of conventional tillage (CT) and Direct Drilling (DD) plots with unit dimensions of 15 x 65 m. Shallow and deep (up to 0.8 m depth) apparent electrical conductivity (ECas and ECad, respectively) were measured using the EM38-DD EMI sensor. Soil samples were taken from the upper horizon and analyzed for their SOC content. Correlation coefficients of ECas and ECad with SOC were low (0.331 and 0.175) due to the small range of SOC values and possibly also to the different support of the ECa and SOC data. The ECas values in particular were higher in the DD plots. The normalized ECa difference (ΔECa), calculated as the difference between the normalized ECas and ECad values, clearly distinguished the CT and DD plots, with the DD plots showing positive ΔECa values and the CT plots negative ΔECa values. The field was stratified using fuzzy k-means (FKM) classification of ΔECa (FKM1), and of ECas and ECad (FKM2). The FKM1 map mainly showed the difference between
Energy Technology Data Exchange (ETDEWEB)
Mol, Antonio Carlos A., E-mail: mol@ien.gov.br [Comissao Nacional de Energia Nuclear, Instituto de Engenharia Nuclear Rua Helio de Almeida, 75, Ilha do Fundao, P.O. Box 68550, 21941-906 Rio de Janeiro, RJ (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores/CNPq (Brazil); Pereira, Claudio Marcio N.A., E-mail: cmnap@ien.gov.br [Comissao Nacional de Energia Nuclear, Instituto de Engenharia Nuclear Rua Helio de Almeida, 75, Ilha do Fundao, P.O. Box 68550, 21941-906 Rio de Janeiro, RJ (Brazil); Instituto Nacional de Ciencia e Tecnologia de Reatores Nucleares Inovadores/CNPq (Brazil); Freitas, Victor Goncalves G. [Universidade Federal do Rio de Janeiro, Programa de Engenharia Nuclear, Rio de Janeiro, RJ (Brazil); Jorge, Carlos Alexandre F., E-mail: calexandre@ien.gov.br [Comissao Nacional de Energia Nuclear, Instituto de Engenharia Nuclear Rua Helio de Almeida, 75, Ilha do Fundao, P.O. Box 68550, 21941-906 Rio de Janeiro, RJ (Brazil)
2011-02-15
This paper reports the most recent development results of a simulation tool for assessment of radiation dose exposure of nuclear plant personnel, using artificial intelligence and virtual reality technologies. The main purpose of this tool is to support the training of nuclear plant personnel and to optimize working tasks so as to minimise the received dose. A finer grid of measurement points was considered within the nuclear plant's room, for different power operating conditions. Further, an intelligent system was developed, based on neural networks, to interpolate dose rate values among measured points. The intelligent dose prediction system is thus able to improve the simulation of the dose received by personnel. This work describes the improvements implemented in this simulation tool.
International Nuclear Information System (INIS)
Mol, Antonio Carlos A.; Pereira, Claudio Marcio N.A.; Freitas, Victor Goncalves G.; Jorge, Carlos Alexandre F.
2011-01-01
This paper reports the most recent development results of a simulation tool for assessment of radiation dose exposure of nuclear plant personnel, using artificial intelligence and virtual reality technologies. The main purpose of this tool is to support the training of nuclear plant personnel and to optimize working tasks so as to minimise the received dose. A finer grid of measurement points was considered within the nuclear plant's room, for different power operating conditions. Further, an intelligent system was developed, based on neural networks, to interpolate dose rate values among measured points. The intelligent dose prediction system is thus able to improve the simulation of the dose received by personnel. This work describes the improvements implemented in this simulation tool.
Directory of Open Access Journals (Sweden)
Huda M. Madhloom
2017-12-01
Full Text Available The aim of this research was to simulate the water quality along the lower course of the Diyala River using Geographic Information Systems (GIS) techniques. For this purpose, samples were taken at 24 sites along the study area. The parameters total dissolved solids (T.D.S.), total suspended solids (T.S.S.), iron (Fe), copper (Cu), chromium (Cr), and manganese (Mn) were considered. Water samples were collected on a monthly basis for a duration of five years. The adopted analysis approach was tested by calculating the mean absolute error (MAE) and the correlation coefficient (R) between observed water samples and predicted results. The result showed a percentage error of less than 10% and a significant correlation at R > 89% for all pollutant indicators. It was concluded that the accuracy of the applied model in simulating the river pollutants can decrease the number of monitoring stations to 50%. Additionally, a distribution map of the concentration results indicated that many of the major pollution indicators did not satisfy the river water quality standards.
International Nuclear Information System (INIS)
Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkin, Halim; Çevik, Uğur
2015-01-01
In this study, the suitability of geostatistical estimation methods is compared for investigating and imaging natural background radiation using the minimum number of data. Artvin province, which has quite hilly terrain and a wide variety of soils and is located in the northeast of Turkey, is selected as the study area. The outdoor gamma dose rate (OGDR), which is an important determinant of the environmental radioactivity level, is measured at 204 stations. The spatial structure of OGDR is determined by anisotropic, isotropic and residual variograms. Ordinary kriging (OK) and universal kriging (UK) interpolation estimates were calculated with the help of the model parameters obtained from these variograms. In OK, calculations are based only on the positions of the points where samples are taken, whereas in the UK technique, the general soil groups and altitude values that directly affect OGDR are also included in the calculations. When the two methods are evaluated on the basis of their performance, the UK model (r = 0.88, p < 0.001) gives considerably better results than the OK model (r = 0.64, p < 0.001). In addition, the maps created at the end of the study show that local changes are better reflected by the UK method than by the OK method, and its error variance is found to be lower. - Highlights: • The spatial dispersion of gamma dose rates in Artvin, which has some of the most rugged terrain in Turkey, was studied. • The performance of different geostatistical methods (OK and UK) for the dispersion of gamma dose rates was compared. • Estimates were calculated for unsampled points using the geostatistical models, and the results were mapped. • The general radiological structure was determined in much less time and at lower cost than with experimental methods. • When the theoretical methods are evaluated, UK gives more descriptive results than OK.
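A hedged sketch comparing ordinary and universal kriging with pykrige. Here UK uses a regional linear drift as a simple stand-in for the soil-group and altitude information used in the study, and the dose-rate measurements are synthetic, so this only illustrates the mechanics of the comparison.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging
from pykrige.uk import UniversalKriging

rng = np.random.default_rng(10)
x = rng.uniform(0, 80, 204)                  # easting (km), illustrative
y = rng.uniform(0, 60, 204)                  # northing (km)
ogdr = 80 + 0.4 * x + rng.normal(0, 8, 204)  # gamma dose rate (nGy/h), synthetic trend

gx = np.linspace(0, 80, 81)
gy = np.linspace(0, 60, 61)

ok = OrdinaryKriging(x, y, ogdr, variogram_model="spherical")
ok_grid, ok_var = ok.execute("grid", gx, gy)

# Universal kriging with a regional linear drift (stand-in for soil/altitude drift).
uk = UniversalKriging(x, y, ogdr, variogram_model="spherical",
                      drift_terms=["regional_linear"])
uk_grid, uk_var = uk.execute("grid", gx, gy)

print(float(ok_var.mean()), float(uk_var.mean()))   # compare mean kriging variances
```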
Lamb, David W.; Mengersen, Kerrie
2016-01-01
Modern soil mapping is characterised by the need to interpolate point-referenced (geostatistical) observations and the availability of large numbers of environmental characteristics for consideration as covariates to aid this interpolation. Modelling tasks of this nature also occur in other fields, such as biogeography and environmental science. This analysis employs the Least Angle Regression (LAR) algorithm for fitting Least Absolute Shrinkage and Selection Operator (LASSO) penalized Multiple Linear Regression models. This analysis demonstrates the efficiency of the LAR algorithm at selecting covariates to aid the interpolation of geostatistical soil carbon observations. Where an exhaustive search of the models that could be constructed from 800 potential covariate terms and 60 observations would be prohibitively demanding, LASSO variable selection is accomplished with trivial computational investment. PMID:27603135
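A minimal sketch of LASSO covariate selection via the LAR algorithm using scikit-learn's LassoLarsCV, on a synthetic stand-in for the setting above (60 observations, 800 candidate covariate terms). The data, the number of truly informative covariates, and the cross-validation setting are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LassoLarsCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n_obs, n_cov = 60, 800
X = rng.normal(size=(n_obs, n_cov))          # candidate covariate terms (synthetic)
true_beta = np.zeros(n_cov)
true_beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]  # only a few covariates truly matter
soc = X @ true_beta + rng.normal(0, 0.5, n_obs)   # soil carbon observations (synthetic)

Xs = StandardScaler().fit_transform(X)
model = LassoLarsCV(cv=5).fit(Xs, soc)       # LAR path with cross-validated penalty

selected = np.flatnonzero(model.coef_)       # covariates retained by the LASSO
print(len(selected), selected[:10])
```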
Rufo, Montaña; Antolín, Alicia; Paniagua, Jesús M; Jiménez, Antonio
2018-04-01
A comparative study was made of three methods of interpolation - inverse distance weighting (IDW), spline and ordinary kriging - after optimization of their characteristic parameters. These interpolation methods were used to represent the electric field levels for three emission frequencies (774 kHz, 900 kHz, and 1107 kHz) and for the electrical stimulation quotient, Q_E, characteristic of complex electromagnetic environments. Measurements were made with a spectrum analyser in a village in the vicinity of medium-wave radio broadcasting antennas. The accuracy of the models was quantified by comparing their predictions with levels measured at the control points not used to generate the models. The results showed that optimizing the characteristic parameters of each interpolation method allows any of them to be used. However, the best results, in terms of the regression coefficient between each model's predictions and the actual control point field measurements, were obtained with the IDW method. Copyright © 2018 Elsevier Inc. All rights reserved.
Precipitation interpolation in mountainous areas
Kolberg, Sjur
2015-04-01
Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26,000 km² mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While the temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R² values are negative for spatial correspondence and around 0.15 for temporal correspondence. Despite largely violated assumptions, plain kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistical reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km², higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
NOAA Optimum Interpolation 1/4 Degree Daily Sea Surface Temperature (OISST) Analysis, Version 2
National Oceanic and Atmospheric Administration, Department of Commerce — This high-resolution sea surface temperature (SST) analysis product was developed using an optimum interpolation (OI) technique. The SST analysis has a spatial grid...
Directory of Open Access Journals (Sweden)
Xuehong Chen
2014-03-01
Full Text Available There have been many studies and much attention paid to spatial sharpening for thermal imagery. Among them, TsHARP, based on the good correlation between vegetation index and land surface temperature (LST), is regarded as a standard technique because of its operational simplicity and effectiveness. However, as LST is affected by other factors (e.g., soil moisture) in the areas with low vegetation cover, these areas cannot be well sharpened by TsHARP. Thin plate spline (TPS) is another popular downscaling technique for surface data. It has been shown to be accurate and robust for different datasets; however, it has not yet been attempted in thermal sharpening. This paper proposes to combine the TsHARP and TPS methods to enhance the advantages of each. The spatially explicit errors of these two methods were firstly estimated in theory, and then the results of TPS and TsHARP were combined with the estimation of their errors. The experiments performed across various landscapes and data showed that the proposed combined method performs more robustly and accurately than TsHARP.
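A minimal sketch of the TPS half of such a scheme, assuming SciPy's RBFInterpolator with a thin-plate-spline kernel; the coarse LST field is synthetic, and the TsHARP regression and error weighting are omitted:

    # Sketch: thin plate spline (TPS) downscaling of a coarse LST field to a
    # finer grid. The coarse field is synthetic; combining with TsHARP as in
    # the paper is not shown here.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(3)
    xc, yc = np.meshgrid(np.arange(0, 10_000, 1000), np.arange(0, 10_000, 1000))
    coarse_xy = np.column_stack([xc.ravel(), yc.ravel()])           # 1 km grid centres (m)
    coarse_lst = 300 + 0.0005 * coarse_xy[:, 0] + rng.normal(0, 0.3, len(coarse_xy))  # K

    tps = RBFInterpolator(coarse_xy, coarse_lst, kernel="thin_plate_spline")

    xf, yf = np.meshgrid(np.arange(0, 10_000, 100), np.arange(0, 10_000, 100))
    fine_xy = np.column_stack([xf.ravel(), yf.ravel()])             # 100 m target grid
    fine_lst = tps(fine_xy).reshape(xf.shape)                       # sharpened surface
    print(fine_lst.shape, fine_lst.mean())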
International Nuclear Information System (INIS)
Herda, Trent J; Walter, Ashley A; Hoge, Katherine M; Stout, Jeffrey R; Costa, Pablo B; Ryan, Eric D; Cramer, Joel T
2011-01-01
The purpose of this study was to examine the sensitivity and peak force prediction capability of the interpolated twitch technique (ITT) performed during submaximal and maximal voluntary contractions (MVCs) in subjects with the ability to maximally activate their plantar flexors. Twelve subjects performed two MVCs and nine submaximal contractions with the ITT method to calculate percent voluntary inactivation (%VI). Additionally, two MVCs were performed without the ITT. Polynomial models (linear, quadratic and cubic) were applied to the 10–90% VI and 40–90% VI versus force relationships to predict force. Peak force from the ITT MVC was 6.7% less than peak force from the MVC without the ITT. Fifty-eight percent of the 10–90% VI versus force relationships were best fit with nonlinear models; however, all 40–90% VI versus force relationships were best fit with linear models. Regardless of the polynomial model or the contraction intensities used to predict force, all models underestimated the actual force from 22% to 28%. There was low sensitivity of the ITT method at high contraction intensities and the predicted force from polynomial models significantly underestimated the actual force. Caution is warranted when interpreting the % VI at high contraction intensities and predicted peak force from submaximal contractions
Lunardi, Alessandra
2018-01-01
This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.
Multivariate Birkhoff interpolation
Lorentz, Rudolph A
1992-01-01
The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...
Czech Academy of Sciences Publication Activity Database
Štěpánek, P.; Zahradníček, P.; Huth, Radan
2011-01-01
Roč. 115, 1-2 (2011), s. 87-98 ISSN 0324-6329 R&D Projects: GA ČR GA205/08/1619 Institutional research plan: CEZ:AV0Z30420517 Keywords : data quality control * filling missing values * interpolation techniques * climatological time series Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 0.364, year: 2011 http://www.met.hu/en/ismeret-tar/kiadvanyok/idojaras/index.php?id=34
International Nuclear Information System (INIS)
Magnusson Seger, Maria
1998-01-01
We here present LINCON FAST which is an exact method for 3D reconstruction from cone-beam projection data. The new method is compared to the LINCON method which is known to be fast and to give good image quality. Both methods have O(N³ log N) complexity and are based on Grangeat's result which states that the derivative of the Radon transform of the object function can be obtained from cone-beam projections. One disadvantage with LINCON is that the rather computationally intensive chirp z-transform is frequently used. In LINCON FAST, FFT and interpolation in the Fourier domain are used instead, which are less computationally demanding. The computation tools involved in LINCON FAST are solely FFT, 1D eight-point interpolation, multiplicative weighting and tri-linear interpolation. We estimate that LINCON FAST will be 2-2.5 times faster than LINCON. The interpolation filter belongs to a special class of filters developed by us. It turns out that the filter must be very carefully designed to keep a good image quality. Visual inspection of experimental results shows that the image quality is almost the same for LINCON and the new method LINCON FAST. However, it should be remembered that LINCON FAST can never give better image quality than LINCON, since LINCON FAST is designed to approximate LINCON as well as possible. (author)
El Serafy, Ghada; Gaytan Aguilar, Sandra; Ziemba, Alexander
2016-04-01
There is an increasing use of process-based models in the investigation of ecological systems and scenario predictions. The accuracy and quality of these models are improved when run with high spatial and temporal resolution data sets. However, ecological data can often be difficult to collect, which manifests itself through irregularities in the spatial and temporal domain of these data sets. Through the use of the Data INterpolating Empirical Orthogonal Functions (DINEOF) methodology, earth observation products can be improved to have full spatial coverage within the desired domain as well as increased temporal resolution, up to the daily and weekly time steps frequently required by process-based models [1]. The DINEOF methodology results in a degree of error being affixed to the refined data product. In order to determine the degree of error introduced through this process, the suspended particulate matter and chlorophyll-a data from MERIS are used with DINEOF to produce high resolution products for the Wadden Sea. These new data sets are then compared with in-situ and other data sources to determine the error. Also, artificial cloud cover scenarios are conducted in order to substantiate the findings from the MERIS data experiments. Secondly, the accuracy of DINEOF is explored to evaluate the variance of the methodology. The degree of accuracy is combined with the overall error produced by the methodology and reported in an assessment of the quality of DINEOF when applied to resolution refinement of chlorophyll-a and suspended particulate matter in the Wadden Sea. References [1] Sirjacobs, D.; Alvera-Azcárate, A.; Barth, A.; Lacroix, G.; Park, Y.; Nechad, B.; Ruddick, K.G.; Beckers, J.-M. (2011). Cloud filling of ocean colour and sea surface temperature remote sensing products over the Southern North Sea by the Data Interpolating Empirical Orthogonal Functions methodology. J. Sea Res. 65(1): 114-130. dx.doi.org/10.1016/j.seares.2010.08.002
International Nuclear Information System (INIS)
Kong, V; Zhang, H; Jin, J; Ren, L
2014-01-01
Purpose: Synchronized moving grid (SMOG) is a promising technique to reduce scatter and ghost artifacts in cone beam computed tomography (CBCT). However, the grid blocks part of the image information in each projection, and multiple projections at the same gantry angle have to be taken to obtain full information. Because of the continuity of a patient's anatomy in the projection, the blocked information may be estimated by interpolation. This study aims to evaluate an inpainting-based interpolation approach to recover the missing information for CBCT reconstruction. Method: We used a simple region-based inpainting approach to interpolate the missing information. For a pixel to be interpolated, we divided the nearby regions having image information into 6 sub-regions: up-left, up-middle, up-right, down-left, down-middle, and down-right, each with 9 pixels. The average pixel value of each sub-region was calculated. These average values, along with the pixel location, were used to determine the interpolated pixel value. We compared our approach with the Criminisi Exemplar (CE) and total variation (TV) based inpainting techniques. Projection images of a Catphan phantom and a head phantom were used for the comparison. The SMOG was simulated by erasing the information (filling with “0”) of the areas in each projection corresponding to the grid. Results: For the Catphan phantom, the processing time was 178, 45 and 0.98 minutes for CE, TV and our approach, respectively. The signal-to-noise ratio (SNR) was 19.4, 18.5 and 26.4 dB, respectively. For the head phantom, the processing time was 222, 45 and 0.93 minutes for CE, TV and our approach, respectively. The SNR was 24.6, 20.2 and 26.2 dB, respectively. Conclusion: We have developed a simple inpainting-based interpolation approach, which can recover some of the image information for SMOG-based CBCT imaging. This study is supported by NIH/NCI grant 1R01CA166948-01
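A hedged sketch of a region-based fill in this spirit: six 3x3 sub-region means above and below the blocked band, blended by inverse distance. The abstract does not give the exact combination rule, so that part is an assumption, and the projection data are synthetic.

    # Sketch of a region-based inpainting fill: for each blocked pixel, average
    # six 3x3 sub-regions above and below the gap and blend them by inverse
    # distance. The blending rule is an assumption; data are synthetic.
    import numpy as np

    def fill_blocked(proj, mask, offset=3):
        """proj: 2-D projection; mask: True where the grid blocked the signal."""
        out = proj.copy()
        for r, c in zip(*np.nonzero(mask)):
            means, dists = [], []
            for dr in (-offset, offset):                 # sub-regions above / below
                for dc in (-3, 0, 3):                    # left / middle / right
                    r0, c0 = r + dr, c + dc
                    if 1 <= r0 < proj.shape[0] - 1 and 1 <= c0 < proj.shape[1] - 1:
                        if not mask[r0 - 1:r0 + 2, c0 - 1:c0 + 2].any():
                            means.append(proj[r0 - 1:r0 + 2, c0 - 1:c0 + 2].mean())
                            dists.append(np.hypot(dr, dc))
            if means:
                out[r, c] = np.average(means, weights=1.0 / np.asarray(dists))
        return out

    truth = np.random.default_rng(4).uniform(0.2, 1.0, size=(64, 64))
    mask = np.zeros_like(truth, dtype=bool)
    mask[30:33, :] = True                                # simulated grid shadow
    blocked = np.where(mask, 0.0, truth)                 # blocked pixels filled with 0
    filled = fill_blocked(blocked, mask)
    print("mean abs error in filled band:", np.abs(filled - truth)[mask].mean())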
Directory of Open Access Journals (Sweden)
Arnaud Grüss
2018-01-01
Full Text Available To be able to simulate spatial patterns of predator-prey interactions, many spatially-explicit ecosystem modeling platforms, including Atlantis, need to be provided with distribution maps defining the annual or seasonal spatial distributions of functional groups and life stages. We developed a methodology combining extrapolation and interpolation of the predictions made by statistical habitat models to produce distribution maps for the fish and invertebrates represented in the Atlantis model of the Gulf of Mexico (GOM) Large Marine Ecosystem (LME) (“Atlantis-GOM”). This methodology consists of: (1) compiling a large monitoring database, gathering all the fisheries-independent and fisheries-dependent data collected in the northern (U.S.) GOM since 2000; (2) compiling a large environmental database, storing all the environmental parameters known to influence the spatial distribution patterns of fish and invertebrates of the GOM; (3) fitting binomial generalized additive models (GAMs) to the large monitoring and environmental databases, and geostatistical binomial generalized linear mixed models (GLMMs) to the large monitoring database; and (4) employing GAM predictions to infer spatial distributions in the southern GOM, and GLMM predictions to infer spatial distributions in the U.S. GOM. Thus, our methodology allows for reasonable extrapolation in the southern GOM based on a large amount of monitoring and environmental data, and for interpolation in the U.S. GOM accurately reflecting the probability of encountering fish and invertebrates in that region. We used an iterative cross-validation procedure to validate GAMs. When a GAM did not pass the validation test, we employed a GAM for a related functional group/life stage to generate distribution maps for the southern GOM. In addition, no geostatistical GLMMs were fit for the functional groups and life stages whose depth, longitudinal and latitudinal ranges within the U.S. GOM are not entirely covered by
International Nuclear Information System (INIS)
Garcia, Francisco; Palacio, Carlos; Garcia, Uriel
2012-01-01
Multivariate statistical techniques were used to investigate the temporal and spatial variations of water quality at the Santa Marta coastal area, where a submarine outfall that discharges 1 m³/s of domestic wastewater is located. Two-way analysis of variance (ANOVA), cluster and principal component analysis, and kriging interpolation were considered for this report. Temporal variation showed two heterogeneous periods: from December to April, and July, when the concentrations of the water quality parameters are higher; during the rest of the year (May, June, August-November) they were significantly lower. The spatial variation revealed two areas where the water quality differs; this difference is related to the proximity to the submarine outfall discharge.
International Nuclear Information System (INIS)
Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P
2009-01-01
A unique, digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals that are sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
Application of ordinary kriging for interpolation of micro-structured technical surfaces
International Nuclear Information System (INIS)
Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg
2013-01-01
Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of different effects. These data points need to be interpolated to obtain a complete areal dataset for further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of the different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model that suggests providing an unbiased, linear estimation with a minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface. This comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adapts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
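For orientation, a NumPy-only sketch of ordinary kriging (not the paper's implementation); the exponential variogram and its parameters are assumed here rather than fitted from the surface, as they would be in practice:

    # Minimal ordinary-kriging sketch for filling invalid points in a height map.
    # The variogram model and parameters are assumptions; data are synthetic.
    import numpy as np

    def variogram(h, nugget=0.0, sill=1.0, rng_=20.0):
        return nugget + sill * (1.0 - np.exp(-h / rng_))      # exponential model

    def ordinary_kriging(xy_obs, z_obs, xy_new):
        n = len(xy_obs)
        d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=2)
        K = np.ones((n + 1, n + 1)); K[:n, :n] = variogram(d_oo); K[-1, -1] = 0.0
        est = np.empty(len(xy_new))
        for i, p in enumerate(xy_new):
            rhs = np.append(variogram(np.linalg.norm(xy_obs - p, axis=1)), 1.0)
            w = np.linalg.solve(K, rhs)[:n]                   # kriging weights
            est[i] = w @ z_obs
        return est

    rng = np.random.default_rng(5)
    xy = rng.uniform(0, 100, size=(200, 2))                   # valid measurement points (um)
    z = np.sin(xy[:, 0] / 15) + 0.05 * rng.normal(size=200)   # surface heights (um)
    holes = rng.uniform(0, 100, size=(10, 2))                 # invalid points to re-estimate
    print(ordinary_kriging(xy, z, holes))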
Göl, Ceyhun; Bulut, Sinan; Bolat, Ferhat
2017-10-01
The purpose of this research is to compare the spatial variability of soil organic carbon (SOC) in four adjacent land uses, including the cultivated area, the grassland area, the plantation area and the natural forest area, in the semi-arid Black Sea backward region of Turkey. Some of the soil properties, including total nitrogen, SOC, soil organic matter, and bulk density, were measured on a grid with a 50 m sampling distance in the top soil (0-15 cm depth). Accordingly, a total of 120 samples were taken from the four adjacent land uses. Data were analyzed using geostatistical methods. The methods used were: block kriging (BK), co-kriging (CK) with organic matter, total nitrogen and bulk density as auxiliary variables, and inverse distance weighting (IDW) with powers of 1, 2 and 4. The methods were compared using performance criteria that included the root mean square error (RMSE), the mean absolute error (MAE) and the coefficient of correlation (r). The one-way ANOVA test showed that differences between the natural forest (0.6653 ± 0.2901) and plantation forest (0.7109 ± 0.2729) areas and the grassland (1.3964 ± 0.6828) and cultivated (1.5851 ± 0.5541) areas were statistically significant at the 0.05 level (F = 28.462). The best model for describing the spatial variation of SOC was CK, with the lowest error criteria (RMSE = 0.3342, MAE = 0.2292) and the highest coefficient of correlation (r = 0.84). The spatial structure of SOC could be well described by the spherical model. The nugget effect indicated that SOC was moderately spatially dependent in the study area. The error distributions of the model showed that the improved model was unbiased in predicting the spatial distribution of SOC. This study's results revealed that an explanatory variable linked to SOC increased the success of the spatial interpolation methods. In subsequent studies, this should be taken into account to obtain more accurate outputs.
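A small sketch of the comparison criteria themselves (RMSE, MAE, r); the two prediction vectors stand in for cross-validated CK and IDW outputs and are synthetic:

    # Sketch of the performance criteria: RMSE, MAE and the correlation r between
    # cross-validation predictions and observed SOC. Predictions are placeholders.
    import numpy as np

    def scores(obs, pred):
        err = pred - obs
        return {"RMSE": np.sqrt(np.mean(err ** 2)),
                "MAE": np.mean(np.abs(err)),
                "r": np.corrcoef(obs, pred)[0, 1]}

    rng = np.random.default_rng(6)
    soc_obs = rng.normal(1.0, 0.4, size=120)                   # observed SOC (%)
    pred_ck = soc_obs + rng.normal(0, 0.20, size=120)          # e.g. co-kriging predictions
    pred_idw = soc_obs + rng.normal(0, 0.35, size=120)         # e.g. IDW predictions
    for name, pred in [("CK", pred_ck), ("IDW", pred_idw)]:
        print(name, {k: round(v, 3) for k, v in scores(soc_obs, pred).items()})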
International Nuclear Information System (INIS)
Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica
1990-01-01
This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It covers a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of emitter-coupled logic (ECL) and analogue high-frequency techniques. The difficulty accompanying the use of ECL logic is keeping the interconnections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into a start and a stop signal. The analog part consists of a time-to-amplitude converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs.
Interpolation functors and interpolation spaces
Brudnyi, Yu A
1991-01-01
The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...
Yeşilkanat, Cafer Mert; Kobya, Yaşar; Taşkın, Halim; Çevik, Uğur
2017-09-01
The aim of this study was to determine the spatial risk dispersion of the ambient gamma dose rate (AGDR) by using both artificial neural network (ANN) and fuzzy logic (FL) methods, compare the performances of the methods, make dose estimations for intermediate stations with no previous measurements, and create dose rate risk maps of the study area. In order to determine the dose distribution by using artificial neural networks, two main network families and five different network structures were used: feed-forward ANNs, namely the multi-layer perceptron (MLP), radial basis functional neural network (RBFNN) and quantile regression neural network (QRNN), and recurrent ANNs, namely Jordan networks (JN) and Elman networks (EN). In the evaluation of the estimation performance obtained for the test data, all models appear to give similar results. According to the cross-validation results obtained for explaining the AGDR distribution, Pearson's r coefficients were calculated as 0.94, 0.91, 0.89, 0.91, 0.91 and 0.92 and RMSE values were calculated as 34.78, 43.28, 63.92, 44.86, 46.77 and 37.92 for MLP, RBFNN, QRNN, JN, EN and FL, respectively. In addition, spatial risk maps showing the distribution of the AGDR over the study area were created by all models and the results were compared with the geological, topological and soil structure.
Directory of Open Access Journals (Sweden)
Marcelo R. Viola
2010-09-01
Full Text Available The mapping of weather elements, especially rainfall, needs constant studies to improve the performance of interpolators and to obtain unbiased maps. This work aimed to evaluate the performance of spatial interpolators for mean monthly rainfall, mean rainfall during the dry season and mean annual rainfall in the Minas Gerais State: ordinary kriging, evaluated after semi-variogram modeling; co-kriging, introducing the altitude above sea level as a secondary variable; statistical modeling, in which the mean precipitation can be estimated from geographical coordinates; and the inverse square distance. In this study, data sets from 232 pluviometric stations in the Minas Gerais area were used to apply each of the interpolators mentioned, and 70 stations were used for validation based on the mean absolute error, together with a digital elevation model with 270 m resolution. All the methodologies performed well, with mean absolute errors ranging from 12.84 to 19.96%; co-kriging stood out, obtaining the lowest error in 50% of the situations analysed.
Directory of Open Access Journals (Sweden)
Larry Niño
2011-06-01
Full Text Available OBJECTIVE: To design and implement a surveillance method for locating Aedes aegypti infestation foci with the use of larvae traps and spatial interpolation techniques, which facilitate the ongoing estimation of vector abundance continuously in space from counts of the individuals collected in the study area. METHODS: A total of 228 larvae traps were installed, at a rate of one per block, in the most densely populated area of commune five of Villavicencio (Meta). With the regionalized information on larvae abundance, spatial interpolations were conducted with the Voronoi polygon, ordinary kriging, and inverse distance weighting techniques. RESULTS: An alternative methodology for surveillance of the dengue vector is presented, based on the use of larvae traps and spatial interpolation techniques, from which surface maps supported by point observations were obtained. CONCLUSIONS: The results show that this strategy has advantages over the indices normally used, since it allows the level of vector infestation, and therefore the risk of dengue transmission according to the degree of A. aegypti infestation, to be visualized continuously. Its adoption is expected to contribute to more effective planning, optimization and evaluation of prevention and control activities.
Directory of Open Access Journals (Sweden)
M. H. Nazarifar
2014-01-01
Full Text Available Water is the main constraint for the production of agricultural crops. The temporal and spatial variations in water requirements for agricultural products are limiting factors in the study of the optimal use of water resources in regional planning and management. However, due to the unfavorable distribution and density of meteorological stations, it is not possible to monitor the regional variations precisely. Therefore, there is a need to estimate the evapotranspiration of crops at places where meteorological data are not available and then extend the findings from points of measurement to the regional scale. Geostatistical methods are among those that can be used for estimation of evapotranspiration at the regional scale. The present study investigates different geostatistical methods for the temporal and spatial estimation of the water requirements of the wheat crop in different periods. The study employs the data provided by 16 synoptic and climatology meteorological stations in Hamadan province in Iran. Evapotranspiration for each month and for the growth period was determined using the Penman-Monteith and Thornthwaite methods for different water periods based on the Standardized Precipitation Index (SPI). Among the available geostatistical methods, three were selected and analyzed using GS+ software: kriging, co-kriging, and inverse distance weighting. Analysis and selection of the most suitable geostatistical method were performed based on two measures, namely the mean absolute error (MAE) and the mean bias error (MBE). The findings suggest that, in general, during the drought period, kriging is the proper method for estimating water requirements for the six months January, February, April, May, August, and December, whereas the weighted moving average is a better estimation method for March, June, September, and October. In addition, kriging is the best method for July. In normal conditions, kriging is suitable for April, August, December
Linear Methods for Image Interpolation
Pascal Getreuer
2011-01-01
We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
Directory of Open Access Journals (Sweden)
Sumedh Dhabu
2015-09-01
Full Text Available This paper presents the design and FPGA implementation of an interpolated continuously variable fractional delay structure based filter (ICVFD filter) with fine control over the cutoff frequency. In the ICVFD filter, each unit delay of the prototype lowpass filter is replaced by a continuously variable fractional delay (CVFD) element proposed in this paper. The CVFD element requires the same number of multiplications as the second-order fractional delay structure used in the existing fractional delay structure based variable filter (FDS-based filter); however, it provides fractional delays corresponding to higher-order fractional delay structures. Hence, the proposed ICVFD filter provides a wider cutoff frequency range compared to the FDS-based filter. The ICVFD filter is also capable of providing variable bandpass and highpass responses. We use a two-stage approach for the FPGA implementation of the ICVFD filter. First, we use pipelining stages to shorten the critical path and improve the operating frequency. Then, we make use of a specific hardware resource, i.e. the RAM-based Shift Register (SRL), to further improve the operating frequency and resource usage.
Kriging for interpolation of sparse and irregularly distributed geologic data
Energy Technology Data Exchange (ETDEWEB)
Campbell, K.
1986-12-31
For many geologic problems, subsurface observations are available only from a small number of irregularly distributed locations, for example from a handful of drill holes in the region of interest. These observations will be interpolated one way or another, for example by hand-drawn stratigraphic cross-sections, by trend-fitting techniques, or by simple averaging which ignores spatial correlation. In this paper we consider an interpolation technique for such situations which provides, in addition to point estimates, the error estimates which are lacking from other ad hoc methods. The proposed estimator is like a kriging estimator in form, but because direct estimation of the spatial covariance function is not possible, the parameters of the estimator are selected by cross-validation. Its use in estimating subsurface stratigraphy at a candidate site for a geologic waste repository provides an example.
Energy Technology Data Exchange (ETDEWEB)
Yu Jinshan; Zhang Ruitao; Zhang Zhengping; Wang Yonglu; Zhu Can; Zhang Lei; Yu Zhou; Han Yong, E-mail: yujinshan@yeah.net [National Laboratory of Analog IC' s, Chongqing 400060 (China)
2011-01-15
A digital calibration technique for an ultra high-speed folding and interpolating analog-to-digital converter in 0.18-μm CMOS technology is presented. Similar digital calibration techniques are applied to the high 3-bit flash converter and the low 5-bit folding and interpolating converter, based on a well-designed calibration reference, calibration DAC and comparators. SPICE simulation and measured results show that the ADC produces 5.9 ENOB with calibration disabled and 7.2 ENOB with calibration enabled for a high-frequency, wide-bandwidth analog input. (semiconductor integrated circuits)
I. Kuba; J. Zavacky; J. Mihalik
1995-01-01
This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution enhancement are proposed. Finally, experimental results of computer simulations are presented.
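As a small illustration of the idea (not from the paper), separable B-spline interpolation can be used to increase image resolution, e.g. with scipy.ndimage.zoom, where the order argument is the spline degree; the test image below is synthetic:

    # Sketch: increasing image resolution with separable B-spline interpolation.
    import numpy as np
    from scipy.ndimage import zoom

    x = np.linspace(0, 4 * np.pi, 64)
    img = np.sin(x)[:, None] * np.cos(x)[None, :]      # 64x64 synthetic image

    img2x_linear = zoom(img, 2, order=1)               # bilinear (degree-1 B-spline)
    img2x_cubic = zoom(img, 2, order=3)                # bicubic B-spline
    print(img.shape, "->", img2x_cubic.shape)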
Visualization techniques for spatial probability density function data
Directory of Open Access Journals (Sweden)
Udeepta D Bordoloi
2006-01-01
Full Text Available Novel visualization methods are presented for spatial probability density function data. These are spatial datasets, where each pixel is a random variable and has multiple samples which are the results of experiments on that random variable. We use clustering as a means to reduce the information contained in these datasets, and present two different ways of interpreting and clustering the data. The clustering methods are used on two datasets, and the results are discussed with the help of visualization techniques designed for the spatial probability data.
Vedadi, Farhang; Shirani, Shahram
2014-01-01
A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
SPLINE, Spline Interpolation Function
International Nuclear Information System (INIS)
Allouard, Y.
1977-01-01
1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10
Kaurkin, M. N.; Ibrayev, R. A.; Belyaev, K. P.
2018-01-01
A parallel realization of the Ensemble Optimal Interpolation (EnOI) data assimilation (DA) method in conjunction with the eddy-resolving global circulation model is implemented. The results of DA experiments in the North Atlantic with the assimilation of the Archiving, Validation and Interpretation of Satellite Oceanographic (AVISO) data from the Jason-1 satellite are analyzed. The results of simulation are compared with the independent temperature and salinity data from the ARGO drifters.
Spatial Angular Compounding Technique for H-Scan Ultrasound Imaging.
Khairalseed, Mawia; Xiong, Fangyuan; Kim, Jung-Whan; Mattrey, Robert F; Parker, Kevin J; Hoyt, Kenneth
2018-01-01
H-Scan is a new ultrasound imaging technique that relies on matching a model of pulse-echo formation to the mathematics of a class of Gaussian-weighted Hermite polynomials. This technique may be beneficial in the measurement of relative scatterer sizes and in cancer therapy, particularly for early response to drug treatment. Because current H-scan techniques use focused ultrasound data acquisitions, spatial resolution degrades away from the focal region and inherently affects relative scatterer size estimation. Although the resolution of ultrasound plane wave imaging can be inferior to that of traditional focused ultrasound approaches, the former exhibits a homogeneous spatial resolution throughout the image plane. The purpose of this study was to implement H-scan using plane wave imaging and investigate the impact of spatial angular compounding on H-scan image quality. Parallel convolution filters using two different Gaussian-weighted Hermite polynomials that describe ultrasound scattering events are applied to the radiofrequency data. The H-scan processing is done on each radiofrequency image plane before averaging to get the angular compounded image. The relative strength from each convolution is color-coded to represent relative scatterer size. Given results from a series of phantom materials, H-scan imaging with spatial angular compounding more accurately reflects the true scatterer size caused by reductions in the system point spread function and improved signal-to-noise ratio. Preliminary in vivo H-scan imaging of tumor-bearing animals suggests this modality may be useful for monitoring early response to chemotherapeutic treatment. Overall, H-scan imaging using ultrasound plane waves and spatial angular compounding is a promising approach for visualizing the relative size and distribution of acoustic scattering sources.
Development of Spatial Scaling Technique of Forest Health Sample Point Information
Lee, J. H.; Ryu, J. E.; Chung, H. I.; Choi, Y. Y.; Jeon, S. W.; Kim, S. H.
2018-04-01
Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten the health of forests, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries for forest management. The existing forest ecosystem survey method is a monitoring method based on sample points, and it is difficult to use it for forest management because Korea surveys only a small part of the forest area, which occupies 63.7 % of the country (Ministry of Land Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forests Science, 2015) were used for spatial interpolation. Two widely used methods of interpolation, kriging and inverse distance weighting (IDW), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, the kriging method was the most accurate.
DEVELOPMENT OF SPATIAL SCALING TECHNIQUE OF FOREST HEALTH SAMPLE POINT INFORMATION
Directory of Open Access Journals (Sweden)
J. H. Lee
2018-04-01
Full Text Available Forests provide many goods, ecosystem services, and resources to humans, such as recreation, air purification and water protection functions. In recent years, there has been an increase in the factors that threaten the health of forests, such as global warming due to climate change and environmental pollution, as well as an increase in interest in forests, and efforts are being made in various countries for forest management. The existing forest ecosystem survey method is a monitoring method based on sample points, and it is difficult to use it for forest management because Korea surveys only a small part of the forest area, which occupies 63.7 % of the country (Ministry of Land Infrastructure and Transport Korea, 2016). Therefore, in order to manage large forests, a method of interpolating and spatializing the data is needed. In this study, the 1st Korea Forest Health Management biodiversity Shannon's index data (National Institute of Forests Science, 2015) were used for spatial interpolation. Two widely used methods of interpolation, kriging and inverse distance weighting (IDW), were used to interpolate the biodiversity index. The vegetation indices SAVI, NDVI, LAI and SR were used. As a result, the kriging method was the most accurate.
Directory of Open Access Journals (Sweden)
Lixin Li
2014-09-01
Full Text Available Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate
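A compact sketch of the 'extension approach' described here (not the authors' implementation): time is scaled by an assumed factor c and treated as a third coordinate, so spatiotemporal IDW reduces to 3-D IDW over nearest neighbours found with a k-d tree; the PM2.5 records and the factor c are synthetic assumptions.

    # Sketch of the extension approach: scale time by a factor c, append it as a
    # third coordinate, and run IDW in that 3-D space using a k-d tree.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(7)
    lonlat = rng.uniform([-124.0, 32.0], [-112.0, 49.0], size=(5000, 2))  # monitor locations
    day = rng.integers(0, 365, size=5000).astype(float)                   # day of year
    pm25 = rng.gamma(4.0, 2.5, size=5000)                                 # PM2.5 (ug/m3)

    c = 0.05                                         # space-time scaling factor (assumed)
    tree = cKDTree(np.column_stack([lonlat, c * day]))

    query = np.array([[-118.2, 34.0, c * 200.0]])    # location and day to estimate
    dist, idx = tree.query(query, k=12)              # 12 nearest space-time neighbours
    w = 1.0 / np.maximum(dist, 1e-9) ** 2            # IDW weights with power 2
    est = (w * pm25[idx]).sum(axis=1) / w.sum(axis=1)
    print("estimated PM2.5:", est[0])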
Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard
2014-01-01
Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation
Generalized interpolative quantum statistics
International Nuclear Information System (INIS)
Ramanathan, R.
1992-01-01
A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently
Monotone piecewise bicubic interpolation
International Nuclear Information System (INIS)
Carlson, R.E.; Fritsch, F.N.
1985-01-01
In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
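The one-dimensional counterpart of this idea is readily available; a short sketch using SciPy's PCHIP interpolator, which enforces monotonicity, against an ordinary cubic spline that may overshoot (the data are synthetic):

    # Sketch: monotone piecewise cubic interpolation (1-D analogue of the paper),
    # compared with an ordinary cubic spline on the same monotone data.
    import numpy as np
    from scipy.interpolate import PchipInterpolator, CubicSpline

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.0, 0.1, 0.2, 3.0, 3.1])         # monotone data with a sharp step

    xs = np.linspace(0, 4, 201)
    pchip = PchipInterpolator(x, y)(xs)              # stays monotone
    spline = CubicSpline(x, y)(xs)                   # may overshoot the data
    print("pchip monotone:", bool(np.all(np.diff(pchip) >= -1e-12)),
          "| cubic spline monotone:", bool(np.all(np.diff(spline) >= -1e-12)))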
Linear Methods for Image Interpolation
Directory of Open Access Journals (Sweden)
Pascal Getreuer
2011-09-01
Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
The interpolation method based on endpoint coordinate for CT three-dimensional image
International Nuclear Information System (INIS)
Suto, Yasuzo; Ueno, Shigeru.
1997-01-01
Image interpolation is frequently used to bring the inter-slice resolution closer to the in-plane spatial resolution. Improved quality of reconstructed three-dimensional images can be attained with this technique as a result. Linear interpolation is a well-known and widely used method. The distance-image method, which is a non-linear interpolation technique, is also used to convert CT-value images to distance images. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are first converted to binary images by thresholding, and then sequences of pixels with value 1 are traced in the vertical or horizontal direction. A sequence of pixels with value 1 is defined as a line segment which has a starting point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
Measurement of spatial correlation functions using image processing techniques
International Nuclear Information System (INIS)
Berryman, J.G.
1985-01-01
A procedure for using digital image processing techniques to measure the spatial correlation functions of composite heterogeneous materials is presented. Methods for eliminating undesirable biases and warping in digitized photographs are discussed. Fourier transform methods and array processor techniques for calculating the spatial correlation functions are treated. By introducing a minimal set of lattice-commensurate triangles, a method of sorting and storing the values of three-point correlation functions in a compact one-dimensional array is developed. Examples are presented at each stage of the analysis using synthetic photographs of cross sections of a model random material (the penetrable sphere model) for which the analytical form of the spatial correlations functions is known. Although results depend somewhat on magnification and on relative volume fraction, it is found that photographs digitized with 512 x 512 pixels generally have sufficiently good statistics for most practical purposes. To illustrate the use of the correlation functions, bounds on conductivity for the penetrable sphere model are calculated with a general numerical scheme developed for treating the singular three-dimensional integrals which must be evaluated
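For illustration only (not the paper's array-processor pipeline), the two-point spatial correlation function of a digitized two-phase image can be estimated with an FFT autocorrelation followed by radial averaging; the image below is synthetic:

    # Sketch: two-point correlation S2(r) of a binary (two-phase) image via FFT
    # autocorrelation with periodic boundaries, averaged over direction.
    import numpy as np

    rng = np.random.default_rng(8)
    img = (rng.random((256, 256)) < 0.3).astype(float)      # synthetic two-phase medium

    F = np.fft.fft2(img)
    auto = np.fft.ifft2(F * np.conj(F)).real / img.size     # <f(x) f(x+r)>
    auto = np.fft.fftshift(auto)

    cy, cx = np.array(auto.shape) // 2
    yy, xx = np.indices(auto.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)               # radial bin for each lag
    S2 = np.bincount(r.ravel(), weights=auto.ravel()) / np.bincount(r.ravel())
    print("S2(0) =", S2[0], "(volume fraction);  S2(50) =", S2[50])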
Directory of Open Access Journals (Sweden)
Silvio Jorge Coelho Simões
2012-08-01
Full Text Available The reference evapotranspiration is an important hydrometeorological variable; its measurement is scarce in large portions of the Brazilian territory, which demands the search for alternative methods and techniques for its quantification. In this sense, the present work investigated a method for the spatialization of the reference evapotranspiration using the geostatistical method of kriging, in regions with limited data and hydrometeorological stations. The monthly average reference evapotranspiration was calculated by the Penman-Monteith-FAO equation, based on data from three weather stations located in southern Minas Gerais (Itajubá, Lavras and Poços de Caldas), and subsequently interpolated by ordinary point kriging using the approach "calculate and interpolate". The meteorological data for a fourth station (Três Corações), located within the area of interpolation, were used to validate the reference evapotranspiration interpolated spatially. Due to the reduced number of stations and the consequent impossibility of carrying out variographic analyses, the correlation coefficient (r), index of agreement (d), mean bias error (MBE), root mean square error (RMSE) and a t-test were used for comparison between the calculated and interpolated reference evapotranspiration for the Três Corações station. The results of this comparison indicated that the spatial kriging procedure, even using few stations, allows the reference evapotranspiration to be interpolated satisfactorily and is, therefore, an important tool for agricultural and hydrological applications in regions with a lack of data.
Feature displacement interpolation
DEFF Research Database (Denmark)
Nielsen, Mads; Andresen, Per Rønsholt
1998-01-01
Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Also estimation of missing image data may be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularizations, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.
Extension Of Lagrange Interpolation
Directory of Open Access Journals (Sweden)
Mousa Makey Krady
2015-01-01
Full Text Available Abstract: In this paper we present a generalization of Lagrange interpolation polynomials in higher dimensions using Cramer's formula. The aim is to construct polynomials in space whose error tends to zero.
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
Energy Technology Data Exchange (ETDEWEB)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)
2016-06-15
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
Comparison of Interpolation Methods as Applied to Time Synchronous Averaging
National Research Council Canada - National Science Library
Decker, Harry
1999-01-01
Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...
Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang
2018-02-20
We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, improve the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 μϵ within a spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 μϵ.
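The OFDR-specific processing chain is not reproduced here; the following generic sketch only illustrates the underlying idea the abstract describes, namely that zero-padding before an inverse FFT yields a finer-sampled cross-correlation from which the peak position can be read off more precisely (function name and pad factor are illustrative):

```python
import numpy as np

def interpolated_peak_lag(a, b, pad_factor=8):
    """Locate the cross-correlation peak of two equal-length records on a grid
    pad_factor times finer by zero-padding the spectrum before the inverse FFT."""
    n = len(a)
    spec = np.fft.rfft(a, 2 * n) * np.conj(np.fft.rfft(b, 2 * n))   # linear correlation spectrum
    xcorr = np.fft.irfft(spec, 2 * n * pad_factor)                  # zero-padded inverse FFT
    lags = np.fft.fftfreq(2 * n * pad_factor, d=1.0 / (2 * n))      # lag axis in original samples
    return lags[np.argmax(xcorr)]                                   # sub-sample peak position
```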
Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul
2013-01-01
Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, in which a covariate such as elevation (Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra
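A hedged sketch of the evaluation strategy described (tenfold cross-validation of a thin plate spline with one covariate) is given below. For simplicity the covariate is appended as an extra coordinate, which is a simplification of how covariates usually enter partial thin plate spline climate models; scikit-learn is assumed only for the fold split, and all names are illustrative:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.model_selection import KFold

def tps_cv_rmse(lonlat, covariate, values, n_splits=10, seed=0):
    """Tenfold cross-validated RMSE of a thin-plate-spline surface that treats
    one covariate (e.g. the DEM value at each station) as an extra coordinate."""
    coords = np.column_stack([lonlat, covariate])        # (n_stations, 3)
    values = np.asarray(values, dtype=float)
    errors = []
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(coords):
        tps = RBFInterpolator(coords[train], values[train], kernel="thin_plate_spline")
        errors.append(values[test] - tps(coords[test]))  # held-out residuals
    return np.sqrt(np.mean(np.concatenate(errors) ** 2))
```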
Optimal interpolation method for intercomparison of atmospheric measurements.
Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno
2006-04-01
Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.
International Nuclear Information System (INIS)
Monzo, Jose M.; Lerche, Christoph W.; Martinez, Jorge D.; Esteve, Raul; Toledo, Jose; Gadea, Rafael; Colom, Ricardo J.; Herrero, Vicente; Ferrando, Nestor; Aliaga, Ramon J.; Mateo, Fernando; Sanchez, Filomeno; Mora, Francisco J.; Benlloch, Jose M.; Sebastia, Angel
2009-01-01
PET systems need good time resolution to improve the true event rate, random event rejection, and pile-up rejection. In this study we propose a digital procedure for this task using a low pass filter interpolation plus a Digital Constant Fraction Discriminator (DCFD). We analyzed the best way to implement this algorithm on our dual head PET system and how varying the quality of the acquired signal and electronic noise analytically affects timing resolution. Our detector uses two continuous LSO crystals with a position sensitive PMT. Six signals per detector are acquired using an analog electronics front-end and these signals are processed using an in-house digital acquisition board. The test bench developed simulates the electronics and digital algorithms using Matlab. Results show that electronic noise and other undesired effects have a significant effect on the timing resolution of the system. Interpolated DCFD gives better results than non-interpolated DCFD. In high noise environments, differences are reduced. An optimum delay selection, based on the environment noise, improves time resolution.
Energy Technology Data Exchange (ETDEWEB)
Monzo, Jose M. [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)], E-mail: jmonfer@aaa.upv.es; Lerche, Christoph W.; Martinez, Jorge D.; Esteve, Raul; Toledo, Jose; Gadea, Rafael; Colom, Ricardo J.; Herrero, Vicente; Ferrando, Nestor; Aliaga, Ramon J.; Mateo, Fernando [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Sanchez, Filomeno [Nuclear Medical Physics Group, IFIC Institute, Consejo Superior de Investigaciones Cientificas (CSIC), 46980 Paterna (Spain); Mora, Francisco J. [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain); Benlloch, Jose M. [Nuclear Medical Physics Group, IFIC Institute, Consejo Superior de Investigaciones Cientificas (CSIC), 46980 Paterna (Spain); Sebastia, Angel [Digital Systems Design (DSD) Group, ITACA Institute, Universidad Politecnica de Valencia, Camino de Vera s/n, 46022 Valencia (Spain)
2009-06-01
PET systems need good time resolution to improve the true event rate, random event rejection, and pile-up rejection. In this study we propose a digital procedure for this task using a low pass filter interpolation plus a Digital Constant Fraction Discriminator (DCFD). We analyzed the best way to implement this algorithm on our dual head PET system and how varying the quality of the acquired signal and electronic noise analytically affects timing resolution. Our detector uses two continuous LSO crystals with a position sensitive PMT. Six signals per detector are acquired using an analog electronics front-end and these signals are processed using an in-house digital acquisition board. The test bench developed simulates the electronics and digital algorithms using Matlab. Results show that electronic noise and other undesired effects have a significant effect on the timing resolution of the system. Interpolated DCFD gives better results than non-interpolated DCFD. In high noise environments, differences are reduced. An optimum delay selection, based on the environment noise, improves time resolution.
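The paper's exact filter and delay settings are not given in the abstract; the sketch below only illustrates a generic digital constant fraction discriminator applied to a low-pass interpolated (upsampled) pulse, with illustrative fraction, delay and upsampling values, and it assumes a positive-going pulse:

```python
import numpy as np
from scipy.signal import resample_poly

def dcfd_arrival_time(pulse, fs, fraction=0.3, delay_s=2e-9, upsample=8):
    """Zero-crossing time (seconds) of a digital CFD applied to an upsampled pulse;
    the result carries a constant offset from the CFD delay, identical for all events."""
    x = resample_poly(np.asarray(pulse, dtype=float), upsample, 1)  # low-pass interpolation
    fs_up = fs * upsample
    d = max(1, int(round(delay_s * fs_up)))        # CFD delay in upsampled samples
    cfd = fraction * x[d:] - x[:-d]                # attenuated prompt minus delayed pulse
    k = int(np.argmin(cfd))                        # bottom of the negative lobe
    while k > 0 and cfd[k] < 0:                    # walk back to the +/- zero crossing
        k -= 1
    frac = cfd[k] / (cfd[k] - cfd[k + 1])          # linear sub-sample refinement
    return (k + frac + d) / fs_up
```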
International Nuclear Information System (INIS)
Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica
1990-01-01
This report presents a description of the design of a digital time meter. This time meter should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated at the computer with a Pascal code. On the basis of this the best method was chosen and used in the design. In order to test the principal operation of the circuit a part of the circuit was constructed with which the interpolation could be tested. The remainder of the circuit was simulated with a computer. So there are no data available about the operation of the complete circuit in practice. The interpolation part however is the most critical part, the remainder of the circuit is more or less simple logic. Besides this report also gives a description of the principle of interpolation and the design of the circuit. The measurement results at the prototype are presented finally. (author). 3 refs.; 37 figs.; 2 tabs
SLR and GPS spatial techniques in ITRF. Argentine results.
Actis, Eloy Vicente; Huang, Dongping; Márquez, Raúl; Adarvez, Sonia; Flores, Matías; Brizuela, Diego; Nievas, Jesica; Podestá, Ricardo; Pacheco, Ana M.; Rojas, Hernán Alvis; Yin, Zhiqiang; Li, Jinzeng; Han, Yanben; Liu, Weidong; Wang, Rui
2012-08-01
Over the last 30 years, spatial geodetic techniques have enabled us to measure horizontal and vertical deformations of the Earth's surface with very high precision. Performing this task, we made Satellite Laser Ranging (SLR) and Global Positioning System (GPS) observations at the South American ILRS 7406 station located at the Observatorio Astronómico Félix Aguilar (OAFA) in San Juan, Argentina, under a Cooperation Agreement between CAS-NAOC and OAFA-UNSJ. Through LAGEOS II satellite observations we obtained rectangular coordinates of the San Juan ILRS station in the International Terrestrial Reference Frame (ITRF2000); notably, the Argentine station data were included in the latest ITRF solutions given by the International Earth Rotation and Reference Systems Service (IERS). Spatial and temporal variations over the epoch 2010-2011 were evaluated, revealing remarkable displacements of about half a meter related to seismic events in the region. We confirmed these deformations by means of GPS determinations referred to a permanent GPS station located near the SLR station.
Image interpolation and denoising for division of focal plane sensors using Gaussian processes.
Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor
2014-06-16
Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform a statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N³)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which is most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for a division of focal plane polarimeter.
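The fast O(N^(3/2)) structured-kernel solver is not reproduced here; the naive O(N³) sketch below only illustrates Gaussian-process (kriging-style) interpolation of scattered pixel samples with a squared-exponential kernel and a per-sample noise variance, with all parameter values chosen arbitrarily:

```python
import numpy as np

def gp_interpolate(coords_obs, values_obs, coords_query, length_scale=1.5, noise_var=0.01):
    """Posterior mean of a zero-mean Gaussian process at the query pixels."""
    def k(a, b):
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / length_scale ** 2)     # squared-exponential kernel

    K = k(coords_obs, coords_obs) + noise_var * np.eye(len(coords_obs))
    weights = np.linalg.solve(K, np.asarray(values_obs, dtype=float))  # (K + sigma^2 I)^-1 y
    return k(coords_query, coords_obs) @ weights
```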
A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY
Directory of Open Access Journals (Sweden)
Hussein Atoui
2011-05-01
Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieves similar performance to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).
Directory of Open Access Journals (Sweden)
Mateusz Szcześniak
2015-02-01
Full Text Available Ground-based precipitation data are still the dominant input type for hydrological models. Spatial variability in precipitation can be represented by spatially interpolating gauge data using various techniques. In this study, the effect of daily precipitation interpolation methods on discharge simulations using the semi-distributed SWAT (Soil and Water Assessment Tool) model over a 30-year period is examined. The study was carried out in 11 meso-scale (119–3935 km²) sub-catchments lying in the Sulejów reservoir catchment in central Poland. Four methods were tested: the default SWAT method (Def) based on the Nearest Neighbour technique, Thiessen Polygons (TP), Inverse Distance Weighted (IDW) and Ordinary Kriging (OK); an IDW sketch is given after this abstract. The evaluation of methods was performed using a semi-automated calibration program SUFI-2 (Sequential Uncertainty Fitting Procedure Version 2) with two objective functions: Nash-Sutcliffe Efficiency (NSE) and the adjusted R² coefficient (bR²). The results show that: (1) the most complex OK method outperformed other methods in terms of NSE; and (2) OK, IDW, and TP outperformed Def in terms of bR². The median difference in daily/monthly NSE between OK and Def/TP/IDW calculated across all catchments ranged between 0.05 and 0.15, while the median difference between TP/IDW/OK and Def ranged between 0.05 and 0.07. The differences between pairs of interpolation methods were, however, spatially variable and a part of this variability was attributed to catchment properties: catchments characterised by low station density and low coefficient of variation of daily flows experienced more pronounced improvement resulting from using interpolation methods. Methods providing higher precipitation estimates often resulted in a better model performance. The implication from this study is that appropriate consideration of spatial precipitation variability (often neglected by model users that can be achieved using relatively simple interpolation methods can
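As a minimal illustration of the simplest of the tested schemes (not the SWAT implementation), an IDW estimate at a single target point could be sketched as follows; the power exponent and coordinate conventions are illustrative:

```python
import numpy as np

def idw_estimate(gauge_xy, gauge_precip, target_xy, power=2.0):
    """Inverse Distance Weighted precipitation estimate at a target point
    (e.g. a sub-catchment centroid) from the surrounding gauges."""
    diff = np.asarray(gauge_xy, dtype=float) - np.asarray(target_xy, dtype=float)
    dist = np.hypot(diff[:, 0], diff[:, 1])
    precip = np.asarray(gauge_precip, dtype=float)
    if np.any(dist == 0.0):                       # target coincides with a gauge
        return float(precip[np.argmin(dist)])
    weights = 1.0 / dist ** power
    return float(np.sum(weights * precip) / np.sum(weights))
```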
Lindley, S. J.; Walsh, T.
There are many modelling methods dedicated to the estimation of spatial patterns in pollutant concentrations, each with their distinctive advantages and disadvantages. The derivation of a surface of air quality values from monitoring data alone requires the conversion of point-based data from a limited number of monitoring stations to a continuous surface using interpolation. Since interpolation techniques involve the estimation of data at un-sampled points based on calculated relationships between data measured at a number of known sample points, they are subject to some uncertainty, both in terms of the values estimated and their spatial distribution. These uncertainties, which are incorporated into many empirical and semi-empirical mapping methodologies, could be recognised in any further usage of the data and also in the assessment of the extent of an exceedence of an air quality standard and the degree of exposure this may represent. There is a wide range of available interpolation techniques and the differences in the characteristics of these result in variations in the output surfaces estimated from the same set of input points. The work presented in this paper provides an examination of uncertainties through the application of a number of interpolation techniques available in standard GIS packages to a case study nitrogen dioxide data set for the Greater Manchester conurbation in northern England. The implications of the use of different techniques are discussed through application to hourly concentrations during an air quality episode and annual average concentrations in 2001. Patterns of concentrations demonstrate considerable differences in the estimated spatial pattern of maxima as a result of the combined effects of chemical processes, topography and meteorology. In the case of air quality episodes, the considerable spatial variability of concentrations results in large uncertainties in the surfaces produced but these uncertainties vary widely from area to area
Review of Spatial Indexing Techniques for Large Urban Data Management
DEFF Research Database (Denmark)
Azri, Suhaibah; Ujang, Uznir; Anton, François
Pressure on land development in urban areas causes progressive efforts in spatial planning and management. The physical expansion of urban areas to accommodate rural migration implies a massive impact on the social, economical and political situations of major cities. Most of the models used in managing urban areas are moving towards sustainable urban development in order to fulfill current necessities while preserving the resources for future generations. However, in order to manage large amounts of urban spatial data, an efficient spatial data constellation method is needed. With the ease of three-dimensional (3D) spatial data usage in urban areas as a new source of data input, practical spatial data indexing is necessary to improve data retrieval and management. Current two-dimensional (2D) spatial indexing approaches do not seem applicable to the current and future spatial developments...
Interpolative Boolean Networks
Directory of Open Access Journals (Sweden)
Vladimir Dobrić
2017-01-01
Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by the introduction of gradation in this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame. Consequently, the validity of the model’s dynamics is not secured. The aim of this paper is to present a Boolean-consistent generalization of Boolean networks, interpolative Boolean networks. The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and it offers greater descriptive power as compared with traditional models. For illustrative purposes, IBN is compared to the models based on existing real-valued approaches. Due to the complexity of most of the systems to be analyzed and the characteristics of interpolative Boolean algebra, software support has been developed to provide graphical and numerical tools for complex system modeling and analysis.
Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph
2014-04-01
Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
Multiscale empirical interpolation for solving nonlinear PDEs
Calo, Victor M.
2014-12-01
In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's methods and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
Interpolation of quasi-Banach spaces
International Nuclear Information System (INIS)
Tabacco Vignati, A.M.
1986-01-01
This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces, introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces.
Spatiotemporal video deinterlacing using control grid interpolation
Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin
2015-03-01
With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
Smooth Phase Interpolated Keying
Borah, Deva K.
2007-01-01
Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values
Interpolating string field theories
International Nuclear Information System (INIS)
Zwiebach, B.
1992-01-01
This paper reports that a minimal area problem imposing different length conditions on open and closed curves defines a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on-shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles.
Spatial Distribution of TDS in Drinking Water of Tehsil Jampur using Ordinary and Bayesian Kriging
Directory of Open Access Journals (Sweden)
Maqsood Ahmad
2015-09-01
Full Text Available In this research article, the level of TDS in groundwater over the spatial domain of Tehsil Jampur, Pakistan, is considered as the response variable. Its elevated level in drinking water produces both human health concerns and aquatic ecological impacts. High values cause several diseases such as gallstones, joint stiffness, obstruction of blood vessels and kidney stones. Geostatistical techniques were used to interpolate TDS at unmonitored locations of Tehsil Jampur. Four estimation techniques were comparatively studied for fitting the well-known Matérn spatial covariance models. Model-based Ordinary Kriging (OK) and Bayesian Kriging (BK) were used for spatial interpolation at unmonitored locations. A cross-validation statistic was used to select the best interpolation technique with reduced RMSPE. Prediction maps were generated for visual presentation of the interpolated sites for both techniques. This study revealed that, among the thirty observed locations, 56% of water samples exceeded the maximum permissible limit of TDS (1000 mg/L) as described by WHO.
Magnetic Resonance Microscopy Spatially Resolved NMR Techniques and Applications
Codd, Sarah
2008-01-01
This handbook and ready reference covers materials science applications as well as microfluidic, biomedical and dental applications and the monitoring of physicochemical processes. It includes the latest in hardware, methodology and applications of spatially resolved magnetic resonance, such as portable imaging and single-sided spectroscopy. For materials scientists, spectroscopists, chemists, physicists, and medicinal chemists.
Image Interpolation with Contour Stencils
Pascal Getreuer
2011-01-01
Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.
Quasi interpolation with Voronoi splines.
Mirzargar, Mahsa; Entezari, Alireza
2011-12-01
We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE
Beaming teaching application: recording techniques for spatial xylophone sound rendering
DEFF Research Database (Denmark)
Markovic, Milos; Madsen, Esben; Olesen, Søren Krarup
2012-01-01
BEAMING is a telepresence research project aiming at providing multimodal interaction between two or more participants located at distant locations. One of the BEAMING applications allows a distant teacher to give a xylophone playing lecture to the students. Therefore, rendering of the xylophone ... to spatial improvements mainly in terms of the Apparent Source Width (ASW). Rendered examples are subjectively evaluated in listening tests by comparing them with a binaural recording.
Mintěl, Tomáš
2009-01-01
This master's thesis deals with the acceleration of interpolation methods using the GPU and the NVIDIA® CUDA™ architecture. The graphical output is represented by a demonstration application for image or video transformation using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video processing.
Spectral interpolation - Zero fill or convolution. [image processing
Forman, M. L.
1977-01-01
Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
International Nuclear Information System (INIS)
Casa, L D C; Krueger, P S
2013-01-01
Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
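A hedged sketch of Gaussian RBF interpolation by least squares, with fewer basis centres than data points, is given below; the boundary-condition and continuity constraints described in the abstract are omitted, and the function names and shape parameter are illustrative:

```python
import numpy as np

def gaussian_rbf_interpolant(points, values, centers, epsilon):
    """Least-squares Gaussian RBF fit to scattered samples (e.g. one velocity
    component from 3-D PIV). Returns a callable interpolant."""
    def phi(a, b):
        a = np.asarray(a, dtype=float)
        b = np.asarray(b, dtype=float)
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-(epsilon ** 2) * d2)            # Gaussian radial basis function

    coeffs, *_ = np.linalg.lstsq(phi(points, centers),
                                 np.asarray(values, dtype=float), rcond=None)
    return lambda query: phi(np.atleast_2d(query), centers) @ coeffs
```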
High spatial and temporal resolution cell manipulation techniques in microchannels.
Novo, Pedro; Dell'Aica, Margherita; Janasek, Dirk; Zahedi, René P
2016-03-21
The advent of microfluidics has enabled thorough control of cell manipulation experiments in so called lab on chips. Lab on chips foster the integration of actuation and detection systems, and require minute sample and reagent amounts. Typically employed microfluidic structures have similar dimensions as cells, enabling precise spatial and temporal control of individual cells and their local environments. Several strategies for high spatio-temporal control of cells in microfluidics have been reported in recent years, namely methods relying on careful design of the microfluidic structures (e.g. pinched flow), by integration of actuators (e.g. electrodes or magnets for dielectro-, acousto- and magneto-phoresis), or integrations thereof. This review presents the recent developments of cell experiments in microfluidics divided into two parts: an introduction to spatial control of cells in microchannels followed by special emphasis in the high temporal control of cell-stimulus reaction and quenching. In the end, the present state of the art is discussed in line with future perspectives and challenges for translating these devices into routine applications.
Fuzzy linguistic model for interpolation
International Nuclear Information System (INIS)
Abbasbandy, S.; Adabitabar Firozja, M.
2007-01-01
In this paper, a fuzzy method for interpolating of smooth curves was represented. We present a novel approach to interpolate real data by applying the universal approximation method. In proposed method, fuzzy linguistic model (FLM) applied as universal approximation for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with spline method
An algorithm for centerline extraction using natural neighbour interpolation
DEFF Research Database (Denmark)
Mioc, Darka; Antón Castro, Francesc/François; Dharmaraj, Girija
2004-01-01
... especially due to the lack of explicit topology in commercial GIS systems. Indeed, each map update might require the batch processing of the whole map. Currently, commercial GIS do not offer completely automatic raster/vector conversion even for simple scanned black and white maps. Various commercial raster ... they need user-defined tolerance settings, which causes difficulties in the extraction of complex spatial features, for example road junctions, curved or irregular lines and complex intersections of linear features. The approach we use here is based on image processing filtering techniques to extract ... to the improvement of data capture and conversion in GIS and to develop a software toolkit for automated raster/vector conversion. The approach is based on computing the skeleton from Voronoi diagrams using natural neighbour interpolation. In this paper we present the algorithm for skeleton extraction from scanned...
Digital autoradiography technique for studying spatial impurity distributions
International Nuclear Information System (INIS)
Khamrayeva, S.
2001-01-01
In this report, the possibilities of digital image processing for autoradiographic investigations of impurity distributions in different objects (crystals, biological and geological samples, etc.) are shown. Activation autoradiography based on secondary beta-irradiation is a widely used method for investigating the spatial distribution of chemical elements in different objects. The analysis of autoradiographic features relies on determining the optical density distribution of the photoemulsion by means of photometry; the photoemulsion is used as the detector of the secondary beta irradiation. For various technological and natural materials exhibiting elemental shifts, the fine structure of the chemical element distribution is often of interest. Photometry, however, makes it difficult to study inhomogeneously distributed chemical elements with small concentration gradients (near 20%). Therefore, suppression of the background and improvement of the linear resolution are the main problems of autoradiographic analysis. The application of fast digital computers and technical means of signal processing allows the capabilities and the resolution of activation autoradiography to be extended. The mechanism of formation of autoradiographic features is described. The autoradiograms were processed with the help of a dialogue system having a matrix of 512 x 512 elements. For the interpretation of the experimental data, a clustering analysis methodology was used. Classification of the zones by the minimum squared error criterion was carried out according to the histograms of the optical densities of the autoradiograms under study. An algorithm for digital processing for the reconstruction of autoradiographic features was proposed. At minimal contrast, the resolution of the method was enhanced by adapting digital image processing (DIP) methods to suppress background activity. Results of the digital autoradiographic investigations of spatial impurity
Contrast-guided image interpolation.
Wei, Zhe; Ma, Kai-Kuang
2013-11-01
In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.
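The CDM generation via diffusion equations is not reproduced here; the sketch below only shows the simpler directional-selection idea behind the first interpolation stage, choosing for each new diagonal pixel the 45° or 135° neighbour pair with the smaller intensity variation:

```python
import numpy as np

def interpolate_diagonal_pixels(img):
    """First stage of a 2x edge/contrast-guided upscaling (simplified): estimate
    each new 'diagonal' pixel from the diagonal pair with less intensity variation."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=float)
    out[::2, ::2] = img                                   # original pixels on even indices
    for i in range(1, 2 * h - 2, 2):
        for j in range(1, 2 * w - 2, 2):
            nw, se = out[i - 1, j - 1], out[i + 1, j + 1]   # 135-degree neighbour pair
            ne, sw = out[i - 1, j + 1], out[i + 1, j - 1]   # 45-degree neighbour pair
            out[i, j] = (nw + se) / 2 if abs(nw - se) < abs(ne - sw) else (ne + sw) / 2
    return out
```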
Simulation techniques for spatially evolving instabilities in compressible flow over a flat plate
Wasistho, B.; Geurts, Bernardus J.; Kuerten, Johannes G.M.
1997-01-01
In this paper we present numerical techniques suitable for a direct numerical simulation in the spatial setting. We demonstrate the application to the simulation of compressible flat plate flow instabilities. We compare second and fourth order accurate spatial discretization schemes in combination
Interpolation for de-Dopplerisation
Graham, W. R.
2018-05-01
'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
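A minimal sketch of the recommended approach (cubic B-spline interpolation) applied to resampling a signal at non-integer sample positions, using SciPy, could look like this; fitting the interpolating spline plays the role of the pre-filtering step mentioned above, and the warped time axis here is purely illustrative rather than a real de-Dopplerisation mapping:

```python
import numpy as np
from scipy.interpolate import make_interp_spline

fs = 48_000.0
n = 1024
t = np.arange(n) / fs
received = np.sin(2 * np.pi * 1000.0 * t)          # stand-in for a measured signal
spline = make_interp_spline(t, received, k=3)      # cubic B-spline interpolant (prefilter/fit step)
warped_times = np.linspace(t[100], t[900], 2048)   # illustrative emission-time sample positions
emitted_estimate = spline(warped_times)            # resampled (de-Dopplerised) signal
```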
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Janusz Konrad
2009-01-01
Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
Occlusion-Aware View Interpolation
Directory of Open Access Journals (Sweden)
Ince Serdar
2008-01-01
Full Text Available Abstract View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.
BIMOND3, Monotone Bivariate Interpolation
International Nuclear Information System (INIS)
Fritsch, F.N.; Carlson, R.E.
2001-01-01
1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data.
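BIMOND itself is bivariate; the snippet below only illustrates the one-dimensional analogue of the idea, monotonicity-preserving piecewise cubic Hermite interpolation (PCHIP), which is readily available in SciPy:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Monotone data: an ordinary cubic spline may overshoot here, PCHIP does not.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])
xx = np.linspace(0.0, 4.0, 201)
monotone = PchipInterpolator(x, y)(xx)
assert np.all(np.diff(monotone) >= -1e-12)   # interpolant preserves the data's monotonicity
```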
A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation
Directory of Open Access Journals (Sweden)
Mingzhu Li
2014-01-01
Full Text Available The main aim of this work is to consider a meshfree algorithm for solving Burgers’ equation with the quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations and overcome the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low order forward difference to approximate the time derivative of the dependent variable. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement and the numerical experiments show that it is feasible and valid.
Quadratic polynomial interpolation on triangular domain
Li, Ying; Zhang, Congcong; Yu, Qian
2018-04-01
In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. Firstly, the spatial scattered data points are projected onto a plane and then triangulated. Secondly, a C1 continuous piecewise quadratic polynomial patch is constructed at each vertex; all patches are required to be as close as possible to the linear interpolant. Lastly, the unknown quantities are obtained by minimizing the objective functions, and the boundary points are treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying certain accuracy and continuity requirements, without being excessively convex. The new method is simple to compute, has a good local property, and is applicable to shape fitting of mines, exploratory wells and so on. The resulting surface is shown in experiments.
Trace interpolation by slant-stack migration
International Nuclear Information System (INIS)
Novotny, M.
1990-01-01
The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip-reject filtering. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period.
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was demonstrated to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
Delimiting areas of endemism through kernel interpolation.
Directory of Open Access Journals (Sweden)
Ubirajara Oliveira
Full Text Available We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. This new approach is based on estimating the overlap between the distribution of species through a kernel interpolation of centroids of species distribution and areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE was demonstrated to be effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
The research on NURBS adaptive interpolation technology
Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng
2017-04-01
In order to solve the problems of NURBS adaptive interpolation technology, such as long interpolation time, complicated calculation, and the difficulty of adjusting the NURBS curve step error, this paper proposes a study of an algorithm for NURBS adaptive interpolation and its simulation. NURBS adaptive interpolation is used to calculate the interpolated points (xi, yi, zi), so that the interpolation of the NURBS curve can be completed. Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the NURBS curve interpolation requirements.
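The paper's algorithm is not detailed in the abstract; the sketch below only shows the standard chord-error bound commonly used in adaptive NURBS interpolators to shrink the per-cycle step where the curve bends sharply (all variable names and the circular-arc approximation are illustrative):

```python
import numpy as np

def adaptive_chord_step(curvature, chord_tol, feedrate, dt):
    """Chord length allowed in one interpolation cycle: the kinematic limit
    feedrate*dt, reduced where the local curvature would make the chord error
    exceed chord_tol (circular-arc approximation of the local NURBS segment)."""
    if curvature == 0.0:
        return feedrate * dt                       # straight segment: kinematic limit only
    r = 1.0 / abs(curvature)
    delta = min(chord_tol, r)                      # chord error cannot exceed the radius
    return min(feedrate * dt, 2.0 * np.sqrt(2.0 * r * delta - delta * delta))
```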
Directory of Open Access Journals (Sweden)
Jinyang Song
2018-01-01
Full Text Available Many modulated signals exhibit a cyclostationarity property, which can be exploited in direction-of-arrival (DOA) estimation to effectively eliminate interference and noise. In this paper, our aim is to integrate the cyclostationarity with the spatial domain and enable the algorithm to estimate more sources than sensors. However, DOA estimation with a sparse array is performed in the coarray domain and the holes within the coarray limit the usage of the complete coarray information. In order to use the complete coarray information to increase the degrees-of-freedom (DOFs), sparsity-aware methods and difference coarray interpolation methods have been proposed. In this paper, the coarray interpolation technique is further explored with cyclostationary signals. Besides the difference coarray model and its corresponding Toeplitz completion formulation, we build up a sum coarray model and formulate a Hankel completion problem. In order to further improve the performance of the structured matrix completion, we define the spatial spectrum sampling operations and the derivative (conjugate) correlation subspaces, which can be exploited to construct orthogonal constraints for the autocorrelation vectors in the coarray interpolation problem. Prior knowledge of the source interval can also be incorporated into the problem. Simulation results demonstrate that the additional constraints contribute to a remarkable performance improvement.
Interpolation in Spaces of Functions
Directory of Open Access Journals (Sweden)
K. Mosaleheh
2006-03-01
Full Text Available In this paper we consider the interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces.
Energy-Driven Image Interpolation Using Gaussian Process Regression
Directory of Open Access Journals (Sweden)
Lingling Zi
2012-01-01
Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
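A hedged sketch of the statistical half of such a scheme, using scikit-learn's GaussianProcessRegressor to predict intensities at sub-pixel locations of a small patch; the energy-driven component of the paper is not reproduced here, and gpr_upsample, the patch and the kernel settings are illustrative.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def gpr_upsample(img, factor=2, length_scale=1.0):
    """Predict intensities at sub-pixel locations with a GP fitted to the low-res grid."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    X_train = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    y_train = img.ravel().astype(float)
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=length_scale), alpha=1e-3)
    gp.fit(X_train, y_train)
    yy2, xx2 = np.mgrid[0:h * factor, 0:w * factor] / float(factor)
    X_test = np.column_stack([yy2.ravel(), xx2.ravel()])
    return gp.predict(X_test).reshape(h * factor, w * factor)

rng = np.random.default_rng(0)
patch = rng.random((8, 8))          # stand-in for a low-resolution patch
hi = gpr_upsample(patch)
print(patch.shape, "->", hi.shape)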
Surface interpolation with radial basis functions for medical imaging
International Nuclear Information System (INIS)
Carr, J.C.; Beatson, R.K.; Fright, W.R.
1997-01-01
Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill
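The basic fit-and-fill workflow can be sketched with SciPy's RBFInterpolator on a synthetic depth map containing a circular defect; the thin-plate-spline kernel and the toy geometry are assumptions for illustration and do not reproduce the CT-derived data or the fast evaluation techniques discussed above.

import numpy as np
from scipy.interpolate import RBFInterpolator

# Synthetic depth map with a circular "defect" (missing values), then an RBF fill.
h, w = 40, 40
yy, xx = np.mgrid[0:h, 0:w]
depth = np.sin(xx / 6.0) + np.cos(yy / 8.0)          # stand-in for a skull depth map
hole = (xx - 20) ** 2 + (yy - 20) ** 2 < 8 ** 2       # defect region to reconstruct

known = ~hole
rbf = RBFInterpolator(
    np.column_stack([yy[known], xx[known]]),          # interpolation centres
    depth[known],
    kernel="thin_plate_spline",
    smoothing=0.0,
)
filled = depth.copy()
filled[hole] = rbf(np.column_stack([yy[hole], xx[hole]]))
print("max abs error inside the defect:", np.abs(filled[hole] - depth[hole]).max())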
5-D interpolation with wave-front attributes
Xie, Yujiang; Gajewski, Dirk
2017-11-01
Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation methods uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features such as the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example, with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work and, since the two problems mentioned above are addressed, we call it wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that
Development of spatial scaling technique of forest health sample point information
Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.
2017-12-01
Most forest health assessments are limited to the monitoring of sampling sites. The monitoring of forest health in Britain was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with a database, including density data, constructed using the Oracle database program. The forest health assessment in GreatBay in the United States was conducted to identify the characteristics of the ecosystem populations of each area based on the evaluation of forest health by tree species, diameter at breast height, water pipe and density in summer and fall of 200. In the case of Korea, in the first evaluation report on forest health vitality, 1000 sample points were placed in the forests using a systematic method of arranging sample points at regular 4 km × 4 km intervals, and 29 items in four categories, such as tree health, vegetation, soil, and atmosphere, were surveyed. As mentioned above, existing research has been carried out through the monitoring of survey sample points, and it is difficult to collect information to support customized policies for regional sites. In the case of special forests such as urban forests and major forests, policy and management appropriate to the forest characteristics are needed. Therefore, it is necessary to expand the survey sample points for the diagnosis and evaluation of customized forest health. For this reason, we constructed a spatial scaling method through spatial interpolation according to the characteristics of each index of the main sample-point table of 29 indices in the diagnosis and evaluation of the first forest health vitality report. PCA statistical analysis and correlation analysis are conducted to construct indicators with significance, weights are then selected for each index, and the evaluation of forest health is conducted through statistical grading.
Permanently calibrated interpolating time counter
International Nuclear Information System (INIS)
Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K
2015-01-01
We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)
Directory of Open Access Journals (Sweden)
Aihua Liu
2017-01-01
Full Text Available A method of direction-of-arrival (DOA) estimation using array interpolation is proposed in this paper to increase the number of resolvable sources and improve the DOA estimation performance for coprime array configurations with holes in the virtual array. The virtual symmetric nonuniform linear array (VSNLA) signal model of the coprime array is introduced. With the conventional spatial-smoothing MUSIC algorithm (SS-MUSIC) applied only to the continuous lags in the VSNLA, the degrees of freedom (DoFs) available for DOA estimation are not fully exploited. To effectively utilize the full extent of DoFs offered by the coarray configuration, a compressive-sensing-based array interpolation algorithm is proposed. The compressive sensing technique is used to obtain a coarse initial DOA estimate, and a modified iterative initial DOA estimation based interpolation algorithm (IMCA-AI) is then utilized to obtain the final DOA estimate, which maps the sample covariance matrix of the VSNLA to the covariance matrix of a filled virtual symmetric uniform linear array (VSULA) with the same aperture size. The proposed DOA estimation method can efficiently improve the DOA estimation performance. Numerical simulations are provided to demonstrate the effectiveness of the proposed method.
Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.
2014-02-01
Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence the disease outbreaks. Thus, understanding the spatial pattern and the possible interrelated factors of the outbreaks is crucial and deserves an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in an exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study in epidemiological technique was applied to investigate multiple outcomes of the disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from a place or person to others, especially within 1500 meters of infected persons and locations. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and areas close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher numbers of cases. GIS demonstrates a vital spatial epidemiological technique for determining the distribution pattern and elucidating hypotheses about the disease. Future research will apply advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.
International Nuclear Information System (INIS)
Rasam, A R A; Ghazali, R; Noor, A M M; Mohd, W M N W; Hamid, J R A; Bazlan, M J; Ahmad, N
2014-01-01
Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influence the disease outbreaks. Thus, understanding the spatial pattern and the possible interrelated factors of the outbreaks is crucial and deserves an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in an exploratory analysis of the cholera spatial pattern and distribution in a selected district of Sabah. Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel software were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study in epidemiological technique was applied to investigate multiple outcomes of the disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from a place or person to others, especially within 1500 meters of infected persons and locations. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and areas close to contaminated water. It is also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton bloom, since these areas recorded higher numbers of cases. GIS demonstrates a vital spatial epidemiological technique for determining the distribution pattern and elucidating hypotheses about the disease. Future research will apply advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia
Multi-dimensional cubic interpolation for ICF hydrodynamics simulation
International Nuclear Information System (INIS)
Aoki, Takayuki; Yabe, Takashi.
1991-04-01
A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) method is greatly improved by assuming continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. In order to evaluate the new method, Zalesak's example is tested, and good results are successfully obtained. (author)
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
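The central idea, interpolating the integrated (cumulative) data with a shape-controlled Hermite curve and differencing it onto the new grid, can be sketched as below; the sketch substitutes SciPy's shape-preserving PCHIP interpolator for the paper's single-parameter Hermitian curve, so the user-tunable overshoot control is not reproduced.

import numpy as np
from scipy.interpolate import PchipInterpolator

def conservative_rebin(edges_in, values_in, edges_out):
    """Re-bin histogrammed data by interpolating its cumulative integral.

    A shape-preserving Hermite curve (PCHIP) through the cumulative sums keeps
    the total integral exact and suppresses over/undershoots between bins.
    """
    cum = np.concatenate([[0.0], np.cumsum(values_in)])  # integral up to each edge
    curve = PchipInterpolator(edges_in, cum)
    return np.diff(curve(edges_out))                     # integral per output bin

edges_in = np.linspace(0.0, 10.0, 11)          # 10 coarse bins
values_in = np.array([0, 1, 4, 9, 16, 9, 4, 1, 0, 0], float)
edges_out = np.linspace(0.0, 10.0, 41)         # 40 finer bins
values_out = conservative_rebin(edges_in, values_in, edges_out)
print(values_in.sum(), values_out.sum())       # totals agree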
An integral conservative gridding-algorithm using Hermitian curve interpolation
International Nuclear Information System (INIS)
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-01-01
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
Interpolation of vector fields from human cardiac DT-MRI
International Nuclear Information System (INIS)
Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H
2011-01-01
There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
Non-negative Feynman–Kac kernels in Schroedinger's interpolation problem
International Nuclear Information System (INIS)
Blanchard, P.; Garbaczewski, P.; Olkiewicz, R.
1997-01-01
The local formulations of the Markovian interpolating dynamics, which is constrained by the prescribed input-output statistics data, usually utilize strictly positive Feynman–Kac kernels. This implies that the related Markov diffusion processes admit vanishing probability densities only at the boundaries of the spatial volume confining the process. We discuss an extension of the framework to encompass singular potentials and associated non-negative Feynman–Kac-type kernels. It allows us to deal with a class of continuous interpolations admitted by general non-negative solutions of the Schroedinger boundary data problem. The resulting nonstationary stochastic processes are capable of both developing and destroying nodes (zeros) of probability densities in the course of their evolution, also away from the spatial boundaries. This observation conforms with the general mathematical theory (due to M. Nagasawa and R. Aebi) that is based on the notion of multiplicative functionals, extending in turn the well known Doob's h-transformation technique. In view of emphasizing the role of the theory of non-negative solutions of parabolic partial differential equations and the link with "Wiener exclusion" techniques used to evaluate certain Wiener functionals, we give an alternative insight into the issue, that opens a transparent route towards applications. copyright 1997 American Institute of Physics
Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F
2012-01-01
Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
Evaluation of intense rainfall parameters interpolation methods for the Espírito Santo State
Directory of Open Access Journals (Sweden)
José Eduardo Macedo Pezzopane
2009-12-01
Full Text Available Intense rainfalls are often responsible for the occurrence of undesirable processes in agricultural and forest areas, such as surface runoff, soil erosion and flooding. Knowledge of the spatial distribution of intense rainfall is important for agricultural watershed management, soil conservation and the design of hydraulic structures. The present paper evaluated methods of spatial interpolation of the intense rainfall parameters (“K”, “a”, “b” and “c”) for the Espírito Santo State, Brazil. Observed intense rainfall rates were compared with those calculated from the interpolated intense rainfall parameters, considering different durations and return periods. Inverse distance weighting to the 5th power (IPD5) was the spatial interpolation method with the best performance for interpolating the intense rainfall parameters.
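For reference, a plain inverse-distance-weighting interpolator with the exponent set to 5 (the IPD5 variant favoured above) looks like the following sketch; the station coordinates and the "K" values are made-up toy numbers, not data from the study.

import numpy as np

def idw(xy_obs, z_obs, xy_new, power=5.0):
    """Inverse distance weighting; power=5 mimics the IPD5 variant."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    z = (w * z_obs).sum(axis=1) / w.sum(axis=1)
    exact = d.min(axis=1) < 1e-12                 # return observed value at station points
    z[exact] = z_obs[d.argmin(axis=1)[exact]]
    return z

# toy stations with an intense-rainfall parameter (e.g. the "K" coefficient)
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
k_obs = np.array([950.0, 1100.0, 870.0, 1020.0])
xy_new = np.array([[5.0, 5.0], [1.0, 9.0]])
print(idw(xy_obs, k_obs, xy_new))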
Interpolant tree automata and their application in Horn clause verification
DEFF Research Database (Denmark)
Kafle, Bishoksan; Gallagher, John Patrick
2016-01-01
This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. An evaluation on Horn clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.
A Note on Cubic Convolution Interpolation
Meijering, E.; Unser, M.
2003-01-01
We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
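As a concrete reminder of what a cubic convolution scheme computes, here is a small sketch of Keys' kernel with the common a = -0.5 choice applied to uniformly spaced samples; the clamped edge handling is a simplification and the function names are illustrative.

import numpy as np

def keys_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; a=-0.5 reproduces the classical scheme."""
    s = np.abs(s)
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * s[m2] ** 3 - 5 * a * s[m2] ** 2 + 8 * a * s[m2] - 4 * a
    return out

def cubic_conv_interp(samples, x):
    """Interpolate uniformly spaced samples at positions x (in sample units)."""
    samples = np.asarray(samples, float)
    x = np.asarray(x, float)
    k = np.floor(x).astype(int)
    result = np.zeros_like(x)
    # sum of kernel-weighted neighbours; edge samples are clamped for simplicity
    for offset in (-1, 0, 1, 2):
        j = np.clip(k + offset, 0, len(samples) - 1)
        result += samples[j] * keys_kernel(x - (k + offset))
    return result

xs = np.linspace(0, 2 * np.pi, 12)
ys = np.sin(xs)
x_fine = np.linspace(0, 11, 50)                  # positions in sample units
print(cubic_conv_interp(ys, x_fine)[:5])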
Node insertion in Coalescence Fractal Interpolation Function
International Nuclear Information System (INIS)
Prasad, Srijanani Anurag
2013-01-01
The Iterated Function System (IFS) used in the construction of Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point in a given set of interpolation data is called the problem of node insertion. In this paper, the effect of insertion of new point on the related IFS and the Coalescence Fractal Interpolation Function is studied. Smoothness and Fractal Dimension of a CHFIF obtained with a node are also discussed
Saager, Rolf B.; Dang, An N.; Huang, Samantha S.; Kelly, Kristen M.; Durkin, Anthony J.
2017-09-01
Spatial Frequency Domain Spectroscopy (SFDS) is a technique for quantifying in-vivo tissue optical properties. SFDS employs structured light patterns that are projected onto tissues using a spatial light modulator, such as a digital micromirror device. In combination with appropriate models of light propagation, this technique can be used to quantify tissue optical properties (absorption, μa, and scattering, μs', coefficients) and chromophore concentrations. Here we present a handheld implementation of an SFDS device that employs line (one dimensional) imaging. This instrument can measure 1088 spatial locations that span a 3 cm line as opposed to our original benchtop SFDS system that only collects a single 1 mm diameter spot. This imager, however, retains the spectral resolution (˜1 nm) and range (450-1000 nm) of our original benchtop SFDS device. In the context of homogeneous turbid media, we demonstrate that this new system matches the spectral response of our original system to within 1% across a typical range of spatial frequencies (0-0.35 mm-1). With the new form factor, the device has tremendously improved mobility and portability, allowing for greater ease of use in a clinical setting. A smaller size also enables access to different tissue locations, which increases the flexibility of the device. The design of this portable system not only enables SFDS to be used in clinical settings but also enables visualization of properties of layered tissues such as skin.
Yan, Wei; Yang, Yanlong; Tan, Yu; Chen, Xun; Li, Yang; Qu, Junle; Ye, Tong
2018-01-01
Stimulated emission depletion microscopy (STED) is one of the far-field optical microscopy techniques that can provide sub-diffraction spatial resolution. The spatial resolution of STED microscopy is determined by the specially engineered beam profile of the depletion beam and its power. However, the beam profile of the depletion beam may be distorted due to aberrations of optical systems and inhomogeneity of specimens' optical properties, resulting in a compromised spatial resolution. The situation deteriorates when thick samples are imaged. In the worst case, severe distortion of the depletion beam profile may cause complete loss of the super-resolution effect no matter how much depletion power is applied to specimens. Previously, several adaptive optics approaches have been explored to compensate for aberrations of systems and specimens. However, it is hard to correct the complicated high-order optical aberrations of specimens. In this report, we demonstrate that the complicated distorted wavefront from a thick phantom sample can be measured by using the coherent optical adaptive technique (COAT). The full correction can effectively maintain and improve the spatial resolution in imaging thick samples. PMID:29400356
A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets
Directory of Open Access Journals (Sweden)
Min Deng
2016-02-01
Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009. Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods—e.g., spatio-temporal kriging, spatio-temporal inverse distance weighting, and point estimation model of biased hospitals-based area disease estimation methods.
Suitability of the RGB Channels for a Pixel Manipulation in a Spatial Domain Data Hiding Techniques
Directory of Open Access Journals (Sweden)
Ante Poljicak
2010-06-01
Full Text Available The aim of this research was to determine which channel in the RGB color space is the most suitable (regarding perceptibility) for pixel manipulation in spatial domain data hiding techniques. For this purpose three custom test targets were generated. The research also shows the behavior of two closely related colors in the PS (print-scan) process. The results are interpreted using both a quantitative method (statistical comparison) and a qualitative method (visual comparison).
Bayer Demosaicking with Polynomial Interpolation.
Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil
2016-08-30
Demosaicking is a digital image process used to reconstruct full color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g. mobile phones, tablets, etc.). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM) and visual performance.
Shadrack Jabes, B.; Krekeler, C.; Klein, R.; Delle Site, L.
2018-05-01
We employ the Grand Canonical Adaptive Resolution Simulation (GC-AdResS) molecular dynamics technique to test the spatial locality of the 1-ethyl 3-methyl imidazolium chloride liquid. In GC-AdResS, atomistic details are kept only in an open sub-region of the system while the environment is treated at coarse-grained level; thus, if spatial quantities calculated in such a sub-region agree with the equivalent quantities calculated in a full atomistic simulation, then the atomistic degrees of freedom outside the sub-region play a negligible role. The size of the sub-region fixes the degree of spatial locality of a certain quantity. We show that even for sub-regions whose radius corresponds to the size of a few molecules, spatial properties are reasonably reproduced thus suggesting a higher degree of spatial locality, a hypothesis put forward also by other researchers and that seems to play an important role for the characterization of fundamental properties of a large class of ionic liquids.
Directory of Open Access Journals (Sweden)
Mingjian Sun
2015-01-01
Full Text Available Photoacoustic imaging is an innovative technique used to image biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO)-optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation based time reversal algorithms, and that it can provide higher imaging quality using significantly fewer measurement positions or scanning times.
A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...
African Journals Online (AJOL)
Interpolation procedures tailored for Gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable cases with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...
Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea
DEFF Research Database (Denmark)
Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian
2010-01-01
Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linke...
Al-Kindi, Khalifa M; Kwan, Paul; R Andrew, Nigel; Welch, Mitchell
2017-01-01
In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae) as well as analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human practice related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.
Directory of Open Access Journals (Sweden)
Khalifa M. Al-Kindi
2017-08-01
Full Text Available In order to understand the distribution and prevalence of Ommatissus lybicus (Hemiptera: Tropiduchidae) as well as analyse their current biogeographical patterns and predict their future spread, comprehensive and detailed information on environmental, climatic, and agricultural practices is essential. Spatial analytical techniques, such as remote sensing and spatial statistics tools, can help detect and model spatial links and correlations between the presence, absence and density of O. lybicus in response to climatic, environmental, and human factors. The main objective of this paper is to review remote sensing and relevant analytical techniques that can be applied in mapping and modelling the habitat and population density of O. lybicus. An exhaustive search of related literature revealed that there are very limited studies linking location-based infestation levels of pests like O. lybicus with climatic, environmental, and human practice related variables. This review also highlights the accumulated knowledge and addresses the gaps in this area of research. Furthermore, it makes recommendations for future studies, and gives suggestions on monitoring and surveillance methods in designing both local and regional level integrated pest management strategies for palm trees and other affected cultivated crops.
Evaluating spatial- and temporal-oriented multi-dimensional visualization techniques
Directory of Open Access Journals (Sweden)
Chong Ho Yu
2003-07-01
Full Text Available Visualization tools are said to be helpful for researchers to unveil hidden patterns and relationships among variables, and also for teachers to present abstract statistical concepts and complicated data structures in a concrete manner. However, higher-dimension visualization techniques can be confusing and even misleading, especially when human-instrument interface and cognitive issues are under-applied. In this article, the efficacy of function-based, data-driven, spatial-oriented, and temporal-oriented visualization techniques is discussed based upon extensive review. Readers can find practical implications for both research and instructional practices. For research purposes, the spatial-based graphs, such as Trellis displays in S-Plus, are preferable over the temporal-based displays, such as the 3D animated plot in SAS/Insight. For teaching purposes, the temporal-based displays, such as the 3D animation plot in Maple, seem to have advantages over the spatial-based graphs, such as the 3D triangular coordinate plot in SyStat.
Syed Abdul Mutalib, Sharifah Norsukhairin; Juahir, Hafizan; Azid, Azman; Mohd Sharif, Sharifah; Latif, Mohd Talib; Aris, Ahmad Zaharin; Zain, Sharifuddin M; Dominick, Doreena
2013-09-01
The objective of this study is to identify spatial and temporal patterns in the air quality at three selected Malaysian air monitoring stations based on an eleven-year database (January 2000-December 2010). Four statistical methods, Discriminant Analysis (DA), Hierarchical Agglomerative Cluster Analysis (HACA), Principal Component Analysis (PCA) and Artificial Neural Networks (ANNs), were selected to analyze the datasets of five air quality parameters, namely: SO2, NO2, O3, CO and particulate matter with a diameter size of below 10 μm (PM10). The three selected air monitoring stations share the characteristic of being located in highly urbanized areas and are surrounded by a number of industries. The DA results show that spatial characterizations allow successful discrimination between the three stations, while HACA shows the temporal pattern from the monthly and yearly factor analysis which correlates with severe haze episodes that have happened in this country at certain periods of time. The PCA results show that the major source of air pollution is mostly due to the combustion of fossil fuel in motor vehicles and industrial activities. The spatial pattern recognition (S-ANN) results show a better prediction performance in discriminating between the regions, with an excellent percentage of correct classification compared to DA. This study presents the necessity and usefulness of environmetric techniques for the interpretation of large datasets aiming to obtain better information about air quality patterns based on spatial and temporal characterizations at the selected air monitoring stations.
Potential problems with interpolating fields
Energy Technology Data Exchange (ETDEWEB)
Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)
2017-11-15
A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)
Spatial Planning of Rural tourism with MAPPAC technique. Case study Khur and Biabanak County (Iran)
Directory of Open Access Journals (Sweden)
Hassan Ali Faraji Sabokbar
2014-12-01
Full Text Available Reviewing the concepts of space and the tourism industry, tourism has an old, deep, unbreakable bond with spatial and physical dimensions. In this regard, the lack of a systematic and scientific ranking process in the spatial locating of rural tourism spots, as well as the improper distribution of infrastructures, are critical deficiencies in this field. This research intends to introduce the hidden potentials and unique capabilities of Khur and Biabanak County, Iran, and to prioritize its tourism spots, so that tourism planners can recognize the proper spatial distribution. First, the weights of each criterion were calculated by a pairwise comparison questionnaire of the AHP method, and the MAPPAC technique was used for ranking. AHP was done in Expert Choice software and MAPPAC in MS Excel. Results showed that villages such as Bayaze, Jandagh, Mehrejan, Garmeh, and Iraj, which are also older, have a higher rank.
Interpolation of rational matrix functions
Ball, Joseph A; Rodman, Leiba
1990-01-01
This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...
Parallel optimization of IDW interpolation algorithm on multicore platform
Guan, Xuefeng; Wu, Huayi
2009-10-01
Due to increasing power consumption, heat dissipation, and other physical issues, the architecture of the central processing unit (CPU) has been turning to multicore rapidly in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance, but also presents significant challenges to application developers. As a matter of fact, in the GIS field most current GIS algorithms are implemented serially and cannot best exploit the parallelism potential of such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize current serial GIS algorithms on multicore platforms in order to maximize the performance speedup. With the help of OpenMP, a threading methodology is introduced to split and share the whole interpolation work among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and good performance speedup is achieved. For example, the performance speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. An additional output comparison between pre-optimization and post-optimization is carried out and shows that the parallel optimization does not affect the final interpolation result.
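The work-sharing idea, splitting the target points among cores, interpolating each chunk independently and concatenating the results, can be mimicked outside OpenMP; the sketch below does so with Python's ProcessPoolExecutor and a basic IDW kernel, purely to illustrate the decomposition rather than the paper's C/OpenMP implementation.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def idw_chunk(args):
    """Interpolate one chunk of target points (worker for one core)."""
    xy_obs, z_obs, xy_chunk, power = args
    d = np.linalg.norm(xy_chunk[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power
    return (w * z_obs).sum(axis=1) / w.sum(axis=1)

def parallel_idw(xy_obs, z_obs, xy_new, power=2.0, workers=4):
    """Split the target grid among processes, mirroring the OpenMP work-sharing idea."""
    chunks = np.array_split(xy_new, workers)
    args = [(xy_obs, z_obs, c, power) for c in chunks]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(idw_chunk, args))
    return np.concatenate(parts)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xy_obs = rng.random((500, 2)) * 100
    z_obs = np.sin(xy_obs[:, 0] / 10) + xy_obs[:, 1] / 50
    xy_new = rng.random((20000, 2)) * 100
    z_new = parallel_idw(xy_obs, z_obs, xy_new)
    print(z_new.shape)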
Evaluation of various interpolants available in DICE
Energy Technology Data Exchange (ETDEWEB)
Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-02-01
This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
Sahabiev, I. A.; Ryazanov, S. S.; Kolcova, T. G.; Grigoryan, B. R.
2018-03-01
The three most common techniques to interpolate soil properties at a field scale—ordinary kriging (OK), regression kriging with a multiple linear regression drift model (RK + MLR), and regression kriging with a principal component regression drift model (RK + PCR)—were examined. The results of the study were compiled into an algorithm for choosing the most appropriate soil mapping technique. Relief attributes were used as the auxiliary variables. When the spatial dependence of a target variable was strong, the OK method showed more accurate interpolation results, and the inclusion of the auxiliary data resulted in an insignificant improvement in prediction accuracy. According to the algorithm, the RK + PCR method effectively eliminates multicollinearity of the explanatory variables. However, if the number of predictors is less than ten, the probability of multicollinearity is reduced, and application of the PCR becomes irrational. In that case, multiple linear regression should be used instead.
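The regression-kriging decomposition compared above (a drift fitted to terrain covariates plus interpolation of the residuals) can be sketched as follows for the RK + PCR case; note that a Gaussian-process regressor is used here as a stand-in for ordinary kriging of the residuals, and the covariates and coordinates are synthetic.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def rk_pcr_predict(xy, covars, z, xy_new, covars_new, n_components=3):
    """Regression kriging with a PCR drift: PCA + linear drift, then
    residuals interpolated by a GP (a stand-in for ordinary kriging)."""
    pca = PCA(n_components=n_components).fit(covars)
    drift = LinearRegression().fit(pca.transform(covars), z)
    resid = z - drift.predict(pca.transform(covars))
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=50.0) + WhiteKernel(1e-3),
                                  normalize_y=True)
    gp.fit(xy, resid)
    return drift.predict(pca.transform(covars_new)) + gp.predict(xy_new)

rng = np.random.default_rng(2)
xy = rng.random((80, 2)) * 1000                    # sample locations, metres
covars = rng.random((80, 6))                       # e.g. slope, TWI, curvature, ...
z = 2.0 + 3.0 * covars[:, 0] - 1.5 * covars[:, 2] + rng.normal(0, 0.2, 80)
xy_new = rng.random((5, 2)) * 1000
covars_new = rng.random((5, 6))
print(rk_pcr_predict(xy, covars, z, xy_new, covars_new))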
Analysis of ECT Synchronization Performance Based on Different Interpolation Methods
Directory of Open Access Journals (Sweden)
Yang Zhixin
2014-01-01
Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of the electronic transformer are computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
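A quick illustration of how the three interpolation kinds compare on a resampling task of this sort: the sketch below interpolates a sampled 50 Hz waveform onto shifted instants with SciPy; the sampling rate, shift and waveform are arbitrary toy choices rather than values from the standard.

import numpy as np
from scipy.interpolate import interp1d

fs = 4000.0                                       # sampling rate, Hz
t = np.arange(0, 0.04, 1 / fs)                    # two cycles of a 50 Hz signal
x = np.sin(2 * np.pi * 50 * t)
t_sync = t[:-1] + 0.37 / fs                       # target (synchronized) instants

for kind in ("linear", "quadratic", "cubic"):
    f = interp1d(t, x, kind=kind)
    err = np.max(np.abs(f(t_sync) - np.sin(2 * np.pi * 50 * t_sync)))
    print(f"{kind:9s} max error: {err:.2e}")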
Spatial Search Techniques for Mobile 3D Queries in Sensor Web Environments
Directory of Open Access Journals (Sweden)
James D. Carswell
2013-03-01
Full Text Available Developing mobile geo-information systems for sensor web applications involves technologies that can access linked geographical and semantically related Internet information. Additionally, in tomorrow’s Web 4.0 world, it is envisioned that trillions of inexpensive micro-sensors placed throughout the environment will also become available for discovery based on their unique geo-referenced IP address. Exploring these enormous volumes of disparate heterogeneous data on today’s location and orientation aware smartphones requires context-aware smart applications and services that can deal with “information overload”. 3DQ (Three Dimensional Query) is our novel mobile spatial interaction (MSI) prototype that acts as a next-generation base for human interaction within such geospatial sensor web environments/urban landscapes. It filters information using “Hidden Query Removal” functionality that intelligently refines the search space by calculating the geometry of a three dimensional visibility shape (Vista space) at a user’s current location. This 3D shape then becomes the query “window” in a spatial database for retrieving information on only those objects visible within a user’s actual 3D field-of-view. 3DQ reduces information overload and serves to heighten situation awareness on constrained commercial off-the-shelf devices by providing visibility space searching as a mobile web service. The effects of variations in mobile spatial search techniques in terms of query speed vs. accuracy are evaluated and presented in this paper.
Research on interpolation methods in medical image processing.
Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian
2012-04-01
Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly-used filter methods for image interpolation are presented, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments of image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolation performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, from the total error of image self-registration, the symmetrical interpolations provide certain superiority, but considering processing efficiency, the asymmetrical interpolations are better.
Ismail, Azimah; Toriman, Mohd Ekhwan; Juahir, Hafizan; Zain, Sharifuddin Md; Habir, Nur Liyana Abdul; Retnam, Ananthy; Kamaruddin, Mohd Khairul Amri; Umar, Roslan; Azid, Azman
2016-05-15
This study presents the determination of the spatial variation and source identification of heavy metal pollution in surface water along the Straits of Malacca using several chemometric techniques. Clustering and discrimination of heavy metal compounds in surface water into two groups (northern and southern regions) are observed according to level of concentrations via the application of chemometric techniques. Principal component analysis (PCA) demonstrates that Cu and Cr dominate the source apportionment in northern region with a total variance of 57.62% and is identified with mining and shipping activities. These are the major contamination contributors in the Straits. Land-based pollution originating from vehicular emission with a total variance of 59.43% is attributed to the high level of Pb concentration in the southern region. The results revealed that one state representing each cluster (northern and southern regions) is significant as the main location for investigating heavy metal concentration in the Straits of Malacca which would save monitoring cost and time. The monitoring of spatial variation and source of heavy metals pollution at the northern and southern regions of the Straits of Malacca, Malaysia, using chemometric analysis. Copyright © 2015 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Ni Sheng
2015-07-01
Full Text Available This paper proposes to visualize acoustic comfort along tourist routes. Route-based tourism is crucial to the sustainability of tourism development in historic areas. Applying the concept of route-based tourism to guide tourists rambling along cultural and heritage routes can relieve overcrowded condition at hot scenic spots and increase the overall carrying capacity of the city. However, acoustic comfort along tourist routes is rarely addressed in academic studies and decision-making. Taking Macao as an example, this paper has studied pedestrian exposure to traffic noise along the cultural and heritage routes. The study is based on a GIS-based traffic noise model system with a high spatial resolution down to individual buildings along both sides of the street. Results show that tourists suffer from excessive traffic noise at certain sites, which may have negative impact on the promotion of route-based tourism in the long run. In addition, it is found that urban growth affects urban form and street layout, which in turn affect traffic flow and acoustic comfort in urban area. The present study demonstrates spatial techniques to visualize acoustic comfort along tourist routes, and the techniques are foreseen to be used more frequently to support effective tourism planning in the future.
Moharana, S.; Dutta, S.
2015-12-01
Precision farming refers to field-specific management of an agricultural crop at a spatial scale, with the aim of obtaining the highest achievable yield; to achieve this, spatial information on field variability is essential. The difficulty of mapping the spatial variability occurring within an agricultural field can be addressed by employing spectral techniques on hyperspectral imagery rather than multispectral imagery. However, an advanced algorithm needs to be developed to make full use of the rich information content of hyperspectral data. In the present study, the potential of hyperspectral data acquired from a space platform was examined to map the field variation of a paddy crop and discriminate its species. The high-dimensional data comprising 242 narrow spectral bands at 30 m ground resolution (Hyperion L1R product, acquired for Assam, India, 30th Sept and 3rd Oct, 2014) were subjected to the necessary pre-processing steps, followed by geometric correction using the Hyperion L1GST product. Finally, an atmospherically corrected and spatially reduced image consisting of 112 bands was obtained. By employing an advanced clustering algorithm, 12 different clusters of spectral waveforms of the crop were generated from six paddy fields for each image. The findings showed that some clusters were well discriminated, representing specific rice genotypes, while some clusters were mixed and treated as a single rice genotype. As vegetation indices (VI) are good indicators for vegetation mapping, three ratio-based VI maps were also generated and unsupervised classification was performed on them. The 12 clusters of the paddy crop so obtained were mapped spatially onto the derived VI maps. From these findings, heterogeneity was clearly captured in one of the six rice plots (rice plot no. 1), while heterogeneity was also observed in the rest of the five rice plots. The degree of heterogeneity was found to be higher in rice plot no. 6 than in the other plots. Subsequently, spatial variability of the paddy field was
Dynamic Stability Analysis Using High-Order Interpolation
Directory of Open Access Journals (Sweden)
Juarez-Toledo C.
2012-10-01
Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The developed High-Order Interpolation technique can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
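The two polynomial forms named above are standard textbook constructions; the following is a minimal sketch of both, not the paper's power-system formulation. The sample time points and values are invented for illustration.

# Hedged sketch: Lagrange and Newton divided-difference interpolation of a
# sampled trajectory. Both forms produce the same interpolating polynomial.
import numpy as np

def newton_divided_diff(x, y):
    """Return the divided-difference coefficients for Newton's form."""
    coef = np.array(y, dtype=float)
    for j in range(1, len(x)):
        coef[j:] = (coef[j:] - coef[j-1:-1]) / (x[j:] - x[:-j])
    return coef

def newton_eval(coef, x_nodes, t):
    """Evaluate the Newton-form polynomial at points t (Horner-like scheme)."""
    result = np.full_like(np.asarray(t, dtype=float), coef[-1])
    for c, xk in zip(coef[-2::-1], x_nodes[-2::-1]):
        result = result * (t - xk) + c
    return result

def lagrange_eval(x, y, t):
    """Direct Lagrange-form evaluation at points t."""
    t = np.asarray(t, dtype=float)
    total = np.zeros_like(t)
    for i in range(len(x)):
        basis = np.ones_like(t)
        for j in range(len(x)):
            if j != i:
                basis *= (t - x[j]) / (x[i] - x[j])
        total += y[i] * basis
    return total

x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])      # invented sample times (s)
y = np.sin(2 * np.pi * 0.3 * x)              # invented sampled values
t = np.linspace(0, 2, 9)
print(newton_eval(newton_divided_diff(x, y), x, t))
print(lagrange_eval(x, y, t))                # identical up to round-off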
Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining
International Nuclear Information System (INIS)
Li, C G; Zhang, Q R; Cao, C G; Zhao, S L
2006-01-01
Numeric control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the character curve of an aspheric surface. Its algorithm and process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method can ensure the smoothness of the interpolating spline curve and preserve the original shape characteristics. The surface quality interpolated by cubic NURBS is higher than that obtained with linear interpolation. The algorithm helps increase the surface form precision of workpieces in ultra-precision machining.
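For orientation, a cubic NURBS curve can be evaluated as the ratio of two ordinary B-splines (weighted numerator over weight denominator). The sketch below shows only this basic evaluation step; the control points, weights and knot vector are invented and the conformal fitting procedure of the paper is not reproduced.

# Hedged sketch: evaluating a cubic NURBS curve as a ratio of B-splines.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)   # clamped, 5 control points
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.5], [3.0, 1.5], [4.0, 0.0]])
weights = np.array([1.0, 0.8, 1.2, 0.8, 1.0])

# NURBS(u) = sum_i N_i(u) w_i P_i / sum_i N_i(u) w_i
numerator = BSpline(knots, weights[:, None] * ctrl, degree)
denominator = BSpline(knots, weights, degree)

u = np.linspace(0.0, 1.0, 11)
curve = numerator(u) / denominator(u)[:, None]
print(curve)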
Interpolation method for the transport theory and its application in fusion-neutronics analysis
International Nuclear Information System (INIS)
Jung, J.
1981-09-01
This report presents an interpolation method for the solution of the Boltzmann transport equation. The method is based on a flux synthesis technique using two reference-point solutions. The equation for the interpolated solution results in a Volterra integral equation which is proved to have a unique solution. As an application of the present method, tritium breeding ratio is calculated for a typical D-T fusion reactor system. The result is compared to that of a variational technique
Differential Interpolation Effects in Free Recall
Petrusic, William M.; Jamieson, Donald G.
1978-01-01
Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…
Transfinite C2 interpolant over triangles
International Nuclear Information System (INIS)
Alfeld, P.; Barnhill, R.E.
1984-01-01
A transfinite C2 interpolant on a general triangle is created. The required data are essentially C2, no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C2 interpolants in an appendix
INTAMAP: The design and implementation of an interoperable automated interpolation web service
Pebesma, E.; Cornford, D.; Dubois, G.; Heuvelink, G.B.M.; Hristopulos, D.; Pilz, J.; Stohlker, U.; Morin, G.; Skoien, J.O.
2011-01-01
INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. Requirements were (i) using open standards for spatial data such as developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and
High-spatial-resolution sub-surface imaging using a laser-based acoustic microscopy technique.
Balogun, Oluwaseyi; Cole, Garrett D; Huber, Robert; Chinn, Diane; Murray, Todd W; Spicer, James B
2011-01-01
Scanning acoustic microscopy techniques operating at frequencies in the gigahertz range are suitable for the elastic characterization and interior imaging of solid media with micrometer-scale spatial resolution. Acoustic wave propagation at these frequencies is strongly limited by energy losses, particularly from attenuation in the coupling media used to transmit ultrasound to a specimen, leading to a decrease in the depth in a specimen that can be interrogated. In this work, a laser-based acoustic microscopy technique is presented that uses a pulsed laser source for the generation of broadband acoustic waves and an optical interferometer for detection. The use of a 900-ps microchip pulsed laser facilitates the generation of acoustic waves with frequencies extending up to 1 GHz which allows for the resolution of micrometer-scale features in a specimen. Furthermore, the combination of optical generation and detection approaches eliminates the use of an ultrasonic coupling medium, and allows for elastic characterization and interior imaging at penetration depths on the order of several hundred micrometers. Experimental results illustrating the use of the laser-based acoustic microscopy technique for imaging micrometer-scale subsurface geometrical features in a 70-μm-thick single-crystal silicon wafer with a (100) orientation are presented.
Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.
2013-09-01
One of the most significant tools for studying many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating a Digital Terrain Model (DTM). DTMs have numerous applications in the fields of science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. There are several methods of interpolation, whose results depend on the environmental conditions and the input data. In this study, the usual interpolation methods, consisting of polynomials and the Inverse Distance Weighting (IDW) method, have been optimised with Genetic Algorithms (GA). Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of a Digital Elevation Model (DEM), and the accuracy of the interpolation methods is evaluated. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation method for creating elevations. The results showed that AI methods have a high potential in the interpolation of elevations; using artificial neural network algorithms for the interpolation, and optimising the IDW method with GA, allowed elevations to be estimated with high precision.
Interpolant Tree Automata and their Application in Horn Clause Verification
Directory of Open Access Journals (Sweden)
Bishoksan Kafle
2016-07-01
Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.
Data interpolation for vibration diagnostics using two-variable correlations
International Nuclear Information System (INIS)
Branagan, L.
1991-01-01
This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, are dependent both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options to develop accurate correlations. The examples illustrate the appropriate application of these options
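The two interpolation options named in the abstract are easy to illustrate. Below is a minimal sketch of aligning asynchronously acquired static data with vibration acquisition times using linear and staircase (previous-value hold) interpolation; the timestamps and temperature values are invented.

# Hedged sketch: linear vs. staircase interpolation of a static process signal
# onto the acquisition times of vibration spectra.
import numpy as np

t_static = np.array([0.0, 60.0, 120.0, 180.0, 240.0])   # process sample times (s)
temp = np.array([71.0, 71.5, 73.0, 72.8, 72.1])         # measured values (invented)
t_vib = np.array([15.0, 95.0, 170.0, 230.0])            # vibration spectrum times (s)

# Linear interpolation between the neighbouring static samples.
temp_linear = np.interp(t_vib, t_static, temp)

# Staircase interpolation: hold the most recent static sample.
idx = np.searchsorted(t_static, t_vib, side="right") - 1
temp_staircase = temp[np.clip(idx, 0, len(temp) - 1)]

print(temp_linear, temp_staircase)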
Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.
Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott
2009-05-25
Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
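As context for the interpolation step described above, the sketch below upsamples each micro-polarizer channel of a synthetic 2x2 microgrid frame bilinearly before forming Stokes images. This is a generic bilinear demosaicking illustration, not the specific strategies evaluated in the paper; the raw frame is random data.

# Hedged sketch: bilinear interpolation of microgrid intensity channels
# followed by Stokes-vector estimation.
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
raw = rng.random((8, 8))            # mosaicked intensity frame (0/45/90/135 deg pattern)

# Extract each polarization channel from its sub-lattice, then bilinearly
# upsample (order=1) back to the full detector grid.
chan = {
    0:   raw[0::2, 0::2],
    45:  raw[0::2, 1::2],
    90:  raw[1::2, 0::2],
    135: raw[1::2, 1::2],
}
full = {a: zoom(c, 2, order=1) for a, c in chan.items()}

# Stokes estimation from the co-registered, interpolated intensities.
s0 = 0.5 * (full[0] + full[45] + full[90] + full[135])
s1 = full[0] - full[90]
s2 = full[45] - full[135]
dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)   # degree of linear polarization
print(dolp.shape)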
Bi-local baryon interpolating fields with two flavors
Energy Technology Data Exchange (ETDEWEB)
Dmitrasinovic, V. [Belgrade University, Institute of Physics, Pregrevica 118, Zemun, P.O. Box 57, Beograd (RS); Chen, Hua-Xing [Institutos de Investigacion de Paterna, Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Valencia (Spain); Peking University, Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)
2011-02-15
We construct bi-local interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We use the restrictions following from the Pauli principle to derive relations/identities among the baryon operators with identical quantum numbers. Such relations that follow from the combined spatial, Dirac, color, and isospin Fierz transformations may be called the (total/complete) Fierz identities. These relations reduce the number of independent baryon operators with any given spin and isospin. We also study the Abelian and non-Abelian chiral transformation properties of these fields and place them into baryon chiral multiplets. Thus we derive the independent baryon interpolating fields with given values of spin (Lorentz group representation), chiral symmetry (U_L(2) x U_R(2) group representation) and isospin appropriate for the first angular excited states of the nucleon. (orig.)
Analysis of velocity planning interpolation algorithm based on NURBS curve
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. Firstly, a second-order Taylor expansion is applied to the curve parameter of the NURBS curve representation. Then, the velocity planning is combined with the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
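The second-order Taylor parameter update is a widely used way to advance the curve parameter per servo period at (approximately) constant feedrate. The sketch below uses a simple stand-in parametric curve and numerical derivatives rather than the paper's NURBS formulation; the feedrate and period are invented.

# Hedged sketch: second-order Taylor stepping of the curve parameter u so the
# feedrate V stays roughly constant over each interpolation period T.
import numpy as np

def curve(u):
    """Placeholder planar curve C(u); a real interpolator would evaluate the NURBS."""
    return np.array([np.cos(np.pi * u), np.sin(np.pi * u)])

def derivatives(u, h=1e-5):
    c1 = (curve(u + h) - curve(u - h)) / (2 * h)               # C'(u)
    c2 = (curve(u + h) - 2 * curve(u) + curve(u - h)) / h**2   # C''(u)
    return c1, c2

V, T = 10.0, 1e-3        # feedrate (mm/s) and interpolation period (s), invented
u, points = 0.0, []
while u < 1.0:
    points.append(curve(u))
    c1, c2 = derivatives(u)
    n1 = np.linalg.norm(c1)
    # u_{k+1} = u_k + V*T/|C'| - (V*T)^2 * (C'.C'') / (2*|C'|^4)
    u += V * T / n1 - (V * T) ** 2 * np.dot(c1, c2) / (2 * n1 ** 4)
print(len(points), "interpolated points")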
An Improved Rotary Interpolation Based on FPGA
Directory of Open Access Journals (Sweden)
Mingyu Gao
2014-08-01
Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic terms are conducive to the interpolation operation; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
Spatial resolution in depth-controlled surface sensitive x-ray techniques
International Nuclear Information System (INIS)
Yun, W.B.; Viccaro, P.J.
1992-01-01
The spatial resolution along the surface normal and the total depth probed are two important parameters in depth-controlled surface sensitive X-ray techniques employing grazing incidence geometry. The two parameters are analyzed in terms of optical properties (refractive indices) of the media involved and parameters of the incident X-ray beam: beam divergence, X-ray energy, and spectral bandwidth. We derive analytical expressions of the required beam divergence and spectral bandwidth of the incident beam as a function of the two parameters. Sample calculations are made for X-ray energies between 0.1 and 100 keV and for solid Be, Cu, and Au, representing material matrices consisting of low, medium, and high atomic number elements. A brief discussion on obtaining the required beam divergence and spectral bandwidth from present X-ray sources and optics is given
Research on spatio-temporal database techniques for spatial information service
Zhao, Rong; Wang, Liang; Li, Yuxiang; Fan, Rongshuang; Liu, Ping; Li, Qingyuan
2007-06-01
Geographic data should be described by spatial, temporal and attribute components, but spatio-temporal queries are difficult to answer within current GIS. This paper describes research into the development and application of a spatio-temporal data management system based upon the GeoWindows GIS software platform developed by the Chinese Academy of Surveying and Mapping (CASM). Facing the current practical requirements of spatial information applications, and based on the existing GIS platform, a spatio-temporal data model that integrates vector and grid data was first established. Secondly, we solved the key technique of building temporal data topology and successfully developed a suite of spatio-temporal database management tools adopting object-oriented methods. The system provides temporal data collection, data storage, data management, and data display and query functions. Finally, as a case study, we explored the application of the spatio-temporal data management system using the administrative region data of multiple historical periods of China as the basic data. With all the efforts above, the GIS capacity for managing and manipulating time and attribute data has been enhanced, and a technical reference has been provided for the further development of temporal geographic information systems (TGIS).
Energy Technology Data Exchange (ETDEWEB)
Hess, Nancy J.; Paša-Tolić, Ljiljana; Bailey, Vanessa L.; Dohnalkova, Alice C.
2017-06-01
Understanding the role played by microorganisms within soil systems is challenged by the unique intersection of physics, chemistry, mineralogy and biology in fostering habitat for soil microbial communities. To address these challenges will require observations across multiple spatial and temporal scales to capture the dynamics and emergent behavior from complex and interdependent processes. The heterogeneity and complexity of the rhizosphere require advanced techniques that press the simultaneous frontiers of spatial resolution, analyte sensitivity and specificity, reproducibility, large dynamic range, and high throughput. Fortunately many exciting technical advancements are now available to inform and guide the development of new hypotheses. The aim of this Special issue is to provide a holistic view of the rhizosphere in the perspective of modern molecular biology methodologies that enabled a highly-focused, detailed view on the processes in the rhizosphere, including numerous, strong and complex interactions between plant roots, soil constituents and microorganisms. We discuss the current rhizosphere research challenges and knowledge gaps, as well as perspectives and approaches using newly available state-of-the-art toolboxes. These new approaches and methodologies allow the study of rhizosphere processes and properties, and rhizosphere as a central component of ecosystems and biogeochemical cycles.
Reinhardt, Katja; Samimi, Cyrus
2018-01-01
While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still contains large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database indispensable for climatological research. Over the last years, new hybrid methods of machine learning and geostatistics have been developed which provide innovative prospects in spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia. Thereby, a special focus is on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim of this study is to determine whether an optimal interpolation method exists which can equally be applied to all pressure levels, or whether different interpolation methods have to be used for the different pressure levels. Deterministic (inverse distance weighting) and geostatistical interpolation methods (ordinary kriging) were explored, which take into account only the initial values of u and v. In addition, more complex methods (generalized additive model, support vector machine and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and
Interferometric interpolation of sparse marine data
Hanafy, Sherif M.
2013-10-11
We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.
Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers
Directory of Open Access Journals (Sweden)
E. George Walters III
2015-11-01
Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place (ulp) of the product uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
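The table-plus-polynomial structure behind such interpolators can be mimicked in software. The sketch below builds a piecewise quadratic approximation of 1/x on [1, 2) from per-segment Chebyshev fits; the segment count and precision target are illustrative only, not the paper's hardware design, and no coefficient truncation or exhaustive adjustment is performed.

# Hedged sketch: quadratic interpolator for f(x) = 1/x via segment lookup plus
# Chebyshev-fit polynomial evaluation.
import numpy as np
from numpy.polynomial import chebyshev as C

SEGMENTS = 64
edges = np.linspace(1.0, 2.0, SEGMENTS + 1)

# Fit a degree-2 Chebyshev series to 1/x on each segment (the coefficient table
# a hardware design would store in a ROM).
tables = [C.Chebyshev.fit(np.linspace(a, b, 32), 1.0 / np.linspace(a, b, 32), 2)
          for a, b in zip(edges[:-1], edges[1:])]

def recip(x):
    """Approximate 1/x for x in [1, 2) via table lookup + quadratic evaluation."""
    seg = min(int((x - 1.0) * SEGMENTS), SEGMENTS - 1)
    return tables[seg](x)

xs = np.linspace(1.0, 2.0 - 1e-9, 1000)
err = max(abs(recip(x) - 1.0 / x) for x in xs)
print(f"max abs error over [1,2): {err:.2e}")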
Digital x-ray tomosynthesis with interpolated projection data for thin slab objects
Ha, S.; Yun, J.; Kim, H. K.
2017-11-01
For the inspection of thin slab objects, we propose a digital tomosynthesis reconstruction that uses fewer measured projections in combination with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path lengths through an object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in the interpolated projection are the weighted sum of pixel values of the measured projections, weighted according to their projection angles. The experimental simulation shows that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.
Wan, Yongshan; Qian, Yun; Migliaccio, Kati White; Li, Yuncong; Conrad, Cecilia
2014-03-01
Most studies using multivariate techniques for pollution source evaluation are conducted in free-flowing rivers with distinct point and nonpoint sources. This study expanded on previous research to a managed "canal" system discharging into the Indian River Lagoon, Florida, where water and land management is the single most important anthropogenic factor influencing water quality. Hydrometric and land use data of four drainage basins were uniquely integrated into the analysis of 25 yr of monthly water quality data collected at seven stations to determine the impact of water and land management on the spatial variability of water quality. Cluster analysis (CA) classified seven monitoring stations into four groups (CA groups). All water quality parameters identified by discriminant analysis showed distinct spatial patterns among the four CA groups. Two-step principal component analysis/factor analysis (PCA/FA) was conducted with (i) water quality data alone and (ii) water quality data in conjunction with rainfall, flow, and land use data. The results indicated that PCA/FA of water quality data alone was unable to identify factors associated with management activities. The addition of hydrometric and land use data into PCA/FA revealed close associations of nutrients and color with land management and storm-water retention in pasture and citrus lands; total suspended solids, turbidity, and NO3 + NO2 with flow and Lake Okeechobee releases; specific conductivity with supplemental irrigation supply; and dissolved O2 with wetland preservation. The practical implication emphasizes the importance of basin-specific land and water management for ongoing pollutant loading reduction and ecosystem restoration programs. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Spatial and temporal predictions of agricultural land prices using DSM techniques.
Carré, F.; Grandgirard, D.; Diafas, I.; Reuter, H. I.; Julien, V.; Lemercier, B.
2009-04-01
Agricultural land prices strongly affect land accessibility for farmers and, by consequence, the evolution of agricultural landscapes (crop changes, land conversion to urban infrastructures…), which can lead to irreversible soil degradation. The economic value of agricultural land has been studied spatially, in each of the 374 French Agricultural Counties, and temporally, from 1995 to 2007, using data from the SAFER Institute. To this aim, agricultural land price was treated as a digital soil property. The spatial and temporal predictions were made using Digital Soil Mapping techniques combined with tools mainly used for studying temporal financial behaviour. For both predictions, a first classification of the Agricultural Counties was made for the 1995-2006 period (2007 was excluded and served as the prediction date) using fuzzy k-means clustering. The Agricultural Counties were then aggregated according to land price at the different dates. The clustering characterizes the counties by their memberships to each class centroid. The memberships were used for the spatial prediction, whereas the centroids were used for the temporal prediction. For the spatial prediction, three fourths of the 374 Agricultural Counties were used for modelling and one fourth for validation. Random sampling was done by class to ensure that all classes are represented by at least one county in the modelling and validation datasets. The prediction was made for each class by testing the relationships between the memberships and the following factors: (i) a soil variable (organic matter from the French BDAT database), (ii) soil covariates (land use classes from CORINE LANDCOVER, bioclimatic zones from the WorldClim database, landform attributes and landform classes from the SRTM, major road and hydrographic densities from EUROSTAT, average field sizes estimated by automatic classification of remotely sensed images) and (iii) socio-economic factors (population
Halim, N. Z. A.; Sulaiman, S. A.; Talib, K.; Ng, E. G.
2018-02-01
This paper explains the process carried out in identifying the relevant features of the National Digital Cadastral Database (NDCDB) for spatial analysis. The research was initially part of a larger research exercise to identify the significance of NDCDB from the legal, technical, role and land-based analysis perspectives. The research methodology of applying the Delphi technique is substantially discussed in this paper. A heterogeneous panel of 14 experts was created to determine the importance of NDCDB from the technical relevance standpoint. Three statements describing the relevant features of NDCDB for spatial analysis were established after three rounds of consensus building. They highlighted the NDCDB's characteristics such as its spatial accuracy, functions, and criteria as a facilitating tool for spatial analysis. By recognising the relevant features of NDCDB for spatial analysis in this study, practical applications of NDCDB for various analyses and purposes can be widely implemented.
Interpolation of daily rainfall using spatiotemporal models and clustering
Militino, A. F.; Ugarte, M. D.; Goicoa, T.; Genton, Marc G.
2014-06-11
Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consists of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared error by months and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1km2 grid in the whole region. © 2014 Royal Meteorological Society.
DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA
Ledbetter, F. E.
1994-01-01
Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret them. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data are increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With
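The DATASPACE idea (interpolate in log-time, resample evenly there, return to time) can be sketched in a few lines. The relaxation data below are mock values, and this is a general-purpose illustration rather than the FORTRAN program's exact routine.

# Hedged sketch: log-abscissa cubic-spline resampling so the output is
# logarithmically (increasingly) spaced in time.
import numpy as np
from scipy.interpolate import CubicSpline

# Variably spaced laboratory data: time (s) and relaxation modulus E(t) (MPa), invented.
t = np.array([0.1, 0.13, 0.2, 0.45, 1.1, 2.0, 7.5, 18.0, 55.0, 120.0])
E = 10.0 + 90.0 * np.exp(-t / 5.0)

log_t = np.log10(t)
spline = CubicSpline(log_t, E)

# Evenly spaced in log-time => increasingly spaced in time.
log_t_new = np.linspace(log_t[0], log_t[-1], 25)
t_new = 10.0 ** log_t_new
E_new = spline(log_t_new)

for ti, Ei in zip(t_new, E_new):
    print(f"{ti:10.4f}  {Ei:8.3f}")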
Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation
Murarasu, Alin
2012-12-01
The well-known power wall resulting in multi-cores requires special techniques for speeding up applications. In this sense, parallelization plays a crucial role. Besides standard serial optimizations, techniques such as input specialization can also bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed, for example, in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated in fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.
Computing Diffeomorphic Paths for Large Motion Interpolation.
Seo, Dohyung; Jeffrey, Ho; Vemuri, Baba C
2013-06-01
In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, which can be attributed mainly to the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(Ω) to the quotient space Diff(M)/Diff(M)_μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified as the unit sphere in a Hilbert space in the mathematics literature, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms, by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms, and second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.
NOAA Optimum Interpolation (OI) SST V2
National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...
Kuu plaat : Interpol Antics. Plaadid kauplusest Lasering
2005-01-01
On the records: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"
Revisiting Veerman’s interpolation method
DEFF Research Database (Denmark)
Christiansen, Peter; Bay, Niels Oluf
2016-01-01
This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.
NOAA Daily Optimum Interpolation Sea Surface Temperature
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...
Advantage of Fast Fourier Interpolation for laser modeling
International Nuclear Information System (INIS)
Epatko, I.V.; Serov, R.V.
2006-01-01
The abilities of a new algorithm, the 2-dimensional Fast Fourier Interpolation (FFI) with magnification factor (zoom) 2^n, whose purpose is to improve the spatial resolution when necessary, are analyzed in detail. The FFI procedure is useful when the diaphragm/aperture size is less than half of the current simulation scale. The computation noise due to the FFI procedure is less than 10^-6. The additional time for FFI is approximately equal to one Fast Fourier Transform execution time. For some applications using the FFI procedure, the execution time decreases by a factor of 10^4 compared with other laser simulation codes. (authors)
Integration and interpolation of sampled waveforms
International Nuclear Information System (INIS)
Stearns, S.D.
1978-01-01
Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
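The frequency-domain integration mentioned above amounts to dividing the spectrum by j2πf. The sketch below applies it to a synthetic trace (the DC bin is zeroed since integration is undefined there); it is a generic illustration, not the report's processing chain.

# Hedged sketch: frequency-domain integration of a digitized waveform.
import numpy as np

fs = 1000.0                                   # sampling rate (Hz), invented
t = np.arange(0, 1.0, 1.0 / fs)
accel = np.sin(2 * np.pi * 5.0 * t)           # pretend acceleration record

A = np.fft.rfft(accel)
f = np.fft.rfftfreq(accel.size, d=1.0 / fs)
with np.errstate(divide="ignore", invalid="ignore"):
    V = A / (1j * 2 * np.pi * f)              # divide spectrum by j*2*pi*f
V[0] = 0.0                                    # drop the undefined DC term
velocity = np.fft.irfft(V, n=accel.size)

# Analytic check: integral of sin(2*pi*5*t) is -cos(2*pi*5*t)/(2*pi*5) + const.
expected = -np.cos(2 * np.pi * 5.0 * t) / (2 * np.pi * 5.0)
print(np.max(np.abs((velocity - velocity.mean()) - (expected - expected.mean()))))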
Wideband DOA Estimation through Projection Matrix Interpolation
Selva, J.
2017-01-01
This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by factor ten in the numerical ...
Interpolation for a subclass of H
Indian Academy of Sciences (India)
|g(z_m)| ≤ c |z_m − z*_m|, ∀m ∈ N. Thus it is natural to pose the following interpolation problem for H^∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H^∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z*_n|, ∀n ∈ N, (4) there exists a product fg ∈ H^∞.
International Nuclear Information System (INIS)
Gao Wen-Wu; Wang Zhi-Gang
2014-01-01
Based on the multiquadric trigonometric B-spline quasi-interpolant, this paper proposes a meshless scheme for some partial differential equations whose solutions are periodic with respect to the spatial variable. This scheme takes into account the periodicity of the analytic solution by using derivatives of a periodic quasi-interpolant (multiquadric trigonometric B-spline quasi-interpolant) to approximate the spatial derivatives of the equations. Thus, it overcomes the difficulties of the previous schemes based on quasi-interpolation (requiring some additional boundary conditions and yielding unwanted high-order discontinuous points at the boundaries in the spatial domain). Moreover, the scheme also overcomes the difficulty of the meshless collocation methods (i.e., yielding a notorious ill-conditioned linear system of equations for large collocation points). The numerical examples that are presented at the end of the paper show that the scheme provides excellent approximations to the analytic solutions. (general)
Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI
Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.
2015-01-01
Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
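Two of the baseline schemes compared above (Euclidean and log-Euclidean interpolation) are standard and easy to sketch; the paper's LI method itself, which interpolates tensor invariants and eigenvectors separately, is not reproduced here. The diffusion tensors below are invented.

# Hedged sketch: Euclidean (EU) vs. log-Euclidean (LE) interpolation between two
# symmetric positive-definite diffusion tensors.
import numpy as np

def logm_spd(A):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(A):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

def interp_eu(A, B, t):
    """Componentwise (Euclidean) interpolation."""
    return (1 - t) * A + t * B

def interp_le(A, B, t):
    """Log-Euclidean interpolation: interpolate the logs, then exponentiate."""
    return expm_sym((1 - t) * logm_spd(A) + t * logm_spd(B))

D1 = np.diag([1.7e-3, 0.3e-3, 0.3e-3])                 # prolate tensor (mm^2/s)
D2 = np.array([[0.9e-3, 0.2e-3, 0.0],
               [0.2e-3, 0.9e-3, 0.0],
               [0.0,    0.0,    0.4e-3]])

for t in (0.0, 0.5, 1.0):
    md_eu = np.trace(interp_eu(D1, D2, t)) / 3          # mean diffusivity along the path
    md_le = np.trace(interp_le(D1, D2, t)) / 3
    print(f"t={t:.1f}  MD_EU={md_eu:.2e}  MD_LE={md_le:.2e}")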
Calculation of electromagnetic parameter based on interpolation algorithm
International Nuclear Information System (INIS)
Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan
2015-01-01
Wave-absorbing material is an important functional material for electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two different interpolation methods, Lagrange interpolation and Hermite interpolation, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixtures in a paraffin base. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and the reflectance calculated with the electromagnetic parameters obtained by interpolation is on the whole consistent with that obtained through experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation methods can predict EM parameters well when different particles are added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment.
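A minimal software comparison of the two interpolation families named above is sketched below; the permittivity-versus-frequency samples are invented, and the Hermite derivatives are estimated by finite differences rather than measured, so this is only an illustration of the comparison, not the paper's procedure.

# Hedged sketch: Lagrange vs. (cubic) Hermite interpolation of a material
# parameter sampled at a few frequencies.
import numpy as np
from scipy.interpolate import lagrange, CubicHermiteSpline

f = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])        # frequency (GHz), invented
eps_r = np.array([11.2, 9.8, 8.9, 8.3, 7.9, 7.6])     # real permittivity, invented

# Lagrange: one global polynomial through all points.
p_lag = lagrange(f, eps_r)

# Hermite: needs derivative values; estimate them with finite differences here.
d_eps = np.gradient(eps_r, f)
p_her = CubicHermiteSpline(f, eps_r, d_eps)

f_new = np.linspace(2.0, 12.0, 21)
print(np.c_[f_new, p_lag(f_new), p_her(f_new)])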
Spatial analysis techniques applied to uranium prospecting in Chihuahua State, Mexico
Hinojosa de la Garza, Octavio R.; Montero Cabrera, María Elena; Sanín, Luz H.; Reyes Cortés, Manuel; Martínez Meyer, Enrique
2014-07-01
To estimate the distribution of uranium minerals in Chihuahua, the advanced statistical model known as the Maximum Entropy Method (MaxEnt) was applied. A distinguishing feature of this method is that it can fit relatively complex models to small datasets (x and y data), as is the case for the locations of uranium ores in the State of Chihuahua. For georeferencing the uranium ores, a database from the United States Geological Survey and a workgroup of experts in Mexico was used. The main contribution of this paper is the proposal of maximum entropy techniques to obtain the mineral's potential distribution. Twenty-four environmental layers, such as topography, gravimetry, climate (WorldClim), soil properties and others, were used to project the uranium distribution across the study area. For validation of the locations predicted by the model, comparisons were made with other research by the Mexican Geological Survey, with direct exploration of specific areas, and through interviews with former exploration workers of the company "Uranio de Mexico". Results: new uranium areas predicted by the model were validated, and some relationship was found between the model predictions and geological faults. Conclusions: modelling by spatial analysis provides additional information to the energy and mineral resources sectors.
A robust spatial filtering technique for multisource localization and geoacoustic inversion.
Stotts, S A
2005-07-01
Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
Spatial Air Quality Modelling Using Chemometrics Techniques: A Case Study in Peninsular Malaysia
International Nuclear Information System (INIS)
Azman Azid; Hafizan Juahir; Mohammad Azizi Amran; Zarizal Suhaili; Mohamad Romizan Osman; Asyaari Muhamad; Asyaari Muhamad; Ismail Zainal Abidin; Nur Hishaam Sulaiman; Ahmad Shakir Mohd Saudi
2015-01-01
This study shows the effectiveness of hierarchical agglomerative cluster analysis (HACA), discriminant analysis (DA), principal component analysis (PCA), and multiple linear regression (MLR) for the assessment of air quality data and the recognition of air pollution sources. Twelve months of data (January-December 2007) from 14 stations in Peninsular Malaysia, each with 14 parameters, were used. Three significant clusters - low pollution source (LPS), moderate pollution source (MPS), and slightly high pollution source (SHPS) - were generated via HACA. Forward stepwise DA managed to discriminate eight variables, whereas backward stepwise DA discriminated nine of the fourteen variables. The PCA and FA results show that the main contributors to air pollution in Peninsular Malaysia are the combustion of fossil fuels in industrial activities, transportation and agricultural systems. Four MLR models show that PM10 is the largest contributor to Malaysian air pollution. From the study, it can be concluded that the application of chemometric techniques can disclose meaningful information on the spatial variability of large and complex air quality data. A clearer view of air quality and a novel design of an air quality monitoring network for better management of air pollution can be achieved via these methods. (author)
MODIS Snow Cover Recovery Using Variational Interpolation
Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.
2017-12-01
Cloud obscuration is one of the major problems that limit the use of satellite images in general and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational inefficiency is a main drawback that limits applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking down when applied at much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method, an ability that is obtained by maintaining the distribution of the weights set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images have high accuracy in capturing the dynamical changes of snow compared with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which provides an overview of snow trends over CONUS for nearly two decades. ACKNOWLEDGMENTS We would like to acknowledge NASA, NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), Cooperative Institute for Climate and Satellites (CICS), Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.
Interpolated sagittal and coronal reconstruction of CT images in the screening of neck abnormalities
International Nuclear Information System (INIS)
Koga, Issei
1983-01-01
Reconstructed sagittal and coronal images were analyzed for their usefulness in clinical applications and to determine the correct use of reconstruction techniques. Reconstructed stereoscopic images can be formed by continuous imaging or by interrupted imaging with interpolated reconstruction. This study showed that scans of lesions less than 10 mm in diameter should be made continuously and reconstructed with the uninterrupted technique. However, 5 mm interrupted distances are acceptable for interpolated reconstruction except in cases of lesions less than 10 mm in diameter. Clinically, interpolated reconstruction is not adequate for semicircular lesions less than 10 mm. Blood vessels and linear lesions are good candidates for the application of interpolated reconstruction. Reconstruction of images using interrupted interpolation is therefore recommended for screening and for demonstrating correct stereoscopic information, except in cases of small lesions less than 10 mm in diameter. The results of this study underscore the fact that obscure information in transverse CT images should be routinely clarified by interpolating reconstruction techniques if transverse images are not made continuously. Interpolated reconstruction may be helpful in obtaining stereoscopic information. (author)
Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware
International Nuclear Information System (INIS)
Nakata, Susumu
2008-01-01
This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.
Energy Technology Data Exchange (ETDEWEB)
Sutter, John P., E-mail: john.sutter@diamond.ac.uk; Dolbnya, Igor P.; Collins, Stephen P. [Diamond Light Source Ltd, Harwell Science and Innovation Campus, Chilton, Didcot, Oxfordshire OX11 0DE (United Kingdom); Harris, Kenneth D. M., E-mail: HarrisKDM@cardiff.ac.uk; Edwards-Gau, Gregory R.; Kariuki, Benson M. [School of Chemistry, Cardiff University, Park Place, Cardiff CF10 3AT (United Kingdom); Palmer, Benjamin A. [Department of Structural Biology, Weizmann Institute of Science, 234 Herzl St., Rehovot 7610001 (Israel)
2016-07-27
Birefringence has been observed in anisotropic materials transmitting linearly polarized X-ray beams tuned close to an absorption edge of a specific element in the material. Synchrotron bending magnets provide X-ray beams of sufficiently high brightness and cross section for spatially resolved measurements of birefringence. The recently developed X-ray Birefringence Imaging (XBI) technique has been successfully applied for the first time using the versatile test beamline B16 at Diamond Light Source. Orientational distributions of the C–Br bonds of brominated “guest” molecules within crystalline “host” tunnel structures (in thiourea or urea inclusion compounds) have been studied using linearly polarized incident X-rays near the Br K-edge. Imaging of domain structures, changes in C–Br bond orientations associated with order-disorder phase transitions, and the effects of dynamic averaging of C–Br bond orientations have been demonstrated. The XBI setup uses a vertically deflecting high-resolution double-crystal monochromator upstream from the sample and a horizontally deflecting single-crystal polarization analyzer downstream, with a Bragg angle as close as possible to 45°. In this way, the ellipticity and rotation angle of the polarization of the beam transmitted through the sample is measured as in polarizing optical microscopy. The theoretical instrumental background calculated from the elliptical polarization of the bending-magnet X-rays, the imperfect polarization discrimination of the analyzer, and the correlation between vertical position and photon energy introduced by the monochromator agrees well with experimental observations. The background is calculated analytically because the region of X-ray phase space selected by this setup is sampled inefficiently by standard methods.
Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong
2016-01-01
Based on geo-statistical theory and the ArcGIS geo-statistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary kriging, simple kriging and universal kriging) were used to interpolate groundwater levels between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R2) were applied to evaluate the accuracy of the different methods. The results show that the simple kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation; the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
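Ordinary kriging of scattered well observations onto a grid can be sketched with the third-party pykrige package (assumed to be installed); the well coordinates and heads below are invented and the variogram settings are illustrative, not those used in the ArcGIS workflow above.

# Hedged sketch: ordinary kriging of groundwater levels at scattered wells.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
x = rng.uniform(0, 10_000, 30)                         # well easting (m), invented
y = rng.uniform(0, 10_000, 30)                         # well northing (m), invented
head = 40.0 - 0.001 * x + rng.normal(0, 0.5, 30)       # groundwater level (m), invented

ok = OrdinaryKriging(x, y, head, variogram_model="spherical",
                     verbose=False, enable_plotting=False)

gridx = np.linspace(0, 10_000, 50)
gridy = np.linspace(0, 10_000, 50)
z_interp, variance = ok.execute("grid", gridx, gridy)  # kriged surface + kriging variance
print(z_interp.shape, float(variance.mean()))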
Edge-detect interpolation for direct digital periapical images
International Nuclear Information System (INIS)
Song, Nam Kyu; Koh, Kwang Joon
1998-01-01
The purpose of this study was to aid the use of direct digital periapical images by means of edge-detect interpolation. The study was performed by processing 20 digital periapical images with pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect on the edges. 3. Edge-sensitive interpolation overcame the smoothing effect on the edges and produced a better image.
Interpolating precipitation and its relation to runoff and non-point source pollution.
Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L
2005-01-01
When rainfall spatially varies, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also differences between the elevations of the region with no rainfall records and of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relationship between the relative error of the predicted runoff and predicted pollutant loading of SS is high. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted pollutant concentration of SS may be unstable.
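A minimal sketch of the idea behind the modified inverse distance method described above: the weighting distance combines the horizontal separation with the elevation difference between the ungauged location and each rain gauge. The exact weighting used in the paper is not reproduced here; the elevation scaling factor `k` and all station values are illustrative assumptions.

```python
import numpy as np

def modified_idw(xy_gauges, elev_gauges, rain_gauges, xy0, elev0, power=2.0, k=100.0):
    """IDW in which the separation combines horizontal distance with a scaled
    elevation difference (k converts metres of elevation into an equivalent
    horizontal distance; its value here is purely illustrative)."""
    dx = np.hypot(xy_gauges[:, 0] - xy0[0], xy_gauges[:, 1] - xy0[1])
    dz = np.abs(elev_gauges - elev0)
    d_eff = np.sqrt(dx**2 + (k * dz)**2)        # effective 3-D separation
    if np.any(d_eff == 0):
        return rain_gauges[np.argmin(d_eff)]
    w = 1.0 / d_eff**power
    return np.sum(w * rain_gauges) / np.sum(w)

# Three hypothetical rain gauges around an ungauged mountain site.
gauges_xy = np.array([[0.0, 0.0], [8000.0, 1000.0], [3000.0, 9000.0]])
gauges_elev = np.array([120.0, 450.0, 900.0])          # m a.s.l.
gauges_rain = np.array([12.0, 25.0, 48.0])             # mm
print(modified_idw(gauges_xy, gauges_elev, gauges_rain,
                   xy0=(4000.0, 4000.0), elev0=700.0))
```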
Schwibbe, Anja; Kothe, Christian; Hampe, Wolfgang; Konradt, Udo
2016-01-01
Sixty years of research have not added up to a concordant evaluation of the influence of spatial and manual abilities on dental skill acquisition. We used Ackerman's theory of ability determinants of skill acquisition to explain the influence of spatial visualization and manual dexterity on the task performance of dental students in two…
Directory of Open Access Journals (Sweden)
Huaiqing Zhang
2014-01-01
Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis under asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. First, the fundamental period is estimated by a zero-crossing technique with fourth-order Newton interpolation; then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters are calculated by applying the FFT to the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared with local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, RBF is a global approximation method that yields more accurate results while its computation time is about the same as Newton's.
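The resampling step of such a quasi-synchronous scheme can be sketched as follows, assuming SciPy's RBFInterpolator as the radial basis interpolant (the paper's own RBF formulation and its fourth-order Newton zero-crossing refinement are not reproduced; the test signal and rates are illustrative).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

fs = 2_560.0                                  # nominal (asynchronous) sampling rate, Hz
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50.3 * t) + 0.2 * np.sin(2 * np.pi * 3 * 50.3 * t)

# 1) crude fundamental-period estimate from rising zero crossings
sb = np.signbit(x)                            # True where a sample is negative
idx = np.where(sb[:-1] & ~sb[1:])[0]          # negative-to-nonnegative transitions
t_zero = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])
period = np.mean(np.diff(t_zero))

# 2) resample an integer number of periods onto a synchronous grid with an RBF
n_periods, n_per_period = 8, 128
t_sync = t_zero[0] + np.arange(n_periods * n_per_period) * period / n_per_period
rbf = RBFInterpolator(t.reshape(-1, 1), x, kernel="thin_plate_spline")
x_sync = rbf(t_sync.reshape(-1, 1))

# 3) harmonic amplitudes from the FFT of the synchronised record
spectrum = np.abs(np.fft.rfft(x_sync)) / len(x_sync) * 2
print(spectrum[n_periods], spectrum[3 * n_periods])   # fundamental and 3rd-harmonic bins
```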
Validation of China-wide interpolated daily climate variables from 1960 to 2011
Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang
2015-02-01
Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of the interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R²) varied from 0.65 to 0.90, and the interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R², and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83%. Moreover, the interpolated data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95% of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77%. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58% of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration based
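As a hedged illustration of thin-plate spline gridding of station data (the operational trivariate splines used for national climate datasets are considerably more elaborate), the sketch below interpolates synthetic station temperatures onto a regular grid with SciPy's RBFInterpolator; all station locations and values are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
# Synthetic "stations": (longitude, latitude) and a daily mean temperature
stations = np.column_stack([rng.uniform(80, 120, 200), rng.uniform(20, 50, 200)])
temps = 30.0 - 0.6 * (stations[:, 1] - 20) + rng.normal(0, 0.5, 200)

# Thin-plate spline with light smoothing so the surface need not honour noisy data exactly
tps = RBFInterpolator(stations, temps, kernel="thin_plate_spline", smoothing=1.0)

lon, lat = np.meshgrid(np.linspace(80, 120, 160), np.linspace(20, 50, 120))
grid_pts = np.column_stack([lon.ravel(), lat.ravel()])
temp_grid = tps(grid_pts).reshape(lat.shape)     # gridded daily temperature field
print(temp_grid.shape, temp_grid.min(), temp_grid.max())
```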
Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex
2014-08-01
A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineates the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli
Yuval; Rimon, Y.; Graber, E. R.; Furman, A.
2013-07-01
A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineates the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. Local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. That inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli
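The local inverse distance weighting with inclusion zones described in the two entries above can be sketched as follows; the elliptical semi-axes, grid spacing and synthetic concentrations are illustrative assumptions, not values from the study. Grid nodes with no observation inside their inclusion zone are left unassigned, which is the accuracy-versus-coverage trade-off the authors discuss.

```python
import numpy as np

def idw_with_inclusion_zone(obs_xy, obs_c, grid_xy, a=500.0, b=200.0, power=2.0):
    """Local IDW: only observations inside an ellipse with semi-axes (a, b),
    centred on each grid node, contribute; nodes with no such observations
    stay NaN (no coverage)."""
    out = np.full(len(grid_xy), np.nan)
    for k, (gx, gy) in enumerate(grid_xy):
        dx, dy = obs_xy[:, 0] - gx, obs_xy[:, 1] - gy
        inside = (dx / a) ** 2 + (dy / b) ** 2 <= 1.0
        if not np.any(inside):
            continue
        d = np.hypot(dx[inside], dy[inside])
        if np.any(d == 0):
            out[k] = obs_c[inside][np.argmin(d)]
        else:
            w = 1.0 / d ** power
            out[k] = np.sum(w * obs_c[inside]) / np.sum(w)
    return out

# Mostly-clean aquifer with one polluted patch (synthetic concentrations, mg/L)
rng = np.random.default_rng(2)
obs = rng.uniform(0, 5000, size=(120, 2))
conc = np.where(np.hypot(obs[:, 0] - 1000, obs[:, 1] - 1000) < 600,
                rng.lognormal(3, 1, 120), 0.0)
gx, gy = np.meshgrid(np.arange(0, 5000, 250), np.arange(0, 5000, 250))
grid = np.column_stack([gx.ravel(), gy.ravel()])
c_map = idw_with_inclusion_zone(obs, conc, grid)
print(np.isnan(c_map).mean())        # fraction of the grid left without a value
```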
Optimal Interpolation scheme to generate reference crop evapotranspiration
Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco
2018-05-01
We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, the forcing meteorological variables, and their respective error variances for the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically based climate model. Five OI schemes were used to generate grids for the five observed climate variables needed to compute ETo with the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions and provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between the quantity and quality of observations.
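For readers unfamiliar with Optimal Interpolation, the analysis step combines a model background x_b with observations y through the gain K = B H^T (H B H^T + R)^{-1}, giving x_a = x_b + K (y - H x_b). The one-dimensional toy example below illustrates this update with synthetic fields and covariances; it is not the configuration used for the Iberian ETo grids.

```python
import numpy as np

n = 50                                   # model grid points
x_grid = np.linspace(0.0, 1.0, n)
x_b = np.sin(2 * np.pi * x_grid)         # background field from a climate model

obs_idx = np.array([5, 20, 33, 47])      # grid cells holding a station
H = np.zeros((len(obs_idx), n)); H[np.arange(len(obs_idx)), obs_idx] = 1.0
y = np.sin(2 * np.pi * x_grid[obs_idx]) + np.array([0.15, -0.1, 0.05, 0.2])

# Background error covariance with a Gaussian correlation (length scale L),
# observation errors uncorrelated; all variances are illustrative.
L, sig_b, sig_o = 0.1, 0.3, 0.1
B = sig_b**2 * np.exp(-0.5 * ((x_grid[:, None] - x_grid[None, :]) / L) ** 2)
R = sig_o**2 * np.eye(len(obs_idx))

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                  # analysis
err_a = B - K @ H @ B                          # analysis error covariance
print(np.sqrt(np.diag(err_a)).mean())          # mean analysis uncertainty
```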
Discrete Orthogonal Transforms and Neural Networks for Image Interpolation
Directory of Open Access Journals (Sweden)
J. Polec
1999-09-01
Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st- and 2nd-order interpolation, and several new discrete orthogonal transforms for interpolation are presented. From the neural network point of view, we present the interpolation possibilities of multilayer perceptrons, using various configurations of neural networks for 1st- and 2nd-order interpolation. The results are compared by means of tables.
International Nuclear Information System (INIS)
Ruprecht, V.
2010-01-01
Fluorescence microscopy techniques are currently among the most important experimental tools to study cellular processes. Ultra-sensitive detection devices nowadays allow for measuring even individual farnesylacetate labeled target molecules with nanometer spatial accuracy and millisecond time resolution. The emergence of single molecule fluorescence techniques especially contributed to the field of membrane biology and provided basic knowledge on structural and dynamic features of the cellular plasma membrane. However, we are still confronted with a rather fragmentary understanding of the complex architecture and functional interrelations of membrane constituents. In this thesis new concepts in one- and dual-color single molecule fluorescence techniques are presented that allow for addressing organization principles and interaction dynamics in the live cell plasma membrane. Two complementary experimental strategies are described which differ in their detection principle: single molecule fluorescence imaging and fluorescence correlation spectroscopy. The presented methods are discussed in terms of their implementation, accuracy, quantitative and statistical data analysis, as well as live cell applications. State-of-the-art dual color single molecule imaging is introduced as the most direct experimental approach to study interaction dynamics between differently labeled target molecules. New analytical estimates for robust data analysis are presented that facilitate quantitative recording and identification of co localizations in dual color single molecule images. A novel dual color illumination scheme is further described that profoundly extends the current range and sensitivity of conventional dual color single molecule experiments. The method enables working at high surface densities of fluorescent molecules - a feature typically incommensurable with single molecule imaging - and is especially suited for the detection of rare interactions by tracking co localized
Bhardwaj, Kaushal; Patra, Swarnajyoti
2018-04-01
The inclusion of spatial information along with spectral features plays a significant role in the classification of remote sensing images. Attribute profiles have already proved their ability to represent spatial information. In order to incorporate proper spatial information, multiple attributes are required, and for each attribute large profiles need to be constructed by varying the filter parameter values within a wide range. Thus, the constructed profiles that represent the spectral-spatial information of a hyperspectral image have huge dimensionality, which leads to the Hughes phenomenon and increases the computational burden. To mitigate these problems, this work presents an unsupervised feature selection technique that selects, from the constructed high-dimensional multi-attribute profile, a subset of filtered images that is sufficiently informative to discriminate well among classes. To this end the proposed technique exploits genetic algorithms (GAs), whose fitness function is defined in an unsupervised way with the help of mutual information. The effectiveness of the proposed technique is assessed using a one-against-all support vector machine classifier. Experiments conducted on three hyperspectral data sets show the robustness of the proposed method in terms of computation time and classification accuracy.
Summary on several key techniques in 3D geological modeling.
Mei, Gang
2014-01-01
Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
New families of interpolating type IIB backgrounds
Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto
2010-04-01
We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are $\mathbb{T}^2$ fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either $\mathcal{N}=2$ or $\mathcal{N}=1$ supersymmetry. In the $\mathcal{N}=2$ case it can be shown that the solutions are regular.
Quadratic Interpolation and Linear Lifting Design
Directory of Open Access Journals (Sweden)
Joel Solé
2007-03-01
Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.
Optimized Quasi-Interpolators for Image Reconstruction.
Sacht, Leonardo; Nehab, Diego
2015-12-01
We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
Wall, Conrad., III
1999-01-01
In addition to adapting to microgravity, major neurovestibular problems of space flight include postflight difficulties with standing, walking, turning corners, and other activities that require stable upright posture and gaze stability. These difficulties inhibit astronauts' ability to stand or escape from their vehicle during emergencies. The long-term goal of the NSBRI is the development of countermeasures to ameliorate the effects of long duration space flight. These countermeasures must be tested with valid and reliable tools. This project aims to develop quantitative, parametric approaches for assessing gaze stability and spatial orientation during normal gait and when gait is perturbed. Two of this year's most important findings concern head fixation distance and ideal trajectory analysis. During a normal cycle of walking the head moves up and down linearly. A simultaneous angular pitching motion of the head keeps it aligned toward an imaginary point in space at a distance of about one meter in front of a subject and along the line of march. This distance is called the head fixation distance. Head fixation distance provides the fundamental framework necessary for understanding the functional significance of the vestibular reflexes that couple head motion to eye motion. This framework facilitates the intelligent design of countermeasures for the effects of exposure to microgravity upon the vestibulo-ocular reflexes. Ideal trajectory analysis is a simple candidate countermeasure based upon quantifying body sway during repeated up and down stair stepping. It provides one number that estimates the body sway deviation from an ideal sinusoidal body sway trajectory normalized to the subject's height. This concept has been developed with NSBRI funding in less than one year. These findings are explained in more detail below. Compared to assessments of the vestibulo-ocular reflex, analysis of vestibular effects on locomotor function is relatively less well developed
Positivity Preserving Interpolation Using Rational Bicubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2015-01-01
Full Text Available This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. Sufficient conditions for positivity are derived on the network of four boundary curves of every rectangular patch. A detailed numerical comparison with existing schemes has also been carried out. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.
Interpolation algorithm for asynchronous ADC-data
Directory of Open Access Journals (Sweden)
S. Bramburger
2017-09-01
Full Text Available This paper presents a modified interpolation algorithm for signals with a variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data with constant and linear sections, together with a weighted overlap of signals transformed step by step into the spectral domain, improves the reconstruction of the asynchronous ADC signal. The interpolation method can be used when asynchronous ADC data is fed into synchronous digital signal processing.
Spatial interpolation of gamma dose in radioactive waste storage facility
Harun, Nazran; Fathi Sujan, Muhammad; Zaidi Ibrahim, Mohd
2018-01-01
External radiation measurement for a radioactive waste storage facility at the Malaysian Nuclear Agency is part of the Class G licence requirement under the Atomic Energy Licensing Board (AELB). The objectives of this paper are to obtain the distribution of the radiation dose, create a dose database and generate a dose map of the storage facility. The radiation dose measurement is important to fulfil the radiation protection requirement and ensure the safety of the workers. A total of 118 sampling points were recorded in the storage facility. The highest and lowest external radiation readings recorded were 651 microSv/h and 0.648 microSv/h, respectively. The calculated annual dose shows highest and lowest readings of 1302 mSv/year and 1.3 mSv/year, while the highest and lowest effective dose readings are 260.4 mSv/year and 0.26 mSv/year. The results show that the ALARA concept, together with the time, distance and shielding principles, shall be adopted to ensure that the workers' dose is kept below the AELB dose limit of 20 mSv/year for radiation workers. This study is important for improving the planning and the development of the shielding design for the facility.
Trends in Continuity and Interpolation for Computer Graphics.
Gonzalez Garcia, Francisco
2015-01-01
In every computer graphics oriented application today, it is a common practice to texture 3D models as a way to obtain realistic material. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields in computer graphics. The article presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).
Tahara, Tatsuki; Kaku, Toru; Arai, Yasuhiko
2014-12-01
Single-shot digital holography based on multiwavelength spatial-bandwidth-extended capturing-technique using a reference arm (Multi-SPECTRA) is proposed. Both amplitude and quantitative phase distributions of waves containing multiple wavelengths are simultaneously recorded with a single reference arm in a single monochromatic image. Then, multiple wavelength information is separately extracted in the spatial frequency domain. The crosstalk between the object waves with different wavelengths is avoided and the number of wavelengths recorded with both a single-shot exposure and no crosstalk can be increased, by a large spatial carrier that causes the aliasing, and/or by use of a grating. The validity of Multi-SPECTRA is quantitatively, numerically, and experimentally confirmed.
Directory of Open Access Journals (Sweden)
Qin Guo-jie
2014-08-01
Full Text Available Sample-time errors can greatly degrade the dynamic range of a time-interleaved sampling system. In this paper, a novel correction technique employing a cubic spline interpolation is proposed for inter-channel sample-time error compensation. The cubic spline interpolation compensation filter is developed in the form of a finite-impulse response (FIR filter structure. The correction method of the interpolation compensation filter coefficients is deduced. A 4GS/s two-channel, time-interleaved ADC prototype system has been implemented to evaluate the performance of the technique. The experimental results showed that the correction technique is effective to attenuate the spurious spurs and improve the dynamic performance of the system.
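The correction idea can be illustrated outside the FIR formulation: the skewed channel's samples are treated as taken at their true (offset) instants, a cubic spline is fitted through them, and the spline is evaluated at the nominal instants. The sketch below uses SciPy's CubicSpline on a synthetic two-channel 4 GS/s system; the skew value and test tone are assumptions, and the paper's coefficient-correction procedure is not reproduced.

```python
import numpy as np
from scipy.interpolate import CubicSpline

fs = 4.0e9                          # aggregate rate of the two-channel TIADC, Hz
ts = 1.0 / fs
n = 4096
skew = 0.05 * (2 * ts)              # channel-B timing error: 5% of its own period

t_nom = np.arange(n) * ts
x = lambda t: np.sin(2 * np.pi * 97e6 * t)           # illustrative test tone

# Interleaved acquisition: even samples from channel A, odd samples from a
# channel B whose clock is late by `skew`.
acquired = np.empty(n)
acquired[0::2] = x(t_nom[0::2])
acquired[1::2] = x(t_nom[1::2] + skew)

# Correction: fit a spline to channel B at its actual sample instants and
# re-evaluate it at the nominal instants.
spline_b = CubicSpline(t_nom[1::2] + skew, acquired[1::2])
corrected = acquired.copy()
corrected[1::2] = spline_b(t_nom[1::2])

print(np.max(np.abs(acquired - x(t_nom))),      # error before correction
      np.max(np.abs(corrected - x(t_nom))))     # error after correction
```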
Khatibi, Siamak; Allansson, Louise; Gustavsson, Tomas; Blomstrand, Fredrik; Hansson, Elisabeth; Olsson, Torsten
1999-05-01
Cell volume changes are often associated with important physiological and pathological processes in the cell. These changes may be the means by which the cell interacts with its surrounding. Astroglial cells change their volume and shape under several circumstances that affect the central nervous system. Following an incidence of brain damage, such as a stroke or a traumatic brain injury, one of the first events seen is swelling of the astroglial cells. In order to study this and other similar phenomena, it is desirable to develop technical instrumentation and analysis methods capable of detecting and characterizing dynamic cell shape changes in a quantitative and robust way. We have developed a technique to monitor and to quantify the spatial and temporal volume changes in a single cell in primary culture. The technique is based on two- and three-dimensional fluorescence imaging. The temporal information is obtained from a sequence of microscope images, which are analyzed in real time. The spatial data is collected in a sequence of images from the microscope, which is automatically focused up and down through the specimen. The analysis of spatial data is performed off-line and consists of photobleaching compensation, focus restoration, filtering, segmentation and spatial volume estimation.
Seidel, Wolfgang
2004-01-01
In photothermal beam deflection spectroscopy (PTBD) generating and detection of thermal waves occur generally in the sub-millimeter length scale. Therefore, PTBD provides spatial information about the surface of the sample and permits imaging and/or microspectrometry. Recent results of PTBD experiments are presented with a high spatial resolution which is near the diffraction limit of the infrared pump beam (CLIO-FEL). We investigated germanium substrates showing restricted O+-doped regions with an infrared absorption line at a wavelength around 11.6 microns. The spatial resolution was obtained by strongly focusing the probe beam (i.e. a HeNe laser) on a sufficiently small spot. The strong divergence makes it necessary to refocus the probe beam in front of the position detector. The influence of the focusing elements on spatial resolution and signal-to-noise ratio is discussed. In future studies we expect an enhanced spatial resolution due to an extreme focusing of the probe beam leading to a highly sensitive...
Accurate B-spline-based 3-D interpolation scheme for digital volume correlation
Ren, Maodong; Liang, Jin; Wei, Bin
2016-12-01
An accurate and efficient 3-D interpolation scheme, based on the sampling theorem and the Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the factors influencing the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth, filter) in the Fourier domain. It is found that the positional error of a filter can be expressed as a function of fractional position and wave number. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and a least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. In addition, since each volumetric image contains different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain wave number ranges based on Fourier spectrum analysis. Finally, novel software is developed and a series of validation experiments is carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
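Standard cubic B-spline interpolation of a volume at sub-voxel positions, which the optimized filter above is designed to improve upon, can be sketched with SciPy's map_coordinates (order 3 with the recursive spline prefilter). The volume and the sub-voxel shift below are synthetic stand-ins.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Synthetic speckle volume standing in for a tomographic image
rng = np.random.default_rng(3)
vol = rng.random((64, 64, 64))

# Sub-voxel sampling positions of a small subvolume shifted by (0.3, -0.2, 0.6) voxels
zz, yy, xx = np.mgrid[20:30, 20:30, 20:30].astype(float)
coords = np.array([zz + 0.3, yy - 0.2, xx + 0.6])

# order=3 gives cubic B-spline interpolation; prefilter=True applies the
# recursive spline prefilter so the interpolant passes through the voxel values.
sub = map_coordinates(vol, coords, order=3, prefilter=True)
print(sub.shape, sub.mean())
```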
ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION
Directory of Open Access Journals (Sweden)
Daniel Arana
Full Text Available Abstract: The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys and therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated its efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting information from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of the geoid undulations and, consequently, of the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of geoid models. The analysis consisted of a comparison between the interpolation of the MAPGEO2015 program and three interpolation methods: bilinear, cubic spline and Radial Basis Function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolation on the 5'x5' grid.
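A hedged sketch of the simplest of the compared interpolators, bilinear look-up of geoid undulation from a regular 5' x 5' grid, using SciPy's RegularGridInterpolator. The grid values and the sample point below are synthetic; the MAPGEO2015 grid itself is not used.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Synthetic 5' x 5' grid of geoid undulations N(lat, lon), in metres
lat = np.arange(-35.0, 5.0 + 1e-9, 5.0 / 60.0)
lon = np.arange(-75.0, -30.0 + 1e-9, 5.0 / 60.0)
LAT, LON = np.meshgrid(lat, lon, indexing="ij")
N_grid = -15.0 + 0.2 * LAT + 0.1 * LON + 2.0 * np.sin(np.radians(3 * LAT))

bilinear = RegularGridInterpolator((lat, lon), N_grid, method="linear")

# Undulation at an arbitrary GNSS point, then orthometric height H = h - N
point = np.array([[-22.9068, -43.1729]])        # (lat, lon) of an example station
N_pt = bilinear(point)[0]
h_ellipsoidal = 25.431                           # metres, illustrative
print(N_pt, h_ellipsoidal - N_pt)
```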
Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.
Sidek, Khairul Azami; Khalil, Ibrahim
2013-01-01
Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency are proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. The experimental results suggest that biometric matching with interpolated ECG data achieved, on average, a higher matching percentage of up to 4% for CC, 3% for PRD and 94% for WDM, compared with the existing method using ECG recordings with the lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improves the results by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, a higher classification accuracy of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data, compared with up to 97.2% without interpolation, verifies the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
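The interpolation step alone (not the feature extraction or matching) can be sketched as follows with SciPy's PchipInterpolator and CubicSpline; the sampling rates and the synthetic beat train are illustrative, not data from the cited databases.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

fs_low, fs_high = 128, 500            # Hz (illustrative rates)
t_low = np.arange(0, 1.0, 1.0 / fs_low)

# Crude synthetic ECG-like beat train standing in for a real recording
ecg = (np.exp(-((t_low % 0.8 - 0.3) ** 2) / 0.0005)        # R-wave-like spikes
       + 0.1 * np.sin(2 * np.pi * 1.25 * t_low))

t_high = np.arange(0, t_low[-1], 1.0 / fs_high)
ecg_pchip = PchipInterpolator(t_low, ecg)(t_high)    # shape-preserving, no overshoot
ecg_spline = CubicSpline(t_low, ecg)(t_high)         # smoother, may overshoot at spikes

print(np.abs(ecg_pchip - ecg_spline).max())
```

PCHIP is often preferred where overshoot around sharp deflections such as the R wave would be harmful, while the cubic spline gives a smoother (C2) curve; which behaviour matters most depends on the downstream matching method.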
Claverie, Martin; Vermote, Eric; Franch, Belen; He, Tao; Hagolle, Olivier; Kadiri, Mohamed; Masek, Jeff
2015-01-01
High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high spatial resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR Time Series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-...
Multiscale empirical interpolation for solving nonlinear PDEs
Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi
2014-01-01
residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of the proposed work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic-interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains results similar to or better than NARM while requiring only 0.3% of its computation time.
Spectral Compressive Sensing with Polar Interpolation
DEFF Research Database (Denmark)
Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco
2013-01-01
. In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...
Musical Applications and Design Techniques for the Gametrak Tethered Spatial Position Controller
DEFF Research Database (Denmark)
Freed, Adrian; Overholt, Daniel; Hansen, Anne-Marie
2009-01-01
The Gametrak spatial position controller has been saved from the fate of so many discontinued gaming controllers to become an attractive and increasingly popular platform for experimental musical controllers, math and science manipulatives, large scale interactive installations and as a playful...... tangible gaming interface that promotes inter-generational creative play and discovery . After introducing the peculiarities of the GameTrak and comparing it to related spatial position sensing systems we survey musical applications of the device. The short paper format cannot do justice to the depth...
Hybrid kriging methods for interpolating sparse river bathymetry point data
Directory of Open Access Journals (Sweden)
Pedro Velloso Gomes Batista
Full Text Available Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK) and co-kriging (CK), employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of an (x, y) point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK). The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
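Regression kriging as described above amounts to a deterministic trend on the covariate plus ordinary kriging of the residuals. The sketch below follows that recipe on synthetic cross-section data, assuming the third-party pykrige package for the ordinary kriging step; the variogram model, covariate definition and all values are illustrative, not the study's configuration.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # third-party package, assumed available

rng = np.random.default_rng(4)
# Synthetic cross-section survey: (x, y) in metres, d = distance to centerline,
# z = bed elevation that deepens towards the thalweg, plus spatial noise
x = rng.uniform(0, 2000, 300)
y = rng.uniform(0, 200, 300)
d = np.abs(y - 100.0)                              # orthogonal distance to centerline
z = 95.0 + 0.03 * d + 0.001 * x + rng.normal(0, 0.2, 300)

# 1) deterministic trend: ordinary least squares of elevation on the covariate
A = np.column_stack([np.ones_like(d), d])
beta, *_ = np.linalg.lstsq(A, z, rcond=None)
resid = z - A @ beta

# 2) ordinary kriging of the regression residuals
ok = OrdinaryKriging(x, y, resid, variogram_model="spherical")

# 3) prediction at unsampled points = trend + kriged residual
xp, yp = np.array([500.0, 1500.0]), np.array([100.0, 60.0])
dp = np.abs(yp - 100.0)
resid_hat, _ = ok.execute("points", xp, yp)
z_hat = beta[0] + beta[1] * dp + resid_hat
print(np.asarray(z_hat))
```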
Efficient Enhancement for Spatial Scalable Video Coding Transmission
Directory of Open Access Journals (Sweden)
Mayada Khairy
2017-01-01
Full Text Available Scalable Video Coding (SVC) is an international standard technique for video compression. It is an extension of H.264 Advanced Video Coding (AVC). In the encoding of video streams by SVC, it is suitable to employ the macroblock (MB) mode because it affords superior coding efficiency. However, the exhaustive mode decision technique that is usually used for SVC increases the computational complexity, resulting in a longer encoding time (ET). Many other algorithms were proposed to solve this problem, but at the cost of increased transmission time (TT) across the network. To minimize the ET and TT, this paper introduces four efficient algorithms based on spatial scalability. The algorithms utilize the mode-distribution correlation between the base layer (BL) and enhancement layers (ELs) and interpolation between the EL frames. The proposed algorithms fall into two categories. Those of the first category are based on interlayer residual SVC spatial scalability. They employ two methods, namely interlayer interpolation (ILIP) and the interlayer base mode (ILBM) method, and enable ET and TT savings of up to 69.3% and 83.6%, respectively. The algorithms of the second category are based on full-search SVC spatial scalability. They utilize two methods, namely full interpolation (FIP) and the full-base mode (FBM) method, and enable ET and TT savings of up to 55.3% and 76.6%, respectively.
International Nuclear Information System (INIS)
Williams, M.L.; Engle, W.W.
1977-01-01
A method is introduced for determining streaming paths through a non-multiplying medium. The concepts of a "response continuum" and a pseudo-particle called a contribution are developed to describe the spatial channels through which response flows from a source to a detector. An example application of channel theory to complex shield analysis is cited.
Modelling the potential spatial distribution of mosquito species using three different techniques
Cianci, D.; Hartemink, N.; Ibáñez-Justicia, A.
2015-01-01
Background: Models for the spatial distribution of vector species are important tools in the assessment of the risk of establishment and subsequent spread of vector-borne diseases. The aims of this study are to define the environmental conditions suitable for several mosquito species through species
Directory of Open Access Journals (Sweden)
Jiahu Zhao
2011-11-01
Full Text Available Aquatic ecoregions are increasingly used as spatial units for aquatic ecosystem management at the watershed scale. In this paper, the principles of including land area, comprehensiveness and dominance, conjugation and hierarchy were selected as regionalizing principles. Elevation and drainage density were selected as the regionalizing indicators for the delineation of level I aquatic ecoregions, and percent of construction land area, percent of cultivated land area, soil type and slope for level II. With the support of GIS technology, the spatial distribution maps of the two indicators for level I and the four indicators for level II aquatic ecoregion delineation were generated from the raster data based on the 1,107 subwatersheds. The river subbasin taxonomy concept, a two-step spatial clustering analysis approach and a manual-assisted method were used to regionalize aquatic ecosystems in the Taihu Lake watershed. The Taihu Lake watershed was thus divided into two level I aquatic ecoregions, Ecoregion I1 and Ecoregion I2, and five level II aquatic subecoregions, Subecoregion II11, Subecoregion II12, Subecoregion II21, Subecoregion II22 and Subecoregion II23. Moreover, the characteristics of the two level I aquatic ecoregions and five level II aquatic subecoregions in the Taihu Lake watershed were summarized, showing significant differences in topography, socio-economic development, water quality and aquatic ecology. The results of a quantitative comparison of aquatic life also indicated that the dominant fish species, benthic density, biomass, dominant species, Shannon-Wiener diversity index, Margalef species richness index, Pielou evenness index and ecological dominance showed great spatial variability between the two level I aquatic ecoregions and the five level II aquatic subecoregions. This reflects the spatial heterogeneity and uneven nature of aquatic ecosystems in the Taihu Lake watershed.
MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms
Allred, Joel
2012-01-01
Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.
A New Interpolation Approach for Linearly Constrained Convex Optimization
Espinoza, Francisco
2012-08-01
In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on a generalization of Shepard's interpolation formula. We prove properties of the surface such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement several versions of the method in the Matlab language, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and against implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.
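For context, the classic (global) Shepard interpolation formula that the thesis generalizes is a distance-weighted mean of the data values. The sketch below shows only this basic formula, not the thesis's generalization or its use inside a constrained optimizer.

```python
import numpy as np

def shepard(x_nodes, f_nodes, x_query, p=2.0):
    """Classic global Shepard interpolation: a weighted mean of the data with
    weights ||x - x_i||^(-p); it reproduces f_i exactly at the nodes."""
    d = np.linalg.norm(x_nodes - x_query, axis=1)
    if np.any(d == 0):
        return f_nodes[np.argmin(d)]
    w = d ** (-p)
    return np.sum(w * f_nodes) / np.sum(w)

# Interpolate f(x, y) = x^2 + y from scattered samples in the unit square
rng = np.random.default_rng(5)
pts = rng.random((200, 2))
vals = pts[:, 0] ** 2 + pts[:, 1]
print(shepard(pts, vals, np.array([0.4, 0.7])))   # roughly 0.86
```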
Che, Yonglu; Khavari, Paul A
2017-12-01
Interactions between proteins are essential for fundamental cellular processes, and the diversity of such interactions enables the vast variety of functions essential for life. A persistent goal in biological research is to develop assays that can faithfully capture different types of protein interactions to allow their study. A major step forward in this direction came with a family of methods that delineates spatial proximity of proteins as an indirect measure of protein-protein interaction. A variety of enzyme- and DNA ligation-based methods measure protein co-localization in space, capturing novel interactions that were previously too transient or low affinity to be identified. Here we review some of the methods that have been successfully used to measure spatially proximal protein-protein interactions. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Spatial analysis of binary health indicators with local smoothing techniques The Viadana study.
Girardi, Paolo; Marcon, Alessandro; Rava, Marta; Pironi, Vanda; Ricci, Paolo; de Marco, Roberto
2012-01-01
When pollution data from a monitoring network are not available, mapping the spatial distribution of disease can be useful to identify populations at risk and to suggest a potential role for suspected emission sources. We aimed to obtain a continuous spatial representation of the prevalence of symptoms potentially associated with exposure to the pollutants emitted from the wood factories among the children living in the district of Viadana (Northern Italy). In 2006, all the parents of the children aged 3-14 years residing in the Viadana district (n = 3854) filled in a questionnaire on respiratory symptoms, irritation symptoms of the eyes and skin, and use of health services. The children's residential addresses were also collected and geocoded. Generalized additive models and local weighted regression (LOWESS) were used to estimate the distribution of the symptoms, to test for spatial trends in the symptoms' prevalence and to control for potential confounders. Permutation tests were used to identify the areas of significantly increased risk ("hot spots"). The prevalence of respiratory symptoms, eye symptoms and the use of health services showed statistically significant spatial variation, with higher prevalence toward the south of the district, where the two big chipboard industries were located. Hot spots were identified fairly near to one of the two chipboard industries in the district. The north-to-south trend in the prevalence of respiratory and eye symptoms, but not of skin symptoms, as well as the location of hot spots, are consistent with potential exposure to air pollutants both emitted by the wood factories and related to traffic. In these "high risk areas" monitoring of pollution and preventive actions are clearly needed. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
Kisaka, M. Oscar; Mucheru-Muna, M.; Ngetich, F. K.; Mugwe, J.; Mugendi, D.; Mairura, F.; Shisanya, C.; Makokha, G. L.
2016-04-01
Drier parts of Kenya's Central Highlands endure persistent crop failure and declining agricultural productivity. These have, in part, been attributed to high temperatures, prolonged dry spells and erratic rainfall. Understanding the spatial-temporal variability of climatic indices such as rainfall at the seasonal level is critical for optimal rain-fed agricultural productivity and natural resource management in the study area. However, the predominant setbacks in analysing hydro-meteorological events are occasioned by the lack, inadequacy or inconsistency of meteorological data. As in most other places, the sources of climatic data in the study region are scarce and limited to single stations, yet with persistent missing or unrecorded data, which makes their utilization a challenge. This study examined seasonal anomalies and variability in rainfall, drought occurrence and the efficacy of interpolation techniques in the drier regions of eastern Kenya. Rainfall data from five stations (Machang'a, Kiritiri, Kiambere, Kindaruma and Embu) were sourced from both the Kenya Meteorology Department and on-site primary recording. Owing to ongoing experimental work, automated recording of primary daily data at Machang'a has been under way since 2000; thus, Machang'a was treated as the reference station (for period of record) for the selection of the other stations in the region. The other stations had data sets of over 15 years with less than 10 % missing data, as required by the World Meteorological Organization, with quality checks subject to the Centre for Climate Systems Modeling (C2SM) through the MeteoSwiss and EMPA bodies. The dailies were also subjected to homogeneity testing to evaluate whether they came from the same population. The rainfall anomaly index, coefficients of variation and probability were utilized in the analyses of rainfall variability. Spline, kriging and inverse distance weighting interpolation techniques were assessed using daily rainfall data and
International Nuclear Information System (INIS)
Santana, Priscila do C.; Gomes, Danielle S.; Oliveira, Marcio A.; Oliveira, Paulo Marcio C. de; Meira-Belo, Luiz C.; Nogueira-Tavares, Maria S.
2011-01-01
In this work, an automated methodology to evaluate digital and scanned images of a standard phantom (Phantom Mama) was studied. The Phantom Mama is an important tool for checking the quality of mammography equipment. The films were digitized using a ScanMaker 9800XL with a resolution of 900 dpi. The aim of this work is to test an automatic methodology for evaluating the spatial resolution and microcalcification groups of Phantom Mama images acquired with the same parameters on the same equipment. To analyze the images we used the public-domain ImageJ software (written in Java). We used the Fast Fourier transform technique to evaluate the spatial resolution, and the ImageJ function Subtract Background together with Light Background plus Sliding Paraboloid to evaluate the five groups of microcalcifications on the breast phantom, in order to assess the viability of using automated methods for both types of images. The methodology was adequate for evaluating the microcalcification groups and the spatial resolution in scanned and digital images, but the Phantom Mama does not provide sufficient parameters to evaluate the spatial resolution in these images. (author)
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
Basis set approach in the constrained interpolation profile method
International Nuclear Information System (INIS)
Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.
2003-07-01
We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)
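The core of the CIP idea on a single cell, a cubic profile constrained by the values and the spatial derivatives at both cell ends, reduces to the standard two-point cubic Hermite form. The sketch below illustrates only this cell-wise sub-grid reconstruction, not the full CIP advection scheme or the higher-order basis sets proposed in the paper.

```python
import numpy as np

def cip_cell(f0, f1, g0, g1, dx, s):
    """Cubic profile on one grid cell constrained by the values (f0, f1) and
    derivatives (g0, g1) at its ends; s in [0, 1] is the fractional position."""
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * f0 + dx * h10 * g0 + h01 * f1 + dx * h11 * g1

# Reconstruct sin(x) on the sub-grid scale from values and derivatives at x = 0 and 0.5
dx = 0.5
f0, f1 = np.sin(0.0), np.sin(dx)
g0, g1 = np.cos(0.0), np.cos(dx)
s = np.linspace(0, 1, 11)
approx = cip_cell(f0, f1, g0, g1, dx, s)
print(np.max(np.abs(approx - np.sin(s * dx))))   # maximum sub-grid reconstruction error
```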
Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...
African Journals Online (AJOL)
Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...
Interpolation of diffusion weighted imaging datasets
DEFF Research Database (Denmark)
Dyrby, Tim B; Lundell, Henrik; Burke, Mark W
2014-01-01
Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical
Some splines produced by smooth interpolation
Czech Academy of Sciences Publication Activity Database
Segeth, Karel
2018-01-01
Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub
Image Interpolation with Geometric Contour Stencils
Directory of Open Access Journals (Sweden)
Pascal Getreuer
2011-09-01
Full Text Available We consider the image interpolation problem: given an image with uniformly-sampled pixels v_{m,n} and point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.
Institute of Scientific and Technical Information of China (English)
杨彦海; 侯博; 韦洪峰; 那达慕
2017-01-01
Temperature is one of the important indicators for asphalt pavement climatic zoning, and the accuracy of its interpolation results has a great influence on the zoning. Based on three general interpolation methods and three trend-surface-simulation-plus-residual interpolation methods built on a digital elevation model (DEM), the mean temperatures of the hottest and coldest months over 14 years in Liaoning province were interpolated. The spatial distribution and accuracy of the interpolation results, and their impact on the climatic zoning of asphalt pavement, were analysed and compared in detail. The results showed that the three DEM-based trend-surface-plus-residual interpolation methods could reflect the variation of temperature with elevation and other topographic features, whereas the three general interpolation methods could not, and the accuracy of the former was roughly twice that of the latter. Among the DEM-based trend-surface-plus-residual methods, inverse distance weighting gave the highest accuracy, while among the general interpolation methods the spline gave the lowest. Finally, the higher the interpolation accuracy, the closer the interpolated temperatures are to the actual temperature data, and the more accurate the zoning results.
Diabat Interpolation for Polymorph Free-Energy Differences.
Kamat, Kartik; Peters, Baron
2017-02-02
Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.
Directory of Open Access Journals (Sweden)
H.-T. Chang
2016-06-01
Full Text Available Urban Heat Island (UHI) has become a key factor in the deterioration of the urban ecological environment. Spatial-temporal analysis of the prototype of a basin city's UHI, together with quantitative evaluation of the effect of rapid urbanization, provides a theoretical foundation for relieving the UHI effect. Based on Landsat 8, ETM+ and TM images of the Taipei basin area from 1900 to 2015, this article retrieved the land surface temperature (LST) at the summer solstice of each year, and then analysed the spatial-temporal pattern and evolution characteristics of the UHI in the Taipei basin over this period. The results showed that, as the built-up district expanded, the UHI area constantly spread from the city centre to the suburbs. In addition to higher temperatures in the city centre, the prototype UHI of the Taipei basin also showed relatively high temperatures gathered along the boundaries at the foot of the surrounding mountains, a pattern referred to as a "sinking heat island". From 1900 to 2000, the areas of higher UHI intensity differed clearly with land-use type changes driven by public infrastructure works. In the following 15 years, up to 2015, the building density of the urban area increased gradually, and the UHI tended to intensify with increasing urban land-use density. Hot spots of the UHI in the Taipei basin show the same characteristics. The results suggest that anthropogenic heat release probably plays a significant role in the UHI effect, and must be considered in urban planning adaptation strategies.
Bugaychuk, Svitlana A.; Gnatovskyy, Vladimir O.; Sidorenko, Andrey V.; Pryadko, Igor I.; Negriyko, Anatoliy M.
2015-11-01
A new approach to the correlation technique, based on multiple periodic structures to create a controllable angular spectrum, is proposed and investigated both theoretically and experimentally. The transformation of an initial laser beam occurs due to the action of consecutive phase periodic structures, which may differ in their parameters. After the Fourier transformation of the complex diffraction field, the output diffraction orders are changed both in their intensities and in their spatial positions. The controllable change of the output angular spectrum is achieved by simple control of the parameters of the periodic structures. We investigate several simple examples of such management.
Ambient Ozone Exposure in Czech Forests: A GIS-Based Approach to Spatial Distribution Assessment
Hůnová, I.; Horálek, J.; Schreiberová, M.; Zapletal, M.
2012-01-01
Ambient ozone (O3) is an important phytotoxic pollutant, and detailed knowledge of its spatial distribution is becoming increasingly important. The aim of the paper is to compare different spatial interpolation techniques and to recommend the best approach for producing a reliable map for O3 with respect to its phytotoxic potential. For evaluation we used real-time ambient O3 concentrations measured by UV absorbance from 24 Czech rural sites in the 2007 and 2008 vegetation seasons. We considered eleven approaches for spatial interpolation used for the development of maps for mean vegetation season O3 concentrations and the AOT40F exposure index for forests. The uncertainty of maps was assessed by cross-validation analysis. The root mean square error (RMSE) of the map was used as a criterion. Our results indicate that the optimal interpolation approach is linear regression of O3 data and altitude with subsequent interpolation of its residuals by ordinary kriging. The relative uncertainty of the map of the O3 mean for the vegetation season is less than 10% with the optimal method in both explored years, which is a very acceptable value. In the case of AOT40F, however, the relative uncertainty of the map is notably worse, reaching nearly 20% in both examined years. PMID:22566757
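A minimal sketch of the regression-kriging approach recommended above (linear regression of O3 on altitude followed by ordinary kriging of the residuals), assuming the pykrige package is available; the station coordinates, altitudes and O3 values are synthetic placeholders.

```python
# Regression-kriging sketch: regress O3 on altitude, then krige the residuals.
# Assumes the pykrige package; station data below are illustrative only.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 24), rng.uniform(0, 100, 24)      # station coordinates [km]
alt = rng.uniform(200, 1100, 24)                              # station altitude [m]
o3 = 60.0 + 0.02 * alt + rng.normal(0, 3, 24)                 # mean vegetation-season O3

# 1) Linear regression of O3 on altitude.
a, b = np.polyfit(alt, o3, 1)
residuals = o3 - (a * alt + b)

# 2) Ordinary kriging of the regression residuals.
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
xi, yi = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
res_grid, _ = ok.execute("points", xi.ravel(), yi.ravel())

# 3) Final map = regression trend (would need a real altitude grid) + kriged residuals.
alt_grid = np.full(xi.size, 600.0)                            # placeholder DEM values
o3_map = a * alt_grid + b + res_grid
```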
A study of interpolation method in diagnosis of carpal tunnel syndrome
Directory of Open Access Journals (Sweden)
Alireza Ashraf
2013-01-01
Full Text Available Context: The low correlation between the patients' signs and symptoms of carpal tunnel syndrome (CTS) and results of electrodiagnostic tests makes the diagnosis challenging in mild cases. Interpolation is a mathematical method for finding median nerve conduction velocity (NCV) exactly at the carpal tunnel site. Therefore, it may be helpful in diagnosis of CTS in patients with equivocal test results. Aim: The aim of this study is to evaluate the interpolation method as a CTS diagnostic test. Settings and Design: Patients with two or more clinical symptoms and signs of CTS in a median nerve territory with 3.5 ms ≤ distal median sensory latency <4.6 ms from those who came to our electrodiagnostic clinics and also, age matched healthy control subjects were recruited in the study. Materials and Methods: Median compound motor action potential and median sensory nerve action potential latencies were measured by a MEDLEC SYNERGY VIASIS electromyography and conduction velocities were calculated by both routine method and interpolation technique. Statistical Analysis Used: Chi-square and Student's t-test were used for comparing group differences. Cut-off points were calculated using receiver operating characteristic curve. Results: A sensitivity of 88%, specificity of 67%, positive predictive value (PPV) and negative predictive value (NPV) of 70.8% and 84.7% were obtained for median motor NCV and a sensitivity of 98.3%, specificity of 91.7%, PPV and NPV of 91.9% and 98.2% were obtained for median sensory NCV with interpolation technique. Conclusions: Median motor interpolation method is a good technique, but it has less sensitivity and specificity than median sensory interpolation method.
Image Interpolation Scheme based on SVM and Improved PSO
Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.
2018-01-01
In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.
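A hedged illustration of the general idea rather than the authors' exact scheme: a support vector regressor is trained on neighbourhoods of known pixels and used to estimate an unknown pixel; fixed SVR parameters stand in here for the PSO-optimised ones, and the image is synthetic.

```python
# Sketch of SVM-based pixel interpolation: learn a mapping from the four diagonal
# neighbours of a pixel to the pixel itself, then predict a missing pixel.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
img = rng.integers(0, 256, (64, 64)).astype(float)   # toy low-resolution image

X, y = [], []
for i in range(1, 63):
    for j in range(1, 63):
        X.append([img[i-1, j-1], img[i-1, j+1], img[i+1, j-1], img[i+1, j+1]])
        y.append(img[i, j])

# Fixed hyperparameters stand in for the PSO-optimised values of the paper.
model = SVR(kernel="rbf", C=10.0, gamma=0.01).fit(np.array(X), np.array(y))

# Estimate one "unknown" pixel from its diagonal neighbours.
sample = np.array([[img[10, 10], img[10, 12], img[12, 10], img[12, 12]]])
print(model.predict(sample))
```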
Interpolation functions and the Lions-Peetre interpolation construction
International Nuclear Information System (INIS)
Ovchinnikov, V I
2014-01-01
The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generalization is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X_0,X_1)_{p_0,p_1} considered here have three parameters: two positive numerical parameters p_0 and p_1 of equal standing, and a function parameter φ. For p_0 ≠ p_1 these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X_0,X_1)_{p_0,p_1}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces L_p and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles
Huang, Ai-Mei; Nguyen, Truong
2009-04-01
In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
3D Interpolation Method for CT Images of the Lung
Directory of Open Access Journals (Sweden)
Noriaki Asada
2003-06-01
Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung exhibits a repeating transformation synchronized to the beating of the heart as an elastic body. There are discontinuities among neighboring CT images due to the beating of the heart, if no special techniques are used in taking CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are taken. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of the 3-D unnatural heart is fit to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best fitting standard heart are located at the same positions of the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting best to the standard heart. Since correct transformation of images is required, an Area oriented interpolation method proposed by us is used for interpolation of transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, the same geometrical transformation method to the original projection images is proposed as a more advanced method.
Interpolation methods for creating a scatter radiation exposure map
Energy Technology Data Exchange (ETDEWEB)
Gonçalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil); Gomes, Celio S.; Lopes, Ricardo T. [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F. [Universidade do Estado do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto de Física
2017-07-01
A well-known way to better understand radiation scattering during a radiographic exposure is to map the exposure over the space around the source and sample. This map is built by measuring exposure at regularly spaced points, that is, measurement locations are chosen by stepping at regular intervals from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is impaired. This work intended to use interpolation techniques that work with irregular steps, and to compare their results and their limits. Interpolation was first done in angular coordinates and tested with some points missing. Later, Delaunay tessellation interpolation was performed on the same data for comparison. Computational and graphic treatments were done with the GNU OCTAVE software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
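A small sketch of interpolating irregularly spaced exposure measurements onto a regular grid; scipy's "linear" griddata method interpolates over a Delaunay tessellation of the points, as in the comparison above. The measurement positions and dose values are synthetic.

```python
# Map scattered exposure measurements onto a regular grid using Delaunay-based
# linear interpolation (scipy griddata, method="linear").
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, (120, 2))                  # irregular measurement positions [m]
dose = 100.0 / (1.0 + np.sum((pts - 5.0) ** 2, 1))  # synthetic exposure values

gx, gy = np.mgrid[0:10:100j, 0:10:100j]
dose_map = griddata(pts, dose, (gx, gy), method="linear")   # Delaunay tessellation
```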
Energy Technology Data Exchange (ETDEWEB)
Davenport, C. M.
1977-02-01
The mathematical basis for an ultraprecise digital differential analyzer circuit for use as a parabolic interpolator on numerically controlled machines has been established, and scaling and other error-reduction techniques have been developed. An exact computer model is included, along with typical results showing tracking to within an accuracy of one part per million.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific
Turning Avatar into Realistic Human Expression Using Linear and Bilinear Interpolations
Hazim Alkawaz, Mohammed; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul
2014-06-01
Facial animation in terms of 3D facial data has solid research support from laser scanning and advanced 3D tools for producing complex facial models. However, the approach still lacks facial expressions driven by emotional state. Facial skin colour is needed to enhance the effect of a facial expression, as it is closely related to human emotion. This paper presents innovative techniques for facial animation transformation using facial skin colour based on linear interpolation and bilinear interpolation. The generated expressions are almost the same as genuine human expressions and also enhance the facial expression of the virtual human.
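A minimal sketch of the two interpolation rules named above applied to RGB skin tones; the colour values and blending weights are purely illustrative.

```python
# Linear and bilinear interpolation of RGB skin colours, e.g. blending between
# a neutral tone and a flushed tone as an expression parameter changes.
import numpy as np

def lerp(c0, c1, t):
    """Linear interpolation between two RGB colours, t in [0, 1]."""
    return (1.0 - t) * np.asarray(c0, float) + t * np.asarray(c1, float)

def bilerp(c00, c10, c01, c11, u, v):
    """Bilinear interpolation of four corner colours with weights u, v in [0, 1]."""
    return lerp(lerp(c00, c10, u), lerp(c01, c11, u), v)

neutral, flushed = (224, 172, 105), (240, 120, 100)      # illustrative skin tones
print(lerp(neutral, flushed, 0.5))                        # halfway expression tone
print(bilerp(neutral, flushed, neutral, flushed, 0.5, 0.25))
```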
Workload Balancing on Heterogeneous Systems: A Case Study of Sparse Grid Interpolation
Muraraşu, Alin
2012-01-01
Multi-core parallelism and accelerators are becoming common features of today’s computer systems, as they allow for computational power without sacrificing energy efficiency. Due to heterogeneity, tuning for each type of compute unit and adequate load balancing is essential. This paper proposes static and dynamic solutions for load balancing in the context of an application for visualizing high-dimensional simulation data. The application relies on the sparse grid technique for data compression. Its performance critical part is the interpolation routine used for decompression. Results show that our load balancing scheme allows for an efficient acceleration of interpolation on heterogeneous systems containing multi-core CPUs and GPUs.
International Nuclear Information System (INIS)
Shibasaki, Toshiro; Seno Masafumi; Takoi, Kunihiro; Sato, Hirofumi; Hino, Tsuyoshi
2003-01-01
In this study of 3D contrast-enhanced MR angiography of the renal artery using the array spatial sensitivity encoding technique (ASSET), the acquisition time per phase was considerably shortened. Using the spectral inversion at lipids (SPECIAL) technique together with ASSET, image quality was improved by enhancing the contrast. The timing of acquisition was determined by a test injection: MR angiography acquisition was started 2 seconds after the maximum enhancement of the test injection reached the upper abdominal aorta near the renal artery. As a result, parenchymal enhancement was not visible and depiction of the segmental artery was possible in 14 (82%) of 17 patients. At present we consider it better not to use fractional number of excitations (NEX) together with ASSET, as it may cause various artifacts. (author)
The Canadian Precipitation Analysis (CaPA): Evaluation of the statistical interpolation scheme
Evans, Andrea; Rasmussen, Peter; Fortin, Vincent
2013-04-01
CaPA (Canadian Precipitation Analysis) is a data assimilation system which employs statistical interpolation to combine observed precipitation with gridded precipitation fields produced by Environment Canada's Global Environmental Multiscale (GEM) climate model into a final gridded precipitation analysis. Precipitation is important in many fields and applications, including agricultural water management projects, flood control programs, and hydroelectric power generation planning. Precipitation is a key input to hydrological models, and there is a desire to have access to the best available information about precipitation in time and space. The principal goal of CaPA is to produce this type of information. In order to perform the necessary statistical interpolation, CaPA requires the estimation of a semi-variogram. This semi-variogram is used to describe the spatial correlations between precipitation innovations, defined as the observed precipitation amounts minus the GEM forecasted amounts predicted at the observation locations. Currently, CaPA uses a single isotropic variogram across the entire analysis domain. The present project investigates the implications of this choice by first conducting a basic variographic analysis of precipitation innovation data across the Canadian prairies, with specific interest in identifying and quantifying potential anisotropy within the domain. This focus is further expanded by identifying the effect of storm type on the variogram. The ultimate goal of the variographic analysis is to develop improved semi-variograms for CaPA that better capture the spatial complexities of precipitation over the Canadian prairies. CaPA presently applies a Box-Cox data transformation to both the observations and the GEM data, prior to the calculation of the innovations. The data transformation is necessary to satisfy the normal distribution assumption, but introduces a significant bias. The second part of the investigation aims at devising a bias
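A compact sketch of the variographic analysis step described above: an isotropic empirical semi-variogram of precipitation innovations computed from synthetic station data (the real CaPA variogram fitting and Box-Cox handling are not shown).

```python
# Empirical isotropic semi-variogram of precipitation innovations
# (observed minus GEM-forecast amounts); station data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
xy = rng.uniform(0, 500, (200, 2))            # station coordinates [km]
innov = rng.normal(0, 2, 200)                 # precipitation innovations [mm]

d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)   # pairwise distances
sq = 0.5 * (innov[:, None] - innov[None, :]) ** 2              # semi-variances

bins = np.arange(0, 300, 25)
gamma = []
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (d > lo) & (d <= hi)
    gamma.append(sq[mask].mean())             # mean semi-variance per lag bin

print(list(zip(bins[1:], np.round(gamma, 3))))
```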
TU-CD-304-08: Feasibility of a VMAT-Based Spatially Fractionated Grid Therapy Technique
Energy Technology Data Exchange (ETDEWEB)
Zhao, B; Liu, M; Huang, Y; Kim, J; Brown, S; Siddiqui, F; Chetty, I; Wen, N [Henry Ford Health System, Detroit, MI (United States); Jin, J [Georgia Regents University, Augusta, GA (Georgia)
2015-06-15
Purpose: Grid therapy (GT) uses spatially modulated radiation doses to treat large tumors without significant toxicities. Incorporating 3D conformal-RT or IMRT improved single-field GT by reducing dose to normal tissues spatially through the use of multiple fields. The feasibility of a MLC-based, inverse-planned multi-field GT technique has been demonstrated. Volumetric modulated arc therapy (VMAT) provides conformal dose distributions with the additional potential advantage of reduced treatment times. In this study, we characterize a new VMAT-based GT (VMAT-GT) technique with respect to its deliverability and dosimetric accuracy. Methods: A lattice of 5mm-diameter spheres was created as the boost volume within a large treatment target. A simultaneous boost VMAT (RapidArc) plan with 8Gy to the target and 20Gy to the boost volume was generated using the Eclipse treatment planning system (AAA-v11). The linac utilized HD120 MLC and 6MV flattening-filter free beam. Four non-coplanar arcs, with couch angles at 0, 45, 90 and 317° were used. Collimator angles were at 45 and 315°. The plan was mapped to a phantom. Calibrated Gafchromic EBT3 films were used to measure the delivered dose. Results: The VMAT plan generated a highly spatially modulated dose distribution in the target. D95%, D50%, D5% for the spheres and the targets in Gy were 18.9, 20.6, 23 and 8.0, 9.6, 14.8, respectively. D50% for a 1cm ring 1cm outside the target was 3.0Gy. The peak-to-valley ratio of this technique is comparable to previously proposed techniques, but the MUs were reduced by almost 50%. Film dosimetry showed good agreement between calculated and delivered dose, with an overall gamma passing rate of >98% (3% and 1mm). The point dose differences at sphere centers varied from 2–8%. Conclusion: The deliverability and dose calculation accuracy of the proposed VMAT-GT technique demonstrates that ablative radiation doses are deliverable to large tumors safely and efficiently.
Directory of Open Access Journals (Sweden)
M. Soleimanpour-moghadam
2013-06-01
Full Text Available This paper devotes itself to the study of secret message delivery using a cover image and introduces a novel steganographic technique based on a genetic algorithm to find a near-optimum structure for the pair-wise least-significant-bit (LSB) matching scheme. A survey of the related literature shows that the LSB matching method developed by Mielikainen employs a binary function to reduce the number of changes of LSB values. This method verifiably reduces the probability of detection and also improves the visual quality of stego images. Our proposal therefore draws on Mielikainen's technique to present an enhanced dual-state scoring model, structured upon a genetic algorithm, which assesses the performance of different orders for LSB matching and searches for a near-optimum solution among all permutation orders. Experimental results confirm the superiority of the new approach compared to Mielikainen's pair-wise LSB matching scheme.
Nugent, Allison C; Luber, Bruce; Carver, Frederick W; Robinson, Stephen E; Coppola, Richard; Zarate, Carlos A
2017-02-01
Recently, independent components analysis (ICA) of resting state magnetoencephalography (MEG) recordings has revealed resting state networks (RSNs) that exhibit fluctuations of band-limited power envelopes. Most of the work in this area has concentrated on networks derived from the power envelope of beta bandpass-filtered data. Although research has demonstrated that most networks show maximal correlation in the beta band, little is known about how spatial patterns of correlations may differ across frequencies. This study analyzed MEG data from 18 healthy subjects to determine if the spatial patterns of RSNs differed between delta, theta, alpha, beta, gamma, and high gamma frequency bands. To validate our method, we focused on the sensorimotor network, which is well-characterized and robust in both MEG and functional magnetic resonance imaging (fMRI) resting state data. Synthetic aperture magnetometry (SAM) was used to project signals into anatomical source space separately in each band before a group temporal ICA was performed over all subjects and bands. This method preserved the inherent correlation structure of the data and reflected connectivity derived from single-band ICA, but also allowed identification of spatial spectral modes that are consistent across subjects. The implications of these results on our understanding of sensorimotor function are discussed, as are the potential applications of this technique. Hum Brain Mapp 38:779-791, 2017. © 2016 Wiley Periodicals, Inc. Published 2016. This article is a U.S. Government work and is in the public domain in the USA.
Hot Routes: Developing a New Technique for the Spatial Analysis of Crime
Tompson, L.; Partridge, H.; Shepherd, N.
2009-01-01
The use of hotspot mapping techniques such as KDE to represent the geographical spread of linear events can be problematic. Network-constrained data (for example transport-related crime) require a different approach to visualize concentration. We propose a methodology called Hot Routes, which measures the risk distribution of crime along a linear network by calculating the rate of crimes per section of road. This method has been designed for everyday crime analysts, and requires only a Geogra...
Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan
2013-01-01
Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze the bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of an appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study seems to be clinically useful.
Spatial Analysis for Potential Water Catchment Areas using GIS: Weighted Overlay Technique
Awanda, Disyacitta; Anugrah Nurul, H.; Musfiroh, Zahrotul; Dinda Dwi, N. P.
2017-12-01
The development of applied GIS is growing rapidly and GIS has been widely applied in various fields. Building a model to obtain information is one of the benefits of GIS, and obtaining information on water resources, such as water catchment areas, is one part of GIS modelling. A water catchment model can be used to map the distribution of potential and the ability of a region to absorb water. The model is built using overlay techniques with weights obtained from previous research in the literature. The model-builder parameters are obtained through remote sensing interpretation, such as land use, landforms, and soil texture. Secondary data such as rock type maps are also used as water catchment model parameters. The study area is the upstream part of the Opak river basin. The purpose of this research is to obtain information about the potential distribution of water catchment areas with the weighted overlay technique. The results of this study classify the potential of water catchment areas into excellent, good, medium, poor and very poor categories. These results indicate whether the upper river basin is in good or bad condition, and can therefore be used to inform better water resources management policy.
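A minimal sketch of a weighted-overlay computation of catchment potential, assuming the input layers have already been reclassified to common suitability scores; the layer values, weights and class breaks are illustrative, not those of the study.

```python
# Weighted-overlay suitability model: each reclassified raster layer gets a
# literature-derived weight and the weighted sum is binned into potential classes.
import numpy as np

rng = np.random.default_rng(4)
shape = (100, 100)
layers = {                                    # scores already reclassified to 1..5
    "land_use":  rng.integers(1, 6, shape),
    "landform":  rng.integers(1, 6, shape),
    "soil_tex":  rng.integers(1, 6, shape),
    "rock_type": rng.integers(1, 6, shape),
}
weights = {"land_use": 0.30, "landform": 0.25, "soil_tex": 0.25, "rock_type": 0.20}

score = sum(weights[k] * layers[k].astype(float) for k in layers)
classes = np.digitize(score, [1.8, 2.6, 3.4, 4.2])   # 0..4 = very poor .. excellent
```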
Seismic Experiment at North Arizona To Locate Washington Fault - 3D Data Interpolation
Hanafy, Sherif M.
2008-10-01
The recorded data are interpolated using the sinc technique to create the following two data sets. 1. Data Set #1: interpolation only in the receiver direction, to regularize the receiver interval to 1 m; the source locations are the same as in the original data (2 and 4 m source intervals). The data now contain 6 lines, each line has 121 receivers, and there is a total of 240 shot gathers. 2. Data Set #2: starting from the result of the previous step, interpolation only in the shot direction, to regularize the shot interval to 1 m. Both shots and receivers now have a 1 m interval. The data contain 6 lines, each line has 121 receivers, and there is a total of 726 shot gathers.
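A small sketch of band-limited (sinc) interpolation used to regularise receiver spacing from 2 m to 1 m for a single time sample; the trace values are synthetic and the 2-D details of the survey geometry are not reproduced.

```python
# Whittaker-Shannon (sinc) interpolation of one time sample recorded on
# receivers spaced every 2 m, resampled onto a 1 m receiver grid.
import numpy as np

def sinc_interp(values, x_known, x_new, dx):
    """Band-limited interpolation of samples on a uniform grid with spacing dx."""
    return np.array([np.sum(values * np.sinc((xn - x_known) / dx)) for xn in x_new])

x_rec = np.arange(0, 121, 2.0)                    # original receiver positions [m]
amp = np.sin(2 * np.pi * x_rec / 30.0)            # one time sample across receivers
x_new = np.arange(0, 121, 1.0)                    # target 1 m receiver grid
amp_1m = sinc_interp(amp, x_rec, x_new, dx=2.0)
```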
Gong, Gordon; Mattevada, Sravan; O'Bryant, Sid E
2014-04-01
Exposure to arsenic causes many diseases. Most Americans in rural areas use groundwater for drinking, which may contain arsenic above the currently allowable level of 10 µg/L. It is cost-effective to estimate groundwater arsenic levels based on data from wells with known arsenic concentrations. We compared the accuracy of several commonly used interpolation methods in estimating arsenic concentrations in >8000 wells in Texas by the leave-one-out cross-validation technique. The correlation coefficient between measured and estimated arsenic levels was greater with inverse distance weighted (IDW) interpolation than with Gaussian kriging, spherical kriging or cokriging when analyzing data from wells across the entire state of Texas. Estimation of the groundwater arsenic level depends on both the interpolation method and the wells' geographic distributions and characteristics in Texas. Taking well depth and elevation into the regression analysis as covariates significantly increases the accuracy of estimating groundwater arsenic level in Texas, with IDW in particular. Published by Elsevier Inc.
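A minimal sketch of inverse distance weighted estimation evaluated by leave-one-out cross-validation, the comparison strategy described above; the well coordinates and arsenic values are synthetic.

```python
# IDW estimation with leave-one-out cross-validation: estimate each well from all
# others and correlate estimates with measurements.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=-1)
    d = np.maximum(d, 1e-12)                     # avoid division by zero
    w = 1.0 / d ** power
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(5)
xy = rng.uniform(0, 100, (300, 2))               # well locations [km]
arsenic = rng.lognormal(1.5, 0.6, 300)           # arsenic concentrations [ug/L]

# Leave-one-out: estimate each well from the remaining wells, then correlate.
est = np.array([idw(np.delete(xy, i, 0), np.delete(arsenic, i), xy[i:i + 1])[0]
                for i in range(len(xy))])
print(np.corrcoef(arsenic, est)[0, 1])
```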
Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.
2013-09-01
The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc.. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of and especially searching these 3D city models, will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. The advantages of implementing space-filling curves in 3D city modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
Distribution of petrophysical properties for sandy-clayey reservoirs by fractal interpolation
Directory of Open Access Journals (Sweden)
M. Lozada-Zumaeta
2012-04-01
Full Text Available The sandy-clayey hydrocarbon reservoirs of the Upper Paleocene and Lower Eocene located to the north of Veracruz State, Mexico, present highly complex geological and petrophysical characteristics. These reservoirs, which consist of sandstone and shale bodies within a depth interval ranging from 500 to 2000 m, were characterized statistically by means of fractal modeling and geostatistical tools. For 14 wells within a study area of approximately 6 km², various geophysical well logs were initially edited and further analyzed to establish a correlation between logs and core data. Fractal modeling based on the R/S (rescaled range) methodology and the interpolation method of successive random additions were used to generate pseudo-well logs between observed wells. The application of geostatistical tools, sequential Gaussian simulation and exponential model variograms contributed to estimating the spatial distribution of petrophysical properties such as effective porosity (PHIE), permeability (K) and shale volume (VSH). From the analysis and correlation of the information generated in the present study, it can be said, from a general point of view, that the results not only correlate with previously reported information but also provide significant characterization elements that would be hard to obtain by means of conventional techniques.
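A compact sketch of the rescaled range (R/S) analysis used in the fractal modeling step: the Hurst exponent of a log-like series is estimated from the slope of log(R/S) versus log(window length). The input series here is synthetic white noise, and the successive-random-additions step is not shown.

```python
# Rescaled range (R/S) estimate of the Hurst exponent of a 1-D series.
import numpy as np

def hurst_rs(series, window_sizes=(8, 16, 32, 64, 128)):
    series = np.asarray(series, float)
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            dev = np.cumsum(w - w.mean())
            r = dev.max() - dev.min()            # range of cumulative deviations
            s = w.std()
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_n, log_rs, 1)[0]       # slope = Hurst exponent

# White noise should give an exponent near 0.5.
print(hurst_rs(np.random.default_rng(6).normal(size=2048)))
```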
International Nuclear Information System (INIS)
Hopkins, Brittney C.; Hepner, Mark J.; Hopkins, William A.
2013-01-01
Mercury (Hg) is a globally ubiquitous pollutant that has received much attention due to its toxicity to humans and wildlife. The development of non-destructive sampling techniques is a critical step for sustainable monitoring of Hg accumulation. We evaluated the efficacy of non-destructive sampling techniques and assessed spatial, temporal, and demographic factors that influence Hg bioaccumulation in turtles. We collected muscle, blood, nail, and eggs from snapping turtles (Chelydra serpentina) inhabiting an Hg contaminated river. As predicted, all Hg tissue concentrations strongly and positively correlated with each other. Additionally, we validated our mathematical models against two additional Hg contaminated locations and found that tissue relationships developed from the validation sites did not significantly differ from those generated from the original sampling site. The models provided herein will be useful for a wide array of systems where biomonitoring of Hg in turtles needs to be accomplished in a conservation-minded fashion. -- Highlights: ► Non-lethal sampling is critical for sustainable monitoring of mercury in wildlife. ► We evaluated the efficacy of non-lethal sampling techniques in turtles. ► We created mathematical models between egg, muscle, blood, and nail tissues. ► Mathematical tissue models were applicable to other mercury contaminated areas. ► Non-lethal techniques will be useful for monitoring contamination in other systems. -- We developed and validated mathematical models that will be useful for biomonitoring Hg accumulation in turtles in a conservation-minded fashion
Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V
2018-04-01
Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
Gelatin-based laser direct-write technique for the precise spatial patterning of cells.
Schiele, Nathan R; Chrisey, Douglas B; Corr, David T
2011-03-01
Laser direct-writing provides a method to pattern living cells in vitro, to study various cell-cell interactions, and to build cellular constructs. However, the materials typically used may limit its long-term application. By utilizing gelatin coatings on the print ribbon and growth surface, we developed a new approach for laser cell printing that overcomes the limitations of Matrigel™. Gelatin is free of growth factors and extraneous matrix components that may interfere with cellular processes under investigation. Gelatin-based laser direct-write was able to successfully pattern human dermal fibroblasts with high post-transfer viability (91% ± 3%) and no observed double-strand DNA damage. As seen with atomic force microscopy, gelatin offers a unique benefit in that it is present temporarily to allow cell transfer, but melts and is removed with incubation to reveal the desired application-specific growth surface. This provides unobstructed cellular growth after printing. Monitoring cell location after transfer, we show that melting and removal of gelatin does not affect cellular placement; cells maintained registry within 5.6 ± 2.5 μm to the initial pattern. This study demonstrates the effectiveness of gelatin in laser direct-writing to create spatially precise cell patterns with the potential for applications in tissue engineering, stem cell, and cancer research.
Parris, B. A.; Egbert, G. D.; Key, K.; Livelybrooks, D.
2016-12-01
Magnetotellurics (MT) is an electromagnetic technique used to model the inner Earth's electrical conductivity structure. MT data can be analyzed using iterative, linearized inversion techniques to generate models imaging, in particular, conductive partial melts and aqueous fluids that play critical roles in subduction zone processes and volcanism. For example, the Magnetotelluric Observations of Cascadia using a Huge Array (MOCHA) experiment provides amphibious data useful for imaging subducted fluids from trench to mantle wedge corner. When using MOD3DEM (Egbert et al. 2012), a finite difference inversion package, we have encountered problems inverting, in particular, sea floor stations due to the strong, nearby conductivity gradients. As a work-around, we have found that denser, finer model grids near the land-sea interface produce better inversions, as characterized by reduced data residuals. This is partly due to our ability to more accurately capture topography and bathymetry. We are experimenting with improved interpolation schemes that more accurately track EM fields across cell boundaries, with an eye to enhancing the accuracy of the simulated responses and, thus, inversion results. We are adapting how MOD3DEM interpolates EM fields in two ways. The first seeks to improve weighting functions for interpolants to better address current continuity across grid boundaries. Electric fields are interpolated using a tri-linear spline technique, where the eight nearest electric field estimates are each given weights determined by the technique, a kind of weighted average. We are modifying these weights to include cross-boundary conductivity ratios to better model current continuity. We are also adapting some of the techniques discussed in Shantsev et al (2014) to enhance the accuracy of the interpolated fields calculated by our forward solver, as well as to better approximate the sensitivities passed to the software's Jacobian that are used to generate a new
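A minimal sketch of plain trilinear interpolation of a field value from the eight surrounding grid-node estimates, before any conductivity-ratio weighting of the kind proposed above; the node values and query point are illustrative.

```python
# Trilinear interpolation: the value inside a grid cell is the weighted average of
# the eight corner node values, with weights set by the fractional position (u, v, w).
import numpy as np

def trilinear(corner_values, u, v, w):
    """corner_values[i, j, k] is the node at (x=i, y=j, z=k); u, v, w in [0, 1]."""
    c = np.asarray(corner_values, float)
    c00 = c[0, 0, 0] * (1 - u) + c[1, 0, 0] * u
    c01 = c[0, 0, 1] * (1 - u) + c[1, 0, 1] * u
    c10 = c[0, 1, 0] * (1 - u) + c[1, 1, 0] * u
    c11 = c[0, 1, 1] * (1 - u) + c[1, 1, 1] * u
    c0 = c00 * (1 - v) + c10 * v
    c1 = c01 * (1 - v) + c11 * v
    return c0 * (1 - w) + c1 * w

cell = np.arange(8.0).reshape(2, 2, 2)            # eight nodal field estimates
print(trilinear(cell, 0.25, 0.5, 0.75))
```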
Directory of Open Access Journals (Sweden)
Martin Claverie
2015-09-01
Full Text Available High-resolution sensor Surface Reflectance (SR) data are affected by surface anisotropy but are difficult to adjust because of the low temporal frequency of the acquisitions and the low angular sampling. This paper evaluates five high spatial resolution Bidirectional Reflectance Distribution Function (BRDF) adjustment techniques. The evaluation is based on the noise level of the SR Time Series (TS) corrected to a normalized geometry (nadir view, 45° sun zenith angle) extracted from the multi-angular acquisitions of SPOT4 over three study areas (one in Arizona, two in France) during the five-month SPOT4 (Take5) experiment. Two uniform techniques (Cst, for Constant, and Av, for Average), relying on the Vermote–Justice–Bréon (VJB) BRDF method, assume no variation in space of the BRDF shape. Two methods (VI-dis, for NDVI-based disaggregation, and LC-dis, for Land-Cover based disaggregation) are based on disaggregation of the MODIS-derived BRDF VJB parameters using vegetation index and land cover, respectively. The last technique (LUM, for Look-Up Map) relies on the MCD43 MODIS BRDF products and a crop type data layer. The VI-dis technique produced the lowest level of noise corresponding to the most effective adjustment: reduction from directional to normalized SR TS noises by 40% and 50% on average, for red and near-infrared bands, respectively. The uniform techniques displayed very good results, suggesting that a simple and uniform BRDF-shape assumption is good enough to adjust the BRDF in such geometric configuration (the view zenith angle varies from nadir to 25°). The most complex techniques relying on land cover (LC-dis and LUM) displayed contrasting results depending on the land cover.
Generation of nuclear data banks through interpolation
International Nuclear Information System (INIS)
Castillo M, J.A.
1999-01-01
Nuclear Data Bank generation is a process that requires a great amount of resources, both computational and human. Considering that at times it is necessary to create a great number of such banks, it is convenient to have a reliable tool that generates Data Banks with fewer resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two proposals were explored, applying in both cases the finite element method and using one element with 16 nodes to carry out the interpolation. In the first proposal the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation system; this system was solved by Gaussian elimination with partial pivoting. In the second case, the Newton basis was used to obtain the system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas for the same purpose) and Data Banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation that, even though it does not wholly replace the conventional process, is helpful in cases where it is necessary to create a great number of Data Banks. (Author)
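A hedged sketch of bicubic interpolation of a nuclear-data quantity tabulated on a grid of uranium and gadolinium percentages, using a scipy bicubic spline as a stand-in for the 16-node finite-element interpolation described above; the tabulated values are synthetic.

```python
# Bicubic interpolation over a (uranium %, gadolinium %) grid of a toy cross-section
# table; RectBivariateSpline with kx=ky=3 is the bicubic stand-in used here.
import numpy as np
from scipy.interpolate import RectBivariateSpline

u_pct = np.array([1.5, 2.0, 2.5, 3.0, 3.5, 4.0])        # uranium enrichment [%]
gd_pct = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # gadolinium content [%]
sigma = 1.0 + 0.1 * u_pct[:, None] - 0.03 * gd_pct[None, :]   # toy tabulated values

spline = RectBivariateSpline(u_pct, gd_pct, sigma, kx=3, ky=3)  # bicubic
print(spline(3.2, 5.0))                                   # interpolated value
```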
Calculation of reactivity without Lagrange interpolation
International Nuclear Information System (INIS)
Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.
2015-09-01
A new method is formulated to numerically solve the inverse equation of point kinetics without using a Lagrange interpolating polynomial; the method uses an N-point polynomial approximation based on a recurrence process to simulate different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter working in real time. (Author)
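A small sketch of the underlying inverse point-kinetics ("reactivity meter") calculation, not the recurrence-based method of the paper: reactivity is recovered from a power history by stepping the delayed-neutron precursor equations. The kinetics parameters and the power ramp are illustrative.

```python
# Inverse point kinetics: rho = beta + Lambda*(dn/dt)/n - Lambda*sum(lambda_i*C_i)/n,
# with precursors C_i advanced step by step from the measured power history n(t).
import numpy as np

beta_i = np.array([2.66e-4, 1.491e-3, 1.316e-3, 2.849e-3, 8.96e-4, 1.82e-4])  # illustrative
lam_i = np.array([0.0127, 0.0317, 0.115, 0.311, 1.40, 3.87])    # decay constants [1/s]
beta, Lam = beta_i.sum(), 2.0e-5                                 # total beta, Lambda [s]

dt = 0.1
t = np.arange(0.0, 60.0, dt)
n = 1.0 + 0.01 * t                                               # measured power (ramp)

C = beta_i * n[0] / (Lam * lam_i)                                # equilibrium precursors
rho = np.zeros_like(t)
for k in range(1, len(t)):
    # exact exponential precursor update assuming n is constant over the step
    C = C * np.exp(-lam_i * dt) + beta_i * n[k] / (Lam * lam_i) * (1 - np.exp(-lam_i * dt))
    dndt = (n[k] - n[k - 1]) / dt
    rho[k] = beta + Lam * dndt / n[k] - Lam * np.sum(lam_i * C) / n[k]

print(rho[-1] / beta)                                            # reactivity in dollars
```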
Solving the Schroedinger equation using Smolyak interpolants
International Nuclear Information System (INIS)
Avila, Gustavo; Carrington, Tucker Jr.
2013-01-01
In this paper, we present a new collocation method for solving the Schroedinger equation. Collocation has the advantage that it obviates integrals. All previous collocation methods have, however, the crucial disadvantage that they require solving a generalized eigenvalue problem. By combining Lagrange-like functions with a Smolyak interpolant, we devise a collocation method that does not require solving a generalized eigenvalue problem. We exploit the structure of the grid to develop an efficient algorithm for evaluating the matrix-vector products required to compute energy levels and wavefunctions. Energies systematically converge as the number of points and basis functions are increased.
Topics in multivariate approximation and interpolation
Jetter, Kurt
2005-01-01
This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr
Differential maps, difference maps, interpolated maps, and long term prediction
International Nuclear Information System (INIS)
Talman, R.
1988-06-01
Mapping techniques may be thought to be attractive for the long term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solveable, example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of ''appreciable'' nonlinearity, it is shown to be impractical to achieve ''accurate'' prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the ''leading'' deviation from linearity. 41 refs., 6 figs
Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series
Directory of Open Access Journals (Sweden)
Kanadpriya Basu
2015-08-01
Full Text Available This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: first, mathematical interpolation methods, and second, statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical interpolation techniques and statistical curve fitting techniques complement each other and can add value to the study of one-dimensional time series seismographic data: they can be used to add more data to the system in case the data set is not large enough to perform standard statistical tests.
Directory of Open Access Journals (Sweden)
F. Mahmoodi
2016-02-01
Full Text Available Introduction: Use of remote sensing for soil assessment and monitoring started with the launch of the first Landsat satellite. Since then, many other polar-orbiting Earth-observation satellites, such as the Landsat series, have been launched and their imagery has been used for a wide range of soil mapping. The broad swaths and regular revisit frequencies of these multispectral satellites mean that they can be used to rapidly detect changes in soil properties. Arid and semi-arid lands cover more than 70 percent of Iran and are very prone to desertification. Due to the broadness, remoteness, and harsh conditions of these lands, soil studies using ground-based techniques appear to be limited. Remote sensing imagery, with its cost- and time-effectiveness, has been suggested and used as an alternative approach for more than four decades. Flood irrigation is one of the most common techniques in Isfahan province, in which 70% of water is lost through evaporation. This system has increased soil salinization and desert-like conditions in the region. For principled decision making on agricultural product management, combating desertification and its consequences, and better use of production resources to achieve sustainable development, knowledge of the origin, amount and area of salinity and of the percentage of calcite, gypsum and other soil minerals in each region is essential. Therefore, this study aimed to map the physical and chemical characteristics of soils in the Varzaneh region of Isfahan province, Iran. Materials and Methods: The Varzaneh region covers 75000 ha in central Iran and lies between latitudes 3550234 N and 3594309 N and longitudes 626530 E to 658338 E. The climate in the study area is characterized by hot summers and cold winters. The mean daily maximum temperature ranges from 35°C in summer to approximately 17°C in winter, and the mean daily minimum temperature ranges from 5°C in summer to about -24.5°C in winter. The mean
Compensation of spatial system response in SPECT with conjugate gradient reconstruction technique
International Nuclear Information System (INIS)
Formiconi, A.R.; Pupi, A.; Passeri, A.
1989-01-01
A procedure for determination of the system matrix in single photon emission tomography (SPECT) is described which uses a conjugate gradient reconstruction technique to take into account the variable system resolution of a camera equipped with parallel-hole collimators. The procedure involves acquisition of system line spread functions (LSF) in the region occupied by the object studied. Those data are used to generate a set of weighting factors based on the assumption that the LSFs of the collimated camera are of Gaussian shape with full width at half maximum (FWHM) linearly dependent on source depth over the span of image space. Factors are stored on a disc file for subsequent use in reconstruction. Afterwards, reconstruction is performed using the conjugate gradient method with the system matrix modified by incorporation of these precalculated factors to take into account the variable geometrical system response. The set of weighting factors is regenerated whenever acquisition conditions are changed (collimator, radius of rotation); with an ultra-high-resolution (UHR) collimator, 2000 weighting factors need to be calculated. (author)
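A minimal sketch of the weighting-factor step, assuming, as stated above, Gaussian LSFs whose FWHM depends linearly on source depth; the FWHM intercept, slope, and bin geometry below are illustrative values, not the authors' calibration.

```python
# Precompute depth-dependent Gaussian weighting factors for a parallel-hole
# collimator, assuming FWHM(depth) = fwhm0 + slope * depth (illustrative numbers).
import numpy as np

def gaussian_weights(depths_mm, offsets_mm, fwhm0_mm=4.0, slope=0.02):
    """Row i holds the normalised LSF for a source at depths_mm[i]."""
    fwhm = fwhm0_mm + slope * depths_mm[:, None]             # linear depth dependence
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))        # FWHM -> standard deviation
    w = np.exp(-0.5 * (offsets_mm[None, :] / sigma) ** 2)
    return w / w.sum(axis=1, keepdims=True)

depths = np.arange(0.0, 200.0, 10.0)       # source depths spanning the image space (mm)
offsets = np.arange(-15.0, 16.0, 1.0)      # lateral detector bins around the ray (mm)
weights = gaussian_weights(depths, offsets)
print(weights.shape)                       # one stored weighting profile per depth
```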
International Nuclear Information System (INIS)
Maragliano, C; Heskes, D; Stefancich, M; Chiesa, M; Souier, T
2013-01-01
The need to resolve the electrical properties of confined structures (CNTs, quantum dots, nanorods, etc) is becoming increasingly important in the field of electronic and optoelectronic devices. Here we propose an approach based on amplitude modulated electrostatic force microscopy to obtain measurements at small tip–sample distances, where highly nonlinear forces are present. We discuss how this improves the lateral resolution of the technique and allows probing of the electrical and surface properties. The complete force field at different tip biases is employed to derive the local work function difference. Then, by appropriately biasing the tip–sample system, short-range forces are reconstructed. The short-range component is then separated from the generic tip–sample force in order to recover the pure electrostatic contribution. This data can be employed to derive the tip–sample capacitance curve and the sample dielectric constant. After presenting a theoretical model that justifies the need for probing the electrical properties of the sample in the vicinity of the surface, the methodology is presented in detail and verified experimentally. (paper)
Randomized interpolative decomposition of separated representations
Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory
2015-01-01
We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
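The matrix-level step that the tensor reduction relies on can be sketched as follows: a randomized row projection followed by a column-pivoted QR selects a skeleton of columns, and least squares recovers the interpolation coefficients. This is a generic illustration of a randomized interpolative decomposition, not the authors' CTD-ID code.

```python
# Randomized interpolative decomposition of a matrix: A ~ A[:, idx] @ T.
import numpy as np
from scipy.linalg import qr

def randomized_id(A, k, oversample=10, seed=0):
    rng = np.random.default_rng(seed)
    Y = rng.standard_normal((k + oversample, A.shape[0])) @ A   # random row sketch
    _, _, piv = qr(Y, mode="economic", pivoting=True)           # column-pivoted QR
    idx = piv[:k]                                               # skeleton columns
    T, *_ = np.linalg.lstsq(Y[:, idx], Y, rcond=None)           # interpolation coefficients
    return idx, T

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 15)) @ rng.standard_normal((15, 100))   # rank-15 test matrix
idx, T = randomized_id(A, k=15)
print("relative error:", np.linalg.norm(A - A[:, idx] @ T) / np.linalg.norm(A))
```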
Size-Dictionary Interpolation for Robot's Adjustment
Directory of Open Access Journals (Sweden)
Morteza eDaneshmand
2015-05-01
Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner, to be used in a realistic virtual fitting room in which automatic activation of the chosen mannequin robot is also considered, so that it can mimic body shapes and sizes instantly while several mannequin robots of different genders and sizes are simultaneously connected to the same computer. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure finds the set of positions of the biologically inspired actuators that makes the mannequin robot resemble the scanned person's body shape as closely as possible: the distances between successive size templates are mapped linearly onto the corresponding actuator position sets, and the control measures are then calculated so that the same distance proportions are maintained, with the mathematical description determined by minimizing the Euclidean distance between the size-dictionary template vectors and the vector of desired body sizes. In this research work, the experimental results of the implementation of the proposed method on Fits.me's mannequin robots are visually illustrated, and explanation of the remaining steps towards completion of the whole realistic online fitting package is provided.
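A heavily simplified, hypothetical sketch of the interpolation step: body measurements are matched to the two nearest size-dictionary templates by Euclidean distance and the actuator positions are blended with inverse-distance weights. The template and actuator values are invented for illustration and do not come from the Fits.me system.

```python
# Hypothetical size-dictionary interpolation: nearest templates -> actuator positions.
import numpy as np

size_templates = np.array([[88.0, 74.0, 94.0],       # chest, waist, hip (cm) per template
                           [96.0, 82.0, 102.0],
                           [104.0, 90.0, 110.0]])
actuator_positions = np.array([[0.10, 0.20, 0.15],   # actuator set per template (made up)
                               [0.45, 0.55, 0.50],
                               [0.80, 0.90, 0.85]])

def interpolate_actuators(measured):
    d = np.linalg.norm(size_templates - measured, axis=1)   # Euclidean distances
    i, j = np.argsort(d)[:2]                                # two closest templates
    w = d[j] / (d[i] + d[j])                                # weight of the closer template
    return w * actuator_positions[i] + (1.0 - w) * actuator_positions[j]

print(interpolate_actuators(np.array([100.0, 86.0, 106.0])))
```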
Wang, J.; Emile-Geay, J.; Guillot, D.
2011-12-01
Directory of Open Access Journals (Sweden)
Kimmel Veljo
2009-03-01
Full Text Available Abstract Background Health impact assessments (HIA) use information on exposure, baseline mortality/morbidity and exposure-response functions from epidemiological studies in order to quantify the health impacts of existing situations and/or alternative scenarios. The aim of this study was to improve HIA methods for air pollution studies in situations where exposures can be estimated using GIS with high spatial resolution and dispersion modeling approaches. Methods Tallinn was divided into 84 sections according to neighborhoods, with a total population of approx. 390 000 persons. Actual baseline rates for total mortality and hospitalization with cardiovascular and respiratory diagnosis were identified. The exposure to fine particles (PM2.5) from local emissions was defined as the modeled annual levels. The model validation and morbidity assessment were based on 2006 PM10 or PM2.5 levels at 3 monitoring stations. The exposure-response coefficients used were, for total mortality, 6.2% (95% CI 1.6–11%) per 10 μg/m3 increase of annual mean PM2.5 concentration and, for the assessment of respiratory and cardiovascular hospitalizations, 1.14% (95% CI 0.62–1.67%) and 0.73% (95% CI 0.47–0.93%) per 10 μg/m3 increase of PM10. The direct costs related to morbidity were calculated according to hospital treatment expenses in 2005 and the cost of premature deaths using the concept of Value of Life Year (VOLY). Results The annual population-weighted modeled exposure to locally emitted PM2.5 in Tallinn was 11.6 μg/m3. Our analysis showed that this corresponds to 296 (95% CI 76–528) premature deaths resulting in 3859 (95% CI 1023–6636) Years of Life Lost (YLL) per year. The average decrease in life expectancy at birth per resident of Tallinn was estimated to be 0.64 (95% CI 0.17–1.10) years. While in the polluted city centre this may reach 1.17 years, in the least polluted neighborhoods it remains between 0.1 and 0.3 years. When dividing the YLL by the number of
Evaluation of Interpolants in Their Ability to Fit Seismometric Time Series
Basu, Kanadpriya; Mariani, Maria; Serpa, Laura; Sinha, Ritwik
2015-01-01
This article is devoted to the study of the ASARCO demolition seismic data. Two different classes of modeling techniques are explored: First, mathematical interpolation methods and second statistical smoothing approaches for curve fitting. We estimate the characteristic parameters of the propagation medium for seismic waves with multiple mathematical and statistical techniques, and provide the relative advantages of each approach to address fitting of such data. We conclude that mathematical ...
Chowdhry, Bhawani Shankar; White, Neil M.; Jeswani, Jai Kumar; Dayo, Khalil; Rathi, Manorma
2009-07-01
Disasters affecting infrastructure, such as the 2001 earthquakes in India, 2005 in Pakistan, 2008 in China and the 2004 tsunami in Asia, provide a common need for intelligent buildings and smart civil structures. Now, imagine massive reductions in time to get the infrastructure working again, real-time information on damage to buildings, massive reductions in cost and time to certify that structures are undamaged and can still be operated, reductions in the number of structures to be rebuilt (if they are known not to be damaged). Achieving these ideas would lead to huge, quantifiable, long-term savings to government and industry. Wireless sensor networks (WSNs) can be deployed in buildings to make any civil structure both smart and intelligent. WSNs have recently gained much attention in both public and research communities because they are expected to bring a new paradigm to the interaction between humans, environment, and machines. This paper presents the deployment of WSN nodes in the Top Quality Centralized Instrumentation Centre (TQCIC). We created an ad hoc networking application to collect real-time data sensed from the nodes that were randomly distributed throughout the building. If the sensors are relocated, then the application automatically reconfigures itself in the light of the new routing topology. WSNs are event-based systems that rely on the collective effort of several micro-sensor nodes, which are continuously observing a physical phenomenon. WSN applications require spatially dense sensor deployment in order to achieve satisfactory coverage. The degree of spatial correlation increases with the decreasing inter-node separation. Energy consumption is reduced dramatically by having only those sensor nodes with unique readings transmit their data. We report on an algorithm based on a spatial correlation technique that assures high QoS (in terms of SNR) of the network as well as proper utilization of energy, by suppressing redundant data transmission
Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation
Directory of Open Access Journals (Sweden)
Hezerul Abdul Karim
2004-09-01
Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castango 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rate, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.
Bulliner, E. A., IV; Erwin, S. O.; Anderson, B. J.; Wilson, H.; Jacobson, R. B.
2016-12-01
The transition from endogenous to exogenous feeding is an important life-stage transition for many riverine fish larvae. On the Missouri River, U.S., riverine alteration has decreased connectivity between the navigation channel and complex, food-producing and foraging areas on the channel margins, namely shallow side channels and sandbar complexes. A favored hypothesis, the interception hypothesis, for recruitment failure of pallid sturgeon is that drifting larvae are not able to exit the highly engineered navigation channel, and therefore starve. We present work exploring measures of hydraulic connectivity between the navigation channel and channel margins using multiple data-collection protocols with acoustic Doppler current profilers (ADCPs). As ADCP datasets alone often do not have high enough spatial resolution to characterize interception and connectivity sufficiently at the scale of drifting sturgeon larvae, they are often supplemented with physical and empirical models. Using boat-mounted ADCPs, we collected 3-dimensional current velocities with a variety of driving techniques (specifically, regularly spaced transects, reciprocal transects, and irregular patterns) around areas of potential larval interception. We then used toolkits based in Python to interpolate 3-dimensional velocity fields at spatial scales finer than the original measurements, and visualized resultant velocity vectors and flowlines in the software package Paraview. Using these visualizations, we investigated the necessary resolution of field measurements required to model connectivity with channel margin areas on large, highly engineered river ecosystems such as the Missouri River. We anticipate that results from this work will be used to help inform models of larval interception under current conditions. Furthermore, results from this work will be useful in developing monitoring strategies to evaluate the restoration of channel complexity to support ecological functions.
Energy Technology Data Exchange (ETDEWEB)
Schwalm, Michael
2011-06-16
Today's rising demand for energy relies for about 85% on the consumption of fossil fuels. A change to regenerative forms of energy is an important and inevitable step in order to face the challenges of climate change and fading natural resources. Photovoltaics (PV) plays a special role among the various forms of renewable energy since it converts sunlight, our most important and virtually endless energy source, directly into electricity. However, currently available PV systems are still very expensive and, in combination with their relatively low performance, can hardly or cannot compete with conventional sources of energy from an economical point of view. One possibility to overcome this problem is the combination of highly efficient multi-junction solar cells with cost-efficient concentrator optics that focus the incident sunlight onto a small spot. The material system (GaIn)(NAs) is envisioned to play an important role in a future generation of multi-junction solar cells for concentrator applications, being a further development of existing device concepts. However, the carrier diffusion lengths in (GaIn)(NAs)-based solar cell layers in particular are currently too low for the fabrication of highly efficient PV structures. In this work, two novel techniques for the characterization of solar cells are developed and evaluated by experiments on test structures and numerical simulations. Both are based on the measurement of laser-induced currents. Spatially-resolved photocurrent spectroscopy (SRPS) allows a spatially-resolved determination of locally induced photocurrents at a fixed bias voltage, while spatially-resolved IV-characteristics (SRIV) are measurements of local I-V characteristics at a certain position. It is found that SRPS and SRIV allow for a reliable and meaningful characterization of solar cell prototypes with a high spatial resolution. Especially the local p-n parameters of the sample become accessible. These are the short circuit current
Oléron Evans, Thomas P; Bishop, Steven R
2014-08-01
We present a simple mathematical model to replicate the key features of the sterile insect technique (SIT) for controlling pest species, with particular reference to the mosquito Aedes aegypti, the main vector of dengue fever. The model differs from the majority of those studied previously in that it is simultaneously spatially explicit and involves pulsed, rather than continuous, sterile insect releases. The spatially uniform equilibria of the model are identified and analysed. Simulations are performed to analyse the impact of varying the number of release sites, the interval between pulsed releases and the overall volume of sterile insect releases on the effectiveness of SIT programmes. Results show that, given a fixed volume of available sterile insects, increasing the number of release sites and the frequency of releases increases the effectiveness of SIT programmes. It is also observed that programmes may become completely ineffective if the interval between pulsed releases is greater than a certain threshold value and that, beyond a certain point, increasing the overall volume of sterile insects released does not improve the effectiveness of SIT. It is also noted that insect dispersal drives a rapid recolonisation of areas in which the species has been eradicated and we argue that understanding the density dependent mortality of released insects is necessary to develop efficient, cost-effective SIT programmes. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Zhimin Lao
2012-08-01
Full Text Available Mosaic mutant analysis, the study of cellular defects in scattered mutant cells in a wild-type environment, is a powerful approach for identifying critical functions of genes and has been applied extensively to invertebrate model organisms. A highly versatile technique has been developed in the mouse: MASTR (mosaic mutant analysis with spatial and temporal control of recombination), which utilizes the increasing number of floxed alleles and simultaneously combines conditional gene mutagenesis and cell marking for fate analysis. A targeted allele (R26MASTR) was engineered; the allele expresses a GFPcre fusion protein following FLP-mediated recombination, which serves the dual function of deleting floxed alleles and marking mutant cells with GFP. Within 24 hr of tamoxifen administration to R26MASTR mice carrying an inducible FlpoER transgene and a floxed allele, nearly all GFP-expressing cells have a mutant allele. The fate of single cells lacking FGF8 or SHH signaling in the developing hindbrain was analyzed using MASTR, and it was revealed that there is only a short time window when neural progenitors require FGFR1 for viability and that granule cell precursors differentiate rapidly when SMO is lost. MASTR is a powerful tool that provides cell-type-specific (spatial) and temporal marking of mosaic mutant cells and is broadly applicable to developmental, cancer, and adult stem cell studies.
Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression
Energy Technology Data Exchange (ETDEWEB)
Brislawn, Christopher M. [Los Alamos National Laboratory
2012-08-13
How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications, and could be used for more generic smooth interpolation.
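The following is an illustrative wavelet-smoothing fill, not the Los Alamos implementation: masked samples are repeatedly replaced by a DWT reconstruction whose detail coefficients are zeroed, while the known samples are re-imposed after every pass. It assumes the PyWavelets package; the wavelet, decomposition level, and test field are arbitrary choices.

```python
# Iterative wavelet-smoothed fill of masked samples on a rectangular grid (sketch).
import numpy as np
import pywt

def wavelet_fill(data, mask, wavelet="db2", level=3, iters=50):
    """mask is True where samples are valid; masked samples are interpolated."""
    filled = np.where(mask, data, data[mask].mean())
    for _ in range(iters):
        coeffs = pywt.wavedec2(filled, wavelet, level=level)
        coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in det)
                                for det in coeffs[1:]]        # keep approximation only
        smooth = pywt.waverec2(coeffs, wavelet)[:data.shape[0], :data.shape[1]]
        filled = np.where(mask, data, smooth)                 # re-impose known samples
    return filled

field = np.add.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
mask = np.ones_like(field, dtype=bool)
mask[20:40, 10:30] = False                                    # rectangular 'hole'
print(float(np.abs(wavelet_fill(field, mask) - field)[~mask].max()))
```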
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-03
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Systems and methods for interpolation-based dynamic programming
Rockwood, Alyn
2013-01-01
Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.
Distance-two interpolation for parallel algebraic multigrid
International Nuclear Information System (INIS)
Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M
2007-01-01
In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers
Effect of raingage density, position and interpolation on rainfall-discharge modelling
Ly, S.; Sohier, C.; Charles, C.; Degré, A.
2012-04-01
Precipitation, traditionally observed using raingages or weather stations, is one of the main parameters that have a direct impact on runoff production. Precipitation data require a preliminary spatial interpolation prior to hydrological modeling. The accuracy of the modelling result depends on the accuracy of the interpolated spatial rainfall, which differs according to the interpolation method. The accuracy of the interpolated spatial rainfall is usually determined by cross-validation. The objective of this study is to assess different interpolation methods of daily rainfall at the watershed scale through hydrological modelling and to explore the best methods that provide a good long term simulation. Four versions of geostatistics: Ordinary Kriging (ORK), Universal Kriging (UNK), Kriging with External Drift (KED) and Ordinary Cokriging (OCK), and two types of deterministic methods: Thiessen polygon (THI) and Inverse Distance Weighting (IDW), are used to produce 30-year daily rainfall inputs for a distributed physically-based hydrological model (EPIC-GRID). This work is conducted in the Ourthe and Ambleve nested catchments, located in the Ardennes hilly landscape in the Walloon region, Belgium. The total catchment area is 2908 km2 and lies between 67 and 693 m in elevation. The multivariate geostatistics (KED and OCK) are also used by incorporating elevation as external data to improve the rainfall prediction. This work also aims at analysing the effect of different raingage densities and positions used for interpolation on the modelled stream flow, to get insight into the capability and limitation of the geostatistical methods. The number of raingages varies from 70, 60, 50, 40, 30, 20, and 8 down to 4 stations located in and surrounding the catchment area. In the latter case, we try to use different positions: around the catchment and only a part of the catchment. The results show that a simple method like THI fails to capture the rainfall and to produce
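A minimal sketch of Inverse Distance Weighting, one of the deterministic methods compared above, applied to synthetic daily raingage values; the gauge locations, rainfall amounts, and power parameter are illustrative.

```python
# Inverse Distance Weighting (IDW) of scattered raingage values onto a grid (sketch).
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_target[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power                  # inverse-distance weights
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
gauges = rng.uniform(0.0, 50.0, size=(30, 2))     # raingage coordinates (km)
rain = rng.gamma(shape=2.0, scale=3.0, size=30)   # daily rainfall at each gauge (mm)
gx, gy = np.meshgrid(np.linspace(0, 50, 100), np.linspace(0, 50, 100))
targets = np.column_stack([gx.ravel(), gy.ravel()])
rain_grid = idw(gauges, rain, targets).reshape(gx.shape)
print(rain_grid.shape, float(rain_grid.min()), float(rain_grid.max()))
```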
Geostatistical interpolation of available copper in orchard soil as influenced by planting duration.
Fu, Chuancheng; Zhang, Haibo; Tu, Chen; Li, Lianzhen; Luo, Yongming
2018-01-01
Mapping the spatial distribution of available copper (A-Cu) in orchard soils is important in agriculture and environmental management. However, data on the distribution of A-Cu in orchard soils are usually highly variable and severely skewed due to the continuous input of fungicides. In this study, ordinary kriging combined with planting duration (OK_PD) is proposed as a method for improving the interpolation of soil A-Cu. Four normal distribution transformation methods, namely, the Box-Cox, Johnson, rank order, and normal score methods, were utilized prior to interpolation. A total of 317 soil samples were collected in the orchards of the Northeast Jiaodong Peninsula. Moreover, 1472 orchards were investigated to obtain a map of planting duration using Voronoi tessellations. The soil A-Cu content ranged from 0.09 to 106.05 with a mean of 18.10 mg kg-1, reflecting the high availability of Cu in the soils. Soil A-Cu concentrations exhibited a moderate spatial dependency and increased significantly with increasing planting duration. All the normal transformation methods successfully decreased the skewness and kurtosis of the soil A-Cu and the associated residuals, and also computed more robust variograms. OK_PD could generate better spatial prediction accuracy than ordinary kriging (OK) for all transformation methods tested, and it also provided a more detailed map of soil A-Cu. Normal score transformation produced satisfactory accuracy and showed an advantage in ameliorating the smoothing effect derived from the interpolation methods. Thus, normal score transformation prior to kriging combined with planting duration (NSOK_PD) is recommended for the interpolation of soil A-Cu in this area.
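A minimal sketch of the normal score transformation mentioned above: each observation is replaced by the standard normal quantile of its empirical rank before kriging (the back-transformation after interpolation is omitted); the synthetic lognormal data merely mimic a skewed trace-metal variable.

```python
# Normal score transformation of a skewed soil variable prior to kriging (sketch).
import numpy as np
from scipy.stats import norm, rankdata

def normal_score(x):
    ranks = rankdata(x, method="average")
    return norm.ppf(ranks / (len(x) + 1.0))       # avoid the 0 and 1 quantiles

rng = np.random.default_rng(7)
a_cu = rng.lognormal(mean=2.5, sigma=0.8, size=317)   # skewed, A-Cu-like values
z = normal_score(a_cu)
print(round(float(z.mean()), 3), round(float(z.std()), 3))   # approximately 0 and 1
```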
Directory of Open Access Journals (Sweden)
Noella A Dietz
2011-01-01
Full Text Available Introduction: Smoking-attributable risks for lung, esophageal, and head and neck (H/N) cancers range from 54% to 90%. Identifying areas with higher than average cancer risk and smoking rates, then targeting those areas for intervention, is one approach to more rapidly lower the overall tobacco disease burden in a given state. Our research team used spatial modeling techniques to identify areas in Florida with higher than expected tobacco-associated cancer incidence clusters. Materials and Methods: Geocoded tobacco-associated incident cancer data from 1998 to 2002 from the Florida Cancer Data System were used. Tobacco-associated cancers included lung, esophageal, and H/N cancers. SaTScan was used to identify geographic areas that had statistically significant (P<0.10) excess age-adjusted rates of tobacco-associated cancers. The Poisson-based spatial scan statistic was used. Phi correlation coefficients were computed to examine associations among block groups with/without overlapping cancer clusters. Logistic regression was used to assess associations between county-level smoking prevalence rates and being diagnosed within versus outside a cancer cluster. Community-level smoking rates were obtained from the 2002 Florida Behavioral Risk Factor Surveillance System (BRFSS). Analyses were repeated using the 2007 BRFSS to examine the consistency of associations. Results: Lung cancer clusters were geographically larger for both squamous cell and adenocarcinoma cases in Florida from 1998 to 2002 than esophageal or H/N clusters. There were very few squamous cell and adenocarcinoma esophageal cancer clusters. H/N cancer mapping showed some squamous cell and a very small number of adenocarcinoma cancer clusters. Phi correlations were generally weak to moderate in strength. The odds of having an invasive lung cancer cluster increased by 12% per increase in the county-level smoking rate. Results were inconsistent for esophageal and H/N cancers, with some
Radial Basis Function (RBF) Interpolation and Investigating its Impact on Rainfall Duration Mapping
Directory of Open Access Journals (Sweden)
Hassan Derakhshan
2012-01-01
Full Text Available Missing data in a database must first be reproduced by appropriate interpolation techniques. Radial basis function (RBF) interpolators can play a significant role in data completion for precipitation mapping. Five RBF techniques were employed to compensate for the missing data in the event-wise dataset of the Upper Paramatta River Catchment in the western suburbs of Sydney, Australia. The related shape parameter, C, of the RBFs was optimized for the first event of the database during a cross-validation process. The normalized mean square error (NMSE), percent average estimation error (PAEE) and coefficient of determination (R2) were the statistics used as validation tools. Results showed that the multiquadric RBF technique, with the least error, best suits compensation of the related database.
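A minimal sketch of tuning the multiquadric shape parameter by leave-one-out cross-validation, in the spirit of the procedure above; the gauge layout, values, and candidate parameters are synthetic, and SciPy's generic RBFInterpolator stands in for the specific formulation used in the paper.

```python
# Leave-one-out selection of the RBF shape parameter (sketch, synthetic data).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
xy = rng.uniform(0.0, 20.0, size=(25, 2))                 # gauge locations
z = np.sin(0.3 * xy[:, 0]) + 0.2 * xy[:, 1] + 0.1 * rng.standard_normal(25)

def loo_rmse(eps):
    errs = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        f = RBFInterpolator(xy[keep], z[keep], kernel="multiquadric", epsilon=eps)
        errs.append(f(xy[i:i + 1])[0] - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

candidates = [0.1, 0.3, 1.0, 3.0, 10.0]
best = min(candidates, key=loo_rmse)
print("selected shape parameter:", best)
```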
Refining of scintillation detector signals relying on interpolated wavelets on an FPGA prototype
International Nuclear Information System (INIS)
Aboshosha, A.; Sayed, M.; Ashour, M.; Safwat, A.
2010-01-01
In this article, a signal processing core based on field programmable gate arrays (FPGAs) is developed for the processing of scintillation detector signals. This core is implemented to apply the forward wavelet transform and an interpolation technique. The main purpose is to de-noise, compress and reconstruct these signals, by which the processing speed and storage are optimized. Moreover, this technique gives us all important features of the acquired signals such as counting, shaping and pulse height. A new contribution of our framework arises from employing interpolation techniques to reconstruct the signal, whereby the mother wavelet and details are not required. The hardware design is implemented using a hardware description language (HDL) and is realized practically on the FPGA. The performance of the design has been tested in simulation mode on the ModelSim benchmark and in real-time mode on an XC2S50 Spartan-II FPGA.
Shape-based grey-level image interpolation
International Nuclear Information System (INIS)
Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh
1999-01-01
The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
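The binary, distance-map variant that the grey-level method above extends can be sketched briefly: each slice is converted to a signed distance map, the maps are interpolated linearly, and the zero level set gives the intermediate shape. The test shapes below are arbitrary.

```python
# Classical shape-based interpolation between two binary slices (sketch).
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(binary):
    return edt(binary) - edt(~binary)          # positive inside, negative outside

def interpolate_slice(slice_a, slice_b, frac):
    d = (1.0 - frac) * signed_distance(slice_a) + frac * signed_distance(slice_b)
    return d > 0.0                             # threshold at the zero level set

a = np.zeros((64, 64), dtype=bool); a[20:40, 20:40] = True    # square
b = np.zeros((64, 64), dtype=bool); b[10:50, 25:35] = True    # tall rectangle
mid = interpolate_slice(a, b, 0.5)
print(int(mid.sum()), "pixels in the interpolated slice")
```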
High-temperature behavior of a deformed Fermi gas obeying interpolating statistics.
Algin, Abdullah; Senay, Mustafa
2012-04-01
An outstanding idea originally introduced by Greenberg is to investigate whether there is equivalence between intermediate statistics, which may be different from anyonic statistics, and q-deformed particle algebra. Also, a model to be studied for addressing such an idea could possibly provide us some new consequences about the interactions of particles as well as their internal structures. Motivated mainly by this idea, in this work, we consider a q-deformed Fermi gas model whose statistical properties enable us to effectively study interpolating statistics. Starting with a generalized Fermi-Dirac distribution function, we derive several thermostatistical functions of a gas of these deformed fermions in the thermodynamical limit. We study the high-temperature behavior of the system by analyzing the effects of q deformation on the most important thermostatistical characteristics of the system such as the entropy, specific heat, and equation of state. It is shown that such a deformed fermion model in two and three spatial dimensions exhibits the interpolating statistics in a specific interval of the model deformation parameter 0 < q < 1. In particular, for two and three spatial dimensions, it is found from the behavior of the third virial coefficient of the model that the deformation parameter q interpolates completely between attractive and repulsive systems, including the free boson and fermion cases. From the results obtained in this work, we conclude that such a model could provide much physical insight into some interacting theories of fermions, and could be useful to further study the particle systems with intermediate statistics.
Directory of Open Access Journals (Sweden)
Tuan Pah Rokiah Syed Hussain
2011-09-01
Full Text Available In the recent decade, the government has made many efforts to develop rural areas as a step towards curbing the vast economic disparities within the nation's communities. This effort is in line with the National Development Policy, promoted by the government as a shift from the New Economic Policy. The study area has likewise been affected by development activities. The enormous economic developments have encouraged growth in urbanization, tourism and recreation, public facilities, housing and so on. Furthermore, cultivated land and vegetation are shrinking as this growth shifts land use patterns, illustrating how human pressure on the environment increases to meet livelihood needs. In response to that, this research examines the extent of land use change using a Geographic Information System (GIS) and Spatial Analyst to determine the actual areas affected and the types of significant changes in land use. This is shown through the study outcome via spatial analysis techniques adapted from Patch Density & Size Metrics (Mean Patch Size), Edge Metrics (Total Edge (TE), Edge Density (ED), Mean Perimeter-Area Ratio (Mpar)) and Shannon's Diversity Index (SHDI). Results of the study show that land use changes have occurred significantly in the study area over the period of 20 years, where all types of analysis verify that there is an increase in patches for every statistical test. The increase in patches reflects current land use changes, land use edge density and land use area in the study area. Moreover, this study investigates the relationship between land use and the rising flood disaster frequency and intensity observed recently in the Kelantan River Basin.
Interpolation from Grid Lines: Linear, Transfinite and Weighted Method
DEFF Research Database (Denmark)
Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen
2017-01-01
When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...
Shape Preserving Interpolation Using C2 Rational Cubic Spline
Directory of Open Access Journals (Sweden)
Samsul Ariffin Abdul Karim
2016-01-01
Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This will enable the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. For all presented numerical results the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.
Input variable selection for interpolating high-resolution climate ...
African Journals Online (AJOL)
Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...
An efficient interpolation filter VLSI architecture for HEVC standard
Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang
2015-12-01
The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed. It can save 19.7 % processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused data path of interpolation, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation only requires 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which can reduce the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
Some observations on interpolating gauges and non-covariant gauges
Indian Academy of Sciences (India)
We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...
Convergence of trajectories in fractal interpolation of stochastic processes
International Nuclear Information System (INIS)
Małysz, Robert
2006-01-01
The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation
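For readers unfamiliar with FIFs, the deterministic construction can be sketched in a few lines: an iterated function system of affine maps interpolates the given nodes, and its attractor (generated here with the chaos game) is the graph of the FIF. The nodes and vertical scaling factors below are arbitrary.

```python
# Fractal interpolation function via the chaos game on its IFS (sketch).
import numpy as np

def fif_chaos_game(x, y, d, n_iter=20000, seed=0):
    """x, y: interpolation nodes (length N+1); d: vertical scalings (length N), |d_i| < 1."""
    rng = np.random.default_rng(seed)
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    pts = np.empty((n_iter, 2))
    px, py = x0, y0
    for k in range(n_iter):
        i = rng.integers(1, len(x))                        # pick one affine map w_i
        a = (x[i] - x[i - 1]) / (xN - x0)
        b = x[i - 1] - a * x0
        c = (y[i] - y[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
        e = y[i - 1] - c * x0 - d[i - 1] * y0
        px, py = a * px + b, c * px + d[i - 1] * py + e    # apply w_i(x, y)
        pts[k] = (px, py)
    return pts

nodes_x = np.array([0.0, 1.0, 2.0, 3.0])
nodes_y = np.array([0.0, 1.5, 0.5, 2.0])
graph = fif_chaos_game(nodes_x, nodes_y, d=np.array([0.4, -0.3, 0.5]))
print(graph.shape)          # points approximating the graph of the FIF
```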
Improved Interpolation Kernels for Super-resolution Algorithms
DEFF Research Database (Denmark)
Rasti, Pejman; Orlova, Olga; Tamberg, Gert
2016-01-01
Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....
Improving the visualization of electron-microscopy data through optical flow interpolation
Carata, Lucian
2013-01-01
Technical developments in neurobiology have reached a point where the acquisition of high resolution images representing individual neurons and synapses becomes possible. For this, the brain tissue samples are sliced using a diamond knife and imaged with electron-microscopy (EM). However, the technique achieves a low resolution in the cutting direction, due to limitations of the mechanical process, making a direct visualization of a dataset difficult. We aim to increase the depth resolution of the volume by adding new image slices interpolated from the existing ones, without requiring modifications to the EM image-capturing method. As classical interpolation methods do not provide satisfactory results on this type of data, the current paper proposes a re-framing of the problem in terms of motion volumes, considering the depth axis as a temporal axis. An optical flow method is adapted to estimate the motion vectors of pixels in the EM images, and this information is used to compute and insert multiple new images at certain depths in the volume. We evaluate the visualization results in comparison with interpolation methods currently used on EM data, transforming the highly anisotropic original dataset into a dataset with a larger depth resolution. The interpolation based on optical flow better reveals neurite structures with realistic undistorted shapes, and helps to easier map neuronal connections. © 2011 ACM.
Dikbas, Salih; Altunbasak, Yucel
2013-08-01
In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) to reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better quality-interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation using forward and backward MVs is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and smoothness constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
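The bidirectional motion-compensation step can be illustrated with a toy example: given dense forward and backward motion fields, both frames are backward-warped to the temporal midpoint and blended. The flows here are synthetic global translations, not the output of the proposed true-motion estimator.

```python
# Bidirectional motion-compensated interpolation of a midpoint frame (toy sketch).
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to(frame, flow, t):
    """Backward-warp: output pixel p takes the value of `frame` at p - t * flow(p)."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    coords = np.stack([yy - t * flow[0], xx - t * flow[1]])
    return map_coordinates(frame, coords, order=1, mode="nearest")

rng = np.random.default_rng(2)
f0 = rng.random((64, 64))
shift = 4
f1 = np.roll(f0, shift, axis=1)                                  # global motion in x
fwd = np.stack([np.zeros_like(f0), np.full_like(f0, shift)])     # flow f0 -> f1
bwd = np.stack([np.zeros_like(f0), np.full_like(f0, -shift)])    # flow f1 -> f0

mid = 0.5 * warp_to(f0, fwd, 0.5) + 0.5 * warp_to(f1, bwd, 0.5)  # blended midpoint
ref = np.roll(f0, shift // 2, axis=1)
print(float(np.abs(mid - ref).mean()))   # small, except at the wrapped border columns
```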
Kumari, Madhuri; Singh, Chander Kumar; Bakimchandra, Oinam; Basistha, Ashoke
2017-10-01
In mountainous region with heterogeneous topography, the geostatistical modeling of the rainfall using global data set may not confirm to the intrinsic hypothesis of stationarity. This study was focused on improving the precision of the interpolated rainfall maps by spatial stratification in complex terrain. Predictions of the normal annual rainfall data were carried out by ordinary kriging, universal kriging, and co-kriging, using 80-point observations in the Indian Himalayas extending over an area of 53,484 km2. A two-step spatial clustering approach is proposed. In the first step, the study area was delineated into two regions namely lowland and upland based on the elevation derived from the digital elevation model. The delineation was based on the natural break classification method. In the next step, the rainfall data was clustered into two groups based on its spatial location in lowland or upland. The terrain ruggedness index (TRI) was incorporated as a co-variable in co-kriging interpolation algorithm. The precision of the kriged and co-kriged maps was assessed by two accuracy measures, root mean square error and Chatfield's percent better. It was observed that the stratification of rainfall data resulted in 5-20 % of increase in the performance efficiency of interpolation methods. Co-kriging outperformed the kriging models at annual and seasonal scale. The result illustrates that the stratification of the study area improves the stationarity characteristic of the point data, thus enhancing the precision of the interpolated rainfall maps derived using geostatistical methods.
Scalable Intersample Interpolation Architecture for High-channel-count Beamformers
DEFF Research Database (Denmark)
Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt
2011-01-01
Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation and a band-pass per-channel interpolation architecture is 58 % and 75 %, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation
Directory of Open Access Journals (Sweden)
Qiang DU
2018-04-01
Full Text Available Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, which is based on Shannon sampling and utilizes Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with the Caratheodory representation. The samples of the differential signal are obtained by the Caratheodory representation from the samples of the original signal only. So, only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
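The baseline that the letter improves upon, Hermite fractional delay with a digital (central-difference) differentiator, can be sketched directly; the Caratheodory-based derivative estimation itself is not reproduced here, and the test signal and delay are arbitrary.

```python
# Cubic Hermite fractional delay with central-difference derivative estimates (sketch).
import numpy as np

def hermite_fractional_delay(x, mu):
    """Return y with y[n] ~ x(n - mu), 0 < mu < 1, for the interior samples of x."""
    n = np.arange(2, len(x) - 1)
    p0, p1 = x[n - 1], x[n]
    m0 = 0.5 * (x[n] - x[n - 2])            # derivative estimate at n - 1
    m1 = 0.5 * (x[n + 1] - x[n - 1])        # derivative estimate at n
    t = 1.0 - mu                            # evaluation point between n - 1 and n
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

n = np.arange(200)
x = np.sin(0.12 * n)
y = hermite_fractional_delay(x, mu=0.4)
ref = np.sin(0.12 * (np.arange(2, 199) - 0.4))
print("max error:", float(np.abs(y - ref).max()))
```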
Statistical analysis and interpolation of compositional data in materials science.
Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M
2015-02-09
Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
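One standard device from the framework described above is the centred log-ratio (clr) transform: Euclidean operations such as interpolation are performed on clr coordinates and the result is mapped back to the simplex. The sketch below interpolates halfway between two made-up three-component compositions.

```python
# Centred log-ratio (clr) transform for interpolating compositional data (sketch).
import numpy as np

def clr(compositions):
    """Rows are strictly positive and sum to 1 (zeros must be treated beforehand)."""
    logc = np.log(compositions)
    return logc - logc.mean(axis=1, keepdims=True)

def clr_inverse(z):
    expz = np.exp(z)
    return expz / expz.sum(axis=1, keepdims=True)

comps = np.array([[0.70, 0.20, 0.10],
                  [0.20, 0.30, 0.50]])
z = clr(comps)
midpoint = clr_inverse(0.5 * (z[:1] + z[1:]))      # interpolate in clr space
print(midpoint, float(midpoint.sum()))             # still sums to 1
```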
Insect brains use image interpolation mechanisms to recognise rotated objects.
Directory of Open Access Journals (Sweden)
Adrian G Dyer
Full Text Available Recognising complex three-dimensional objects presents significant challenges to visual systems when these objects are rotated in depth. The image processing requirements for reliable individual recognition under these circumstances are computationally intensive since local features and their spatial relationships may significantly change as an object is rotated in the horizontal plane. Visual experience is known to be important in primate brains learning to recognise rotated objects, but currently it is unknown how animals with comparatively simple brains deal with the problem of reliably recognising objects when seen from different viewpoints. We show that the miniature brain of honeybees initially demonstrates a low tolerance for novel views of complex shapes (e.g. human faces), but can learn to recognise novel views of stimuli by interpolating between or 'averaging' views they have experienced. The finding that visual experience is also important for bees has important implications for understanding how three dimensional biologically relevant objects like flowers are recognised in complex environments, and for how machine vision might be taught to solve related visual problems.
Subsurface temperature maps in French sedimentary basins: new data compilation and interpolation
International Nuclear Information System (INIS)
Bonte, D.; Guillou-Frottier, L.; Garibaldi, C.; Bourgine, B.; Lopez, S.; Bouchot, V.; Garibaldi, C.; Lucazeau, F.
2010-01-01
Assessment of the underground geothermal potential requires the knowledge of deep temperatures (1-5 km). Here, we present new temperature maps obtained from oil boreholes in the French sedimentary basins. Because of their origin, the data need to be corrected, and their local character necessitates spatial interpolation. Previous maps were obtained in the 1970's using empirical corrections and manual interpolation. In this study, we update the number of measurements by using values collected during the last thirty years, correct the temperatures for transient perturbations and carry out statistical analyses before modelling the 3D distribution of temperatures. This dataset provides 977 temperatures corrected for transient perturbations in 593 boreholes located in the French sedimentary basins. An average temperature gradient of 30.6 deg. C/km is obtained for a representative surface temperature of 10 deg. C. When surface temperature is not accounted for, deep measurements are best fitted with a temperature gradient of 25.7 deg. C/km. We perform a geostatistical analysis on a residual temperature dataset (using a drift of 25.7 deg. C/km) to constrain the 3D interpolation kriging procedure with horizontal and vertical models of variograms. The interpolated residual temperatures are added to the country-scale averaged drift in order to get a three dimensional thermal structure of the French sedimentary basins. The 3D thermal block enables us to extract isothermal surfaces and 2D sections (iso-depth maps and iso-longitude cross-sections). A number of anomalies with a limited depth and spatial extension have been identified, from shallow in the Rhine graben and Aquitanian basin, to deep in the Provence basin. Some of these anomalies (Paris basin, Alsace, south of the Provence basin) may be partly related to thick insulating sediments, while for some others (southwestern Aquitanian basin, part of the Provence basin) large-scale fluid circulation may explain superimposed
Saini, Harsh; Raicar, Gaurav; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok
2015-12-07
Protein subcellular localization is an important topic in proteomics since it is related to a protein's overall function, helps in the understanding of metabolic pathways, and aids drug design and discovery. In this paper, a basic approximation technique from natural language processing called the linear interpolation smoothing model is applied for predicting protein subcellular localizations. The proposed approach extracts features from syntactical information in protein sequences to build probabilistic profiles using dependency models, which are used in linear interpolation to determine how likely a sequence is to belong to a particular subcellular location. This technique builds a statistical model based on maximum likelihood. It is able to deal effectively with the high dimensionality that hinders other traditional classifiers such as Support Vector Machines or k-Nearest Neighbours, without sacrificing performance. This approach has been evaluated by predicting subcellular localizations of Gram-positive and Gram-negative bacterial proteins. Copyright © 2015 Elsevier Ltd. All rights reserved.
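The mixing step of linear interpolation smoothing can be illustrated with a toy n-gram profile over amino-acid symbols, where higher-order conditional probabilities are combined with lower-order ones using fixed weights. The sequences, weights and function names below are hypothetical and only sketch this mixing idea, not the paper's feature extraction or evaluation protocol.

```python
from collections import Counter

def train_counts(sequences):
    """Unigram, bigram and trigram counts over amino-acid symbols."""
    uni, bi, tri = Counter(), Counter(), Counter()
    for s in sequences:
        uni.update(s)
        bi.update(zip(s, s[1:]))
        tri.update(zip(s, s[1:], s[2:]))
    return uni, bi, tri

def interpolated_prob(ctx2, ctx1, sym, counts, lambdas=(0.5, 0.3, 0.2)):
    """P(sym | ctx2, ctx1) as a convex mix of trigram, bigram and unigram ML estimates."""
    uni, bi, tri = counts
    total = sum(uni.values())
    p1 = uni[sym] / total if total else 0.0
    p2 = bi[(ctx1, sym)] / uni[ctx1] if uni[ctx1] else 0.0
    p3 = tri[(ctx2, ctx1, sym)] / bi[(ctx2, ctx1)] if bi[(ctx2, ctx1)] else 0.0
    l3, l2, l1 = lambdas
    return l3 * p3 + l2 * p2 + l1 * p1

# One profile would be trained per subcellular location; a query sequence is then
# assigned to the location whose profile gives it the highest likelihood.
counts = train_counts(["MKLVTA", "MKAVQA"])     # toy training sequences
print(interpolated_prob("M", "K", "L", counts))
```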
Velasquez, N.; Ochoa, A.; Castillo, S.; Hoyos Ortiz, C. D.
2017-12-01
The skill of river discharge simulation using hydrological models strongly depends on the quality and spatio-temporal representativeness of precipitation during storm events. All precipitation measurement strategies have their own strengths and weaknesses that translate into discharge simulation uncertainties. Distributed hydrological models are based on evolving rainfall fields in the same time scale as the hydrological simulation. In general, rainfall measurements from a dense and well maintained rain gauge network provide a very good estimation of the total volume for each rainfall event; however, the spatial structure relies on interpolation strategies, introducing considerable uncertainty in the simulation process. On the other hand, rainfall retrievals from radar reflectivity achieve a better spatial structure representation but with higher uncertainty in the surface precipitation intensity and volume, depending on the vertical rainfall characteristics and radar scan strategy. To assess the impact of both rainfall measurement methodologies on hydrological simulations, and in particular the effects of the rainfall spatio-temporal variability, a numerical modeling experiment is proposed, including the use of a novel QPE (Quantitative Precipitation Estimation) method based on disdrometer data in order to estimate surface rainfall from radar reflectivity. The experiment is based on the simulation of 84 storms; the hydrological simulations are carried out using radar QPE and two different interpolation methods (IDW and TIN), and the simulated peak flows are assessed. Results show significant rainfall differences between radar QPE and the interpolated fields, evidencing a poor representation of storms in the interpolated fields, which tend to miss the precise location of the intense precipitation cores and to artificially generate rainfall in some areas of the catchment. Regarding streamflow modelling, the potential improvement achieved by using radar QPE depends on
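A TIN-type gauge-interpolated rainfall field of the kind compared with radar QPE above can be obtained by piecewise-linear interpolation on the Delaunay triangulation of the gauges. The sketch below uses SciPy's LinearNDInterpolator for this; the gauge locations and rainfall amounts are invented.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Gauge coordinates (x, y in km) and storm rainfall accumulations (mm); values invented.
gauges = np.array([[2.0, 1.0], [8.0, 3.0], [5.0, 9.0], [1.0, 7.0]])
rain = np.array([12.0, 35.0, 8.0, 20.0])

# Piecewise-linear interpolation on the Delaunay triangulation of the gauges
# (a TIN-style field); points outside the convex hull evaluate to NaN.
tin = LinearNDInterpolator(gauges, rain)

xg, yg = np.meshgrid(np.linspace(0.0, 10.0, 101), np.linspace(0.0, 10.0, 101))
field = tin(xg, yg)   # gridded rainfall that could force a distributed hydrological model
```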
International Nuclear Information System (INIS)
Schneiders, Jan F G; Sciacchitano, Andrea
2017-01-01
The track benchmarking method (TBM) is proposed for uncertainty quantification of particle tracking velocimetry (PTV) data mapped onto a regular grid. The method provides statistical uncertainty for a velocity time-series and can in addition be used to obtain instantaneous uncertainty at increased computational cost. Interpolation techniques are typically used to map velocity data from scattered PTV (e.g. tomographic PTV and Shake-the-Box) measurements onto a Cartesian grid. Recent examples of these techniques are the FlowFit and VIC+ methods. The TBM approach estimates the random uncertainty in dense velocity fields by performing the velocity interpolation using a subset of typically 95% of the particle tracks and by considering the remaining tracks as an independent benchmarking reference. In addition, also a bias introduced by the interpolation technique is identified. The numerical assessment shows that the approach is accurate when particle trajectories are measured over an extended number of snapshots, typically on the order of 10. When only short particle tracks are available, the TBM estimate overestimates the measurement error. A correction to TBM is proposed and assessed to compensate for this overestimation. The experimental assessment considers the case of a jet flow, processed both by tomographic PIV and by VIC+. The uncertainty obtained by TBM provides a quantitative evaluation of the measurement accuracy and precision and highlights the regions of high error by means of bias and random uncertainty maps. In this way, it is possible to quantify the uncertainty reduction achieved by advanced interpolation algorithms with respect to standard correlation-based tomographic PIV. The use of TBM for uncertainty quantification and comparison of different processing techniques is demonstrated. (paper)
Functions with disconnected spectrum sampling, interpolation, translates
Olevskii, Alexander M
2016-01-01
The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...
Barman, S.; Bhattacharjya, R. K.
2017-12-01
The River Subansiri is the major north bank tributary of river Brahmaputra. It originates from the range of Himalayas beyond the Great Himalayan range at an altitude of approximately 5340m. Subansiri basin extends from tropical to temperate zones and hence exhibits a great diversity in rainfall characteristics. In the Northern and Central Himalayan tracts, precipitation is scarce on account of high altitudes. On the other hand, Southeast part of the Subansiri basin comprising the sub-Himalayan and the plain tract in Arunachal Pradesh and Assam, lies in the tropics. Due to Northeast as well as Southwest monsoon, precipitation occurs in this region in abundant quantities. Particularly, Southwest monsoon causes very heavy precipitation in the entire Subansiri basin during May to October. In this study, the rainfall over Subansiri basin has been studied at 24 different locations by multiple linear and non-linear regression based statistical downscaling techniques and by Artificial Neural Network based model. APHRODITE's gridded rainfall data of 0.25˚ x 0.25˚ resolutions and climatic parameters of HadCM3 GCM of resolution 2.5˚ x 3.75˚ (latitude by longitude) have been used in this study. It has been found that multiple non-linear regression based statistical downscaling technique outperformed the other techniques. Using this method, the future rainfall pattern over the Subansiri basin has been analyzed up to the year 2099 for four different time periods, viz., 2020-39, 2040-59, 2060-79, and 2080-99 at all the 24 locations. On the basis of historical rainfall, the months have been categorized as wet months, months with moderate rainfall and dry months. The spatial changes in rainfall patterns for all these three types of months have also been analyzed over the basin. Potential decrease of rainfall in the wet months and months with moderate rainfall and increase of rainfall in the dry months are observed for the future rainfall pattern of the Subansiri basin.
International Nuclear Information System (INIS)
Portu, A; Carpano, M; Dagrosa, A; Pozzi, E; Thorp, S; Curotto, P; Cabrini, R L; Saint Martin, G
2012-01-01
The Boron Neutron Capture Therapy (BNCT) is a modality for the treatment of cancer, based on the capture reaction 10B(n,α)7Li. The emitted particles have a high linear energy transfer and a short range in tissue (10 μm). Therefore, if boron is selectively accumulated in tumor cells, the damage will be confined to them, preserving normal cells. Thus, knowledge of the location of 10B in the different structures of biological tissue, such as tumor and surrounding tissue, is essential when considering BNCT treatment (Barth et al., 2005). Neutron autoradiography is one of the few methods that allow studying the spatial distribution of emitting elements in a material containing them. In the context of BNCT, the first step in performing autoradiography involves placing a frozen tissue section on a solid state nuclear track detector (SSNTD) (Wittig et al., 2008). For this purpose, tissue samples are fixed in liquid N2 when they are resected after infusion of the boronated compound. The sample-detector arrangement is irradiated with thermal neutrons, and the particles emitted in the capture reaction produce latent damage in the SSNTD. By chemically etching the detector, these latent tracks can be amplified to a level observable by optical microscopy. Thus, the distribution of 10B in biological samples can be evaluated, so this technique is suitable for studying the uptake of boron compounds by the different histological structures. In our laboratory, we have developed neutron autoradiography and applied it to the study of different biological models (Portu et al., 2011a), in particular the micro-distribution of 10B in tumors from a nude mouse model of cutaneous melanoma injected with boronophenylalanine (BPA) (Carpano et al., 2010; Portu et al., 2011b). Even when a support medium such as OCT(TM) is used for cutting the sample, the lack of structure in the necrotic areas of these tumors causes tearing of those regions during the cutting process, which prevents obtaining sections adequate for analysis
Research of Cubic Bezier Curve NC Interpolation Signal Generator
Directory of Open Access Journals (Sweden)
Shijun Ji
2014-08-01
Full Text Available Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear or parabolic interpolation; for the NC machining of parts with complicated surfaces, a mathematical model must be established to generate the curve and surface outlines of the parts, and the generated outlines are then discretized into a large number of straight-line or arc segments for processing, which creates complex programs, a large amount of code, and inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness and machining efficiency. The stepless interpolation of a cubic Bezier curve controlled by an analog signal is studied in this paper: the tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting the control points, and these data are then fed to the control motor, which completes the precise feeding of the Bezier curve. This method extends the trajectory control capability of CNC from simple lines and circular arcs to complex engineered curves, and it provides a new way to machine curved-surface parts economically with high quality and high efficiency.
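A cubic Bezier segment of the kind used for the tool trajectory is fully determined by four control points. The minimal sketch below samples the curve from its Bernstein form with arbitrary illustrative control points; a real NC interpolator would instead generate feed increments in real time.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=200):
    """Sample a cubic Bezier curve from its Bernstein form:
    B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# The two inner control points play the role of the "adjustable" points that shape
# the tool trajectory; the sampled points are only for visual checking here.
ctrl = np.array([[0.0, 0.0], [10.0, 15.0], [25.0, 15.0], [35.0, 0.0]])
path = cubic_bezier(*ctrl)
```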
[An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].
Xu, Yonghong; Gao, Shangce; Hao, Xiaofei
2016-04-01
Diffusion tensor imaging (DTI) is a magnetic resonance imaging technique that has developed rapidly in recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional spectral quaternion interpolation. Firstly, we decomposed the diffusion tensors, with the direction of the tensors represented by a quaternion. Then we revised the size and direction of the tensor separately according to the different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserve the tensor anisotropy at the same time. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
Shape-based interpolation of multidimensional grey-level images
International Nuclear Information System (INIS)
Grevera, G.J.; Udupa, J.K.
1996-01-01
Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
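The binary scheme that this grey-level method generalizes can be sketched directly with a distance transform: build signed distance maps for two binary slices (positive inside, negative outside, following the convention above), interpolate them linearly, and threshold at zero. The toy example below covers only this binary case, not the lifting and collapsing steps of the grey-level extension.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def shape_based_slice(mask_a, mask_b, t):
    """Binary slice interpolated at fractional position t between two slices."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d >= 0

# Toy binary slices: two concentric discs of different radius.
yy, xx = np.mgrid[:64, :64]
a = (xx - 32) ** 2 + (yy - 32) ** 2 < 10 ** 2
b = (xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2
mid = shape_based_slice(a, b, 0.5)   # approximately a disc of radius 15
```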
On Multiple Interpolation Functions of the q-Genocchi Polynomials
Directory of Open Access Journals (Sweden)
Jin Jeong-Hee
2010-01-01
Full Text Available Recently, many mathematicians have studied various kinds of q-analogues of Genocchi numbers and polynomials. In the work "New approach to q-Euler, Genocchi numbers and their interpolation functions" (Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009), Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.
Steady State Stokes Flow Interpolation for Fluid Control
DEFF Research Database (Denmark)
Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert
2012-01-01
Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow — suffer from a common problem. They fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...
C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization
Directory of Open Access Journals (Sweden)
Shengjun Liu
2015-01-01
Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize the given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is shown to be O(h³). Numerical experiments demonstrate that our method can construct nice shape-preserving interpolation curves efficiently.
Chiral properties of baryon interpolating fields
International Nuclear Information System (INIS)
Nagata, Keitaro; Hosaka, Atsushi; Dmitrasinovic, V.
2008-01-01
We study the chiral transformation properties of all possible local (non-derivative) interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We derive and use the relations/identities among the baryon operators with identical quantum numbers that follow from the combined color, Dirac and isospin Fierz transformations. These relations reduce the number of independent baryon operators with any given spin and isospin. The Fierz identities also effectively restrict the allowed baryon chiral multiplets. It turns out that the non-derivative baryons' chiral multiplets have the same dimensionality as their Lorentz representations. For the two independent nucleon operators the only permissible chiral multiplet is the fundamental one, (1/2, 0) + (0, 1/2). For the Δ, admissible Lorentz representations are (1, 1/2) + (1/2, 1) and (3/2, 0) + (0, 3/2). In the case of the (1, 1/2) + (1/2, 1) chiral multiplet, the I(J) = 3/2(3/2) Δ field has one I(J) = 1/2(3/2) chiral partner; otherwise it has none. We also consider the Abelian (U_A(1)) chiral transformation properties of the fields and show that each baryon comes in two varieties: (1) with Abelian axial charge +3; and (2) with Abelian axial charge -1. In case of the nucleon these are the two Ioffe fields; in case of the Δ, the (1, 1/2) + (1/2, 1) multiplet has Abelian axial charge -1 and the (3/2, 0) + (0, 3/2) multiplet has Abelian axial charge +3. (orig.)
Comparison of two fractal interpolation methods
Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo
2017-03-01
As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces by using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface which appears to have peaks of different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface which appears to have dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic, with prominent randomness, which is suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has
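A one-dimensional version of the midpoint displacement construction is easy to sketch: at each subdivision level the midpoint of an interval is set to the average of its endpoints plus a random displacement whose amplitude shrinks by a factor governed by the Hurst exponent. All parameter values below are illustrative.

```python
import numpy as np

def midpoint_displacement(n_levels=10, hurst=0.5, seed=0):
    """1-D fractal profile by recursive midpoint displacement.
    The Hurst exponent H relates to the fractal dimension via D = 2 - H;
    smaller H (larger D) yields a rougher trace."""
    rng = np.random.default_rng(seed)
    profile = np.zeros(2 ** n_levels + 1)
    scale, step = 1.0, len(profile) - 1
    while step > 1:
        half = step // 2
        for i in range(half, len(profile) - 1, step):
            midpoint = 0.5 * (profile[i - half] + profile[i + half])
            profile[i] = midpoint + rng.normal(0.0, scale)
        scale *= 0.5 ** hurst    # shrink the displacement at every subdivision level
        step = half
    return profile

trace = midpoint_displacement(hurst=0.8)   # D = 1.2, a relatively smooth profile
```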
Interpolation Routines Assessment in ALS-Derived Digital Elevation Models for Forestry Applications
Directory of Open Access Journals (Sweden)
Antonio Luis Montealegre
2015-07-01
Full Text Available Airborne Laser Scanning (ALS) is capable of estimating a variety of forest parameters using different metrics extracted from the normalized heights of the point cloud using a Digital Elevation Model (DEM). In this study, six interpolation routines were tested over a range of land cover and terrain roughness in order to generate a collection of DEMs with spatial resolutions of 1 and 2 m. The accuracy of the DEMs was assessed twice, first using a test sample extracted from the ALS point cloud, second using a set of 55 ground control points collected with a high precision Global Positioning System (GPS). The effects of terrain slope, land cover, ground point density and pulse penetration on the interpolation error were examined by stratifying the study area with these variables. In addition, a Classification and Regression Tree (CART) analysis allowed the development of a prediction uncertainty map to identify in which areas DEMs and Airborne Light Detection and Ranging (LiDAR) derived products may be of low quality. The Triangulated Irregular Network (TIN) to raster interpolation method produced the best result in the validation process with the training data set, while the Inverse Distance Weighted (IDW) routine was the best in the validation with GPS (RMSE of 2.68 cm and RMSE of 37.10 cm, respectively).
The Atmospheric Data Acquisition And Interpolation Process For Center-TRACON Automation System
Jardin, M. R.; Erzberger, H.; Denery, Dallas G. (Technical Monitor)
1995-01-01
The Center-TRACON Automation System (CTAS), an advanced new air traffic automation program, requires knowledge of spatial and temporal atmospheric conditions such as the wind speed and direction, the temperature and the pressure in order to accurately predict aircraft trajectories. Real-time atmospheric data is available in a grid format so that CTAS must interpolate between the grid points to estimate the atmospheric parameter values. The atmospheric data grid is generally not in the same coordinate system as that used by CTAS so that coordinate conversions are required. Both the interpolation and coordinate conversion processes can introduce errors into the atmospheric data and reduce interpolation accuracy. More accurate algorithms may be computationally expensive or may require a prohibitively large amount of data storage capacity so that trade-offs must be made between accuracy and the available computational and data storage resources. The atmospheric data acquisition and processing employed by CTAS will be outlined in this report. The effects of atmospheric data processing on CTAS trajectory prediction will also be analyzed, and several examples of the trajectory prediction process will be given.
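In its simplest form, the grid interpolation step described above is a trilinear interpolation between the surrounding grid points, performed after the query location has been converted into the grid's coordinate system. The sketch below uses SciPy's RegularGridInterpolator on an invented wind-speed grid and does not reproduce CTAS's actual data formats or algorithms.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Invented gridded wind-speed field on (latitude, longitude, pressure-level) axes.
lat = np.linspace(30.0, 45.0, 16)
lon = np.linspace(-125.0, -100.0, 26)
lev = np.array([300.0, 500.0, 700.0, 850.0, 1000.0])        # hPa
wind = np.random.default_rng(0).normal(20.0, 5.0, (lat.size, lon.size, lev.size))

# Trilinear interpolation between the surrounding grid points, applied after the
# query location has been converted into the grid's own coordinate system.
interp = RegularGridInterpolator((lat, lon, lev), wind)
sample = interp([[37.4, -112.8, 620.0]])    # wind speed at an off-grid point
```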
National Research Council Canada - National Science Library
Ingel, R
1999-01-01
... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...
Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.
2017-12-01
Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. The unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods resulted in good interpolated rainfall, while NN led to the worst result. In terms of the impact on hydrological prediction, IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method
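Of the four methods compared above, inverse distance weighting is the simplest to sketch: each quality-controlled gauge contributes to a target point with a weight proportional to an inverse power of its distance. The gauge coordinates, rainfall values and power parameter below are illustrative only.

```python
import numpy as np

def idw(xy_obs, val_obs, xy_new, power=2.0, eps=1e-12):
    """Inverse distance weighting: each gauge contributes with weight 1/d**power."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    w = 1.0 / (d ** power + eps)       # eps avoids division by zero at a gauge location
    return (w @ val_obs) / w.sum(axis=1)

# Quality-controlled hourly gauge rainfall (mm); coordinates in km, values invented.
gauges = np.array([[0.0, 0.0], [10.0, 2.0], [4.0, 8.0]])
rain = np.array([5.0, 0.0, 12.0])
cells = np.array([[2.0, 3.0], [8.0, 1.0]])     # model grid-cell centroids
print(idw(gauges, rain, cells))
```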
Rhie-Chow interpolation in strong centrifugal fields
Bogovalov, S. V.; Tronin, I. V.
2015-10-01
Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10^6 g) occurring in gas centrifuges.
Efficient Algorithms and Design for Interpolation Filters in Digital Receiver
Directory of Open Access Journals (Sweden)
Xiaowei Niu
2014-05-01
Full Text Available Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. The polynomial-based interpolation filters can be implemented efficiently by using a modified Farrow structure with an arbitrary frequency response; the filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing the weighted mean squared error function, converted into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maximum (minimax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
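One familiar polynomial-based interpolation filter that fits the Farrow structure is the cubic Lagrange fractional-delay interpolator: each branch filter produces one polynomial coefficient from the input samples, and the output is evaluated in the fractional delay with Horner's rule. The fixed tap matrix below corresponds to that specific cubic Lagrange case and is not the optimized weighted least-squares or minimax design proposed in the paper.

```python
import numpy as np

# Branch-filter taps of a Farrow-structure cubic Lagrange interpolator: row m gives
# the taps producing polynomial coefficient c_m from samples x[n-1], x[n], x[n+1], x[n+2].
FARROW_TAPS = np.array([
    [0.0,      1.0,  0.0,     0.0],     # c0
    [-1/3.0,  -0.5,  1.0,  -1/6.0],     # c1
    [0.5,     -1.0,  0.5,     0.0],     # c2
    [-1/6.0,   0.5, -0.5,   1/6.0],     # c3
])

def farrow_interp(x, n, mu):
    """Interpolate x between samples n and n+1 at fractional delay mu in [0, 1)."""
    frame = x[n - 1:n + 3]
    c = FARROW_TAPS @ frame                                # branch-filter outputs c0..c3
    return ((c[3] * mu + c[2]) * mu + c[1]) * mu + c[0]    # Horner evaluation in mu

t = np.arange(32)
x = np.sin(2 * np.pi * 0.05 * t)
print(farrow_interp(x, 10, 0.4), np.sin(2 * np.pi * 0.05 * 10.4))
```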
[Multimodal medical image registration using cubic spline interpolation method].
He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan
2007-12-01
Based on the characteristics of the PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to realize the interpolation of the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used for reconstruction to restore the missing information between image slices, which can compensate for the shortcomings of previous registration methods, improve the accuracy of the registration, and make the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
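The slice-direction reconstruction step amounts to fitting a cubic spline along the through-plane axis of the coarser volume and evaluating it at the finer slice positions. A minimal SciPy sketch follows; the volume contents, slice spacings and array shapes are invented for illustration.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# PET volume with coarse slice spacing resampled to the CT slice positions along z.
pet = np.random.default_rng(1).random((16, 64, 64))    # (slices, rows, cols), invented
z_pet = np.arange(16) * 4.0                            # assumed 4 mm PET slice spacing
z_ct = np.linspace(0.0, 60.0, 61)                      # assumed 1 mm target spacing

spline = CubicSpline(z_pet, pet, axis=0)   # one cubic spline per (row, col) position
pet_resampled = spline(z_ct)               # shape (61, 64, 64)
```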
Interpolating and sampling sequences in finite Riemann surfaces
Ortega-Cerda, Joaquim
2007-01-01
We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.
Illumination estimation via thin-plate spline interpolation.
Shi, Lilong; Xiong, Weihua; Funt, Brian
2011-05-01
Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
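The interpolation step can be sketched with SciPy's RBFInterpolator and a thin-plate spline kernel: known pairs of image features and illuminant chromaticities define the interpolant, which is then evaluated at the feature of a new image. The two-dimensional features below are invented placeholders; the paper uses image thumbnails as the input space and prunes the training set with incremental k-medians, neither of which is reproduced here.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Training set: a 2-D image feature paired with the known illuminant chromaticity (r, g).
# Both the features and the chromaticities below are invented placeholders.
features = np.array([[0.45, 0.31], [0.52, 0.40], [0.38, 0.35], [0.60, 0.28]])
illum_rg = np.array([[0.33, 0.34], [0.36, 0.33], [0.31, 0.35], [0.40, 0.31]])

# Thin-plate spline interpolation over the non-uniformly sampled feature space.
tps = RBFInterpolator(features, illum_rg, kernel='thin_plate_spline')
estimate = tps(np.array([[0.50, 0.33]]))   # illuminant chromaticity for a new image
```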
Fast image interpolation for motion estimation using graphics hardware
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
3D Medical Image Interpolation Based on Parametric Cubic Convolution
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
In the process of display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods with different sharpness control parameters. Furthermore, we also give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we end with a recommendation for 3D medical image interpolation under different situations.
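The kernel underlying such methods is the Keys cubic convolution kernel, whose free parameter a acts as the sharpness control. The one-dimensional sketch below evaluates the kernel and interpolates a signal from its four nearest samples; a = -0.5 is the classical choice, and applying the kernel separably along the slice axis gives the 3D resampling discussed above.

```python
import numpy as np

def cubic_conv_kernel(s, a=-0.5):
    """Keys cubic convolution kernel; `a` is the sharpness control parameter."""
    s = np.abs(s)
    out = np.zeros_like(s)
    near, far = s <= 1, (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def interp_1d(samples, x, a=-0.5):
    """Interpolate a 1-D signal at fractional position x from its 4 nearest samples."""
    i = int(np.floor(x))
    idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)
    return float(samples[idx] @ cubic_conv_kernel(x - np.arange(i - 1, i + 3), a))

sig = np.array([0.0, 1.0, 4.0, 9.0, 16.0])
print(interp_1d(sig, 2.5))   # 6.25: the kernel with a = -0.5 reproduces quadratics exactly
```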
Interpolation and sampling in spaces of analytic functions
Seip, Kristian
2004-01-01
The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...
Phase Center Interpolation Algorithm for Airborne GPS through the Kalman Filter
Directory of Open Access Journals (Sweden)
Edson A. Mitishita
2005-12-01
Full Text Available Aerial triangulation is a fundamental step in any photogrammetric project. The surveying of traditional control points, depending on the region to be mapped, still has a high cost. The distribution of control points in the block, and their positional quality, directly influence the resulting precision of the aerotriangulation processing. The airborne GPS technique has as its key objectives cost reduction and quality improvement of the ground control in modern photogrammetric projects. Nowadays, in Brazil, the largest photogrammetric companies are acquiring airborne GPS systems, but those systems usually present difficulties in operation because of the high technology involved and the need for specialized human resources. Within the airborne GPS technique, one of the fundamental steps is the interpolation of the position of the phase center of the GPS antenna at the instant of each photo shot. Traditionally, low-degree polynomials are used, but recent studies show that their accuracy is reduced in turbulent flights, which are quite common, mainly in large-scale flights. This paper presents a solution to that problem through an algorithm based on the Kalman filter, which takes into account the dynamic aspect of the problem. At the end of the paper, the results of a comparison between experiments done with the proposed methodology and a common linear interpolator are shown. These results show a significant accuracy gain over the linear interpolation procedure when the Kalman filter is used.
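The dynamic aspect mentioned above can be captured with a constant-velocity Kalman filter run over the antenna position samples, from which the phase-centre position at each exposure instant is predicted. The single-axis sketch below uses invented noise levels and measurements; it illustrates the filtering-plus-prediction idea only and is not the authors' formulation.

```python
import numpy as np

def kalman_track(times, positions, q=0.5, r=0.02):
    """Constant-velocity Kalman filter over one coordinate of the antenna position.
    q and r are assumed process and measurement noise levels (illustrative values)."""
    x = np.array([positions[0], 0.0])             # state: [position, velocity]
    P = np.eye(2)
    H = np.array([[1.0, 0.0]])
    states = []
    for k in range(1, len(times)):
        dt = times[k] - times[k - 1]
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
        x, P = F @ x, F @ P @ F.T + Q                      # predict
        S = H @ P @ H.T + r
        K = P @ H.T / S                                    # Kalman gain
        x = x + (K * (positions[k] - H @ x)).ravel()       # update with the new fix
        P = (np.eye(2) - K @ H) @ P
        states.append((times[k], x.copy()))
    return states

def position_at(states, t_photo):
    """Phase-centre position at the exposure instant, extrapolated from the last
    filtered state before t_photo with the estimated velocity."""
    t0, x0 = max((s for s in states if s[0] <= t_photo), key=lambda s: s[0])
    return x0[0] + x0[1] * (t_photo - t0)

t = np.arange(0.0, 10.0, 1.0)                                       # 1-Hz GPS epochs
z = 1200.0 + 45.0 * t + np.random.default_rng(0).normal(0.0, 0.02, t.size)
print(position_at(kalman_track(t, z), 6.37))                        # photo at t = 6.37 s
```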
The Convergence Acceleration of Two-Dimensional Fourier Interpolation
Directory of Open Access Journals (Sweden)
Anry Nersessian
2008-07-01
Full Text Available The convergence acceleration of two-dimensional trigonometric interpolation of smooth functions on a uniform mesh is considered. Together with theoretical estimates, some numerical results are presented and discussed that reveal the potential of this method for application in image processing. Experiments show that the suggested algorithm accelerates conventional Fourier interpolation even for sparse meshes, which can lead to efficient image compression/decompression algorithms and also to applications in image zooming procedures.
Survey: interpolation methods for whole slide image processing.
Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T
2017-02-01
Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Comparing interpolation schemes in dynamic receive ultrasound beamforming
DEFF Research Database (Denmark)
Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav
2005-01-01
In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as the simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...
DEFF Research Database (Denmark)
Azri, Suhaibah; Ujang, Uznir; Rahman, Alias Abdul
2014-01-01
In the last few years, 3D urban data and its information have increased rapidly due to the growth of urban areas and the urbanization phenomenon. These datasets are then maintained and managed in 3D spatial database systems. However, performance deterioration is likely to happen due to the massiveness of 3D datasets. As a solution, a 3D spatial index structure is used as a booster to increase the performance of data retrieval. In commercial databases, the commonly and widely used index structure for 3D spatial databases is the 3D R-Tree, due to its simplicity and promising method of handling spatial data. However, … 3D geospatial data clustering is used in the construction of the 3D R-Tree and could reduce the overlapping among nodes. The proposed method is tested on a 3D urban dataset for the application of urban infill development. By using several cases of data updating operations such as building...
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Energy Technology Data Exchange (ETDEWEB)
Baak, M., E-mail: max.baak@cern.ch [CERN, CH-1211 Geneva 23 (Switzerland); Gadatsch, S., E-mail: stefan.gadatsch@nikhef.nl [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands); Harrington, R. [School of Physics and Astronomy, University of Edinburgh, Mayfield Road, Edinburgh, EH9 3JZ, Scotland (United Kingdom); Verkerke, W. [Nikhef, PO Box 41882, 1009 DB Amsterdam (Netherlands)
2015-01-21
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
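The linear-combination ingredient of the technique can be sketched by vertically interpolating between the two templates that bracket the requested parameter value. Plain vertical mixing of two shifted distributions produces a bimodal shape, which is exactly what the moment morphing step (translating and scaling each template to match the interpolated moments) corrects; that step is not included in this illustration, and all values below are invented.

```python
import numpy as np

def linear_template_mix(templates, params, alpha):
    """Piecewise-linear (vertical) interpolation between the two templates that
    bracket the requested parameter value alpha."""
    params = np.asarray(params)
    i = np.clip(np.searchsorted(params, alpha) - 1, 0, len(params) - 2)
    w = (alpha - params[i]) / (params[i + 1] - params[i])
    return (1.0 - w) * templates[i] + w * templates[i + 1]

# Templates of an observable generated at three fixed values of a model parameter.
bins = np.linspace(-5.0, 5.0, 41)
centers = 0.5 * (bins[:-1] + bins[1:])
params = [0.0, 1.0, 2.0]
templates = np.array([np.exp(-0.5 * (centers - m) ** 2) for m in params])
morphed = linear_template_mix(templates, params, alpha=0.4)
```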
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
International Nuclear Information System (INIS)
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2014-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Directory of Open Access Journals (Sweden)
Elmira Ashpazzadeh
2018-04-01
Full Text Available A numerical technique based on the Hermite interpolant multiscaling functions is presented for the solution of convection-diffusion equations. The operational matrices of derivative, integration and product are presented for the multiscaling functions and are utilized to reduce the solution of the linear convection-diffusion equation to the solution of algebraic equations. Because of the sparsity of these matrices, this method is computationally very attractive and reduces CPU time and computer memory. Illustrative examples are included to demonstrate the validity and applicability of the new technique.
Ferreira, M. C.; Ferreira, M. F. M.
2016-06-01
Leptospirosis is a zoonosis caused by bacteria of the genus Leptospira. Rodents, especially Rattus norvegicus, are the most frequent hosts of this microorganism in cities. Human transmission occurs by contact with urine, blood or tissues of the rodent and by contact with water or mud contaminated by rodent urine. Spatial patterns of concentration of leptospirosis are related to multiple environmental and socioeconomic factors, such as housing near flooding areas, domestic garbage disposal sites and high densities of people living in slums located near river channels. We used geospatial techniques and a geographical information system (GIS) to analyse the spatial relationship between the distribution of leptospirosis cases and distance from rivers, river density in the census sector and terrain slope, in Sao Paulo County, Brazil. To test this methodology we used a sample of 183 geocoded leptospirosis cases confirmed in 2007, ASTER GDEM2 data, and hydrography and census sector shapefiles. Our results showed that GIS and geospatial analysis techniques improved the mapping of the disease and permitted the identification of the spatial pattern of association between the location of cases and the spatial distribution of the environmental variables analyzed. This study also showed that leptospirosis cases may be more related to census sectors located in areas of higher river density and to households situated at shorter distances from rivers. On the other hand, it was not possible to assert that terrain slope contributes significantly to the location of leptospirosis cases.
Effect of the precipitation interpolation method on the performance of a snowmelt runoff model
Jacquin, Alexandra
2014-05-01
Uncertainties on the spatial distribution of precipitation seriously affect the reliability of the discharge estimates produced by watershed models. Although there is abundant research evaluating the goodness of fit of precipitation estimates obtained with different gauge interpolation methods, few studies have focused on the influence of the interpolation strategy on the response of watershed models. The relevance of this choice may be even greater in the case of mountain catchments, because of the influence of orography on precipitation. This study evaluates the effect of the precipitation interpolation method on the performance of conceptual type snowmelt runoff models. The HBV Light model version 4.0.0.2, operating at daily time steps, is used as a case study. The model is applied in the Aconcagua at Chacabuquito catchment, located in the Andes Mountains of Central Chile. The catchment's area is 2110[km2] and elevation ranges from 950[m.a.s.l.] to 5930[m.a.s.l.]. The local meteorological network is sparse, with all precipitation gauges located below 3000[m.a.s.l.]. Precipitation amounts corresponding to different elevation zones are estimated through areal averaging of precipitation fields interpolated from gauge data. Interpolation methods applied include kriging with external drift (KED), optimal interpolation method (OIM), Thiessen polygons (TP), multiquadratic functions fitting (MFF) and inverse distance weighting (IDW). Both KED and OIM are able to account for the existence of a spatial trend in the expectation of precipitation. By contrast, TP, MFF and IDW, traditional methods widely used in engineering hydrology, cannot explicitly incorporate this information. Preliminary analysis confirmed that these methods notably underestimate precipitation in the study catchment, while KED and OIM are able to reduce the bias; this analysis also revealed that OIM provides more reliable estimations than KED in this region. Using input precipitation obtained by each method
Resolution enhancement in integral microscopy by physical interpolation.
Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel
2015-08-01
Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of the spatial resolution. In this contribution we report a technique which permits increasing the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate which shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens.
Peuquet, Donna J.
1987-01-01
A new approach to building geographic data models that is based on the fundamental characteristics of the data is presented. An overall theoretical framework for representing geographic data is proposed. An example of utilizing this framework in a Geographic Information System (GIS) context by combining artificial intelligence techniques with recent developments in spatial data processing techniques is given. Elements of data representation discussed include hierarchical structure, separation of locational and conceptual views, and the ability to store knowledge at variable levels of completeness and precision.
Directory of Open Access Journals (Sweden)
John H.R. Burns
2016-12-01
Full Text Available Ten annotated 3D reconstructions of Montipora capitata coral colonies contain x,y,z coordinates for all growth anomaly (GA) lesions affecting these corals. The 3D reconstructions are available as Virtual Reality Modeling Language (VRML) files, and the GA lesion coordinates are in accompanying text files. The VRML models and GA lesion coordinates can be spatially analyzed using Matlab. Matlab scripts are provided for three spatial statistical procedures in order to assess clustering of the GA lesions across the coral colony surfaces in a 3D framework: Ripley's K, Moran's I, and the Kolmogorov–Smirnov test. Please see the research article, “Investigating the spatial distribution of Growth Anomalies affecting Montipora capitata corals in a 3-dimensional framework” (J.H.R. Burns, T. Alexandrov, E. Ovchinnikova, R.D. Gates, M. Takabayashi, 2016 [1]), for further interpretation and discussion of the data.
Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light
International Nuclear Information System (INIS)
Fan, J Y; Wang, F G; W, Y; Zhang, Y L
2006-01-01
Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data: one is interpolation of the data on the stripes, and the other is interpolation of the data between the stripes. A trend interpolation method is applied to the data on the stripes, and a Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove the regional interpolation method feasible and practical, and show that it also improves speed and precision.
International Nuclear Information System (INIS)
Reyes Lopez, Y.; Yervilla Herrera, H.; Viamontes Esquivel, A.; Recarey Morfa, C. A.
2009-01-01
In this paper we develop a new method to interpolate large volumes of scattered data, focused mainly on the results produced by the application of mesh-free, point and particle methods. In it, we use local radial basis functions as interpolating functions. We also use an octree as the data structure that accelerates the localization of the data that influence the interpolated value at a new point, speeding up the application of scientific visualization techniques that generate images from the large data volumes resulting from the application of mesh-free, point and particle methods to the solution of diverse physical-mathematical models. As an example, the results obtained after applying this method with the local interpolation functions of Shepard are shown. (Author) 22 refs
Zhu, Mengchen; Salcudean, Septimiu E
2011-07-01
In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
Hosseini, Seyedeh Sona
The solar system presents a challenge to spectroscopic observers, because it is an astrophysically low energy environment populated with often angularly extended targets (e.g., interplanetary medium, comets, planetary upper atmospheres, and planet and satellite near space environments). Spectroscopy is a proven tool for determining compositional and other properties of remote objects. Narrow band imaging and low resolving power spectroscopic measurements provide information about composition, photochemical evolution, energy distribution and density. The extension to high resolving power provides further access to temperature, velocity, isotopic ratios, separation of blended sources, and opacity effects. The drawback of high-resolution spectroscopy comes from the instrumental limitations of lower throughput, the necessity of small entrance apertures, sensitivity, field of view, and large physical instrumental size. These limitations quickly become definitive for faint and/or extended targets and for spacecraft encounters. An emerging technique with promise for the study of faint, extended sources at high resolving power is the all-reflective form of the Spatial Heterodyne Spectrometer (SHS). SHS instruments are compact and naturally possess both high etendue and high resolving power. To achieve similar spectral grasp, grating spectrometers require big telescopes. SHS is a common-path Fourier transform interferometer that produces a Fizeau fringe pattern for all wavelengths other than the tuned wavelength. Compared to similar Fourier transform spectrometers (FTS), SHS has considerably relaxed optical tolerances that make it easier to use in the visible and UV spectral ranges. The large etendue of SHS instruments makes them ideal for observations of extended, low surface brightness, isolated emission line sources, while their intrinsically high spectral resolution enables the study of the dynamical and spectral characteristics described above. SHS also combines very
Sparse representation based image interpolation with nonlocal autoregressive modeling.
Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming
2013-04-01
Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.
Reducing Interpolation Artifacts for Mutual Information Based Image Registration
Soleimani, H.; Khosravifard, M.A.
2011-01-01
Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels that take part in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
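To make the role of the interpolation kernel in the joint histogram concrete, the sketch below builds a joint histogram with partial-volume (PV) weighting and computes mutual information from it. It is a minimal NumPy illustration of the generic PV idea, not the authors' method; the image pair, translation, and bin count are hypothetical.

```python
import numpy as np

def joint_histogram_pv(ref, mov, tx, ty, bins=32):
    """Joint histogram of `ref` and `mov` under a translation (tx, ty).
    Partial-volume weighting: each sample spreads its histogram vote over
    the four neighbours of the transformed coordinate instead of
    interpolating a new intensity value."""
    h = np.zeros((bins, bins))
    H, W = ref.shape
    sr = (bins - 1) / ref.max()
    sm = (bins - 1) / mov.max()
    for y in range(H):
        for x in range(W):
            xm, ym = x + tx, y + ty
            x0, y0 = int(np.floor(xm)), int(np.floor(ym))
            fx, fy = xm - x0, ym - y0
            if not (0 <= x0 < W - 1 and 0 <= y0 < H - 1):
                continue
            r = int(ref[y, x] * sr)
            for dx, dy, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                              (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
                h[r, int(mov[y0 + dy, x0 + dx] * sm)] += w
    return h

def mutual_information(h):
    p = h / h.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

Plotting `mutual_information` against a sweep of sub-pixel translations reproduces the characteristic grid-locked artifact pattern discussed in the abstract.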
Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.
Pansky, Ainat; Tenenboim, Einat
2011-01-01
In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.
Interpolation-free scanning and sampling scheme for tomographic reconstructions
International Nuclear Information System (INIS)
Donohue, K.D.; Saniie, J.
1987-01-01
In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
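The core of the idea, a posterior mean that interpolates the samples and a posterior variance that grows between them, can be written in a few lines of NumPy. The sketch below is a generic 1D Gaussian-process interpolator with a squared-exponential kernel, not the paper's registration pipeline; the kernel parameters and test signal are arbitrary.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, sigma=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sigma**2 * np.exp(-0.5 * d2 / length**2)

def gp_interpolate(x_obs, y_obs, x_query, noise=1e-6, length=1.0):
    """GP posterior mean and per-point standard deviation at x_query.
    The standard deviation is largest between the base-grid samples,
    i.e. it quantifies the interpolation uncertainty."""
    K = rbf_kernel(x_obs, x_obs, length) + noise * np.eye(len(x_obs))
    Ks = rbf_kernel(x_query, x_obs, length)
    Kss = rbf_kernel(x_query, x_query, length)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

# an image row sampled on an integer grid, queried at off-grid positions
x = np.arange(0, 20, dtype=float)
y = np.sin(0.5 * x)
xq = np.linspace(0, 19, 200)
mu, sd = gp_interpolate(x, y, xq)   # sd peaks midway between base-grid points
```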
Image interpolation used in three-dimensional range data compression.
Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian
2016-05-20
Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
Importance of interpolation and coincidence errors in data fusion
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
An adaptive interpolation scheme for molecular potential energy surfaces
Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa
2016-08-01
The calculation of potential energy surfaces for quantum dynamics can be a time-consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
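A rough illustration of the flavour of such a scheme, using SciPy's thin-plate-spline RBF interpolator and a crude surrogate error estimate to decide where to add nodes, is given below. It is not the authors' algorithm (their local error estimate and partition-of-unity machinery are more sophisticated); the model function and refinement parameters are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                       # stand-in for an expensive PES evaluation
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(0)
nodes = rng.uniform(-1, 1, (20, 2))          # initial coarse node set
values = f(nodes)

for _ in range(6):
    interp = RBFInterpolator(nodes, values, kernel='thin_plate_spline')
    cand = rng.uniform(-1, 1, (200, 2))      # candidate refinement points
    # crude surrogate error estimate: compare against an interpolant built
    # from a random half of the nodes
    half = rng.permutation(len(nodes))[: len(nodes) // 2]
    coarse = RBFInterpolator(nodes[half], values[half], kernel='thin_plate_spline')
    err = np.abs(interp(cand) - coarse(cand))
    worst = cand[np.argsort(err)[-5:]]       # refine where the estimate is largest
    nodes = np.vstack([nodes, worst])
    values = np.concatenate([values, f(worst)])
```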
Oversampling of digitized images. [effects on interpolation in signal processing
Fischel, D.
1976-01-01
Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
Scientific data interpolation with low dimensional manifold model
Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley
2018-01-01
We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)
Directory of Open Access Journals (Sweden)
Maria Cristina Floreno
1996-05-01
Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is at a preliminary stage. What has already been implemented consists of a set of routines for elementary operations, optimized function evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with a graphical user interface allowing the use of such routines.
Scientific data interpolation with low dimensional manifold model
International Nuclear Information System (INIS)
Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.
2017-01-01
Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-06-28
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low-light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, notably higher than those of conventional CFAs in low-light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method.
Directory of Open Access Journals (Sweden)
S. A. Voronov
2015-01-01
Full Text Available The article presents a literature review on the simulation of grinding processes. It takes into consideration the statistical, energy-based, and imitation approaches to the simulation of grinding forces. The main stages of interaction between abrasive grains and the machined surface are shown. The article describes the main approaches to modeling the geometry of the new surfaces formed during grinding. A review of approaches to the numerical modeling of chip formation and pile-up effects is given. Advantages and disadvantages of modeling grain-to-surface interaction by means of the finite element method and the molecular dynamics method are considered. The article points out that it is necessary to take into consideration the system dynamics and its effect on the finished surface. The structure of a complex imitation model of grinding process dynamics for flexible work-pieces with spatial surface geometry is proposed from the literature review. The proposed model of spatial grinding includes a model of work-piece dynamics, a model of grinding wheel dynamics, and a phenomenological model of grinding forces based on a 3D geometry modeling algorithm. The model gives the following results for the spatial grinding process: vibrations of the machined part and grinding wheel, machined surface geometry, static deflection of the surface, and grinding forces under various cutting conditions.
International Nuclear Information System (INIS)
Deupree, R.G.
1977-01-01
Finite difference techniques were used to examine the coupling of radial pulsation and convection in stellar models having comparable time scales. Numerical procedures are emphasized, including diagnostics to help determine the range of free parameters
Khalil, Zahid
2016-07-01
Decision making about identifying suitable sites for any project by considering different parameters is difficult. Using GIS and Multi-Criteria Analysis (MCA) can make such projects easier. This technology has proved to be an efficient and adequate approach to acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. The GIS software is used to create all the spatial parameters for the analysis. The parameters derived are slope, drainage density, rainfall, land use / land cover, soil groups, Curve Number (CN) and runoff index with a spatial resolution of 30 m. The data used for deriving the above layers include the 30 m resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centre of Environment Prediction (NCEP) and soil data from the World Harmonized Soil Data (WHSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, drainage network and watershed are delineated by terrain processing of the DEM. The Soil Conservation Services (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is used as the MCA method for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in a GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate and less suitable. This study contributes to decision making about site-suitability analysis for small dams using geo-spatial data with a minimal amount of ground data. These suitability maps can be helpful for water resource
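The AHP step mentioned above reduces to an eigenvector computation on the pairwise comparison matrix. The sketch below shows the usual calculation for a hypothetical four-criterion matrix; the comparison values are illustrative only, not those used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four criteria,
# e.g. slope, drainage density, runoff index, land cover.
A = np.array([[1,   3,   5,   7],
              [1/3, 1,   3,   5],
              [1/5, 1/3, 1,   3],
              [1/7, 1/5, 1/3, 1]], dtype=float)

eigval, eigvec = np.linalg.eig(A)
k = np.argmax(eigval.real)
weights = eigvec[:, k].real
weights /= weights.sum()                  # AHP criterion weights, sum to 1

n = A.shape[0]
ci = (eigval[k].real - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
cr = ci / ri                              # consistency ratio, should be < 0.1
```

The resulting weights would then be fed to the weighted-overlay step in the GIS environment.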
Gas proportional detectors with interpolating cathode pad readout for high track multiplicities
International Nuclear Information System (INIS)
Yu, Bo.
1991-12-01
New techniques for position encoding in very high rate particle and photon detectors will be required in experiments planned for future particle accelerators such as the Superconducting Super Collider and new, high intensity, synchrotron sources. Studies of two interpolating cathode ''pad'' readout systems are described in this thesis. They are well suited for high multiplicity, two dimensional unambiguous position sensitive detection of minimum ionizing particles and heavy ions as well as detection of x-rays at high counting rates. One of the readout systems uses subdivided rows of pads interconnected by resistive strips as the cathode of a multiwire proportional chamber (MWPC). A position resolution of less than 100 μm rms, for 5.4 keV x-rays, and differential non-linearity of 12% have been achieved. Low mass (∼0.6% of a radiation length) detector construction techniques have been developed. The second readout system uses rows of chevron shaped cathode pads to perform geometrical charge division. Position resolution (FWHM) of about 1% of the readout spacing and differential non-linearity of 10% for 5.4 keV x-rays have been achieved. A review of other interpolating methods is included. Low mass cathode construction techniques are described. In conclusion, applications and future developments are discussed. 54 refs
Radar rainfall image repair techniques
Directory of Open Access Journals (Sweden)
Stephen M. Wesson
2004-01-01
Full Text Available There are various quality problems associated with radar rainfall data viewed in images, including ground clutter, beam blocking and anomalous propagation, to name a few. To obtain the best rainfall estimate possible, techniques for removing ground clutter (non-meteorological echoes that influence radar data quality) on 2-D radar rainfall image data sets are presented here. These techniques concentrate on repairing the images in both a computationally fast and accurate manner, and are nearest neighbour techniques of two sub-types: Individual Target and Border Tracing. The contaminated data are estimated through Kriging, considered the optimal technique for the spatial interpolation of Gaussian data, where the 'screening effect' that occurs with the Kriging weighting distribution around target points is exploited to ensure computational efficiency. Matrix rank reduction techniques in combination with Singular Value Decomposition (SVD) are also suggested for finding an efficient solution to the Kriging equations which can cope with near-singular systems. Rainfall estimation at ground level from radar rainfall volume scan data is of interest and importance in earth-bound applications such as hydrology and agriculture. As an extension of the above, Ordinary Kriging is applied to three-dimensional radar rainfall data to estimate rainfall rate at ground level. Keywords: ground clutter, data infilling, Ordinary Kriging, nearest neighbours, Singular Value Decomposition, border tracing, computation time, ground level rainfall estimation
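As a minimal illustration of the infilling step, the sketch below solves the Ordinary Kriging system directly for scattered target points, assuming an exponential variogram with made-up parameters; the paper's implementation additionally exploits the screening effect and SVD-based rank reduction, which are omitted here.

```python
import numpy as np

def variogram(h, sill=1.0, rng_=50.0, nugget=0.0):
    """Exponential variogram model (assumed here, not fitted to radar data)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_))

def ordinary_kriging(xy_obs, z_obs, xy_tgt):
    """Estimate values at xy_tgt from scattered observations by Ordinary Kriging."""
    n = len(xy_obs)
    d_oo = np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d_oo)
    A[n, n] = 0.0
    out = np.empty(len(xy_tgt))
    for i, p in enumerate(xy_tgt):
        d = np.linalg.norm(xy_obs - p, axis=1)
        b = np.append(variogram(d), 1.0)
        w = np.linalg.solve(A, b)          # kriging weights + Lagrange multiplier
        out[i] = w[:n] @ z_obs
    return out

# usage: fill pixels flagged as clutter from the surrounding clean pixels
# z_filled = ordinary_kriging(xy_clean, z_clean, xy_clutter)
```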
Directory of Open Access Journals (Sweden)
Prus Barbara
2016-12-01
Full Text Available The aim of the paper is to propose methodical solutions concerning synthetic analysis of agricultural production space, which consists in a combined (synthetic, in spatial and statistical contexts) analysis and evaluation of the quality and farming utility of soils in connection with the level of soil erosion risk. The paper aims to present a methodology useful in such analyses and to demonstrate to what extent the areas of farming production space subject to restrictive protection are exposed to the destructive effect of surface water erosion. An original factor (HDSP.E) was suggested, which is a high-degree synthesis of soil protection in connection with degrees of surface water erosion risk. The proposed methodology was used for detailed spatial analyses performed for Tomice, a rural commune in Małopolska (case study). The terrain model elaborated for the proposed methodology, combined with the mechanical composition of the soils, allowed a model of surface water erosion to be built on a five-grade scale. Synthetic evaluation (the product of spatial objects on numerous thematic layers) of the quality and farming utility of soils, and of the zones of surface water erosion risk, allowed the spatial distribution of the HDSP.E factor (high degree of soil protection combined with erosion) to be assigned. The analyses made it possible to determine the proportional contribution of the most valuable resources of farming production space that are subject to the negative phenomenon of soil erosion. Geoprocessing techniques were applied for the analyses of environmental elements of farming production space. The analysis of the spatial distribution of the researched phenomena was elaborated in the Quantum GIS programme.
Biased motion vector interpolation for reduced video artifacts.
2011-01-01
In a video processing system where motion vectors are estimated for a subset of the blocks of data forming a video frame, and motion vectors are interpolated for the remainder of the blocks of the frame, a method includes determining, for at least one block of the current frame for which a
Hybrid vehicle optimal control : Linear interpolation and singular control
Delprat, S.; Hofman, T.
2015-01-01
Hybrid vehicle energy management can be formulated as an optimal control problem. Considering that the fuel consumption is often computed using linear interpolation over lookup table data, a rigorous analysis of the necessary conditions provided by the Pontryagin Minimum Principle is conducted. For
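A minimal sketch of the kind of lookup-table interpolation referred to above: fuel rate as a piecewise-linear function of engine power, using NumPy's interp. The table values are invented for illustration.

```python
import numpy as np

# Hypothetical lookup table: engine power (kW) vs. fuel rate (g/s).
power_grid = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0])
fuel_grid  = np.array([0.15, 0.55, 1.00, 1.95, 3.10, 4.40])

def fuel_rate(power_kw):
    # Piecewise-linear interpolation over the map, the situation analysed
    # in the paper; between grid points the model is affine in power.
    return np.interp(power_kw, power_grid, fuel_grid)

print(fuel_rate(33.0))   # a linear blend of the 20 kW and 40 kW entries
```

Because the interpolated map is affine between grid points, the optimal-control analysis has to handle intervals where the usual minimization of the Hamiltonian does not single out a unique control, which is where singular control enters.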
Fast interpolation for Global Positioning System (GPS) satellite orbits
Clynch, James R.; Sagovac, Christopher Patrick; Danielson, D. A. (Donald A.); Neta, Beny
1995-01-01
In this report, we discuss and compare several methods for polynomial interpolation of Global Positioning Systems ephemeris data. We show that the use of difference tables is more efficient than the method currently in use to construct and evaluate the Lagrange polynomials.
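A sketch of the difference-table approach is below: Newton divided differences are built once per data window and then evaluated cheaply, which is the efficiency argument made over repeatedly re-forming Lagrange basis polynomials. The epoch spacing and the toy coordinate model are hypothetical, not real ephemeris data.

```python
import numpy as np

def newton_coeffs(t, x):
    """Divided-difference coefficients; building the table costs O(n^2) once,
    after which each evaluation is O(n)."""
    c = np.array(x, dtype=float)
    for j in range(1, len(t)):
        c[j:] = (c[j:] - c[j - 1:-1]) / (t[j:] - t[:-j])
    return c

def newton_eval(t, c, tq):
    """Evaluate the Newton-form interpolating polynomial at tq (nested form)."""
    p = np.full_like(np.asarray(tq, dtype=float), c[-1])
    for k in range(len(c) - 2, -1, -1):
        p = p * (tq - t[k]) + c[k]
    return p

# e.g. 15-minute ephemeris epochs (s) and one toy coordinate (km), both made up
t = np.arange(9) * 900.0
x = 26560.0 * np.cos(2 * np.pi * t / 43082.0)
c = newton_coeffs(t, x)
x_interp = newton_eval(t, c, 1234.0)   # position estimate between epochs
```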
Interpolation in computing science : the semantics of modularization
Renardel de Lavalette, Gerard R.
2008-01-01
The Interpolation Theorem, first formulated and proved by W. Craig fifty years ago for predicate logic, has been extended to many other logical frameworks and is being applied in several areas of computer science. We give a short overview, and focus on the theory of software systems and modules. An
LIP: The Livermore Interpolation Package, Version 1.6
Energy Technology Data Exchange (ETDEWEB)
Fritsch, F. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-01-04
This report describes LIP, the Livermore Interpolation Package. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since it is a general-purpose package that need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature).
Interpolation decoding method with variable parameters for fractal image compression
International Nuclear Information System (INIS)
He Chuanjiang; Li Gaoping; Shen Xiaona
2007-01-01
The interpolation fractal decoding method, introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of the iterations in conventional fractal decoding; hence the constant parameter of the interpolation decoding method must be set to a small value in order to achieve good progressive decoding. However, this requires an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at some chosen iteration). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal.
Functional Commutant Lifting and Interpolation on Generalized Analytic Polyhedra
Czech Academy of Sciences Publication Activity Database
Ambrozie, Calin-Grigore
2008-01-01
Roč. 34, č. 2 (2008), s. 519-543 ISSN 0362-1588 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : intertwining lifting * interpolation * analytic functions Subject RIV: BA - General Mathematics Impact factor: 0.327, year: 2008
Interpolation solution of the single-impurity Anderson model
International Nuclear Information System (INIS)
Kuzemsky, A.L.
1990-10-01
The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong and weak correlation limits, is obtained. The unified concept of relevant mean-field renormalizations is indispensable in the strong correlation limit. (author). 21 refs
Two-dimensional interpolation with experimental data smoothing
International Nuclear Information System (INIS)
Trejbal, Z.
1989-01-01
A method of two-dimensional interpolation with smoothing of statistically deflected points is developed for the processing of magnetic field measurements at the U-120M cyclotron. The mathematical statement of the initial requirements and the final result of the relevant algebraic transformations are given. 3 refs
Recent developments in free-viewpoint interpolation for 3DTV
Zinger, S.; Do, Q.L.; With, de P.H.N.
2012-01-01
Current development of 3D technologies brings 3DTV within reach for the customers. We discuss in this article the recent advancements in free-viewpoint interpolation for 3D video. This technology is still a research topic and many efforts are dedicated to creation, evaluation and improvement of new
A temporal interpolation approach for dynamic reconstruction in perfusion CT
International Nuclear Information System (INIS)
Montes, Pau; Lauritsch, Guenter
2007-01-01
This article presents a dynamic CT reconstruction algorithm for objects with time dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain quality in the reconstruction than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slow rotating scanners for perfusion imaging purposes
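A minimal sketch of the temporal-interpolation idea: each projection ray is sampled once per gantry rotation, and a spline through those samples supplies the ray's value at any requested reconstruction time. The rotation period, number of rotations, and perfusion-like test curve are invented; the paper uses polynomial splines within a full reconstruction framework.

```python
import numpy as np
from scipy.interpolate import CubicSpline

T_ROT = 0.5                       # gantry rotation period (s), assumed
N_ROT = 12                        # number of rotations acquired, assumed

def projection_at_time(p_samples, t0, t_query):
    """p_samples: values of one projection ray, measured once per rotation
    at times t0 + k*T_ROT. Returns the spline-interpolated value at t_query,
    which would then feed a standard (static) reconstruction for that frame."""
    t_k = t0 + np.arange(N_ROT) * T_ROT
    return CubicSpline(t_k, p_samples)(t_query)

# a ray whose attenuation follows a slow perfusion-like curve
t0 = 0.12
truth = lambda t: 1.0 + 0.3 * np.exp(-((t - 3.0) / 1.2) ** 2)
samples = truth(t0 + np.arange(N_ROT) * T_ROT)
print(projection_at_time(samples, t0, 2.37))   # value between two rotations
```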
Limiting reiteration for real interpolation with slowly varying functions
Czech Academy of Sciences Publication Activity Database
Gogatishvili, Amiran; Opic, Bohumír; Trebels, W.
2005-01-01
Roč. 278, 1-2 (2005), s. 86-107 ISSN 0025-584X R&D Projects: GA ČR(CZ) GA201/01/0333 Institutional research plan: CEZ:AV0Z10190503 Keywords : real interpolation * K-functional * limiting reiteration Subject RIV: BA - General Mathematics Impact factor: 0.465, year: 2005
Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation
Gordon, Sheldon P.; Yang, Yajun
2017-01-01
This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
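A small numerical companion to this idea: interpolate e^x at a few nodes, then examine the error function on a fine grid. The node placement and polynomial degree are arbitrary choices for illustration.

```python
import numpy as np
from scipy.interpolate import lagrange

nodes = np.linspace(0.0, 1.0, 4)           # four nodes -> cubic interpolant
poly = lagrange(nodes, np.exp(nodes))      # interpolating polynomial for e^x

x = np.linspace(0.0, 1.0, 1001)
err = np.exp(x) - poly(x)                  # error function E(x) = e^x - p(x)
print(np.max(np.abs(err)))                 # maximum absolute error on [0, 1]
```

Moving the nodes (for example to Chebyshev points) changes the shape and size of the error noticeably, which is easy to explore with the same few lines.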
Blind Authentication Using Periodic Properties of Interpolation
Czech Academy of Sciences Publication Activity Database
Mahdian, Babak; Saic, Stanislav
2008-01-01
Roč. 3, č. 3 (2008), s. 529-538 ISSN 1556-6013 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : image forensics * digital forgery * image tampering * interpolation detection * resampling detection Subject RIV: IN - Informatics, Computer Science Impact factor: 2.230, year: 2008
Interpolation Inequalities and Spectral Estimates for Magnetic Operators
Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael
2018-05-01
We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.
Energy Technology Data Exchange (ETDEWEB)
Salabert, David; Leibacher, John W [National Solar Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Appourchaux, Thierry [Institut d' Astrophysique Spatiale, CNRS-Universite Paris XI UMR 8617, 91405 Orsay Cedex (France)], E-mail: dsalabert@nso.edu
2008-10-15
In order to take full advantage of the long time series collected by the GONG and MDI helioseismic projects, we present here an adaptation of the rotation-corrected m-averaged spectrum technique in order to observe low radial-order solar p modes. Modeled profiles of the solar rotation demonstrated the potential advantage of such a technique. Here we develop a new analysis procedure which finds the best estimates of the shift of each m of a given (n, ℓ) multiplet, commonly expressed as an expansion in a set of orthogonal polynomials, which yield the narrowest mode in the m-averaged spectrum. We apply the technique to the GONG data for modes with 1 ≤ ℓ ≤ 25 and show that it allows us to measure lower-frequency modes than with classic peak-fitting analysis of the individual-m spectra.
Contouring a guide to the analysis and display of spatial data
Watson, Debbie
2013-01-01
This unique book is the key to computer contouring, exploring in detail the practice and principles using a personal computer. Contouring allows a three-dimensional view in two dimensions and is a fundamental technique for representing spatial data. All aspects of this type of representation are covered, including data preparation, selecting contour intervals, interpolation and gridding, computing volumes, and output and display. Formulated for both the novice and the experienced user, this book initially conducts the reader through a step-by-step explanation of PC software and its application to per
Kamali, Arash; Zhang, Caroline C; Riascos, Roy F; Tandon, Nitin; Bonafante-Mejia, Eliana E; Patel, Rajan; Lincoln, John A; Rabiei, Pejman; Ocasio, Laura; Younes, Kyan; Hasan, Khader M
2018-03-27
The mammillary bodies as part of the hypothalamic nuclei are in the central limbic circuitry of the human brain. The mammillary bodies are shown to be directly or indirectly connected to the amygdala, hippocampus, and thalami as the major gray matter structures of the human limbic system. Although it is not primarily considered as part of the human limbic system, the thalamus is shown to be involved in many limbic functions of the human brain. The major direct connection of the thalami with the hypothalamic nuclei is known to be through the mammillothalamic tract. Given the crucial role of the mammillothalamic tracts in memory functions, diffusion tensor imaging may be helpful in better visualizing the surgical anatomy of this pathway noninvasively. This study aimed to investigate the utility of high spatial resolution diffusion tensor tractography for mapping the trajectory of the mammillothalamic tract in the human brain. Fifteen healthy adults were studied after obtaining written informed consent. We used high spatial resolution diffusion tensor imaging data at 3.0 T. We delineated, for the first time, the detailed trajectory of the mammillothalamic tract of the human brain using deterministic diffusion tensor tractography.
Directory of Open Access Journals (Sweden)
Pang Fubin
2015-09-01
Full Text Available In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. In addition, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and computational load of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications for electronic transformers.
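The harmonic part of that comparison is straightforward to reproduce. The sketch below resamples a sampled harmonic with linear, quadratic and cubic interpolation at several sampling rates and reports the RMS error; the rates, harmonic order and test signal are placeholders, not the paper's test cases.

```python
import numpy as np
from scipy.interpolate import interp1d

F0, FS_LIST, HARMONIC = 50.0, (800.0, 1600.0, 3200.0), 5   # assumed values

for fs in FS_LIST:
    t = np.arange(0, 0.2, 1.0 / fs)                 # sampled merging-unit stream
    x = np.sin(2 * np.pi * HARMONIC * F0 * t)
    tq = np.arange(0, 0.19, 1.0 / 10000.0)          # common resampling instants
    ref = np.sin(2 * np.pi * HARMONIC * F0 * tq)
    for kind in ("linear", "quadratic", "cubic"):
        err = interp1d(t, x, kind=kind)(tq) - ref
        rms = np.sqrt(np.mean(err ** 2))
        print(f"fs={fs:>6.0f}  {kind:<9}  rms error={rms:.2e}")
```

Higher-order interpolation and higher sampling rates both shrink the harmonic error, at the cost of more computation per resampled point, which is the trade-off the paper quantifies.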
Mahmoudzadeh, Amir Pasha; Kashou, Nasser H
2013-01-01
Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.
Directory of Open Access Journals (Sweden)
Amir Pasha Mahmoudzadeh
2013-01-01
Full Text Available Interpolation has become a default operation in image processing and medical imaging and is one of the important factors in the success of an intensity-based registration method. Interpolation is needed if the fractional unit of motion is not matched and located on the high resolution (HR) grid. The purpose of this work is to present a systematic evaluation of eight standard interpolation techniques (trilinear, nearest neighbor, cubic Lagrangian, quintic Lagrangian, heptic Lagrangian, windowed Sinc, B-spline 3rd order, and B-spline 4th order) and to compare the effect of cost functions (least squares (LS), normalized mutual information (NMI), normalized cross correlation (NCC), and correlation ratio (CR)) for optimized automatic image registration (OAIR) on 3D spoiled gradient recalled (SPGR) magnetic resonance images (MRI) of the brain acquired using a 3T GE MR scanner. Subsampling was performed in the axial, sagittal, and coronal directions to emulate three low resolution datasets. Afterwards, the low resolution datasets were upsampled using different interpolation methods, and they were then compared to the high resolution data. The mean squared error, peak signal to noise, joint entropy, and cost functions were computed for quantitative assessment of the method. Magnetic resonance image scans and joint histograms were used for qualitative assessment of the method.
In this study, Geographic Information Systems (GIS) and remote sensing mapping techniques were developed to identify the locations of isolated wetlands in Alachua County, FL, a 2510 sq km area in north-central Florida with diverse geology and numerous isolated wetlands. The resul...
Touitou, Jamal; Burch, Robbie; Hardacre, Christopher; McManus, Colin; Morgan, Kevin; Sá, Jacinto; Goguet, Alexandre
2013-05-21
This paper reports the detailed description and validation of a fully automated, computer controlled analytical method to spatially probe the gas composition and thermal characteristics in packed bed systems. As an exemplar, we have examined a heterogeneously catalysed gas phase reaction within the bed of a powdered oxide supported metal catalyst. The design of the gas sampling and the temperature recording systems are disclosed. A stationary capillary with holes drilled in its wall and a moveable reactor coupled with a mass spectrometer are used to enable sampling and analysis. This method has been designed to limit the invasiveness of the probe on the reactor by using the smallest combination of thermocouple and capillary which can be employed practically. An 80 μm (O.D.) thermocouple has been inserted in a 250 μm (O.D.) capillary. The thermocouple is aligned with the sampling holes to enable both the gas composition and temperature profiles to be simultaneously measured at equivalent spatially resolved positions. This analysis technique has been validated by studying CO oxidation over a 1% Pt/Al2O3 catalyst and the spatial resolution profiles of chemical species concentrations and temperature as a function of the axial position within the catalyst bed are reported.
Directory of Open Access Journals (Sweden)
S. Wu
2017-10-01
Full Text Available The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.
International Nuclear Information System (INIS)
Soycan, Arzu; Soycan, Metin
2009-01-01
GIS (Geographical Information System) is one of the most striking innovations for mapping applications supplied by developing computer and software technology to users. GIS is a very effective tool which can visually combine geographical and non-geographical data by recording these to allow interpretation and analysis. A DEM (Digital Elevation Model) is an inalienable component of GIS. An existing TM (Topographic Map) can be used as the main data source for generating a DEM by a manual digitizing or vectorization process for the contour polylines. The aim of this study is to examine the accuracy of DEMs obtained from TMs, depending on the number of sampling points and the grid size. For these purposes, the contours of several 1/1000 scale scanned topographical maps were vectorized. Different DEMs of the relevant area were created by using several datasets with different numbers of sampling points. We focused on DEM creation from contour lines using gridding with RBF (Radial Basis Function) interpolation techniques, namely TPS (Thin Plate Spline) as the surface fitting model. The solution algorithm and a short review of the mathematical model of the TPS interpolation technique are given. In the test study, results of the application and the obtained accuracies are presented and discussed. The initial object of this research is to discuss the requirement of DEMs in GIS, urban planning, surveying engineering and other applications with high accuracy (a few decimeters). (author)
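A compact sketch of the gridding step, thin-plate-spline interpolation of scattered contour-derived points onto a regular DEM grid, using SciPy's RBF interpolator; the synthetic points and the 10 m grid spacing stand in for the digitised 1/1000 contour data.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Scattered (x, y, z) points digitised from contour lines (synthetic stand-in).
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1000, (500, 2))                  # easting, northing (m)
z = 120 + 0.04 * pts[:, 0] + 15 * np.sin(pts[:, 1] / 150.0)   # elevation (m)

# Thin Plate Spline surface fitted to the sampling points.
tps = RBFInterpolator(pts, z, kernel='thin_plate_spline', smoothing=0.0)

# Evaluate on a 10 m grid to obtain the DEM raster.
gx, gy = np.meshgrid(np.arange(0, 1000, 10.0), np.arange(0, 1000, 10.0))
dem = tps(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
```

Repeating the fit with fewer sampling points and checking the residuals at withheld points is the kind of accuracy experiment the abstract describes.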
Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan
2012-01-01
Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
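The sketch below shows the general shape of such a correction applied to a naive sawtooth. It uses the simplest two-sample polynomial residual (often called polyBLEP) rather than the integrated third-order B-spline function the study finds superior, so it illustrates the idea rather than the paper's best-performing variant; the sample rate and frequency are arbitrary.

```python
import numpy as np

def poly_blep(t, dt):
    """Two-sample polynomial correction around a discontinuity: the difference
    between a trivial step and a low-order polynomial approximation of the
    bandlimited step, evaluated near the phase wrap."""
    if t < dt:                       # just after the discontinuity
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:                 # just before the discontinuity
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0

def sawtooth(freq, n, fs=44100.0):
    dt = freq / fs
    phase, out = 0.0, np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)   # naive saw + correction
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out

saw = sawtooth(440.0, 44100)   # one second of corrected sawtooth at 440 Hz
```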
Directory of Open Access Journals (Sweden)
Maheswari Subramanian
2018-01-01
Full Text Available Information hiding techniques have a significant role in recent application areas. Steganography is the embedding of information within an innocent cover work in a way which cannot be detected by any person without access to the steganographic key. The proposed work uses a steganographic scheme for useful information with the help of human skin tone regions as the cover image. The proposed algorithm applies Lagrange interpolation encryption to enhance the security of the hidden information. First, the skin tone regions are identified by using the YCbCr color space, and these regions serve as the cover. Image pixels which belong to the skin regions are used to carry more secret bits, and the secret information is hidden in both horizontal and vertical sequences of the skin areas of the cover image. The secret information is hidden behind the human skin regions rather than other objects in the same image because the skin pixels have high intensity values. The embedding is performed using the vector discrete wavelet transformation (VDWT) technique and is quite imperceptible. A new Lagrange interpolation-based encryption method is introduced to achieve high security of the hidden information with higher payload and better visual quality.
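The abstract does not spell out the Lagrange interpolation encryption step, so the sketch below shows the generic mathematical ingredient such schemes build on: hide a value as the constant term of a random polynomial over a prime field and recover it by Lagrange interpolation (Shamir-style). The modulus and parameters are placeholders, and this is only an illustration of the idea, not the authors' scheme.

```python
import random

P = 2**61 - 1          # prime modulus (assumed; any sufficiently large prime works)

def split(secret, k, n):
    """Hide `secret` in n points of a random degree-(k-1) polynomial mod P."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(points):
    """Lagrange interpolation at x = 0 recovers the hidden value."""
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any k points reproduce the value
```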
Resistor mesh model of a spherical head: part 1: applications to scalp potential interpolation.
Chauveau, N; Morucci, J P; Franceries, X; Celsis, P; Rigaud, B
2005-11-01
A resistor mesh model (RMM) has been implemented to describe the electrical properties of the head and the configuration of the intracerebral current sources by simulation of forward and inverse problems in electroencephalogram/event related potential (EEG/ERP) studies. For this study, the RMM representing the three basic tissues of the human head (brain, skull and scalp) was superimposed on a spherical volume mimicking the head volume: it included 43 102 resistances and 14 123 nodes. The validation was performed with reference to the analytical model by consideration of a set of four dipoles close to the cortex. Using the RMM and the chosen dipoles, four distinct families of interpolation technique (nearest neighbour, polynomial, splines and lead fields) were tested and compared so that the scalp potentials could be recovered from the electrode potentials. The 3D spline interpolation and the inverse forward technique (IFT) gave the best results. The IFT is very easy to use when the lead-field matrix between scalp electrodes and cortex nodes has been calculated. By simple application of the Moore-Penrose pseudo inverse matrix to the electrode cap potentials, a set of current sources on the cortex is obtained. Then, the forward problem using these cortex sources renders all the scalp potentials.
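The inverse forward technique described above amounts to two matrix products once the lead fields are available. The sketch below uses random matrices as stand-ins for the lead fields computed from the resistor mesh model, and hypothetical electrode and source counts, purely to show the data flow.

```python
import numpy as np

# Hypothetical dimensions: 64 scalp electrodes, 2000 cortical source nodes,
# and a denser set of scalp nodes at which potentials are wanted.
rng = np.random.default_rng(0)
L_elec  = rng.standard_normal((64, 2000))    # lead field: sources -> electrodes
L_scalp = rng.standard_normal((5000, 2000))  # lead field: sources -> all scalp nodes

v_elec = rng.standard_normal(64)             # measured electrode potentials

# "Inverse forward" interpolation: estimate cortical sources with the
# Moore-Penrose pseudoinverse, then run the forward model to every scalp node.
sources = np.linalg.pinv(L_elec) @ v_elec
v_scalp = L_scalp @ sources                   # interpolated scalp potentials
```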
Voice Morphing Using 3D Waveform Interpolation Surfaces and Lossless Tube Area Functions
Directory of Open Access Journals (Sweden)
Lavner Yizhar
2005-01-01
Full Text Available Voice morphing is the process of producing intermediate or hybrid voices between the utterances of two speakers. It can also be defined as the process of gradually transforming the voice of one speaker to that of another. The ability to change the speaker's individual characteristics and to produce high-quality voices can be used in many applications. Examples include multimedia and video entertainment, as well as enrichment of speech databases in text-to-speech systems. In this study we present a new technique which enables production of a given number of intermediate voices or of utterances which gradually change from one voice to another. This technique is based on two components: (1) creation of a 3D prototype waveform interpolation (PWI) surface from the LPC residual signal, to produce an intermediate excitation signal; (2) a representation of the vocal tract by a lossless tube area function, and an interpolation of the parameters of the two speakers. The resulting synthesized signal sounds like a natural voice lying between the two original voices.
International Nuclear Information System (INIS)
Hu, Y; Mutic, S; Du, D; Green, O; Zeng, Q; Nana, R; Patrick, J; Shvartsman, S; Dempsey, J
2014-01-01
Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis had been acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm x 2.0 mm x 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into the MATLAB software. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select k-space points that corresponded to the under-sampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated using the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it was possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems to achieve good spatial and temporal resolutions simultaneously. Y. Hu and O. green
Energy Technology Data Exchange (ETDEWEB)
Hu, Y; Mutic, S; Du, D; Green, O [Washington University School of Medicine, Saint Louis, MO (United States); Zeng, Q; Nana, R; Patrick, J; Shvartsman, S; Dempsey, J [ViewRay Incorporated, Oakwood Village, OH (United States)
2014-06-15
Purpose: To evaluate the feasibility of using the weighted hybrid iterative spiral k-space encoded estimation (WHISKEE) technique to improve the spatial resolution of tracking images for onboard MR image guided radiation therapy (MR-IGRT). Methods: MR tracking images of the abdomen and pelvis had been acquired from healthy volunteers using the ViewRay onboard MR-IGRT system (ViewRay Inc., Oakwood Village, OH) at a spatial resolution of 2.0 mm x 2.0 mm x 5.0 mm. The tracking MR images were acquired using the TrueFISP sequence. The temporal resolution had to be traded off to 2 frames per second (FPS) to achieve the 2.0 mm in-plane spatial resolution. All MR images were imported into the MATLAB software. K-space data were synthesized through the Fourier transform of the MR images. A mask was created to select k-space points that corresponded to the under-sampled spiral k-space trajectory with an acceleration (or undersampling) factor of 3. The mask was applied to the fully sampled k-space data to synthesize the undersampled k-space data. The WHISKEE method was applied to the synthesized undersampled k-space data to reconstruct tracking MR images at 6 FPS. As a comparison, the undersampled k-space data were also reconstructed using the zero-padding technique. The reconstructed images were compared to the original image. The relative reconstruction error was evaluated using the percentage of the norm of the difference image over the norm of the original image. Results: Compared to the zero-padding technique, the WHISKEE method was able to reconstruct MR images with better image quality. It significantly reduced the relative reconstruction error from 39.5% to 3.1% for the pelvis image and from 41.5% to 4.6% for the abdomen image at an acceleration factor of 3. Conclusion: We demonstrated that it was possible to use the WHISKEE method to expedite MR image acquisition for onboard MR-IGRT systems to achieve good spatial and temporal resolutions simultaneously. Y. Hu and O. green
Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation
Murarasu, Alin; Weidendorfer, Josef
2012-01-01
bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation
Phung, Dung; Huang, Cunrui; Rutherford, Shannon; Dwirahmadi, Febi; Chu, Cordia; Wang, Xiaoming; Nguyen, Minh; Nguyen, Nga Huy; Do, Cuong Manh; Nguyen, Trung Hieu; Dinh, Tuan Anh Diep
2015-05-01
The present study is an evaluation of temporal/spatial variations of surface water quality using multivariate statistical techniques, comprising cluster analysis (CA), principal component analysis (PCA), factor analysis (FA) and discriminant analysis (DA). Eleven water quality parameters were monitored at 38 different sites in Can Tho City, a Mekong Delta area of Vietnam from 2008 to 2012. Hierarchical cluster analysis grouped the 38 sampling sites into three clusters, representing mixed urban-rural areas, agricultural areas and industrial zone. FA/PCA resulted in three latent factors for the entire research location, three for cluster 1, four for cluster 2, and four for cluster 3 explaining 60, 60.2, 80.9, and 70% of the total variance in the respective water quality. The varifactors from FA indicated that the parameters responsible for water quality variations are related to erosion from disturbed land or inflow of effluent from sewage plants and industry, discharges from wastewater treatment plants and domestic wastewater, agricultural activities and industrial effluents, and contamination by sewage waste with faecal coliform bacteria through sewer and septic systems. Discriminant analysis (DA) revealed that nephelometric turbidity units (NTU), chemical oxygen demand (COD) and NH₃ are the discriminating parameters in space, affording 67% correct assignation in spatial analysis; pH and NO₂ are the discriminating parameters according to season, assigning approximately 60% of cases correctly. The findings suggest a possible revised sampling strategy that can reduce the number of sampling sites and the indicator parameters responsible for large variations in water quality. This study demonstrates the usefulness of multivariate statistical techniques for evaluation of temporal/spatial variations in water quality assessment and management.
Effect of interpolation on parameters extracted from seating interface pressure arrays
Michael Wininger, PhD; Barbara Crane, PhD, PT
2015-01-01
Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...
Kingdon, Andrew; Nayembil, Martin L.; Richardson, Anne E.; Smith, A. Graham
2016-11-01
New requirements to understand geological properties in three dimensions have led to the development of PropBase, a data structure and delivery tools to deliver this. At the BGS, relational database management systems (RDBMS) have facilitated effective data management using normalised, subject-based database designs with business rules in a centralised, vocabulary-controlled architecture. These have delivered effective data storage in a secure environment. However, isolated subject-oriented designs prevented efficient cross-domain querying of datasets. Additionally, the tools provided often did not enable effective data discovery, as they struggled to resolve the complex underlying normalised structures, giving poor data access speeds. Users developed bespoke access tools for structures they did not fully understand, sometimes delivering incorrect results. Therefore, BGS has developed PropBase, a generic denormalised data structure within an RDBMS to store property data and to facilitate rapid and standardised data discovery and access, incorporating 2D and 3D physical and chemical property data with associated metadata. This includes scripts to populate and synchronise the layer with its data sources through structured input and transcription standards. A core component of the architecture is an optimised query object which delivers geoscience information from a structure equivalent to a data warehouse. This enables optimised query performance to deliver data in multiple standardised formats using a web discovery tool. Semantic interoperability is enforced through vocabularies combined from all data sources, facilitating searching of related terms. PropBase holds 28.1 million spatially enabled property data points from 10 source databases, incorporating over 50 property data types with a vocabulary set that includes 557 property terms. By enabling property data searches across multiple databases, PropBase has facilitated new scientific research, previously
Efficient GPU-based texture interpolation using uniform B-splines
Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.
2008-01-01
This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
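The decomposition into linear interpolations can be checked with a small CPU reference: cubic B-spline filtering at a fractional position equals two weighted linear fetches taken at shifted coordinates. The 1D NumPy sketch below verifies this identity; on the GPU the `linear_fetch` stand-in would be the hardware texture unit, which is where the speed-up comes from.

```python
import numpy as np

def bspline_weights(t):
    """Cubic B-spline weights for fractional position t in [0, 1)."""
    w0 = (1 - t) ** 3 / 6
    w1 = (3 * t**3 - 6 * t**2 + 4) / 6
    w2 = (-3 * t**3 + 3 * t**2 + 3 * t + 1) / 6
    w3 = t**3 / 6
    return w0, w1, w2, w3

def linear_fetch(s, x):
    """Stand-in for the GPU's hardware linear texture fetch."""
    i, f = int(np.floor(x)), x - np.floor(x)
    return (1 - f) * s[i] + f * s[i + 1]

def cubic_via_two_linear(s, x):
    i, t = int(np.floor(x)), x - np.floor(x)
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3
    h0 = i - 1 + w1 / g0          # shifted fetch covering samples i-1 and i
    h1 = i + 1 + w3 / g1          # shifted fetch covering samples i+1 and i+2
    return g0 * linear_fetch(s, h0) + g1 * linear_fetch(s, h1)

s = np.random.default_rng(2).standard_normal(16)
x = 6.3
i, t = int(np.floor(x)), x - np.floor(x)
direct = sum(wk * s[i - 1 + k] for k, wk in enumerate(bspline_weights(t)))
assert np.isclose(cubic_via_two_linear(s, x), direct)   # identical results
```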
A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation
DEFF Research Database (Denmark)
Stoustrup, Jakob; Komareji, Mohammad
2009-01-01
This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...