WorldWideScience

Sample records for thick slice interpolation

  1. Thick Slice and Thin Slice Teaching Evaluations

    Science.gov (United States)

    Tom, Gail; Tong, Stephanie Tom; Hesse, Charles

    2010-01-01

    Student-based teaching evaluations are an integral component of institutions of higher education. Previous work on student-based teaching evaluations suggests that evaluations of instructors based upon "thin slice" 30-s video clips of them in the classroom correlate strongly with their end-of-term "thick slice" student evaluations. This study's…

  2. The effects of slice thickness and reconstructive parameters on VR image quality in multi-slice CT

    International Nuclear Information System (INIS)

    Gao Zhenlong; Wang Qiang; Liu Caixia

    2005-01-01

    Objective: To explore the effects of slice thickness, reconstructive thickness and reconstructive interval on VR image quality in multi-slice CT, in order to select the best slice thickness and reconstructive parameters for imaging. Methods: Multi-slice CT scans of a rubber dinosaur model were acquired with different slice thicknesses. VR images were reconstructed with different reconstructive thicknesses and reconstructive intervals. Five radiologists evaluated the image quality while blinded to the parameters. Results: Slice thickness, reconstructive thickness and reconstructive interval all affected VR image quality, to different degrees; the effective coefficients were V1=1413.033, V2=563.733 and V3=390.533, respectively. The parameters interacted with one another (P<0.05). The smaller these parameters, the better the image quality. With a small slice thickness and a reconstructive thickness equal to the slice thickness, image quality showed no obvious difference when the reconstructive interval was 1/2, 1/3 or 1/4 of the slice thickness. Conclusion: For the best VR image quality, a relatively small scan slice thickness, a reconstructive thickness equal to the slice thickness, and a reconstructive interval of 1/2 of the slice thickness should be selected. Image quality depends mostly on the slice thickness. (authors)

  3. Influence of the slice thickness in CT to clinical effect

    International Nuclear Information System (INIS)

    Kimura, Kazue; Katakura, Toshihiko; Ito, Masami; Okuaki, Okihisa; Suzuki, Kenji

    1980-01-01

    CT is a form of tomography, so the thickness of tissue represented in each image is important in clinical applications of CT. The influence of slice thickness on the images, and especially its clinical effect, was examined. The apparatus used was an EMI CT 5005. The slice thickness cannot be made larger than that inherent to the apparatus, so to make it thinner than the built-in 14 mm, special collimators were prepared and mounted on the X-ray tube and detector sides. As basic observations, the ability to depict shape as a function of slice thickness was examined using acrylic pipes, as was the depiction of the slice plane as a function of slice thickness. For the clinical observations, CT images of selected cancer cases obtained at the 14 mm slice thickness built into the EMI CT 5005 were compared with images at the 7 mm slice thickness achieved with the collimators. (J.P.N.)

  4. Effects of Temperature and Slice Thickness on Drying Kinetics of Pumpkin Slices

    OpenAIRE

    Kongdej LIMPAIBOON

    2011-01-01

    Dried pumpkin slice is an alternative crisp food product. In this study, the effects of temperature and slice thickness on the drying characteristics of pumpkin were studied in a lab-scale tray dryer, using hot air temperatures of 55, 60 and 65 °C and slice thicknesses of 2, 3 and 4 mm at a constant air velocity of 1.5 m/s. The initial moisture content of the pumpkin samples was 90 ± 0.5% (wb). The drying process was carried out until the final moisture content of the product was 10 ± 0.5% (wb). The resul...

  5. Shape determinative slice localization for patient-specific masseter modeling using shape-based interpolation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering (Singapore); Biomedical Imaging Lab., Agency for Science Technology and Research (Singapore); Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering (Singapore); Dept. of Preventive Dentistry, National Univ. of Singapore (Singapore); Ong, S.H. [Dept. of Electrical and Computer Engineering, National Univ. of Singapore (Singapore); Div. of Bioengineering, National Univ. of Singapore (Singapore); Liu, J.; Nowinski, W.L. [Biomedical Imaging Lab., Agency for Science Technology and Research (Singapore); Goh, P.S. [Dept. of Diagnostic Radiology, National Univ. of Singapore (Singapore)

    2007-06-15

    The masseter plays a critical role in the mastication system. A hybrid shape-based interpolation method is used to build the masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, on which clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate candidates for determinative slices, and the fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work and the average overlap index (κ) achieved is 85.2%, indicating good agreement between the models and the manual contour tracings. In addition, the time taken is significantly less than that needed to segment all the slices manually. (orig.)

  6. Shape determinative slice localization for patient-specific masseter modeling using shape-based interpolation

    International Nuclear Information System (INIS)

    Ng, H.P.; Foong, K.W.C.; Ong, S.H.; Liu, J.; Nowinski, W.L.; Goh, P.S.

    2007-01-01

    The masseter plays a critical role in the mastication system. A hybrid shape-based interpolation method is used to build the masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, on which clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate candidates for determinative slices, and the fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work and the average overlap index (κ) achieved is 85.2%, indicating good agreement between the models and the manual contour tracings. In addition, the time taken is significantly less than that needed to segment all the slices manually. (orig.)
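
    The record above selects "determinative" slices by clustering shape-based criteria with fuzzy c-means. As a rough illustration of the clustering step only, the sketch below implements plain fuzzy c-means in NumPy on made-up per-slice shape features; the feature choice, cluster count and data are assumptions, not the authors' pipeline.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / (d ** (2.0 / (m - 1)))     # standard FCM membership update
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Hypothetical per-slice shape features (e.g. area and perimeter of the segmented
# masseter contour on each MR slice); the values are made up for illustration.
features = np.array([[120., 55.], [180., 70.], [430., 120.], [460., 125.], [150., 60.]])
centers, memberships = fuzzy_c_means(features, n_clusters=2)
# Slices with the highest membership in each cluster could serve as candidate
# "determinative" slices for manual segmentation.
print(memberships.argmax(axis=0))
```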

  7. Feasibility study of 2D thick-slice MR digital subtraction angiography

    International Nuclear Information System (INIS)

    Ishimori, Yoshiyuki; Takeuchi, Miho; Higashimura, Kyouji; Komuro, Hiroyuki

    2000-01-01

    Conditions required to perform contrast MR digital subtraction angiography using a two-dimensional thick-slice high-speed gradient echo were investigated. The conditions in the phantom experiment included: slice profile, flip angle, imaging matrix, fat suppression, duration of IR pulse and frequency selectivity, flip angle of IR pulse and inversion time. Based on the results of the experiment, 2D thick-slice MRDSA was performed in volunteers. Under TR/TE=5.3-9/1.3-1.8 ms conditions, the requirements were a slice thick enough to include the target region, a flip angle of 10 degrees, and a phase matrix of 96 or more. Fat suppression was required for adipose-tissue-rich regions, such as the abdomen. The optimal conditions for applying the IR preparation pulse of the IR prepped fast gradient recalled echo as spectrally selective inversion recovery appeared to be: duration of IR pulse =20 ms, flip angle =100 degrees, and inversion time =40 ms. The authors concluded that it was feasible to perform 2D thick-slice MRDSA with time resolution within 1 second. (K.H.)

  8. Inter-slice motion correction using spatiotemporal interpolation for functional magnetic resonance imaging of the moving fetus

    OpenAIRE

    Limperopoulos, Catherine; You, Wonsang

    2017-01-01

    Fetal motion continues to be one of the major artifacts in in-utero functional MRI; interestingly few methods have been developed to address fetal motion correction. In this study, we propose a robust method for motion correction in fetal fMRI by which both inter-slice and inter-volume motion artifacts are jointly corrected. To accomplish this, an original volume is temporally split into odd and even slices, and then voxel intensities are spatially and temporally interpolated in the process o...

  9. Fourier-based approach to interpolation in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2001-01-01

    It has recently been shown that longitudinal aliasing can be a significant and detrimental presence in reconstructed single-slice helical computed tomography (CT) volumes. This aliasing arises because the directly measured data in helical CT are generally undersampled by a factor of at least 2 in the longitudinal direction and because the exploitation of the redundancy of fanbeam data acquired over 360° to generate additional longitudinal samples does not automatically eliminate the aliasing. In this paper we demonstrate that for pitches near 1 or lower, the redundant fanbeam data, when used properly, can provide sufficient information to satisfy a generalized sampling theorem and thus to eliminate aliasing. We develop and evaluate a Fourier-based algorithm, called 180FT, that accomplishes this. As background we present a second Fourier-based approach, called 360FT, that makes use only of the directly measured data. Both Fourier-based approaches exploit the fast Fourier transform and the Fourier shift theorem to generate from the helical projection data a set of fanbeam sinograms corresponding to equispaced transverse slices. Slice-by-slice reconstruction is then performed by use of two-dimensional fanbeam algorithms. The proposed approaches are compared to their counterparts based on the use of linear interpolation - the 360LI and 180LI approaches. The aliasing suppression property of the 180FT approach is a clear advantage of the approach and represents a step toward the desirable goal of achieving uniform longitudinal resolution properties in reconstructed helical CT volumes.
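
    The 180FT/360FT algorithms described above rely on the fast Fourier transform and the Fourier shift theorem to resample helical data onto equispaced transverse slices. The sketch below shows only that basic building block - shifting a uniformly sampled signal by a fractional number of samples via a phase ramp in the Fourier domain - on a toy 1D signal; it is not the 180FT algorithm itself.

```python
import numpy as np

def fourier_shift_resample(samples, shift):
    """Shift a uniformly sampled 1D signal by a (possibly fractional) number of
    samples using the Fourier shift theorem: a delay of 'shift' samples multiplies
    the spectrum by exp(-2j*pi*k*shift)."""
    k = np.fft.fftfreq(samples.size)             # frequencies in cycles per sample
    spectrum = np.fft.fft(samples)
    shifted = np.fft.ifft(spectrum * np.exp(-2j * np.pi * k * shift))
    return shifted.real

# Toy example: values sampled along z, resampled onto a grid offset by 0.3 samples
# (purely illustrative; 180FT applies this idea to rows of fanbeam sinograms).
z_samples = np.sin(2 * np.pi * np.arange(32) / 32)
on_new_grid = fourier_shift_resample(z_samples, 0.3)
print(on_new_grid[:4])
```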

  10. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable when the object differs greatly from the surrounding background. The shape-based interpolation method transforms a pixel location into a parameter related to the object shape and the interpolation is performed on that parameter. This process achieves a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the polygon vertices as the reference. The target slice grey level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
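
    The paper above extends classical shape-based interpolation from binary to grey-level images using polygon vertices. For orientation, the sketch below shows the classical binary variant that the method builds on, implemented with signed distance transforms (interpolate the distance maps between slices, then threshold at zero); the grey-level polygon scheme itself is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def shape_based_interpolate(mask_a, mask_b, t):
    """Binary slice at fraction t between two binary slices (classical method)."""
    d = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0

# Two toy binary slices: a small and a larger disc.
yy, xx = np.mgrid[0:64, 0:64]
slice_a = (xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2
slice_b = (xx - 32) ** 2 + (yy - 32) ** 2 < 16 ** 2
mid = shape_based_interpolate(slice_a, slice_b, 0.5)   # roughly a 12-pixel-radius disc
print(mid.sum())
```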

  11. Favorable noise uniformity properties of Fourier-based interpolation and reconstruction approaches in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2002-01-01

    Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their root in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to that in conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, also leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches

  12. Impact of Different CT Slice Thickness on Clinical Target Volume for 3D Conformal Radiation Therapy

    International Nuclear Information System (INIS)

    Prabhakar, Ramachandran; Ganesh, Tharmar; Rath, Goura K.; Julka, Pramod K.; Sridhar, Pappiah S.; Joshi, Rakesh C.; Thulkar, Sanjay

    2009-01-01

    The purpose of this study was to present the variation of clinical target volume (CTV) with different computed tomography (CT) slice thicknesses and the impact of CT slice thickness on 3-dimensional (3D) conformal radiotherapy treatment planning. Fifty patients with brain tumors were selected and CT scans with 2.5-, 5-, and 10-mm slice thicknesses were performed with non-ionic contrast enhancement. The patients were selected with tumor volume ranging from 2.54 cc to 222 cc. Three-dimensional treatment planning was performed for all three CT datasets. The target coverage and the isocenter shift between the treatment plans for different slice thicknesses were correlated with the tumor volume. An important observation from our study revealed that for volumes <25 cc, the target underdosage was less than 6.7% for 5-mm slice thickness and 8% for 10-mm slice thickness. For 3D conformal radiotherapy treatment planning (3DCRT), a CT slice thickness of 2.5 mm is optimum for tumor volumes <25 cc

  13. Influence of image slice thickness on rectal dose–response relationships following radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Olsson, C; Thor, M; Apte, A; Deasy, J O; Liu, M; Moissenko, V; Petersen, S E; Høyer, M

    2014-01-01

    When pooling retrospective data from different cohorts, slice thicknesses of acquired computed tomography (CT) images used for treatment planning may vary between cohorts. It is, however, not known if varying slice thickness influences derived dose–response relationships. We investigated this for rectal bleeding using dose–volume histograms (DVHs) of the rectum and rectal wall for dose distributions superimposed on images with varying CT slice thicknesses. We used dose and endpoint data from two prostate cancer cohorts treated with three-dimensional conformal radiotherapy to either 74 Gy (N = 159) or 78 Gy (N = 159) at 2 Gy per fraction. The rectum was defined as the whole organ with content, and the morbidity cut-off was Grade ≥2 late rectal bleeding. Rectal walls were defined as 3 mm inner margins added to the rectum. DVHs for simulated slice thicknesses from 3 to 13 mm were compared to DVHs for the originally acquired slice thicknesses at 3 and 5 mm. Volumes, mean, and maximum doses were assessed from the DVHs, and generalized equivalent uniform dose (gEUD) values were calculated. For each organ and each of the simulated slice thicknesses, we performed predictive modeling of late rectal bleeding using the Lyman–Kutcher–Burman (LKB) model. For the most coarse slice thickness, rectal volumes increased (≤18%), whereas maximum and mean doses decreased (≤0.8 and ≤4.2 Gy, respectively). For all a values, the gEUD for the simulated DVHs were ≤1.9 Gy different than the gEUD for the original DVHs. The best-fitting LKB model parameter values with 95% CIs were consistent between all DVHs. In conclusion, we found that the investigated slice thickness variations had minimal impact on rectal dose–response estimations. From the perspective of predictive modeling, our results suggest that variations within 10 mm in slice thickness between cohorts are unlikely to be a limiting factor when pooling multi-institutional rectal dose data that include slice thickness

  14. Influence of image slice thickness on rectal dose-response relationships following radiotherapy of prostate cancer

    Science.gov (United States)

    Olsson, C.; Thor, M.; Liu, M.; Moissenko, V.; Petersen, S. E.; Høyer, M.; Apte, A.; Deasy, J. O.

    2014-07-01

    When pooling retrospective data from different cohorts, slice thicknesses of acquired computed tomography (CT) images used for treatment planning may vary between cohorts. It is, however, not known if varying slice thickness influences derived dose-response relationships. We investigated this for rectal bleeding using dose-volume histograms (DVHs) of the rectum and rectal wall for dose distributions superimposed on images with varying CT slice thicknesses. We used dose and endpoint data from two prostate cancer cohorts treated with three-dimensional conformal radiotherapy to either 74 Gy (N = 159) or 78 Gy (N = 159) at 2 Gy per fraction. The rectum was defined as the whole organ with content, and the morbidity cut-off was Grade ≥2 late rectal bleeding. Rectal walls were defined as 3 mm inner margins added to the rectum. DVHs for simulated slice thicknesses from 3 to 13 mm were compared to DVHs for the originally acquired slice thicknesses at 3 and 5 mm. Volumes, mean, and maximum doses were assessed from the DVHs, and generalized equivalent uniform dose (gEUD) values were calculated. For each organ and each of the simulated slice thicknesses, we performed predictive modeling of late rectal bleeding using the Lyman-Kutcher-Burman (LKB) model. For the most coarse slice thickness, rectal volumes increased (≤18%), whereas maximum and mean doses decreased (≤0.8 and ≤4.2 Gy, respectively). For all a values, the gEUD for the simulated DVHs were ≤1.9 Gy different than the gEUD for the original DVHs. The best-fitting LKB model parameter values with 95% CIs were consistent between all DVHs. In conclusion, we found that the investigated slice thickness variations had minimal impact on rectal dose-response estimations. From the perspective of predictive modeling, our results suggest that variations within 10 mm in slice thickness between cohorts are unlikely to be a limiting factor when pooling multi-institutional rectal dose data that include slice thickness
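
    Both records above assess rectal dose-response with gEUD values and Lyman-Kutcher-Burman (LKB) modeling. A minimal worked sketch of those two formulas is given below; the DVH bins and the parameter values (a, TD50, m) are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np
from scipy.stats import norm

def geud(dose_bins, frac_volumes, a):
    """Generalized EUD from a differential DVH: (sum_i v_i * d_i^a)^(1/a).
    dose_bins: bin-centre doses (Gy); frac_volumes: fractional volumes summing to 1."""
    return np.sum(frac_volumes * dose_bins ** a) ** (1.0 / a)

def lkb_ntcp(geud_value, td50, m):
    """Lyman-Kutcher-Burman NTCP via the probit (cumulative normal) function."""
    t = (geud_value - td50) / (m * td50)
    return norm.cdf(t)

# Illustrative rectal DVH: bin doses (Gy) and fractional volumes (made-up numbers).
doses = np.array([10., 30., 50., 65., 75.])
vols = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
g = geud(doses, vols, a=8.0)                 # a large 'a' emphasises high-dose regions
print(g, lkb_ntcp(g, td50=80.0, m=0.15))     # TD50 and m are placeholders
```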

  15. Evaluation of the possibility to use thick slabs of reconstructed outer breast tomosynthesis slice images

    Science.gov (United States)

    Petersson, Hannie; Dustler, Magnus; Tingberg, Anders; Timberg, Pontus

    2016-03-01

    The large image volumes in breast tomosynthesis (BT) have led to large amounts of data and a heavy workload for breast radiologists. The number of slice images can be decreased by combining adjacent image planes (slabbing), but the decrease in depth resolution can considerably affect the detection of lesions. The aim of this work was to assess whether thicker slabbing of the outer slice images (where lesions are seldom present) could be a viable alternative for reducing the number of slice images in BT image volumes. The suggested slabbing (an image volume with thick outer slabs and thin slices between) was evaluated in two steps. Firstly, a survey of the depth of 65 cancer lesions within the breast was performed to estimate how many lesions would be affected by outer slabs of different thicknesses. Secondly, a selection of 24 lesions was reconstructed with 2, 6 and 10 mm slab thickness to evaluate how the appearance of lesions located in the thicker slabs would be affected. The results show that few malignant breast lesions are located at a depth less than 10 mm from the surface (especially for breast thicknesses of 50 mm and above). Reconstruction of BT volumes with 6 mm slab thickness yields an image quality that is sufficient for lesion detection in a majority of the investigated cases. Together, this indicates that thicker slabbing of the outer slice images is a promising option for reducing the number of slice images in BT image volumes.
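
    The record above proposes replacing the outer tomosynthesis slices with thick slabs while keeping thin slices in between. A simplified sketch of that idea follows, using plain averaging of adjacent reconstructed slices as the slabbing operation; the study's slabs are produced in reconstruction, so averaging is only a stand-in.

```python
import numpy as np

def slab_outer_slices(volume, slab_thickness_mm, slice_spacing_mm=1.0):
    """Average the outermost reconstructed slices into single thick slabs and keep
    the remaining slices thin (a simplified take on the proposed scheme)."""
    k = int(round(slab_thickness_mm / slice_spacing_mm))   # slices merged per slab
    top = volume[:k].mean(axis=0)
    bottom = volume[-k:].mean(axis=0)
    middle = volume[k:-k]
    return top, middle, bottom

# Toy BT volume: 50 slices at 1 mm spacing, 6 mm outer slabs (illustrative numbers).
vol = np.random.default_rng(0).random((50, 8, 8))
top, middle, bottom = slab_outer_slices(vol, slab_thickness_mm=6)
print(top.shape, middle.shape[0], bottom.shape)
```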

  16. A Comparative Study of Spiral Tomograms with Different Slice Thicknesses in Dental Implant Planning

    International Nuclear Information System (INIS)

    Yoon Sook Ja

    1999-01-01

    To determine whether there is a difference among spiral tomograms of different slice thicknesses in the measurement of distances used for dental implant planning, 10 dry mandibles and 40 metal balls were used to take a total of 120 Scanora tomograms with slice thicknesses of 2 mm, 4 mm and 8 mm. Three oral radiologists interpreted each tomogram to measure the distances from the mandibular canal to the alveolar crest and to the buccal, lingual and inferior borders of the mandible. The three observers recorded grades of 0, 1 or 2 to evaluate the perceptibility of the alveolar crest and the superior border of the mandibular canal. For statistical analysis, ANOVA with repeated measures, chi-square tests and the intraclass correlation coefficient (R2, alpha) were used. There was no statistically significant difference among spiral tomograms with different slice thicknesses in the measurement of the distances or in the perceptibility of the alveolar crest and mandibular canal (p>0.05). All showed good reliability; the perceptibility of the alveolar crest and mandibular canal was almost identical, and an excellent relationship was seen for all of them. There would be no significant difference, no matter which slice thickness is used for spiral tomography in dental implant planning, considering the thickness of a dental implant fixture.

  17. Influence of slice thickness on the determination of left ventricular wall thickness and dimension by magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Shusaku; Fukui, Sugao; Atsumi, Chisato and others

    1989-02-01

    Wall thickness of the ventricular septum and left ventricle, and left ventricular cavity dimension, were determined on magnetic resonance (MR) images with slice thicknesses of 5 mm and 10 mm. Subjects were 3 healthy volunteers and 7 patients with hypertension (4), hypertrophic cardiomyopathy (1) or valvular heart disease (2). In visualizing cardiac structures such as the left ventricular papillary muscle and the right and left ventricles, 5 mm-thick images were better than 10 mm-thick images. Edges of the ventricular septum and left ventricular wall were more clearly visualized on 5 mm-thick images than on 10 mm-thick images. Two mm-thick MR images obtained from 2 patients yielded the best visualization in end-systole, but failed to reveal cardiac structures in detail in end-diastole. Phantom studies revealed no significant difference in image quality between 10 mm and 5 mm slice thicknesses in the axial view at 80 degrees to the long axis. In the axial view at 45 degrees to the long axis, 10 mm-thick images were inferior to 5 mm-thick images in detecting the edge of the septum and the left ventricular wall. These results indicate that the selection of slice thickness is one of the most important determinant factors in the measurement of left ventricular wall thickness and cavity dimension. (Namekawa, K).

  18. Influence of slice thickness on the determination of left ventricular wall thickness and dimension by magnetic resonance imaging

    International Nuclear Information System (INIS)

    Ohnishi, Shusaku; Fukui, Sugao; Atsumi, Chisato

    1989-01-01

    Wall thickness of the ventricular septum and left ventricle, and left ventricular cavity dimension, were determined on magnetic resonance (MR) images with slice thicknesses of 5 mm and 10 mm. Subjects were 3 healthy volunteers and 7 patients with hypertension (4), hypertrophic cardiomyopathy (1) or valvular heart disease (2). In visualizing cardiac structures such as the left ventricular papillary muscle and the right and left ventricles, 5 mm-thick images were better than 10 mm-thick images. Edges of the ventricular septum and left ventricular wall were more clearly visualized on 5 mm-thick images than on 10 mm-thick images. Two mm-thick MR images obtained from 2 patients yielded the best visualization in end-systole, but failed to reveal cardiac structures in detail in end-diastole. Phantom studies revealed no significant difference in image quality between 10 mm and 5 mm slice thicknesses in the axial view at 80 degrees to the long axis. In the axial view at 45 degrees to the long axis, 10 mm-thick images were inferior to 5 mm-thick images in detecting the edge of the septum and the left ventricular wall. These results indicate that the selection of slice thickness is one of the most important determinant factors in the measurement of left ventricular wall thickness and cavity dimension. (Namekawa, K)

  19. A Unified Approach to Diffusion Direction Sensitive Slice Registration and 3-D DTI Reconstruction From Moving Fetal Brain Anatomy

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Seshamani, Sharmishtaa; Kroenke, Christopher

    2014-01-01

    to the underlying anatomy. Previous image registration techniques have been described to estimate between-slice fetal head motion, allowing the reconstruction of a 3D diffusion estimate on a regular grid using interpolation. We propose an Approach to Unified Diffusion Sensitive Slice Alignment and Reconstruction (AUDiSSAR) that explicitly formulates a process for diffusion-direction-sensitive DW-slice-to-DTI-volume alignment. This also incorporates image resolution modeling to iteratively deconvolve the effects of the imaging point spread function using the multiple views provided by thick slices acquired…

  20. Study of imaging time shortening in Whole Heart MRCA. Evaluation of slice thickness

    International Nuclear Information System (INIS)

    Iwai, Mitsuhiro; Tateishi, Toshiki; Takeda, Soji; Hayashi, Ryuji

    2005-01-01

    A series of examinations in cardiac MR imaging, such as cine, perfusion, MR coronary angiography (MRCA) and viability imaging, is generally known as a One Stop Cardiac Examination. It takes about 40 to 60 minutes to perform, and Whole Heart MRCA accounts for most of the examination time. We therefore aimed to shorten the imaging time of Whole Heart MRCA, especially for a large imaging area such as in the postoperative evaluation of a bypass graft, by investigating the depiction of mimic blood vessels of various diameters while changing the slice thickness of Whole Heart MRCA. The results showed that a maximum slice thickness of about 1 mm gave excellent depiction, considering that the diameters of actual coronary arteries are about 3 mm. In this study we were able to characterize the relationships among the slice thickness of Whole Heart MRCA, the diameter of a blood vessel, and the shortening of examination time, which is useful for selecting a suitable sequence depending on the patient's condition. (author)

  1. Image quality dependence on thickness of sliced rat kidney taken by a simplest DEI construction

    International Nuclear Information System (INIS)

    Li Gang; Chen Zhihua; Wu Ziyu; Ando, M.; Pan Lin; Wang, J.Y.; Jiang, X.M.

    2005-01-01

    Excised rat kidney slices were investigated using a simplified diffraction-enhanced imaging (DEI) configuration with only two crystals, the first working as monochromator and the second as analyzer in the Bragg geometry, developed at the Beijing Synchrotron Radiation Facility (BSRF). Many fine anatomic structures of the sliced rat kidneys, with thicknesses of 2 mm and 120 μm, can be distinguished clearly in the DEI images obtained at the shoulder of the rocking curve. The authors emphasize that thick and thin slices give very different DEI images: in the thick sample only structures with a large density gradient, or those near the surface where the X-ray exits, can be distinguished, while in the thin slices some fine structures that cannot be distinguished in the thick sample under the same conditions can be seen very clearly. The reason, related to the cancellation of the δ(x,y,z) gradient when integrating along the X-ray path inside the thick sample, is discussed

  2. Image quality dependence on thickness of sliced rat kidney taken by a simplest DEI construction

    Energy Technology Data Exchange (ETDEWEB)

    Li Gang [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China)]. E-mail: lig@ihep.ac.cn; Chen Zhihua [China-Japan Friendship Institute of Clinical Medical Science, Yinghua Rd., Beijing 100029 (China); Wu Ziyu [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China); Ando, M. [Photon Factory, KEK, Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Pan Lin [China-Japan Friendship Institute of Clinical Medical Science, Yinghua Rd., Beijing 100029 (China); Wang, J.Y. [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China); Jiang, X.M. [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China)

    2005-08-11

    Excised rat kidney slices were investigated using a simplified diffraction-enhanced imaging (DEI) configuration with only two crystals, the first working as monochromator and the second as analyzer in the Bragg geometry, developed at the Beijing Synchrotron Radiation Facility (BSRF). Many fine anatomic structures of the sliced rat kidneys, with thicknesses of 2 mm and 120 μm, can be distinguished clearly in the DEI images obtained at the shoulder of the rocking curve. The authors emphasize that thick and thin slices give very different DEI images: in the thick sample only structures with a large density gradient, or those near the surface where the X-ray exits, can be distinguished, while in the thin slices some fine structures that cannot be distinguished in the thick sample under the same conditions can be seen very clearly. The reason, related to the cancellation of the δ(x,y,z) gradient when integrating along the X-ray path inside the thick sample, is discussed.

  3. The accuracy of ventricular volume measurement and the optimal slice thickness by using multislice helical computed tomography

    International Nuclear Information System (INIS)

    Cui Wei; Guo Yuyin

    2005-01-01

    Objective: To determine the optimal slice thickness for ventricular volume measurement by the tomographic multislice Simpson's method and to evaluate the accuracy of ventricular volumes measured by multislice helical computed tomography (MSCT) in human ventricular casts. Methods: Fourteen human left ventricular (LV) and 15 right ventricular (RV) casts were scanned with an MSCT scanner using a protocol similar to clinical practice. Series of LV and RV short-axis images were reconstructed with slice thicknesses of 2 mm, 3.5 mm, 5 mm, 7 mm, and 10 mm, respectively. The multislice Simpson's method was used to calculate LV and RV volumes, and true cast volume was determined by water displacement. Results: The true LV and RV volumes were (55.57 ± 28.91) ml and (64.23 ± 24.51) ml, respectively. The calculated volumes for the different slice thicknesses ranged from (58.78 ± 28.93) ml to (68.15 ± 32.57) ml for LV casts and from (74.45 ± 27.81) ml to (88.14 ± 32.91) ml for RV casts. Both the calculated LV and RV volumes correlated closely with the corresponding true volumes (all r > 0.95, P<0.001), but overestimated the true volume by (3.21 ± 5.95) to (12.58 ± 8.56) ml for LV and (10.22 ± 8.45) to (23.91 ± 12.24) ml for RV (all P<0.01). There was a close correlation between the overestimation and the selected slice thickness for both LV and RV volume measurements (r=0.998 and 0.996, P<0.001). However, when the slice thickness was reduced to 5.0 mm or less, the overestimation became nonsignificant for slice thicknesses from 2.0 mm through 5.0 mm, for both LV and RV volume measurements. Conclusion: Both LV and RV volumes can be accurately calculated with MSCT. A 5 mm slice thickness is sufficient and most efficient for accurate measurement of LV and RV volume. (authors)
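
    The volume calculation in the record above uses the tomographic multislice Simpson's (disc-summation) method: segmented cavity areas on contiguous short-axis slices are summed and multiplied by the slice thickness. A minimal sketch with made-up areas:

```python
import numpy as np

def simpson_volume(areas_mm2, slice_thickness_mm):
    """Multislice (disc-summation) Simpson's method: stack the slice areas."""
    return np.sum(areas_mm2) * slice_thickness_mm / 1000.0   # mm^3 -> ml

# Hypothetical segmented LV cavity areas (mm^2) on contiguous 5 mm short-axis slices.
areas = np.array([0., 450., 900., 1200., 1300., 1250., 1000., 600., 200.])
print(simpson_volume(areas, 5.0), "ml")
```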

  4. A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY

    Directory of Open Access Journals (Sweden)

    Hussein Atoui

    2011-05-01

    A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieves performance similar to registration-based interpolation and significantly outperforms both linear and block-matching-based interpolation. The method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).
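
    The method above establishes correspondence with block matching and then morphs intensities between adjacent slices. The sketch below is a much simplified stand-in: integer block matching by sum of squared differences, a half-way warp of both slices, and an intensity blend. The block size, search range and toy images are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def block_match(a, b, block=8, search=4):
    """Per-block integer displacement (dy, dx) mapping slice a onto slice b (SSD)."""
    h, w = a.shape
    dy = np.zeros((h // block, w // block))
    dx = np.zeros_like(dy)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            ref = a[y0:y0 + block, x0:x0 + block]
            best, best_d = np.inf, (0, 0)
            for sy in range(-search, search + 1):
                for sx in range(-search, search + 1):
                    y1, x1 = y0 + sy, x0 + sx
                    if y1 < 0 or x1 < 0 or y1 + block > h or x1 + block > w:
                        continue
                    ssd = np.sum((ref - b[y1:y1 + block, x1:x1 + block]) ** 2)
                    if ssd < best:
                        best, best_d = ssd, (sy, sx)
            dy[by, bx], dx[by, bx] = best_d
    # Expand block-wise displacements to a dense (nearest-neighbour) field.
    return np.kron(dy, np.ones((block, block))), np.kron(dx, np.ones((block, block)))

def morph_midslice(a, b):
    """Intensity morphing halfway between slices a and b."""
    dy, dx = block_match(a, b)
    yy, xx = np.mgrid[0:a.shape[0], 0:a.shape[1]].astype(float)
    # The mid-slice pixel takes a's value half a displacement back and b's value half
    # a displacement forward (displacements indexed on a's grid - an approximation).
    a_warp = map_coordinates(a, [yy - 0.5 * dy, xx - 0.5 * dx], order=1, mode='nearest')
    b_warp = map_coordinates(b, [yy + 0.5 * dy, xx + 0.5 * dx], order=1, mode='nearest')
    return 0.5 * (a_warp + b_warp)

# Toy example: a Gaussian blob that moves 4 pixels between adjacent slices.
yy0, xx0 = np.mgrid[0:64, 0:64]
a = np.exp(-((xx0 - 28.0) ** 2 + (yy0 - 28.0) ** 2) / 60.0)
b = np.exp(-((xx0 - 32.0) ** 2 + (yy0 - 32.0) ** 2) / 60.0)
mid = morph_midslice(a, b)        # blob roughly centred at (30, 30)
```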

  5. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present an overall framework for 3D medical image interpolation based on cubic convolution, and formulate in detail six methods that differ in their sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical image interpolation in different situations.
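
    The paper above studies cubic convolution interpolation with different sharpness control parameters. The sketch below uses the standard parametric cubic (Keys) kernel, where the parameter a controls sharpness, to estimate a slice at a fractional position along z; the paper's six specific parameter choices are not reproduced here.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Parametric cubic convolution (Keys) kernel; 'a' is the sharpness parameter."""
    s = np.abs(s)
    out = np.zeros_like(s)
    m1 = s < 1
    m2 = (s >= 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * (s[m2] ** 3 - 5 * s[m2] ** 2 + 8 * s[m2] - 4)
    return out

def interpolate_along_z(stack, z, a=-0.5):
    """Estimate a slice at fractional position z from an (nz, ny, nx) stack using
    the four nearest slices weighted by the cubic kernel."""
    z0 = int(np.floor(z))
    idx = np.clip(np.arange(z0 - 1, z0 + 3), 0, stack.shape[0] - 1)
    w = cubic_kernel(np.arange(z0 - 1, z0 + 3) - z, a)
    return np.tensordot(w, stack[idx], axes=(0, 0))

# Toy volume: 6 slices of smoothly (linearly) varying intensity.
stack = np.linspace(0, 1, 6)[:, None, None] * np.ones((6, 4, 4))
print(interpolate_along_z(stack, 2.5)[0, 0])   # 0.5, halfway between slices 2 and 3
```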

  6. Does slice thickness affect diagnostic performance of 64-slice CT coronary angiography in stable and unstable angina patients with a positive calcium score?

    Energy Technology Data Exchange (ETDEWEB)

    Meijs, Matthijs F.L.; Vos, Alexander M. de; Cramer, Maarten J.; Doevendans, Pieter A.; Vries, Jan J.J. de; Rutten, Annemarieke; Budde, Ricardo P.J.; Prokop, Mathias (Dept. of Radiology, Univ. Medical Center Utrecht, Utrecht (Netherlands)), e-mail: m.meijs@umcutrecht.nl; Meijboom, W. Bob; Feyter, Pim J. de (Dept. of Cardiology, Erasmus Medical Center, Rotterdam (Netherlands))

    2010-05-15

    Background: Coronary calcification can lead to over-estimation of the degree of coronary stenosis. Purpose: To evaluate whether thinner reconstruction slice thickness improves the diagnostic performance of 64-slice CT coronary angiography (CTCA) in angina patients with a positive calcium score. Material and Methods: We selected 20 scans from a clinical study comparing CTCA to conventional coronary angiography (CCA) in stable and unstable angina patients, based on a low number of motion artifacts and a positive calcium score. All images were acquired at 64 x 0.625 mm and each CTCA scan was reconstructed at slice thickness/increment 0.67 mm/0.33 mm, 0.9 mm/0.45 mm, and 1.4 mm/0.7 mm. Two reviewers blinded to the CCA results independently evaluated the scans for the presence of significant coronary artery disease (CAD) in three randomly composed series, with ≥2 weeks between series. The diagnostic performance of CTCA was compared for the different slice thicknesses using a pooled analysis of both reviewers. Significant CAD was defined as >50% diameter narrowing on quantitative CCA. Image noise (standard deviation of CT numbers) was measured in all scans. Inter-observer variability was assessed with kappa. Results: Significant CAD was present in 8% of 304 available segments. Median total Agatston calcium score was 181.8 (interquartile range 34.9-815.6). Sensitivity at 0.67 mm, 0.9 mm, and 1.4 mm slice thickness was 70% (95% confidence interval 57-83%), 74% (62-86%), and 70% (57-83%), respectively. Specificity was 85% (82-88%), 84% (81-87%), and 84% (81-87%), respectively. The positive predictive value was 30% (21-38%), 29% (21-37%), and 28% (20-36%), respectively. The negative predictive value was 97% (95-98%), 97% (96-99%), and 97% (96-99%), respectively. Kappa for inter-observer agreement was 0.56, 0.58, and 0.59. Noise decreased from 32.9 HU at 0.67 mm to 23.2 HU at 1.4 mm (P<0.001). Conclusion: Diagnostic performance of CTCA in angina patients with a positive calcium score
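
    The per-segment sensitivity, specificity, PPV and NPV reported above come from a 2x2 comparison against quantitative CCA. A minimal sketch of that bookkeeping, with illustrative counts chosen only to roughly match the reported 304 segments and 8% disease prevalence (not the study's actual table):

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Per-segment sensitivity, specificity, PPV and NPV from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# Illustrative counts: 304 segments, 24 with significant CAD on quantitative CCA.
print(diagnostic_performance(tp=17, fp=40, fn=7, tn=240))
```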

  7. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    Energy Technology Data Exchange (ETDEWEB)

    Um, Ki Doo; Lee, Byung Do [Wonkwang University College of Medicine, Iksan (Korea, Republic of)

    2004-03-15

    This study evaluated the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were acquired with a multi-detector spiral CT at slice thicknesses of 1, 2, 3 and 4 mm. Three-dimensional image models were reconstructed using 3-D visualization medical software (V-works 3.0), and RP models were then fabricated. The two RP model types were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were made and compared according to slice thickness and RP model type. The relative error percentages (in absolute value) between linear measurements of the dry skull and the image models were 0.97, 1.98 and 3.83 for slice thicknesses of 1, 2 and 3 mm, respectively. The relative error percentage (in absolute value) between linear measurements of the dry skull and the SLA model was 0.79, and that between the dry skull and the 3D printing model was 2.52. These results indicate that 3-dimensional image models built from thin CT slices and stereolithographic RP models show relatively high accuracy.

  8. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    International Nuclear Information System (INIS)

    Um, Ki Doo; Lee, Byung Do

    2004-01-01

    This study evaluated the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were acquired with a multi-detector spiral CT at slice thicknesses of 1, 2, 3 and 4 mm. Three-dimensional image models were reconstructed using 3-D visualization medical software (V-works 3.0), and RP models were then fabricated. The two RP model types were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithographic apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were made and compared according to slice thickness and RP model type. The relative error percentages (in absolute value) between linear measurements of the dry skull and the image models were 0.97, 1.98 and 3.83 for slice thicknesses of 1, 2 and 3 mm, respectively. The relative error percentage (in absolute value) between linear measurements of the dry skull and the SLA model was 0.79, and that between the dry skull and the 3D printing model was 2.52. These results indicate that 3-dimensional image models built from thin CT slices and stereolithographic RP models show relatively high accuracy.

  9. The impact of computed tomography slice thickness on the assessment of stereotactic, 3D conformal and intensity-modulated radiotherapy of brain tumors.

    Science.gov (United States)

    Caivano, R; Fiorentino, A; Pedicini, P; Califano, G; Fusco, V

    2014-05-01

    To evaluate radiotherapy treatment planning accuracy by varying computed tomography (CT) slice thickness and tumor size. CT datasets from patients with primary brain disease and metastatic brain disease were selected. Tumor volumes ranging from about 2.5 to 100 cc and CT scans at different slice thicknesses (1, 2, 4, 6 and 10 mm) were used to perform treatment planning (1-, 2-, 4-, 6- and 10-CT, respectively). For each slice thickness, a conformity index (CI) referring to the 100, 98, 95 and 90% isodoses and the tumor size was computed. All the CIs and volumes obtained were compared to evaluate the impact of CT slice thickness on treatment plans. The smallest volumes shrink significantly when defined on 1-CT rather than on 4- and 6-CT, while the CT slice thickness does not affect target definition for the largest volumes. The mean CI for all the considered isodoses and CT slice thicknesses shows no statistical difference when 1-CT is compared to 2-CT. Comparing the mean CI of 1- with 4-CT and 1- with 6-CT, statistical differences appear only for the smallest volumes with respect to the 100, 98 and 95% isodoses, with the CI for the 90% isodose not statistically significant for any of the considered PTVs. The accuracy of radiotherapy tumor volume definition depends on CT slice thickness. To achieve better tumor definition and dose coverage, 1- and 2-CT would be suitable for small targets, while 4- and 6-CT are suitable for the other volumes.
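
    The study above scores plans with a conformity index per isodose level, but the abstract does not state which CI definition is used. The sketch below computes three common conformity measures from a dose grid and a target mask on a toy geometry; the definitions and numbers are illustrative only.

```python
import numpy as np

def conformity_indices(dose, target_mask, ref_dose, voxel_volume_cc):
    """Common conformity measures for one isodose level (which definition the
    study used is not stated in the abstract)."""
    isodose_mask = dose >= ref_dose
    tv = target_mask.sum() * voxel_volume_cc                # target volume
    v_ri = isodose_mask.sum() * voxel_volume_cc             # reference isodose volume
    tv_ri = (target_mask & isodose_mask).sum() * voxel_volume_cc
    coverage = tv_ri / tv                                   # fraction of target covered
    ci_rtog = v_ri / tv                                     # RTOG-style ratio
    cn_paddick = tv_ri ** 2 / (tv * v_ri)                   # Paddick conformation number
    return coverage, ci_rtog, cn_paddick

# Toy 3D dose grid and spherical target (made-up geometry and numbers).
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
r2 = (xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2
target = r2 < 8 ** 2
dose = 60.0 * np.exp(-r2 / (2 * 12.0 ** 2))                 # peaked dose distribution
print(conformity_indices(dose, target, ref_dose=57.0, voxel_volume_cc=0.001))
```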

  10. Comparing electron tomography and HRTEM slicing methods as tools to measure the thickness of nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Alloyeau, D., E-mail: alloyeau.damien@gmail.com [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Laboratoire d' Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France); Ricolleau, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Oikawa, T. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); JEOL (Europe) SAS, Espace Claude Monet, 1 Allee de Giverny, 78290 Croissy-sur-Seine (France); Langlois, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Le Bouar, Y.; Loiseau, A. [Laboratoire d' Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France)

    2009-06-15

    Nanoparticles' morphology is a key parameter in the understanding of their thermodynamical, optical, magnetic and catalytic properties. In general, nanoparticles, observed in transmission electron microscopy (TEM), are viewed in projection so that the determination of their thickness (along the projection direction) with respect to their projected lateral size is highly questionable. To date, the widely used methods to measure nanoparticles thickness in a transmission electron microscope are to use cross-section images or focal series in high-resolution transmission electron microscopy imaging (HRTEM 'slicing'). In this paper, we compare the focal series method with the electron tomography method to show that both techniques yield similar particle thickness in a range of size from 1 to 5 nm, but the electron tomography method provides better statistics since more particles can be analyzed at one time. For this purpose, we have compared, on the same samples, the nanoparticles thickness measurements obtained from focal series with the ones determined from cross-section profiles of tomograms (tomogram slicing) perpendicular to the plane of the substrate supporting the nanoparticles. The methodology is finally applied to the comparison of CoPt nanoparticles annealed ex situ at two different temperatures to illustrate the accuracy of the techniques in detecting small particle thickness changes.

  11. Influence of detector collimation on SNR in four different MDCT scanners using a reconstructed slice thickness of 5 mm

    International Nuclear Information System (INIS)

    Verdun, F.R.; Pachoud, M.; Monnin, P.; Valley, J.-F.; Noel, A.; Meuli, R.; Schnyder, P.; Denys, A.

    2004-01-01

    The purpose of this paper is to compare the influence of detector collimation on the signal-to-noise ratio (SNR) for a 5.0 mm reconstructed slice thickness for four multi-detector row CT (MDCT) units. SNRs were measured on Catphan test phantom images from four MDCT units: a GE LightSpeed QX/I, a Marconi MX 8000, a Toshiba Aquilion and a Siemens Volume Zoom. Five-millimetre-thick reconstructed slices were obtained from acquisitions performed using detector collimations of 2.0-2.5 mm and 5.0 mm, 120 kV, a 360° tube rotation time of 0.5 s, a wide range of mA values, and pitch values in the ranges 0.75-0.85 and 1.25-1.5. For each set of acquisition parameters, a Wiener spectrum was also calculated. Statistical differences in SNR for the different acquisition parameters were evaluated using a Student's t-test (P<0.05). The influence of detector collimation on the SNR for a 5.0-mm reconstructed slice thickness is different for different MDCT scanners. At pitch values lower than unity, the use of a small detector collimation to produce 5.0-mm thick slices is beneficial for one unit and detrimental for another. At pitch values higher than unity, using a small detector collimation is beneficial for two units. One manufacturer uses different reconstruction filters when switching from a 2.5- to a 5.0-mm detector collimation. For a comparable reconstructed slice thickness, using a smaller detector collimation does not always reduce image noise. Thus, the impact of the detector collimation on image noise should be determined by standard deviation calculations, and also by assessing the power spectra of the noise. (orig.)
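
    The comparison above rests on SNR measurements and noise power (Wiener) spectra from uniform phantom images. A simplified sketch of both estimates follows; the ROI handling and normalisation use a common textbook form rather than the authors' exact protocol.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a uniform region of interest."""
    return roi.mean() / roi.std()

def noise_power_spectrum(rois, pixel_size_mm):
    """2D noise power (Wiener) spectrum averaged over several uniform ROIs:
    NPS = (pixel area / N_pixels) * <|FFT(roi - mean)|^2>  (simplified estimate)."""
    n = rois[0].shape[0]
    spectra = [np.abs(np.fft.fftshift(np.fft.fft2(r - r.mean()))) ** 2 for r in rois]
    return (pixel_size_mm ** 2 / (n * n)) * np.mean(spectra, axis=0)

# Toy example: simulated uniform-phantom ROIs with white noise (made-up values).
rng = np.random.default_rng(0)
rois = [100.0 + 10.0 * rng.standard_normal((64, 64)) for _ in range(16)]
print("SNR:", snr(rois[0]))
print("mean NPS value:", noise_power_spectrum(rois, pixel_size_mm=0.5).mean())
```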

  12. Clinical lymph node staging-Influence of slice thickness and reconstruction kernel on volumetry and RECIST measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fabel, M., E-mail: m.fabel@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Wulff, A., E-mail: a.wulff@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Heckel, F., E-mail: frank.heckel@mevis.fraunhofer.de [Fraunhofer MeVis, Universitaetsallee 29, 28359 Bremen (Germany); Bornemann, L., E-mail: lars.bornemann@mevis.fraunhofer.de [Fraunhofer MeVis, Universitaetsallee 29, 28359 Bremen (Germany); Freitag-Wolf, S., E-mail: freitag@medinfo.uni-kiel.de [Institute of Medical Informatics and Statistics, Brunswiker Strasse 10, 24105 Kiel (Germany); Heller, M., E-mail: martin.heller@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Biederer, J., E-mail: juergen.biederer@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Bolte, H., E-mail: hendrik.bolte@ukmuenster.de [Department of Nuclear Medicine, University Hospital Muenster, Albert-Schweitzer-Campus 1, Gebaeude A1, D-48149 Muenster (Germany)

    2012-11-15

    Purpose: Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. Today, the common standard in the measurement of tumour growth or shrinkage is one-dimensional RECIST 1.1. A proposed alternative method for therapy monitoring is computer-aided volumetric analysis. High reliability and accuracy of volumetry in lung metastases have been proven in experimental studies. However, other metastatic lesions such as enlarged lymph nodes are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) as well as short axis diameters (SAD) were compared to automated RECIST measurements. Materials and methods: Multislice CT of the chest, abdomen and pelvis of 15 patients with lymph node metastases of malignant melanoma were included. Raw data were reconstructed using different slice thicknesses (1-5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and were compared to a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences, and precision of measurements was computed as relative measurement deviation. Results: Mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant; however, a trend towards the middle soft tissue kernel could be observed. Between automated and manual short axis diameter (SAD, RECIST 1.1) and long axis diameter (LAD, RECIST 1.0) no

  13. Clinical lymph node staging—Influence of slice thickness and reconstruction kernel on volumetry and RECIST measurements

    International Nuclear Information System (INIS)

    Fabel, M.; Wulff, A.; Heckel, F.; Bornemann, L.; Freitag-Wolf, S.; Heller, M.; Biederer, J.; Bolte, H.

    2012-01-01

    Purpose: Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. Today, the common standard in the measurement of tumour growth or shrinkage is one-dimensional RECIST 1.1. A proposed alternative method for therapy monitoring is computer-aided volumetric analysis. High reliability and accuracy of volumetry in lung metastases have been proven in experimental studies. However, other metastatic lesions such as enlarged lymph nodes are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) as well as short axis diameters (SAD) were compared to automated RECIST measurements. Materials and methods: Multislice CT of the chest, abdomen and pelvis of 15 patients with lymph node metastases of malignant melanoma were included. Raw data were reconstructed using different slice thicknesses (1–5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and were compared to a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences, and precision of measurements was computed as relative measurement deviation. Results: Mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant; however, a trend towards the middle soft tissue kernel could be observed. Between automated and manual short axis diameter (SAD, RECIST 1.1) and long axis diameter (LAD, RECIST 1.0) no
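
    Both records above quantify volumetric and RECIST reproducibility with the absolute percentage error against a reference measurement. The one-line formula, with made-up example values:

```python
import numpy as np

def absolute_percentage_error(measured, reference):
    """APE (%) of a volume or diameter measurement against its reference."""
    return 100.0 * np.abs(measured - reference) / reference

# Illustrative lymph-node volume measurements (cm^3) at two slice thicknesses.
reference = 4.2
print(absolute_percentage_error(4.35, reference),   # thin-slice reconstruction
      absolute_percentage_error(4.80, reference))   # thick-slice reconstruction
```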

  14. Filter and slice thickness selection in SPECT image reconstruction

    International Nuclear Information System (INIS)

    Ivanovic, M.; Weber, D.A.; Wilson, G.A.; O'Mara, R.E.

    1985-01-01

    The choice of filter and slice thickness in SPECT image reconstruction, as a function of activity and of linear and angular sampling, was investigated in phantom and patient imaging studies. Reconstructed transverse and longitudinal spatial resolution of the system were measured using a line source in a water-filled phantom. Phantom studies included measurements of the Data Spectrum phantom; clinical studies included tomographic procedures in 40 patients undergoing imaging of the temporomandibular joint. Slices of the phantom and patient images were evaluated for spatial resolution, noise, and image quality. Major findings include: spatial resolution and image quality improve with increasing linear sampling frequency over the range of 4-8 mm/p in the phantom images; the best spatial resolution and image quality in clinical images were observed at a linear sampling frequency of 6 mm/p; the Shepp and Logan filter gives the best spatial resolution for phantom studies at the lowest linear sampling frequency; the smoothed Shepp and Logan filter provides the best quality images without loss of resolution at higher frequencies; and spatial resolution and image quality improve with increased angular sampling frequency in the phantom at 40 c/p but appear to be independent of angular sampling frequency at 400 c/p
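
    The findings above compare the Shepp-Logan filter and its smoothed variant for filtered backprojection. The sketch below builds the usual frequency-domain forms (ramp times a sinc window for Shepp-Logan, ramp times a Hann window for the smoothed variant) and applies one to a toy projection; the cutoff handling is simplified and not tied to the study's reconstruction software.

```python
import numpy as np

def recon_filter(n, cutoff=1.0, window="shepp-logan"):
    """Frequency response of a ramp filter with an optional smoothing window.
    Frequencies are in cycles/sample (Nyquist = 0.5); cutoff is a fraction of Nyquist."""
    f = np.fft.fftfreq(n)                        # cycles per sample
    fc = 0.5 * cutoff
    ramp = np.abs(f)
    if window == "shepp-logan":
        w = np.sinc(f / (2 * fc))                # sinc window (np.sinc includes pi)
    elif window == "hann":
        w = 0.5 * (1 + np.cos(np.pi * f / fc))   # smoother window, more noise suppression
    else:
        w = np.ones_like(f)
    h = ramp * w
    h[np.abs(f) > fc] = 0.0                      # zero beyond the chosen cutoff
    return h

def filter_projection(projection, window="shepp-logan"):
    """Apply the chosen filter to one sinogram row before backprojection."""
    H = recon_filter(projection.size, window=window)
    return np.real(np.fft.ifft(np.fft.fft(projection) * H))

# Toy projection: a box profile; smoother windows trade resolution for noise.
p = np.zeros(128); p[48:80] = 1.0
print(filter_projection(p, "shepp-logan")[:4])
```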

  15. CT liver volumetry using three-dimensional image data in living donor liver transplantation: Effects of slice thickness on volume calculation

    Science.gov (United States)

    Hori, Masatoshi; Suzuki, Kenji; Epstein, Mark L.; Baron, Richard L.

    2011-01-01

    The purpose was to evaluate the relationship between slice thickness and calculated volume in CT liver volumetry by comparing the results for images with various slice thicknesses, including three-dimensional images. Twenty adult potential liver donors (12 men, 8 women; mean age, 39 years; range, 24–64) underwent CT with a 64-section multi-detector row CT scanner after intravenous injection of contrast material. Four image sets with slice thicknesses of 0.625 mm, 2.5 mm, 5 mm, and 10 mm were used. First, a program developed in our laboratory for automated liver extraction was applied to the CT images, and the liver boundary was obtained automatically. Then, an abdominal radiologist reviewed all images on which the automatically extracted boundaries were superimposed, and edited the boundary on each slice to enhance the accuracy. Liver volumes were determined by counting the voxels within the liver boundary. Mean whole liver volumes estimated with CT were 1322.5 cm3 on 0.625-mm, 1313.3 cm3 on 2.5-mm, 1310.3 cm3 on 5-mm, and 1268.2 cm3 on 10-mm images. Volumes calculated for three-dimensional (0.625-mm-thick) images were significantly larger than those for thicker images (P < …). … volumetry. If not, three-dimensional images could be essential. PMID:21850689

  16. The influence of secondary reconstruction slice thickness on NewTom 3G cone beam computed tomography-based radiological interpretation of sheep mandibular condyle fractures.

    Science.gov (United States)

    Sirin, Yigit; Guven, Koray; Horasan, Sinan; Sencan, Sabri; Bakir, Baris; Barut, Oya; Tanyel, Cem; Aral, Ali; Firat, Deniz

    2010-11-01

    The objective of this study was to examine the diagnostic accuracy of different secondary reconstruction slice thicknesses of cone beam computed tomography (CBCT) for artificially created mandibular condyle fractures. A total of 63 sheep heads with or without condylar fractures were scanned with a NewTom 3G CBCT scanner. Multiplanar reformatted (MPR) views at 0.2-mm, 1-mm, 2-mm, and 3-mm secondary reconstruction slice thicknesses were evaluated by 7 observers. Inter- and intraobserver agreements were calculated with weighted kappa statistics. Receiver operating characteristic (ROC) curve analysis was used to statistically compare the area under the curve (AUC) for each slice thickness. The kappa coefficients varied from fair to excellent. The AUCs of the 0.2-mm and 1-mm slice thicknesses were found to be significantly higher than those of 2 mm and 3 mm for some types of fractures. CBCT was found to be accurate in detecting all variants of fractures at 0.2 mm and 1 mm. However, 2-mm and 3-mm slices were not suitable for detecting fissure, complete, and comminuted types of mandibular condyle fractures. Copyright © 2010 Mosby, Inc. All rights reserved.

  17. The interpolation method based on endpoint coordinate for CT three-dimensional image

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Ueno, Shigeru.

    1997-01-01

    Image interpolation is frequently used to bring the slice (inter-plane) resolution closer to the in-plane spatial resolution. Improved quality of reconstructed three-dimensional images can be attained with this technique as a result. Linear interpolation is a well-known and widely used method. The distance-image method, a non-linear interpolation technique in which CT-value images are converted to distance images, is also used. This paper describes a newly developed method that makes use of end-point coordinates: CT-value images are first converted to binary images by thresholding, and sequences of 1-valued pixels are then traced in the vertical or horizontal direction. Each such sequence is defined as a line segment with a start point and an end point. For each pair of corresponding line segments on adjacent slices, an intermediate line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
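
    As a rough illustration of the endpoint-coordinate idea described above, the sketch below (plain NumPy, simplified to at most one run of 1-valued pixels per row, whereas the paper pairs arbitrary vertical or horizontal segments) interpolates the start and end points of paired segments to compose an intermediate binary slice. The function and parameter names are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def interpolate_binary_slices(slice_a, slice_b, t=0.5):
        """Endpoint-coordinate interpolation between two thresholded (binary) CT
        slices, assuming at most one run of 1-valued pixels per row."""
        out = np.zeros_like(slice_a)
        for row in range(slice_a.shape[0]):
            cols_a = np.flatnonzero(slice_a[row])
            cols_b = np.flatnonzero(slice_b[row])
            if cols_a.size == 0 or cols_b.size == 0:
                continue  # nothing to pair in this row
            # interpolate the start and end points of the paired segments
            start = int(round((1 - t) * cols_a[0] + t * cols_b[0]))
            end = int(round((1 - t) * cols_a[-1] + t * cols_b[-1]))
            out[row, start:end + 1] = 1
        return out

    # usage: mid_slice = interpolate_binary_slices(binary_slice_k, binary_slice_k1, t=0.5)
    ```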

  18. Fan beam image reconstruction with generalized Fourier slice theorem.

    Science.gov (United States)

    Zhao, Shuangren; Yang, Kang; Yang, Kevin

    2014-01-01

    For parallel-beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection slice theorem). For the fan-beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We briefly introduced this method in a conference; this paper reintroduces the GFST method for fan-beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; the values in the Fourier plane are thereby calculated directly on Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is then applied to the Fourier-plane image, yielding the reconstructed image in the spatial domain. The reconstructed image is compared between the GFST method and the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is applied to different data sets: the GFST method interpolates the projection data, while the FBP method interpolates the filtered projection data; (2) the filtering is done in different places: the GFST filters in the Fourier domain, while the FBP method applies the ramp filter to the projections. The resolution of the ramp filter varies with location, whereas the filter in the Fourier domain yields a resolution that is invariant with location. One advantage of the GFST method over the FBP method is that, in the short-scan situation, an exact solution can be obtained with the GFST method but not with the FBP method. The computational cost of both the GFST and FBP methods is O(N^3), where N is the number of pixels in one dimension.
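
    The generalized fan-beam theorem is more involved, but the parallel-beam special case it builds on can be checked numerically in a few lines of NumPy: for an axis-aligned (0-degree) view, the 1-D FFT of the projection equals the corresponding central line of the 2-D FFT of the image. This is only a sketch of the classical theorem, not of the GFST itself.

    ```python
    import numpy as np

    # Parallel-beam Fourier slice theorem, axis-aligned case
    img = np.random.rand(128, 128)
    projection = img.sum(axis=0)               # project along y (a 0-degree view)
    slice_1d = np.fft.fft(projection)          # 1-D FFT of the projection
    central_row = np.fft.fft2(img)[0, :]       # ky = 0 line of the 2-D spectrum
    print(np.allclose(slice_1d, central_row))  # True (up to numerical error)
    ```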

  19. Generalized Fourier slice theorem for cone-beam image reconstruction.

    Science.gov (United States)

    Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang

    2015-01-01

    Cone-beam reconstruction theory was proposed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956 and leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory and the above-mentioned Fourier slice theory for fan-beam geometry, the Fourier slice theorem in cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome, namely that the value at the origin of Fourier space is of the indeterminate form 0/0; this 0/0 type of limit is properly handled. As examples, implementation results for the single-circle and two-perpendicular-circle source orbits are shown. In cone-beam reconstruction, if an interpolation process is used, the number of calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to the filtered back-projection method, where N is the image size in one dimension. However, the interpolation process can be avoided, in which case the number of calculations is O(N^5).

  20. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of the PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used in reconstruction to restore the missing information between image slices, which compensates for the shortcoming of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.
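
    The core operation, cubic-spline interpolation along the slice axis to fill in missing inter-slice information, can be sketched with SciPy as below. Function and variable names are illustrative; this is the general technique, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def interpolate_slices(volume, z_known, z_new):
        """Cubic-spline interpolation of an image stack along the slice (z) axis.
        volume: array of shape (nz, ny, nx); z_known: slice positions (length nz);
        z_new: positions at which new slices are wanted."""
        spline = CubicSpline(z_known, volume, axis=0)   # fit splines along z
        return spline(z_new)                            # shape (len(z_new), ny, nx)

    # usage: dense = interpolate_slices(pet_stack, np.arange(0, 40, 4.0), np.arange(0, 37, 1.0))
    ```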

  1. Improving the visualization of electron-microscopy data through optical flow interpolation

    KAUST Repository

    Carata, Lucian

    2013-01-01

    Technical developments in neurobiology have reached a point where the acquisition of high-resolution images representing individual neurons and synapses becomes possible. For this, the brain tissue samples are sliced using a diamond knife and imaged with electron microscopy (EM). However, the technique achieves a low resolution in the cutting direction, due to limitations of the mechanical process, making a direct visualization of a dataset difficult. We aim to increase the depth resolution of the volume by adding new image slices interpolated from the existing ones, without requiring modifications to the EM image-capturing method. As classical interpolation methods do not provide satisfactory results on this type of data, the current paper proposes re-framing the problem in terms of motion volumes, considering the depth axis as a temporal axis. An optical flow method is adapted to estimate the motion vectors of pixels in the EM images, and this information is used to compute and insert multiple new images at certain depths in the volume. We evaluate the visualization results in comparison with interpolation methods currently used on EM data, transforming the highly anisotropic original dataset into a dataset with a larger depth resolution. The interpolation based on optical flow better reveals neurite structures with realistic undistorted shapes and makes it easier to map neuronal connections. © 2011 ACM.
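
    The core idea, treating the depth axis as time and warping along an estimated flow field, can be sketched with OpenCV as follows. This is only the crude midpoint version (Farneback flow plus a backward warp of one slice part-way along the flow); the paper's method handles occlusions and inserts several intermediate slices. Inputs are assumed to be single-channel 8-bit images, and all names are illustrative.

    ```python
    import cv2
    import numpy as np

    def interpolate_em_slice(slice_a, slice_b, t=0.5):
        """Estimate a slice lying a fraction t of the way from slice_a to slice_b
        by warping slice_a along t times the dense optical flow (approximate)."""
        # args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(slice_a, slice_b, None,
                                            0.5, 4, 21, 3, 5, 1.1, 0)
        h, w = slice_a.shape
        gy, gx = np.mgrid[0:h, 0:w].astype(np.float32)
        # backward mapping: sample slice_a at positions displaced by -t * flow
        map_x = gx - t * flow[..., 0]
        map_y = gy - t * flow[..., 1]
        return cv2.remap(slice_a, map_x, map_y, cv2.INTER_LINEAR)
    ```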

  2. Emergency department CT screening of patients with nontraumatic neurological symptoms referred to the posterior fossa: comparison of thin versus thick slice images.

    Science.gov (United States)

    Kamalian, Shervin; Atkinson, Wendy L; Florin, Lauren A; Pomerantz, Stuart R; Lev, Michael H; Romero, Javier M

    2014-06-01

    Evaluation of the posterior fossa (PF) on 5-mm-thick helical CT images (the current default) has improved diagnostic accuracy compared to 5-mm sequential CT images; however, 5-mm-thick images may not be ideal for PF pathology due to volume averaging of rapid changes in anatomy in the Z-direction. Therefore, we sought to determine whether routine review of 1.25-mm-thin helical CT images has superior accuracy in screening for nontraumatic PF pathology. MRI proof of diagnosis was obtained within 6 h of helical CT acquisition for 90 consecutive ED patients with, and 88 without, posterior fossa lesions. Helical CT images were post-processed at 1.25- and 5-mm axial slice thickness. Two neuroradiologists blinded to the clinical/MRI findings reviewed both image sets. Interobserver agreement and accuracy were rated using kappa statistics and ROC analysis, respectively. Of the 90/178 (51 %) who were MR positive, 60/90 (66 %) had stroke and 30/90 (33 %) had other etiologies. There was excellent interobserver agreement (κ > 0.97) for both thick and thin slice assessments. The accuracy, sensitivity, and specificity for 1.25-mm images were 65, 44, and 84 %, respectively, and for 5-mm images were 67, 45, and 85 %, respectively. The diagnostic accuracy was not significantly different (p > 0.5). In this cohort of patients with nontraumatic neurological symptoms referred to the posterior fossa, 1.25-mm-thin slice CT reformatted images do not have superior accuracy compared to 5-mm-thick images. This information has implications for optimizing resource utilization and efficiency in a busy emergency room. Review of 1.25-mm-thin images may help diagnostic accuracy only when review of the default 5-mm-thick images is inconclusive.

  3. Influence of γ-irradiation on drying of slice potato

    International Nuclear Information System (INIS)

    Wang Jun; Chao Yan; Fu Junjie; Wang Jianping

    2001-01-01

    A new technology is introduced for drying food products with hot air after pretreatment by irradiation. The influence of irradiation dose, hot-air temperature, and slice thickness on the dehydration rate and temperature of irradiated potato slices was studied. It is concluded that these three factors, irradiation dose, hot-air temperature and slice thickness, all affect the dehydration rate and the temperature of the potato slices. The higher the dose, the greater the dehydration rate of the potato and the higher the temperature of the potato slices. (authors)

  4. Dosimetric variation due to CT inter-slice spacing in four-dimensional carbon beam lung therapy

    International Nuclear Information System (INIS)

    Kumagai, Motoki; Mori, Shinichiro; Kandatsu, Susumu; Baba, Masayuki; Sharp, Gregory C; Asakura, Hiroshi; Endo, Masahiro

    2009-01-01

    When CT data with a large slice thickness are used in treatment planning, geometric uncertainty may induce dosimetric errors. We evaluated carbon ion dose variations due to different CT slice thicknesses using a four-dimensional (4D) carbon ion beam dose calculation, and compared results between ungated and gated respiratory strategies. Seven lung patients were scanned in 4D mode with a 0.5 mm slice thickness using a 256-multi-slice CT scanner. CT images were averaged over various numbers of images to simulate reconstructed images with various slice thicknesses (0.5-5.0 mm). Two scenarios were studied (respiratory-ungated and -gated strategies). Range compensators were designed for each of the CT volumes with coarse inter-slice spacing to cover the internal target volume (ITV), as defined from 4DCT. The carbon ion dose distribution was computed for each resulting ITV on the 0.5 mm slice 4DCT data. The accumulated dose distribution was then calculated using deformable registration for 4D dose assessment. The magnitude of over- and under-dosage was found to be larger with range compensators designed with a coarser inter-slice spacing than with those designed on the 0.5 mm slice thickness data. Although no under-dosage was observed within the clinical target volume (CTV) region, D95 remained at over 97% of the prescribed dose for the ungated strategy and 95% for the gated strategy for all slice thicknesses. An inter-slice spacing of less than 3 mm may be able to minimize dose variation between the ungated and gated strategies. Although volumes with increased inter-slice spacing may reduce geometrical accuracy at a certain respiratory phase, this does not significantly affect delivery of the accumulated dose to the target during the treatment course.

  5. NMR surprizes with thin slices and strong gradients

    Energy Technology Data Exchange (ETDEWEB)

    Gaedke, Achim; Kresse, Benjamin [Institute of Condensed Matter Physics, Technische Universitaet Darmstadt (Germany); Nestle, Nikolaus

    2008-07-01

    In the context of our work on diffusion-relaxation coupling in thin excited slices, we perform NMR experiments in static magnetic field gradients up to 200 T/m. For slice thicknesses in the range of 10 μm, the frequency bandwidth of the excited slices becomes sufficiently narrow that free induction decays (FIDs) become observable despite the presence of the strong static gradient. The observed FIDs were also simulated using standard methods from MRI physics. Possible effects of diffusion during the FID duration are still minor at this slice thickness in water but might become dominant for smaller slices or more diffusive media. Furthermore, the detailed excitation structure of the RF pulses was studied in profiling experiments over the edge of a plane liquid cell. Side-lobe effects on the slices will be discussed along with approaches to control them. The spatial resolution achieved in the profiling experiments furthermore allows the identification of thermal expansion phenomena in the NMR magnet. Measures to reduce the temperature drift problems are presented.

  6. The bases for the use of interpolation in helical computed tomography: an explanation for radiologists

    International Nuclear Information System (INIS)

    Garcia-Santos, J. M.; Cejudo, J.

    2002-01-01

    In contrast to conventional computed tomography (CT), helical CT requires the application of interpolators to achieve image reconstruction. This is because the projections processed by the computer are not situated in the same plane. Since the introduction of helical CT, a number of interpolators have been designed in an attempt to keep the thickness of the reconstructed section as close as possible to the thickness of the X-ray beam. The purpose of this article is to discuss the function of these interpolators, stressing the advantages and considering the possible drawbacks of high-grade curved interpolators with respect to standard linear interpolators. (Author) 7 refs
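
    For readers unfamiliar with what such an interpolator does, the simplest case (360-degree linear interpolation) can be sketched as follows: for each view angle, the two projections measured at the same angle on consecutive rotations, one above and one below the desired slice position, are blended linearly. This is a generic textbook-style sketch, not the article's higher-grade curved interpolators, and all names are illustrative.

    ```python
    def z_interpolate_projection(p_before, p_after, z_before, z_after, z_slice):
        """360-degree linear z-interpolation of helical CT projection data.
        p_before/p_after: projections at the same view angle measured at axial
        positions z_before < z_slice < z_after (lists or NumPy arrays)."""
        w = (z_slice - z_before) / (z_after - z_before)  # 0 at z_before, 1 at z_after
        return (1.0 - w) * p_before + w * p_after
    ```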

  7. Imaging skeletal anatomy of injured cervical spine specimens: comparison of single-slice vs multi-slice helical CT

    Energy Technology Data Exchange (ETDEWEB)

    Obenauer, S.; Alamo, L.; Herold, T.; Funke, M.; Kopka, L.; Grabbe, E. [Department of Radiology, Georg August-University Goettingen, Robert-Koch-Strasse 40, 37075 Goettingen (Germany)

    2002-08-01

    Our objective was to compare a single-slice CT (SS-CT) scanner with a multi-slice CT (MS-CT) scanner in the depiction of osseous anatomic structures and fractures of the upper cervical spine. Two cervical spine specimens with artificial trauma were scanned with a SS-CT scanner (HighSpeed, CT/i, GE, Milwaukee, Wis.) by using various collimations (1, 3, 5 mm) and pitch factors (1, 1.5, 2, 3) and a four-slice helical CT scanner (LightSpeed, QX/i, GE, Milwaukee, Wis.) by using various table speeds ranging from 3.75 to 15 mm/rotation for a pitch of 0.75 and from 7.5 to 30 mm/rotation for a pitch of 1.5. Images were reconstructed with an interval of 1 mm. Sagittal and coronal multiplanar reconstructions of the primary and reconstructed data set were performed. For MS-CT a tube current resulting in equivalent image noise as with SS-CT was used. All images were judged by two observers using a 4-point scale. The best image quality for SS-CT was achieved with the smallest slice thickness (1 mm) and a pitch smaller than 2 resulting in a table speed of up to 2 mm per gantry rotation (4 points). A reduction of the slice thickness rather than of the table speed proved to be beneficial at MS-CT. Therefore, the optimal scan protocol in MS-CT included a slice thickness of 1.25 mm with a table speed of 7.5 mm/360 using a pitch of 1.5 (4 points), resulting in a faster scan time than when a pitch of 0.75 (4 points) was used. This study indicates that MS-CT could provide equivalent image quality at approximately four times the volume coverage speed of SS-CT. (orig.)

  8. SU-E-I-10: Investigation On Detectability of a Small Target for Different Slice Direction of a Volumetric Cone Beam CT Image

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C; Han, M; Baek, J [Yonsei University, Incheon (Korea, Republic of)

    2015-06-15

    Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON 1), Hanning-weighted ramp filter with Fourier interpolation (RECON 2), ramp filter with linear interpolation (RECON 3), and ramp filter with Fourier interpolation (RECON 4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: Detection-SNR of coronal images varies for different backprojection methods, while axial images have a similar detection-SNR. Detection-SNR² ratios of coronal and axial images in RECON 1 and RECON 2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON 1 and RECON 2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON 1 and RECON 2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201
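
    The detection-SNR quoted above can be computed with the standard channelized Hotelling observer formula, sketched below in NumPy. The D-DOG channel construction itself is not reproduced here, so `channels` is assumed to be given; all names are illustrative rather than the authors' code.

    ```python
    import numpy as np

    def cho_snr(signal_imgs, noise_imgs, channels):
        """Channelized Hotelling observer detectability (detection SNR).
        signal_imgs, noise_imgs: (n_images, n_pixels) flattened slices with and
        without the target; channels: (n_pixels, n_channels) channel templates."""
        vs = signal_imgs @ channels                     # channel outputs, signal present
        vn = noise_imgs @ channels                      # channel outputs, signal absent
        dv = vs.mean(axis=0) - vn.mean(axis=0)          # mean channel-output difference
        S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
        return float(np.sqrt(dv @ np.linalg.solve(S, dv)))
    ```

    With coronal and axial slices channelized separately, the squared ratio of the two returned values corresponds to the detection-SNR² ratios reported in the abstract.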

  9. SU-E-I-10: Investigation On Detectability of a Small Target for Different Slice Direction of a Volumetric Cone Beam CT Image

    International Nuclear Information System (INIS)

    Lee, C; Han, M; Baek, J

    2015-01-01

    Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON 1), Hanning-weighted ramp filter with Fourier interpolation (RECON 2), ramp filter with linear interpolation (RECON 3), and ramp filter with Fourier interpolation (RECON 4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: Detection-SNR of coronal images varies for different backprojection methods, while axial images have a similar detection-SNR. Detection-SNR² ratios of coronal and axial images in RECON 1 and RECON 2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON 1 and RECON 2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON 1 and RECON 2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201

  10. Demonstration of the pulmonary interlobar fissures on multiplanar reformatted images with 64-slices spiral CT

    International Nuclear Information System (INIS)

    Wang Yafei; Chen Yerong; Shan Xiuhong; Tang Zhiyang; Ni Enzhen; Huang Hao; Wu Shuchun

    2009-01-01

    Objective: To determine the optimal orientation and slice thickness of reformatted images for visualizing the interlobar fissures on multiplanar reformation (MPR) images, and to recommend an MPR imaging protocol for visualizing the interlobar fissures in clinical practice. Methods: 64-slice CT scans of the chest were obtained in 300 patients without pulmonary diseases. Axial, sagittal and coronal images were reformatted at 1, 2, 3 and 7 mm slice thickness from the raw volume data. Three experienced radiologists evaluated all of the MPR images in the lung window and compared the differences in visualization of the interlobar fissures among the three reformatted orientations and the different slice thicknesses with the Fisher test and Friedman test. Results: Fissures on sagittal MPR images using 1, 2, 3, and 7 mm reformatted slice thickness appeared as a fine line, and the preference value analysis showed that the MPR images with a 3 mm reformatted slice thickness were the best for visualizing the interlobar fissures. Compared to the sagittal orientation, the coronal was not as good and the axial was the worst among the three orientations. The coronal images with a 3 mm reformatted slice thickness were slightly inferior to the sagittal images. The right horizontal fissures were observed as a fine line in all coronal images in 94.0% (282) of cases and in some of the images in 6.0% (18) of cases; the right oblique fissures were displayed as a fine line in coronal images in 2.3% (7) of cases and in some images in 85.0% (255) of cases; the left oblique fissures were displayed as a fine line in some coronal images in 35.7% (107) of cases and as a coarse line in 64.3% (193) of cases. On axial MPR images using 3 mm reformatted slice thickness, the right oblique fissures and the left oblique fissures were displayed as a fine line in some axial images in 79.3% (238) and 81.0% (243) of cases respectively; none of the images showed the horizontal fissures as a fine line. There was

  11. Study of Energy Consumption of Potato Slices During Drying Process

    Directory of Open Access Journals (Sweden)

    Hafezi Negar

    2015-06-01

    One of the new methods of food drying, infrared heating under vacuum, aims to increase the drying rate and maintain the quality of the dried product. In this study, potato slices were dried using vacuum-infrared drying. Experiments were performed with infrared lamp power levels of 100, 150 and 200 W, absolute pressure levels of 20, 80, 140 and 760 mmHg, and three slice thicknesses of 1, 2 and 3 mm, in three repetitions. The results showed that the infrared lamp power, absolute pressure and slice thickness have important effects on the drying of potato. With increasing radiation power, decreasing absolute pressure (i.e. a stronger vacuum in the dryer chamber) and decreasing thickness of the potato slices, the drying time and the amount of energy consumed are reduced. Regarding thermal utilization efficiency, the results indicated that thermal efficiency increased with increasing infrared radiation power and decreasing absolute pressure.

  12. The cause of the artifact in 4-slice helical computed tomography

    International Nuclear Information System (INIS)

    Taguchi, Katsuyuki; Aradate, Hiroshi; Saito, Yasuo; Zmora, Ilan; Han, Kyung S.; Silver, Michael D.

    2004-01-01

    The causes of the image artifacts in 4-slice helical computed tomography have been discussed as follows: (1) changeover in the pairs of data used in z interpolation, (2) the sampling interval in z, and (3) the cone angle. This study analyzes the first two causes of the artifact and describes how the current algorithm [K. Taguchi and H. Aradate, Radiology 205P, 390 (1997); 205P, 618 (1997); Med. Phys. 25, 550-561 (1998); H. Hu, ibid. 26, 5-18 (1999); S. Schaller et al., IEEE Trans. Med. Imaging 19, 822-834 (2000); K. Taguchi, Ph.D. thesis, University of Tsukuba, 2002] solves the problem. An interpolated sinogram for a slice at the edge of a ball phantom shows a discontinuity caused by the changeover. If the streak artifact in the reconstructed image is extended, it crosses the focus orbit at the corresponding projection angle. Applying z filtering can reduce these artifacts through its feathering effect and by mixing data obtained at different cone angles; the best results are obtained when z filtering is applied to densely sampled helical data
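
    The "feathering" behaviour of z filtering mentioned above comes from replacing the hard changeover between interpolation pairs with a smooth weighting of all samples near the desired slice position. The sketch below shows a generic triangular z-filter in NumPy; it illustrates the idea only and is not the vendor algorithm analysed in the paper.

    ```python
    import numpy as np

    def z_filter(projections, z_positions, z_slice, width):
        """Triangular z-filter: weighted mix of projections measured at z_positions,
        with weights falling linearly to zero at distance `width` from z_slice."""
        d = np.abs(np.asarray(z_positions, dtype=float) - z_slice)
        w = np.clip(1.0 - d / width, 0.0, None)        # triangular weights
        assert w.sum() > 0, "no samples fall inside the filter width"
        w = w / w.sum()                                # normalise (feathering)
        return np.tensordot(w, np.asarray(projections, dtype=float), axes=1)
    ```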

  13. Strip interpolation in silicon and germanium strip detectors

    International Nuclear Information System (INIS)

    Wulf, E. A.; Phlips, B. F.; Johnson, W. N.; Kurfess, J. D.; Lister, C. J.; Kondev, F.; Physics; Naval Research Lab.

    2004-01-01

    The position resolution of double-sided strip detectors is limited by the strip pitch, and a reduction in strip pitch necessitates more electronics. Improved position resolution would improve the imaging capabilities of Compton telescopes and PET detectors. Digitizing the preamplifier waveform yields more information than can be extracted with regular shaping electronics. In addition to the energy, depth of interaction, and which strip was hit, the digitized preamplifier signals can locate the interaction position to less than the strip pitch of the detector by looking at induced signals in neighboring strips. This allows the position of the interaction to be interpolated in three dimensions and improves the imaging capabilities of the system. In a 2 mm thick silicon strip detector with a strip pitch of 0.891 mm, strip interpolation located the interaction of 356 keV gamma rays to 0.3 mm FWHM. In a 2 cm thick germanium detector with a strip pitch of 5 mm, strip interpolation of 356 keV gamma rays yielded a position resolution of 1.5 mm FWHM
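
    The paper extracts sub-pitch positions by fitting the full digitized preamplifier waveforms; a much cruder but related idea is to use the asymmetry of the induced signal amplitudes on the two neighbouring strips. The estimator below is purely illustrative (names, the linear form, and the scaling are assumptions, not the authors' method).

    ```python
    def interpolate_hit_position(q_left, q_hit, q_right, pitch, x_hit_center):
        """Illustrative sub-pitch position estimate from induced-signal amplitudes
        on the two neighbours of the collecting strip (linear asymmetry model)."""
        total = max(q_left + q_hit + q_right, 1e-12)   # guard against division by zero
        asym = (q_right - q_left) / total              # -1 .. +1 across the strip
        return x_hit_center + 0.5 * pitch * asym
    ```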

  14. Exposure (mAs) optimisation of a multi-detector CT protocol for hepatic lesion detection: are thinner slices better?

    International Nuclear Information System (INIS)

    Dobeli, Karen L.; Lewis, Sarah J.; Meikle, Steven R.; Brennan, Patrick C.; Thiele, David L.

    2014-01-01

    The purpose of this work was to determine the exposure-optimised slice thickness for hepatic lesion detection with CT. A phantom containing spheres (diameter 9.5, 4.8 and 2.4 mm) with CT density 10 HU below the background (50 HU) was scanned at 125, 100, 75 and 50 mAs. Data were reconstructed at 5-, 3- and 1-mm slice thicknesses. Noise, contrast-to-noise ratio (CNR), area under the curve (AUC) as calculated using receiver operating characteristic analysis, and sensitivity representing lesion detection were calculated and compared. Compared with the 125 mAs/5 mm slice thickness setting, significant reductions in AUC were found for 75 mAs (P<0.01) and 50 mAs (P<0.05) at 1- and 3-mm thicknesses, respectively; sensitivity for the 9.5-mm sphere was significantly reduced for 75 (P<0.05) and 50 mAs (P<0.01) at 1-mm thickness; sensitivity for the 4.8-mm sphere was significantly lower for 100, 75 and 50 mAs at all three slice thicknesses (P<0.05). The 2.4-mm sphere was rarely detected. At each slice thickness, noise at 100, 75 and 50 mAs exposures was approximately 10, 30 and 50% higher, respectively, than that at 125 mAs exposure. CNRs decreased in an irregular manner with reductions in exposure and slice thickness. This study demonstrated no advantage to using slices below 5 mm thickness, and consequently thinner slices are not necessarily better.

  15. Design and Development of a tomato Slicing Machine

    OpenAIRE

    Kamaldeen Oladimeji Salaudeen; Awagu E. F.

    2012-01-01

    The principle of slicing was reviewed and a tomato slicing machine was developed based on appropriate technology. Locally available materials such as wood, stainless steel and mild steel were used in the fabrication. The machine was made to cut tomatoes into slices 2 cm thick. The capacity of the machine is 540.09 g per minute and its performance efficiency is 70%.

  16. Multivariate interpolation

    Directory of Open Access Journals (Sweden)

    Pakhnutov I.A.

    2017-04-01

    The paper deals with iterative interpolation methods in the form of simple recursive procedures defined by a set of basic functions (the interpolation basis), which need not be real-valued. These basis functions are of essentially arbitrary type, chosen according to the wishes and considerations of the user. The studied interpolant construction is notably versatile: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basic interpolation functions is as wide as possible, since it is subject only to nonessential restrictions. The interpolation method considered coincides, in particular, with traditional polynomial interpolation (mimicking the Lagrange method) in the real one-dimensional case, or with rational, exponential, etc. interpolation in other cases. The interpolation as an iterative process is, in fact, fairly flexible and allows one procedure to change the type of interpolation depending on the node number in a given set. Linear interpolation basis options (and perhaps some nonlinear ones) allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices, and the interpolated data can also be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space and in the space of square matrices with vector-valued source data.

  17. Efficacy of UV-C irradiation for inactivation of food-borne pathogens on sliced cheese packaged with different types and thicknesses of plastic films.

    Science.gov (United States)

    Ha, Jae-Won; Back, Kyeong-Hwan; Kim, Yoon-Hee; Kang, Dong-Hyun

    2016-08-01

    In this study, the efficacy of UV-C light for inactivating Escherichia coli O157:H7, Salmonella Typhimurium, and Listeria monocytogenes on inoculated sliced cheese packaged with 0.07 mm films of polyethylene terephthalate (PET), polyvinylchloride (PVC), polypropylene (PP), and polyethylene (PE) was investigated. The results show that, compared with PET and PVC, PP and PE films allowed significantly greater reductions in the levels of the three pathogens relative to inoculated but non-treated controls. Therefore, PP and PE films of different thicknesses (0.07 mm, 0.10 mm, and 0.13 mm) were then evaluated for pathogen reduction on inoculated sliced cheese samples. In contrast to the 0.10 and 0.13 mm films, the 0.07 mm thick PP and PE films showed no statistically significant difference in reduction compared to non-packaged treated samples. Moreover, there were no statistically significant differences between the efficacy of the PP and PE films. These results suggest that suitably adjusted PP or PE film packaging in conjunction with UV-C radiation can be applied to control foodborne pathogens in the dairy industry. Copyright © 2016. Published by Elsevier Ltd.

  18. Clinical usefulness of facial soft tissues thickness measurement using 3D computed tomographic images

    International Nuclear Information System (INIS)

    Jeong, Ho Gul; Kim, Kee Deog; Hu, Kyung Seok; Lee, Jae Bum; Park, Hyok; Han, Seung Ho; Choi, Seong Ho; Kim, Chong Kwan; Park, Chang Seo

    2006-01-01

    To evaluate the clinical usefulness of facial soft tissue thickness measurement using 3D computed tomographic images. One cadaver with sound facial soft tissues was chosen for the study. The cadaver was scanned with a helical CT under the following scanning protocols for slice thickness and table speed: 3 mm and 3 mm/sec, 5 mm and 5 mm/sec, 7 mm and 7 mm/sec. The acquired data were reconstructed at 1.5, 2.5 and 3.5 mm reconstruction intervals respectively, and the images were transferred to a personal computer. Using a program developed to measure facial soft tissue thickness on 3D images, the facial soft tissue thickness was measured. After the measurements were repeated ten times, repeated-measures analysis of variance (ANOVA) was adopted to compare and analyze the measurements obtained with the three scanning protocols. Comparison according to the areas was analysed by the Mann-Whitney test. There were no statistically significant intraobserver differences in the measurements of the facial soft tissue thickness using the three scanning protocols (p>0.05). There were no statistically significant differences between the measurements at the 3 mm slice thickness and those at the 5 mm and 7 mm slice thicknesses (p>0.05). There were statistical differences in 14 of the total 30 measured points for the 5 mm slice thickness and 22 for the 7 mm slice thickness. Facial soft tissue thickness measurement using 3D images of 7 mm slice thickness is acceptable clinically, but 5 mm slice thickness is recommended for more accurate measurement

  19. Analysis of aliasing artifacts in 16-slice helical CT

    International Nuclear Information System (INIS)

    Chen Wei; Liu Jingkang; Ou Xiaoguang; Li Wenzheng; Liao Weihua; Yan Ang

    2006-01-01

    Objective: To recognize the features of aliasing artifacts on CT images and to investigate the effects of imaging parameters on the magnitude of these artifacts. Methods: An adult dry skull was placed in a plastic water-filled container and scanned with a PHILIPS 16-slice helical CT. All the transaxial images acquired using several different acquisition or reconstruction parameters were examined for comparative assessment of the aliasing artifacts. Results: The aliasing artifacts could be seen in most instances and were characterized as spoke-like patterns emanating from the edges of a high-contrast structure where its radius varies sharply in the longitudinal direction. The images scanned with pitches of 0.3, 0.6 and 0.9 all showed aliasing artifacts, and their severity increased as the pitch increased (detector combination 16 x 1.5, reconstruction thickness 2 mm). There were more significant aliasing artifacts on the images reconstructed with 0.8 mm slice width compared with 1-mm slice width, and no aliasing artifacts were observed on the images reconstructed with 2-mm slice width (detector combination 16 x 0.75, pitch 0.6). No artifacts were perceived on the images scanned with detector combination 16 x 0.75, while they were evident with the use of detector combination 16 x 1.5 (pitch 0.6, reconstruction thickness 2 mm). The degree of aliasing artifacts was unaltered when the reconstruction interval and tube current changed. Conclusions: Aliasing artifacts are caused by undersampling. When the operator judiciously chooses a thinner sampling thickness, a lower pitch and a much wider reconstruction thickness, aliasing artifacts can be effectively mitigated or suppressed. (authors)

  20. Diagnostic limitations of 10 mm thickness single-slice computed tomography for patients with suspected appendicitis

    International Nuclear Information System (INIS)

    Kaidu, Motoki; Oyamatu, Manabu; Sato, Kenji; Saitou, Akira; Yamamoto, Satoshi; Yoshimura, Norihiko; Sasai, Keisuke

    2008-01-01

    The aim of this retrospective analysis was to evaluate the accuracy of 10 mm thickness single helical computed tomography (CT) examination for confirming the diagnosis of appendicitis or providing a diagnosis other than appendicitis, including underlying periappendical neoplasms. From April 1, 2001 to March 30, 2005, a total of 272 patients with suspected appendicitis underwent CT examinations. Of the 272 patients, 106 (39%) underwent surgery. Seven CT examinations for seven patients were excluded because of inconsistency of the CT protocol. We therefore reviewed 99 CT images (99 patients) with correlation to surgical-pathological findings to clarify the diagnostic accuracy of CT examinations. We compared the postoperative diagnosis with the preoperative CT report. The final diagnoses were confirmed by macroscopic findings at surgery and pathological evaluations if necessary. Of the 99 patients, 87 had acute appendicitis at surgery. The sensitivity, specificity, and accuracy of CT were 98.9%, 75.0%, and 96.0%, respectively. The positive predictive value and negative predictive value were 96.6% and 90.0%, respectively. Among nine patients in the true-negative category, five had colon cancers; and among three patients in the false-positive category, two had cancer of the cecal-appendiceal region as the underlying disease. CT examination is useful for patients with suspected appendicitis, but radiologists should be aware of the limitation of thick-sliced single helical CT. They should also be aware of the possibility of other diseases, including coincident abdominal neoplasms and underlying cecal-appendiceal cancer. (author)

  1. [Standardization of production of process Notopterygii Rhizoma et Radix slices].

    Science.gov (United States)

    Sun, Zhen-Yang; Wang, Ying-Zi; Nie, Rui-Jie; Zhang, Jing-Zhen; Wang, Si-Yu

    2017-12-01

    Notopterol, isoimperatorin, volatile oil and extract (water and ethanol) were used as the indices in this study to investigate the effects of different softening methods, slice thicknesses and drying methods on the quality of Notopterygii Rhizoma et Radix slices, and the experimental data were analyzed by the homogeneous distance evaluation method. The results showed that different softening, cutting and drying processes could affect the content of the five components in Notopterygii Rhizoma et Radix slices. The best processing technology for Notopterygii Rhizoma et Radix slices was as follows: non-medicinal parts were removed; mildewed, rotten and moth-eaten parts were removed; the herbs were washed with flowing drinking water and stacked in the drug pool; the moistening method was used for softening, in which 1/8 volume of water was sprayed for every 1 kg of herbs every 2 h, with the upper part of the herbs covered with clean, moist cotton; after about 12 h of moistening to appropriate softness, the herbs were cut into thick slices (2-4 mm), dried with blast drying for 4 h at 50 ℃, and turned over twice during drying. The process is practical and provides an experimental basis for the standardization of the processing of Notopterygii Rhizoma et Radix, which is of great significance for improving the quality of Notopterygii Rhizoma et Radix slices. Copyright© by the Chinese Pharmaceutical Association.

  2. Data-adapted moving least squares method for 3-D image interpolation

    International Nuclear Information System (INIS)

    Jang, Sumi; Lee, Yeon Ju; Jeong, Byeongseon; Nam, Haewon; Lee, Rena; Yoon, Jungho

    2013-01-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons. (paper)
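
    For orientation, the baseline (non-adapted) moving least squares method that the paper modifies can be sketched in one dimension: at each evaluation point, a low-degree polynomial is fitted by weighted least squares with a locality weight centred on that point. The data-adapted weighting that is the paper's contribution is not reproduced here, and all names are illustrative.

    ```python
    import numpy as np

    def mls_interpolate(x_nodes, f_nodes, x_eval, degree=2, h=1.0):
        """Classical 1-D moving least squares with Gaussian locality weights of
        width h; returns the local polynomial fit evaluated at each x_eval."""
        x_nodes = np.asarray(x_nodes, dtype=float)
        f_nodes = np.asarray(f_nodes, dtype=float)
        out = np.empty(len(x_eval))
        for k, x in enumerate(x_eval):
            w = np.exp(-((x_nodes - x) / h) ** 2)            # locality weights
            B = np.vander(x_nodes - x, degree + 1)            # shifted monomial basis
            coeff, *_ = np.linalg.lstsq(B * np.sqrt(w)[:, None],
                                        f_nodes * np.sqrt(w), rcond=None)
            out[k] = coeff[-1]                                # constant term = value at x
        return out
    ```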

  3. Effect of simultaneous infrared dry-blanching and dehydration on quality characteristics of carrot slices

    Science.gov (United States)

    This study investigated the effects of various processing parameters on carrot slices exposed to infrared (IR) radiation heating for achieving simultaneous infrared dry-blanching and dehydration (SIRDBD). The investigated parameters were product surface temperature, slice thickness and processing ti...

  4. Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data

    International Nuclear Information System (INIS)

    Casa, L D C; Krueger, P S

    2013-01-01

    Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
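
    A bare-bones Gaussian RBF interpolation of scattered samples, with centres placed at the data points, can be sketched in NumPy as below. The paper additionally includes boundary-condition and continuity constraints in the least-squares problem and varies the number and placement of the RBFs; none of that is reproduced here, and the names and the shape parameter are illustrative.

    ```python
    import numpy as np

    def gaussian_rbf_fit(points, values, epsilon):
        """Solve for Gaussian RBF weights. points: (n, 3) sample locations;
        values: (n,) one velocity component; epsilon: shape parameter."""
        r2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        A = np.exp(-epsilon * r2)                     # Gaussian kernel (Gram) matrix
        return np.linalg.solve(A, values)

    def gaussian_rbf_eval(points, weights, query, epsilon):
        """Evaluate the fitted RBF interpolant at query locations (m, 3)."""
        r2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        return np.exp(-epsilon * r2) @ weights
    ```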

  5. Comparison of sliced lungs with whole lung sets for a torso phantom measured with Ge detectors using Monte Carlo simulations (MCNP).

    Science.gov (United States)

    Kramer, Gary H; Guerriere, Steven

    2003-02-01

    Lung counters are generally used to measure low energy photons (<100 keV). They are usually calibrated with lung sets that are manufactured from a lung tissue substitute material that contains homogeneously distributed activity; however, it is difficult to verify either the activity in the phantom or the homogeneity of the activity distribution without destructive testing. Lung sets can have activities that are as much as 25% different from the expected value. An alternative method to using whole lungs to calibrate a lung counter is to use a sliced lung with planar inserts. Experimental work has already indicated that this alternative method of calibration can be a satisfactory substitute. This work has extended the experimental study by the use of Monte Carlo simulation to validate that sliced and whole lungs are equivalent. It also has determined the optimum slice thicknesses that separate the planar sources in the sliced lung. Slice thicknesses have been investigated in the range of 0.5 cm to 9.0 cm and at photon energies from 17 keV to 1,000 keV. Results have shown that there is little difference between sliced and whole lungs at low energies providing that the slice thickness is 2.0 cm or less. As the photon energy rises the slice thickness can increase substantially with no degradation on equivalence.

  6. Slice sensitivity profiles and pixel noise of multi-slice CT in comparison with single-slice CT

    International Nuclear Information System (INIS)

    Schorn, C.; Obenauer, S.; Funke, M.; Hermann, K.P.; Kopka, L.; Grabbe, E.

    1999-01-01

    Purpose: Presentation and evaluation of the slice sensitivity profile and pixel noise of multi-slice CT in comparison with single-slice CT. Methods: Slice sensitivity profiles and pixel noise of a multi-slice CT equipped with a 2D matrix detector array and of a single-slice CT were evaluated in phantom studies. Results: For the single-slice CT, the width of the slice sensitivity profiles increased with increasing pitch. In spite of a much higher table speed, the slice sensitivity profiles of multi-slice CT were narrower and did not widen at higher pitch. Noise in single-slice CT was independent of pitch. For multi-slice CT, noise increased with higher pitch and, at the higher pitch, decreased slightly with wider detector-row collimation. Conclusions: Multi-slice CT provides superior z-resolution and higher volume coverage speed. These qualities fulfill one of the prerequisites for improvement of 3D postprocessing. (orig.)

  7. Visual patch clamp recording of neurons in thick portions of the adult spinal cord

    DEFF Research Database (Denmark)

    Munch, Anders Sonne; Smith, Morten; Moldovan, Mihai

    2010-01-01

    The study of visually identified neurons in slice preparations from the central nervous system offers considerable advantages over in vivo preparations, including high mechanical stability in the absence of anaesthesia and full control of the extracellular medium. However, because of their relative... remain alive and capable of generating action potentials. By stimulating the lateral funiculus we can evoke intense synaptic activity associated with large increases in conductance of the recorded neurons. The conductance increases substantially more in neurons recorded in thick slices, suggesting... that the size of the network recruited with the stimulation increases with the thickness of the slices. We also find that the number of spontaneous excitatory postsynaptic currents (EPSCs) is higher in thick slices compared with thin slices, while the number of spontaneous inhibitory postsynaptic currents...

  8. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  9. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are

  10. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
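
    The separable idea mentioned above can be illustrated with a minimal bilinear example: two 1-D linear interpolations along x, followed by one along y. This is a generic sketch (names are illustrative, query coordinates are assumed to lie inside the image), not code from the paper.

    ```python
    import numpy as np

    def bilinear(img, y, x):
        """Separable bilinear interpolation of a 2-D array at one (y, x) location."""
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        y1 = min(y0 + 1, img.shape[0] - 1)
        x1 = min(x0 + 1, img.shape[1] - 1)
        fy, fx = y - y0, x - x0
        top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]       # interpolate along x
        bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]    # interpolate along x
        return (1 - fy) * top + fy * bottom                   # then along y
    ```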

  11. Low-dose ECG-gated 64-slices helical CT angiography of the chest: evaluation of image quality in 105 patients

    International Nuclear Information System (INIS)

    D'Agostino, A.G.; Remy-Jardin, M.; Khalil, C.; Remy, J.; Delannoy-Deken, V.; Duhamel, A.; Flohr, T.

    2006-01-01

    The purpose of this study was to evaluate image quality of low-dose electrocardiogram (ECG)-gated multislice helical computed tomography (CT) angiograms of the chest. One hundred and five consecutive patients with a regular sinus rhythm (72 men; 33 women) underwent ECG-gated CT angiographic examination of the chest without administration of beta blockers using the following parameters: (a) collimation 32 x 0.6 mm with z-flying focal spot for the acquisition of 64 overlapping 0.6-mm slices, rotation time 0.33 s, pitch 0.3; (b) 120 kV, 200 mAs; (c) use of two dose modulation systems, including adjustment of the mAs setting to the patient's size and anatomical shape and an ECG-controlled tube current. Subjective and objective image quality was evaluated by two radiologists in consensus on 3-mm-thick scans reconstructed at 55% of the R-R interval. The population and protocol characteristics included: (a) a mean [±standard deviation (SD)] body mass index (BMI) of 24.47 (±4.64); (b) a mean (±SD) heart rate of 72.04 (±15.76) bpm; (c) a mean (±SD) scanning time of 18.3 (±2.73) s; (d) a mean (±SD) dose-length product (DLP) value of 260.57 (±83.67) mGy·cm; (e) an estimated average effective dose of 4.95 (±1.59) mSv. Subjective noise was depicted in a total of nine examinations (8.5%), always rated as mild. Objective noise was assessed by measuring the standard deviation of pixel values in a homogeneous region of interest within the trachea and descending aorta; SD was 15.91 HU in the trachea and 22.16 HU in the descending aorta, with no significant difference in the mean value of the standard deviations between the four categories of BMI except for obese patients, who had a higher mean SD within the aorta. Interpolation artefacts were depicted in 22 patients, with a mean heart rate significantly lower than that of patients without interpolation artefacts, rated as mild in 11 patients and severe in 11 patients. The severity of interpolation artefacts

  12. Three-dimensional image analysis of the skull using variable CT scanning protocols-effect of slice thickness on measurement in the three-dimensional CT images

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Ho Gul; Kim, Kee Deog; Park, Hyok; Kim, Dong Ook; Jeong, Hai Jo; Kim, Hee Joung; Yoo, Sun Kook; Kim, Yong Oock; Park, Chang Seo [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2004-07-15

    To evaluate the quantitative accuracy of three-dimensional (3D) images by means of comparing distance measurements on the 3D images with direct measurements of a dry human skull according to slice thickness and scanning mode. An observer directly measured the distances of 21 line items between 12 orthodontic landmarks on the skull surface using a digital vernier caliper, and each measurement was repeated five times. The dry human skull was scanned with a helical CT with various slice thicknesses (3, 5, 7 mm) and acquisition modes (conventional and helical). The same observer measured the corresponding distances of the same items on the reconstructed 3D images with the internal program of V-works 4.0 (Cybermed Inc., Seoul, Korea). The quantitative accuracy of the distance measurements was statistically evaluated with Wilcoxon's two-sample test. 11 line items in Conventional 3 mm, 8 in Helical 3 mm, 11 in Conventional 5 mm, 10 in Helical 5 mm, 5 in Conventional 7 mm and 9 in Helical 7 mm showed no statistically significant difference. The average difference between direct measurements and measurements on the 3D CT images was within 2 mm in 19 line items for Conventional 3 mm, 20 for Helical 3 mm, 15 for Conventional 5 mm, 18 for Helical 5 mm, 11 for Conventional 7 mm and 16 for Helical 7 mm. Considering image quality and patient exposure time, the scanning protocol of Helical 5 mm is recommended for 3D image analysis of the skull in CT.

  13. Improved biochemical preservation of lung slices during cold storage.

    Science.gov (United States)

    Bull, D A; Connors, R C; Reid, B B; Albanil, A; Stringham, J C; Karwande, S V

    2000-05-15

    Development of lung preservation solutions typically requires whole-organ models, which are animal- and labor-intensive. These models rely on physiologic rather than biochemical endpoints, making accurate comparison of the relative efficacy of individual solution components difficult. We hypothesized that lung slices could be used to assess preservation of biochemical function during cold storage. Whole rat lungs were precision-cut into slices with a thickness of 500 microm and preserved at 4 degrees C in the following solutions: University of Wisconsin (UW), Euro-Collins (EC), low-potassium-dextran (LPD), Kyoto (K), normal saline (NS), or a novel lung preservation solution (NPS) developed using this model. Lung biochemical function was assessed by ATP content (nmol ATP/mg wet wt) and capacity for protein synthesis (cpm/mg protein) immediately following slicing (0 h) and at 6, 12, 18, and 24 h of cold storage. Six slices were assayed at each time point for each solution. The data were analyzed using analysis of variance and are presented as means +/- SD. ATP content was significantly higher in the lung slices stored in NPS compared with all other solutions at each time point (P < 0.05). These results support the use of lung slices to assess preservation of biochemical function during cold storage. Copyright 2000 Academic Press.

  14. Cardiac tissue slices: preparation, handling, and successful optical mapping.

    Science.gov (United States)

    Wang, Ken; Lee, Peter; Mirams, Gary R; Sarathchandra, Padmini; Borg, Thomas K; Gavaghan, David J; Kohl, Peter; Bollensdorff, Christian

    2015-05-01

    Cardiac tissue slices are becoming increasingly popular as a model system for cardiac electrophysiology and pharmacology research and development. Here, we describe in detail the preparation, handling, and optical mapping of transmembrane potential and intracellular free calcium concentration transients (CaT) in ventricular tissue slices from guinea pigs and rabbits. Slices cut in the epicardium-tangential plane contained well-aligned in-slice myocardial cell strands ("fibers") in subepicardial and midmyocardial sections. Cut with a high-precision slow-advancing microtome at a thickness of 350 to 400 μm, tissue slices preserved essential action potential (AP) properties of the precutting Langendorff-perfused heart. We identified the need for a postcutting recovery period of 36 min (guinea pig) and 63 min (rabbit) to reach 97.5% of final steady-state values for AP duration (APD) (identified by exponential fitting). There was no significant difference between the postcutting recovery dynamics in slices obtained using 2,3-butanedione 2-monoxime or blebbistatin as electromechanical uncouplers during the cutting process. A rapid increase in APD, seen after cutting, was caused by exposure to ice-cold solution during the slicing procedure, not by tissue injury, differences in uncouplers, or pH-buffers (bicarbonate; HEPES). To characterize intrinsic patterns of CaT, AP, and conduction, a combination of multipoint and field stimulation should be used to avoid misinterpretation based on source-sink effects. In summary, we describe in detail the preparation, mapping, and data analysis approaches for reproducible cardiac tissue slice-based investigations into AP and CaT dynamics. Copyright © 2015 the American Physiological Society.

  15. Usefulness of thin slice target CT scan in detecting mediastinal and hilar lymphadenopathy

    International Nuclear Information System (INIS)

    Yoshida, Shoji; Maeda, Tomoho; Nishioka, Masatoshi

    1986-01-01

    A comparative study of target scanning with different slice thicknesses and scan modes was performed to evaluate mediastinal and hilar lymphadenopathy. Twenty control cases and 35 cases with lymphadenopathy were examined. For delineating mediastinal and hilar lymphadenopathy, the standard target scan mode was the most useful in contrast and sharpness. A thin slice thickness of 5 mm was necessary for detecting small lymph nodes and the contour and internal structure of enlarged lymph nodes. The 5 mm contiguous target scan gave valuable assessment of the subaortic node (no. 5), tracheobronchial node (no. 4), precarinal and subcarinal node (no. 7) and right hilar node (no. 12). (author)

  16. Feature displacement interpolation

    DEFF Research Database (Denmark)

    Nielsen, Mads; Andresen, Per Rønsholt

    1998-01-01

    Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Also estimation of missing image data may be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularizations, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.
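    As a rough, self-contained illustration of the Kriging idea described above (not the authors' implementation), the Python sketch below interpolates a sparse set of displacement samples with an ordinary-Kriging estimator under an assumed Gaussian covariance model; the sample points, values and covariance parameters are all hypothetical.

```python
import numpy as np

def gaussian_cov(d, sill=1.0, length=25.0):
    """Gaussian covariance model; the sill and length are illustrative choices."""
    return sill * np.exp(-(d / length) ** 2)

def krige(xy_known, values, xy_query, cov=gaussian_cov):
    """Ordinary-Kriging estimate at query points from sparse samples."""
    n = len(xy_known)
    d_kk = np.linalg.norm(xy_known[:, None, :] - xy_known[None, :, :], axis=-1)
    # Augmented system enforcing that the weights sum to one (ordinary Kriging).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d_kk)
    A[n, n] = 0.0
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        b = np.append(cov(np.linalg.norm(xy_known - q, axis=1)), 1.0)
        out[i] = np.linalg.solve(A, b)[:n] @ values
    return out

# Hypothetical sparse feature matches: x-displacement sampled at five locations.
pts = np.array([[10.0, 10.0], [40.0, 15.0], [25.0, 60.0], [70.0, 70.0], [80.0, 20.0]])
dx = np.array([1.2, 0.8, 1.9, 0.4, 0.6])
grid = np.array([[x, y] for x in range(0, 100, 20) for y in range(0, 100, 20)], float)
print(krige(pts, dx, grid).round(2))
```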

  17. Scanning and contrast enhancement protocols for multi-slice CT in evaluation of the upper abdomen

    International Nuclear Information System (INIS)

    Awai, Kazuo; Onishi, Hiromitsu; Takada, Koichi; Yamaguchi, Yasuo; Eguchi, Nobuko; Hiraishi, Kumiko; Hori, Shinichi

    2000-01-01

    The advent of multi-slice CT is one of the quantum leaps in computed tomography since the introduction of helical CT. Multi-slice CT can rapidly scan a large longitudinal (z-axis) volume with high longitudinal resolution and low image artifacts. The rapid volume coverage speed of multi-slice CT can increase the difficulty in optimizing the delay time between the beginning of contrast material injection and the acquisition of images and we need accurate knowledge about optimal temporal window for adequate contrast enhancement. High z-axis resolution of multi-slice can improve the quality of three-dimensional images and MPR images and we must select adequate slice thickness and slice intervals in each case. We discuss basic considerations for adequate contrast enhancement and scanning protocols by multi-slice CT scanner in the upper abdomen. (author)

  18. Brain Slice Staining and Preparation for Three-Dimensional Super-Resolution Microscopy

    Science.gov (United States)

    German, Christopher L.; Gudheti, Manasa V.; Fleckenstein, Annette E.; Jorgensen, Erik M.

    2018-01-01

    Localization microscopy techniques – such as photoactivation localization microscopy (PALM), fluorescent PALM (FPALM), ground state depletion (GSD), and stochastic optical reconstruction microscopy (STORM) – provide the highest precision for single molecule localization currently available. However, localization microscopy has been largely limited to cell cultures due to the difficulties that arise in imaging thicker tissue sections. Sample fixation and antibody staining, background fluorescence, fluorophore photoinstability, light scattering in thick sections, and sample movement create significant challenges for imaging intact tissue. We have developed a sample preparation and image acquisition protocol to address these challenges in rat brain slices. The sample preparation combined multiple fixation steps, saponin permeabilization, and tissue clarification. Together, these preserve intracellular structures, promote antibody penetration, reduce background fluorescence and light scattering, and allow acquisition of images deep in a 30 μm thick slice. Image acquisition challenges were resolved by overlaying samples with a permeable agarose pad and custom-built stainless steel imaging adapter, and sealing the imaging chamber. This approach kept slices flat, immobile, bathed in imaging buffer, and prevented buffer oxidation during imaging. Using this protocol, we consistently obtained single molecule localizations of synaptic vesicle and active zone proteins in three-dimensions within individual synaptic terminals of the striatum in rat brain slices. These techniques may be easily adapted to the preparation and imaging of other tissues, substantially broadening the application of super-resolution imaging. PMID:28924666

  19. Computation of a voxelized anthropomorphic phantom from Computed Tomography slices and 3D dose distribution calculation utilizing the MCNP5 Code

    International Nuclear Information System (INIS)

    Abella, V.; Miro, R.; Juste, B.; Verdu, G.

    2008-01-01

    Full text: The purpose of this work is to obtain the voxelization of a series of tomography slices in order to provide a voxelized human phantom throughout a MatLab algorithm, and the consequent simulation of the irradiation of such phantom with the photon beam generated in a Theratron 780 (MDS Nordion) 60 Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project provides as results dose mapping calculations inside the voxelized anthropomorphic phantom. Prior works have validated the cobalt therapy model utilizing a simple heterogeneous water cube-shaped phantom. The reference phantom model utilized in this work is the Zubal phantom, which consists of a group of pre-segmented CT slices of a human body. The CT slices are to be input into the Matlab program which computes the voxelization by means of two-dimensional pixel and material identification on each slice, and three-dimensional interpolation, in order to depict the phantom geometry via small cubic cells. Each slice is divided in squares with the size of the desired voxelization, and then the program searches for the pixel intensity with a predefined material at each square, making a subsequent three-dimensional interpolation. At the end of this process, the program produces a voxelized phantom in which each voxel defines the mixture of the different materials that compose it. In the case of the Zubal phantom, the voxels result in pure organ materials due to the fact that the phantom is presegmented. The output of this code follows the MCNP input deck format and is integrated in a full input model including the 60 Co radiotherapy unit. Dose rates are calculated using the MCNP5 tool FMESH, superimposed mesh tally. This feature allows to tally particles on an independent mesh over the problem geometry, and to obtain the length estimation of the particle flux, in units of particles/cm 2 (tally F4). Furthermore, the particle flux is transformed into dose by
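    To make the per-slice pixel-to-material mapping concrete, here is a minimal Python sketch (an assumption-laden simplification, not the MatLab algorithm described above) that groups a stack of presegmented label slices into cubic voxels by majority label; the label table, array shapes and 1 mm in-plane pixel size are invented for illustration, and genuinely mixed-material voxels would need volume-fraction bookkeeping rather than a single label.

```python
import numpy as np

# Hypothetical label table: pixel value -> material name (presegmented slices).
MATERIALS = {0: "air", 1: "soft_tissue", 2: "bone", 3: "lung"}

def voxelize(slices, z_spacing_mm, voxel_mm):
    """Group presegmented CT slices into cubic voxels, keeping the dominant label.

    slices: (nz, ny, nx) integer array of material labels, one entry per pixel;
    the in-plane pixel size is assumed to be 1 mm.
    """
    nz, ny, nx = slices.shape
    step = max(1, int(round(voxel_mm)))                  # pixels per voxel edge
    zstep = max(1, int(round(voxel_mm / z_spacing_mm)))  # slices per voxel edge
    out = np.zeros((nz // zstep, ny // step, nx // step), dtype=np.int64)
    for k in range(out.shape[0]):
        for j in range(out.shape[1]):
            for i in range(out.shape[2]):
                block = slices[k * zstep:(k + 1) * zstep,
                               j * step:(j + 1) * step,
                               i * step:(i + 1) * step]
                out[k, j, i] = np.bincount(block.ravel()).argmax()  # dominant material
    return out

phantom = voxelize(np.random.randint(0, 4, (12, 64, 64)), z_spacing_mm=4.0, voxel_mm=4.0)
print(phantom.shape, {MATERIALS[int(v)] for v in np.unique(phantom)})
```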

  20. Interpolation functions and the Lions-Peetre interpolation construction

    International Nuclear Information System (INIS)

    Ovchinnikov, V I

    2014-01-01

    The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generalization is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X_0, X_1)_{p_0,p_1} considered here have three parameters: two positive numerical parameters p_0 and p_1 of equal standing, and a function parameter φ. For p_0 ≠ p_1 these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X_0, X_1)_{p_0,p_1}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces L_p and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles.

  1. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45(°) and 135(°) CDMs for interpolating the diagonal pixels and 2) the 0(°) and 90(°) CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  2. Digital time-interpolator

    International Nuclear Information System (INIS)

    Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report presents a description of the design of a digital time meter. This time meter should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated at the computer with a Pascal code. On the basis of this the best method was chosen and used in the design. In order to test the principal operation of the circuit a part of the circuit was constructed with which the interpolation could be tested. The remainder of the circuit was simulated with a computer. So there are no data available about the operation of the complete circuit in practice. The interpolation part however is the most critical part, the remainder of the circuit is more or less simple logic. Besides this report also gives a description of the principle of interpolation and the design of the circuit. The measurement results at the prototype are presented finally. (author). 3 refs.; 37 figs.; 2 tabs

  3. A sandwich-like differential B-dot based on EACVD polycrystalline diamond slice

    Science.gov (United States)

    Xu, P.; Yu, Y.; Xu, L.; Zhou, H. Y.; Qiu, C. J.

    2018-06-01

    In this article, we present a method of mass production of a standardized high-performance differential B-dot magnetic probe together with the magnetic field measurement in a pulsed current device with the current up to hundreds of kilo-Amperes. A polycrystalline diamond slice produced in an Electron Assisted Chemical Vapor Deposition device is used as the base and insulating material to imprint two symmetric differential loops for the magnetic field measurement. The SP3 carbon bond in the cubic lattice structure of diamond is confirmed by Raman spectra. The thickness of this slice is 20 μm. A gold loop is imprinted onto each surface of the slice by using the photolithography technique. The inner diameter, width, and thickness of each loop are 0.8 mm, 50 μm, and 1 μm, respectively. It provides a way of measuring the pulsed magnetic field with a high spatial and temporal resolution, especially in limited space. This differential magnetic probe has demonstrated a very good common-mode rejection rate through the pulsed magnetic field measurement.

  4. Improved biochemical preservation of heart slices during cold storage.

    Science.gov (United States)

    Bull, D A; Reid, B B; Connors, R C; Albanil, A; Stringham, J C; Karwande, S V

    2000-01-01

    Development of myocardial preservation solutions requires the use of whole organ models which are animal and labor intensive. These models rely on physiologic rather than biochemical endpoints, making accurate comparison of the relative efficacy of individual solution components difficult. We hypothesized that myocardial slices could be used to assess preservation of biochemical function during cold storage. Whole rat hearts were precision cut into slices with a thickness of 200 microm and preserved at 4 degrees C in one of the following solutions: Columbia University (CU), University of Wisconsin (UW), D5 0.2% normal saline with 20 meq/l KCL (QNS), normal saline (NS), or a novel cardiac preservation solution (NPS) developed using this model. Myocardial biochemical function was assessed by ATP content (etamoles ATP/mg wet weight) and capacity for protein synthesis (counts per minute (cpm)/mg protein) immediately following slicing (0 hours), and at 6, 12, 18, and 24 hours of cold storage. Six slices were assayed at each time point for each solution. The data were analyzed using analysis of variance and are presented as the mean +/- standard deviation. ATP content was higher in the heart slices stored in the NPS compared to all other solutions at 6, 12, 18 and 24 hours of cold storage (p cold storage (p cold storage.

  5. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C^1 piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
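    The univariate building block behind such schemes is monotone piecewise cubic interpolation; the short SciPy sketch below (illustrative data, not from the paper) contrasts a Fritsch-Carlson-type PCHIP interpolant with an ordinary cubic spline on monotone data.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone data with a steep jump: an ordinary cubic spline can overshoot, PCHIP cannot.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xq = np.linspace(0.0, 4.0, 201)
pchip = PchipInterpolator(x, y)(xq)   # monotonicity-preserving piecewise cubic
spline = CubicSpline(x, y)(xq)        # may dip or overshoot between knots

print("PCHIP monotone :", bool(np.all(np.diff(pchip) >= -1e-12)))
print("spline monotone:", bool(np.all(np.diff(spline) >= -1e-12)))
```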

  6. Localizing gravity on exotic thick 3-branes

    International Nuclear Information System (INIS)

    Castillo-Felisola, Oscar; Melfo, Alejandra; Pantoja, Nelson; Ramirez, Alba

    2004-01-01

    We consider localization of gravity on thick branes with a nontrivial structure. Double walls that generalize the thick Randall-Sundrum solution, and asymmetric walls that arise from a Z_2-symmetric scalar potential, are considered. We present a new asymmetric solution: a thick brane interpolating between two AdS_5 spacetimes with different cosmological constants, which can be derived from a 'fake supergravity' superpotential, and show that it is possible to confine gravity on such branes.

  7. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.

  8. Is correction necessary when clinically determining quantitative cerebral perfusion parameters from multi-slice dynamic susceptibility contrast MR studies?

    International Nuclear Information System (INIS)

    Salluzzi, M; Frayne, R; Smith, M R

    2006-01-01

    Several groups have modified the standard singular value decomposition (SVD) algorithm to produce delay-insensitive cerebral blood flow (CBF) estimates from dynamic susceptibility contrast (DSC) perfusion studies. However, new dependences of CBF estimates on bolus arrival times and slice position in multi-slice studies have been recently recognized. These conflicting findings can be reconciled by accounting for several experimental and algorithmic factors. Using simulation and clinical studies, the non-simultaneous measurement of arterial and tissue concentration curves (relative slice position) in a multi-slice study is shown to affect time-related perfusion parameters, e.g. arterial-tissue-delay measurements. However, the current clinical impact of relative slice position on amplitude-related perfusion parameters, e.g. CBF, can be expected to be small unless any of the following conditions are present individually or in combination: (a) high concentration curve signal-to-noise ratios, (b) small tissue mean transit times, (c) narrow arterial input functions or (d) low temporal resolution of the DSC image sequence. Recent improvements in magnetic resonance (MR) technology can easily be expected to lead to scenarios where these effects become increasingly important sources of inaccuracy for all perfusion parameter estimates. We show that using Fourier interpolated (high temporal resolution) residue functions reduces the systematic error of the perfusion parameters obtained from multi-slice studies

  9. Interpolation for de-Dopplerisation

    Science.gov (United States)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
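    A toy comparison in the spirit of the study: re-sampling a uniformly sampled tone at non-integer instants with linear interpolation versus a Hann-windowed-sinc interpolator. The signal, sampling rate and kernel half-width are illustrative choices, not the paper's test cases.

```python
import numpy as np

def sinc_interp(samples, t, half_width=8):
    """Hann-windowed-sinc interpolation of uniform samples at times t (in sample units)."""
    out = np.zeros(len(t))
    for i, ti in enumerate(t):
        n0 = int(np.floor(ti))
        n = np.arange(n0 - half_width + 1, n0 + half_width + 1)
        n = n[(n >= 0) & (n < len(samples))]
        w = np.sinc(ti - n) * (0.5 + 0.5 * np.cos(np.pi * (ti - n) / half_width))
        out[i] = np.sum(samples[n] * w)
    return out

fs, f0 = 48_000.0, 1_000.0
t_samp = np.arange(2048)
x = np.sin(2 * np.pi * f0 * t_samp / fs)

# Re-sample at irregular (Doppler-like) instants away from the record ends.
t_query = np.linspace(100.0, 1900.0, 5000) + 0.37
exact = np.sin(2 * np.pi * f0 * t_query / fs)
print("max error, linear        :", np.abs(np.interp(t_query, t_samp, x) - exact).max())
print("max error, windowed sinc :", np.abs(sinc_interp(x, t_query) - exact).max())
```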

  10. CMB anisotropies interpolation

    NARCIS (Netherlands)

    Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

    2010-01-01

    We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging

  11. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.
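    As a hedged, generic example (not the authors' code), cubic B-spline image interpolation for resolution increase is readily available in SciPy; the test image below is synthetic.

```python
import numpy as np
from scipy import ndimage

# Synthetic 64 x 64 test image: a smooth radial (Gaussian) intensity profile.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 12.0 ** 2))

up_linear = ndimage.zoom(img, 4, order=1)    # bilinear interpolation
up_bspline = ndimage.zoom(img, 4, order=3)   # cubic B-spline (with prefiltering)
print(up_linear.shape, up_bspline.shape)     # both (256, 256)
```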

  12. Evaluation of TSE- and T1-3D-GRE-sequences for focal cartilage lesions in vitro in comparison to ultrahigh resolution multi-slice CT

    International Nuclear Information System (INIS)

    Stork, A.; Schulze, D.; Koops, A.; Kemper, J.; Adam, G.

    2002-01-01

    Purpose: Evaluation of TSE- and T1-3D-GRE-sequences for focal cartilage lesions in vitro in comparison to ultrahigh resolution multi-slice CT. Materials and methods: Forty artificial cartilage lesions in ten bovine patellae were immersed in a solution of iodinated contrast medium and assessed with ultrahigh resolution multi-slice CT. Fat-suppressed TSE images with intermediate- and T2-weighting at a slice thickness of 2, 3 and 4 mm as well as fat-suppressed T1-weighted 3D-FLASH images with an effective slice thickness of 1, 2 and 3 mm were acquired at 1.5 T. After adding Gd-DTPA to the saline solution containing the patellae, the T1-weighted 3D-FLASH imaging was repeated. Results: All cartilage lesions were visualised and graded with ultrahigh resolution multi-slice CT. The TSE images had a higher sensitivity and a higher inter- and intraobserver kappa compared to the FLASH-sequences (TSE: 70-95%; 0.82-0.83; 0.85-0.9; FLASH: 57.5-85%; 0.53-0.72; 0.73-0.82, respectively). An increase in slice thickness decreased the sensitivity, whereby deep lesions were still reliably depicted on TSE images at a slice thickness of 3 and 4 mm. Adding Gd-DTPA to the saline solution increased the sensitivity by 10%, with no detectable advantage over the T2-weighted TSE images. Conclusion: TSE sequences and application of Gd-DTPA seemed to be superior to T1-weighted 3D-FLASH sequences without Gd-DTPA in the detection of focal cartilage lesions. The ultrahigh resolution multi-slice CT can serve as an in vitro reference standard for focal cartilage lesions. (orig.)

  13. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection.

    Directory of Open Access Journals (Sweden)

    Jonathan D Power

    Full Text Available Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).

  14. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Ince Serdar

    2008-01-01

    Full Text Available Abstract View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  15. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Janusz Konrad

    2009-01-01

    Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  16. Technical evaluation of DIC helical CT and 3D image for laparoscopic cholecystectomy

    International Nuclear Information System (INIS)

    Shibuya, Kouki; Uchimura, Fumiaki; Haga, Tomo

    1995-01-01

    Laparoscopic cholecystectomy (L.C.) has recently become widely accepted because of its low invasiveness. Before L.C., it is important to recognize the anatomy of the biliary tree. We performed DIC helical CT before L.C. and reconstructed 3D cholangiographic images. We evaluated the physical capability of helical CT using section sensitivity profiles (SSP) at 5 and 10 mm slice thickness with 360° linear interpolation, and analyzed which 3D image was most useful for the biliary tree. The results showed that the SSP depended on slice thickness (X-ray beam width) and table movement at the same reconstruction spacing, and that the peak of the SSP depended on slice thickness (X-ray beam width) and reconstruction spacing at the same table movement. Clinically, a table movement of 5 mm/rotation or less and a 5 mm slice thickness were necessary for acquiring volume image data. A 3D cholangiographic image reconstructed with 1 mm spacing was useful in evaluating the anatomical relationships of the biliary tree. (author)
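    For readers unfamiliar with 360° linear interpolation (360LI), the sketch below computes the per-view weights that blend, for a chosen image plane, the two measurements of each view angle acquired in adjacent rotations; the geometry values and the helper function are hypothetical, not part of any scanner software.

```python
import numpy as np

def weights_360li(z0, z_start, feed_per_rot, n_views_per_rot=1000):
    """Per-view 360LI weight of the measurement acquired just below the plane z0.

    The complementary weight (1 - w) is applied to the measurement of the same
    view angle acquired one rotation later (one table feed further along z).
    """
    angles = np.arange(n_views_per_rot) * 2 * np.pi / n_views_per_rot
    # z position at which each view angle is acquired during the first rotation.
    z_view = z_start + feed_per_rot * angles / (2 * np.pi)
    k = np.floor((z0 - z_view) / feed_per_rot)       # whole rotations below the plane
    z_below = z_view + k * feed_per_rot
    w = 1.0 - (z0 - z_below) / feed_per_rot
    return angles, w

angles, w = weights_360li(z0=25.0, z_start=0.0, feed_per_rot=5.0)
print(float(w.min()), float(w.max()))                # weights stay within [0, 1]
```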

  17. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
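    The simplest of these strategies, MBR interpolation, reduces to linearly interpolating the corners of the minimum bounding rectangle between two reported times; a small sketch with made-up event geometry follows.

```python
from datetime import datetime, timedelta

def interp_mbr(t0, mbr0, t1, mbr1, t):
    """Linearly interpolate a minimum bounding rectangle (x_min, y_min, x_max, y_max)."""
    a = (t - t0) / (t1 - t0)   # fractional position of t between t0 and t1
    return tuple((1 - a) * c0 + a * c1 for c0, c1 in zip(mbr0, mbr1))

t0 = datetime(2012, 3, 6, 12, 0)
t1 = t0 + timedelta(hours=2)
mbr0 = (100.0, -220.0, 180.0, -160.0)   # hypothetical solar-disc coordinates (arcsec)
mbr1 = (140.0, -210.0, 230.0, -140.0)
print(interp_mbr(t0, mbr0, t1, mbr1, t0 + timedelta(hours=1)))
```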

  18. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  19. Measurement of slice sensitivity profile for a 64-slice spiral CT system

    International Nuclear Information System (INIS)

    Liu Chuanya; Qin Weichang; Wang Wei; Lu Chuanyou

    2006-01-01

    Objective: To measure and evaluate the slice sensitivity profile (SSP) and the full width at half-maximum (FWHM) for a 64-slice spiral CT system. Methods: Using the same CT technique and body mode as those used for clinical CT, a delta phantom was scanned with a Somatom Sensation 64-slice spiral CT. SSPs and FWHM were measured both with a reconstruction slice width of 0.6 mm at pitch=0.50, 0.75, 1.00, 1.25, 1.50 and with reconstruction slice widths of 0.6, 1.0, 1.5 mm at pitch=1, respectively. Results: For a nominal slice width of 0.6 mm, the measured FWHM, i.e. effective slice width, was 0.67, 0.67, 0.66, 0.69, 0.69 mm at the different pitches. All the measured FWHM values deviated less than 0.1 mm from the nominal slice width. The measured SSPs were symmetrical, bell-shaped curves without far-reaching tails, and showed only slight variations as a function of the spiral pitch. When the reconstruction slice width increases, the relative SSP becomes wider. Conclusions: The variation of pitch has hardly any effect on the SSP, effective slice width, and z-direction spatial resolution of the Sensation 64-slice spiral CT system, which is helpful for optimizing CT scanning protocols. (authors)
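    To make the FWHM measurement concrete, here is a short sketch that locates the half-maximum crossings of a sampled SSP by linear interpolation; the Gaussian-like profile below is synthetic, not measured data.

```python
import numpy as np

def fwhm(z, ssp):
    """Full width at half maximum of a sampled slice sensitivity profile."""
    half = ssp.max() / 2.0
    above = np.where(ssp >= half)[0]
    i0, i1 = above[0], above[-1]
    # Linear interpolation of the half-maximum crossing on each flank.
    z_left = np.interp(half, [ssp[i0 - 1], ssp[i0]], [z[i0 - 1], z[i0]])
    z_right = np.interp(half, [ssp[i1 + 1], ssp[i1]], [z[i1 + 1], z[i1]])
    return z_right - z_left

z = np.arange(-3.0, 3.01, 0.1)              # mm
ssp = np.exp(-z ** 2 / (2 * 0.28 ** 2))     # synthetic profile, roughly 0.66 mm FWHM
print(round(fwhm(z, ssp), 2), "mm")
```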

  20. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are presented first, but their interpolation effects need to be further improved. In analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed; compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. Through experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming and have lower time efficiency. As for the general partial volume interpolation methods, in terms of the total error of image self-registration the symmetrical interpolations provide certain superiority, but considering processing efficiency the asymmetrical interpolations are better.
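    One of the comparison measures mentioned above, the peak signal-to-noise ratio, is straightforward to compute; a quick generic sketch for 8-bit images follows (synthetic arrays, not the paper's data).

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.random.randint(0, 256, (64, 64))
noisy = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(round(psnr(ref, noisy), 2), "dB")
```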

  1. The research on NURBS adaptive interpolation technology

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

    2017-04-01

    NURBS adaptive interpolation suffers from several problems, such as long interpolation times, complicated calculations, and a NURBS curve step error that is not easily controlled. This paper proposes and simulates an algorithm for adaptive interpolation of NURBS curves, in which the interpolator computes the successive curve points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.
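    Below is a minimal sketch of NURBS curve-point evaluation using SciPy's B-spline in homogeneous coordinates (weighted control points), which is one standard way to obtain the (xi, yi, zi) points an interpolator steps through; the knot vector, weights and control polygon are invented, and the adaptive step-size control itself is not shown.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical cubic NURBS: 3D control points, weights, clamped knot vector.
ctrl = np.array([[0, 0, 0], [1, 2, 0], [3, 3, 1], [5, 1, 1], [6, 0, 2]], float)
w = np.array([1.0, 0.8, 1.2, 0.8, 1.0])
k = 3
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], float)

# Rational evaluation via homogeneous coordinates: C(u) = S_wP(u) / S_w(u).
num = BSpline(knots, ctrl * w[:, None], k)
den = BSpline(knots, w, k)

u = np.linspace(0.0, 1.0, 5)
pts = num(u) / den(u)[:, None]
print(np.round(pts, 3))   # sampled (xi, yi, zi) points along the curve
```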

  2. A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.

    Science.gov (United States)

    Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin

    2018-07-01

    Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and 3 slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  3. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
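    For reference, here is a generic implementation of the widely used Keys kernel (a = -0.5) behind cubic convolution resampling, applied to 1-D data; it illustrates the technique in general, not the specific schemes compared in the note.

```python
import numpy as np

def keys_kernel(s, a=-0.5):
    """Cubic convolution kernel of Keys (1981); a = -0.5 gives third-order accuracy."""
    s = np.abs(s)
    out = np.zeros_like(s)
    near, far = s <= 1, (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def cubic_resample(samples, t):
    """Resample uniformly spaced data at fractional positions t (in sample units)."""
    out = np.zeros(len(t))
    for i, ti in enumerate(t):
        n0 = int(np.floor(ti))
        offsets = np.arange(n0 - 1, n0 + 3)               # four nearest neighbours
        idx = np.clip(offsets, 0, len(samples) - 1)       # clamp at the boundaries
        out[i] = np.dot(samples[idx], keys_kernel(ti - offsets))
    return out

x = np.sin(np.linspace(0.0, 2.0 * np.pi, 32))
print(cubic_resample(x, np.array([3.25, 10.5, 20.75])).round(4))
```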

  4. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java.

  5. Development of an electrically operated cassava slicing machine

    Directory of Open Access Journals (Sweden)

    I. S. Aji

    2013-08-01

    Full Text Available Labor input in manual cassava chips processing is very high and product quality is low. This paper presents the design and construction of an electrically operated cassava slicing machine that requires only one person to operate. Efficiency, portability, ease of operation, corrosion prevention of slicing component of the machine, force required to slice a cassava tuber, capacity of 10 kg/min and uniformity in the size of the cassava chips were considered in the design and fabrication of the machine. The performance of the machine was evaluated with cassava of average length and diameter of 253 mm and 60 mm respectively at an average speed of 154 rpm. The machine produced 5.3 kg of chips of 10 mm length and 60 mm diameter in 1 minute. The efficiency of the machine was 95.6% with respect to the quantity of the input cassava. The chips were found to be well chipped to the designed thickness, shape and of generally similar size. Galvanized steel sheets were used in the cutting section to avoid corrosion of components. The machine is portable and easy to operate which can be adopted for cassava processing in a medium size industry.

  6. Interpolation method by whole body computed tomography, Artronix 1120

    International Nuclear Information System (INIS)

    Fujii, Kyoichi; Koga, Issei; Tokunaga, Mitsuo

    1981-01-01

    Reconstruction of whole body CT images by an interpolation method with rapid scanning was investigated. An Artronix 1120 with a fixed collimator was used to obtain CT images every 5 mm. The X-ray source was circularly movable so that the beam remained perpendicular to the detector. A length of 150 mm was scanned in about 15 min with a slice width of 5 mm. The images were reproduced every 7.5 mm, which could be reduced to every 1.5 mm when necessary. Out of 420 examinations of the chest, abdomen, and pelvis, 5 representative cases for which this method was valuable are described: fibrous histiocytoma of the upper mediastinum, left adrenal adenoma, left ureter fibroma, recurrence of colon cancer in the pelvis, and abscess around the rectum. This method improved the image quality of lesions in the vicinity of the ureters, main artery, and rectum, and reduced the time required and the exposure dose to 50%. (Nakanishi, T.)

  7. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  8. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  9. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material of electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameter of mixed media. In order to accurately predict the electromagnetic parameter of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixture of paraffin base, this paper studied two different interpolation methods: Lagrange interpolation and Hermite interpolation of electromagnetic parameters. The results showed that Hermite interpolation is more accurate than the Lagrange interpolation, and the reflectance calculated with the electromagnetic parameter obtained by interpolation is consistent with that obtained through experiment on the whole. - Highlights: • We use interpolation algorithm on calculation of EM-parameter with limited samples. • Interpolation method can predict EM-parameter well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
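    As a generic SciPy illustration of the two methods compared above (not the measured permittivity data), the sketch below applies Lagrange and cubic Hermite interpolation to a smooth stand-in function; the sample points, values and derivatives are invented.

```python
import numpy as np
from scipy.interpolate import lagrange, CubicHermiteSpline

# Hypothetical samples of a smooth parameter curve versus filler volume fraction.
x = np.array([0.0, 0.1, 0.2, 0.3, 0.4])
f = lambda v: np.exp(3.0 * v)                   # stand-in for the measured quantity
y, dy = f(x), 3.0 * np.exp(3.0 * x)             # values and exact derivatives

xq = np.linspace(0.0, 0.4, 9)
lag = lagrange(x, y)(xq)                        # single global interpolating polynomial
her = CubicHermiteSpline(x, y, dy)(xq)          # piecewise cubic using derivative data
print("max |error|, Lagrange:", np.abs(lag - f(xq)).max())
print("max |error|, Hermite :", np.abs(her - f(xq)).max())
```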

  10. Image Interpolation with Contour Stencils

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.

  11. Revisiting Veerman’s interpolation method

    DEFF Research Database (Denmark)

    Christiansen, Peter; Bay, Niels Oluf

    2016-01-01

    This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.

  12. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

    We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  13. Outline and handling manual of experimental data time slice monitoring software 'SLICE'

    International Nuclear Information System (INIS)

    Shirai, Hiroshi; Hirayama, Toshio; Shimizu, Katsuhiro; Tani, Keiji; Azumi, Masafumi; Hirai, Ken-ichiro; Konno, Satoshi; Takase, Keizou.

    1993-02-01

    We have developed a software package 'SLICE' which maps various kinds of plasma experimental data, measured at different geometrical positions of JT-60U and JFT-2M, onto the equilibrium magnetic configuration and treats them as functions of the volume-averaged minor radius ρ. Experimental data can be handled uniformly by using 'SLICE', and its many commands make it easy to process the mapped data. Experimental data measured as line-integrated values are also transformed by Abel inversion. The mapped data are fitted to a functional form and saved to the database 'MAPDB'; 'SLICE' can read the data back from 'MAPDB' and re-display and transform them. Furthermore, 'SLICE' creates run data for the orbit-following Monte-Carlo code 'OFMC' and the tokamak predictive and interpretation code system 'TOPICS'. This report summarizes the outline and usage of 'SLICE'. (author)

  14. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of digital images by applying edge-detect interpolation to direct digital periapical images. The study was performed by image processing of 20 digital periapical images with pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect on the edges. 3. Edge-sensitive interpolation overcame the smoothing effect on the edges and produced a better image.

  15. On the way to isotropic spatial resolution: technical principles and applications of 16-slice CT

    International Nuclear Information System (INIS)

    Flohr, T.; Ohnesorge, B.; Stierstorfer, K.

    2005-01-01

    The broad introduction of multi-slice CT by all major vendors in 1998 was a milestone with regard to extended volume coverage, improved axial resolution and better utilization of the tube output. New clinical applications such as CT-examinations of the heart and the coronary arteries became possible. Despite all promising advances, some limitations remain for 4-slice CT systems. They come close to isotropic resolution, but do not fully reach it in routine clinical applications. Cardiac CT-examinations require careful patient selection. The new generation of multi-slice CT-systems offer simultaneous acquisition of up to 16 sub-millimeter slices and improved temporal resolution for cardiac examinations by means of reduced gantry rotation time (0.4 s). In this overview article we present the basic technical principles and potential applications of 16-slice technology for the example of a 16-slice CT-system (SOMATOM Sensation 16, Siemens AG, Forchheim). We discuss detector design and dose efficiency as well as spiral scan- and reconstruction techniques. At comparable slice thickness, 16-slice CT-systems have a better dose efficiency than 4-slice CT-systems. The cone-beam geometry of the measurement rays requires new reconstruction approaches, an example is the adaptive multiple plane reconstruction, AMPR. First clinical experience indicates that sub-millimeter slice width in combination with reduced gantry rotation-time improves the clinical stability of cardiac examinations and expands the spectrum of patients accessible to cardiac CT. 16-slice CT-systems have the potential to cover even large scan ranges with sub-millimeter slices at considerably reduced examination times, thus approaching the goal of routine isotropic imaging [de

  16. Dried fruit breadfruit slices by Refractive Window™ technique

    Directory of Open Access Journals (Sweden)

    Diego F. Tirado

    2016-01-01

    Full Text Available A large number of products are dried for several reasons, such as preservation, weight reduction and improved stability. However, the market does not offer products that are simultaneously low-cost and of high quality. Although there are effective methods of dehydrating foods, such as freeze drying, which preserves flavor, color and vitamins, these technologies are not readily accessible. Therefore, alternative processes that are efficient and economical are required. The aim of this research was to compare the drying kinetics of breadfruit slices (Artocarpus communis) dried with the Refractive Window® (VR) technique and with tray drying. Slices 1 and 2 mm thick were used. Refractive Window drying was performed with the water bath temperature at 92 °C, and tray drying at 62 °C with an air velocity of 0.52 m/s. With the Refractive Window technique, the moisture content reached lower levels than with tray drying. Likewise, the 1 mm samples, being thinner, reached lower moisture levels than the 2 mm samples. The highest diffusivities were obtained during Refractive Window drying of the 1 and 2 mm slices, with coefficients of 6.13×10⁻⁹ and 3.90×10⁻⁹ m²/s, respectively.

  17. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

  18. Evaluation of various interpolants available in DICE

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non - pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.

  19. Multivariate Birkhoff interpolation

    CERN Document Server

    Lorentz, Rudolph A

    1992-01-01

    The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

  20. Evaluation of solitary pulmonary metastasis of extrathoracic tumor with thin-slice computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Shiotani, Seiji; Yamada, Kouzo; Oshita, Fumihiro; Nomura, Ikuo; Noda, Kazumasa; Yamagata, Tatushi; Tajiri, Michihiko; Ishibashi, Makoto; Kameda, Youichi [Kanagawa Cancer Center, Yokohama (Japan)

    1995-10-01

    Thin-slice computed tomography (CT) images were compared with pathological findings in 9 specimens of solitary pulmonary nodules, which had been pathologically diagnosed as pulmonary metastasis of extrathoracic tumor. The thin-slice CT images were 2 mm-thick images reconstructed using a TCT-900S, HELIX (Toshiba, Tokyo) and examined at two different window and level settings. In every case, the surgical specimens were sliced transversely to correlate with the CT images. According to the image findings, the internal structure was of the solid-density type in every case, and the margin showed spiculation in 22%, notching in 67% and pleural indentation in 89%. Regarding the relationship between the pulmonary vessels and tumors, plural vascular involvement was revealed in every case. Thus, it was difficult to distinguish solitary pulmonary metastasis of extrathoracic tumor from primary lung cancer based on the thin-slice CT images. For some solitary pulmonary metastasis of extrathoracic tumor, a comprehensive diagnostic approach taking both the anamnesis and pathological findings into consideration was required. (author).

  1. Evaluation of spinal cord vessels using multi-slice CT angiography

    International Nuclear Information System (INIS)

    Chen Shuang; Zhu Ruijiang; Feng Xiaoyuan

    2006-01-01

    Objective: To evaluate the value of multi-slice spiral CT angiography for spinal cord vessels. Methods: 11 adult subjects with suspected myelopathy underwent multi-slice spiral CT angiography. An iodine contrast agent was injected at 3.5 ml/s, for a total of 100 ml. The parameters were axial 16-slice mode, 0.625 mm slice thickness, 0.8 s rotation, a delay time depending on SmartPrep (15-25 s), and multi-phase scanning. Coronal and sagittal MPR and SSD images were generated on a workstation and compared with spinal digital subtraction angiography (DSA) to analyze normal or abnormal spinal cord vessels. Results: Spinal CTA and DSA findings were normal in six adult subjects, and spinal cord vascular malformations (1 intradural extramedullary AVF, 4 dural AVFs) were found in five cases. Recognizable intradural vessels corresponding to anterior median (midline) veins and/or anterior spinal arteries were shown in the six normal adult subjects. Abnormal intradural vessels were detected with CT angiography in all five spinal cord vascular malformations; in comparison with DSA, these vessels were primarily enlarged veins of the coronal venous plexus on the cord surface. Radiculomedullary-dural arteries could not be clearly shown in the four dural AVFs; only one anterior spinal artery was detected, in the patient with an intradural medullary AVF, which shunted directly between the anterior spinal artery and a perimedullary vein with a tortuous draining vessel. Conclusion: Multi-slice CT angiography is able to visualize normal and abnormal spinal cord vessels. It could be used as a noninvasive method to screen for spinal cord vascular disease. (authors)

  2. Assessment of sphenoid sinus volume in order to determine sexual identity, using multi-slice CT images

    Directory of Open Access Journals (Sweden)

    Habibeh Farazdaghi

    2017-02-01

    Full Text Available Background and Aims: Gender determination is an important step in identification. For gender determination, anthropometric evaluation is one of the main forensic evaluations. The aim of this study was the assessment of sphenoid sinus volume in order to determine sexual identity, using multi-slice CT images. Materials and Methods: For volumetric analysis, axial paranasal sinus CT scans with 3-mm slice thickness were used. For this study, 80 images (40 women and 40 men, all older than 18 years) were selected. For the assessment of sphenoid sinus volume, Digimizer software was used. The volume of the sphenoid sinus was calculated using the following equation: v = ∑ (area of each slice × thickness of each slice). Statistical analysis was performed by independent t-test. Results: The mean volume of the sphenoid sinus was significantly greater in males (P=0.01). The assessed cut-off point was 9.35 cm³: 63.4% of volume measurements greater than the cut-off point belonged to males, and 64.1% of those smaller than the cut-off point belonged to females. Conclusion: According to the area under the ROC curve (1.65%), sphenoid sinus volume is not an appropriate factor for differentiating males from females, which means the predictability of the cut-off point (9.35 cm³) is 65.1% close to reality.
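    The slice-summation volume formula used in this record is straightforward to compute; a minimal sketch follows, using hypothetical per-slice areas (in practice the areas would come from tracing the sinus outline on each axial CT image):

```python
# Hypothetical per-slice cross-sectional areas of the sphenoid sinus in mm^2,
# e.g. traced on consecutive axial CT images.
areas_mm2 = [0.0, 42.5, 118.0, 160.3, 151.9, 96.4, 30.2]
slice_thickness_mm = 3.0  # 3-mm axial slices, as in the study

# v = sum(area of each slice * thickness of each slice)
volume_mm3 = sum(a * slice_thickness_mm for a in areas_mm2)
volume_cm3 = volume_mm3 / 1000.0
print(f"Sphenoid sinus volume: {volume_cm3:.2f} cm^3")
```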

  3. Imaging by the SSFSE single slice method at different viscosities of bile

    International Nuclear Information System (INIS)

    Kubo, Hiroya; Usui, Motoki; Fukunaga, Kenichi; Yamamoto, Naruto; Ikegami, Toshimi

    2001-01-01

    The single shot fast spin echo single thick slice method (single slice method) is a technique that visualizes the water component alone using a heavy T2. However, this method is considered to be markedly affected by changes in the viscosity of the material because a very long TE is used, and changes in the T2 value, which are related to viscosity, directly affect imaging. In this study, we evaluated the relationship between the effects of TE and the T2 value of bile in the single slice method and also examined the relationship between the signal intensity of bile on T1- and T2-weighted images and imaging by MR cholangiography (MRC). It was difficult to image bile with high viscosities at a usual effective TE level of 700-1,500 ms. With regard to the relationship between the signal intensity of bile and MRC imaging, all T2 values of the bile samples showing relatively high signal intensities on the T1-weighted images suggested high viscosities, and MRC imaging of these bile samples was poor. In conclusion, MRC imaging of bile with high viscosities was poor with the single slice method. Imaging by the single slice method alone of bile showing a relatively high signal intensity on T1-weighted images should be avoided, and combination with other MRC sequences should be used. (author)

  4. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  5. Time-interpolator

    International Nuclear Information System (INIS)

    Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It covers a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty that accompanies the use of ECL logic is keeping the interconnections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into start and stop signals. The analog part consists of a Time to Amplitude Converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs

  6. A disposition of interpolation techniques

    NARCIS (Netherlands)

    Knotters, M.; Heuvelink, G.B.M.

    2010-01-01

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

  7. Relationships of clinical protocols and reconstruction kernels with image quality and radiation dose in a 128-slice CT scanner: Study with an anthropomorphic and water phantom

    International Nuclear Information System (INIS)

    Paul, Jijo; Krauss, B.; Banckwitz, R.; Maentele, W.; Bauer, R.W.; Vogl, T.J.

    2012-01-01

    Research highlights: ► Clinical protocol, reconstruction kernel, reconstructed slice thickness, phantom diameter or the density of material it contains directly affects the image quality of DSCT. ► Dual energy protocol shows the lowest DLP compared to all other protocols examined. ► Dual-energy fused images show excellent image quality and the noise is same as that of single- or high-pitch mode protocol images. ► Advanced CT technology improves image quality and considerably reduce radiation dose. ► An important finding is the comparatively higher DLP of the dual-source high-pitch protocol compared to other single- or dual-energy protocols. - Abstract: Purpose: The aim of this study was to explore the relationship of scanning parameters (clinical protocols), reconstruction kernels and slice thickness with image quality and radiation dose in a DSCT. Materials and methods: The chest of an anthropomorphic phantom was scanned on a DSCT scanner (Siemens Somatom Definition flash) using different clinical protocols, including single- and dual-energy modes. Four scan protocols were investigated: 1) single-source 120 kV, 110 mA s, 2) single-source 100 kV, 180 mA s, 3) high-pitch 120 kV, 130 mA s and 4) dual-energy with 100/Sn140 kV, eff.mA s 89, 76. The automatic exposure control was switched off for all the scans and the CTDIvol selected was in between 7.12 and 7.37 mGy. The raw data were reconstructed using the reconstruction kernels B31f, B80f and B70f, and slice thicknesses were 1.0 mm and 5.0 mm. Finally, the same parameters and procedures were used for the scanning of water phantom. Friedman test and Wilcoxon-Matched-Pair test were used for statistical analysis. Results: The DLP based on the given CTDIvol values showed significantly lower exposure for protocol 4, when compared to protocol 1 (percent difference 5.18%), protocol 2 (percent diff. 4.51%), and protocol 3 (percent diff. 8.81%). The highest change in Hounsfield Units was observed with dual

  8. Node insertion in Coalescence Fractal Interpolation Function

    International Nuclear Information System (INIS)

    Prasad, Srijanani Anurag

    2013-01-01

    The Iterated Function System (IFS) used in the construction of a Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point into a given set of interpolation data is called the problem of node insertion. In this paper, the effect of inserting a new point on the related IFS and on the Coalescence Fractal Interpolation Function is studied. The smoothness and fractal dimension of a CHFIF obtained with an inserted node are also discussed
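    For orientation, the sketch below constructs and samples a classical (single-variable, non-hidden-variable) fractal interpolation function from a set of interpolation nodes and vertical scaling factors, using the standard affine IFS and chaos-game iteration. It illustrates how the IFS coefficients depend on the interpolation data; the hidden-variable (CHFIF) construction and the node-insertion analysis of the paper are not reproduced here, and the node coordinates and scaling factors are made up.

```python
import numpy as np

def fractal_interpolation(points, d, n_iter=60000, seed=0):
    """Chaos-game sampling of the fractal interpolation function through `points`.

    points: interpolation nodes (x_i, y_i); d: vertical scaling factors with |d_i| < 1.
    """
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    x0, xN, y0, yN = x[0], x[-1], y[0], y[-1]
    rng = np.random.default_rng(seed)
    px, py = x0, y0
    samples = []
    for _ in range(n_iter):
        i = rng.integers(1, len(pts))              # pick one affine map w_i at random
        a = (x[i] - x[i - 1]) / (xN - x0)          # L_i(x) = a*x + e maps [x0, xN] to [x_{i-1}, x_i]
        e = x[i - 1] - a * x0
        c = (y[i] - y[i - 1] - d[i - 1] * (yN - y0)) / (xN - x0)
        f = y[i - 1] - c * x0 - d[i - 1] * y0      # F_i(x, y) = c*x + d_i*y + f
        px, py = a * px + e, c * px + d[i - 1] * py + f
        samples.append((px, py))
    return np.array(samples)

nodes = [(0, 0), (1, 1), (2, 0.5), (3, 2)]         # hypothetical interpolation data
graph = fractal_interpolation(nodes, d=[0.3, -0.4, 0.5])
print(graph.shape, graph[:, 0].min(), graph[:, 0].max())
```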

  9. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data
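    BIMOND itself is a FORTRAN-77 code for bivariate data; as a hedged one-dimensional illustration of the same idea (monotonicity-preserving piecewise-cubic Hermite interpolation), the sketch below contrasts SciPy's PCHIP interpolant with an ordinary cubic spline on monotone data. The data values are invented for the example.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline

# Monotone increasing data; a plain cubic spline can overshoot between points,
# while a monotone piecewise-cubic Hermite interpolant (PCHIP) cannot.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.1, 0.2, 5.0, 5.1])

xs = np.linspace(0.0, 4.0, 201)
pchip = PchipInterpolator(x, y)(xs)
spline = CubicSpline(x, y)(xs)

print("PCHIP stays monotone:", np.all(np.diff(pchip) >= 0))
print("Cubic spline stays monotone:", np.all(np.diff(spline) >= 0))
```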

  10. High-resolution ex vivo imaging of coronary artery stents using 64-slice computed tomography - initial experience

    International Nuclear Information System (INIS)

    Rist, Carsten; Nikolaou, Konstantin; Wintersperger, Bernd J.; Reiser, Maximilian F.; Becker, Christoph R.; Flohr, Thomas

    2006-01-01

    The aim of the study was to evaluate the potential of new-generation multi-slice computed tomography (CT) scanner technology for the delineation of coronary artery stents in an ex vivo setting. Nine stents of various diameters (seven stents 3 mm, two stents 2.5 mm) were implanted into the coronary arteries of ex vivo porcine hearts and filled with a mixture of an iodine-containing contrast agent. Specimens were scanned with a 16-slice CT (16SCT) machine; (Somatom Sensation 16, Siemens Medical Solutions), slice thickness 0.75 mm, and a 64-slice CT (64SCT, Somatom Sensation 64), slice-thickness 0.6 mm. Stent diameters as well as contrast densities were measured, on both the 16SCT and 64SCT images. No significant differences of CT densities were observed between the 16SCT and 64SCT images outside the stent lumen: 265±25HU and 254±16HU (P=0.33), respectively. CT densities derived from the 64SCT images and 16SCT images within the stent lumen were 367±36HU versus 402±28HU, P<0.05, respectively. Inner and outer stent diameters as measured from 16SCT and 64SCT images were 2.68±0.08 mm versus 2.81±0.07 mm and 3.29±0.06 mm versus 3.18±0.07 mm (P<0.05), respectively. The new 64SCT scanner proved to be superior in the ex vivo assessment of coronary artery stents to the conventional 16SCT machine. Increased spatial resolution allows for improved assessment of the coronary artery stent lumen. (orig.)

  11. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    G. Garnero

    2014-01-01

    In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, together with test sections acquired at higher resolution and independent digital models of the same territory, makes it possible to set up a series of analyses and consequently determine the best interpolation methodologies. The analysis of the residuals on the test sites allows the descriptive statistics of the computed values to be calculated: all the algorithms furnished interesting results; notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gives the best results in this study case. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may cause a quality reduction of the final output in the interpolation phase.
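    For reference, a minimal IDW implementation of the kind compared in such studies might look as follows; the sample coordinates, elevations and power exponent are invented for illustration and are not taken from the Regione Piemonte dataset:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse Distance Weighting: weights decay with distance**(-power)."""
    xy_known = np.asarray(xy_known, dtype=float)
    z_known = np.asarray(z_known, dtype=float)
    out = []
    for q in np.atleast_2d(xy_query).astype(float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query coincides with a sample point
            out.append(z_known[d == 0][0])
            continue
        w = 1.0 / d**power
        out.append(np.sum(w * z_known) / np.sum(w))
    return np.array(out)

# Hypothetical elevations sampled at scattered points
pts = [(0, 0), (10, 0), (0, 10), (10, 10)]
elev = [100.0, 105.0, 102.0, 110.0]
print(idw(pts, elev, [(5, 5), (1, 1)]))
```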

  12. Research progress and hotspot analysis of spatial interpolation

    Science.gov (United States)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature related to spatial interpolation between 1982 and 2017, as indexed in the Web of Science core database, is used as the data source, and a visualization analysis is carried out using the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation has experienced three stages: slow development, steady development and rapid development. There is a cross effect between 11 clustering groups; the main research threads converge on spatial interpolation theory, on practical applications and case studies of spatial interpolation, and on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research framework; it is strongly interdisciplinary and widely used in various fields.

  13. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be realized by using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation and so on. In this paper, the influence of piecewise linear interpolation, quadratic interpolation and cubic spline interpolation on the data synchronization of electronic transformers is computed; then the computational complexity, synchronization precision, reliability and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
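    The comparison described here can be reproduced in miniature with standard tools: resample one sampled channel onto the synchronization instants of another using linear, quadratic and cubic spline interpolation and compare the errors. The sampling rate, time offset and test signal below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.interpolate import interp1d, CubicSpline

f_signal = 50.0                              # power-frequency signal, Hz
t_sampled = np.arange(0, 0.04, 1 / 4000)     # merging-unit samples at 4 kHz
x_sampled = np.sin(2 * np.pi * f_signal * t_sampled)

# Resample onto the synchronization instants of another channel
t_sync = t_sampled[:-1] + 37e-6              # hypothetical 37 µs offset

linear = interp1d(t_sampled, x_sampled, kind="linear")(t_sync)
quadratic = interp1d(t_sampled, x_sampled, kind="quadratic")(t_sync)
spline = CubicSpline(t_sampled, x_sampled)(t_sync)

truth = np.sin(2 * np.pi * f_signal * t_sync)
for name, est in [("linear", linear), ("quadratic", quadratic), ("cubic spline", spline)]:
    print(f"{name:12s} max error: {np.max(np.abs(est - truth)):.2e}")
```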

  14. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. Firstly, a second-order Taylor expansion is applied to the numerator of the parametric NURBS curve representation. Then, the velocity planning algorithm is combined with the NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and the interpolation of the NURBS curve can be completed.
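    A common form of the second-order Taylor parameter update used in CNC curve interpolators is sketched below; whether it matches the authors' exact scheme is an assumption, and an ellipse stands in for the NURBS curve so the example stays self-contained (a full NURBS evaluator is omitted):

```python
import numpy as np

def taylor_step(C, dC, ddC, u, v, T):
    """Second-order Taylor update of the curve parameter for a commanded feed v.

    u_{k+1} = u_k + v*T/|C'| - (v*T)^2 * (C'.C'') / (2*|C'|^4)
    """
    d1, d2 = dC(u), ddC(u)
    n1 = np.linalg.norm(d1)
    return u + v * T / n1 - (v * T) ** 2 * np.dot(d1, d2) / (2 * n1 ** 4)

# Illustrative parametric curve (an ellipse) standing in for a NURBS curve
a, b = 40.0, 20.0
C   = lambda u: np.array([a * np.cos(u), b * np.sin(u)])
dC  = lambda u: np.array([-a * np.sin(u), b * np.cos(u)])
ddC = lambda u: np.array([-a * np.cos(u), -b * np.sin(u)])

v, T = 50.0, 0.001          # feed rate (mm/s) and interpolation period (s), assumed values
u, pts = 0.0, []
for _ in range(200):
    pts.append(C(u))
    u = taylor_step(C, dC, ddC, u, v, T)

# Chord lengths should stay close to the commanded step v*T
steps = np.linalg.norm(np.diff(np.array(pts), axis=0), axis=1)
print("chord lengths: mean %.4f mm, max deviation %.2e mm" % (steps.mean(), np.abs(steps - v * T).max()))
```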

  15. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability is compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers

  16. Ripple artifact reduction using slice overlap in slice encoding for metal artifact correction.

    Science.gov (United States)

    den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens

    2015-01-01

    Multispectral imaging (MSI) significantly reduces metal artifacts. Yet, especially in techniques that use gradient selection, such as slice encoding for metal artifact correction (SEMAC), a residual ripple artifact may be prominent. Here, an analysis is presented of the ripple artifact and of slice overlap as an approach to reduce the artifact. The ripple artifact was analyzed theoretically to clarify its cause. Slice overlap, conceptually similar to spectral bin overlap in multi-acquisition with variable resonances image combination (MAVRIC), was achieved by reducing the selection gradient and, thus, increasing the slice profile width. Time domain simulations and phantom experiments were performed to validate the analyses and proposed solution. Discontinuities between slices are aggravated by signal displacement in the frequency encoding direction in areas with deviating B0. Specifically, it was demonstrated that ripple artifacts appear only where B0 varies both in-plane and through-plane. Simulations and phantom studies of metal implants confirmed the efficacy of slice overlap to reduce the artifact. The ripple artifact is an important limitation of gradient selection based MSI techniques, and can be understood using the presented simulations. At a scan-time penalty, slice overlap effectively addressed the artifact, thereby improving image quality near metal implants. © 2014 Wiley Periodicals, Inc.

  17. Microfilament Contraction Promotes Rounding of Tunic Slices: An Integumentary Defense System in the Colonial Ascidian Aplidium yamazii.

    Science.gov (United States)

    Hirose, E; Ishii, T

    1995-08-01

    In Aplidium yamazii, when a slice of a live colony (approximately 0.5 mm thick) was incubated in seawater for 12 h, the slice became a round tunic fragment. This tunic rounding was inhibited by freezing of the slices, incubation with Ca2+-Mg2+ -free seawater, or addition of cytochalasin B. Staining of microfilaments in the slices with phalloidin-FITC showed the existence of a cellular network in the tunic. Contraction of this cellular network probably promotes rounding of the tunic slice. In electron microscopic observations, a new tunic cuticle regenerated at the surface of the round tunic fragments; the tunic cuticle did not regenerate in newly sliced specimens nor in specimens in which rounding was experimentally inhibited. Based on these results, an integumentary defense system is proposed in this species as follows. (1) When the colony is wounded externally, contraction of the cellular network promotes tunic contraction around the wound. (2) The wound is almost closed by tunic contraction. (3) Tunic contraction increases the density of the filamentous components of the tunic at the wound, and it may accelerate the regeneration of tunic cuticle there.

  18. Imaging by the SSFSE single slice method at different viscosities of bile

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Hiroya; Usui, Motoki; Fukunaga, Kenichi; Yamamoto, Naruto; Ikegami, Toshimi [Kawasaki Hospital, Kobe (Japan)

    2001-11-01

    The single shot fast spin echo single thick slice method (single slice method) is a technique that visualizes the water component alone using a heavy T2. However, this method is considered to be markedly affected by changes in the viscosity of the material because a very long TE is used, and changes in the T2 value, which are related to viscosity, directly affect imaging. In this study, we evaluated the relationship between the effects of TE and the T2 value of bile in the single slice method and also examined the relationship between the signal intensity of bile on T1- and T2-weighted images and imaging by MR cholangiography (MRC). It was difficult to image bile with high viscosities at a usual effective TE level of 700-1,500 ms. With regard to the relationship between the signal intensity of bile and MRC imaging, all T2 values of the bile samples showing relatively high signal intensities on the T1-weighted images suggested high viscosities, and MRC imaging of these bile samples was poor. In conclusion, MRC imaging of bile with high viscosities was poor with the single slice method. Imaging by the single slice method alone of bile showing a relatively high signal intensity on T1-weighted images should be avoided, and combination with other MRC sequences should be used. (author)

  19. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by the introduction of gradation in this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame. Consequently, the validity of the model’s dynamics is not secured. The aim of this paper is to present the Boolean consistent generalization of Boolean networks, interpolative Boolean networks. The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and it offers greater descriptive power as compared with traditional models. For illustrative purposes, IBN is compared to the models based on existing real-valued approaches. Due to the complexity of the most systems to be analyzed and the characteristics of interpolative Boolean algebra, the software support is developed to provide graphical and numerical tools for complex system modeling and analysis.

  20. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

    Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085

  1. A simple water-immersion condenser for imaging living brain slices on an inverted microscope.

    Science.gov (United States)

    Prusky, G T

    1997-09-05

    Due to some physical limitations of conventional condensers, inverted compound microscopes are not optimally suited for imaging living brain slices with transmitted light. Herein is described a simple device that converts an inverted microscope into an effective tool for this application by utilizing an objective as a condenser. The device is mounted on a microscope in place of the condenser, is threaded to accept a water immersion objective, and has a slot for a differential interference contrast (DIC) slider. When combined with infrared video techniques, this device allows an inverted microscope to effectively image living cells within thick brain slices in an open perfusion chamber.

  2. Optimum and robust 3D facies interpolation strategies in a heterogeneous coal zone (Tertiary As Pontes basin, NW Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Falivene, Oriol; Cabrera, Lluis; Saez, Alberto [Geomodels Institute, Group of Geodynamics and Basin Analysis, Department of Stratigraphy, Paleontology and Marine Geosciences, Universitat de Barcelona, c/ Marti i Franques s/n, Facultat de Geologia, 08028 Barcelona (Spain)

    2007-07-02

    Coal exploration and mining in extensively drilled and sampled coal zones can benefit from 3D statistical facies interpolation. Starting from closely spaced core descriptions, and using interpolation methods, a 3D optimum and robust facies distribution model was obtained for a thick, heterogeneous coal zone deposited in the non-marine As Pontes basin (Oligocene-Early Miocene, NW Spain). Several grid layering styles, interpolation methods (truncated inverse squared distance weighting, truncated kriging, truncated kriging with an areal trend, indicator inverse squared distance weighting, indicator kriging, and indicator kriging with an areal trend) and searching conditions were compared. Facies interpolation strategies were evaluated using visual comparison and cross validation. Moreover, robustness of the resultant facies distribution with respect to variations in interpolation method input parameters was verified by taking into account several scenarios of uncertainty. The resultant 3D facies reconstruction improves the understanding of the distribution and geometry of the coal facies. Furthermore, since some coal quality properties (e.g. calorific value or sulphur percentage) display a good statistical correspondence with facies, predicting the distribution of these properties using the reconstructed facies distribution as a template proved to be a powerful approach, yielding more accurate and realistic reconstructions of these properties in the coal zone. (author)

  3. Evaluation of the retrospective ECG-gated helical scan using half-second multi-slice CT. Motion phantom study for volumetry

    International Nuclear Information System (INIS)

    Yamamoto, Shuji; Matsumoto, Takashi; Nakanishi, Shohzoh; Hamada, Seiki; Takahei, Kazunari; Naito, Hiroaki; Ogata, Yuji

    2002-01-01

    The ECG-synchronized technique on multi-slice CT provides thinner (less than 2 mm slice thickness) and faster (0.5 s/rotation) scans than single-detector CT and can acquire coverage of the entire heart volume within one breath-hold. However, the temporal resolution of multi-slice CT is insufficient over the practical range of heart rates. The purpose of this study was to evaluate the accuracy of volumetry for cardiac function measurement with the retrospective ECG-gated helical scan. We discuss the influence of image quality degradation and the limitation of heart rate on cardiac function measurement (volumetry) using a motion phantom. (author)

  4. An Improved Rotary Interpolation Based on FPGA

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2014-08-01

    Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic terms are needed for the interpolation operation; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.

  5. Fuzzy linguistic model for interpolation

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Adabitabar Firozja, M.

    2007-01-01

    In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method

  6. Comparison of iterative model, hybrid iterative, and filtered back projection reconstruction techniques in low-dose brain CT: impact of thin-slice imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nakaura, Takeshi; Iyama, Yuji; Kidoh, Masafumi; Yokoyama, Koichi [Amakusa Medical Center, Diagnostic Radiology, Amakusa, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Oda, Seitaro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tokuyasu, Shinichi [Philips Electronics, Kumamoto (Japan); Harada, Kazunori [Amakusa Medical Center, Department of Surgery, Kumamoto (Japan)

    2016-03-15

    The purpose of this study was to evaluate the utility of iterative model reconstruction (IMR) in brain CT especially with thin-slice images. This prospective study received institutional review board approval, and prior informed consent to participate was obtained from all patients. We enrolled 34 patients who underwent brain CT and reconstructed axial images with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and IMR with 1 and 5 mm slice thicknesses. The CT number, image noise, contrast, and contrast noise ratio (CNR) between the thalamus and internal capsule, and the rate of increase of image noise in 1 and 5 mm thickness images between the reconstruction methods, were assessed. Two independent radiologists assessed image contrast, image noise, image sharpness, and overall image quality on a 4-point scale. The CNRs in 1 and 5 mm slice thickness were significantly higher with IMR (1.2 ± 0.6 and 2.2 ± 0.8, respectively) than with FBP (0.4 ± 0.3 and 1.0 ± 0.4, respectively) and HIR (0.5 ± 0.3 and 1.2 ± 0.4, respectively) (p < 0.01). The mean rate of increasing noise from 5 to 1 mm thickness images was significantly lower with IMR (1.7 ± 0.3) than with FBP (2.3 ± 0.3) and HIR (2.3 ± 0.4) (p < 0.01). There were no significant differences in qualitative analysis of unfamiliar image texture between the reconstruction techniques. IMR offers significant noise reduction and higher contrast and CNR in brain CT, especially for thin-slice images, when compared to FBP and HIR. (orig.)

  7. Portable Device Slices Thermoplastic Prepregs

    Science.gov (United States)

    Taylor, Beverly A.; Boston, Morton W.; Wilson, Maywood L.

    1993-01-01

    Prepreg slitter designed to slit various widths rapidly by use of slicing bar holding several blades, each capable of slicing strip of preset width in single pass. Produces material evenly sliced and does not contain jagged edges. Used for various applications in such batch processes involving composite materials as press molding and autoclaving, and in such continuous processes as pultrusion. Useful to all manufacturers of thermoplastic composites, and in slicing B-staged thermoset composites.

  8. A method of image improvement in three-dimensional imaging

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Huang, Tewen; Furuhata, Kentaro; Uchino, Masafumi.

    1988-01-01

    In general, image interpolation is required when the surface configurations of structures such as bones and organs are reconstructed three-dimensionally from the multi-slice images obtained by CT. Image interpolation is a processing method whereby an artificial image is inserted between two adjacent slices to make the spatial resolution apparently equal to the in-slice resolution. Such image interpolation makes it possible to increase the image quality of the constructed three-dimensional image. In our newly developed algorithm, the current slice image and the adjacent slice image are converted to distance images, and the interpolation images are generated from these two distance images. As a result, compared with the previous method, three-dimensional images with better image quality have been constructed. (author)

  9. Convergence of trajectories in fractal interpolation of stochastic processes

    International Nuclear Information System (INIS)

    MaIysz, Robert

    2006-01-01

    The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation

  10. Effectiveness of thin-slice axial images of multidetector row CT for visualization of bronchial artery before bronchial arterial embolization

    International Nuclear Information System (INIS)

    Shida, Yoshitaka; Hasuo, Kanehiro; Aibe, Hitoshi; Kubo, Yuko; Terashima, Kotaro; Kinjo, Maya; Kamano, H.; Yoshida, Atsuko

    2008-01-01

    We assessed the ability of visualization of bronchial artery (BA) by using thin-slice axial images of 4-detector multidetector row CT in 65 patients with hemoptysis. In all patients, the origins of BA were well identified with observation of consecutive axial images with 1 mm thickness by paging method and bronchial arterial embolization (BAE) was performed successfully. Thin-slice axial images were considered to be useful to recognize BA and to perform BAE in patients with hemoptysis. (author)

  11. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
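    A minimal sketch of the binary shape-based interpolation described above (signed distance transform, linear interpolation of the distance maps, thresholding at zero) is given below; the grey-level "lifting" extension proposed by the authors is not reproduced, and the two disc-shaped test slices are synthetic:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Signed distance map: positive inside the object, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def shape_based_slice(mask_a, mask_b, t):
    """Interpolate a binary slice at fractional position t between two slices."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    return ((1 - t) * da + t * db) >= 0   # threshold the interpolated distance at zero

# Two hypothetical binary cross-sections: a small and a larger disc
yy, xx = np.mgrid[0:64, 0:64]
slice_a = (xx - 32) ** 2 + (yy - 32) ** 2 <= 8 ** 2
slice_b = (xx - 32) ** 2 + (yy - 32) ** 2 <= 20 ** 2
mid = shape_based_slice(slice_a, slice_b, 0.5)
print("interpolated object area:", int(mid.sum()))   # lies between the two input areas
```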

  12. Fresh Slice Self-Seeding and Fresh Slice Harmonic Lasing at LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Amann, J.W. [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2018-04-01

    We present results from the successful demonstration of fresh slice self-seeding at the Linac Coherent Light Source (LCLS).* The performance is compared with SASE and regular self-seeding at photon energy of 5.5 keV, resulting in a relative average brightness increase of a factor of 12 and a factor of 2 respectively. Following this proof-of-principle we discuss the forthcoming plans to use the same technique** for fresh slice harmonic lasing in an upcoming experiment. The demonstration of fresh slice harmonic lasing provides an attractive solution for future XFELs aiming to achieve high efficiency, high brightness X-ray pulses at high photon energies (>12 keV).***

  13. Flat slices in Minkowski space

    Science.gov (United States)

    Murchadha, Niall Ó.; Xie, Naqing

    2015-03-01

    Minkowski space, flat spacetime, with a distance measure in natural units of ds² = -dt² + dx² + dy² + dz², or equivalently, with spacetime metric diag(-1, +1, +1, +1), is recognized as a fundamental arena for physics. The Poincaré group, the set of all rigid spacetime rotations and translations, is the symmetry group of Minkowski space. The action of this group preserves the form of the spacetime metric. Each t = constant slice of each preferred coordinate system is flat. We show that there are also nontrivial non-singular representations of Minkowski space with complete flat slices. If the embedding of the flat slices decays appropriately at infinity, the only flat slices are the standard ones. However, if we remove the decay condition, we find non-trivial flat slices with non-vanishing extrinsic curvature. We write out explicitly the coordinate transformation to a frame with such slices.

  14. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    Full Text Available In this contribution we present transform and neural network approaches to the interpolation of images. From transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From neural network point of view, we present interpolation possibilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  15. Comparing interpolation schemes in dynamic receive ultrasound beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

    2005-01-01

    In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as the simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...
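    The reason the interpolation scheme matters in receive focusing is that echo samples must be read at fractional-sample delays. The toy single-channel sketch below compares nearest-neighbour and linear interpolation of a simulated RF trace at such a delay; it is not a Field II simulation, and the pulse parameters and delay value are invented:

```python
import numpy as np

fs, f0 = 100e6, 7e6                      # sampling rate and pulse centre frequency (assumed)
t = np.arange(0, 2e-6, 1 / fs)
envelope = lambda tt: np.exp(-((tt - 1e-6) ** 2) / (2 * (0.15e-6) ** 2))
rf = np.sin(2 * np.pi * f0 * t) * envelope(t)   # Gaussian-windowed tone burst

def sample_at(delay_s, kind):
    """Read the RF trace at a fractional-sample delay using a given interpolation."""
    pos = delay_s * fs
    i = int(np.floor(pos))
    frac = pos - i
    if kind == "nearest":
        return rf[int(round(pos))]
    if kind == "linear":
        return (1 - frac) * rf[i] + frac * rf[i + 1]

true_delay = 1.2345e-6
exact = np.sin(2 * np.pi * f0 * true_delay) * envelope(true_delay)
for kind in ("nearest", "linear"):
    print(kind, abs(sample_at(true_delay, kind) - exact))
```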

  16. Experimental demonstration of spectrum-sliced elastic optical path network (SLICE).

    Science.gov (United States)

    Kozicki, Bartłomiej; Takara, Hidehiko; Tsukishima, Yukio; Yoshimatsu, Toshihide; Yonenaga, Kazushige; Jinno, Masahiko

    2010-10-11

    We describe experimental demonstration of spectrum-sliced elastic optical path network (SLICE) architecture. We employ optical orthogonal frequency-division multiplexing (OFDM) modulation format and bandwidth-variable optical cross-connects (OXC) to generate, transmit and receive optical paths with bandwidths of up to 1 Tb/s. We experimentally demonstrate elastic optical path setup and spectrally-efficient transmission of multiple channels with bit rates ranging from 40 to 140 Gb/s between six nodes of a mesh network. We show dynamic bandwidth scalability for optical paths with bit rates of 40 to 440 Gb/s. Moreover, we demonstrate multihop transmission of a 1 Tb/s optical path over 400 km of standard single-mode fiber (SMF). Finally, we investigate the filtering properties and the required guard band width for spectrally-efficient allocation of optical paths in SLICE.

  17. 5-D interpolation with wave-front attributes

    Science.gov (United States)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and wave-front curvatures. These attributes contain structural information on subsurface features such as the dip and strike of a reflector. The wave-front attributes work on the 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved in addition to the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In the past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that

  18. Mass transfer characteristics of bisporus mushroom ( Agaricus bisporus) slices during convective hot air drying

    Science.gov (United States)

    Ghanbarian, Davoud; Baraani Dastjerdi, Mojtaba; Torki-Harchegani, Mehdi

    2016-05-01

    An accurate understanding of moisture transfer parameters, including moisture diffusivity and the moisture transfer coefficient, is essential for efficient mass transfer analysis and to design new dryers or improve existing drying equipment. The main objective of the present study was to carry out an experimental and theoretical investigation of mushroom slice drying and to determine the mass transfer characteristics of the samples dried under different conditions. The mushroom slices with two thicknesses of 3 and 5 mm were dried at air temperatures of 40, 50 and 60 °C and air flow rates of 1 and 1.5 m s⁻¹. The Dincer and Dost model was used to determine the moisture transfer parameters and predict the drying curves. It was observed that the entire drying process took place in the falling drying rate period. The obtained lag factor and Biot number indicated that the moisture transfer in the samples was controlled by both internal and external resistance. The effective moisture diffusivity and the moisture transfer coefficient increased with increasing air temperature, air flow rate and sample thickness, and varied in the ranges of 6.5175 × 10⁻¹⁰ to 1.6726 × 10⁻⁹ m² s⁻¹ and 2.7715 × 10⁻⁷ to 3.5512 × 10⁻⁷ m s⁻¹, respectively. The validation of the Dincer and Dost model indicated a good capability of the model to describe the drying curves of the mushroom slices.
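    For context, effective moisture diffusivity is often estimated from the slope of ln(moisture ratio) versus time using the first-term Fick solution for an infinite slab; the sketch below uses that generic approach with made-up drying data, not the Dincer and Dost procedure applied by the authors:

```python
import numpy as np

# Hypothetical drying data for a 5 mm thick mushroom slice (moisture ratio vs time)
t_s = np.array([0, 600, 1200, 1800, 2400, 3600, 4800])            # s
MR  = np.array([1.00, 0.74, 0.55, 0.41, 0.30, 0.17, 0.09])

L = 5e-3 / 2                                    # half-thickness (m): slab dried from both faces
slope, _ = np.polyfit(t_s, np.log(MR), 1)       # fit ln(MR) = ln(G) - S*t
D_eff = -slope * 4 * L**2 / np.pi**2            # first-term Fick slab solution
print(f"Effective moisture diffusivity: {D_eff:.2e} m^2/s")
```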

  19. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

    African Journals Online (AJOL)

    Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...

  20. The effect of slicing type on drying kinetics and quality of dried carrot

    Directory of Open Access Journals (Sweden)

    M Naghipour zadeh mahani

    2016-04-01

    Full Text Available Introduction: Carrot is one of the most common vegetables used for human nutrition because of its high vitamin and fiber content. Drying improves the product shelf life without the addition of any chemical preservative and reduces both the package size and the transport cost. Drying also helps to reduce postharvest losses of fruits and vegetables, which can be as high as 70%. Dried carrots are used in dehydrated soups and, in powder form, in pastries and sauces. The main aim of drying agricultural products is to decrease the moisture content to a level which allows safe storage over an extended period. Many fruits and vegetables can be sliced before drying. Because the tissue of a fruit or vegetable is not uniform, cutting it in different directions and shapes creates slices with different tissue structures. Since drying is the process by which moisture exits the internal tissue, different tissue slices lead to different drying kinetics. Therefore, studying the effect of cutting parameters on drying is necessary. Materials and Methods: Carrots (Daucus carota L.) were purchased from the local market (Kerman, Iran) and stored in a refrigerator at 5°C. The initial moisture content of the carrot samples was determined by the oven drying method: the sample was dried in an oven at 105±2°C for about 24 hours. The carrots were cut with 3 blade models in 3 directions. The samples were dried in an oven at 70°C. The moisture content of the carrot slices was determined by weighing the samples during drying. Volume changes due to sample shrinkage were measured by a water displacement method. The rehydration experiment was performed by immersing a weighed amount of dried samples in hot water at 50 °C for 30 min. In this study, the effect of some cutting parameters on carrot drying and on the quality of the final dried product was considered. The tests were performed as a completely randomized design. The effects of carrot thickness at two levels (3 and 6 mm, blade in 3 models (flat blade

  1. Interpolation of quasi-Banach spaces

    International Nuclear Information System (INIS)

    Tabacco Vignati, A.M.

    1986-01-01

    This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces, introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L/sup P/ spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces

  2. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which an improved particle swarm optimization is applied to the optimization of the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. Then the support vector machine with optimal parameters is trained using these training samples. After the training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with the subjective quality.

  3. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castango 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  4. Single-slice epicardial fat area measurement. Do we need to measure the total epicardial fat volume?

    International Nuclear Information System (INIS)

    Oyama, Noriko; Goto, Daisuke; Ito, Yoichi M.

    2011-01-01

    The aim of this study was to assess a method for measuring epicardial fat volume (EFV) by means of a single-slice area measurement. We investigated the relation between a single-slice fat area measurement and total EFV. A series of 72 consecutive patients (ages 65±11 years; 36 men) who had undergone cardiac computed tomography (CT) on a 64-slice multidetector scanner with prospective electrocardiographic triggering were retrospectively reviewed. Pixels in the pericardium with a density range from -230 to -30 Hounsfield units were considered fat, giving the per-slice epicardial fat area (EFA). The EFV was estimated by the summation of EFAs multiplied by the slice thickness. We investigated the relation between total EFV and each EFA. EFAs measured at several anatomical landmarks - right pulmonary artery, origins of the left main coronary artery, right coronary artery, coronary sinus - all correlated with the EFV (r=0.77-0.92). The EFA at the LMCA level was highly reproducible and showed an excellent correlation with the EFV (r=0.92). The EFA is significantly correlated with the EFV. The EFA is a simple, quick method for representing the time-consuming EFV, which has been used as a predictive indicator of cardiovascular diseases. (author)

  5. A map for the thick beam-beam interaction

    International Nuclear Information System (INIS)

    Irwin, J.; Chen, T.

    1995-01-01

    The authors give a closed-form expression for the thick beam-beam interaction for a small disruption parameter, as typical in electron-positron storage rings. The dependence on transverse angle and position of the particle trajectory as well as the longitudinal position of collision and the waist-modified shape of the beam distribution are included. Large incident angles, as are present for beam-halo particles or for large crossing-angle geometry, are accurately represented. The closed-form expression is well approximated by polynomials times the complex error function. Comparisons with multi-slice representations show even the first order terms are more accurate than a five slice representation, saving a factor of 5 in computation time

  6. Thin-layer catalytic far-infrared radiation drying and flavour of tomato slices

    Directory of Open Access Journals (Sweden)

    Ernest Ekow Abano

    2014-06-01

    Full Text Available A far-infrared radiation (FIR) catalytic laboratory dryer was designed by us and used to dry tomato. The drying kinetics of tomato slices with FIR energy depended on both the distance from the heat source and the sample thickness. Numerical evaluation of the simplified Fick's law for the Fourier number showed that the effective moisture diffusivity increased from 0.193×10⁻⁹ to 1.893×10⁻⁹ m²/s, from 0.059×10⁻⁹ to 2.885×10⁻⁹ m²/s, and from 0.170×10⁻⁹ to 4.531×10⁻⁹ m²/s for the 7, 9, and 11 mm thick slices as the moisture content decreased. Application of FIR enhanced the flavour of the dried tomatoes by 36.6% when compared with the raw ones. The results demonstrate that, in addition to shorter drying times, the flavour of the products can be enhanced with FIR. Therefore, FIR drying should be considered an efficient drying method for tomato with respect to minimization of processing time, enhancement of flavour, and improvements in the quality and functional properties of dried tomatoes.

  7. Differential Interpolation Effects in Free Recall

    Science.gov (United States)

    Petrusic, William M.; Jamieson, Donald G.

    1978-01-01

    Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…

  8. SAR image formation with azimuth interpolation after azimuth transform

    Science.gov (United States)

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  9. The relationship between image quality and CT dose index of multi-slice low-dose chest CT

    International Nuclear Information System (INIS)

    Zhu Xiaohua; Shao Jiang; Shi Jingyun; You Zhengqian; Li Shijun; Xue Yongming

    2003-01-01

    Objective: To explore the rationality and feasibility of multi-slice low-dose CT scanning in examination of the chest. Methods: (1) X-ray dose index measurement: 120 kV tube voltage, 0.75 s rotation, 8 mm and 3 mm slice thickness, and tube current settings of 115.0, 40.0, 25.0, and 7.5 mAs were employed for each section. The X-ray radiation dose was measured and compared statistically. (2) Phantom measurement of homogeneity and noise: the technical parameters were 120 kV, 0.75 s, 8 mm and 3 mm sections, and every slice was scanned using tube currents of 115.0, 40.0, 25.0, and 7.5 mAs. Five identical regions of interest were measured on every image, and the homogeneity and noise level of CT were appraised. (3) Multi-slice low-dose CT in patients: 30 patients with a mass and 30 with a patch shadow in the lung were selected randomly. The technical parameters were 120 kV, 0.75 s, 8 mm and 3 mm slice thickness, and tube currents of 115.0, 40.0, 25.0, 15.0, and 7.5 mAs were employed for each slice. In addition, 15 cases were examined with helical scanning using tube currents of 190, 150, 40, 25, and 15 mAs. The reconstruction images of MIP, MPR, CVR, HRCT, 3D, CT virtual endoscopy, and various reconstruction intervals were compared. (4) Evaluation of image quality: CT images were evaluated by four doctors using a single-blind method, graded as normal image, image with few artifacts, or image with excessive artifacts, and analyzed statistically. Results: (1) The CT dose index with 115.0 mAs tube current exceeded those of 40.0, 25.0, and 7.5 mAs by about 60%, 70%, and 85%, respectively. (2) The phantom measurements showed that the lower the CT dose, the lower the homogeneity and the higher the noise level. (3) Image quality evaluation: the percentage of normal images showed no significant difference between 8 and 3 mm at 115, 40, and 25 mAs (P>0.05). Conclusion: Multi-slice low-dose chest CT technology may protect the patients and guarantee the

  10. The use of maxillary sinus dimensions in gender determination: a thin-slice multidetector computed tomography assisted morphometric study.

    Science.gov (United States)

    Ekizoglu, Oguzhan; Inci, Ercan; Hocaoglu, Elif; Sayin, Ibrahim; Kayhan, Fatma Tulin; Can, Ismail Ozgur

    2014-05-01

    Gender determination is an important step in identification. For gender determination, anthropometric evaluation is one of the main forensic evaluations. In the present study, morphometric analysis of the maxillary sinuses was performed to determine gender. For the morphometric analysis, coronal and axial paranasal sinus computed tomography (CT) scans with 1-mm slice thickness were used. For this study, 140 subjects (70 women and 70 men) were enrolled (ages ranged between 18 and 63 years). The maxillary sinuses of each subject were measured in the anteroposterior, transverse, and cephalocaudal dimensions, and sinus volumes were determined. In each measurement, the size of the maxillary sinus was significantly smaller in women. When discriminant analysis was performed, the accuracy rate was 80% for women and 74.3% for men, with an overall rate of 77.15%. With the use of 1-mm slice thickness CT, morphometric analysis of the maxillary sinuses will be helpful for gender determination.

  11. Organotypic brain slice cultures of adult transgenic P301S mice--a model for tauopathy studies.

    Directory of Open Access Journals (Sweden)

    Agneta Mewes

    BACKGROUND: Organotypic brain slice cultures represent an excellent compromise between single cell cultures and complete animal studies, in this way replacing and reducing the number of animal experiments. Organotypic brain slices are widely applied to model neuronal development and regeneration as well as neuronal pathology concerning stroke, epilepsy and Alzheimer's disease (AD). AD is characterized by two protein alterations, namely tau hyperphosphorylation and excessive amyloid β deposition, both causing microglia and astrocyte activation. Deposits of hyperphosphorylated tau, called neurofibrillary tangles (NFTs), surrounded by activated glia are modeled in transgenic mice, e.g. the tauopathy model P301S. METHODOLOGY/PRINCIPAL FINDINGS: In this study we explore the benefits and limitations of organotypic brain slice cultures made of mature adult transgenic mice as a potential model system for the multifactorial phenotype of AD. First, neonatal (P1) and adult organotypic brain slice cultures from 7- to 10-month-old transgenic P301S mice have been compared with regard to vitality, which was monitored with the lactate dehydrogenase (LDH) and MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assays over 15 days. Neonatal slices displayed a constant high vitality level, while the vitality of adult slice cultures decreased significantly upon cultivation. Various preparation and cultivation conditions were tested to augment the vitality of adult slices and improvements were achieved with a reduced slice thickness, a mild hypothermic cultivation temperature and a cultivation CO2 concentration of 5%. Furthermore, we present a substantial immunohistochemical characterization analyzing the morphology of neurons, astrocytes and microglia in comparison to neonatal tissue. CONCLUSION/SIGNIFICANCE: Until now only adolescent animals with a maximum age of two months have been used to prepare organotypic brain slices. The current study

  12. The effect of thickness in the through-diffusion experiment. Final report

    International Nuclear Information System (INIS)

    Valkiainen, M.; Aalto, H.; Lehikoinen, J.; Uusheimo, K.

    1996-01-01

    The report contains an experimental study of diffusion in the water-filled pores of rock samples. The samples studied are rapakivi granite from Loviisa, southern Finland. The drill-core sample was sectioned perpendicularly with a diamond saw and three cylindrical samples were obtained. The nominal thicknesses (heights of the cylinders) are 2, 4 and 6 cm. For the diffusion measurement the sample holders were pressed between two chambers. One of the chambers was filled with 0.0044 molar sodium chloride solution spiked with tracers; the other chamber was filled with inactive solution. Tritium (HTO), considered a water-equivalent tracer, and anionic ³⁶Cl⁻ were used as tracers. Through-diffusion was monitored for about 1000 days, after which the diffusion cells were emptied and the sample holders dismantled. The samples were sectioned into 1 cm slices and the tracers were leached from the slices. The porosities of the slices were determined by the weighing method. The rock-capacity factors could be determined from the leaching results obtained. It was seen that the porosity values were in accordance with the rock-capacity factors obtained with HTO. An anion exclusion effect can be seen by comparing the results obtained with HTO and ³⁶Cl⁻. The concentration profile through even the thickest sample had reached a constant slope and the rate of diffusion was practically at a steady state. An anion exclusion effect was also seen in the effective diffusion coefficients. The effect of thickness on diffusion shows that the connectivity of the pores decreases in the thickness range 2-4 cm studied. The decrease, as reflected in the diffusion coefficient, was not dramatic, and it can be said that, especially for studying chemical interactions during diffusion, a thickness of 2 cm is adequate. (orig.) (12 refs.)

  13. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluation of whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images.
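
    The evaluation protocol described in this survey (scale the image down, rescale it to the original size with the same algorithm, compare with the original) can be sketched as follows. This is a generic illustration using scipy's spline-based zoom and PSNR as the comparison metric; it is not the authors' code, and the test image is a placeholder.

```python
# Sketch of the round-trip evaluation protocol (not the authors' code):
# scale an image down and back up with the same interpolation order and
# compare against the original with PSNR. Uses scipy.ndimage.zoom.
import numpy as np
from scipy import ndimage

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)

def round_trip(image, factor=0.25, order=3):
    """Downscale by `factor`, then rescale to the original shape, same spline order."""
    small = ndimage.zoom(image, factor, order=order)
    back = ndimage.zoom(small, np.array(image.shape) / np.array(small.shape),
                        order=order)
    return back[:image.shape[0], :image.shape[1]]

image = np.random.rand(256, 256) * 255           # placeholder test image
for order, name in [(0, "nearest"), (1, "bilinear"), (3, "bicubic spline")]:
    print(name, f"{psnr(image, round_trip(image, order=order)):.2f} dB")
```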

  14. The virtual slice setup.

    Science.gov (United States)

    Lytton, William W; Neymotin, Samuel A; Hines, Michael L

    2008-06-30

    In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time; therefore, it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook showing shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, as alternatives to typical instantaneous parameter changes. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.

  15. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
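
    A hedged sketch of the core idea, thin-plate-spline interpolation of illuminant chromaticity over a nonuniformly sampled training set, is shown below using scipy's RBFInterpolator (SciPy 1.7 or later assumed). The two-dimensional feature vectors stand in for the paper's image thumbnails and are randomly generated here.

```python
# Hedged sketch of thin-plate-spline interpolation over a training set:
# each training image is reduced to a small feature vector (here a 2-D
# placeholder for the paper's thumbnails) and mapped to its known
# illuminant chromaticity.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
train_features = rng.random((50, 2))               # placeholder image descriptors
train_illum_rg = rng.random((50, 2)) * 0.3 + 0.2   # known (r, g) chromaticities

tps = RBFInterpolator(train_features, train_illum_rg,
                      kernel='thin_plate_spline', smoothing=0.0)

query = rng.random((1, 2))                         # descriptor of a new image
estimated_rg = tps(query)[0]
print("Estimated illuminant chromaticity (r, g):", estimated_rg)
```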

  16. Induction of Gynogenesis through Multi-Ovule Slice Culture and Ovary Slice Culture of Dianthus chinensis

    Directory of Open Access Journals (Sweden)

    Suskandari Kartikaningrum

    2013-10-01

    Callus induction was studied in five genotypes of Dianthus chinensis using 2,4-D and NAA. Calluses can be obtained from unfertilized ovule culture and ovary culture. The aim of the research was to study the gynogenic potential and response of Dianthus chinensis through ovule slice and ovary slice culture for obtaining haploid plants. Five genotypes of Dianthus chinensis and five media were used in ovule slice culture, and two genotypes and three media were used in ovary culture. Flower buds in the 7th stage were incubated as a dark pre-treatment at 4 °C for one day. Ovules and ovaries were isolated and cultured in induction medium. Cultures were incubated as a dark pre-treatment at 4 °C for seven days, followed by light incubation at 25 °C. The results showed that 2,4-D was better than NAA in inducing callus. Regenerated calluses were produced in the V11, V13 and V15 genotypes in M7 medium (MS + 2 mg L⁻¹ 2,4-D + 1 mg L⁻¹ BAP + 30 g L⁻¹ sucrose) and M10 medium (MS + 1 mg L⁻¹ 2,4-D + 1 mg L⁻¹ BAP + 20 g L⁻¹ sucrose). All calluses originating from ovule and ovary cultures flowered prematurely. Doubled haploids (V11-34) were obtained from ovule slice culture based on PER (peroxidase) and EST (esterase) isozyme markers. Keywords: ovule slice culture, ovary slice culture, callus, Dianthus sp., haploid

  17. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and

  18. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    This paper discusses the construction of new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for the positivity are derived on one parameter γi while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This will enable the user to produce many varieties of the positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of the positive data. Notably our scheme is easy to use and does not require knots insertion and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes also have been done in detail. From all presented numerical results the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis when the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.
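
    The specific C2 rational cubic scheme is not reproduced here, but the motivating problem, namely ordinary cubic splines undershooting positive data, can be illustrated with a widely available shape-preserving alternative (PCHIP, which is only C1). The data values below are made up.

```python
# Not the paper's C2 rational cubic spline. A sketch comparing an ordinary
# cubic spline with a shape-preserving interpolant (PCHIP, C1) on positive,
# monotonically decreasing data, to illustrate the undershoot problem that
# shape-preserving rational schemes address.
import numpy as np
from scipy.interpolate import CubicSpline, PchipInterpolator

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
f = np.array([10.0, 8.0, 0.5, 0.2, 0.15, 0.1])    # positive, rapidly decaying data

dense = np.linspace(t[0], t[-1], 200)
spline = CubicSpline(t, f)(dense)                  # can oscillate and undershoot
pchip = PchipInterpolator(t, f)(dense)             # stays within the data range

print("cubic spline minimum:", spline.min())
print("PCHIP minimum:       ", pchip.min())
```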

  19. Permanently calibrated interpolating time counter

    International Nuclear Information System (INIS)

    Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

    2015-01-01

    We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)

  20. Transfinite C2 interpolant over triangles

    International Nuclear Information System (INIS)

    Alfeld, P.; Barnhill, R.E.

    1984-01-01

    A transfinite C² interpolant on a general triangle is created. The required data are essentially C², no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C² interpolants in an appendix.

  1. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

    Directory of Open Access Journals (Sweden)

    Qiang DU

    2018-04-01

    Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, which is based on Shannon sampling and utilizes Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with Caratheodory representation. The samples of the differential signal are obtained by Caratheodory representation from the samples of the original signal only, so only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.

  2. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's methods and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.

  3. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and learn a linear regression model for each subspace to map a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic-interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.

  4. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    Science.gov (United States)

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolating (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) the difference between interpolation factors of 2, 4, and 8 is nominal (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
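
    The first recommendation (filter prior to interpolating) can be sketched generically as follows; the Gaussian low-pass filter, the spline orders and the synthetic 16 x 16 pressure map are assumptions for illustration, not the study's actual processing chain.

```python
# Sketch of the recommended processing order for a seating-pressure array
# (low-pass filter first, then interpolate), using a Gaussian filter and
# spline interpolation from scipy. The pressure map below is synthetic.
import numpy as np
from scipy import ndimage

pressure = np.random.rand(16, 16) * 200.0   # placeholder 16 x 16 sensor array, mmHg

def upsample(pmap, factor=4, order=3, presmooth_sigma=1.0):
    """Low-pass filter, then interpolate to `factor` times the sampling density.

    order=1 gives bilinear interpolation, order=3 bicubic-spline interpolation.
    """
    smoothed = ndimage.gaussian_filter(pmap, sigma=presmooth_sigma)
    return ndimage.zoom(smoothed, factor, order=order)

dense_bilinear = upsample(pressure, factor=4, order=1)
dense_bicubic = upsample(pressure, factor=4, order=3)
print(dense_bilinear.shape, dense_bicubic.shape)   # (64, 64) each
```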

  5. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With aims at supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed in this paper. It can save 19.7 % processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused data path of interpolation, an efficient memory organization, and a reconfigurable pipeline interpolation filter engine, is presented to reduce the implement hardware area and achieve high throughput. The final VLSI implementation only requires 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel interpolation or quarter-pixel interpolation, which can reduce the area cost for about 131,040 bits RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.

  6. Spatial interpolation and simulation of post-burn duff thickness after prescribed fire

    Science.gov (United States)

    Peter R. Robichaud; S. M. Miller

    1999-01-01

    Prescribed fire is used as a site treatment after timber harvesting. These fires result in spatial patterns with some portions consuming all of the forest floor material (duff) and others consuming little. Prior to the burn, spatial sampling of duff thickness and duff water content can be used to generate geostatistical spatial simulations of these characteristics....

  7. A new look at the fetus: Thick-slab T2-weighted sequences in fetal MRI

    International Nuclear Information System (INIS)

    Brugger, Peter C.; Mittermayer, Christoph; Prayer, Daniela

    2006-01-01

    Although magnetic resonance imaging (MRI) of the fetus is considered an established adjunct to fetal ultrasound, stacks of images alone cannot provide an overall impression of the fetus. The present study evaluates the use of thick-slab T2-weighted MR images to obtain a three-dimensional impression of the fetus using MRI. A thick-slab T2-weighted sequence was added to the routine protocol in 100 fetal MRIs obtained for various indications (19th to 37th gestational weeks) on a 1.5 T magnet using a five-element phased-array surface coil. Slice thickness adapted to fetal size and uterine geometry varied between 25 and 50 mm, as did the field of view (250-350 mm). Acquisition of one image took less than 1 s. The pictorial essay shows that these images visualize fetal anatomy in a more comprehensive way than is possible with a series of 3-4 mm thick slices. These thick-slab images facilitate the assessment of the whole fetus, fetal proportions, surface structures, and extremities. Fetal pathology may be captured in one image. Thick-slab T2-weighted images provide additional information that cannot be gathered from a series of images and are considered a valuable adjunct to conventional 2D MR images

  8. A new look at the fetus: Thick-slab T2-weighted sequences in fetal MRI

    Energy Technology Data Exchange (ETDEWEB)

    Brugger, Peter C. [Center of Anatomy and Cell Biology, Integrative Morphology Group, Medical University of Vienna, Vienna (Austria)]. E-mail: peter.brugger@meduniwien.ac.at; Mittermayer, Christoph [Department of Neonatology and Intensive Care, University Hospital of Vienna (Austria); Prayer, Daniela [Department of Neuroradiology, University Clinics of Radiodiagnostics, Medical University of Vienna, Vienna (Austria)

    2006-02-15

    Although magnetic resonance imaging (MRI) of the fetus is considered an established adjunct to fetal ultrasound, stacks of images alone cannot provide an overall impression of the fetus. The present study evaluates the use of thick-slab T2-weighted MR images to obtain a three-dimensional impression of the fetus using MRI. A thick-slab T2-weighted sequence was added to the routine protocol in 100 fetal MRIs obtained for various indications (19th to 37th gestational weeks) on a 1.5 T magnet using a five-element phased-array surface coil. Slice thickness adapted to fetal size and uterine geometry varied between 25 and 50 mm, as did the field of view (250-350 mm). Acquisition of one image took less than 1 s. The pictorial essay shows that these images visualize fetal anatomy in a more comprehensive way than is possible with a series of 3-4 mm thick slices. These thick-slab images facilitate the assessment of the whole fetus, fetal proportions, surface structures, and extremities. Fetal pathology may be captured in one image. Thick-slab T2-weighted images provide additional information that cannot be gathered from a series of images and are considered a valuable adjunct to conventional 2D MR images.

  9. Microtome Sliced Block Copolymers and Nanoporous Polymers as Masks for Nanolithography

    DEFF Research Database (Denmark)

    Shvets, Violetta; Schulte, Lars; Ndoni, Sokol

    2014-01-01

    Introduction. The self-assembling properties of block copolymers are commonly used for the creation of very fine nanostructures [1]. The goal of our project is to test new methods of block-copolymer lithography mask preparation: macroscopic pieces of block copolymers or nanoporous polymers with cross...... PDMS can be chemically etched from the PB matrix by tetrabutylammonium fluoride in tetrahydrofuran, and a macroscopic nanoporous PB piece is obtained. Both the block-copolymer piece and the nanoporous polymer piece were sliced with a cryomicrotome perpendicular to the axis of cylinder alignment, and flakes...... of etching patterns appear only under certain parts of the thick flakes and are not continuous. Although flakes from the block copolymer are thinner and more uniform in thickness than flakes from the nanoporous polymer, the quality of patterns under the nanoporous flakes appeared to be better than under the block copolymer...

  10. Plastination of whole-body slices: a new aid in cross-sectional anatomy, demonstrated for thoracic organs in dogs

    International Nuclear Information System (INIS)

    Polgar, M.; Probst, A.; Koenig, H.E.; Sora, M.-C.

    2003-01-01

    Plastic-embedded, transparent, serially sectioned slices from the canine thorax were compared with cross-sections made with the commonly used technique and with computed tomograms. Three Beagles, at the age of seven months, were cut into 4 mm thick slices and plastinated with the epoxy resin Biodur E12. The area of the thorax was examined macroscopically and scrutinized closely. Survey and magnification photographs were taken. Compared with the commonly used prepared sections, the E12 slices proved to be transparent, hard, dry, odourless and resistant, and they show unlimited durability. Good color maintenance of the specimens makes differentiation of the organs easy. The course of the blood vessels, nerves and other structures of the thoracic cavity can be followed from section to section. The colorful images help to interpret CT and MRI and provide good learning aids for clinicians and students. (author)

  11. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.

  12. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
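
    As a loose illustration of Gaussian-process-regression-based interpolation (the statistical half of the algorithm only; the energy-driven part is not modeled here), a generic GP can be fitted to the pixel grid of a small patch and evaluated on a finer grid. scikit-learn is assumed, and the patch is random.

```python
# Generic sketch of GPR-based interpolation (not the paper's energy-driven
# algorithm): fit a Gaussian process to the pixel grid of a small patch and
# predict intensities on a 2x finer grid. Requires scikit-learn.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

patch = np.random.rand(16, 16)                     # placeholder low-resolution patch

ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
coords = np.column_stack([ys.ravel(), xs.ravel()]).astype(float)
values = patch.ravel()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.5) + WhiteKernel(1e-4),
                               normalize_y=True)
gpr.fit(coords, values)

# Evaluate on a grid with half-pixel spacing (roughly 2x denser sampling)
fine_ys, fine_xs = np.mgrid[0:patch.shape[0]-1:0.5, 0:patch.shape[1]-1:0.5]
fine_coords = np.column_stack([fine_ys.ravel(), fine_xs.ravel()])
upscaled = gpr.predict(fine_coords).reshape(fine_ys.shape)
print(patch.shape, "->", upscaled.shape)
```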

  13. Can multi-slice or navigator-gated R2* MRI replace single-slice breath-hold acquisition for hepatic iron quantification?

    International Nuclear Information System (INIS)

    Loeffler, Ralf B.; McCarville, M.B.; Song, Ruitian; Hillenbrand, Claudia M.; Wagstaff, Anne W.; Smeltzer, Matthew P.; Krafft, Axel J.; Hankins, Jane S.

    2017-01-01

    Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC) as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. To evaluate if the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, by testing if there is a bias-free correlation between single-slice R2* and multi-slice or multi-slice navigator-controlled R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analysis were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other and had high correlation coefficients (0.98 ≤ r ≤ 1.00) with P-values ≤0.0001. Linear regression yielded slopes that were close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. We conclude that these three R2* methods can be

  14. Can multi-slice or navigator-gated R2* MRI replace single-slice breath-hold acquisition for hepatic iron quantification?

    Energy Technology Data Exchange (ETDEWEB)

    Loeffler, Ralf B.; McCarville, M.B.; Song, Ruitian; Hillenbrand, Claudia M. [St. Jude Children' s Research Hospital, Diagnostic Imaging, Memphis, TN (United States); Wagstaff, Anne W. [St. Jude Children' s Research Hospital, Diagnostic Imaging, Memphis, TN (United States); Rhodes College, Memphis, TN (United States); University of Alabama at Birmingham School of Medicine, Birmingham, AL (United States); Smeltzer, Matthew P. [St. Jude Children' s Research Hospital, Department of Biostatistics, Memphis, TN (United States); University of Memphis, Division of Epidemiology, Biostatistics, and Environmental Health, School of Public Health, Memphis, TN (United States); Krafft, Axel J. [St. Jude Children' s Research Hospital, Diagnostic Imaging, Memphis, TN (United States); University Hospital Center Freiburg, Department of Radiology, Freiburg (Germany); Hankins, Jane S. [St. Jude Children' s Research Hospital, Department of Hematology, Memphis, TN (United States)

    2017-01-15

    Liver R2* values calculated from multi-gradient echo (mGRE) magnetic resonance images (MRI) are strongly correlated with hepatic iron concentration (HIC) as shown in several independently derived biopsy calibration studies. These calibrations were established for axial single-slice breath-hold imaging at the location of the portal vein. Scanning in multi-slice mode makes the exam more efficient, since whole-liver coverage can be achieved with two breath-holds and the optimal slice can be selected afterward. Navigator echoes remove the need for breath-holds and allow use in sedated patients. To evaluate if the existing biopsy calibrations can be applied to multi-slice and navigator-controlled mGRE imaging in children with hepatic iron overload, by testing if there is a bias-free correlation between single-slice R2* and multi-slice or multi-slice navigator-controlled R2*. This study included MRI data from 71 patients with transfusional iron overload, who received an MRI exam to estimate HIC using gradient echo sequences. Patient scans contained 2 or 3 of the following imaging methods used for analysis: single-slice images (n = 71), multi-slice images (n = 69) and navigator-controlled images (n = 17). Small and large blood-corrected regions of interest were selected on axial images of the liver to obtain R2* values for all data sets. Bland-Altman and linear regression analysis were used to compare R2* values from single-slice images to those of multi-slice images and navigator-controlled images. Bland-Altman analysis showed that all imaging method comparisons were strongly associated with each other and had high correlation coefficients (0.98 ≤ r ≤ 1.00) with P-values ≤0.0001. Linear regression yielded slopes that were close to 1. We found that navigator-gated or breath-held multi-slice R2* MRI for HIC determination measures R2* values comparable to the biopsy-validated single-slice, single breath-hold scan. We conclude that these three R2* methods can be
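
    The agreement analysis named in the two records above (Bland-Altman bias and limits of agreement plus a linear fit between paired R2* measurements) can be sketched as follows, with synthetic paired values standing in for patient data.

```python
# Sketch of the paired-method comparison described above (Bland-Altman bias
# and limits of agreement, plus a linear fit) for single-slice vs multi-slice
# R2* values. The values below are synthetic, not patient data.
import numpy as np

rng = np.random.default_rng(1)
r2star_single = rng.uniform(30, 800, size=70)               # s^-1
r2star_multi = r2star_single * 1.02 + rng.normal(0, 15, 70)

diff = r2star_multi - r2star_single
mean_vals = (r2star_multi + r2star_single) / 2.0            # x-axis of a Bland-Altman plot
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                               # limits of agreement
slope, intercept = np.polyfit(r2star_single, r2star_multi, 1)
r = np.corrcoef(r2star_single, r2star_multi)[0, 1]

print(f"bias = {bias:.1f} s^-1, limits of agreement = +/- {loa:.1f} s^-1")
print(f"linear fit: slope = {slope:.3f}, intercept = {intercept:.1f}, r = {r:.3f}")
```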

  15. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed

  16. Comparison of 640-Slice Multidetector Computed Tomography Versus 32-Slice MDCT for Imaging of the Osteo-odonto-keratoprosthesis Lamina.

    Science.gov (United States)

    Norris, Joseph M; Kishikova, Lyudmila; Avadhanam, Venkata S; Koumellis, Panos; Francis, Ian S; Liu, Christopher S C

    2015-08-01

    To investigate the efficacy of 640-slice multidetector computed tomography (MDCT) for detecting osteo-odonto laminar resorption in the osteo-odonto-keratoprosthesis (OOKP) compared with the current standard 32-slice MDCT. Explanted OOKP laminae and bone-dentine fragments were scanned using 640-slice MDCT (Aquilion ONE; Toshiba) and 32-slice MDCT (LightSpeed Pro32; GE Healthcare). Pertinent comparisons including image quality, radiation dose, and scanning parameters were made. Benefits of 640-slice MDCT over 32-slice MDCT were shown. Key comparisons of 640-slice MDCT versus 32-slice MDCT included the following: percentage difference and correlation coefficient between radiological and anatomical measurements, 1.35% versus 3.67% and 0.9961 versus 0.9882, respectively; dose-length product, 63.50 versus 70.26; rotation time, 0.175 seconds versus 1.000 seconds; and detector coverage width, 16 cm versus 2 cm. Resorption of the osteo-odonto lamina after OOKP surgery can result in potentially sight-threatening complications, hence it warrants regular monitoring and timely intervention. MDCT remains the gold standard for radiological assessment of laminar resorption, which facilitates detection of subtle laminar changes earlier than the onset of clinical signs, thus indicating when preemptive measures can be taken. The 640-slice MDCT exhibits several advantages over traditional 32-slice MDCT. However, such benefits may not offset cost implications, except in rare cases, such as in young patients who might undergo years of radiation exposure.

  17. Subsurface temperature maps in French sedimentary basins: new data compilation and interpolation

    International Nuclear Information System (INIS)

    Bonte, D.; Guillou-Frottier, L.; Garibaldi, C.; Bourgine, B.; Lopez, S.; Bouchot, V.; Garibaldi, C.; Lucazeau, F.

    2010-01-01

    Assessment of the underground geothermal potential requires the knowledge of deep temperatures (1-5 km). Here, we present new temperature maps obtained from oil boreholes in the French sedimentary basins. Because of their origin, the data need to be corrected, and their local character necessitates spatial interpolation. Previous maps were obtained in the 1970's using empirical corrections and manual interpolation. In this study, we update the number of measurements by using values collected during the last thirty years, correct the temperatures for transient perturbations and carry out statistical analyses before modelling the 3D distribution of temperatures. This dataset provides 977 temperatures corrected for transient perturbations in 593 boreholes located in the French sedimentary basins. An average temperature gradient of 30.6 deg. C/km is obtained for a representative surface temperature of 10 deg. C. When surface temperature is not accounted for, deep measurements are best fitted with a temperature gradient of 25.7 deg. C/km. We perform a geostatistical analysis on a residual temperature dataset (using a drift of 25.7 deg. C/km) to constrain the 3D interpolation kriging procedure with horizontal and vertical models of variograms. The interpolated residual temperatures are added to the country-scale averaged drift in order to get a three dimensional thermal structure of the French sedimentary basins. The 3D thermal block enables us to extract isothermal surfaces and 2D sections (iso-depth maps and iso-longitude cross-sections). A number of anomalies with a limited depth and spatial extension have been identified, from shallow in the Rhine graben and Aquitanian basin, to deep in the Provence basin. Some of these anomalies (Paris basin, Alsace, south of the Provence basin) may be partly related to thick insulating sediments, while for some others (southwestern Aquitanian basin, part of the Provence basin) large-scale fluid circulation may explain superimposed

  18. Radiation exposure in multi-slice versus single-slice spiral CT: results of a nationwide survey

    International Nuclear Information System (INIS)

    Brix, G.; Nagel, H.D.; Stamm, G.; Veit, R.; Lechel, U.; Griebel, J.; Galanski, M.

    2003-01-01

    Multi-slice (MS) technology increases the efficacy of CT procedures and offers new promising applications. The expanding use of MSCT, however, may result in an increase in both frequency of procedures and levels of patient exposure. It was, therefore, the aim of this study to gain an overview of MSCT examinations conducted in Germany in 2001. All MSCT facilities were requested to provide information about 14 standard examinations with respect to scan parameters and frequency. Based on this data, dosimetric quantities were estimated using an experimentally validated formalism. Results are compared with those of a previous survey for single-slice (SS) spiral CT scanners. According to the data provided for 39 dual- and 73 quad-slice systems, the average annual number of patients examined at MSCT is markedly higher than that examined at SSCT scanners (5500 vs 3500). The average effective dose to patients was changed from 7.4 mSv at single-slice to 5.5 mSv and 8.1 mSv at dual- and quad-slice scanners, respectively. There is a considerable potential for dose reduction at quad-slice systems by an optimisation of scan protocols and better education of the personnel. To avoid an increase in the collective effective dose from CT procedures, a clear medical justification is required in each case. (orig.)

  19. Calculation of the Scattered Radiation Profile in 64 Slice CT Scanners Using Experimental Measurement

    Directory of Open Access Journals (Sweden)

    Afshin Akbarzadeh

    2009-06-01

    Introduction: One of the most important parameters in x-ray CT imaging is the noise induced by detected scattered radiation. The detected scattered radiation is completely dependent on the scanner geometry as well as the size, shape and material of the scanned object. The magnitude and spatial distribution of the scattered radiation in x-ray CT should be quantified for the development of robust scatter correction techniques. Empirical methods based on blocking the primary photons in a small region are not able to extract scatter in all elements of the detector array, while the scatter profile is required for a scatter correction procedure. In this study, we measured scatter profiles in 64-slice CT scanners using a new experimental measurement. Material and Methods: To measure the scatter profile, a lead block array was inserted under the collimator and the phantom was exposed at the isocenter. The raw data file, which contained the detector array readouts, was transferred to a PC and was read using a dedicated GUI running under MatLab 7.5. The scatter profile was extracted by interpolating the shadowed area. Results: The scatter and SPR profiles were measured. Increasing the tube voltage from 80 to 140 kVp resulted in an 80% fall-off in SPR for a water phantom (d = 210 mm) and 86% for a polypropylene phantom (d = 350 mm). Increasing the air gap to 20.9 cm caused a 30% decrease in SPR. Conclusion: In this study, we presented a novel approach for measurement of the scattered radiation distribution and SPR in a CT scanner with 64-slice capability using a lead block array. The method can also be used on other multi-slice CT scanners. The proposed technique can accurately estimate scatter profiles. It is relatively straightforward, easy to use, and can be used for any related measurement.
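
    The key processing step, interpolating the scatter-only readings recorded behind the lead blockers across the full detector array, can be sketched in one dimension as below. The signal model, channel count and blocker spacing are invented for illustration and do not reproduce the authors' measurement.

```python
# Hedged sketch of recovering a full scatter profile from a lead-blocker
# measurement: channels behind the blockers see scatter only, and the profile
# over all detector channels is obtained by interpolating between them.
import numpy as np

n_channels = 672
channels = np.arange(n_channels)
primary = 900.0 * np.exp(-((channels - n_channels / 2) / 200.0) ** 2) + 60.0
scatter_true = 45.0 + 10.0 * np.exp(-((channels - n_channels / 2) / 300.0) ** 2)
total_signal = primary + scatter_true              # what the unblocked detector reads

blocked = channels[::32]                  # channels shadowed by the lead blockers
scatter_samples = scatter_true[blocked]   # behind a blocker only scatter is detected

# Recover the full scatter profile by interpolating between the shadowed channels
scatter_profile = np.interp(channels, blocked, scatter_samples)
spr = scatter_profile / (total_signal - scatter_profile)   # scatter-to-primary ratio
print(f"SPR at the central channel: {spr[n_channels // 2]:.3f}")
```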

  20. Bayer Demosaicking with Polynomial Interpolation.

    Science.gov (United States)

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process to reconstruct full-color digital images from the incomplete color samples delivered by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g. mobile phones, tablets). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM) and visual performance.

  1. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    Science.gov (United States)

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
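
    A hedged sketch of the comparison (nearest-neighbor versus trilinear resampling of a volume onto a finer rectilinear grid) is given below using scipy.ndimage.map_coordinates; this is not the authors' implementation, the volume is random and the grid spacing arbitrary.

```python
# Sketch (not the authors' code) comparing nearest-neighbour and trilinear
# resampling of a 3-D volume onto a finer rectilinear grid with
# scipy.ndimage.map_coordinates (spline order 0 vs 1).
import numpy as np
from scipy import ndimage

volume = np.random.rand(64, 64, 64).astype(np.float32)   # placeholder 3DUS volume

# Target grid: 0.5-voxel spacing in every direction (2x denser sampling)
grid = np.mgrid[0:63:0.5, 0:63:0.5, 0:63:0.5]
coords = grid.reshape(3, -1)

nearest = ndimage.map_coordinates(volume, coords, order=0).reshape(grid.shape[1:])
trilinear = ndimage.map_coordinates(volume, coords, order=1).reshape(grid.shape[1:])
print("resampled shape:", trilinear.shape,
      "mean abs difference:", np.abs(trilinear - nearest).mean().round(4))
```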

  2. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  3. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-01

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  4. Research of Cubic Bezier Curve NC Interpolation Signal Generator

    Directory of Open Access Journals (Sweden)

    Shijun Ji

    2014-08-01

    Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular-arc, linear, or parabolic interpolation. For numerical control (NC) machining of parts with complicated surfaces, a mathematical model must be established to generate the curve and surface outlines of the parts, and the generated outlines must then be discretized into a large number of straight-line or arc segments for processing; this creates complex programs and a large amount of code, and it inevitably introduces approximation error. All of these factors affect the machining accuracy, surface roughness, and machining efficiency. The stepless interpolation of a cubic Bezier curve controlled by an analog signal is studied in this paper. The tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed to the control motor, which completes the precise feeding of the Bezier curve. This method extends the trajectory-control capability of CNC from simple lines and circular arcs to complex engineered curves, and it provides a new way to machine curved-surface parts economically with high quality and high efficiency.
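
    For reference, the cubic Bezier tool path that such an interpolator feeds is just the Bernstein-form polynomial of four control points; a minimal evaluation sketch with made-up control points follows.

```python
# Minimal sketch: evaluating a cubic Bezier tool path B(t) from four control
# points using the Bernstein form. The control points are illustrative only.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Return points on the cubic Bezier curve for parameter values t in [0, 1]."""
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical control points of a planar tool path (mm)
p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [10.0, 25.0], [30.0, 25.0], [40.0, 0.0]))
path = cubic_bezier(p0, p1, p2, p3, np.linspace(0.0, 1.0, 11))
print(path[:3])            # first few interpolated tool positions
```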

  5. Use of 60Co gamma radiation to extend the shelf life of packaged sliced loaves

    International Nuclear Information System (INIS)

    Nazato, R.E.S.

    1991-11-01

    The evaluation of the conservation of sliced loaves (bread cut into slices), baked by five bakeries of Piracicaba, after gamma irradiation and kept in low-density polyethylene bags of 47.5 and 85 μm thickness, is presented. The sliced loaves were put into the bags and thermo-sealed by hand, as they are handled by the bakers. After this, they were irradiated with doses of 0.0, 2.0, 4.0, 6.0, 8.0 and 10.0 kGy of gamma radiation in a cobalt-60 irradiation chamber at a dose rate of 2.68 kGy per hour, at room temperature (28 °C). After irradiation the samples were kept at room temperature (26-34 °C) and humidity as similar as possible to the conditions of the markets, bakeries and shops where they were sold. The samples were evaluated every day, and any sample that showed signs of contamination was discarded as unfit for human consumption. (author)

  6. Interpolation for a subclass of H∞

    Indian Academy of Sciences (India)

    |g(z_m)| ≤ c |z_m − z*_m|, ∀m ∈ ℕ. Thus it is natural to pose the following interpolation problem for H^∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H^∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z*_n|, ∀n ∈ ℕ, (4) there exists a product fg ∈ H^∞ ...

  7. An integral conservative gridding-algorithm using Hermite curve interpolation.

    Science.gov (United States)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  8. An integral conservative gridding-algorithm using Hermitian curve interpolation

    International Nuclear Information System (INIS)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-01-01

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  9. Surface interpolation with radial basis functions for medical imaging

    International Nuclear Information System (INIS)

    Carr, J.C.; Beatson, R.K.; Fright, W.R.

    1997-01-01

    Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill
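
    A minimal sketch of the underlying idea, assuming a Gaussian kernel and a small data set; the fast evaluation techniques and the ray-traced depth maps of the paper are not reproduced, and all names are illustrative.

```python
# Minimal radial basis function interpolation of scattered depth samples,
# assuming a Gaussian kernel; not the paper's fast evaluation scheme.
import numpy as np

def rbf_fit(points, values, eps=3.0):
    """Solve for weights so the RBF surface passes through all samples."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.exp(-(eps * d) ** 2) + 1e-10 * np.eye(len(points))  # tiny ridge for stability
    return np.linalg.solve(A, values)

def rbf_eval(points, weights, query, eps=3.0):
    d = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ weights

# Interpolate a synthetic depth map across a circular "defect" region.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))
pts = pts[np.linalg.norm(pts, axis=1) > 0.3]           # samples inside the defect removed
depth = np.cos(pts[:, 0]) * np.sin(pts[:, 1])
w = rbf_fit(pts, depth)
print(rbf_eval(pts, w, np.array([[0.0, 0.0]])))        # smooth fill of the hole (~0)
```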

  10. Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers

    Directory of Open Access Journals (Sweden)

    E. George Walters III

    2015-11-01

    Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place of the product (ulp) error uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
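
    The sketch below is a software analogue only: it builds a table-based degree-1 (linear) interpolator for 1/x on [1, 2) with endpoint-matched segments and reports its worst-case error. The Chebyshev coefficient adjustment and the truncated-multiplier hardware of the paper are not modeled.

```python
# Software analogue of a table-based linear interpolator for 1/x on [1, 2):
# the interval is split into segments, each approximated by c0 + c1*dx.
import numpy as np

SEGMENTS = 64
edges = 1.0 + np.arange(SEGMENTS + 1) / SEGMENTS

# Endpoint-matched linear pieces (a Chebyshev fit would further reduce error).
c0 = 1.0 / edges[:-1]
c1 = (1.0 / edges[1:] - 1.0 / edges[:-1]) * SEGMENTS

def reciprocal(x):
    idx = np.minimum(((x - 1.0) * SEGMENTS).astype(int), SEGMENTS - 1)
    return c0[idx] + c1[idx] * (x - edges[idx])

x = np.linspace(1.0, 2.0, 100001, endpoint=False)
print(np.max(np.abs(reciprocal(x) - 1.0 / x)))   # worst-case approximation error
```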

  11. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but the method does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of traditional spectral quaternion interpolation. Firstly, we decomposed the diffusion tensors, with the direction of the tensors being represented by a quaternion. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor of the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve the tensor anisotropy at the same time. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.
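
    For orientation, the snippet below implements the Log-Euclidean interpolation used as a comparison baseline in the abstract, not the improved spectral quaternion method itself; SciPy is assumed to be available.

```python
# Log-Euclidean interpolation between two diffusion tensors, the baseline
# method the abstract compares against.
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_interp(D1, D2, t):
    """Interpolate two symmetric positive-definite tensors at weight t in [0, 1]."""
    return expm((1.0 - t) * logm(D1) + t * logm(D2))

D1 = np.diag([3.0, 1.0, 1.0])                   # strongly anisotropic tensor
D2 = np.diag([1.0, 1.0, 1.0])                   # isotropic tensor
print(log_euclidean_interp(D1, D2, 0.5))        # halfway tensor
```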

  12. Effects of chemical composite, puffing temperature and intermediate moisture content on physical properties of potato and apple slices

    Science.gov (United States)

    Tabtaing, S.; Paengkanya, S.; Tanthong, P.

    2017-09-01

    The puffing technique is a process that can improve the texture and volume of crisp fruit and vegetable products. However, the effect of the chemical composition of foods on puffing characteristics has received little study. Therefore, the physical properties of potato and apple slices were studied comparatively. Potato and apple were sliced to 2.5 mm thickness and 2.5 cm diameter. Potato slices were treated with hot water for 2 min while apple slices received no treatment. After that, they were dried in 3 steps. In the first step, they were dried by hot air at a temperature of 90°C until their moisture content reached 30, 40, or 50% dry basis. Then they were puffed by hot air at temperatures of 130, 150, and 170°C for 2 min. Finally, they were dried again by hot air at 90°C until their final moisture content reached 4% dry basis. The experimental results showed that the chemical composition of the food affected the physical properties of the puffed product. Puffed potato had a higher volume ratio than puffed apple because potato slices contain starch; the higher starch content also gave potato a harder texture than apple. Puffing temperature and moisture content strongly affected the color, volume ratio, and textural properties of puffed potato slices. In addition, a high drying rate of the puffed product was observed at high puffing temperature and higher moisture content.

  13. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

  14. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

    Full Text Available Abstract Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work (New approach to q-Euler, Genocchi numbers and their interpolation functions, "Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009"), Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  15. Steady State Stokes Flow Interpolation for Fluid Control

    DEFF Research Database (Denmark)

    Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

    2012-01-01

    Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow — suffer from a common problem. They fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...

  16. Quadratic Interpolation and Linear Lifting Design

    Directory of Open Access Journals (Sweden)

    Joel Solé

    2007-03-01

    Full Text Available A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.

  17. Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures.

    Science.gov (United States)

    Miner, Daniel C; Triesch, Jochen

    2014-01-01

    The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.

  18. Trace interpolation by slant-stack migration

    International Nuclear Information System (INIS)

    Novotny, M.

    1990-01-01

    The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip reject filtration. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period

  19. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  20. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections and a weighted overlap of signals transformed step-by-step into the spectral domain improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  1. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  2. Extension Of Lagrange Interpolation

    Directory of Open Access Journals (Sweden)

    Mousa Makey Krady

    2015-01-01

    Full Text Available Abstract This paper presents a generalization of Lagrange interpolation polynomials to higher dimensions by using Cramer's formula. The aim is to construct polynomials in space whose error tends to zero.
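
    For reference, the classical univariate Lagrange interpolation that the paper generalizes can be written as in the textbook sketch below; this is not the authors' higher-dimensional construction.

```python
# Classical univariate Lagrange interpolation, shown only as the reference
# point that the paper generalizes to higher dimensions.
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Quadratic through three samples of f(x) = x^2 reproduces the function.
print(lagrange_eval([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))   # 2.25
```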

  3. Interpolation and sampling in spaces of analytic functions

    CERN Document Server

    Seip, Kristian

    2004-01-01

    The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

  4. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

    International Nuclear Information System (INIS)

    Fan, J Y; Wang, F G; W, Y; Zhang, Y L

    2006-01-01

    Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data: one is interpolation of the data on the stripes, and the other is interpolation of the data between the stripes. A trend interpolation method is applied to the data on the stripes, and a Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove the regional interpolation method feasible and practical, and show that it also improves speed and precision

  5. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    Science.gov (United States)

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both of these methods yield some artifacts on the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results of the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
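
    A minimal one-dimensional sketch of the partial-volume idea discussed above: each transformed sample spreads fractional weights into the joint histogram bins of its neighbouring pixels instead of creating a new interpolated intensity. The variable names and the 1-D setting are illustrative simplifications, not the paper's proposed scheme.

```python
# One-dimensional sketch of partial-volume (PV) joint-histogram accumulation:
# each transformed sample spreads fractional weights to the bins of its two
# neighbouring floating-signal samples instead of interpolating a new intensity.
import numpy as np

def pv_joint_histogram(ref, flt, shift, bins=32):
    """Joint histogram of ref and flt shifted by a sub-sample amount."""
    hist = np.zeros((bins, bins))
    r_bins = np.minimum((ref * bins).astype(int), bins - 1)
    for i, rb in enumerate(r_bins):
        x = i + shift                              # transformed coordinate
        x0 = int(np.floor(x))
        w1 = x - x0                                # linear (bilinear in 2-D) weights
        for xi, w in ((x0, 1.0 - w1), (x0 + 1, w1)):
            if 0 <= xi < len(flt):
                fb = min(int(flt[xi] * bins), bins - 1)
                hist[rb, fb] += w                  # spread the weight, no new intensity
    return hist / hist.sum()

ref = np.random.default_rng(1).uniform(size=256)
print(pv_joint_histogram(ref, ref, shift=0.3).shape)   # (32, 32) joint histogram
```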

  6. Effect of interpolation on parameters extracted from seating interface pressure arrays

    OpenAIRE

    Michael Wininger, PhD; Barbara Crane, PhD, PT

    2015-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...

  7. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
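
    A small NumPy sketch of the zero-fill baseline discussed above (the repetitive-convolution technique itself is not reproduced): padding the record with zeros before the FFT places spectral samples on a finer frequency grid.

```python
# Zero-fill spectral interpolation: padding the record with zeros before the
# FFT places spectral samples on a finer frequency grid.
import numpy as np

fs = 100.0                                    # assumed sampling rate, Hz
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 12.2 * t)              # tone between coarse FFT bin centres

coarse = np.abs(np.fft.rfft(x))               # bin spacing fs/256 ~ 0.39 Hz
fine = np.abs(np.fft.rfft(x, n=4 * len(x)))   # zero fill: spacing fs/1024 ~ 0.10 Hz

print(np.argmax(coarse) * fs / 256)           # ~12.1 Hz, nearest coarse bin
print(np.argmax(fine) * fs / 1024)            # ~12.2 Hz after zero-fill interpolation
```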

  8. Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures

    Directory of Open Access Journals (Sweden)

    Daniel Carl Miner

    2014-11-01

    Full Text Available The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.

  9. A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Komareji, Mohammad

    2009-01-01

    This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...

  10. Influence of 60Co γ irradiation pre-treatment on characteristics of hot air drying sweet potato slices

    International Nuclear Information System (INIS)

    Jiang Ning; Liu Chunquan; Li Dajing; Liu Xia; Yan Qimei

    2012-01-01

    The influences of irradiation, hot air temperature and slice thickness on the dehydration characteristics and surface temperature of 60 Co γ-ray irradiated sweet potato were investigated. Microscopic observation and determination of the water activity of the irradiated sweet potato were also conducted. The results show that the drying rate and the surface temperature rose with increasing irradiation dose. When the dry basis moisture content was 150%, the drying rates of the samples were 1.92, 1.97, 2.05, 2.28 and 3.12%/min and the surface temperatures were 48.5 ℃, 46.3 ℃, 44.5 ℃, 42.2 ℃ and 41.5 ℃ at irradiation doses of 0, 2, 5, 8 and 10 kGy, respectively. With higher air temperature and thinner sweet potato slices, the dehydration of the irradiated slices was faster: drying at 85 ℃ was completed 170 min faster than at 65 ℃, and drying of 3 mm slices was completed 228 min faster than that of 7 mm slices. The cell walls and vacuoles of the sweet potato slices were broken after irradiation, and the water activity increased with increasing irradiation dose; the water activities of the irradiated samples were 0.92, 0.945, 0.958, 0.969 and 0.979 at irradiation doses of 0, 2, 5, 8 and 10 kGy, respectively. The hot air drying rate, surface temperature and water activity of sweet potato are significantly affected by irradiation. These conclusions provide a theoretical foundation for further processing technology combining irradiation and hot air drying of sweet potato. (authors)

  11. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to elaborate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.

  12. Radiation sterilization and identification of gizzard slices

    International Nuclear Information System (INIS)

    Zhu, S.; Fu, C.; Jiang, W.; Yao, D.; Zhao, K.; Zhang, Y.

    1998-01-01

    An orthogonal test of 4 factors (radiation dose, storage temperature, storage time, and sanitation of the cutting places) was carried out to optimize the conditions for disinfection of gizzard slices. Under the optimized conditions, both the sanitary quality and the shelf-life of gizzard slices were improved. To identify irradiated gizzard slices, the sensory change and the levels of water-soluble nitrogen, amino acid, total volatile basic nitrogen, peroxide value, vitamin C consumption and KMnO4 consumption were determined. No significant change was observed except for the color, which was light brown on the surface of irradiated slices

  13. Slice hyperholomorphic Schur analysis

    CERN Document Server

    Alpay, Daniel; Sabadini, Irene

    2016-01-01

    This book defines and examines the counterpart of Schur functions and Schur analysis in the slice hyperholomorphic setting. It is organized into three parts: the first introduces readers to classical Schur analysis, while the second offers background material on quaternions, slice hyperholomorphic functions, and quaternionic functional analysis. The third part represents the core of the book and explores quaternionic Schur analysis and its various applications. The book includes previously unpublished results and provides the basis for new directions of research.

  14. New families of interpolating type IIB backgrounds

    Science.gov (United States)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T^2 fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N=2 or N=1 supersymmetry. In the N=2 case it can be shown that the solutions are regular.

  15. Quadratic polynomial interpolation on triangular domain

    Science.gov (United States)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information that lies in the data points. Therefore, a new method for constructing a polynomial interpolation surface on a triangular domain is proposed. Firstly, the spatial scattered data points are projected onto a plane and then triangulated. Secondly, a C1 continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close to the linear interpolant as possible. Lastly, the unknown quantities are obtained by minimizing the objective functions, and the boundary points are treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying certain accuracy and continuity requirements, without being overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and so on. The result of the new surface is given in experiments.

  16. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We consider the image interpolation problem where, given an image with uniformly-sampled pixels v_{m,n} and point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

  17. Single-Image Super-Resolution Based on Rational Fractal Interpolation.

    Science.gov (United States)

    Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming

    2018-08-01

    This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.

  18. Application of ordinary kriging for interpolation of micro-structured technical surfaces

    International Nuclear Information System (INIS)

    Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg

    2013-01-01

    Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of different effects. These data points need to be interpolated to obtain a complete area in order to fulfil further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model that suggests providing an unbiased, linear estimation with a minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface. This comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adopts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
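
    A minimal ordinary kriging estimate at a single query point, assuming an isotropic exponential variogram; the variogram model selection and anisotropy handling described in the paper are not reproduced, and all names are illustrative.

```python
# Minimal ordinary kriging estimate at one query point, assuming an isotropic
# exponential variogram.
import numpy as np

def variogram(h, sill=1.0, rng_=0.5):
    return sill * (1.0 - np.exp(-h / rng_))

def ordinary_kriging(points, values, query):
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                                   # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(points - query, axis=-1))
    weights = np.linalg.solve(A, b)[:n]
    return weights @ values                          # unbiased: weights sum to 1

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
print(ordinary_kriging(pts, vals, np.array([0.5, 0.5])))   # 1.0 by symmetry
```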

  19. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non...... to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super...... interpolation increases the estimation precision....

  20. Wideband DOA Estimation through Projection Matrix Interpolation

    OpenAIRE

    Selva, J.

    2017-01-01

    This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

  1. A COMPARATIVE STUDY OF TYMPANOPLASTY USING SLICED CARTILAGE GRAFT VS. TEMPORALIS FASCIA GRAFT

    Directory of Open Access Journals (Sweden)

    Rahul Ashok Telang

    2018-02-01

    Full Text Available BACKGROUND The objective of the study was to compare the hearing improvement after using a sliced cartilage graft with that of temporalis fascia, and to compare the graft take-up between the two graft materials. MATERIALS AND METHODS A prospective clinical study included 60 patients with chronic mucosal otitis media, who were selected randomly from the outpatient department and, after obtaining their consent, were divided into 2 groups of 30 each and evaluated according to the study protocol. Their pre-operative audiometry was recorded and both groups of patients underwent surgery with one of the graft materials, temporalis fascia or sliced tragal cartilage with a thickness of 0.5 mm. All patients were regularly followed up and post-operative audiometry was done at 3 months. The hearing improvement in the form of closure of the air-bone gap and the graft take-up were analysed statistically. RESULTS The temporalis fascia graft group had a pre-operative ABG of 22.33 ± 6.24 dB and a post-operative ABG of 12.33 ± 4.72 dB, with a hearing improvement of 10.00 dB. The sliced cartilage graft group had a pre-operative ABG of 20.77 ± 5.75 dB and a post-operative ABG of 10.50 ± 4.46 dB, with a hearing improvement of 10.27 dB. In the temporalis fascia group, 28 (93.3%) patients had good graft take-up and in the sliced cartilage group 29 (96.7%) had good graft take-up. There was statistically significant hearing improvement in both of our study groups, but there was no statistically significant difference between the two groups. There was no statistically significant difference in graft take-up either. CONCLUSION Sliced cartilage graft is a good auto-graft material in tympanoplasty, which can give good hearing improvement and has good graft take-up comparable with that of temporalis fascia.

  2. Integrating interface slicing into software engineering processes

    Science.gov (United States)

    Beck, Jon

    1993-01-01

    Interface slicing is a tool which was developed to facilitate software engineering. As previously presented, it was described in terms of its techniques and mechanisms. The integration of interface slicing into specific software engineering activities is considered by discussing a number of potential applications of interface slicing. The applications discussed specifically address the problems, issues, or concerns raised in a previous project. Because a complete interface slicer is still under development, these applications must be phrased in future tenses. Nonetheless, the interface slicing techniques which were presented can be implemented using current compiler and static analysis technology. Whether implemented as a standalone tool or as a module in an integrated development or reverse engineering environment, they require analysis no more complex than that required for current system development environments. By contrast, conventional slicing is a methodology which, while showing much promise and intuitive appeal, has yet to be fully implemented in a production language environment despite 12 years of development.

  3. A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation

    Directory of Open Access Journals (Sweden)

    Mingzhu Li

    2014-01-01

    Full Text Available The main aim of this work is to consider a meshfree algorithm for solving Burgers’ equation with the quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it can yield solutions directly without the need to solve any linear system of equations and overcome the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, by using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low order forward difference to approximate the time derivative of the dependent variable. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement and the numerical experiments show that it is feasible and valid.

  4. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation method based on Gaussian radial basis function (GRBF) has high precision, the long calculation time still limits its application in field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on computing unified device architecture (CUDA) is proposed in this paper. According to single instruction multiple threads (SIMT) executive model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, natural suture algorithm is utilized in overlapping regions while adopting data space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. Keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the operative efficiency of image GRBF interpolation based on CUDA platform was obviously improved compared with CPU calculation. The present method is of a considerable reference value in the application field of image interpolation.

  5. Interpolation in Spaces of Functions

    Directory of Open Access Journals (Sweden)

    K. Mosaleheh

    2006-03-01

    Full Text Available In this paper we consider the interpolation by certain functions such as trigonometric and rational functions for finite dimensional linear space X. Then we extend this to infinite dimensional linear spaces

  6. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

    International Nuclear Information System (INIS)

    Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

    2006-01-01

    Numerical control machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated with Matlab 7.0 software. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolations. The result verifies that this method can ensure smoothness of the interpolating spline curve and preserve the original shape characteristics. The surface quality interpolated by cubic NURBS is higher than that by lines. The algorithm is beneficial to increasing the surface form precision of workpieces in ultra-precision machining

  7. Gluteal fat thickness in pelvic CT

    International Nuclear Information System (INIS)

    Park, Jeong Mi; Jung, Se Young; Lee, Jae Mun; Park, Seog Hee; Kim, Choon Yul; Bahk, Yong Whee

    1986-01-01

    Many calcifications due to fat necrosis in the buttocks detected on pelvic roentgenograms suggest that the majority of injections intended to be intramuscular are actually delivered into fat. We measured the thickness of adult gluteal fat to decide whether an injection using a needle of the usual length is made into fat or muscle. The vertical thickness of the subcutaneous fat was measured at a point 2-3 cm above the femoral head on the CT slice, in 116 randomly collected adult cases in the Department of Radiology, St. Mary's Hospital, Catholic Medical College. We found that 32% of female cases might actually receive an intra-adipose injection when a needle of maximum 3.8 cm length is inserted into the buttock. If deposition into muscle is desirable, we need to choose a needle whose length is appropriate for the site of injection and the patient's deposits of fat.

  8. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

  9. A survey of program slicing for software engineering

    Science.gov (United States)

    Beck, Jon

    1993-01-01

    This research concerns program slicing, which is used as a tool for program maintenance of software systems. Program slicing decreases the level of effort required to understand and maintain complex software systems. It was first designed as a debugging aid, but it has since been generalized into various tools and extended to include program comprehension, module cohesion estimation, requirements verification, dead code elimination, and maintenance of several software systems, including reverse engineering, parallelization, portability, and reuse component generation. This paper seeks to address and define terminology, theoretical concepts, program representation, different program graphs, developments in static slicing, dynamic slicing, and semantics and mathematical models. Applications for conventional slicing are presented, along with a prognosis of future work in this field.

  10. Abstract interpolation in vector-valued de Branges-Rovnyak spaces

    NARCIS (Netherlands)

    Ball, J.A.; Bolotnikov, V.; ter Horst, S.

    2011-01-01

    Following ideas from the Abstract Interpolation Problem of Katsnelson et al. (Operators in spaces of functions and problems in function theory, vol 146, pp 83–96, Naukova Dumka, Kiev, 1987) for Schur class functions, we study a general metric constrained interpolation problem for functions from a

  11. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
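
    For context, the sketch below is a plain CPU reference of the bilinear interpolation that the paper offloads to graphics hardware, used here to evaluate a block-matching cost at a sub-pixel displacement; names and sizes are illustrative, not the authors' implementation.

```python
# CPU reference for bilinear image interpolation, used to sample a block at a
# sub-pixel displacement during block-matching motion estimation.
import numpy as np

def bilinear_sample(img, y, x):
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
    return ((1 - dy) * (1 - dx) * img[y0, x0] + (1 - dy) * dx * img[y0, x1]
            + dy * (1 - dx) * img[y1, x0] + dy * dx * img[y1, x1])

def block_sad(ref, cur, top, left, dy, dx, size=8):
    """Sum of absolute differences for a block shifted by a sub-pixel vector."""
    sad = 0.0
    for r in range(size):
        for c in range(size):
            sad += abs(cur[top + r, left + c]
                       - bilinear_sample(ref, top + r + dy, left + c + dx))
    return sad

ref = np.random.default_rng(2).uniform(size=(64, 64))
print(block_sad(ref, ref, 8, 8, 0.5, 0.25))   # cost of a half/quarter-pel shift
```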

  12. Data interpolation for vibration diagnostics using two-variable correlations

    International Nuclear Information System (INIS)

    Branagan, L.

    1991-01-01

    This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, are dependent both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options to develop accurate correlations. The examples illustrate the appropriate application of these options
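
    A small sketch of the two interpolation options mentioned above, linear versus staircase (hold-last-value), applied to synthetic asynchronous process data aligned with vibration acquisition times; all values are illustrative.

```python
# Aligning slowly sampled process data with vibration acquisition times:
# linear interpolation versus a staircase (hold-last-value) scheme.
import numpy as np

proc_t = np.array([0.0, 60.0, 120.0, 180.0])     # process sample times, s
proc_v = np.array([30.0, 32.0, 31.0, 35.0])      # e.g. bearing temperature
vib_t = np.array([10.0, 75.0, 150.0])            # vibration spectrum timestamps

linear = np.interp(vib_t, proc_t, proc_v)        # values between measured points

idx = np.searchsorted(proc_t, vib_t, side="right") - 1
staircase = proc_v[np.clip(idx, 0, len(proc_v) - 1)]   # last measured value held

print(linear)      # [30.33..., 31.75, 33.0]
print(staircase)   # [30., 32., 31.]
```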

  13. Facial soft tissue thickness in North Indian adult population

    Directory of Open Access Journals (Sweden)

    Tanushri Saxena

    2012-01-01

    Full Text Available Objectives: Forensic facial reconstruction is an attempt to reproduce a likeness of facial features of an individual, based on characteristics of the skull, for the purpose of individual identification - The aim of this study was to determine the soft tissue thickness values of individuals of Bareilly population, Uttar Pradesh, India and to evaluate whether these values can help in forensic identification. Study design: A total of 40 individuals (19 males, 21 females were evaluated using spiral computed tomographic (CT scan with 2 mm slice thickness in axial sections and soft tissue thicknesses were measured at seven midfacial anthropological facial landmarks. Results: It was found that facial soft tissue thickness values decreased with age. Soft tissue thickness values were less in females than in males, except at ramus region. Comparing the left and right values in individuals it was found to be not significant. Conclusion: Soft tissue thickness values are an important factor in facial reconstruction and also help in forensic identification of an individual. CT scan gives a good representation of these values and hence is considered an important tool in facial reconstruction- This study has been conducted in North Indian population and further studies with larger sample size can surely add to the data regarding soft tissue thicknesses.

  14. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    Science.gov (United States)

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means to improve the image effect in 3D reconstruction. In image processing, the time-frequency domain transform is an efficient method. In this article, several time-frequency domain transform methods are applied and compared in 3D interpolation. And a Sobel edge detection and 3D matching interpolation method based on wavelet transform is proposed. We combine wavelet transform, traditional matching interpolation methods, and Sobel edge detection together in our algorithm. What is more, the characteristics of wavelet transform and Sobel operator are used. They deal with the sub-images of wavelet decomposition separately. Sobel edge detection 3D matching interpolation method is used in low-frequency sub-images under the circumstances of ensuring high frequency undistorted. Through wavelet reconstruction, it can get the target interpolation image. In this article, we make 3D interpolation of the real computed tomography (CT) images. Compared with other interpolation methods, our proposed method is verified to be effective and superior.

  15. Computationally efficient real-time interpolation algorithm for non-uniform sampled biosignals.

    Science.gov (United States)

    Guven, Onur; Eftekhar, Amir; Kindt, Wilko; Constandinou, Timothy G

    2016-06-01

    This Letter presents a novel, computationally efficient interpolation method that has been optimised for use in electrocardiogram baseline drift removal. In the authors' previous Letter, three isoelectric baseline points per heartbeat are detected and here utilised as interpolation points. As an extension of linear interpolation, their algorithm segments the interpolation interval and utilises different piecewise linear equations. Thus, the algorithm produces a linear curvature that is computationally efficient while interpolating non-uniform samples. The proposed algorithm is tested using sinusoids with different fundamental frequencies from 0.05 to 0.7 Hz and also validated with real baseline wander data acquired from the Massachusetts Institute of Technology University and Boston's Beth Israel Hospital (MIT-BIH) Noise Stress Database. The synthetic data results show a root mean square (RMS) error of 0.9 μV (mean), 0.63 μV (median) and 0.6 μV (standard deviation) per heartbeat on a 1 mVp-p 0.1 Hz sinusoid. On real data, they obtain an RMS error of 10.9 μV (mean), 8.5 μV (median) and 9.0 μV (standard deviation) per heartbeat. Cubic spline interpolation and linear interpolation, on the other hand, show 10.7 μV, 11.6 μV (mean), 7.8 μV, 8.9 μV (median) and 9.8 μV, 9.3 μV (standard deviation) per heartbeat.
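
    A toy sketch of the general idea, assuming the isoelectric points are already known: plain linear interpolation through them estimates the baseline, which is then subtracted. The segmented piecewise-linear refinement of the Letter is not reproduced, and the signals are synthetic.

```python
# Baseline wander removal by interpolating assumed isoelectric points and
# subtracting the interpolated baseline from the ECG.
import numpy as np

fs = 250                                            # assumed sampling rate, Hz
t = np.arange(5 * fs) / fs
ecg = np.zeros_like(t)
ecg[np.arange(0, len(t), fs) + fs // 4] = 1.0       # idealized R-peaks
drift = 0.5 * np.sin(2 * np.pi * 0.1 * t)           # slow baseline wander
signal = ecg + drift

iso_idx = np.arange(0, len(t), fs // 3)             # stand-in isoelectric points
baseline = np.interp(t, t[iso_idx], signal[iso_idx])
corrected = signal - baseline

print(np.std(signal - ecg), np.std(corrected - ecg))   # wander largely removed
```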

  16. Hyaline cartilage thickness in radiographically normal cadaveric hips: comparison of spiral CT arthrographic and macroscopic measurements.

    Science.gov (United States)

    Wyler, Annabelle; Bousson, Valérie; Bergot, Catherine; Polivka, Marc; Leveque, Eric; Vicaut, Eric; Laredo, Jean-Denis

    2007-02-01

    To assess spiral multidetector computed tomographic (CT) arthrography for the depiction of cartilage thickness in hips without cartilage loss, with evaluation of anatomic slices as the reference standard. Permission to perform imaging studies in cadaveric specimens of individuals who had willed their bodies to science was obtained from the institutional review board. Two independent observers measured the femoral and acetabular hyaline cartilage thickness of 12 radiographically normal cadaveric hips (from six women and five men; age range at death, 52-98 years; mean, 76.5 years) on spiral multidetector CT arthrographic reformations and on coronal anatomic slices. Regions of cartilage loss at gross or histologic examination were excluded. CT arthrographic and anatomic measurements in the coronal plane were compared by using Bland-Altman representation and a paired t test. Differences between mean cartilage thicknesses at the points of measurement were tested by means of analysis of variance. Interobserver and intraobserver reproducibilities were determined. At CT arthrography, mean cartilage thickness ranged from 0.32 to 2.53 mm on the femoral head and from 0.95 to 3.13 mm on the acetabulum. Observers underestimated cartilage thickness in the coronal plane by 0.30 mm +/- 0.52 (mean +/- standard error) at CT arthrography (P cartilage thicknesses at the different measurement points was significant for coronal spiral multidetector CT arthrography and anatomic measurement of the femoral head and acetabulum and for sagittal and transverse CT arthrography of the femoral head (P cartilage thickness from the periphery to the center of the joint ("gradients") were found by means of spiral multidetector CT arthrography and anatomic measurement. Spiral multidetector CT arthrography depicts cartilage thickness gradients in radiographically normal cadaveric hips. (c) RSNA, 2007.

  17. RF slice profile effects in magnetic resonance fingerprinting.

    Science.gov (United States)

    Hong, Taehwa; Han, Dongyeob; Kim, Min-Oh; Kim, Dong-Hyun

    2017-09-01

    The radio frequency (RF) slice profile effects on T1 and T2 estimation in magnetic resonance fingerprinting (MRF) are investigated with respect to time-bandwidth product (TBW), flip angle (FA) level and field inhomogeneities. Signal evolutions are generated incorporating the non-ideal slice selective excitation process using Bloch simulation and matched to the original dictionary with and without the non-ideal slice profile taken into account. For validation, phantom and in vivo experiments are performed at 3T. Both simulation and experimental results show that the T1 and T2 errors from the non-ideal slice profile increase with increasing FA level, off-resonance, and low TBW values. Therefore, RF slice profile effects should be compensated for accurate determination of the MR parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    Science.gov (United States)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular class of methods that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  19. Study on the algorithm for Newton-Rapson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iteration interpolation method for NURBS curves suffers from several problems, such as long interpolation times, complicated calculations, and a step error that is not easily controlled. This paper proposes a study of the algorithm for Newton-Raphson iteration interpolation of NURBS curves together with its simulation. Newton-Raphson iteration is used to calculate the interpolated points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems and that the algorithm is correct and consistent with NURBS curve interpolation requirements.
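
    The abstract does not spell out the iteration itself; the sketch below illustrates the general idea on a simple parametric cubic curve standing in for a NURBS curve (a real interpolator would call a NURBS evaluator instead). For each interpolation step, Newton-Raphson solves for the next parameter value u such that the chord length from the previous point equals the commanded step. The example curve, step size and function names are assumptions.

        import numpy as np

        def curve(u):
            """Example parametric cubic curve C(u); a NURBS evaluator would go here."""
            return np.array([u, u ** 3 - 0.5 * u, 0.2 * u ** 2])

        def next_parameter(u_prev, step, tol=1e-10, max_iter=20):
            """Newton-Raphson: find u so that |C(u) - C(u_prev)| == step."""
            p_prev = curve(u_prev)
            u = u_prev + step                # initial guess
            for _ in range(max_iter):
                f = np.linalg.norm(curve(u) - p_prev) - step
                if abs(f) < tol:
                    break
                h = 1e-6                     # numerical derivative of f(u)
                df = (np.linalg.norm(curve(u + h) - p_prev) -
                      np.linalg.norm(curve(u - h) - p_prev)) / (2 * h)
                u -= f / df
            return u

        u, step = 0.0, 0.05
        points = [curve(u)]
        while u < 1.0:
            u = next_parameter(u, step)
            points.append(curve(u))

        points = np.array(points)
        print(np.linalg.norm(np.diff(points, axis=0), axis=1))   # ~constant chord steps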

  20. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    Science.gov (United States)

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  1. Effect of ultrasound and centrifugal force on carambola (Averrhoa carambola L.) slices during osmotic dehydration.

    Science.gov (United States)

    Barman, Nirmali; Badwaik, Laxmikant S

    2017-01-01

    Osmotic dehydration (OD) of carambola slices was carried out using glucose, sucrose, fructose and glycerol as osmotic agents with 70°Bx solute concentration, at 50°C, for 180 min. Glycerol and sucrose were selected on the basis of their higher water loss and weight reduction and lower solid gain. The optimization of OD of carambola slices (5 mm thick) was then carried out under different process conditions of temperature (40-60°C), concentration of sucrose and glycerol (50-70°Bx), time (180 min) and fruit to solution ratio (1:10) against various responses, viz. water loss, solid gain, texture, rehydration ratio and sensory score, according to a composite design. The optimized values for temperature and concentration of sucrose and glycerol were found to be 50°C, 66°Bx and 66°Bx, respectively. Under optimized conditions, the effects of ultrasound for 10, 20 and 30 min and centrifugal force (2800 rpm) for 15, 30, 45 and 60 min on OD of carambola slices were examined. The control samples showed 68.14% water loss and 13.05% solid gain in carambola slices, while the sample given 30 min of ultrasonic treatment showed 73.76% water loss and 9.79% solid gain, and the sample treated with centrifugal force for 60 min showed 75.65% water loss and 6.76% solid gain. The results showed that with increasing treatment time, water loss and rehydration ratio increased while solid gain and texture decreased. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...
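
    The abstract is truncated here, but the record's title names linear, transfinite and weighted variants. As a concrete illustration, the sketch below implements classical transfinite (Coons-patch) interpolation inside a single grid cell, assuming intensity values are known along the four bounding grid lines; this is a textbook construction, not necessarily the authors' exact formulation, and all names are assumptions.

        import numpy as np

        def transfinite_cell(top, bottom, left, right, n=9):
            """Coons-style transfinite interpolation on an n x n pixel block inside
            one cell, given intensities sampled along the four bounding grid lines.
            top/bottom vary along x, left/right vary along y; the corner values of
            the four edge arrays are assumed to be consistent."""
            u = np.linspace(0.0, 1.0, n)          # horizontal cell coordinate
            v = np.linspace(0.0, 1.0, n)          # vertical cell coordinate
            U, V = np.meshgrid(u, v)              # V indexes rows, U indexes columns

            # Linear blends of opposite edges, corrected by the bilinear corner term.
            blend_h = (1 - V) * top[None, :] + V * bottom[None, :]
            blend_v = (1 - U) * left[:, None] + U * right[:, None]
            corners = ((1 - U) * (1 - V) * top[0] + U * (1 - V) * top[-1] +
                       (1 - U) * V * bottom[0] + U * V * bottom[-1])
            return blend_h + blend_v - corners

        n = 9
        x = np.linspace(0, 1, n)
        top, bottom = np.sin(np.pi * x), np.cos(np.pi * x)
        left = (1 - x) * top[0] + x * bottom[0]   # simple consistent side profiles
        right = (1 - x) * top[-1] + x * bottom[-1]
        print(transfinite_cell(top, bottom, left, right).round(2))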

  3. A novel method for oxygen glucose deprivation model in organotypic spinal cord slices.

    Science.gov (United States)

    Liu, Jing-Jie; Ding, Xiao-Yan; Xiang, Li; Zhao, Feng; Huang, Sheng-Li

    2017-10-01

    This study aimed to establish a model that closely mimics spinal cord hypoxic-ischemic injury with high production and high reproducibility. Fourteen-day cultured organotypic spinal cord slices were divided into 4 groups: control (Ctrl), oxygen glucose deprived for 30 min (OGD 30 min), OGD 60 min, and OGD 120 min. The Ctrl slices were incubated with 1 ml propidium iodide (PI) solution (5 μg/ml) for 30 min. The OGD groups were incubated with 1 ml glucose-free DMEM/F12 medium and 5 μl PI solution (1 mg/ml) for 30 min, 60 min and 120 min, respectively. A positive control slice was fixed in 4% paraformaldehyde for 20 min. The culture medium in each group was then collected and the lactate dehydrogenase (LDH) level in the medium was tested using Multi-Analyte ELISArray kits. Structure and refraction of the spinal cord slices were observed by light microscopy. Fluorescence intensity of PI was examined by fluorescence microscopy and quantified with IPP software. Morphology of astrocytes was observed by immunofluorescence histochemistry. Caspase 3 and active caspase 3 in the different groups were tested by Western blot. In the OGD groups, the refraction of the spinal cord slices decreased and the structure became unclear. The changes of refraction and structure in the OGD 120 min group were similar to those in the positive control slice. Astrocyte morphology changed significantly: with increasing OGD time, processes became thick and twisted, and nuclear condensation became more apparent. Obvious changes in morphology were observed in the OGD 60 min group, and normal morphology disappeared in the OGD 120 min group. Fluorescence intensity of PI increased with the extension of OGD time; the difference was significant between 30 min and 60 min, but not between 60 min and 120 min, and the intensity at OGD 120 min was close to that in the positive control. Compared with the Ctrl group, the OGD groups had significantly higher LDH levels and active caspase 3/caspase 3 ratios. The values increased

  4. Digital surfaces and thicknesses of selected hydrogeologic units of the Floridan aquifer system in Florida and parts of Georgia, Alabama, and South Carolina

    Science.gov (United States)

    Williams, Lester J.; Dixon, Joann F.

    2015-01-01

    Digital surfaces and thicknesses of selected hydrogeologic units of the Floridan aquifer system were developed to define an updated hydrogeologic framework as part of the U.S. Geological Survey Groundwater Resources Program. The dataset contains structural surfaces depicting the top and base of the aquifer system, its major and minor hydrogeologic units and zones, geophysical marker horizons, and the altitude of the 10,000-milligram-per-liter total dissolved solids boundary that defines the approximate fresh and saline parts of the aquifer system. The thicknesses of selected major and minor units or zones were determined by interpolating points of known thickness or from raster surface subtraction of the structural surfaces. Additional data contained in the dataset include clipping polygons; regional polygon features that represent geologic or hydrogeologic aspects of the aquifers and the minor units or zones; data points used in the interpolation; and polygon and line features that represent faults, boundaries, and other features in the aquifer system.

  5. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  6. Ice thickness measurements and volume estimates for glaciers in Norway

    Science.gov (United States)

    Andreassen, Liss M.; Huss, Matthias; Melvold, Kjetil; Elvehøy, Hallgeir; Winsvold, Solveig H.

    2014-05-01

    Whereas glacier areas in many mountain regions around the world now are well surveyed using optical satellite sensors and available in digital inventories, measurements of ice thickness are sparse in comparison and a global dataset does not exist. Since the 1980s ice thickness measurements have been carried out by ground penetrating radar on many glaciers in Norway, often as part of contract work for hydropower companies with the aim to calculate hydrological divides of ice caps. Measurements have been conducted on numerous glaciers, covering the largest ice caps as well as a few smaller mountain glaciers. However, so far no ice volume estimate for Norway has been derived from these measurements. Here, we give an overview of ice thickness measurements in Norway, and use a distributed model to interpolate and extrapolate the data to provide an ice volume estimate of all glaciers in Norway. We also compare the results to various volume-area/thickness-scaling approaches using values from the literature as well as scaling constants we obtained from ice thickness measurements in Norway. Glacier outlines from a Landsat-derived inventory from 1999-2006 together with a national digital elevation model were used as input data for the ice volume calculations. The inventory covers all glaciers in mainland Norway and consists of 2534 glaciers (3143 glacier units) covering an area of 2692 km2 ± 81 km2. To calculate the ice thickness distribution of glaciers in Norway we used a distributed model which estimates surface mass balance distribution, calculates the volumetric balance flux and converts it into thickness using the flow law for ice. We calibrated this model with ice thickness data for Norway, mainly by adjusting the mass balance gradient. Model results generally agree well with the measured values, however, larger deviations were found for some glaciers. The total ice volume of Norway was estimated to be 275 km3 ± 30 km3. From the ice thickness data set we selected

  7. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of boundary condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...

  8. 16-slice MDCT arthrography of the shoulder: accuracy for detection of glenoid labral and rotator cuff tears

    International Nuclear Information System (INIS)

    Kim, Gang Deuk; Kim, Huoung Jun; Kim, Hye Won; Oh, Jung Taek; Juhng, Seon Kwan; Lee, Sung Ah

    2007-01-01

    We wanted to determine the diagnostic accuracy of 16-slice MDCT arthrography (CTA) for glenoid labral and rotator cuff tears of the shoulder. We enrolled forty-five patients who underwent arthroscopy after CTA for pain or instability of the shoulder joint. The CTA images were analyzed for the existence, sites and types of glenoid labral tears and the presence and severity of rotator cuff tears. We determined the sensitivity, specificity and accuracy of CTA for detecting glenoid labral and rotator cuff tears on the basis of the arthroscopy findings. At arthroscopy, there were 33 SLAP lesions (9 type I, 23 type II and 1 type III), 6 Bankart lesions and 31 rotator cuff lesions (21 supraspinatus, 9 infraspinatus and 1 subscapularis). On CTA, the sensitivity, specificity and accuracy for detecting 24 SLAP lesions, excluding the type I lesions, were 83%, 100% and 91%, the total rotator cuff tears were 90%, 100% and 98%, the full thickness supraspinatus tendon tears were 100%, 94% and 96%, and the partial thickness supraspinatus tendon tears were 29%, 100% and 89%, respectively. 16-slice MDCT arthrography has high accuracy for the diagnosis of abnormality of the glenoid labrum or rotator cuff tears and it can be a useful alternative to MRI or US

  9. 16-slice MDCT arthrography of the shoulder: accuracy for detection of glenoid labral and rotator cuff tears

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Gang Deuk; Kim, Huoung Jun; Kim, Hye Won; Oh, Jung Taek; Juhng, Seon Kwan [Wonkwang University Hospital, Iksan (Korea, Republic of); Lee, Sung Ah [Seoul Medical Center, Seoul (Korea, Republic of)

    2007-04-15

    We wanted to determine the diagnostic accuracy of 16-slice MDCT arthrography (CTA) for glenoid labral and rotator cuff tears of the shoulder. We enrolled forty-five patients who underwent arthroscopy after CTA for pain or instability of the shoulder joint. The CTA images were analyzed for the existence, sites and types of glenoid labral tears and the presence and severity of rotator cuff tears. We determined the sensitivity, specificity and accuracy of CTA for detecting glenoid labral and rotator cuff tears on the basis of the arthroscopy findings. At arthroscopy, there were 33 SLAP lesions (9 type I, 23 type II and 1 type III), 6 Bankart lesions and 31 rotator cuff lesions (21 supraspinatus, 9 infraspinatus and 1 subscapularis). On CTA, the sensitivity, specificity and accuracy for detecting 24 SLAP lesions, excluding the type I lesions, were 83%, 100% and 91%, the total rotator cuff tears were 90%, 100% and 98%, the full thickness supraspinatus tendon tears were 100%, 94% and 96%, and the partial thickness supraspinatus tendon tears were 29%, 100% and 89%, respectively. 16-slice MDCT arthrography has high accuracy for the diagnosis of abnormality of the glenoid labrum or rotator cuff tears and it can be a useful alternative to MRI or US.

  10. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

    Directory of Open Access Journals (Sweden)

    Xiaowei Niu

    2014-05-01

    Full Text Available Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. The polynomial-based interpolation filters can be implemented efficiently by using a modified Farrow structure with an arbitrary frequency response; the filters allow many pass-bands and stop-bands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing the weighted mean squared error function, which is converted into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maxima (minimax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verified that the proposed design method not only reduces the hardware cost effectively but also guarantees excellent performance.
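
    A minimal sketch of the polynomial (Farrow-style) interpolation idea, not the optimised filters of the paper: a fixed linear mapping turns a window of input samples into polynomial coefficients (the branch filters), and only the final Horner evaluation depends on the fractional delay mu. The cubic Lagrange window used here is a common textbook choice, assumed purely for illustration.

        import numpy as np

        # Fixed tap positions of a 4-tap cubic interpolator around the base sample.
        TAPS_T = np.array([-1.0, 0.0, 1.0, 2.0])
        # Branch-filter matrix: solve the Vandermonde system once, offline.
        V = np.vander(TAPS_T, 4, increasing=True)      # rows: [1, t, t^2, t^3]
        BRANCHES = np.linalg.inv(V)                    # coeffs = BRANCHES @ taps

        def farrow_interp(x, n, mu):
            """Value of the signal x between samples n and n+1 at fraction mu in [0,1)."""
            taps = x[n - 1:n + 3]                      # x[n-1], x[n], x[n+1], x[n+2]
            a = BRANCHES @ taps                        # polynomial coefficients a0..a3
            # Horner evaluation: only this part depends on mu.
            return ((a[3] * mu + a[2]) * mu + a[1]) * mu + a[0]

        t = np.arange(32)
        x = np.sin(2 * np.pi * 0.05 * t)
        est = farrow_interp(x, 10, 0.37)
        print(est, np.sin(2 * np.pi * 0.05 * 10.37))   # close agreement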

  11. Assessment of Myocardial Bridge and Mural Coronary Artery Using ECG-Gated 256-Slice CT Angiography: A Retrospective Study

    Directory of Open Access Journals (Sweden)

    En-sen Ma

    2013-01-01

    Full Text Available Recent clinical reports have indicated that the myocardial bridge and mural coronary artery complex (MB-MCA) might cause major adverse cardiac events. 256-slice CT angiography (256-slice CTA) is a newly developed CT system with faster scanning and lower radiation dose compared with other CT systems. The objective of this study is to evaluate the morphological features of MB-MCA and determine its changes from the diastole to the systole phase using 256-slice CTA. The imaging data of 2462 patients were collected retrospectively. Two independent radiologists reviewed the collected images and the diagnosis of MB-MCA was confirmed when consistency was obtained. The length, diameter, and thickness of MB-MCA in the diastole and systole phases were recorded, and changes of MB-MCA were calculated. Our results showed that among the 2462 patients examined, 336 had one or multiple MB-MCA (13.6%). Out of 389 MB-MCA segments, 235 sites were located in LAD2 (60.41%). The average diameter change of MCA in LAD2 from the systole phase to the diastole phase was  mm, and 34.9% of MCA had more than 50% diameter stenosis in the systole phase. This study suggested that the 256-slice CTA multiple-phase reconstruction technique is a reliable method to determine the changes of MB-MCA from the diastole to the systole phase.

  12. Thin slices of child personality: Perceptual, situational, and behavioral contributions.

    Science.gov (United States)

    Tackett, Jennifer L; Herzhoff, Kathrin; Kushner, Shauna C; Rule, Nicholas

    2016-01-01

    The present study examined whether thin-slice ratings of child personality serve as a resource-efficient and theoretically valid measurement of child personality traits. We extended theoretical work on the observability, perceptual accuracy, and situational consistency of childhood personality traits by examining intersource and interjudge agreement, cross-situational consistency, and convergent, divergent, and predictive validity of thin-slice ratings. Forty-five unacquainted independent coders rated 326 children's (ages 8-12) personality in 1 of 15 thin-slice behavioral scenarios (i.e., 3 raters per slice, for over 14,000 independent thin-slice ratings). Mothers, fathers, and children rated children's personality, psychopathology, and competence. We found robust evidence for correlations between thin-slice and mother/father ratings of child personality, within- and across-task consistency of thin-slice ratings, and convergent and divergent validity with psychopathology and competence. Surprisingly, thin-slice ratings were more consistent across situations in this child sample than previously found for adults. Taken together, these results suggest that thin slices are a valid and reliable measure to assess child personality, offering a useful method of measurement beyond questionnaires, helping to address novel questions of personality perception and consistency in childhood. (c) 2016 APA, all rights reserved.

  13. Slices

    KAUST Repository

    McCrae, James

    2011-01-01

    Minimalist object representations or shape-proxies that spark and inspire human perception of shape remain an incompletely understood, yet powerful aspect of visual communication. We explore the use of planar sections, i.e., the contours of intersection of planes with a 3D object, for creating shape abstractions, motivated by their popularity in art and engineering. We first perform a user study to show that humans do define consistent and similar planar section proxies for common objects. Interestingly, we observe a strong correlation between user-defined planes and geometric features of objects. Further we show that the problem of finding the minimum set of planes that capture a set of 3D geometric shape features is both NP-hard and not always the proxy a user would pick. Guided by the principles inferred from our user study, we present an algorithm that progressively selects planes to maximize feature coverage, which in turn influence the selection of subsequent planes. The algorithmic framework easily incorporates various shape features, while their relative importance values are computed and validated from the user study data. We use our algorithm to compute planar slices for various objects, validate their utility towards object abstraction using a second user study, and conclude showing the potential applications of the extracted planar slice shape proxies. © 2011 ACM.

  14. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  15. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

    Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • The fractal extrapolate interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance of the wind speed series, the rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict the wind speed series whose Hurst exponents are close to 1. An optimization function which is composed of the interpolation error and the constraint items of the vertical scaling factors in the fractal interpolation iterated function system is designed. The chaos optimization algorithm is used to optimize the function to resolve the optimal vertical scaling factors. According to the self-similarity characteristic and the scale invariance, the fractal extrapolate interpolation prediction can be performed by extending the fractal characteristic from internal interval to external interval. Simulation results show that the fractal interpolation prediction method can get better prediction result than others for the wind speed series with the fractal characteristic, and the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series
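
    A minimal sketch of a fractal interpolation function built from an affine iterated function system with fixed vertical scaling factors (the paper optimises those factors with a chaos optimisation algorithm, which is not reproduced here). The data, the scaling factors and all names are illustrative assumptions.

        import numpy as np

        def fif_maps(x, y, d):
            """Affine IFS maps w_i(t, z) = (a_i*t + e_i, c_i*t + d_i*z + f_i) through
            interpolation points (x_i, y_i) with vertical scaling factors d_i."""
            maps = []
            for i in range(1, len(x)):
                # Horizontal part: L_i(x0) = x[i-1], L_i(xN) = x[i].
                a, e = np.linalg.solve([[x[0], 1.0], [x[-1], 1.0]], [x[i - 1], x[i]])
                # Vertical part: F_i(x0, y0) = y[i-1], F_i(xN, yN) = y[i], given d_i.
                c, f = np.linalg.solve([[x[0], 1.0], [x[-1], 1.0]],
                                       [y[i - 1] - d[i - 1] * y[0],
                                        y[i] - d[i - 1] * y[-1]])
                maps.append((a, e, c, d[i - 1], f))
            return maps

        def fif_chaos_game(x, y, d, n_iter=20000, seed=0):
            rng = np.random.default_rng(seed)
            maps = fif_maps(x, y, d)
            pts, t, z = [], x[0], y[0]
            for _ in range(n_iter):
                a, e, c, dv, f = maps[rng.integers(len(maps))]
                t, z = a * t + e, c * t + dv * z + f
                pts.append((t, z))
            return np.array(pts)

        x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # interpolation nodes
        y = np.array([5.1, 6.3, 5.8, 7.0, 6.2])      # e.g. hourly wind speeds (m/s)
        d = np.array([0.3, -0.2, 0.25, 0.3])         # vertical scaling factors, |d|<1
        curve = fif_chaos_game(x, y, d)
        print(curve[:5])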

  16. Correlation of NTD-silicon rod and slice resistivity

    International Nuclear Information System (INIS)

    Wolverton, W.M.

    1984-01-01

    Neutron transmutation doped silicon is an electronic material which presents an opportunity to explore a high level of resistivity characterization. This is due to its excellent uniformity of dopant concentration. Appropriate resistivity measurements on the ingot raw material can be used as a predictor of slice resistivity. Correlation of finished NTD rod (i.e. ingot) resistivity to as-cut slice resistivity (after the sawing process) is addressed in the scope of this paper. Empirical data show that the shift of slice-center resistivity compared to rod-end center resistivity is a function of a new kind of rod radial-resistivity gradient. This function has two domains, and most rods are in domain ''A''. Correlating equations show how to significantly improve the prediction of slice resistivity of rods in domain ''A''. The new rod resistivity specifications have resulted in manufacturing economies in the production of NTD silicon slices

  17. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    Full Text Available In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve the problem. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient interpolation error components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and calculation amount of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications of electronic transformers.

  18. Interpolation functors and interpolation spaces

    CERN Document Server

    Brudnyi, Yu A

    1991-01-01

    The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...

  19. Investigation of the slice sensitivity profile for step-and-shoot mode multi-slice computed tomography

    International Nuclear Information System (INIS)

    Hsieh Jiang

    2001-01-01

    Multislice computed tomography (MCT) is one of the recent technology advancements in CT. Compared to single slice CT, MCT significantly improves examination time, x-ray tube efficiency, and contrast material utilization. Although the scan mode of MCT is predominately helical, step-and-shoot (axial) scans continue to be an important part of routine clinical protocols. In this paper, we present a detailed investigation on the slice sensitivity profile (SSP) of MCT in the step-and-shoot mode. Our investigation shows that, unlike single slice CT, the SSP for MCT exhibits multiple peaks and valleys resulting from intercell gaps between detector rows. To fully understand the characteristics of the SSP, we developed an analytical model to predict the behavior of MCT. We propose a simple experimental technique that can quickly and accurately measure SSP. The impact of the SSP on image artifacts and low contrast detectability is also investigated

  20. Validation study of an interpolation method for calculating whole lung volumes and masses from reduced numbers of CT-images in ponies.

    Science.gov (United States)

    Reich, H; Moens, Y; Braun, C; Kneissl, S; Noreikat, K; Reske, A

    2014-12-01

    Quantitative computer tomographic analysis (qCTA) is an accurate but time intensive method used to quantify volume, mass and aeration of the lungs. The aim of this study was to validate a time efficient interpolation technique for application of qCTA in ponies. Forty-one thoracic computer tomographic (CT) scans obtained from eight anaesthetised ponies positioned in dorsal recumbency were included. Total lung volume and mass and their distribution into four compartments (non-aerated, poorly aerated, normally aerated and hyperaerated; defined based on the attenuation in Hounsfield Units) were determined for the entire lung from all 5 mm thick CT-images, 59 (55-66) per animal. An interpolation technique validated for use in humans was then applied to calculate qCTA results for lung volumes and masses from only 10, 12, and 14 selected CT-images per scan. The time required for both procedures was recorded. Results were compared statistically using the Bland-Altman approach. The bias ± 2 SD for total lung volume calculated from interpolation of 10, 12, and 14 CT-images was -1.2 ± 5.8%, 0.1 ± 3.5%, and 0.0 ± 2.5%, respectively. The corresponding results for total lung mass were -1.1 ± 5.9%, 0.0 ± 3.5%, and 0.0 ± 3.0%. The average time for analysis of one thoracic CT-scan using the interpolation method was 1.5-2 h compared to 8 h for analysis of all images of one complete thoracic CT-scan. The calculation of pulmonary qCTA data by interpolation from 12 CT-images was applicable for equine lung CT-scans and reduced the time required for analysis by 75%. Copyright © 2014 Elsevier Ltd. All rights reserved.
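
    The interpolation idea can be illustrated as follows (a deliberately simplified sketch, not the validated protocol of the study): cross-sectional areas are measured on a subset of CT slice positions, interpolated to every slice position, and multiplied by the slice thickness to estimate total volume. All numbers and names are assumptions.

        import numpy as np

        slice_thickness_mm = 5.0
        n_slices = 60
        z = np.arange(n_slices) * slice_thickness_mm          # slice positions (mm)

        # "True" segmented lung area per slice (cm^2), as a smooth synthetic profile.
        area_true = 120.0 * np.sin(np.pi * np.arange(n_slices) / (n_slices - 1)) ** 1.5

        # Analyse only 12 roughly evenly spaced slices, then interpolate the rest.
        idx = np.linspace(0, n_slices - 1, 12).round().astype(int)
        area_interp = np.interp(z, z[idx], area_true[idx])

        def volume_ml(area_cm2):
            # area (cm^2) * thickness (cm), summed over slices -> millilitres
            return float(np.sum(area_cm2 * slice_thickness_mm / 10.0))

        v_full, v_sub = volume_ml(area_true), volume_ml(area_interp)
        print(f"full-resolution volume: {v_full:.0f} mL")
        print(f"12-slice interpolated:  {v_sub:.0f} mL  (bias {100*(v_sub-v_full)/v_full:+.1f}%)")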

  1. Improvements in Off Design Aeroengine Performance Prediction Using Analytic Compressor Map Interpolation

    Science.gov (United States)

    Mist'e, Gianluigi Alberto; Benini, Ernesto

    2012-06-01

    Compressor map interpolation is usually performed through the introduction of auxiliary coordinates (β). In this paper, a new analytical bivariate β function definition to be used in compressor map interpolation is studied. The function has user-defined parameters that must be adjusted to properly fit to a single map. The analytical nature of β allows for rapid calculations of the interpolation error estimation, which can be used as a quantitative measure of interpolation accuracy and also as a valid tool to compare traditional β function interpolation with new approaches (artificial neural networks, genetic algorithms, etc.). The quality of the method is analyzed by comparing the error output to the one of a well-known state-of-the-art methodology. This comparison is carried out for two different types of compressor and, in both cases, the error output using the method presented in this paper is found to be consistently lower. Moreover, an optimization routine able to locally minimize the interpolation error by shape variation of the β function is implemented. Further optimization introducing other important criteria is discussed.

  2. Spatio-temporal interpolation of precipitation during monsoon periods in Pakistan

    Science.gov (United States)

    Hussain, Ijaz; Spöck, Gunter; Pilz, Jürgen; Yu, Hwa-Lung

    2010-08-01

    Spatio-temporal estimation of precipitation over a region is essential to the modeling of hydrologic processes for water resources management. The changes of magnitude and space-time heterogeneity of rainfall observations make space-time estimation of precipitation a challenging task. In this paper we propose a Box-Cox transformed hierarchical Bayesian multivariate spatio-temporal interpolation method for the skewed response variable. The proposed method is applied to estimate space-time monthly precipitation in the monsoon periods during 1974-2000, and 27-year monthly average precipitation data are obtained from 51 stations in Pakistan. The results of transformed hierarchical Bayesian multivariate spatio-temporal interpolation are compared to those of non-transformed hierarchical Bayesian interpolation by using cross-validation. The software developed by [11] is used for Bayesian non-stationary multivariate space-time interpolation. It is observed that the transformed hierarchical Bayesian method provides more accuracy than the non-transformed hierarchical Bayesian method.

  3. Mathematical Modeling of Thin Layer Microwave Drying of Taro Slices

    Science.gov (United States)

    Kumar, Vivek; Sharma, H. K.; Singh, K.

    2016-03-01

    The present study investigated the drying kinetics of taro slices precooked in different medium viz water (WC), steam (SC) and Lemon Solution (LC) and dried at different microwave power 360, 540 and 720 W. Drying curves of all precooked slices at all microwave powers showed falling rate period along with a very short accelerating period at the beginning of the drying. At all microwave powers, higher drying rate was observed for LC slices as compared to WC and SC slices. To select a suitable drying curve, seven thin-layer drying models were fitted to the experimental data. The data revealed that the Page model was most adequate in describing the microwave drying behavior of taro slices precooked in different medium. The highest effective moisture diffusivity value of 2.11 × 10-8 m2/s was obtained for LC samples while the lowest 0.83 × 10-8 m2/s was obtained for WC taro slices. The activation energy (E a ) of LC taro slices was lower than the E a of WC and SC taro slices.
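
    The Page model mentioned here has the form MR = exp(-k t^n). A small sketch of fitting it to a drying curve with scipy is shown below; the moisture-ratio data and parameter values are synthetic and purely illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def page_model(t, k, n):
            """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
            return np.exp(-k * t ** n)

        # Synthetic drying data: time (min) and moisture ratio MR = (M - Me)/(M0 - Me).
        t = np.array([0, 10, 20, 30, 45, 60, 90, 120, 150, 180], dtype=float)
        mr = np.array([1.00, 0.78, 0.62, 0.50, 0.37, 0.28, 0.17, 0.11, 0.07, 0.05])

        (k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0),
                              bounds=([0.0, 0.0], [np.inf, np.inf]))
        rmse = np.sqrt(np.mean((page_model(t, k, n) - mr) ** 2))
        print(f"k = {k:.4f} min^-n, n = {n:.3f}, RMSE = {rmse:.4f}")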

  4. Precipitation interpolation in mountainous areas

    Science.gov (United States)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge for each 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.

  5. The Convergence Acceleration of Two-Dimensional Fourier Interpolation

    Directory of Open Access Journals (Sweden)

    Anry Nersessian

    2008-07-01

    Full Text Available Hereby, the convergence acceleration of two-dimensional trigonometric interpolation for smooth functions on a uniform mesh is considered. Together with theoretical estimates, some numerical results are presented and discussed that reveal the potential of this method for application in image processing. Experiments show that the suggested algorithm allows acceleration of conventional Fourier interpolation even for sparse meshes, which can lead to efficient image compression/decompression algorithms and also to applications in image zooming procedures.

  6. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    Full Text Available This paper discusses positivity preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four boundary curves network on the rectangular patch. Numerical comparison with existing schemes has also been done in detail. Based on Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  7. Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis

    Science.gov (United States)

    Roth, Don J.

    2013-01-01

    A software method has been developed that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2D sheets (or flattened 'onion skins') in addition to a series of top view slices and 3D volume rendering. The advantages of viewing the data in this fashion are as follows: (1) the use of standard and specialized image processing and analysis methods is facilitated by having 2D array data versus a volume rendering; (2) accurate lateral dimensional analysis of flaws is possible in the unwrapped sheets versus volume rendering; (3) flaws in the part jump out at the inspector with the proper contrast expansion settings in the unwrapped sheets; and (4) it is much easier for the inspector to locate flaws in the unwrapped sheets versus top view slices for very thin cylinders. The method is fully automated and requires no input from the user except the proper voxel dimension from the CT experiment and the wall thickness of the part. The software is available in 32-bit and 64-bit versions, and can be used with binary data (8- and 16-bit) and BMP type CT image sets. The software has memory (RAM) and hard-drive based modes. The advantage of the (64-bit) RAM-based mode is speed (it is very practical for users of 64-bit Windows operating systems and computers having 16 GB or more RAM). The advantage of the hard-drive based analysis is that one can work with essentially unlimited-sized data sets. Separate windows are spawned for the unwrapped/re-sliced data view and any image processing interactive capability. Individual unwrapped images and unwrapped image series can be saved in common image formats. More information is available at http://www.grc.nasa.gov/WWW/OptInstr/NDE_CT_CylinderUnwrapper.html.
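
    This is not the NASA tool itself, but the core resampling step can be sketched in a few lines: sample the reconstructed volume on a (z, theta) grid at a fixed radius, producing one unwrapped 'onion skin' per radius, using trilinear interpolation. Axis order, voxel spacing, the toy volume and all names are assumptions.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def unwrap_cylinder(volume, center_yx, radius_vox, n_theta=720, order=1):
            """Return one unwrapped sheet (z along rows, theta along columns) sampled
            at a fixed radius from a CT volume stored as (z, y, x)."""
            nz = volume.shape[0]
            theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
            zz, tt = np.meshgrid(np.arange(nz), theta, indexing="ij")
            yy = center_yx[0] + radius_vox * np.sin(tt)
            xx = center_yx[1] + radius_vox * np.cos(tt)
            coords = np.stack([zz, yy, xx])           # shape (3, nz, n_theta)
            return map_coordinates(volume, coords, order=order, mode="nearest")

        # Toy volume: a hollow cylinder wall with a small dense "flaw" embedded in it.
        vol = np.zeros((40, 128, 128), dtype=np.float32)
        yy, xx = np.mgrid[0:128, 0:128]
        r = np.hypot(yy - 64, xx - 64)
        vol[:, (r > 40) & (r < 48)] = 1.0
        vol[18:22, 106:110, 62:66] = 3.0              # flaw inside the wall

        sheet = unwrap_cylinder(vol, center_yx=(64, 64), radius_vox=44)
        print(sheet.shape, sheet.max())               # the flaw stands out in the sheet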

  8. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

    Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize the given planar data. Constraints are derived on these free shape parameters to generate shape preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is investigated as O(h3). Numerical experiments demonstrate that our method can construct nice shape preserving interpolation curves efficiently.

  9. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
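
    A minimal 1-D sketch of the idea that interpolation uncertainty grows with distance from the sampling grid: a Gaussian-process posterior with a squared-exponential kernel gives both an interpolated value and its variance at off-grid points. The kernel choice and hyperparameters are assumptions, not the paper's registration model.

        import numpy as np

        def rbf_kernel(a, b, length=1.0, sigma=1.0):
            d2 = (a[:, None] - b[None, :]) ** 2
            return sigma ** 2 * np.exp(-0.5 * d2 / length ** 2)

        # Image intensities known on the base grid (1-D for clarity).
        x_grid = np.arange(0.0, 10.0)
        y_grid = np.sin(x_grid)
        noise = 1e-6                                 # jitter for numerical stability

        # GP posterior at resampled (off-grid) positions.
        x_new = np.linspace(0.0, 9.0, 37)
        K = rbf_kernel(x_grid, x_grid) + noise * np.eye(len(x_grid))
        Ks = rbf_kernel(x_new, x_grid)
        Kss = rbf_kernel(x_new, x_new)

        alpha = np.linalg.solve(K, y_grid)
        mean = Ks @ alpha                            # interpolated intensities
        cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
        std = np.sqrt(np.clip(np.diag(cov), 0.0, None))

        # Uncertainty is smallest at grid points and largest midway between them.
        for xv, m, s in list(zip(x_new, mean, std))[:5]:
            print(f"x = {xv:4.2f}  mean = {m:6.3f}  std = {s:.4f}")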

  10. Treatment of Outliers via Interpolation Method with Neural Network Forecast Performances

    Science.gov (United States)

    Wahir, N. A.; Nor, M. E.; Rusiman, M. S.; Gopal, K.

    2018-04-01

    Outliers often lurk in many datasets, especially in real data. Such anomalous data can negatively affect statistical analyses, primarily normality, variance, and estimation aspects. Hence, handling the occurrence of outliers requires special attention, and it is important to determine suitable ways of treating outliers so as to ensure that the quality of the analyzed data is high. As such, this paper discusses an alternative method to treat outliers via a linear interpolation method. Treating an outlier as a missing value in the dataset allows the interpolation method to be applied to the outliers, thus enabling comparison of the data series using forecast accuracy before and after outlier treatment. To this end, the monthly time series of Malaysian tourist arrivals from January 1998 until December 2015 was used to interpolate the new series. The results indicated that the linear interpolation method, which produced an improved time series, displayed better forecasting results than the original time series data for both Box-Jenkins and neural network approaches.
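
    A small sketch of the treatment step described here, assuming a monthly series held in pandas: points flagged as outliers are set to missing and re-filled by linear interpolation before forecasting. The synthetic data and the outlier rule (a simple rolling-median residual threshold) are illustrative assumptions, not the paper's procedure.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        idx = pd.date_range("1998-01", periods=216, freq="MS")      # Jan 1998 - Dec 2015
        arrivals = (1.5e6 + 3000 * np.arange(216)
                    + 2e5 * np.sin(2 * np.pi * np.arange(216) / 12)
                    + rng.normal(0, 3e4, 216))
        arrivals[[50, 120, 180]] *= 0.4                              # injected outliers
        series = pd.Series(arrivals, index=idx, name="tourist_arrivals")

        # Flag outliers as large residuals from a centred rolling median.
        resid = series - series.rolling(13, center=True, min_periods=1).median()
        is_outlier = resid.abs() > 4 * resid.std()

        # Treat outliers as missing values and fill them by linear interpolation.
        treated = series.mask(is_outlier).interpolate(method="linear")
        print(series[is_outlier].round(0).to_string())
        print(treated[is_outlier].round(0).to_string())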

  11. RETROSPECTIVE DETECTION OF INTERLEAVED SLICE ACQUISITION PARAMETERS FROM FMRI DATA

    Science.gov (United States)

    Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.

    2015-01-01

    To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244

  12. Parallel optimization of IDW interpolation algorithm on multicore platform

    Science.gov (United States)

    Guan, Xuefeng; Wu, Huayi

    2009-10-01

    Due to increasing power consumption, heat dissipation, and other physical issues, the architecture of the central processing unit (CPU) has been turning to multicore rapidly in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance, but also presents significant challenges to application developers. As a matter of fact, in the GIS field most current GIS algorithms are implemented serially and cannot best exploit the parallelism potential on such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize current serial GIS algorithms on a multicore platform in order to maximize performance speedup. With the help of OpenMP, a threading methodology is introduced to split and share the whole interpolation work among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and good performance speedup is achieved. For example, the performance speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. An additional output comparison between pre-optimization and post-optimization was carried out and shows that parallel optimization does not affect the final interpolation result.
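
    A compact sketch of inverse distance weighted (IDW) interpolation itself, written in Python rather than the paper's OpenMP/C setting: each output grid point is a weight-normalised sum over the sample points, so output rows are independent and can be split across cores, which is exactly the property the threading optimisation exploits. The point data, power parameter and names are assumptions.

        import numpy as np
        from concurrent.futures import ThreadPoolExecutor

        def idw_rows(rows, grid_x, grid_y, pts, vals, power=2.0, eps=1e-12):
            """IDW-interpolate the given grid rows; rows are independent work units."""
            out = np.empty((len(rows), grid_x.size))
            for i, r in enumerate(rows):
                dx = grid_x[None, :] - pts[:, 0:1]
                dy = grid_y[r] - pts[:, 1:2]
                w = 1.0 / (np.hypot(dx, dy) ** power + eps)
                out[i] = (w * vals[:, None]).sum(axis=0) / w.sum(axis=0)
            return out

        rng = np.random.default_rng(0)
        pts = rng.uniform(0, 100, size=(200, 2))            # scattered sample points
        vals = np.sin(pts[:, 0] / 10) + np.cos(pts[:, 1] / 15)
        grid_x = np.linspace(0, 100, 256)
        grid_y = np.linspace(0, 100, 256)

        # Split the output rows among workers (the OpenMP-style data decomposition).
        chunks = np.array_split(np.arange(grid_y.size), 4)
        with ThreadPoolExecutor(max_workers=4) as ex:
            parts = list(ex.map(lambda c: idw_rows(c, grid_x, grid_y, pts, vals), chunks))
        surface = np.vstack(parts)
        print(surface.shape)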

  13. Multi-slice computed tomography-assisted endoscopic transsphenoidal surgery for pituitary macroadenoma: a comparison with conventional microscopic transsphenoidal surgery.

    Science.gov (United States)

    Tosaka, Masahiko; Nagaki, Tomohito; Honda, Fumiaki; Takahashi, Katsumasa; Yoshimoto, Yuhei

    2015-11-01

    Intraoperative computed tomography (iCT) is a reliable method for the detection of residual tumour, but previous single-slice low-resolution computed tomography (CT) without coronal or sagittal reconstructions was not of adequate quality for clinical use. The present study evaluated the results of multi-slice iCT-assisted endoscopic transsphenoidal surgery for pituitary macroadenoma. This retrospective study included 30 consecutive patients with newly diagnosed or recurrent pituitary macroadenoma with supradiaphragmatic extension who underwent endoscopic transsphenoidal surgery using iCT (eTSS+iCT group), and a control group of 30 consecutive patients who underwent conventional endoscope-assisted transsphenoidal surgery (cTSS group). The tumour volume was calculated by multiplying the tumour area by the slice thickness. Visual acuity and visual field were estimated by the visual impairment score (VIS). The resection extent, (preoperative tumour volume - postoperative residual tumour volume)/preoperative tumour volume, was 98.9% (median) in the eTSS+iCT group and 91.7% in the cTSS group, a significant difference between the groups (P = 0.04). Greater than 95% and greater than 90% removal rates were significantly higher in the eTSS+iCT group than in the cTSS group (P = 0.02 and P = 0.001, respectively). However, improvement in VIS showed no significant difference between the groups. The rate of complications also showed no significant difference. Multi-slice iCT-assisted endoscopic transsphenoidal surgery may improve the resection extent of pituitary macroadenoma. Multi-slice iCT may have advantages over intraoperative magnetic resonance imaging in that it is less expensive, has a short acquisition time, and requires no special protection against magnetic fields.

  14. Accurate B-spline-based 3-D interpolation scheme for digital volume correlation

    Science.gov (United States)

    Ren, Maodong; Liang, Jin; Wei, Bin

    2016-12-01

    An accurate and efficient 3-D interpolation scheme, based on sampling theorem and Fourier transform technique, is proposed to reduce the sub-voxel matching error caused by intensity interpolation bias in digital volume correlation. First, the influence factors of the interpolation bias are investigated theoretically using the transfer function of an interpolation filter (henceforth filter) in the Fourier domain. A law that the positional error of a filter can be expressed as a function of fractional position and wave number is found. Then, considering the above factors, an optimized B-spline-based recursive filter, combining B-spline transforms and least squares optimization method, is designed to virtually eliminate the interpolation bias in the process of sub-voxel matching. Besides, given each volumetric image containing different wave number ranges, a Gaussian weighting function is constructed to emphasize or suppress certain of wave number ranges based on the Fourier spectrum analysis. Finally, a novel software is developed and series of validation experiments were carried out to verify the proposed scheme. Experimental results show that the proposed scheme can reduce the interpolation bias to an acceptable level.
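
    The paper's optimised recursive filter is not reproduced here, but the baseline operation it builds on, cubic B-spline interpolation of a volume at sub-voxel positions via a prefilter followed by spline evaluation, can be sketched with scipy as follows; the test volume, the displacement and all names are assumptions.

        import numpy as np
        from scipy.ndimage import spline_filter, map_coordinates

        rng = np.random.default_rng(0)
        volume = rng.normal(size=(32, 32, 32))             # speckle-like test volume

        # Step 1: recursive prefilter turns voxel values into B-spline coefficients.
        coeffs = spline_filter(volume, order=3)

        # Step 2: evaluate the cubic B-spline at sub-voxel positions (here a block
        # shifted by a constant sub-voxel offset, as in DVC sub-voxel matching).
        zz, yy, xx = np.mgrid[8:24, 8:24, 8:24].astype(float)
        shift = np.array([0.37, -0.21, 0.48])              # sub-voxel displacement
        coords = np.stack([zz + shift[0], yy + shift[1], xx + shift[2]])
        shifted = map_coordinates(coeffs, coords, order=3, prefilter=False)

        print(shifted.shape)                               # (16, 16, 16) interpolated block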

  15. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    Science.gov (United States)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of such interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows a lot of analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  16. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    Science.gov (United States)

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in a daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
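
    The two-step scheme described here can be sketched in a few lines with scikit-learn: a logistic regression predicts wet/dry occurrence at a target point from nearby station predictors, and a separate linear regression, fitted on wet days only, predicts the amount. The feature construction is deliberately simplified and the data, thresholds and names are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression, LinearRegression

        rng = np.random.default_rng(42)
        n_days, n_stations = 2000, 5
        stations = rng.gamma(shape=0.4, scale=8.0, size=(n_days, n_stations))
        stations[rng.random((n_days, n_stations)) < 0.55] = 0.0     # dry station-days

        # Synthetic "target point" rainfall loosely tied to the surrounding stations.
        target = np.clip(0.6 * stations.mean(axis=1) + rng.normal(0, 0.5, n_days), 0.0, None)

        X = stations
        wet = target > 0.1                                           # occurrence labels

        # Step 1: occurrence model.
        occ_model = LogisticRegression(max_iter=1000).fit(X, wet)
        # Step 2: amount model, fitted on wet days only.
        amt_model = LinearRegression().fit(X[wet], target[wet])

        # Combined estimate: predicted amount where occurrence is predicted, else zero.
        estimate = np.where(occ_model.predict(X),
                            np.clip(amt_model.predict(X), 0.0, None), 0.0)
        rmse = np.sqrt(np.mean((estimate - target) ** 2))
        print(f"RMSE of two-step estimate: {rmse:.2f} mm/day")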

  17. Water-activity of dehydrated guava slices sweeteners

    International Nuclear Information System (INIS)

    Ayub, M.; Zeb, A.; Ullah, J.

    2005-01-01

    A study was carried out to investigate the individual and combined effects of caloric sweeteners (sucrose, glucose and fructose) and non-caloric sweeteners (saccharine, cyclamate and aspartame), along with antioxidants (citric acid and ascorbic acid) and chemical preservatives (potassium metabisulphite and potassium sorbate), on the water activity (a_w) of dehydrated guava slices. Different dilutions of caloric sweeteners (20, 30, 40 and 50 degree brix (bx)) and non-caloric sweeteners (equivalent to sucrose sweetness) were used. Guava slices were osmotically dehydrated in these solutions and then dehydrated, initially at 0 and then at 60 degree C, to a final moisture content of 20-25%. Guava slices prepared with sucrose:glucose 7:3, potassium metabisulphite, ascorbic acid and citric acid produced the best quality products, which had minimum a_w and the best overall sensory characteristics. The analysis showed that the treatments and their various concentrations had a significant effect (p=0.05) on the a_w of dehydrated guava slices. (author)

  18. Fetal brain volumetry through MRI volumetric reconstruction and segmentation

    Science.gov (United States)

    Estroff, Judy A.; Barnewolt, Carol E.; Connolly, Susan A.; Warfield, Simon K.

    2013-01-01

    Purpose Fetal MRI volumetry is a useful technique, but it is limited by a dependency upon motion-free scans, tedious manual segmentation, and spatial inaccuracy due to thick-slice scans. An image processing pipeline that addresses these limitations was developed and tested. Materials and methods The principal sequences acquired in fetal MRI clinical practice are multiple orthogonal single-shot fast spin echo scans. State-of-the-art image processing techniques were used for inter-slice motion correction and super-resolution reconstruction of high-resolution volumetric images from these scans. The reconstructed volume images were processed with intensity non-uniformity correction and the fetal brain extracted by using supervised automated segmentation. Results Reconstruction, segmentation and volumetry of the fetal brains were performed for a cohort of twenty-five clinically acquired fetal MRI scans. Performance metrics for volume reconstruction, segmentation and volumetry were determined by comparing to manual tracings in five randomly chosen cases. Finally, analysis of the fetal brain and parenchymal volumes was performed based on the gestational age of the fetuses. Conclusion The image processing pipeline developed in this study enables volume rendering and accurate fetal brain volumetry by addressing the limitations of current volumetry techniques, which include dependency on motion-free scans, manual segmentation, and inaccurate thick-slice interpolation. PMID:20625848

  19. Spatial interpolation of point velocities in stream cross-section

    Directory of Open Access Journals (Sweden)

    Hasníková Eliška

    2015-03-01

    Full Text Available The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. Measurements with this instrument yield only a small set of point velocities. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of interpolation models.

  20. Interactive Slice of the CMS detector

    CERN Multimedia

    Davis, Siona Ruth

    2016-01-01

    This slice shows a colorful cross-section of the CMS detector with all parts of the detector labelled. Viewers are invited to click on buttons associated with five types of particles to see what happens when each type interacts with the sections of the detector. The five types of particles users can select to send through the slice are muons, electrons, neutral hadrons, charged hadrons and photons. Supplementary information on each type of particle is given. Useful for inclusion into general talks on CMS, etc.
    * Animated CMS "slice" for PowerPoint (Mac & PC): original version - 2004; updated version - July 2010.
    * Six slides required - the first is a set of buttons; the others are for each particle type (muon, electron, charged/neutral hadron, photon). Recommend putting slide 1 anywhere in your presentation and the rest at the end.

  1. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
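
    A minimal sketch of the kernel idea behind GIE is given below; it is not the published implementation. It assumes occurrence records are available as planar coordinates per species, takes the centroid of each species' records, derives an area of influence from the distance to the farthest occurrence, and accumulates a Gaussian kernel per species on an evaluation grid; overlapping kernels then highlight candidate areas of endemism.

        import numpy as np

        def endemism_surface(species_records, grid_x, grid_y):
            """Accumulate species kernels on a grid (illustrative sketch of GIE).

            species_records : dict mapping species name -> (n, 2) array of x/y points
            grid_x, grid_y  : 1-D coordinate vectors defining the evaluation grid
            """
            gx, gy = np.meshgrid(grid_x, grid_y)
            surface = np.zeros_like(gx, dtype=float)
            for coords in species_records.values():
                centroid = coords.mean(axis=0)
                # Area of influence: centroid-to-farthest-occurrence distance.
                radius = np.linalg.norm(coords - centroid, axis=1).max()
                if radius == 0.0:
                    continue  # single-point species contribute no spread here
                d2 = (gx - centroid[0]) ** 2 + (gy - centroid[1]) ** 2
                surface += np.exp(-d2 / (2.0 * (radius / 2.0) ** 2))
            return surface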

  2. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The new approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  3. Development of a bread slicing machine from locally sourced ...

    African Journals Online (AJOL)

    This paper presents the development of a bread slicing machine, a mechanical device used for slicing bread in place of the crude, cumbersome and unhygienic method of manual slicing. In an attempt to facilitate the final processing of bread, which is a common daily food requirement of most Nigerians ...

  4. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal ... interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical ...

  5. Multi-dimensional cubic interpolation for ICF hydrodynamics simulation

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Yabe, Takashi.

    1991-04-01

    A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) method is greatly improved by assuming continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. In order to evaluate the new method, Zalesak's example is tested, and good results are successfully obtained. (author)

  6. Geometry Processing of Conventionally Produced Mouse Brain Slice Images.

    Science.gov (United States)

    Agarwal, Nitin; Xu, Xiangmin; Gopi, M

    2018-04-21

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one application we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge the presented work is the first that automatically registers both clean as well as highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience as it provides tools to neuroanatomists for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Performance of an Interpolated Stochastic Weather Generator in Czechia and Nebraska

    Science.gov (United States)

    Dubrovsky, M.; Trnka, M.; Hayes, M. J.; Svoboda, M. D.; Semeradova, D.; Metelka, L.; Hlavinka, P.

    2008-12-01

    Met&Roll is a WGEN-like parametric four-variate daily weather generator (WG), with an optional extension allowing the user to generate additional variables (i.e. wind and water vapor pressure). It is designed to produce synthetic weather series representing present and/or future climate conditions to be used as an input into various models (e.g. crop growth and rainfall runoff models). The present contribution will summarize recent experiments, in which we tested the performance of the interpolated WG, with the aim to examine whether the WG may be used to produce synthetic weather series even for sites having no meteorological observations. The experiments being discussed include: (1) the comparison of various interpolation methods where the performance of the candidate methods is compared in terms of the accuracy of the interpolation for selected WG parameters; (2) assessing the ability of the interpolated WG in the territories of Czechia and Nebraska to reproduce extreme temperature and precipitation characteristics; (3) indirect validation of the interpolated WG in terms of the modeled crop yields simulated by STICS crop growth model (in Czechia); and (4) indirect validation of interpolated WG in terms of soil climate regime characteristics simulated by the SoilClim model (Czechia and Nebraska). The experiments are based on observed daily weather series from two regions: Czechia (area = 78864 km2, 125 stations available) and Nebraska (area = 200520 km2, 28 stations available). Even though Nebraska exhibits a much lower density of stations, this is offset by the state's relatively flat topography, which is an advantage in using the interpolated WG. Acknowledgements: The present study is supported by the AMVIS-KONTAKT project (ME 844) and the GAAV Grant Agency (project IAA300420806).

  8. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    Directory of Open Access Journals (Sweden)

    Lixin Li

    2014-09-01

    Full Text Available Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuously changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate
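
    The "extension approach" with a k-d tree can be sketched as follows; the sketch assumes SciPy, treats time as a third coordinate rescaled by a user-chosen factor, and performs ordinary IDW over the k nearest space-time neighbours. Parameter names and defaults are illustrative, not taken from the paper.

        import numpy as np
        from scipy.spatial import cKDTree

        def idw_spacetime(obs_xyt, obs_values, query_xyt, time_scale=1.0, power=2, k=12):
            """IDW in the extended space-time domain (illustrative sketch).

            obs_xyt, query_xyt : (n, 3) arrays of (x, y, t); the time axis is
            multiplied by `time_scale` so that one time unit is comparable to
            one spatial distance unit before neighbour search and weighting.
            """
            scale = np.array([1.0, 1.0, time_scale])
            tree = cKDTree(obs_xyt * scale)
            dist, idx = tree.query(query_xyt * scale, k=k)
            dist = np.maximum(dist, 1e-12)            # guard against zero distance
            w = 1.0 / dist ** power
            return (w * obs_values[idx]).sum(axis=1) / w.sum(axis=1)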

  9. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    Science.gov (United States)

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation

  10. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    Science.gov (United States)

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation

  11. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Full Text Available Space-time interpolation is widely used to estimate missing or unobserved values in a dataset integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time. It is still challenging to model the heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study, a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in the spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in the spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in the spatial and temporal dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984–2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods, namely spatio-temporal kriging, spatio-temporal inverse distance weighting, and a point estimation model of biased hospitals-based area disease estimation.
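
    As a loose illustration of the spatial/temporal combination (and not the paper's heterogeneous-covariance estimator), the sketch below fills one missing value by blending an IDW estimate from neighbouring stations at the same time step with a temporal estimate interpolated along the station's own series; the blending weight alpha stands in for the correlation-derived weights used in the paper.

        import numpy as np

        def hybrid_fill(series, t, neighbour_series, neighbour_dists, alpha=0.5):
            """Blend spatial and temporal estimates of a missing value (sketch).

            series           : 1-D array of the target station over time (NaN = missing)
            neighbour_series : (m, T) array of neighbouring station series
            neighbour_dists  : (m,) distances to those stations
            """
            # Spatial estimate: inverse-distance weighting at the same time step.
            w = 1.0 / np.maximum(np.asarray(neighbour_dists), 1e-9)
            vals = neighbour_series[:, t]
            ok = ~np.isnan(vals)
            spatial = np.sum(w[ok] * vals[ok]) / np.sum(w[ok])

            # Temporal estimate: linear interpolation along the station's own series.
            valid = ~np.isnan(series)
            temporal = np.interp(t, np.flatnonzero(valid), series[valid])

            return alpha * spatial + (1.0 - alpha) * temporal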

  12. Can a polynomial interpolation improve on the Kaplan-Yorke dimension?

    International Nuclear Information System (INIS)

    Richter, Hendrik

    2008-01-01

    The Kaplan-Yorke dimension can be derived using a linear interpolation between an h-dimensional Lyapunov exponent λ^(h) > 0 and an (h+1)-dimensional Lyapunov exponent λ^(h+1) < 0. In this Letter, we use a polynomial interpolation to obtain generalized Lyapunov dimensions and study the relationships among them for higher-dimensional systems
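
    For reference, the linear interpolation referred to above yields the standard Kaplan-Yorke (Lyapunov) dimension. With Lyapunov exponents ordered as \lambda_1 \ge \lambda_2 \ge \dots and j the largest index for which the cumulative sum is non-negative, it reads (this is the textbook formula that the Letter generalizes by replacing the linear interpolant with a polynomial one):

        D_{KY} = j + \frac{\sum_{i=1}^{j} \lambda_i}{|\lambda_{j+1}|}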

  13. Interpolation of polytopic control Lyapunov functions for discrete–time linear systems

    NARCIS (Netherlands)

    Nguyen, T.T.; Lazar, M.; Spinu, V.; Boje, E.; Xia, X.

    2014-01-01

    This paper proposes a method for interpolating two (or more) polytopic control Lyapunov functions (CLFs) for discrete--time linear systems subject to polytopic constraints, thereby combining different control objectives. The corresponding interpolated CLF is used for synthesis of a stabilizing

  14. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13] generates the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added during the first steps of the iterations in conventional fractal decoding; hence the constant parameter in the interpolation decoding method must be set to a small value in order to achieve a better progressive decoding. However, it then takes an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal

  15. Interpolant Tree Automata and their Application in Horn Clause Verification

    Directory of Open Access Journals (Sweden)

    Bishoksan Kafle

    2016-07-01

    Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  16. Design of interpolation functions for subpixel-accuracy stereo-vision systems.

    Science.gov (United States)

    Haller, Istvan; Nedevschi, Sergiu

    2012-02-01

    Traditionally, subpixel interpolation in stereo-vision systems was designed for the block-matching algorithm. During the evaluation of different interpolation strategies, a strong correlation was observed between the type of stereo algorithm and the subpixel accuracy of the different solutions. Subpixel interpolation should be adapted to each stereo algorithm to achieve maximum accuracy. In consequence, it is more important to propose methodologies for interpolation function generation than specific function shapes. We propose two such methodologies based on data generated by the stereo algorithms. The first proposal uses a histogram to model the environment and applies histogram equalization to an existing solution, adapting it to the data. The second proposal employs synthetic images of a known environment and applies function fitting to the resulting data. The resulting function matches the algorithm and the data as well as possible. An extensive evaluation set is used to validate the findings. Both real and synthetic test cases were employed in different scenarios. The test results are consistent and show significant improvements compared with traditional solutions. © 2011 IEEE
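
    For context, the traditional block-matching baseline the paper sets out to improve is a parabola fitted through the cost minimum and its two neighbours; the sketch below (standard textbook formula, not the paper's learned interpolation functions) shows that refinement step.

        def parabolic_subpixel(cost, d):
            """Refine an integer disparity d using a parabola through cost[d-1..d+1]."""
            c0, c1, c2 = cost[d - 1], cost[d], cost[d + 1]
            denom = c0 - 2.0 * c1 + c2
            if denom == 0:
                return float(d)          # flat neighbourhood: keep integer disparity
            return d + 0.5 * (c0 - c2) / denom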

  17. A temporal interpolation approach for dynamic reconstruction in perfusion CT

    International Nuclear Information System (INIS)

    Montes, Pau; Lauritsch, Guenter

    2007-01-01

    This article presents a dynamic CT reconstruction algorithm for objects with a time-dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain reconstruction quality than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slowly rotating scanners for perfusion imaging purposes.
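
    The idea of treating repeated measurements of one view as samples of a continuous signal can be sketched with a spline in time; the code below assumes SciPy and fits a cubic spline (the paper uses polynomial splines more generally) across rotations for a single view angle, so that the projection can be evaluated at any reconstruction instant.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def projection_at_time(sample_times, projections, t_reco):
            """Temporal interpolation of one repeatedly measured view (sketch).

            sample_times : (n_rot,) acquisition times of this view angle
            projections  : (n_rot, n_det) projection values at those times
            t_reco       : time at which the view is needed for reconstruction
            """
            spline = CubicSpline(np.asarray(sample_times), np.asarray(projections), axis=0)
            return spline(t_reco)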

  18. The topology of large-scale structure. VI - Slices of the universe

    Science.gov (United States)

    Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.

    1992-03-01

    Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cloud dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of observed slices.

  19. The topology of large-scale structure. VI - Slices of the universe

    Science.gov (United States)

    Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.

    1992-01-01

    Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cloud dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology like those of observed slices.

  20. Interpolant tree automata and their application in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2016-01-01

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. ... Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  1. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

    This report evaluates progress toward testing the feasibility of, and initiating the design of, a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed.

  2. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2014-10-01

    Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematical equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI satellite images using data interpolating empirical orthogonal functions (DINEOF with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.

  3. Evaluation of the relative biological effectiveness of carbon ion beams in the cerebellum using the rat organotypic slice culture system

    International Nuclear Information System (INIS)

    Yoshida, Yukari; Katoh, Hiroyuki; Nakano, Takashi; Suzuki, Yoshiyuki; Al-Jahdari, Wael S.; Shirai, Katsuyuki; Hamada, Nobuyuki; Funayama, Tomoo; Sakashita, Tetsuya; Kobayashi, Yasuhiko

    2012-01-01

    To clarify the relative biological effectiveness (RBE) values of carbon ion (C) beams in normal brain tissues, a rat organotypic slice culture system was used. The cerebellum was dissected from 10-day-old Wistar rats, cut parasagittally into approximately 600-μm-thick slices and cultivated using a membrane-based culture system with a liquid-air interface. Slices were irradiated with 140 kV X-rays and 18.3 MeV/amu C-beams (linear energy transfer=108 keV/μm). After irradiation, the slices were evaluated histopathologically using hematoxylin and eosin staining, and apoptosis was quantified using the TdT-mediated dUTP-biotin nick-end labeling (TUNEL) assay. Disorganization of the external granule cell layer (EGL) and apoptosis of the external granule cells (EGCs) were induced within 24 h after exposure to doses of more than 5 Gy from C-beams and X-rays. In the early postnatal cerebellum, morphological changes following exposure to C-beams were similar to those following exposure to X-rays. The RBE values of C-beams based on the EGL disorganization and EGC TUNEL index endpoints ranged from 1.4 to 1.5. This system represents a useful model for assaying the biological effects of radiation on the brain, especially physiological and time-dependent phenomena. (author)

  4. Immature rat brain slices exposed to oxygen-glucose deprivation as an in vitro model of neonatal hypoxic-ischemic encephalopathy.

    Science.gov (United States)

    Fernández-López, David; Martínez-Orgado, José; Casanova, Ignacio; Bonet, Bartolomé; Leza, Juan Carlos; Lorenzo, Pedro; Moro, Maria Angeles; Lizasoain, Ignacio

    2005-06-30

    To analyze whether exposure of immature rat brain slices to oxygen-glucose deprivation (OGD) might reproduce the main pathophysiologic events leading to neuronal death in neonatal hypoxic-ischemic encephalopathy (NHIE), 500 μm-thick brain slices were obtained from 7-day-old Wistar rats and incubated in oxygenated physiological solution. In the OGD group, oxygen and glucose were removed from the medium for 10-30 min (n = 25); then, slices were re-incubated in normal medium. In the control group the medium composition remained unchanged (CG, n = 30). Medium samples were obtained every 30 min for 3 h. To analyze neuronal damage, slices were stained with Nissl and the CA1 area of the hippocampus and cortex were observed under microscopy. In addition, neuronal death was quantified as LDH released to the medium, determined by spectrophotometry. Additionally, medium glutamate (Glu) levels were determined by HPLC and those of TNFalpha by ELISA, whereas inducible nitric oxide synthase expression was determined by Western blot performed on slice homogenates. The optimal OGD time was established at 20 min. After OGD, a significant decrease in the number of neurones in the hippocampus and cortex was observed. LDH release was maximal at 30 min, when it was five-fold greater than in the CG. Furthermore, medium Glu concentrations were 200 times greater than CG levels at the end of the OGD period. A linear relationship between Glu and LDH release was demonstrated. Finally, 3 h after OGD a significant induction of iNOS as well as an increase in TNFalpha release were observed. In conclusion, OGD appears to be a feasible and reproducible in vitro model, leading to neuronal damage which is pathophysiologically similar to that found in NHIE.

  5. Dynamic Stability Analysis Using High-Order Interpolation

    Directory of Open Access Journals (Sweden)

    Juarez-Toledo C.

    2012-10-01

    Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
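
    The Newton divided-difference interpolation underlying the high-order technique can be sketched as follows (standard algorithm, independent of the power-system application; function names are illustrative).

        import numpy as np

        def divided_differences(x, y):
            """Return the coefficients of the Newton form of the interpolant."""
            x = np.asarray(x, dtype=float)
            coef = np.asarray(y, dtype=float).copy()
            for j in range(1, len(x)):
                coef[j:] = (coef[j:] - coef[j - 1:-1]) / (x[j:] - x[:-j])
            return coef

        def newton_eval(coef, x_nodes, t):
            """Evaluate the Newton-form polynomial at t with a Horner-like scheme."""
            result = coef[-1]
            for k in range(len(coef) - 2, -1, -1):
                result = result * (t - x_nodes[k]) + coef[k]
            return result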

  6. Digital x-ray tomosynthesis with interpolated projection data for thin slab objects

    Science.gov (United States)

    Ha, S.; Yun, J.; Kim, H. K.

    2017-11-01

    In relation to thin slab-object inspection, we propose a digital tomosynthesis reconstruction that uses a smaller number of measured projections in combination with additional virtual projections, which are produced by interpolating the measured projections. Hence we can reconstruct tomographic images with fewer few-view artifacts. The projection interpolation assumes that variations in cone-beam ray path-lengths through an object are negligible and that the object is rigid. The interpolation is performed in the projection-space domain. Pixel values in an interpolated projection are the weighted sum of pixel values of the measured projections, with weights determined by their projection angles. Simulation experiments show that the proposed method can enhance the contrast-to-noise performance in reconstructed images while sacrificing some spatial resolving power.

  7. Wavelet-Smoothed Interpolation of Masked Scientific Data for JPEG 2000 Compression

    Energy Technology Data Exchange (ETDEWEB)

    Brislawn, Christopher M. [Los Alamos National Laboratory

    2012-08-13

    How should we manage scientific data with 'holes'? Some applications, like JPEG 2000, expect logically rectangular data, but some sources, like the Parallel Ocean Program (POP), generate data that isn't defined on certain subsets. We refer to grid points that lack well-defined, scientifically meaningful sample values as 'masked' samples. Wavelet-smoothing is a highly scalable interpolation scheme for regions with complex boundaries on logically rectangular grids. Computation is based on forward/inverse discrete wavelet transforms, so runtime complexity and memory scale linearly with respect to sample count. Efficient state-of-the-art minimal realizations yield small constants (O(10)) for arithmetic complexity scaling, and in-situ implementation techniques make optimal use of memory. Implementation in two dimensions using tensor product filter banks is straightforward and should generalize routinely to higher dimensions. No hand-tuning is required when the interpolation mask changes, making the method attractive for problems with time-varying masks. It is well-suited for interpolating undefined samples prior to JPEG 2000 encoding. The method outperforms global mean interpolation, as judged by both SNR rate-distortion performance and low-rate artifact mitigation, for data distributions whose histograms do not take the form of sharply peaked, symmetric, unimodal probability density functions. These performance advantages can hold even for data whose distribution differs only moderately from the peaked unimodal case, as demonstrated by POP salinity data. The interpolation method is very general and is not tied to any particular class of applications; it could be used for more generic smooth interpolation.

  8. Interpolation of rational matrix functions

    CERN Document Server

    Ball, Joseph A; Rodman, Leiba

    1990-01-01

    This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...

  9. Okounkov's BC-Type Interpolation Macdonald Polynomials and Their q=1 Limit

    NARCIS (Netherlands)

    Koornwinder, T.H.

    2015-01-01

    This paper surveys eight classes of polynomials associated with A-type and BC-type root systems: Jack, Jacobi, Macdonald and Koornwinder polynomials and interpolation (or shifted) Jack and Macdonald polynomials and their BC-type extensions. Among these the BC-type interpolation Jack polynomials were

  10. Tumor tissue slice cultures as a platform for analyzing tissue-penetration and biological activities of nanoparticles.

    Science.gov (United States)

    Merz, Lea; Höbel, Sabrina; Kallendrusch, Sonja; Ewe, Alexander; Bechmann, Ingo; Franke, Heike; Merz, Felicitas; Aigner, Achim

    2017-03-01

    The success of therapeutic nanoparticles depends, among others, on their ability to penetrate a tissue for actually reaching the target cells, and their efficient cellular uptake in the context of intact tissue and stroma. Various nanoparticle modifications have been implemented for altering physicochemical and biological properties. Their analysis, however, so far mainly relies on cell culture experiments which only poorly reflect the in vivo situation, or is based on in vivo experiments that are often complicated by whole-body pharmacokinetics and are rather tedious especially when analyzing larger nanoparticle sets. For the more precise analysis of nanoparticle properties at their desired site of action, efficient ex vivo systems closely mimicking in vivo tissue properties are needed. In this paper, we describe the setup of organotypic tumor tissue slice cultures for the analysis of tissue-penetrating properties and biological activities of nanoparticles. As a model system, we employ 350 μm-thick slice cultures from different tumor xenograft tissues, and analyze modified or non-modified polyethylenimine (PEI) complexes as well as their lipopolyplex derivatives for siRNA delivery. The described conditions for tissue slice preparation and culture ensure excellent tissue preservation for at least 14 days, thus allowing for prolonged experimentation and analysis. When using fluorescently labeled siRNA for complex visualization, fluorescence microscopy of cryo-sectioned tissue slices reveals different degrees of nanoparticle tissue penetration, dependent on their surface charge. More importantly, the determination of siRNA-mediated knockdown efficacies of an endogenous target gene, the oncogenic survival factor Survivin, reveals the possibility to accurately assess biological nanoparticle activities in situ, i.e. in living cells in their original environment. Taken together, we establish tumor (xenograft) tissue slices for the accurate and facile ex vivo assessment of

  11. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    Science.gov (United States)

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
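
    The kind of comparison reported above is usually driven by cross-validation; the sketch below (illustrative only, not the study's exact protocol) scores IDW powers by leave-one-out RMSE over the sampled soil points.

        import numpy as np

        def idw(xy_known, z_known, xy_query, power):
            d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
            w = 1.0 / np.maximum(d, 1e-12) ** power
            return (w @ z_known) / w.sum(axis=1)

        def loo_rmse(xy, z, power):
            """Leave-one-out RMSE of IDW for one choice of power (illustrative)."""
            errors = []
            for i in range(len(z)):
                mask = np.arange(len(z)) != i
                pred = idw(xy[mask], z[mask], xy[i:i + 1], power)[0]
                errors.append(pred - z[i])
            return float(np.sqrt(np.mean(np.square(errors))))

        # e.g. best_power = min((1, 2, 3), key=lambda p: loo_rmse(xy, z, p))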

  12. Image interpolation via graph-based Bayesian label propagation.

    Science.gov (United States)

    Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

    2014-03-01

    In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the interpolation problem then becomes one of effectively propagating the label information from known points to unknown ones. This process can be posed as a Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term will be minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function to combine together the global loss of the locally linear regression, the square error of the prediction bias on the available LR samples, and the manifold regularization term. It can be solved with a closed-form solution as a convex optimization problem. Experimental results demonstrate that the proposed method achieves competitive performance with the state-of-the-art image interpolation algorithms.

  13. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.

  14. Improvement of image quality using interpolated projection data estimation method in SPECT

    International Nuclear Information System (INIS)

    Takaki, Akihiro; Soma, Tsutomu; Murase, Kenya; Kojima, Akihiro; Asao, Kimie; Kamada, Shinya; Matsumoto, Masanori

    2009-01-01

    General data acquisition for single photon emission computed tomography (SPECT) is performed in 90 or 60 directions, with a coarse pitch of approximately 4-6 deg over a rotation of 360 deg or 180 deg, using a gamma camera. No data between adjacent projections are sampled under these circumstances. The aim of the study was to develop a method to improve SPECT image quality by generating the missing projection data through interpolation of data obtained with a coarse pitch such as 6 deg. The projection data set at each individual degree over 360 directions was generated by a weighted average interpolation method from the projection data acquired with a coarse sampling angle (interpolated projection data estimation processing method, IPDE method). The IPDE method was applied to numerical digital phantom data, actual phantom data and clinical brain data with Tc-99m ethyl cysteinate dimer (ECD). All SPECT images were reconstructed by the filtered back-projection method and compared with the original SPECT images. The results confirmed that streak artifacts decreased by apparently increasing the sampling number in SPECT after interpolation, and the signal-to-noise (S/N) ratio in terms of the root mean square uncertainty value also improved. Furthermore, the normalized mean square error values, compared with standard images, remained similar after interpolation. Moreover, the contrast and concentration ratios increased after interpolation. These results indicate that effective improvement of image quality can be expected with interpolation. Thus, image quality and the ability to depict images can be improved while maintaining the present acquisition time. In addition, this can be achieved more effectively than at present even if the acquisition time is reduced. (author)
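
    At its simplest, the weighted-average interpolation of missing views can be sketched as a linear blend of the two measured projections that bracket the target angle; this is an illustration of the general idea, not necessarily the exact weighting used in the IPDE method.

        import numpy as np

        def interpolate_projection(measured, step_deg, target_deg):
            """Estimate the projection at target_deg from coarsely sampled views.

            measured : (n_views, n_bins) projections acquired every `step_deg` degrees
            """
            pos = (target_deg % 360.0) / step_deg
            lo = int(np.floor(pos)) % measured.shape[0]
            hi = (lo + 1) % measured.shape[0]
            frac = pos - np.floor(pos)
            return (1.0 - frac) * measured[lo] + frac * measured[hi]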

  15. Enhancement of low sampling frequency recordings for ECG biometric matching using interpolation.

    Science.gov (United States)

    Sidek, Khairul Azami; Khalil, Ibrahim

    2013-01-01

    Electrocardiogram (ECG) based biometric matching suffers from high misclassification error with lower sampling frequency data. This situation may lead to an unreliable and vulnerable identity authentication process in high security applications. In this paper, quality enhancement techniques for ECG data with low sampling frequency are proposed for person identification, based on piecewise cubic Hermite interpolation (PCHIP) and piecewise cubic spline interpolation (SPLINE). A total of 70 ECG recordings from 4 different public ECG databases with 2 different sampling frequencies were used for development and performance comparison purposes. An analytical method was used for feature extraction. The ECG recordings were segmented into two parts: the enrolment and recognition datasets. Three biometric matching methods, namely Cross Correlation (CC), Percent Root-Mean-Square Deviation (PRD) and Wavelet Distance Measurement (WDM), were used for performance evaluation before and after applying the interpolation techniques. Results of the experiments suggest that biometric matching with interpolated ECG data on average achieved higher matching percentages, by up to 4% for CC, 3% for PRD and 94% for WDM, compared with using ECG recordings at the lower sampling frequency. Moreover, increasing the sample size from 56 to 70 subjects improved the results of the experiment by 4% for CC, 14.6% for PRD and 0.3% for WDM. Furthermore, higher classification accuracy, of up to 99.1% for PCHIP and 99.2% for SPLINE with interpolated ECG data compared with up to 97.2% without interpolation, supports the study's claim that applying interpolation techniques enhances the quality of the ECG data. Crown Copyright © 2012. Published by Elsevier Ireland Ltd. All rights reserved.
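
    Upsampling a low-rate recording with the two interpolators named above is straightforward with SciPy; the sketch below is illustrative (segment lengths, rates and names are assumptions, not taken from the paper).

        import numpy as np
        from scipy.interpolate import PchipInterpolator, CubicSpline

        def upsample_ecg(signal, fs_in, fs_out, method="pchip"):
            """Resample an ECG segment from fs_in to fs_out Hz by interpolation."""
            t_in = np.arange(len(signal)) / fs_in
            t_out = np.arange(0.0, t_in[-1], 1.0 / fs_out)
            interp = PchipInterpolator(t_in, signal) if method == "pchip" \
                else CubicSpline(t_in, signal)
            return interp(t_out)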

  16. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    Science.gov (United States)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients ( R 2) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R 2, and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83 %. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95 % of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77 %. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58 % of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperatures variables, as well as relative humidity and sunshine duration based
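
    A minimal two-dimensional illustration of thin-plate-spline interpolation of station values is given below using SciPy's RBFInterpolator; the actual study interpolates daily values over China, typically with elevation as an additional predictor, so this is only a sketch of the core operation.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        def tps_grid(station_xy, station_values, grid_x, grid_y, smoothing=1e-3):
            """Interpolate one day's station values to a grid with a thin-plate spline."""
            gx, gy = np.meshgrid(grid_x, grid_y)
            targets = np.column_stack([gx.ravel(), gy.ravel()])
            tps = RBFInterpolator(station_xy, station_values,
                                  kernel="thin_plate_spline", smoothing=smoothing)
            return tps(targets).reshape(gx.shape)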

  17. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, or cubic convolution interpolation, and can provide higher imaging quality using significantly fewer measurement positions or scanning times.

  18. Introduction to bit slices and microprogramming

    International Nuclear Information System (INIS)

    Van Dam, A.

    1981-01-01

    Bit-slice logic blocks are fourth-generation LSI components which are natural extensions of traditional multiplexers, registers, decoders, counters, ALUs, etc. Their functionality is controlled by microprogramming, typically to implement CPUs and peripheral controllers where both speed and easy programmability are required for flexibility, ease of implementation and debugging, etc. Processors built from bit-slice logic give the designer an alternative for approaching the programmability of traditional fixed-instruction-set microprocessors with a speed closer to that of hardwired random logic. (orig.)

  19. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    Energy Technology Data Exchange (ETDEWEB)

    Miranda-Quintana, Ramón Alain [Laboratory of Computational and Theoretical Chemistry, Faculty of Chemistry, University of Havana, Havana (Cuba); Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada); Ayers, Paul W. [Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1 (Canada)

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.

  20. Gravitational collapse of charged dust shell and maximal slicing condition

    International Nuclear Information System (INIS)

    Maeda, Keiichi

    1980-01-01

    The maximal slicing condition is qualitatively a good time-coordinate condition for following gravitational collapse in numerical calculations. The analytic solution of gravitational collapse under the maximal slicing condition is given for a spherical charged dust shell, and the behavior of the time slices under this coordinate condition is investigated. It is concluded that under the maximal slicing condition the gravitational collapse can be followed until the radius of the shell decreases to about 0.7 x (the radius of the event horizon). (author)

  1. Preservation of low slice emittance in bunch compressors

    Directory of Open Access Journals (Sweden)

    S. Bettoni

    2016-03-01

    Minimizing the dilution of the electron beam emittance is crucial for the performance of accelerators, in particular for free electron laser facilities, where the length of the machine and the efficiency of the lasing process depend on it. Measurements performed at the SwissFEL Injector Test Facility revealed an increase in slice emittance after compressing the bunch even for moderate compression factors. The phenomenon was experimentally studied by characterizing the dependence of the effect on beam and machine parameters relevant for the bunch compression. The reproduction of these measurements in simulation required the use of a 3D beam dynamics model along the bunch compressor that includes coherent synchrotron radiation. Our investigations identified transverse effects, such as coherent synchrotron radiation and transverse space charge as the sources of the observed emittance dilution, excluding other effects, such as chromatic effects on single slices or spurious dispersion. We also present studies, both experimental and simulation based, on the effect of the optics mismatch of the slices on the variation of the slice emittance along the bunch. After a corresponding reoptimization of the beam optics in the test facility we reached slice emittances below 200 nm for the central slices along the longitudinal dimension with a moderate increase up to 300 nm in the head and tail for a compression factor of 7.5 and a bunch charge of 200 pC, equivalent to a final current of 150 A, at about 230 MeV energy.

  2. Tumor Slice Culture: A New Avatar in Personalized Oncology

    Science.gov (United States)

    2017-09-01

    Annual report 2017 for award W81XWH-16-1-0149, "Tumor Slice Culture: A New Avatar in Personalized Oncology" (Principal Investigator: Raymond Yeung). Introduction: The goal of this research is to advance our

  3. A study of interpolation method in diagnosis of carpal tunnel syndrome

    Directory of Open Access Journals (Sweden)

    Alireza Ashraf

    2013-01-01

    Context: The low correlation between the patients' signs and symptoms of carpal tunnel syndrome (CTS) and the results of electrodiagnostic tests makes the diagnosis challenging in mild cases. Interpolation is a mathematical method for finding the median nerve conduction velocity (NCV) exactly at the carpal tunnel site. Therefore, it may be helpful in the diagnosis of CTS in patients with equivocal test results. Aim: The aim of this study is to evaluate the interpolation method as a CTS diagnostic test. Settings and Design: Patients with two or more clinical symptoms and signs of CTS in a median nerve territory with 3.5 ms ≤ distal median sensory latency <4.6 ms, from those who came to our electrodiagnostic clinics, and also age-matched healthy control subjects were recruited into the study. Materials and Methods: Median compound motor action potential and median sensory nerve action potential latencies were measured by a MEDLEC SYNERGY VIASIS electromyography system, and conduction velocities were calculated by both the routine method and the interpolation technique. Statistical Analysis Used: Chi-square and Student's t-test were used for comparing group differences. Cut-off points were calculated using the receiver operating characteristic curve. Results: A sensitivity of 88%, specificity of 67%, positive predictive value (PPV) and negative predictive value (NPV) of 70.8% and 84.7% were obtained for median motor NCV, and a sensitivity of 98.3%, specificity of 91.7%, PPV and NPV of 91.9% and 98.2% were obtained for median sensory NCV with the interpolation technique. Conclusions: The median motor interpolation method is a good technique, but it has less sensitivity and specificity than the median sensory interpolation method.

  4. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin; Weidendorfer, Josef

    2012-01-01

    bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation

  5. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  6. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    Science.gov (United States)

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphic user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.

  7. TU-EF-204-11: Impact of Using Multi-Slice Training Sets On the Performance of a Channelized Hotelling Observer in a Low-Contrast Detection Task in CT

    Energy Technology Data Exchange (ETDEWEB)

    Favazza, C; Yu, L; Leng, S; McCollough, C [Mayo Clinic, Rochester, MN (United States)

    2015-06-15

    Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model to reduce the number of repeated scans for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ∼5 cm). The rods were submerged in a 35 x 25 cm² iodine-doped, water-filled phantom, yielding -15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO's detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed to 100 images. Results: Artificially shifting the object's center position by as much as 3 pixels in any direction relative to the Gabor channel filters had an insignificant impact on object detectability. An inter-slice pixel correlation of >∼0.2 yielded positive bias in the model's performance. Incorporating multi-slice image data yielded a slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single-slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single-slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions. This project

  8. TU-EF-204-11: Impact of Using Multi-Slice Training Sets On the Performance of a Channelized Hotelling Observer in a Low-Contrast Detection Task in CT

    International Nuclear Information System (INIS)

    Favazza, C; Yu, L; Leng, S; McCollough, C

    2015-01-01

    Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model to reduce the number of repeated scans for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ∼5 cm). The rods were submerged in a 35 x 25 cm² iodine-doped, water-filled phantom, yielding -15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO's detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed to 100 images. Results: Artificially shifting the object's center position by as much as 3 pixels in any direction relative to the Gabor channel filters had an insignificant impact on object detectability. An inter-slice pixel correlation of >∼0.2 yielded positive bias in the model's performance. Incorporating multi-slice image data yielded a slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single-slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single-slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions. This project
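
    A bare-bones sketch of the channelized Hotelling observer detectability index used in the two records above; the channels, ROI size and image data are random placeholders rather than the study's Gabor channels and phantom scans.

        import numpy as np

        def cho_detectability(signal_rois, noise_rois, channels):
            """ROIs: (n_images, n_pixels); channels: (n_pixels, n_channels)."""
            vs = signal_rois @ channels             # channel outputs, signal present
            vn = noise_rois @ channels              # channel outputs, signal absent
            dv = vs.mean(axis=0) - vn.mean(axis=0)
            s = 0.5 * (np.cov(vs, rowvar=False) + np.cov(vn, rowvar=False))
            w = np.linalg.solve(s, dv)              # Hotelling template in channel space
            return float(np.sqrt(dv @ w))           # detectability index d'

        rng = np.random.default_rng(1)
        channels = rng.standard_normal((64 * 64, 10))   # stand-in for Gabor channel filters
        noise = rng.standard_normal((100, 64 * 64))
        signal = noise + 0.05                           # weak, uniform "disk" signal
        print(cho_detectability(signal, noise, channels))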

  9. Convergence acceleration of quasi-periodic and quasi-periodic-rational interpolations by polynomial corrections

    OpenAIRE

    Lusine Poghosyan

    2014-01-01

    The paper considers convergence acceleration of the quasi-periodic and the quasi-periodic-rational interpolations by application of polynomial corrections. We investigate convergence of the resultant quasi-periodic-polynomial and quasi-periodic-rational-polynomial interpolations and derive exact constants of the main terms of asymptotic errors in the regions away from the endpoints. Results of numerical experiments clarify behavior of the corresponding interpolations for moderate number of in...

  10. Importance of interpolation and coincidence errors in data fusion

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  11. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  12. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
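
    A conceptual sketch of the compression idea described above: a range-encoded image is stored at reduced resolution and later scaled back up by interpolation. The cubic-spline zoom from scipy.ndimage stands in for whatever kernel the paper actually uses, and the data are random placeholders.

        import numpy as np
        from scipy.ndimage import zoom

        range_image = np.random.rand(512, 512).astype(np.float32)   # placeholder encoded range data

        compressed = zoom(range_image, 0.5, order=3)    # store at half resolution
        restored = zoom(compressed, 2.0, order=3)       # interpolate back to full resolution

        err = np.abs(restored[:512, :512] - range_image).mean()
        print(compressed.shape, restored.shape, err)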

  13. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  14. Research on image reconstruction of DR/SSCT security inspection system

    International Nuclear Information System (INIS)

    Li Jian; Cong Peng

    2008-01-01

    On the basis of the DR (Digital Radiography)/CT security inspection system, a DR/SSCT (single slice spiral CT) security inspection system was developed. This spiral CT system overcomes the drawbacks of the original CT system. The research work included replacing the former data acquisition system with a new system that can acquire projection data from multiple slices, and devising the SSCT reconstruction algorithms. Simulation experiments and practical experiments were devised to compare several algorithms. An interpolation technique was applied to the detector data in order to improve the algorithms. In conclusion, the system uses an algorithm based on a weighted average of 360 degree LI (Linear Interpolation) and JH-HI (Jiang Hsieh-Half scan Interpolation). (authors)
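
    A hedged sketch of 360-degree linear interpolation (360LI) for single-slice spiral CT: the projection at each angle for a target slice position is interpolated between the two measurements of that angle taken one rotation apart. The geometry, number of views per rotation and data are illustrative assumptions, not the system described above.

        import numpy as np

        def li360(projections, z_positions, z0, views_per_rot=360):
            """projections: (n_views, n_detectors); z_positions: table position of each view."""
            n_views, n_det = projections.shape
            out = np.empty((views_per_rot, n_det))
            for k in range(views_per_rot):
                idx = np.arange(k, n_views, views_per_rot)   # same angle, successive rotations
                z = z_positions[idx]
                j = np.clip(np.searchsorted(z, z0), 1, len(idx) - 1)
                w = (z0 - z[j - 1]) / (z[j] - z[j - 1])      # linear weight between rotations
                out[k] = (1 - w) * projections[idx[j - 1]] + w * projections[idx[j]]
            return out

        proj = np.random.rand(360 * 8, 512)                  # 8 rotations of placeholder data
        zpos = np.linspace(0.0, 8 * 10.0, proj.shape[0])     # 10 mm table feed per rotation
        print(li360(proj, zpos, z0=35.0).shape)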

  15. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm is an improvement of our previous GPU-accelerated AIDW algorithm, achieved by adopting fast k-nearest neighbors (kNN) search. In AIDW, several nearest neighboring data points need to be found for each interpolated point to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of the stages of kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we perform five groups of experimental tests. The experimental results indicate: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
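
    A simplified CPU sketch of inverse distance weighting with a kNN neighbour search; SciPy's cKDTree stands in for the grid-based GPU search described above, and the fixed power parameter replaces the adaptive one used in AIDW.

        import numpy as np
        from scipy.spatial import cKDTree

        def idw(known_pts, known_vals, query_pts, k=8, power=2.0):
            dist, idx = cKDTree(known_pts).query(query_pts, k=k)
            dist = np.maximum(dist, 1e-12)                    # avoid division by zero
            w = 1.0 / dist**power
            return (w * known_vals[idx]).sum(axis=1) / w.sum(axis=1)

        rng = np.random.default_rng(2)
        pts = rng.random((1000, 2))
        vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])
        print(idw(pts, vals, rng.random((5, 2))))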

  16. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  17. Interpolation Filter Design for Hearing-Aid Audio Class-D Output Stage Application

    DEFF Research Database (Denmark)

    Pracný, Peter; Bruun, Erik; Llimos Muntal, Pere

    2012-01-01

    This paper deals with the design of a digital interpolation filter for a 3rd order multi-bit ΣΔ modulator with over-sampling ratio OSR = 64. The interpolation filter and the ΣΔ modulator are part of the back-end of an audio signal processing system in a hearing-aid application. The aim in this paper is to compare this design to designs presented in other state-of-the-art works ranging from hi-fi audio to hearing aids. By performing this comparison, trends and tradeoffs in interpolation filter design are identified and hearing-aid specifications are derived. The possibilities for hardware reduction in the interpolation filter are investigated. The proposed design simplifications presented here result in the least hardware demanding combination of oversampling ratio, number of stages and number of filter taps among a number of filters reported for audio applications.
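
    A rough sketch of the interpolation (oversampling) step that precedes a sigma-delta modulator: the audio signal is upsampled by the oversampling ratio through a lowpass FIR interpolation filter. The single-stage polyphase design and the sampling rate below are illustrative only; the paper is concerned with multi-stage designs and their hardware cost.

        import numpy as np
        from scipy.signal import resample_poly

        fs_in, osr = 16000, 64                        # assumed audio rate and oversampling ratio
        t = np.arange(0, 0.01, 1 / fs_in)
        x = 0.5 * np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone

        x_os = resample_poly(x, up=osr, down=1)       # polyphase FIR interpolation to fs_in * osr
        print(len(x), len(x_os))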

  18. Interpolated sagittal and coronal reconstruction of CT images in the screening of neck abnormalities

    International Nuclear Information System (INIS)

    Koga, Issei

    1983-01-01

    Reconstructed sagittal and coronal images were analyzed for their usefulness in clinical applications and to determine the correct use of reconstruction techniques. Reconstructed stereoscopic images can be formed by continuous or interrupted image reconstruction using interpolation. This study showed that scans of lesions less than 10 mm in diameter should be made continuously and reconstructed with the uninterrupted technique. However, 5 mm interrupted distances are acceptable for interpolated reconstruction except in cases of lesions less than 10 mm in diameter. Clinically, interpolated reconstruction is not adequate for semicircular lesions less than 10 mm. Blood vessels and linear lesions are good candidates for the application of interpolated reconstruction. Reconstruction of images using interrupted interpolation is therefore recommended for screening and for demonstrating correct stereoscopic information, except in cases of small lesions less than 10 mm in diameter. Results of this study underscore the fact that obscure information in transverse CT images should be routinely examined using interpolating reconstruction techniques if transverse images are not made continuously. Interpolated reconstruction may be helpful in obtaining stereoscopic information. (author)
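
    An illustrative sketch of forming a sagittal view from a stack of axial CT slices, interpolating along the slice direction so the reformatted plane has roughly isotropic spacing. The volume, slice spacing and pixel spacing are placeholder values, and linear interpolation is used simply as an example.

        import numpy as np
        from scipy.ndimage import zoom

        axial = np.random.rand(40, 256, 256)        # (slices, rows, cols), placeholder volume
        slice_spacing, pixel_spacing = 5.0, 1.0     # mm

        iso = zoom(axial, (slice_spacing / pixel_spacing, 1.0, 1.0), order=1)  # interpolate along z
        sagittal = iso[:, :, 128]                   # one sagittal plane (all z, all rows, fixed column)
        print(sagittal.shape)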

  19. A simple method for multiday imaging of slice cultures.

    Science.gov (United States)

    Seidl, Armin H; Rubel, Edwin W

    2010-01-01

    The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.

  20. Rhie-Chow interpolation in strong centrifugal fields

    Science.gov (United States)

    Bogovalov, S. V.; Tronin, I. V.

    2015-10-01

    Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10⁶ g) occurring in gas centrifuges.

  1. Album of the month: Interpol "Antics". Records from the Lasering store

    Index Scriptorium Estoniae

    2005-01-01

    About the records: "Interpol Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"

  2. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  3. Modelling and experimental validation of thin layer indirect solar drying of mango slices

    Energy Technology Data Exchange (ETDEWEB)

    Dissa, A.O.; Bathiebo, J.; Kam, S.; Koulidiati, J. [Laboratoire de Physique et de Chimie de l' Environnement (LPCE), Unite de Formation et de Recherche en Sciences Exactes et Appliquee (UFR/SEA), Universite de Ouagadougou, Avenue Charles de Gaulle, BP 7021 Kadiogo (Burkina Faso); Savadogo, P.W. [Laboratoire Sol Eau Plante, Institut de l' Environnement et de Recherches Agricoles, 01 BP 476, Ouagadougou (Burkina Faso); Desmorieux, H. [Laboratoire d' Automatisme et de Genie des Procedes (LAGEP), UCBL1-CNRS UMR 5007-CPE Lyon, Bat.308G, 43 bd du 11 Nov. 1918 Villeurbanne, Universite Claude Bernard Lyon1, Lyon (France)

    2009-04-15

    The thin layer solar drying of 8 mm thick mango slices was simulated and tested experimentally using a solar dryer designed and constructed in the laboratory. Under the meteorological conditions of the mango harvest period, the results showed that 3 'typical days' of drying were necessary to reach the range of preservation water contents. During these 3 days of solar drying, 50%, 40% and 5% of the unbound water were eliminated on the first, second and third day, respectively. The final water content obtained was about 16 ± 1.33% d.b. (13.79% w.b.). This final water content and the corresponding water activity (0.6 ± 0.02) were in accordance with previous work. The drying rates with correction for shrinkage and the critical water content were experimentally determined. The critical water content was close to 70% of the initial water content, and the drying rates were reduced to almost 6% of their maximum value at night. The thin layer drying model made it possible to simulate the solar drying kinetics of mango slices suitably, with a correlation coefficient of r² = 0.990. This study thus contributed to establishing the solar drying time of mango and the solar drying rate curves of this fruit. (author)

  4. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
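
    A minimal sketch of the matrix step underlying the tensor method above: a randomized interpolative decomposition that selects k representative columns via a Gaussian sketch and column-pivoted QR, then expresses the remaining columns in terms of them. The test matrix and parameters are arbitrary.

        import numpy as np
        from scipy.linalg import qr

        def randomized_id(A, k, oversample=10):
            m, n = A.shape
            G = np.random.default_rng(0).standard_normal((k + oversample, m))
            Y = G @ A                                           # randomized sketch of the row space
            _, _, piv = qr(Y, mode='economic', pivoting=True)   # column-pivoted QR on the sketch
            cols = piv[:k]
            P = np.linalg.lstsq(Y[:, cols], Y, rcond=None)[0]   # coefficients: A ≈ A[:, cols] @ P
            return cols, P

        A = np.outer(np.arange(1.0, 101.0), np.ones(50)) + 0.01 * np.random.rand(100, 50)
        cols, P = randomized_id(A, k=5)
        print(np.linalg.norm(A - A[:, cols] @ P) / np.linalg.norm(A))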

  5. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia, using data of year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable to be used as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable to estimate the temperature for the rest of the months.

  6. Scientific data interpolation with low dimensional manifold model

    Science.gov (United States)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  7. Scientific data interpolation with low dimensional manifold model

    International Nuclear Information System (INIS)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

    2017-01-01

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  8. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

  9. Endogenous 24S-hydroxycholesterol modulates NMDAR-mediated function in hippocampal slices.

    Science.gov (United States)

    Sun, Min-Yu; Izumi, Yukitoshi; Benz, Ann; Zorumski, Charles F; Mennerick, Steven

    2016-03-01

    N-methyl-D-aspartate receptors (NMDARs), a major subtype of glutamate receptors mediating excitatory transmission throughout the central nervous system (CNS), play critical roles in governing brain function and cognition. Because NMDAR dysfunction contributes to the etiology of neurological and psychiatric disorders including stroke and schizophrenia, NMDAR modulators are potential drug candidates. Our group recently demonstrated that the major brain cholesterol metabolite, 24S-hydroxycholesterol (24S-HC), positively modulates NMDARs when exogenously administered. Here, we studied whether endogenous 24S-HC regulates NMDAR activity in hippocampal slices. In CYP46A1(-/-) (knockout; KO) slices where endogenous 24S-HC is greatly reduced, NMDAR tone, measured as NMDAR-to-α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) excitatory postsynaptic current (EPSC) ratio, was reduced. This difference translated into more NMDAR-driven spiking in wild-type (WT) slices compared with KO slices. Application of SGE-301, a 24S-HC analog, had comparable potentiating effects on NMDAR EPSCs in both WT and KO slices, suggesting that endogenous 24S-HC does not saturate its NMDAR modulatory site in ex vivo slices. KO slices did not differ from WT slices in either spontaneous neurotransmission or in neuronal intrinsic excitability, and exhibited LTP indistinguishable from WT slices. However, KO slices exhibited higher resistance to persistent NMDAR-dependent depression of synaptic transmission induced by oxygen-glucose deprivation (OGD), an effect restored by SGE-301. Together, our results suggest that loss of positive NMDAR tone does not elicit compensatory changes in excitability or transmission, but it protects transmission against NMDAR-mediated dysfunction. We expect that manipulating this endogenous NMDAR modulator may offer new treatment strategies for neuropsychiatric dysfunction. Copyright © 2016 the American Physiological Society.

  10. A Hybrid Interpolation Method for Geometric Nonlinear Spatial Beam Elements with Explicit Nodal Force

    Directory of Open Access Journals (Sweden)

    Huiqing Fang

    2016-01-01

    Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometric nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline is used to calculate the nodal curvatures at the two ends. Then, the internal curvatures of the beam are interpolated with a second interpolation. At this point, C1 continuity is satisfied and the nodal strain measures can be consistently derived from the nodal displacement and rotation parameters. The explicit expression of the nodal force without integration, as a function of the global parameters, is found by using the hybrid interpolation. Furthermore, the proposed beam element can be degenerated into a linear beam element under the condition of small deformation. Objectivity of the strain measures and patch tests are also discussed. Finally, four numerical examples are discussed to prove the validity and effectiveness of the proposed beam element.

  11. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  12. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin

    2012-12-01

    The well-known power wall resulting in multi-cores requires special techniques for speeding up applications. In this sense, parallelization plays a crucial role. Besides standard serial optimizations, techniques such as input specialization can also bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed, for example, in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated in fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.

  13. Researches Regarding The Circular Interpolation Algorithms At CNC Laser Cutting Machines

    Science.gov (United States)

    Tîrnovean, Mircea Sorin

    2015-09-01

    This paper presents an integrated simulation approach for studying the circular interpolation regime of CNC laser cutting machines. The circular interpolation algorithm is studied, taking into consideration the numerical character of the system. A simulation diagram, which is able to generate the kinematic inputs for the feed drives of the CNC laser cutting machine, is also presented.
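
    A simple parametric sketch of circular interpolation in the spirit of the algorithm discussed above: intermediate set-points along an arc are generated from the programmed feed rate and interpolation period. All numeric values are arbitrary.

        import numpy as np

        def circular_interpolation(center, radius, theta0, theta1, feed, period):
            """Return the sequence of (x, y) set-points along the arc."""
            arc_len = abs(theta1 - theta0) * radius
            n_steps = max(1, int(np.ceil(arc_len / (feed * period))))
            theta = np.linspace(theta0, theta1, n_steps + 1)
            return np.column_stack([center[0] + radius * np.cos(theta),
                                    center[1] + radius * np.sin(theta)])

        pts = circular_interpolation(center=(0.0, 0.0), radius=20.0,
                                     theta0=0.0, theta1=np.pi / 2,
                                     feed=100.0, period=0.01)   # mm/s, s
        print(pts.shape, pts[0], pts[-1])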

  14. Relationship of radiation dose and spiral pitch for multi-slice CT system

    International Nuclear Information System (INIS)

    Song Shaojuan; Wang Wei; Liu Chuanya

    2006-01-01

    Objective: To study the relationship between radiation dose and spiral pitch for multi-slice CT systems. Methods: A 16 mm dose phantom, with a Solidose 300/400 pen-style ion chamber inserted into each of its five holes in turn, was scanned at different spiral pitches with a LightSpeed 16-slice CT and with Sensation 16-slice and 64-slice CT scanners, and the radiation dose was measured. Results: The CTDIvol values of axial and spiral scans for the three types of CT system are: (1) LightSpeed 16-slice CT: 28.9 (axial), 51.4 (pitch 0.562), 30.8 (pitch 0.938) and 16.5 (pitch 1.75); (2) Sensation 16-slice CT: 41.2 (axial), 40.3 (pitch 0.5), 41.5 (pitch 1) and 43.2 (pitch 1.5); (3) Sensation 64-slice CT: 41.2 (axial), 40.3 (pitch 0.5), 41.5 (pitch 1), 43.2 (pitch 1.5). Conclusions: For the LightSpeed 16-slice CT, the measured radiation dose decreased with increasing spiral pitch, and the image quality could be kept constant only by increasing the mAs. For the Sensation 16-slice and 64-slice CT systems, the measured radiation dose was identical for different pitches, and the image quality was identical because of the use of the mAs auto control technique. The mAs should therefore be adjusted in different ways, according to the type of CT system, when the pitch is changed in daily operation. (authors)
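
    The behaviour reported above is consistent with the standard relation CTDIvol = CTDIw / pitch: a scanner that keeps the tube mAs fixed sees the dose fall as pitch rises, while a scanner driven by an "effective mAs" (mAs divided by pitch) raises the tube current with pitch and keeps CTDIvol constant. The sketch below only illustrates that relation; its numbers are not the measured values from the study.

        def ctdi_vol(ctdi_w_per_100mas, mas, pitch, effective_mas_mode=False):
            if effective_mas_mode:          # tube mAs scaled up with pitch so mAs/pitch stays fixed
                mas = mas * pitch
            return ctdi_w_per_100mas * (mas / 100.0) / pitch

        for p in (0.5, 1.0, 1.5):
            print(p, ctdi_vol(10.0, 200, p), ctdi_vol(10.0, 200, p, effective_mas_mode=True))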

  15. An application of gain-scheduled control using state-space interpolation to hydroactive gas bearings

    DEFF Research Database (Denmark)

    Theisen, Lukas Roy Svane; Camino, Juan F.; Niemann, Hans Henrik

    2016-01-01

    with a gain-scheduling strategy using state-space interpolation, which avoids both the performance loss and the increase of controller order associated with the Youla parametrisation. The proposed state-space interpolation for gain-scheduling is applied to mass imbalance rejection for a controllable gas bearing scheduled in two parameters. Comparisons against the Youla-based scheduling demonstrate the superiority of the state-space interpolation.
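
    A bare-bones sketch of gain scheduling by state-space interpolation: controller matrices designed at two operating points are blended elementwise as the scheduling parameter moves between them (this presumes both controllers share a consistent state-space realization). The two controllers below are random placeholders, not the gas-bearing controllers from the paper.

        import numpy as np

        def interpolate_ss(ss1, ss2, alpha):
            """alpha in [0, 1]; each controller is a dict with keys 'A', 'B', 'C', 'D'."""
            return {k: (1 - alpha) * ss1[k] + alpha * ss2[k] for k in ('A', 'B', 'C', 'D')}

        rng = np.random.default_rng(3)
        K1 = {k: rng.standard_normal((2, 2)) for k in ('A', 'B', 'C', 'D')}
        K2 = {k: rng.standard_normal((2, 2)) for k in ('A', 'B', 'C', 'D')}
        print(interpolate_ss(K1, K2, alpha=0.3)['A'])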

  16. Using high-order methods on adaptively refined block-structured meshes - discretizations, interpolations, and filters.

    Energy Technology Data Exchange (ETDEWEB)

    Ray, Jaideep; Lefantzi, Sophia; Najm, Habib N.; Kennedy, Christopher A.

    2006-01-01

    Block-structured adaptively refined meshes (SAMR) strive for efficient resolution of partial differential equations (PDEs) solved on large computational domains by clustering mesh points only where required by large gradients. Previous work has indicated that fourth-order convergence can be achieved on such meshes by using a suitable combination of high-order discretizations, interpolations, and filters and can deliver significant computational savings over conventional second-order methods at engineering error tolerances. In this paper, we explore the interactions between the errors introduced by discretizations, interpolations and filters. We develop general expressions for high-order discretizations, interpolations, and filters, in multiple dimensions, using a Fourier approach, facilitating the high-order SAMR implementation. We derive a formulation for the necessary interpolation order for given discretization and derivative orders. We also illustrate this order relationship empirically using one and two-dimensional model problems on refined meshes. We study the observed increase in accuracy with increasing interpolation order. We also examine the empirically observed order of convergence, as the effective resolution of the mesh is increased by successively adding levels of refinement, with different orders of discretization, interpolation, or filtering.

  17. ERRORS MEASUREMENT OF INTERPOLATION METHODS FOR GEOID MODELS: STUDY CASE IN THE BRAZILIAN REGION

    Directory of Open Access Journals (Sweden)

    Daniel Arana

    Abstract: The geoid is an equipotential surface regarded as the altimetric reference for geodetic surveys, and it therefore has several practical applications for engineers. In recent decades the geodetic community has concentrated efforts on the development of highly accurate geoid models through modern techniques. These models are supplied as regular grids, from which users need to interpolate. Yet, little information is available regarding the most appropriate interpolation method for extracting values from the regular grid of a geoid model. The use of an interpolator that does not represent the geoid surface appropriately can impair the quality of the geoid undulations and consequently the height transformation. This work aims to quantify the magnitude of the error that comes from interpolating a regular mesh of a geoid model. The analysis consisted of performing a comparison between the interpolation of the MAPGEO2015 program and three interpolation methods: bilinear, cubic spline and radial basis function neural networks. As a result of the experiments, it was concluded that 2.5 cm of the 18 cm error of the MAPGEO2015 validation is caused by the use of interpolations in the 5'x5' grid.
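
    For reference, a plain bilinear interpolation in a regular geoid-undulation grid, one of the interpolants compared above. The 5' x 5' grid below is synthetic; values from MAPGEO2015 would be read from the model files instead.

        import numpy as np

        def bilinear(grid, lat0, lon0, dlat, dlon, lat, lon):
            """grid[i, j] holds the undulation at (lat0 + i*dlat, lon0 + j*dlon)."""
            fi, fj = (lat - lat0) / dlat, (lon - lon0) / dlon
            i, j = int(np.floor(fi)), int(np.floor(fj))
            ti, tj = fi - i, fj - j
            return ((1 - ti) * (1 - tj) * grid[i, j] + (1 - ti) * tj * grid[i, j + 1]
                    + ti * (1 - tj) * grid[i + 1, j] + ti * tj * grid[i + 1, j + 1])

        step = 5.0 / 60.0                                   # 5 arc-minutes in degrees
        lats = np.arange(-35.0, 5.0 + step, step)
        lons = np.arange(-75.0, -30.0 + step, step)
        grid = np.sin(np.deg2rad(lats))[:, None] * 10 + np.cos(np.deg2rad(lons))[None, :] * 5
        print(bilinear(grid, lats[0], lons[0], step, step, lat=-23.55, lon=-46.63))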

  18. Raman microscopy of freeze-dried mouse eyeball-slice in conjunction with the "in vivo cryotechnique".

    Science.gov (United States)

    Terada, Nobuo; Ohno, Nobuhiko; Saitoh, Sei; Fujii, Yasuhisa; Ohguro, Hiroshi; Ohno, Shinichi

    2007-07-01

    The wavelength of Raman-scattered light depends on the molecular composition of the substance. This is the first attempt to acquire Raman spectra of a mouse eyeball removed from a living mouse, in which the eyeball was preserved using the "in vivo cryotechnique" followed by freeze-drying. Eyeballs were cryofixed using a rapid freezing cryotechnique, and then sliced in a cryostat machine. The slices were sandwiched between glass slides, freeze-dried, and analyzed with confocal Raman microscopy. Important areas including various eyeball tissue layers were selected using bright-field microscopy, and then Raman spectra were obtained at 240 locations. Four typical patterns of Raman spectra were electronically mapped onto the specimen images obtained by bright-field microscopy. Tissue organization was confirmed by embedding the same eyeball slice used for the Raman spectra in epoxy resin, and thick sections were prepared with the inverted capsule method. Each Raman spectral pattern represents a different histological layer in the eyeball, which was mapped by comparing the images of toluidine blue staining and Raman mapping with different colors. In the choroid and pigment cell layer, the Raman spectrum had two peaks, corresponding to melanin. Some of the peaks of the Raman spectra obtained from the blood vessels in the sclera and from the photoreceptor layer were similar to those obtained from purified hemoglobin and rhodopsin proteins, respectively. Our experimental protocol can distinguish different tissue components with Raman microscopy; therefore, this method can be very useful for examining the distribution of biological structures and/or chemical components in rapidly frozen, freeze-dried tissue.

  19. Continuous Slice Functional Calculus in Quaternionic Hilbert Spaces

    Science.gov (United States)

    Ghiloni, Riccardo; Moretti, Valter; Perotti, Alessandro

    2013-04-01

    The aim of this work is to define a continuous functional calculus in quaternionic Hilbert spaces, starting from basic issues regarding the notion of spherical spectrum of a normal operator. As properties of the spherical spectrum suggest, the class of continuous functions to consider in this setting is the one of slice quaternionic functions. Slice functions generalize the concept of slice regular function, which comprises power series with quaternionic coefficients on one side and that can be seen as an effective generalization to quaternions of holomorphic functions of one complex variable. The notion of slice function allows to introduce suitable classes of real, complex and quaternionic C*-algebras and to define, on each of these C*-algebras, a functional calculus for quaternionic normal operators. In particular, we establish several versions of the spectral map theorem. Some of the results are proved also for unbounded operators. However, the mentioned continuous functional calculi are defined only for bounded normal operators. Some comments on the physical significance of our work are included.

  20. Reconstruction of reflectance data using an interpolation technique.

    Science.gov (United States)

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of the root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light source, in comparison with those obtained from the standard PCA technique.

  1. The modal surface interpolation method for damage localization

    Science.gov (United States)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If the latter circumstance fails to occur, for example when the structure is subjected to an unknown input(s) or if the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs in the case of noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes, rather than to all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing up the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes) is reported.
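
    A toy sketch of the interpolation-error idea behind the method: each mode-shape value is predicted from the remaining measurement points with a cubic spline, and a localized loss of smoothness (here a simulated drop at one sensor) inflates the error at that location. The mode shape and the damage model are invented for illustration.

        import numpy as np
        from scipy.interpolate import CubicSpline

        x = np.linspace(0.0, 1.0, 21)                  # sensor positions along the structure
        phi = np.sin(np.pi * x)                        # first mode shape of a simple beam
        phi_damaged = phi.copy()
        phi_damaged[12] *= 0.97                        # localized smoothness reduction near x = 0.6

        def interpolation_error(x, shape):
            err = np.zeros_like(shape)
            for i in range(1, len(x) - 1):             # skip the ends (no extrapolation)
                mask = np.arange(len(x)) != i
                err[i] = abs(CubicSpline(x[mask], shape[mask])(x[i]) - shape[i])
            return err

        feature = interpolation_error(x, phi_damaged) - interpolation_error(x, phi)
        print(int(np.argmax(feature)), x[int(np.argmax(feature))])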

  2. Computer-assisted detection of pulmonary nodules: evaluation of diagnostic performance using an expert knowledge-based detection system with variable reconstruction slice thickness settings

    International Nuclear Information System (INIS)

    Marten, Katharina; Grillhoesl, Andreas; Seyfarth, Tobias; Rummeny, Ernst J.; Engelke, Christoph; Obenauer, Silvia

    2005-01-01

    The purpose of this study was to evaluate the performance of a computer-assisted diagnostic (CAD) tool using various reconstruction slice thicknesses (RST). Image data of 20 patients undergoing multislice CT for pulmonary metastasis were reconstructed at 4.0, 2.0 and 0.75 mm RST and assessed by two blinded radiologists (R1 and R2) and CAD. Data were compared against an independent reference standard. Nodule subgroups (diameter >10, 4-10, <4 mm) were assessed separately. Statistical methods were ROC analysis and the Mann-Whitney U test. CAD was outperformed by readers at 4.0 mm (Az = 0.18, 0.62 and 0.69 for CAD, R1 and R2, respectively; P<0.05), comparable at 2.0 mm (Az = 0.57, 0.70 and 0.69 for CAD, R1 and R2, respectively), and superior using 0.75 mm RST (Az = 0.80, 0.70 and 0.70 and sensitivity = 0.74, 0.53 and 0.53 for CAD, R1 and R2, respectively; P<0.05). Reader performances were significantly enhanced by CAD (Az = 0.93 and 0.95 for R1 + CAD and R2 + CAD, respectively, P<0.05). The CAD advantage was best for nodules <10 mm (detection rates = 93.3, 89.9, 47.9 and 47.9% for R1 + CAD, R2 + CAD, R1 and R2, respectively). CAD using 0.75 mm RST outperformed radiologists in nodules below 10 mm in diameter and should be used to replace a second radiologist. CAD is not recommended for 4.0 mm RST. (orig.)

  3. A Study on the Improvement of Digital Periapical Images using Image Interpolation Methods

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    Image resampling is of particular interest in digital radiology. When resampling an image to a new set of coordinates, blocking artifacts and image changes appear. Interpolation algorithms have been used to enhance image quality. Resampling is used to increase the number of points in an image to improve its appearance for display. The process of interpolation is fitting a continuous function to the discrete points in the digital image. The purpose of this study was to determine the effects of seven interpolation functions when resampling digital periapical images. The images were obtained with Digora, CDR and scanning of Ektaspeed Plus periapical radiograms of a dry skull and a human subject. The subjects were exposed to an intraoral X-ray machine at 60 kVp and 70 kVp, with exposure times varying between 0.01 and 0.50 second. To determine which interpolation method provides the better image, seven functions were compared: (1) nearest neighbor (2) linear (3) non-linear (4) facet model (5) cubic convolution (6) cubic spline (7) gray segment expansion. The resampled images were compared in terms of SNR (Signal to Noise Ratio) and MTF (Modulation Transfer Function) coefficient value. The obtained results were as follows: 1. The highest SNR value (75.96 dB) was obtained with the cubic convolution method and the lowest SNR value (72.44 dB) with the facet model method among the seven interpolation methods. 2. There were significant differences of SNR values among CDR, Digora and film scan (P 0.05). 4. There were significant differences of MTF coefficient values between the linear interpolation method and the other six interpolation methods (P<0.05). 5. The speed of computation was the fastest with the nearest neighbor method and the slowest with the non-linear method. 6. The better image was obtained with the cubic convolution, cubic spline and gray segment methods in ROC analysis. 7. The better sharpness of edge was obtained with the gray segment expansion method

  4. Thermoluminescence results on slices from a Hiroshima tile UHFSFT03

    International Nuclear Information System (INIS)

    Stoneham, Doreen

    1987-01-01

    As was reported at the May 1984 Utah thermoluminescence (TL) workshop, high fired tiles and porcelain fragments can be sliced into 200 μm sections with constant surface area. When conventional pre-dose measurements were carried out on these slices the doses evaluated were in good agreement with results obtained by other workers using conventional quartz separation techniques. There are several advantages in using slices. First, less sample is needed as about 50 consecutive slices can be cut from a block measuring typically 1 cm² in cross section and 2 cm in length. There are no problems with securing grains to the plate or loss of grains during measurement. Hypothetically there is less damage to the grains when they are cut slowly under cold water than when they are crushed. The disadvantage is that other minerals besides quartz are present in the slice and the signal is weaker than that obtained using quartz inclusions

  5. Modeling and Realization of a Bearingless Flux-Switching Slice Motor

    Directory of Open Access Journals (Sweden)

    Wolfgang Gruber

    2017-03-01

    Full Text Available This work introduces a novel bearingless slice motor design: the bearingless flux-switching slice motor. In contrast to state-of-the-art bearingless slice motors, the rotor in this new design does not include any permanent rotor magnets. This offers advantages for disposable devices, such as those used in the medical industry, and extends the range of bearingless slice motors toward high-temperature applications. In this study, our focus is on the analytical modeling of the suspension force torque generation of a single coil and the bearingless motor. We assessed motor performance in relation to motor topology by applying performance factors. A prototype motor was optimized, designed, and manufactured. We also presented the state-of-the-art nonlinear feedback control scheme used. The motor was operated, and both static and dynamic measurements were taken on a test bench, thus successfully demonstrating the functionality and applicability of the novel bearingless slice motor concept.

  6. Interaction-Strength Interpolation Method for Main-Group Chemistry : Benchmarking, Limitations, and Perspectives

    NARCIS (Netherlands)

    Fabiano, E.; Gori-Giorgi, P.; Seidl, M.W.J.; Della Sala, F.

    2016-01-01

    We have tested the original interaction-strength-interpolation (ISI) exchange-correlation functional for main group chemistry. The ISI functional is based on an interpolation between the weak and strong coupling limits and includes exact-exchange as well as the Görling–Levy second-order energy. We

  7. Evaluation of Teeth and Supporting Structures on Digital Radiograms using Interpolation Methods

    International Nuclear Information System (INIS)

    Koh, Kwang Joon; Chang, Kee Wan

    1999-01-01

    The purpose of this study was to determine the effect of interpolation functions when processing digital periapical images. The digital images were obtained with the Digora and CDR systems on a dry skull and a human subject. Three oral radiologists evaluated three portions of each processed image using seven interpolation methods, and ROC curves were obtained by the trapezoidal method. The highest Az value (0.96) was obtained with the cubic spline method and the lowest Az value (0.03) with the facet model method in the Digora system. The highest Az value (0.79) was obtained with the gray segment expansion method and the lowest Az value (0.07) with the facet model method in the CDR system. There was a significant difference in Az values for the original image between the Digora and CDR systems at the alpha = 0.05 level. There were significant differences in Az values between Digora and CDR images with the cubic spline, facet model, linear interpolation and non-linear interpolation methods at the alpha = 0.1 level.

  8. Selection of the optimal interpolation method for groundwater observations in lahore, pakistan

    International Nuclear Information System (INIS)

    Mahmood, K.; Ali, S.R.; Haider, A.; Tehseen, T.; Kanwal, S.

    2014-01-01

    This study was carried out to find an optimum method of interpolation for the depth values of groundwater in the Lahore metropolitan area, Pakistan. The methods of interpolation considered in the study were inverse distance weighting (IDW), spline, simple Kriging, ordinary Kriging and universal Kriging. Initial analysis suggested that the data were negatively skewed, with a skewness of -1.028. The condition of normality was approximated by transforming the data using a Box-Cox transformation with a lambda value of 3.892; the skewness reduced to -0.00079. The results indicate that the simple Kriging method is optimum for interpolation of the groundwater observations for this dataset, with the lowest bias of 0.00997, the highest correlation coefficient of 0.9434, a mean absolute error of 1.95 and a root mean square error of 3.19 m. Smooth and uniform contours with a well-described central depression zone in the city, as suggested by this study, also support the selected interpolation method. (author)
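
    For readers unfamiliar with the kriging variant named above, the following minimal simple-kriging sketch (not the study's procedure; the exponential covariance model, its parameters and the depth observations are all assumed for illustration) estimates a groundwater depth at an unsampled location.

        # Minimal simple-kriging sketch with an assumed exponential covariance model;
        # coordinates and depths are made-up numbers, not the Lahore dataset.
        import numpy as np

        def exp_cov(h, sill=1.0, corr_range=10.0):
            """Exponential covariance model C(h) = sill * exp(-h / range)."""
            return sill * np.exp(-h / corr_range)

        # Known groundwater-depth observations (x, y in km, depth in m).
        pts = np.array([[0.0, 0.0], [5.0, 2.0], [3.0, 8.0], [9.0, 4.0]])
        depth = np.array([42.0, 38.5, 47.2, 35.9])
        mean_depth = depth.mean()            # simple kriging assumes a known mean

        target = np.array([4.0, 4.0])        # location to be interpolated

        # Covariance between observations, and between observations and the target.
        d_obs = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        d_tgt = np.linalg.norm(pts - target, axis=-1)
        weights = np.linalg.solve(exp_cov(d_obs), exp_cov(d_tgt))

        estimate = mean_depth + weights @ (depth - mean_depth)
        print(f"Simple-kriging depth estimate at {target}: {estimate:.2f} m")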

  9. Sample Data Synchronization and Harmonic Analysis Algorithm Based on Radial Basis Function Interpolation

    Directory of Open Access Journals (Sweden)

    Huaiqing Zhang

    2014-01-01

    Full Text Available Spectral leakage has a harmful effect on the accuracy of harmonic analysis under asynchronous sampling. This paper proposes a time quasi-synchronous sampling algorithm based on radial basis function (RBF) interpolation. Firstly, the fundamental period is evaluated by a zero-crossing technique with fourth-order Newton interpolation; then the sampling sequence is reproduced by RBF interpolation. Finally, the harmonic parameters are calculated by FFT on the synchronized sampling data. Simulation results showed that the proposed algorithm has high accuracy in measuring distorted and noisy signals. Compared to local approximation schemes such as linear, quadratic, and fourth-order Newton interpolation, RBF is a global approximation method that can acquire more accurate results while its computation time is about the same as Newton's.
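
    A minimal sketch of the resampling step described above (not the paper's algorithm; the waveform, the assumed fundamental frequency and the kernel choice are assumptions) rebuilds a synchronously sampled sequence from asynchronous samples with SciPy's RBF interpolator and then reads harmonic amplitudes off the FFT.

        # Illustrative sketch: resample an asynchronously sampled waveform onto a
        # grid synchronous with an assumed fundamental, then take the FFT.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        f0 = 50.2                                   # assumed fundamental frequency, Hz
        t_async = np.sort(np.random.default_rng(1).uniform(0.0, 0.2, 400))
        x_async = np.sin(2 * np.pi * f0 * t_async) + 0.2 * np.sin(2 * np.pi * 3 * f0 * t_async)

        # Reproduce the sequence on a grid covering 10 whole periods.
        n_sync = 512
        t_sync = np.linspace(0.0, 10.0 / f0, n_sync, endpoint=False)
        rbf = RBFInterpolator(t_async[:, None], x_async, kernel="thin_plate_spline")
        x_sync = rbf(t_sync[:, None])

        spectrum = np.abs(np.fft.rfft(x_sync)) / (n_sync / 2)
        harmonics = spectrum[[10, 30]]              # bins of the 1st and 3rd harmonics
        print("fundamental, 3rd harmonic amplitudes:", np.round(harmonics, 3))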

  10. Sugar uptake and starch biosynthesis by slices of developing maize endosperm

    International Nuclear Information System (INIS)

    Felker, F.C.; Liu, Kangchien; Shannon, J.C.

    1990-01-01

    14C-sugar uptake and incorporation into starch by slices of developing maize (Zea mays L.) endosperm were examined and compared with sugar uptake by maize endosperm-derived suspension cultures. Rates of sucrose, fructose, and D- and L-glucose uptake by slices were similar, whereas uptake rates for these sugars differed greatly in suspension cultures. Concentration dependence of sucrose, fructose, and D-glucose uptake was biphasic (consisting of linear plus saturable components) with suspension cultures but linear with slices. These and other differences suggest that endosperm slices are freely permeable to sugars. After diffusion into the slices, sugars were metabolized and incorporated into starch. Starch synthesis, but not sugar accumulation, was greatly reduced by 2.5 millimolar p-chloromercuribenzenesulfonic acid and 0.1 millimolar carbonyl cyanide m-chlorophenylhydrazone. Starch synthesis was dependent on kernel age and incubation temperature, but not on external pH (5 through 8). Competing sugars generally did not affect the distribution of 14C among the soluble sugars extracted from endosperm slices incubated in 14C-sugars. Competing hexoses reduced the incorporation of 14C into starch, but competing sucrose did not, suggesting that sucrose is not a necessary intermediate in starch biosynthesis. The bidirectional permeability of endosperm slices to sugars makes the characterization of sugar transport into endosperm slices impossible; however, the model system is useful for experiments dealing with starch biosynthesis, which occurs in the metabolically active tissue

  11. Material and Thickness Grading for Aeroelastic Tailoring of the Common Research Model Wing Box

    Science.gov (United States)

    Stanford, Bret K.; Jutte, Christine V.

    2014-01-01

    This work quantifies the potential aeroelastic benefits of tailoring a full-scale wing box structure using tailored thickness distributions, material distributions, or both simultaneously. These tailoring schemes are considered for the wing skins, the spars, and the ribs. Material grading utilizes a spatially-continuous blend of two metals: Al and Al+SiC. Thicknesses and material fraction variables are specified at the 4 corners of the wing box, and a bilinear interpolation is used to compute these parameters for the interior of the planform. Pareto fronts detailing the conflict between static aeroelastic stresses and dynamic flutter boundaries are computed with a genetic algorithm. In some cases, a true material grading is found to be superior to a single-material structure.
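
    The corner-to-interior blend described above can be sketched as follows (a minimal illustration, not the authors' optimization framework; the corner thickness values and grid are invented).

        # Bilinear blend of a design variable (e.g., skin thickness) specified at
        # the four wing-box corners, evaluated over a normalized planform grid.
        import numpy as np

        def bilinear(corner_vals, u, v):
            """Interpolate from corner values [v00, v10, v01, v11] at normalized
            planform coordinates u (spanwise) and v (chordwise), both in [0, 1]."""
            v00, v10, v01, v11 = corner_vals
            return ((1 - u) * (1 - v) * v00 + u * (1 - v) * v10
                    + (1 - u) * v * v01 + u * v * v11)

        # Hypothetical skin thicknesses (mm) at root-LE, tip-LE, root-TE, tip-TE.
        t_corners = [12.0, 4.0, 10.0, 3.0]
        u, v = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 3))
        print(np.round(bilinear(t_corners, u, v), 2))   # thickness over a 3x5 grid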

  12. Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.

    Science.gov (United States)

    Pansky, Ainat; Tenenboim, Einat

    2011-01-01

    In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.

  13. Interpolating string field theories

    International Nuclear Information System (INIS)

    Zwiebach, B.

    1992-01-01

    This paper reports that a minimal area problem imposing different length conditions on open and closed curves defines a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on-shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries, based on quadratic differentials with both first-order and second-order poles

  14. Quantitative parameters to compare image quality of non-invasive coronary angiography with 16-slice, 64-slice and dual-source computed tomography

    International Nuclear Information System (INIS)

    Burgstahler, Christof; Reimann, Anja; Brodoefel, Harald; Tsiflikas, Ilias; Thomas, Christoph; Heuschmid, Martin; Daferner, Ulrike; Drosch, Tanja; Schroeder, Stephen; Herberts, Tina

    2009-01-01

    Multi-slice computed tomography (MSCT) is a non-invasive modality to visualize coronary arteries with an overall good image quality. Improved spatial and temporal resolution of 64-slice and dual-source computed tomography (DSCT) scanners is supposed to have a positive impact on diagnostic accuracy and image quality. However, quantitative parameters to compare image quality of 16-slice, 64-slice MSCT and DSCT are missing. A total of 256 CT examinations were evaluated (Siemens, Sensation 16: n=90; Siemens Sensation 64: n=91; Siemens Definition: n=75). Mean Hounsfield units (HU) were measured in the cavum of the left ventricle (LV), the ascending aorta (Ao), the left ventricular myocardium (My) and the proximal part of the left main (LM), the left anterior descending artery (LAD), the right coronary artery (RCA) and the circumflex artery (CX). Moreover, the ratio of intraluminal attenuation (HU) to myocardial attenuation was assessed for all coronary arteries. Clinical data [body mass index (BMI), gender, heart rate] were accessible for all patients. Mean attenuation (CA) of the coronary arteries was significantly higher for DSCT in comparison to 64- and 16-slice MSCT within the RCA [347±13 vs. 254±14 (64-MSCT) vs. 233±11 (16-MSCT) HU], LM (362±11/275±12/262±9), LAD (332±17/248±19/219±14) and LCX (310±12/210±13/221±10, all p<0.05), whereas there was no significant difference between DSCT and 64-MSCT for the LV, the Ao and My. Heart rate had a significant impact on CA ratio in 16-slice and 64-slice CT only (p<0.05). BMI had no impact on the CA ratio in DSCT only (p<0.001). Improved spatial and temporal resolution of dual-source CT is associated with better opacification of the coronary arteries and a better contrast with the myocardium, which is independent of heart rate. In comparison to MSCT, opacification of the coronary arteries at DSCT is not affected by BMI. The main advantage of DSCT lies in its heart-rate independence, which might have a

  15. Interpolating precipitation and its relation to runoff and non-point source pollution.

    Science.gov (United States)

    Chang, Chia-Ling; Lo, Shang-Lien; Yu, Shaw-L

    2005-01-01

    When rainfall spatially varies, complete rainfall data for each region with different rainfall characteristics are very important. Numerous interpolation methods have been developed for estimating unknown spatial characteristics. However, no interpolation method is suitable for all circumstances. In this study, several methods, including the arithmetic average method, the Thiessen Polygons method, the traditional inverse distance method, and the modified inverse distance method, were used to interpolate precipitation. The modified inverse distance method considers not only horizontal distances but also differences between the elevations of the region with no rainfall records and of its surrounding rainfall stations. The results show that when the spatial variation of rainfall is strong, choosing a suitable interpolation method is very important. If the rainfall is uniform, the precipitation estimated using any interpolation method would be quite close to the actual precipitation. When rainfall is heavy in locations with high elevation, the rainfall changes with the elevation. In this situation, the modified inverse distance method is much more effective than any other method discussed herein for estimating the rainfall input for WinVAST to estimate runoff and non-point source pollution (NPSP). When the spatial variation of rainfall is random, regardless of the interpolation method used to yield rainfall input, the estimation errors of runoff and NPSP are large. Moreover, the relationship between the relative error of the predicted runoff and predicted pollutant loading of SS is high. However, the pollutant concentration is affected by both runoff and pollutant export, so the relationship between the relative error of the predicted runoff and the predicted pollutant concentration of SS may be unstable.
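
    A hedged sketch of the modified inverse distance idea described above (not the authors' exact formulation; the weighting exponents, the form of the elevation term and the gauge data are assumptions).

        # Weights decay with horizontal distance and with the elevation difference
        # between the target site and each gauge.
        import numpy as np

        def modified_idw(target_xy, target_z, gauges_xy, gauges_z, rain, p=2.0, q=1.0):
            d_h = np.linalg.norm(gauges_xy - target_xy, axis=1)      # horizontal distance
            d_z = np.abs(gauges_z - target_z)                        # elevation difference
            w = 1.0 / (d_h ** p * (1.0 + d_z) ** q)                  # assumed weighting form
            return np.sum(w * rain) / np.sum(w)

        gauges_xy = np.array([[0.0, 0.0], [10.0, 3.0], [4.0, 9.0]])  # km, illustrative
        gauges_z = np.array([120.0, 850.0, 430.0])                   # m a.s.l.
        rain = np.array([12.0, 44.0, 25.0])                          # mm, one storm
        print(modified_idw(np.array([5.0, 5.0]), 300.0, gauges_xy, gauges_z, rain))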

  16. Pixel Interpolation Methods

    OpenAIRE

    Mintěl, Tomáš

    2009-01-01

    This master's thesis deals with the acceleration of pixel interpolation methods using the GPU and the NVIDIA (R) CUDA (TM) architecture. The graphical output is a demonstration application that transforms an image or video using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video processing.

  17. Imaging system design and image interpolation based on CMOS image sensor

    Science.gov (United States)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control to the system. SRAM stores the image data, and DSP controls the image acquisition system through the SCCB (Omni Vision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use the edge-oriented adaptive interpolation algorithm for the edge pixels and bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method can get high processing speed, decrease the computational complexity, and effectively preserve the image edges.
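
    A simplified sketch of the demosaicing rule described above (assumptions: an RGGB Bayer layout, interior pixels only and an arbitrary gradient threshold; this is not the authors' DSP implementation) interpolates the missing green sample along the direction with the smaller gradient at edge pixels and falls back to a bilinear average elsewhere.

        # Edge-oriented interpolation of a missing green sample in a Bayer mosaic.
        import numpy as np

        def green_at(mosaic, i, j, edge_thresh=10.0):
            """Estimate the green value at a red/blue site (i, j) of a Bayer mosaic."""
            gh = abs(float(mosaic[i, j - 1]) - float(mosaic[i, j + 1]))   # horizontal gradient
            gv = abs(float(mosaic[i - 1, j]) - float(mosaic[i + 1, j]))   # vertical gradient
            if abs(gh - gv) > edge_thresh:                 # edge pixel: pick one direction
                if gh < gv:
                    return (mosaic[i, j - 1] + mosaic[i, j + 1]) / 2.0
                return (mosaic[i - 1, j] + mosaic[i + 1, j]) / 2.0
            # non-edge pixel: plain bilinear average of the four green neighbours
            return (mosaic[i, j - 1] + mosaic[i, j + 1]
                    + mosaic[i - 1, j] + mosaic[i + 1, j]) / 4.0

        mosaic = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(float)
        print(green_at(mosaic, 4, 4))      # (4, 4) is a red site in an RGGB layout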

  18. Exploring the Role of Genetic Algorithms and Artificial Neural Networks for Interpolation of Elevation in Geoinformation Models

    Science.gov (United States)

    Bagheri, H.; Sadjadi, S. Y.; Sadeghian, S.

    2013-09-01

    One of the most significant tools for many engineering projects is three-dimensional modelling of the Earth, which has many applications in Geospatial Information Systems (GIS), e.g. creating Digital Terrain Models (DTM). DTMs have numerous applications in science, engineering, design and project administration. One of the most significant steps in the DTM technique is the interpolation of elevation to create a continuous surface. Several interpolation methods exist, whose results depend on the environmental conditions and the input data. The interpolation methods used in this study, consisting of polynomials and the Inverse Distance Weighting (IDW) method, were optimised with Genetic Algorithms (GA). In this paper, Artificial Intelligence (AI) techniques such as GA and Neural Networks (NN) are applied to the samples to optimise the interpolation methods and the production of the Digital Elevation Model (DEM). The aim is to evaluate the accuracy of the interpolation methods. Universal interpolation over entire neighbouring regions can be suggested for larger regions, which can be divided into smaller regions. The results obtained from applying GA and ANN individually are compared with the typical interpolation methods for creating elevations. The results showed that AI methods have a high potential for the interpolation of elevations; using artificial neural network algorithms for the interpolation, and optimising the IDW method with GA, yielded highly precise elevations.
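
    As a toy stand-in for the GA-based tuning discussed above (not the paper's algorithm; the terrain samples, the mutate-and-select loop and the choice of tuning only the IDW exponent are all assumptions), the sketch below evolves the IDW power so as to minimize leave-one-out RMSE.

        # Evolve the IDW exponent against leave-one-out error on invented samples.
        import numpy as np

        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 100, (40, 2))                       # sample locations (m)
        z = 50 + 0.3 * xy[:, 0] + 10 * np.sin(xy[:, 1] / 15)    # invented terrain heights

        def idw(xy_known, z_known, xy_query, p):
            d = np.linalg.norm(xy_known - xy_query, axis=1)
            w = 1.0 / np.maximum(d, 1e-9) ** p
            return np.sum(w * z_known) / np.sum(w)

        def loo_rmse(p):
            errs = [idw(np.delete(xy, i, 0), np.delete(z, i), xy[i], p) - z[i]
                    for i in range(len(z))]
            return float(np.sqrt(np.mean(np.square(errs))))

        population = rng.uniform(0.5, 6.0, 12)                  # candidate exponents
        for gen in range(20):                                   # mutate-and-select loop
            children = np.clip(population + rng.normal(0, 0.3, population.size), 0.5, 6.0)
            merged = np.concatenate([population, children])
            population = merged[np.argsort([loo_rmse(p) for p in merged])[:12]]

        print(f"best IDW exponent = {population[0]:.2f}, LOO RMSE = {loo_rmse(population[0]):.2f} m")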

  19. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  20. Precision-cut kidney slices (PCKS) to study development of renal fibrosis and efficacy of drug targeting ex vivo

    Directory of Open Access Journals (Sweden)

    Fariba Poosti

    2015-10-01

    Full Text Available Renal fibrosis is a serious clinical problem resulting in the greatest need for renal replacement therapy. No adequate preventive or curative therapy is available that could be clinically used to target renal fibrosis specifically. The search for new efficacious treatment strategies is therefore warranted. Although in vitro models using homogeneous cell populations have contributed to the understanding of the pathogenetic mechanisms involved in renal fibrosis, these models poorly mimic the complex in vivo milieu. Therefore, we here evaluated a precision-cut kidney slice (PCKS) model as a new, multicellular ex vivo model to study the development of fibrosis and its prevention using anti-fibrotic compounds. Precision-cut slices (200-300 μm thickness) were prepared from healthy C57BL/6 mouse kidneys using a Krumdieck tissue slicer. To induce changes mimicking the fibrotic process, slices were incubated with TGFβ1 (5 ng/ml) for 48 h in the presence or absence of the anti-fibrotic cytokine IFNγ (1 µg/ml) or an IFNγ conjugate targeted to PDGFRβ (PPB-PEG-IFNγ). Following culture, tissue viability (ATP content) and expression of α-SMA, fibronectin, collagen I and collagen III were determined using real-time PCR and immunohistochemistry. Slices remained viable up to 72 h of incubation, and no significant effects of TGFβ1 and IFNγ on viability were observed. TGFβ1 markedly increased α-SMA, fibronectin and collagen I mRNA and protein expression levels. IFNγ and PPB-PEG-IFNγ significantly reduced TGFβ1-induced fibronectin, collagen I and collagen III mRNA expression, which was confirmed by immunohistochemistry. The PCKS model is a novel tool to test the pathophysiology of fibrosis and to screen the efficacy of anti-fibrotic drugs ex vivo in a multicellular and pro-fibrotic milieu. A major advantage of the slice model is that it can be used not only for animal but also for (fibrotic) human kidney tissue.

  1. Nuclear data banks generation by interpolation

    International Nuclear Information System (INIS)

    Castillo M, J. A.

    1999-01-01

    Nuclear Data Bank generation is a process that requires a great amount of resources, both computing and human. Considering that at times it is necessary to create a great number of them, it is convenient to have a reliable tool that generates Data Banks with the least resources, in the least possible time and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two proposals were developed, applying in both cases the finite element method and using one element with 16 nodes to carry out the interpolation. In the first proposal the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation systems, which were solved by Gaussian elimination with partial pivoting. In the second, the Newton basis was used to obtain the same systems, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) and Data Banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation; even though they do not wholly replace the conventional process, they are helpful in cases where it is necessary to create a great number of Data Banks
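
    A hedged sketch of the interpolation scheme named above (not the INTPOLBI code itself; the 4 x 4 node grid, the monomial basis and the stand-in nuclear datum are assumed for illustration): sixteen nodes in the (uranium %, gadolinia %) plane fix the sixteen coefficients of a bicubic polynomial, which then interpolates anywhere inside the element.

        # Bicubic polynomial interpolation over a 16-node element.
        import numpy as np

        u_nodes = np.array([1.5, 2.5, 3.5, 4.5])          # uranium enrichment, %
        g_nodes = np.array([0.0, 2.0, 4.0, 6.0])          # gadolinia content, %
        U, G = np.meshgrid(u_nodes, g_nodes, indexing="ij")
        k_inf = 1.30 - 0.02 * G + 0.03 * U - 0.001 * U * G    # stand-in nuclear datum

        # Vandermonde-like matrix with the 16 monomials u^i * g^j, i, j = 0..3.
        def monomials(u, g):
            return np.array([u**i * g**j for i in range(4) for j in range(4)])

        A = np.array([monomials(u, g) for u, g in zip(U.ravel(), G.ravel())])
        coeffs = np.linalg.solve(A, k_inf.ravel())        # exact fit through the 16 nodes

        u_q, g_q = 3.0, 3.0                               # query composition
        print(f"interpolated value at ({u_q}%, {g_q}%): {coeffs @ monomials(u_q, g_q):.4f}")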

  2. MRI-derived measurements of fibrous-cap and lipid-core thickness: the potential for identifying vulnerable carotid plaques in vivo

    International Nuclear Information System (INIS)

    Trivedi, Rikin A.; U-King-Im, Jean-Marie; Graves, Martin J.; Horsley, Jo; Goddard, Martin; Kirkpatrick, Peter J.; Gillard, Jonathan H.

    2004-01-01

    Vulnerable plaques have thin fibrous caps overlying large necrotic lipid cores. Recent studies have shown that high-resolution MR imaging can identify these components. We set out to determine whether in vivo high-resolution MRI could quantify this aspect of the vulnerable plaque. Forty consecutive patients scheduled for carotid endarterectomy underwent pre-operative in vivo multi-sequence MR imaging of the carotid artery. Individual plaque constituents were characterised on MR images. Fibrous-cap and lipid-core thickness was measured on MRI and histology images. Bland-Altman plots were generated to determine the level of agreement between the two methods. Multi-sequence MRI identified 133 corresponding MR and histology slices. Plaque calcification or haemorrhage was seen in 47 of these slices. MR and histology derived fibrous cap-lipid-core thickness ratios showed strong agreement with a mean difference between MR and histology ratios of 0.02 (±0.04). The intra-class correlation coefficient between two readers for measurements was 0.87 (95% confidence interval, 0.73 and 0.93). Multi-sequence, high-resolution MR imaging accurately quantified the relative thickness of fibrous-cap and lipid-core components of carotid atheromatous plaques. This may prove to be a useful tool to characterise vulnerable plaques in vivo. (orig.)

  3. MRI-derived measurements of fibrous-cap and lipid-core thickness: the potential for identifying vulnerable carotid plaques in vivo

    Energy Technology Data Exchange (ETDEWEB)

    Trivedi, Rikin A. [Addenbrooke's Hospital, University Department of Radiology, Cambridge (United Kingdom); Addenbrooke's Hospital, Academic Department of Neurosurgery, Cambridge (United Kingdom); U-King-Im, Jean-Marie; Graves, Martin J. [Addenbrooke's Hospital, University Department of Radiology, Cambridge (United Kingdom); Horsley, Jo; Goddard, Martin [Papworth Hospital, Department of Histopathology, Papworth Everard (United Kingdom); Kirkpatrick, Peter J. [Addenbrooke's Hospital, Academic Department of Neurosurgery, Cambridge (United Kingdom); Gillard, Jonathan H. [Addenbrooke's Hospital, University Department of Radiology, Cambridge (United Kingdom); Addenbrooke's Hospital, Hills Road, Box 219, Cambridge (United Kingdom)

    2004-09-01

    Vulnerable plaques have thin fibrous caps overlying large necrotic lipid cores. Recent studies have shown that high-resolution MR imaging can identify these components. We set out to determine whether in vivo high-resolution MRI could quantify this aspect of the vulnerable plaque. Forty consecutive patients scheduled for carotid endarterectomy underwent pre-operative in vivo multi-sequence MR imaging of the carotid artery. Individual plaque constituents were characterised on MR images. Fibrous-cap and lipid-core thickness was measured on MRI and histology images. Bland-Altman plots were generated to determine the level of agreement between the two methods. Multi-sequence MRI identified 133 corresponding MR and histology slices. Plaque calcification or haemorrhage was seen in 47 of these slices. MR and histology derived fibrous cap-lipid-core thickness ratios showed strong agreement with a mean difference between MR and histology ratios of 0.02 (±0.04). The intra-class correlation coefficient between two readers for measurements was 0.87 (95% confidence interval, 0.73 and 0.93). Multi-sequence, high-resolution MR imaging accurately quantified the relative thickness of fibrous-cap and lipid-core components of carotid atheromatous plaques. This may prove to be a useful tool to characterise vulnerable plaques in vivo. (orig.)

  4. Optimum quantization and interpolation of projections in X-ray computerized tomography

    International Nuclear Information System (INIS)

    Vajnberg, Eh.I.; Fajngojz, M.L.

    1984-01-01

    Two methods to increase the accuracy of image reconstruction through optimization of the quantization and interpolation of projections, with separate reduction of the main types of errors, are described and experimentally studied. A high metrological and computational efficiency is found for increasing the sample frequency in the reconstructed tomogram by a factor of 2-4. The optimum structure of interpolation functions of minimum extent is calculated

  5. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    Science.gov (United States)

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
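
    A minimal sketch of kernel-based spatial profiling in the spirit described above (not the authors' method; the Gaussian kernel, its width and the synthetic feature trend are assumptions) converts feature activity sampled along the electrode track into a smooth activity-versus-depth profile.

        # Kernel-weighted spatial profile (feature activity vs. depth).
        import numpy as np

        def kernel_profile(depths_obs, activity_obs, depths_out, width=0.25):
            """Nadaraya-Watson style weighted average with a Gaussian kernel (mm)."""
            w = np.exp(-0.5 * ((depths_out[:, None] - depths_obs[None, :]) / width) ** 2)
            return (w @ activity_obs) / w.sum(axis=1)

        depths_obs = np.linspace(-10.0, 5.0, 60)              # mm above/below target
        activity_obs = (np.exp(-((depths_obs + 2.0) / 1.5) ** 2)
                        + 0.05 * np.random.default_rng(3).standard_normal(60))
        depths_out = np.linspace(-10.0, 5.0, 301)
        profile = kernel_profile(depths_obs, activity_obs, depths_out)
        print("peak of spatial profile at depth %.2f mm" % depths_out[np.argmax(profile)])

    As the abstract notes, the kernel width trades smoothing against resolution: widening it suppresses noise but blurs the boundaries of the profile.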

  6. Patch-based frame interpolation for old films via the guidance of motion paths

    Science.gov (United States)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often show frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. Firstly, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on pre-intermediate frames that embody the motion-path constraint, the method produces natural, artifact-free frame interpolation. We tested different types of old film sequences and compared with other methods; the results show that our method performs as desired without hole or ghost effects.

  7. Conservative method for determination of material thickness used in shielding of veterinary facilities

    International Nuclear Information System (INIS)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Moreira, Maria de L.; Guimaraes, Antonio C.F.

    2014-01-01

    To determine an effective method for shielding veterinary rooms, the shielding methods generally used in rooms where X-rays are produced and radiotherapy is performed were reviewed. Every calculation procedure is based on the traditional variables used in transmission calculations. The thicknesses of the materials used for primary and secondary shielding are obtained so as to respect the limits set by the Brazilian National Nuclear Energy Commission (CNEN). This work presents the development of a computer code intended to serve as a practical tool for rapidly determining effective materials and their thicknesses for shielding veterinary facilities. The code determines transmission values of the shielding and compares them with data from the transmission 'maps' provided by the NCRP-148 report. These 'maps' were added to the algorithm through interpolation of the curves of the materials used for shielding. Each interpolation generates about 1,000,000 points, which are used to generate a new curve. The new curve is subjected to regression techniques, which make it possible to obtain ninth-degree polynomial and exponential equations. These equations, whose variables consist of transmission values, make it possible to trace all the points of the curve with high precision. The data obtained from the algorithm agreed satisfactorily with official data presented by the National Council on Radiation Protection and Measurements (NCRP), and the code can contribute as a practical tool for verification of the shielding of veterinary facilities that use radiotherapy techniques and X-ray production
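
    The densify-then-fit step described above can be illustrated with the rough sketch below (the transmission data are invented, not NCRP-148 values; the direction of the fit and the use of a log transform are assumptions): the tabulated curve is interpolated to roughly a million points and a ninth-degree polynomial is fitted so that a required transmission can be converted into a barrier thickness.

        # Densify a transmission curve by interpolation, then fit a polynomial.
        import numpy as np

        thick_mm = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])        # assumed barrier thickness
        transmission = np.array([1.0, 0.35, 0.14, 0.06, 0.025, 0.005])

        dense_t = np.linspace(0.0, 3.0, 1_000_000)                   # densified curve
        dense_tr = np.interp(dense_t, thick_mm, transmission)

        coeffs = np.polyfit(np.log(dense_tr), dense_t, 9)            # 9th-degree polynomial
        required_transmission = 0.01
        print("thickness for B = 0.01: %.2f mm" % np.polyval(coeffs, np.log(required_transmission)))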

  8. Capitalizing Resolving Power of Density Gradient Ultracentrifugation by Freezing and Precisely Slicing Centrifuged Solution: Enabling Identification of Complex Proteins from Mitochondria by Matrix Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Haiqing Yu

    2016-01-01

    Full Text Available Density gradient centrifugation is widely utilized for various high purity sample preparations, and density gradient ultracentrifugation (DGU) is often used for more resolution-demanding purification of organelles and protein complexes. Accurately locating different isopycnic layers and precisely extracting solutions from these layers play a critical role in achieving high-resolution DGU separations. In this technical note, we develop a DGU procedure by freezing the solution rapidly (but gently) after centrifugation to fix the resolved layers and by slicing the frozen solution to fractionate the sample. Because the thickness of each slice can be controlled to be as thin as 10 micrometers, we retain virtually all the resolution produced by DGU. To demonstrate the effectiveness of this method, we fractionate complex V from HeLa mitochondria using a conventional technique and this freezing-slicing (F-S) method. The comparison indicates that our F-S method can reduce complex V layer thicknesses by ~40%. After fractionation, we analyze complex V proteins directly on a matrix assisted laser desorption/ionization time-of-flight mass spectrometer. Twelve out of fifteen subunits of complex V are positively identified. Our method provides a practical protocol to identify proteins from complexes, which is useful to investigate biomolecular complexes and pathways in various conditions and cell types.

  9. Pyrethroid insecticides evoke neurotransmitter release from rabbit striatal slices

    International Nuclear Information System (INIS)

    Eells, J.T.; Dubocovich, M.L.

    1988-01-01

    The effects of the synthetic pyrethroid insecticide fenvalerate ([R,S]-alpha-cyano-3-phenoxybenzyl[R,S]-2-(4-chlorophenyl)-3-methylbutyrate) on neurotransmitter release in rabbit brain slices were investigated. Fenvalerate evoked a calcium-dependent release of [3H]dopamine and [3H]acetylcholine from rabbit striatal slices that was concentration-dependent and specific for the toxic stereoisomer of the insecticide. The release of [3H]dopamine and [3H]acetylcholine by fenvalerate was modulated by D2 dopamine receptor activation and antagonized completely by the sodium channel blocker, tetrodotoxin. These findings are consistent with an action of fenvalerate on the voltage-dependent sodium channels of the presynaptic membrane resulting in membrane depolarization, and the release of dopamine and acetylcholine by a calcium-dependent exocytotic process. In contrast to results obtained in striatal slices, fenvalerate did not elicit the release of [3H]norepinephrine or [3H]acetylcholine from rabbit hippocampal slices indicative of regional differences in sensitivity to type II pyrethroid actions

  10. Axisymmetric buckling analysis of laterally restrained thick annular plates using a hybrid numerical method

    Energy Technology Data Exchange (ETDEWEB)

    Malekzadeh, P. [Department of Mechanical Engineering, Persian Gulf University, Bushehr 75168 (Iran, Islamic Republic of); Center of Excellence for Computational Mechanics, Shiraz University, Shiraz (Iran, Islamic Republic of)], E-mail: malekzadeh@pgu.ac.ir; Ouji, A. [Department of Civil Engineering, Persian Gulf University, Bushehr 75168 (Iran, Islamic Republic of); Islamic Azad University, Larestan Branch, Larestan (Iran, Islamic Republic of)

    2008-11-15

    The buckling analysis of annular thick plates with lateral supports, such as two-parameter elastic foundations or ring supports, is investigated using an elasticity-based hybrid numerical method. For this purpose, the displacement components are first perturbed around the pre-buckling state, which is located using elasticity theory. Then, by decomposing the plate into a set of sub-domains in the form of co-axial annular plates, the buckling equations are discretized in the radial direction using global interpolation functions in conjunction with the principle of virtual work. The resulting differential equations are solved using the differential quadrature method. The method can model arbitrary boundary conditions at both the inner and outer edges of thin-to-thick plates, with different types of lateral restraints. The fast rate of convergence of the method is demonstrated, and comparison studies are carried out to establish its accuracy and versatility for thin-to-thick plates.

  11. Assessment of renal artery stenosis of transplanted kidney by time resolved gadolinium-enhanced three-dimensional MR angiography. Preliminary phantom study and clinical evaluation

    International Nuclear Information System (INIS)

    Hayano, Toshio

    2001-01-01

    The purpose of this study was to determine suitable imaging parameters of time-resolved Gd-enhanced three-dimensional MR angiography (TRE3DMRA) for the evaluation of renal artery stenosis of transplanted kidneys and to investigate the usefulness of TRE3DMRA in 166 clinical cases. Source images were obtained with 3D FLASH with zero-filling interpolation (turbo MRA) using a Siemens Magnetom 1.5 T. Acrylate tubes with 6 mm inner diameter filled with diluted Gd-DTPA were used as phantoms; in the tubes, 25%, 50%, and 75% stenoses were made to simulate arterial stenosis. Based on our clinical experience, we chose an acquisition time of 10 seconds or less to obtain renal artery images without overlap with the renal veins. To determine the slice thickness, the degrees of stenosis in the phantom images obtained with an 8-second acquisition time and variable slice thickness were independently interpreted by visual inspection by two experienced diagnostic radiologists. One hundred sixty-six patients who had undergone renal transplantation were evaluated clinically. Using a power injector, 0.1 mmol/kg Gd-DTPA was injected after a test scan with 1 ml Gd-DTPA for the determination of acquisition timing. MR images were obtained with the following imaging parameters: 4-mm slice thickness and 8-second acquisition time, based on the results of the phantom studies. Source images were acquired in an oblique coronal direction encompassing the entire renal arteries from the iliac arteries to the renal hili. Based on the phantom study, the slice thickness must be 4 mm or less to demonstrate a significant stenotic portion (>50%) of the phantom simulating a transplanted renal artery. In 150 of 166 patients, excellent images of the renal arteries were obtained without overlap with the renal veins. Poor images were mainly due to inadequate timing of image acquisition. Suitable imaging parameters of TRE3DMRA for the evaluation of renal artery stenosis of transplanted kidneys could thus be determined. Using these parameters, in 150

  12. Evaluation of intense rainfall parameters interpolation methods for the Espírito Santo State

    Directory of Open Access Journals (Sweden)

    José Eduardo Macedo Pezzopane

    2009-12-01

    Full Text Available Intense rainfalls are often responsible for the occurrence of undesirable processes in agricultural and forest areas, such as surface runoff, soil erosion and flooding. Knowledge of the spatial distribution of intense rainfall is important for agricultural watershed management, soil conservation and the design of hydraulic structures. The present paper evaluated methods of spatial interpolation of the intense rainfall parameters (“K”, “a”, “b” and “c”) for the Espírito Santo State, Brazil. Real intense rainfall rates were compared with those calculated from the interpolated intense rainfall parameters, considering different durations and return periods. Inverse distance to the 5th power (IPD5) was the spatial interpolation method with the best performance for spatially interpolating the intense rainfall parameters.

  13. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    Science.gov (United States)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our in-house CNC (Computer Numerical Control) system. First, we use two NURBS curves to represent the tool-tip and tool-axis paths, respectively. From the feedrate and a Taylor-series expansion, servo-control signals for the 5 axes are obtained for each interpolation cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, together with the way the interpolator is integrated into our CNC system. The servo-control structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance machining accuracy and that the spline interpolator is feasible for a 5-axis CNC system.
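
    A hedged sketch of the curve-evaluation step underlying such an interpolator (not the paper's implementation; the control points, weights, knot vector and the fixed parameter increment standing in for a feedrate-based cycle are all assumptions): a planar NURBS curve is evaluated as the ratio of two B-splines built on weighted control points.

        # Evaluate a planar NURBS curve as a ratio of two B-splines.
        import numpy as np
        from scipy.interpolate import BSpline

        degree = 2
        ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])   # control points
        w = np.array([1.0, 0.7, 0.7, 1.0])                                   # NURBS weights
        knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)               # clamped knot vector

        num = BSpline(knots, ctrl * w[:, None], degree)    # spline of weighted points
        den = BSpline(knots, w, degree)                    # spline of weights

        def nurbs_point(u):
            return num(u) / den(u)

        for u in np.linspace(0.0, 1.0, 5):                 # one "interpolation cycle" per step
            x, y = nurbs_point(u)
            print(f"u = {u:.2f} -> tool tip at ({x:.3f}, {y:.3f})")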

  14. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  15. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.

    2014-06-11

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consists of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared error by months and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1 km² grid in the whole region. © 2014 Royal Meteorological Society.

  16. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.; Ugarte, M. D.; Goicoa, T.; Genton, Marc G.

    2014-01-01

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consists of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared error by months and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1 km² grid in the whole region. © 2014 Royal Meteorological Society.

  17. The effect of interpolation methods in temperature and salinity trends in the Western Mediterranean

    Directory of Open Access Journals (Sweden)

    M. VARGAS-YANEZ

    2012-04-01

    Full Text Available Temperature and salinity data in the historical record are scarce and unevenly distributed in space and time, and the estimation of linear trends is sensitive to several factors. In the case of the Western Mediterranean, previous works have studied the sensitivity of these trends to the use of bathythermograph data, the averaging methods, or the way in which gaps in time series are dealt with. In this work a new factor is analysed: the effect of data interpolation. Temperature and salinity time series are generated by averaging existing data over certain geographical areas and also by means of interpolation. Linear trends from both types of time series are compared. There are some differences between the two estimations for some layers and geographical areas, while in other cases the results are consistent. Results that depend neither on the use of interpolated or non-interpolated data nor on the data analysis methods can be considered robust. Results influenced by the interpolation process or by the factors analysed in previous sensitivity tests are not considered robust.

  18. The effect of propofol on CA1 pyramidal cell excitability and GABAA-mediated inhibition in the rat hippocampal slice.

    Science.gov (United States)

    Albertson, T E; Walby, W F; Stark, L G; Joy, R M

    1996-05-24

    An in vitro paired-pulse orthodromic stimulation technique was used to examine the effects of propofol on excitatory afferent terminals, CA1 pyramidal cells and recurrent collateral evoked inhibition in the rat hippocampal slice. Hippocampal slices 400 microns thick were perfused with oxygenated artificial cerebrospinal fluid, and electrodes were placed in the CA1 region to record extracellular field population spike (PS) or excitatory postsynaptic potential (EPSP) responses to stimulation of Schaffer collateral/commissural fibers. Gamma-aminobutyric acid (GABA)-mediated recurrent inhibition was measured using a paired-pulse technique. The major effect of propofol (7-28 microM) was a dose- and time-dependent increase in the intensity and duration of GABA-mediated inhibition. This propofol effect could be rapidly and completely reversed by exposure to known GABAA antagonists, including picrotoxin, bicuculline and pentylenetetrazol. It was also reversed by the chloride channel antagonist, 4,4'-diisothiocyanostilbene-2,2'-disulfonic acid (DIDS). It was not antagonized by central (flumazenil) or peripheral (PK11195) benzodiazepine antagonists. Reversal of endogenous inhibition was also noted with the antagonists picrotoxin and pentylenetetrazol. In input/output curves constructed using graded stimulus intensities, propofol caused only a small enhancement of EPSPs at higher stimulus intensities but had no effect on PS amplitudes. These studies are consistent with propofol acting through a GABAA-chloride channel mechanism to produce its effect on recurrent collateral evoked inhibition in the rat hippocampal slice.

  19. An adaptive interpolation scheme for molecular potential energy surfaces

    Science.gov (United States)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task—especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows to greatly reduce the number of sample points by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.
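
    A hedged toy version of the adaptive sampling idea described above (1-D, without the partition-of-unity step; the model function, tolerance and the use of the known function as the error estimate are all assumptions standing in for the paper's local estimator): a polyharmonic (cubic) RBF interpolant is refined by adding nodes where its error is largest.

        # Adaptive node refinement with a cubic polyharmonic-spline interpolant.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        f = lambda x: np.exp(-x**2) * np.cos(4 * x)        # stand-in "potential surface"

        x_nodes = np.linspace(-3.0, 3.0, 5)                # coarse initial sampling
        x_test = np.linspace(-3.0, 3.0, 601)
        for it in range(10):                               # adaptive refinement loop
            interp = RBFInterpolator(x_nodes[:, None], f(x_nodes), kernel="cubic")
            err = np.abs(interp(x_test[:, None]) - f(x_test))   # local error estimate
            print(f"iteration {it}: {len(x_nodes):2d} nodes, max error {err.max():.2e}")
            if err.max() < 1e-3:
                break
            x_nodes = np.sort(np.append(x_nodes, x_test[np.argmax(err)]))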

  20. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
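
    A hedged sketch of a kernel estimator for a missing daily value (not the study's regression formulation; the station coordinates, rainfall values and bandwidth are invented for illustration): donor stations are weighted by an Epanechnikov kernel of their distance to the target gauge.

        # Epanechnikov-kernel weighted estimate of a missing daily rainfall value.
        import numpy as np

        def epanechnikov(u):
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

        def estimate_missing(target_xy, stations_xy, rain, bandwidth_km=30.0):
            dist = np.linalg.norm(stations_xy - target_xy, axis=1)
            w = epanechnikov(dist / bandwidth_km)
            if w.sum() == 0.0:                       # no station inside the bandwidth
                return np.nan
            return np.sum(w * rain) / np.sum(w)

        stations_xy = np.array([[5.0, 2.0], [18.0, 7.0], [25.0, 20.0], [40.0, 35.0]])  # km
        rain_mm = np.array([10.0, 14.0, 3.0, 0.0])   # observed daily precipitation
        print(estimate_missing(np.array([12.0, 8.0]), stations_xy, rain_mm))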

  1. Optical histology: a method to visualize microvasculature in thick tissue sections of mouse brain.

    Directory of Open Access Journals (Sweden)

    Austin J Moy

    Full Text Available The microvasculature is the network of blood vessels involved in delivering nutrients and gases necessary for tissue survival. Study of the microvasculature often involves immunohistological methods. While useful for visualizing microvasculature at the µm scale in specific regions of interest, immunohistology is not well suited to visualize the global microvascular architecture in an organ. Hence, use of immunohistology precludes visualization of the entire microvasculature of an organ, and thus impedes study of global changes in the microvasculature that occur in concert with changes in tissue due to various disease states. Therefore, there is a critical need for a simple, relatively rapid technique that will facilitate visualization of the microvascular network of an entire tissue. The systemic vasculature of a mouse is stained with the fluorescent lipophilic dye DiI using a method called "vessel painting". The brain, or other organ of interest, is harvested and fixed in 4% paraformaldehyde. The organ is then sliced into 1 mm sections and optically cleared, or made transparent, using FocusClear, a proprietary optical clearing agent. After optical clearing, the DiI-labeled tissue microvasculature is imaged using confocal fluorescence microscopy and adjacent image stacks tiled together to produce a depth-encoded map of the microvasculature in the tissue slice. We demonstrated that the use of optical clearing enhances both the tissue imaging depth and the estimate of the vascular density. Using our "optical histology" technique, we visualized microvasculature in the mouse brain to a depth of 850 µm. Presented here are maps of the microvasculature in 1 mm thick slices of mouse brain. Using combined optical clearing and optical imaging techniques, we devised a methodology to enhance the visualization of the microvasculature in thick tissues. We believe this technique could potentially be used to generate a three-dimensional map of the

  2. Resource slicing in virtual wireless networks: a survey

    OpenAIRE

    Richart, Matias; Baliosian De Lazzari, Javier Ernesto; Serrat Fernández, Juan; Gorricho Moreno, Juan Luis

    2016-01-01

    New architectural and design approaches for radio access networks have appeared with the introduction of network virtualization in the wireless domain. One of these approaches splits the wireless network infrastructure into isolated virtual slices under their own management, requirements, and characteristics. Despite the advances in wireless virtualization, there are still many open issues regarding the resource allocation and isolation of wireless slices. Because of the dynamics and share...

  3. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords: smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  4. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  5. Oversampling of digitized images. [effects on interpolation in signal processing

    Science.gov (United States)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
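
    The point about letting the Sampling Theorem supply in-between values, rather than oversampling, can be sketched as follows (a minimal illustration with an invented band-limited tone and sample spacing; truncation of the sinc sum is ignored).

        # Whittaker-Shannon (sinc) reconstruction of in-between values.
        import numpy as np

        def sinc_interp(t_query, t_samples, x_samples, dt):
            """Reconstruct values at t_query from uniform samples spaced dt apart."""
            return np.array([np.sum(x_samples * np.sinc((tq - t_samples) / dt))
                             for tq in t_query])

        dt = 0.05                                    # sample interval, satisfies Nyquist
        t_samples = np.arange(0.0, 2.0, dt)
        x_samples = np.sin(2 * np.pi * 3.0 * t_samples)      # 3 Hz tone, band-limited

        t_query = np.array([0.512, 0.777, 1.303])             # arbitrary in-between times
        print(np.round(sinc_interp(t_query, t_samples, x_samples, dt), 4))
        print(np.round(np.sin(2 * np.pi * 3.0 * t_query), 4))  # true values for comparison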

  6. Verification-Driven Slicing of UML/OCL Models

    DEFF Research Database (Denmark)

    Shaikh, Asadullah; Clarisó Viladrosa, Robert; Wiil, Uffe Kock

    2010-01-01

    computational complexity can limit their scalability. In this paper, we consider a specific static model (UML class diagrams annotated with unrestricted OCL constraints) and a specific property to verify (satisfiability, i.e., “is it possible to create objects without violating any constraint?”). Current...... approaches to this problem have an exponential worst-case runtime. We propose a technique to improve their scalability by partitioning the original model into submodels (slices) which can be verified independently and where irrelevant information has been abstracted. The definition of the slicing procedure...

  7. Technique for image interpolation using polynomial transforms

    NARCIS (Netherlands)

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

    1993-01-01

    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

  8. A Parallel Strategy for High-speed Interpolation of CNC Using Data Space Constraint Method

    Directory of Open Access Journals (Sweden)

    Shuan-qiang Yang

    2013-12-01

    Full Text Available A high-speed interpolation scheme using parallel computing is proposed in this paper. The interpolation method is divided into two tasks, namely, the rough task executing in PC and the fine task in the I/O card. During the interpolation procedure, the double buffers are constructed to exchange the interpolation data between the two tasks. Then, the data space constraint method is adapted to ensure the reliable and continuous data communication between the two buffers. Therefore, the proposed scheme can be realized in the common distribution of the operation systems without real-time performance. The high-speed and high-precision motion control can be achieved as well. Finally, an experiment is conducted on the self-developed CNC platform, the test results are shown to verify the proposed method.

  9. LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in ENDF/B-VII Format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 Version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. The linear-linear law is not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear by an interval halving algorithm. Each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table.
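
    The interval-halving idea described above can be sketched compactly: keep splitting an interval until the midpoint of the original interpolation law is matched by linear-linear interpolation to within a tolerance. This is a hedged, simplified Python illustration of the idea, not the LINEAR/PREPRO source; the law, the energy bounds, and the tolerance are placeholders.

        def linearize(law, e_lo, e_hi, tol=1.0e-3):
            """Return (energy, cross section) points such that linear-linear
            interpolation reproduces `law` to within relative tolerance `tol`."""
            points = [(e_lo, law(e_lo))]

            def subdivide(a, b):
                mid = 0.5 * (a + b)
                exact = law(mid)
                linear = 0.5 * (law(a) + law(b))   # linear-linear value at the midpoint
                if abs(linear - exact) > tol * abs(exact):
                    subdivide(a, mid)
                    subdivide(mid, b)
                else:
                    points.append((b, law(b)))

            subdivide(e_lo, e_hi)
            return points

        # Illustrative use: linearize a smooth 1/v-like law between 1 eV and 100 eV.
        table = linearize(lambda e: e ** -0.5, 1.0, 100.0, tol=1.0e-2)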

  10. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    Full Text Available This paper presents an improved minimum error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm can find a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) with the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step-size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have proved the high accuracy and efficiency of the algorithm, which show that it is highly suited for real-time applications.

  11. Slice through an LHC bending magnet

    CERN Multimedia

    Slice through an LHC superconducting dipole (bending) magnet. The slice includes a cut through the magnet wiring (niobium titanium), the beampipe and the steel magnet yokes. Particle beams in the Large Hadron Collider (LHC) have the same energy as a high-speed train, squeezed ready for collision into a space narrower than a human hair. Huge forces are needed to control them. Dipole magnets (2 poles) are used to bend the paths of the protons around the 27 km ring. Quadrupole magnets (4 poles) focus the proton beams and squeeze them so that more particles collide when the beams’ paths cross. There are 1232 15m long dipole magnets in the LHC.

  12. Preparation of positional renal slices for study of cell-specific toxicity.

    Science.gov (United States)

    Ruegg, C E; Gandolfi, A J; Nagle, R B; Krumdieck, C L; Brendel, K

    1987-04-01

    To reduce structural complexity, rabbit kidneys were sliced perpendicular to their cortical-papillary axis to isolate four distinct cell groupings. This positional orientation allows identification of each renal cell type based on its location within the slice. A mechanical slicer was used to make several precision-cut slices rapidly from an oriented cylindrical core of renal tissue, with minimal tissue trauma. Slices were then submerged under a gently circulating oxygenated media in a fritted glass support system that maintains viability (intracellular K+/DNA ratio) and structural integrity (histology) for at least 30 h. A high dose of mercuric chloride (10(-3) M) was used to demonstrate the structural and biochemical changes of intoxicated slices. This method provides a controlled subchronic in vitro system for the study of the individual cell types involved in cell-specific renal toxicities and may also be a useful tool for addressing other pharmacological and physiological research questions.

  13. Automated pulmonary nodule volumetry with an optimized algorithm - accuracy at different slice thicknesses compared to unidimensional and bidimensional measurements

    International Nuclear Information System (INIS)

    Vogel, M.N.; Schmuecker, S.; Maksimovich, O.; Claussen, C.D.; Horger, M.; Vonthein, R.; Bethge, W.; Dicken, V.

    2008-01-01

    Purpose: This in-vivo study quantifies the accuracy of automated pulmonary nodule volumetry in reconstructions with different slice thicknesses (ST) of clinical routine CT scans. The accuracy of volumetry is compared to that of unidimensional and bidimensional measurements. Materials and Methods: 28 patients underwent contrast enhanced 64-row CT scans of the chest and abdomen obtained in the clinical routine. All scans were reconstructed with 1, 3, and 5 mm ST. Volume, maximum axial diameter, and areas following the guidelines of Response Evaluation Criteria in Solid Tumors (RECIST) and the World Health Organization (WHO) were measured in all 101 lesions located in the overlap region of both scans using the new software tool OncoTreat (MeVis, Germany). The accuracy of quantifications in both scans was evaluated using the Bland and Altman method. The reproducibility of measurements in dependence on the ST was compared using the likelihood ratio Chi-squared test. Results: A total of 101 nodules were identified in all patients. Segmentation was considered successful in 88.1% of the cases without local manual correction, which was deliberately not employed in this study. For 80 nodules all 6 measurements were successful. These were statistically evaluated. The volumes were in the range 0.1 to 15.6 ml. Of all 80 lesions, 34 (42%) had direct contact to the pleura parietalis or diaphragmalis and were termed parapleural, 32 (40%) were paravascular, 7 (9%) both parapleural and paravascular, and the remaining 21 (27%) were free standing in the lung. The trueness differed significantly (Chi-square 7.22, p value 0.027) and was best with an ST of 3 mm and worst at 5 mm. Differences in precision were not significant (Chi-square 5.20, p value 0.074). The limits of agreement for an ST of 3 mm were ± 17.5% of the mean volume for volumetry, ± 1.3 mm for maximum diameters, and ± 31.8% for the calculated areas. Conclusion: Automated volumetry of pulmonary nodules using Onco

  14. Comparison of elevation and remote sensing derived products as auxiliary data for climate surface interpolation

    Science.gov (United States)

    Alvarez, Otto; Guo, Qinghua; Klinger, Robert C.; Li, Wenkai; Doherty, Paul

    2013-01-01

    Climate models may be limited in their inferential use if they cannot be locally validated or do not account for spatial uncertainty. Much of the focus has gone into determining which interpolation method is best suited for creating gridded climate surfaces, for which a covariate such as elevation (Digital Elevation Model, DEM) is often used to improve the interpolation accuracy. One key area that little research has addressed is determining which covariate best improves the accuracy of the interpolation. In this study, a comprehensive evaluation was carried out to determine which covariates were most suitable for interpolating climatic variables (e.g. precipitation, mean temperature, minimum temperature, and maximum temperature). We compiled data for each climate variable from 1950 to 1999 from approximately 500 weather stations across the Western United States (32° to 49° latitude and −124.7° to −112.9° longitude). In addition, we examined the uncertainty of the interpolated climate surface. Specifically, Thin Plate Spline (TPS) was used as the interpolation method since it is one of the most popular interpolation techniques for generating climate surfaces. We considered several covariates, including DEM, slope, distance to coast (Euclidean distance), aspect, solar potential, radar, and two Normalized Difference Vegetation Index (NDVI) products derived from the Advanced Very High Resolution Radiometer (AVHRR) and the Moderate Resolution Imaging Spectroradiometer (MODIS). A tenfold cross-validation was applied to determine the uncertainty of the interpolation based on each covariate. In general, the leading covariate for precipitation was radar, while DEM was the leading covariate for maximum, mean, and minimum temperatures. A comparison to other products such as PRISM and WorldClim showed strong agreement across large geographic areas, but the climate surfaces generated in this study (ClimSurf) had greater variability at high elevation regions, such as in the Sierra
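
    As a rough illustration of spline interpolation with an elevation covariate, the sketch below uses SciPy's thin-plate-spline radial basis interpolator on synthetic station data; the station values, the lapse-rate-style temperature model, and the smoothing parameter are invented, and the real study additionally performed covariate selection and tenfold cross-validation.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)

        # Synthetic stations: (longitude, latitude, elevation) -> mean temperature.
        xyz = np.column_stack([rng.uniform(-124.7, -112.9, 500),   # longitude
                               rng.uniform(32.0, 49.0, 500),       # latitude
                               rng.uniform(0.0, 3000.0, 500)])     # DEM covariate (m)
        temp = 30.0 - 0.0065 * xyz[:, 2] + rng.normal(0.0, 0.5, 500)

        # Thin plate spline over space plus the elevation covariate.
        tps = RBFInterpolator(xyz, temp, kernel="thin_plate_spline", smoothing=1.0)

        # Predict at an arbitrary grid cell (longitude, latitude, DEM value).
        print(tps(np.array([[-118.0, 37.0, 2500.0]])))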

  15. An overview of 5G network slicing architecture

    Science.gov (United States)

    Chen, Qiang; Wang, Xiaolei; Lv, Yingying

    2018-05-01

    With the development of mobile communication technology, the traditional single network model has been unable to meet the needs of users, and the demand for differentiated services is increasing. To solve this problem, the fifth generation of mobile communication technology came into being. As one of the key technologies of 5G, the network slice is a core technology of network virtualization and software-defined networking, enabling network slices to flexibly provide one or more network services according to users' needs [1]. Each slice can independently tailor its network functions according to the requirements of the business scenario and the traffic model, and manage the layout of the corresponding network resources, in order to improve the flexibility of network services and the utilization of resources, and to enhance the robustness and reliability of the whole network [2].

  16. A novel lung slice system with compromised antioxidant defenses

    Energy Technology Data Exchange (ETDEWEB)

    Hardwick, S.J.; Adam, A.; Cohen, G.M. (Univ. of London (England)); Smith, L.L. (Imperial Chemical Industries PLC, Cheshire (England))

    1990-04-01

    In order to facilitate the study of oxidative stress in lung tissue, rat lung slices with impaired antioxidant defenses were prepared and used. Incubation of lung slices with the antineoplastic agent 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) (100 {mu}M) in an amino acid-rich medium for 45 min produced a near-maximal (approximately 85%), irreversible inhibition of glutathione reductase, accompanied by only a modest (approximately 15%) decrease in pulmonary nonprotein sulfhydryls (NPSH) and no alteration in intracellular ATP, NADP{sup +}, and NADPH levels. The amounts of NADP(H), ATP, and NPSH were stable over a 4-hr incubation period following the removal from BCNU. The viability of the system was further evaluated by measuring the rate of evolution of {sup 14}CO{sub 2} from D-({sup 14}C(U))-glucose. The rates of evolution were almost identical in the compromised system when compared with control slices over a 4-hr time period. By using slices with compromised oxidative defenses, preliminary results have been obtained with paraquat, nitrofurantoin, and 2,3-dimethoxy-1,4-naphthoquinone.

  17. A novel lung slice system with compromised antioxidant defenses

    International Nuclear Information System (INIS)

    Hardwick, S.J.; Adam, A.; Cohen, G.M.; Smith, L.L.

    1990-01-01

    In order to facilitate the study of oxidative stress in lung tissue, rat lung slices with impaired antioxidant defenses were prepared and used. Incubation of lung slices with the antineoplastic agent 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) (100 μM) in an amino acid-rich medium for 45 min produced a near-maximal (approximately 85%), irreversible inhibition of glutathione reductase, accompanied by only a modest (approximately 15%) decrease in pulmonary nonprotein sulfhydryls (NPSH) and no alteration in intracellular ATP, NADP+, and NADPH levels. The amounts of NADP(H), ATP, and NPSH were stable over a 4-hr incubation period following the removal from BCNU. The viability of the system was further evaluated by measuring the rate of evolution of 14CO2 from D-[14C(U)]-glucose. The rates of evolution were almost identical in the compromised system when compared with control slices over a 4-hr time period. By using slices with compromised oxidative defenses, preliminary results have been obtained with paraquat, nitrofurantoin, and 2,3-dimethoxy-1,4-naphthoquinone.

  18. Error analysis of the microradiographical determination of mineral content in mineralised tissue slices

    International Nuclear Information System (INIS)

    Jong, E. de J. de; Bosch, J.J. ten

    1985-01-01

    The microradiographic method, used to measure the mineral content in slices of mineralised tissues as a function of position, is analysed. The total error in the measured mineral content is split into systematic errors per microradiogram and random noise errors. These errors are measured quantitatively. Predominant contributions to systematic errors appear to be x-ray beam inhomogeneity, the determination of the step wedge thickness and stray light in the densitometer microscope, while noise errors are influenced by the choice of film, the optical film transmission of the microradiographic image and the area of the densitometer window. Optimisation criteria are given. The authors used these criteria, together with the requirement that the method be fast and easy, to build an optimised microradiographic system. (author)

  19. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    ... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

  20. Spatial and spectral interpolation of ground-motion intensity measure observations

    Science.gov (United States)

    Worden, Charles; Thompson, Eric M.; Baker, Jack W.; Bradley, Brendon A.; Luco, Nicolas; Wilson, David

    2018-01-01

    Following a significant earthquake, ground‐motion observations are available for a limited set of locations and intensity measures (IMs). Typically, however, it is desirable to know the ground motions for additional IMs and at locations where observations are unavailable. Various interpolation methods are available, but because IMs or their logarithms are normally distributed, spatially correlated, and correlated with each other at a given location, it is possible to apply the conditional multivariate normal (MVN) distribution to the problem of estimating unobserved IMs. In this article, we review the MVN and its application to general estimation problems, and then apply the MVN to the specific problem of ground‐motion IM interpolation. In particular, we present (1) a formulation of the MVN for the simultaneous interpolation of IMs across space and IM type (most commonly, spectral response at different oscillator periods) and (2) the inclusion of uncertain observation data in the MVN formulation. These techniques, in combination with modern empirical ground‐motion models and correlation functions, provide a flexible framework for estimating a variety of IMs at arbitrary locations.
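
    The conditional MVN update at the heart of this approach is the standard Gaussian conditioning formula: given observations y2, the unobserved IMs y1 have mean mu1 + S12 S22^{-1} (y2 - mu2) and covariance S11 - S12 S22^{-1} S21. A generic NumPy sketch follows; the covariance blocks are placeholders that, in practice, would come from an empirical ground-motion model and spatial/spectral correlation functions, and this is not the authors' implementation.

        import numpy as np

        def conditional_mvn(mu1, mu2, S11, S12, S22, y2):
            """Conditional distribution of y1 | y2 for a jointly normal vector.
            mu1, mu2: prior means; S11, S12, S22: covariance blocks; y2: observations."""
            w = np.linalg.solve(S22, y2 - mu2)
            mu_cond = mu1 + S12 @ w
            S_cond = S11 - S12 @ np.linalg.solve(S22, S12.T)
            return mu_cond, S_cond

        # Illustrative 1-target, 2-station example with placeholder covariances.
        mu1, mu2 = np.array([0.0]), np.array([0.0, 0.0])
        S11 = np.array([[0.30]])
        S12 = np.array([[0.20, 0.10]])
        S22 = np.array([[0.30, 0.05], [0.05, 0.30]])
        print(conditional_mvn(mu1, mu2, S11, S12, S22, y2=np.array([0.5, -0.2])))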

  1. Homography Propagation and Optimization for Wide-Baseline Street Image Interpolation.

    Science.gov (United States)

    Nie, Yongwei; Zhang, Zhensong; Sun, Hanqiu; Su, Tan; Li, Guiqing

    2017-10-01

    Wide-baseline street image interpolation is useful but very challenging. Existing approaches either rely on heavyweight 3D reconstruction or computationally intensive deep networks. We present a lightweight and efficient method which uses simple homography computing and refining operators to estimate piecewise smooth homographies between input views. To achieve the goal, we show how to combine homography fitting and homography propagation together based on reliable and unreliable superpixel discrimination. Such a combination, other than using homography fitting only, dramatically increases the accuracy and robustness of the estimated homographies. Then, we integrate the concepts of homography and mesh warping, and propose a novel homography-constrained warping formulation which enforces smoothness between neighboring homographies by utilizing the first-order continuity of the warped mesh. This further eliminates small artifacts of overlapping, stretching, etc. The proposed method is lightweight and flexible, allows wide-baseline interpolation. It improves the state of the art and demonstrates that homography computation suffices for interpolation. Experiments on city and rural datasets validate the efficiency and effectiveness of our method.

  2. The crustal thickness of Australia

    Science.gov (United States)

    Clitheroe, G.; Gudmundsson, O.; Kennett, B.L.N.

    2000-01-01

    We investigate the crustal structure of the Australian continent using the temporary broadband stations of the Skippy and Kimba projects and permanent broadband stations. We isolate near-receiver information, in the form of crustal P-to-S conversions, using the receiver function technique. Stacked receiver functions are inverted for S velocity structure using a Genetic Algorithm approach to Receiver Function Inversion (GARFI). From the resulting velocity models we are able to determine the Moho depth and to classify the width of the crust-mantle transition for 65 broadband stations. Using these results and 51 independent estimates of crustal thickness from refraction and reflection profiles, we present a new, improved, map of Moho depth for the Australian continent. The thinnest crust (25 km) occurs in the Archean Yilgarn Craton in Western Australia; the thickest crust (61 km) occurs in Proterozoic central Australia. The average crustal thickness is 38.8 km (standard deviation 6.2 km). Interpolation error estimates are made using kriging and fall into the range 2.5-7.0 km. We find generally good agreement between the depth to the seismologically defined Moho and xenolith-derived estimates of crustal thickness beneath northeastern Australia. However, beneath the Lachlan Fold Belt the estimates are not in agreement, and it is possible that the two techniques are mapping differing parts of a broad Moho transition zone. The Archean cratons of Western Australia appear to have remained largely stable since cratonization, reflected in only slight variation of Moho depth. The largely Proterozoic center of Australia shows relatively thicker crust overall as well as major Moho offsets. We see evidence of the margin of the contact between the Precambrian craton and the Tasman Orogen, referred to as the Tasman Line. Copyright 2000 by the American Geophysical Union.

  3. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of the ultrasonic phased array focusing time delay, the original interpolation Cascaded Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the computation remains very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
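
    For orientation, the sketch below implements a textbook single-channel CIC interpolator (comb stages at the low rate, zero-stuffing, integrator stages at the high rate); it is only a hedged illustration of the filter class the paper builds on, not the 8× parallel FPGA decomposition or the compensation filter described above.

        import numpy as np

        def cic_interpolate(x, R=8, N=3, M=1):
            """Textbook CIC interpolator: N comb stages at the input rate,
            zero-stuff by R, then N integrator stages at the output rate."""
            y = np.asarray(x, dtype=float)
            for _ in range(N):                               # comb stages: y[n] - y[n-M]
                y = y - np.concatenate([np.zeros(M), y[:-M]])
            up = np.zeros(len(y) * R)
            up[::R] = y                                      # zero-stuffing upsampler
            for _ in range(N):                               # integrator stages
                up = np.cumsum(up)
            return up * R / (R * M) ** N                     # remove the DC gain (R*M)^N / R

        # Illustrative use: interpolate a short sine burst by a factor of 8.
        y8 = cic_interpolate(np.sin(np.linspace(0.0, 3.0, 40)), R=8, N=3)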

  4. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter

    Directory of Open Access Journals (Sweden)

    Peilu Liu

    2017-10-01

    Full Text Available In order to improve the accuracy of the ultrasonic phased array focusing time delay, the original interpolation Cascaded Integrator-Comb (CIC) filter was analyzed and an 8× interpolation CIC filter parallel algorithm was proposed, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarized the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and established an ultrasonic phased array focusing time delay system based on the 8× interpolation CIC filter parallel algorithm. By improving the algorithmic structure, additions were reduced by 12.5% and multiplications by 29.2%, while the computation remains very fast. Considering the existing problems of the CIC filter, we compensated the CIC filter; the compensated CIC filter's pass band is flatter, the transition band becomes steeper, and the stop band attenuation increases. Finally, we verified the feasibility of this algorithm on a Field Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo becomes 1 ns. Simulation and experimental results both show that the proposed algorithm is highly feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.

  5. Equilibrium initial data for moving puncture simulations: the stationary 1 + log slicing

    International Nuclear Information System (INIS)

    Baumgarte, T W; Matera, K; Etienne, Z B; Liu, Y T; Shapiro, S L; Taniguchi, K; Murchadha, N O

    2009-01-01

    We discuss a 'stationary 1 + log' slicing condition for the construction of solutions to Einstein's constraint equations. For stationary spacetimes, these initial data give a stationary foliation when evolved with 'moving puncture' gauge conditions that are often used in black hole evolutions. The resulting slicing is time independent and agrees with the slicing generated by being dragged along a timelike Killing vector of the spacetime. When these initial data are evolved with moving puncture gauge conditions, numerical errors arising from coordinate evolution should be minimized. While these properties appear very promising, suggesting that this slicing condition should be an attractive alternative to, for example, maximal slicing, we demonstrate in this paper that solutions can be constructed only for a small class of problems. For binary black hole initial data, in particular, it is often assumed that there exists an approximate helical Killing vector that generates the binary's orbit. We show that 1 + log slices that are stationary with respect to such a helical Killing vector cannot be asymptotically flat, unless the spacetime possesses an additional axial Killing vector.

  6. Electron slicing for the generation of tunable femtosecond soft x-ray pulses from a free electron laser and slice diagnostics

    Directory of Open Access Journals (Sweden)

    S. Di Mitri

    2013-04-01

    Full Text Available We present the experimental results of femtosecond slicing of an ultrarelativistic, high brightness electron beam with a collimator. In contrast to some qualitative considerations reported in Phys. Rev. Lett. 92, 074801 (2004), we first demonstrate that the collimation process preserves the slice beam quality, in agreement with our theoretical expectations, and that the collimation is compatible with the operation of a linear accelerator in terms of beam transport, radiation dose, and collimator heating. Accordingly, the collimated beam can be used for the generation of stable femtosecond soft x-ray pulses of tunable duration, from either a self-amplified spontaneous emission or an externally seeded free electron laser. The proposed method also turns out to be a more compact and cheaper solution for electron slice diagnostics than the commonly used radio frequency deflecting cavities and has minimal impact on the machine design.

  7. Two-dimensional interpolation with experimental data smoothing

    International Nuclear Information System (INIS)

    Trejbal, Z.

    1989-01-01

    A method of two-dimensional interpolation with smoothing of statistically deflected points is developed for the processing of magnetic field measurements at the U-120M cyclotron. Mathematical statement of the initial requirements and the final result of the relevant algebraic transformations are given. 3 refs

  8. Comparison of Spatial Interpolation Schemes for Rainfall Data and Application in Hydrological Modeling

    Directory of Open Access Journals (Sweden)

    Tao Chen

    2017-05-01

    Full Text Available The spatial distribution of precipitation is an important aspect of water-related research. The use of different interpolation schemes in the same catchment may cause large differences and deviations from the actual spatial distribution of rainfall. Our study analyzes different methods of spatial rainfall interpolation at annual, daily, and hourly time scales to provide a comprehensive evaluation. An improved regression-based scheme is proposed using principal component regression with residual correction (PCRR and is compared with inverse distance weighting (IDW and multiple linear regression (MLR interpolation methods. In this study, the meso-scale catchment of the Fuhe River in southeastern China was selected as a typical region. Furthermore, a hydrological model HEC-HMS was used to calculate streamflow and to evaluate the impact of rainfall interpolation methods on the results of the hydrological model. Results show that the PCRR method performed better than the other methods tested in the study and can effectively eliminate the interpolation anomalies caused by terrain differences between observation points and surrounding areas. Simulated streamflow showed different characteristics based on the mean, maximum, minimum, and peak flows. The results simulated by PCRR exhibited the lowest streamflow error and highest correlation with measured values at the daily time scale. The application of the PCRR method is found to be promising because it considers multicollinearity among variables.
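
    Of the schemes compared above, inverse distance weighting is the simplest to state, so a minimal NumPy sketch is given below for scattered rain gauges; the power parameter and the gauge values are illustrative, and this is not the PCRR scheme proposed in the paper.

        import numpy as np

        def idw(xy_obs, z_obs, xy_new, power=2.0):
            """Inverse distance weighting of scattered observations z_obs at xy_obs
            onto query points xy_new."""
            d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
            d = np.maximum(d, 1e-12)            # avoid division by zero at gauge locations
            w = 1.0 / d ** power
            return (w @ z_obs) / w.sum(axis=1)

        # Illustrative use: three gauges interpolated to one grid cell.
        gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        rain = np.array([12.0, 20.0, 16.0])
        print(idw(gauges, rain, np.array([[5.0, 5.0]])))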

  9. Image interpolation and denoising for division of focal plane sensors using Gaussian processes.

    Science.gov (United States)

    Gilboa, Elad; Cunningham, John P; Nehorai, Arye; Gruev, Viktor

    2014-06-16

    Image interpolation and denoising are important techniques in image processing. These methods are inherent to digital image acquisition as most digital cameras are composed of a 2D grid of heterogeneous imaging sensors. Current polarization imagers employ four different pixelated polarization filters, commonly referred to as division of focal plane polarization sensors. The sensors capture only partial information of the true scene, leading to a loss of spatial resolution as well as inaccuracy of the captured polarization information. Interpolation is a standard technique to recover the missing information and increase the accuracy of the captured polarization information. Here we focus specifically on Gaussian process regression as a way to perform statistical image interpolation, where estimates of sensor noise are used to improve the accuracy of the estimated pixel information. We further exploit the inherent grid structure of this data to create a fast exact algorithm that operates in O(N^(3/2)) (vs. the naive O(N^3)), thus making the Gaussian process method computationally tractable for image data. This modeling advance and the enabling computational advance combine to produce significant improvements over previously published interpolation methods for polarimeters, which are most pronounced in cases of low signal-to-noise ratio (SNR). We provide the comprehensive mathematical model as well as experimental results of the GP interpolation performance for division of focal plane polarimeters.
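
    A hedged, small-scale sketch of the same idea follows: Gaussian-process interpolation of scattered pixel observations with an explicit sensor-noise term, using scikit-learn's generic O(N^3) regressor rather than the authors' fast grid-structured solver. The image, the kernel hyperparameters, and the noise level are invented for illustration.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(1)

        # Observed pixels (row, col) from one polarization channel, with sensor noise.
        obs_rc = rng.integers(0, 32, size=(200, 2)).astype(float)
        obs_val = (np.cos(obs_rc[:, 0] / 5.0) + np.sin(obs_rc[:, 1] / 7.0)
                   + rng.normal(0.0, 0.05, 200))

        # RBF spatial correlation plus a white-noise term for the sensor noise estimate.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=3.0) + WhiteKernel(0.05 ** 2),
                                      normalize_y=True)
        gp.fit(obs_rc, obs_val)

        # Interpolate the full 32x32 grid and obtain per-pixel uncertainty.
        rr, cc = np.meshgrid(np.arange(32.0), np.arange(32.0), indexing="ij")
        grid = np.column_stack([rr.ravel(), cc.ravel()])
        mean, std = gp.predict(grid, return_std=True)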

  10. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

    2010-01-01

    Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linke...

  11. (Non)perturbative gravity, nonlocality, and nice slices

    International Nuclear Information System (INIS)

    Giddings, Steven B.

    2006-01-01

    Perturbative dynamics of gravity is investigated for high-energy scattering and in black hole backgrounds. In the latter case, a straightforward perturbative analysis fails, in a close parallel to the failure of the former when the impact parameter reaches the Schwarzschild radius. This suggests a flaw in a semiclassical description of physics on spatial slices that intersect both outgoing Hawking radiation and matter that has carried information into a black hole; such slices are instrumental in a general argument for black hole information loss. This indicates a possible role for the proposal that nonperturbative gravitational physics is intrinsically nonlocal

  12. The clinical efficacy of 1 mm-slice CT of the middle ear

    International Nuclear Information System (INIS)

    Noda, Kazuhiro; Noiri, Teruhisa; Doi, Katsumi; Koizuka, Izumi; Tanaka, Hisashi; Mishiro, Yasuo; Okumura, Shin-ichi; Kubo, Takeshi

    2000-01-01

    The efficacy of the preoperative 1 mm-slice CT for evaluating the condition of the ossicular chain and the facial canal was assessed. CT findings were compared with the operative findings of the middle ear in 120 cases of chronic otitis media or cholesteatoma that underwent tympanoplasty. The reliability of 1 mm-slice CT in detecting any defect of the ossicular chain was much superior to that of 2 mm-slice CT previously reported, and the difference between them is essential for preoperative information. On the other hand, slices thinner than 1 mm may be unnecessary, especially in routine use. (author)

  13. The clinical efficacy of 1 mm-slice CT of the middle ear

    Energy Technology Data Exchange (ETDEWEB)

    Noda, Kazuhiro; Noiri, Teruhisa [Kawanishi Municipal Hospital, Hyogo (Japan); Doi, Katsumi; Koizuka, Izumi; Tanaka, Hisashi; Mishiro, Yasuo; Okumura, Shin-ichi; Kubo, Takeshi

    2000-02-01

    The efficacy of the preoperative 1 mm-slice CT for evaluating the condition of the ossicular chain and the facial canal was assessed. CT findings were compared with the operative findings of the middle ear in 120 cases of chronic otitis media or cholesteatoma that underwent tympanoplasty. The reliability of 1 mm-slice CT in detecting any defect of the ossicular chain was much superior to that of 2 mm-slice CT previously reported, and the difference between them is essential for preoperative information. On the other hand, slices thinner than 1 mm may be unnecessary, especially in routine use. (author)

  14. Bi-local baryon interpolating fields with two flavors

    Energy Technology Data Exchange (ETDEWEB)

    Dmitrasinovic, V. [Belgrade University, Institute of Physics, Pregrevica 118, Zemun, P.O. Box 57, Beograd (RS); Chen, Hua-Xing [Institutos de Investigacion de Paterna, Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Valencia (Spain); Peking University, Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)

    2011-02-15

    We construct bi-local interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We use the restrictions following from the Pauli principle to derive relations/identities among the baryon operators with identical quantum numbers. Such relations that follow from the combined spatial, Dirac, color, and isospin Fierz transformations may be called the (total/complete) Fierz identities. These relations reduce the number of independent baryon operators with any given spin and isospin. We also study the Abelian and non-Abelian chiral transformation properties of these fields and place them into baryon chiral multiplets. Thus we derive the independent baryon interpolating fields with given values of spin (Lorentz group representation), chiral symmetry (U{sub L}(2) x U{sub R}(2) group representation) and isospin appropriate for the first angular excited states of the nucleon. (orig.)

  15. Angular interpolations and splice options for three-dimensional transport computations

    International Nuclear Information System (INIS)

    Abu-Shumays, I.K.; Yehnert, C.E.

    1996-01-01

    New, accurate and mathematically rigorous angular interpolation strategies are presented. These strategies preserve flow and directionality separately over each octant of the unit sphere, and are based on a combination of spherical harmonics expansions and least squares algorithms. Details of a three-dimensional to three-dimensional (3-D to 3-D) splice method which utilizes the new angular interpolations are summarized. The method has been implemented in a multidimensional discrete ordinates transport computer program. Various features of the splice option are illustrated by several applications to a benchmark Dog-Legged Void Neutron (DLVN) streaming and transport experimental assembly

  16. Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)

    Directory of Open Access Journals (Sweden)

    Maria Cristina Floreno

    1996-05-01

    Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is in a preliminary stage. What has been already implemented consists of a set of routines for elementary operations, optimized functions evaluation, interpolation and regression. Some of these have been applied to real problems.This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with graphical user interface allowing the use of such routines.

  17. Interpolation/penalization applied for strength design of 3D thermoelastic structures

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels L.

    2012-01-01

    compliance. This is proved for thermoelastic structures by sensitivity analysis of compliance that facilitates localized determination of sensitivities, and the compliance is not identical to the total elastic energy (twice strain energy). An explicit formula for the difference is derived and numerically...... parameter interpolation in explicit form is preferred, and the influence of interpolation on compliance sensitivity analysis is included. For direct strength maximization the sensitivity analysis of local von Mises stresses is demanding. An applied recursive procedure to obtain uniform energy density...

  18. Slice through an LHC focusing magnet

    CERN Multimedia

    Slice through an LHC superconducting quadrupole (focusing) magnet. The slice includes a cut through the magnet wiring (niobium titanium), the beampipe and the steel magnet yokes. Particle beams in the Large Hadron Collider (LHC) have the same energy as a high-speed train, squeezed ready for collision into a space narrower than a human hair. Huge forces are needed to control them. Dipole magnets (2 poles) are used to bend the paths of the protons around the 27 km ring. Quadrupole magnets (4 poles) focus the proton beams and squeeze them so that more particles collide when the beams’ paths cross. Bringing beams into collision requires a precision comparable to making two knitting needles collide, launched from either side of the Atlantic Ocean.

  19. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    Science.gov (United States)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With
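
    The core transformation described above, interpolating with a cubic spline in the log-abscissa domain and returning points that are evenly spaced in log(t) and hence increasingly spaced in t, can be sketched as follows. This is an illustrative Python analogue, not the FORTRAN 77 DATASPACE source, and the synthetic relaxation data are assumptions.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def log_respace(t, y, n_points=50):
            """Resample (t, y) so the abscissa is evenly spaced in log10(t),
            i.e. increasingly spaced in t. Assumes t is positive and sorted."""
            log_t = np.log10(t)
            spline = CubicSpline(log_t, y)                  # interpolate in the log domain
            log_t_new = np.linspace(log_t[0], log_t[-1], n_points)
            return 10.0 ** log_t_new, spline(log_t_new)

        # Illustrative use: variably spaced stress-relaxation data E(t).
        t = np.sort(np.random.default_rng(2).uniform(0.1, 1000.0, 80))
        E = 100.0 * t ** -0.1
        t_new, E_new = log_respace(t, E)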

  20. Free-breathing cardiac MR stress perfusion with real-time slice tracking.

    Science.gov (United States)

    Basha, Tamer A; Roujol, Sébastien; Kissinger, Kraig V; Goddu, Beth; Berg, Sophie; Manning, Warren J; Nezafat, Reza

    2014-09-01

    To develop a free-breathing cardiac MR perfusion sequence with slice tracking for use after physical exercise. We propose to use a leading navigator, placed immediately before each 2D slice acquisition, for tracking the respiratory motion and updating the slice location in real-time. The proposed sequence was used to acquire CMR perfusion datasets in 12 healthy adult subjects and 8 patients. Images were compared with the conventional perfusion (i.e., without slice tracking) results from the same subjects. The location and geometry of the myocardium were quantitatively analyzed, and the perfusion signal curves were calculated from both sequences to show the efficacy of the proposed sequence. The proposed sequence was significantly better compared with the conventional perfusion sequence in terms of qualitative image scores. Changes in the myocardial location and geometry decreased by 50% in the slice tracking sequence. Furthermore, the proposed sequence had signal curves that are smoother and less noisy. The proposed sequence significantly reduces the effect of the respiratory motion on the image acquisition in both rest and stress perfusion scans. Copyright © 2013 Wiley Periodicals, Inc.

  1. The use of transport and diffusion equations in the three-dimensional reconstruction of computerized tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Pires, Sandrerley Ramos, E-mail: sandrerley@eee.ufg.br [Escola de Engenharia Eletrica e de Computacao - EEEC, Universidade Federal de Goias - UFG, Goiania, GO (Brazil); Flores, Edna Lucia; Pires, Dulcineia Goncalves F.; Carrijo, Gilberto Arantes; Veiga, Antonio Claudio Paschoarelli [Faculdade de Engenharia Eletrica - FEELT, Universidade Federal de Uberlandia - UFU, Uberlandia, MG (Brazil); Barcelos, Celia Aparecida Z. [Faculdade de Matematica, Universidade Federal de Uberlandia - UFU, Uberlandia, MG (Brazil)

    2012-09-15

    The visualization of a computerized tomographic (CT) exam in 3D increases the quality of the medical diagnosis and, consequently, the probability of success in the treatment. To obtain a high quality image it is necessary to obtain slices which are close to one another. Motivated by the goal of reaching an improved balance between the number of slices and visualization quality, this research work presents a digital inpainting technique of 3D interpolation for CT slices used in the visualization of human body structures. The inpainting is carried out via non-linear partial differential equations (PDEs). PDEs have been used, in the image-processing context, to fill in damaged regions of a digital 2D image. Inspired by this idea, this article proposes an interpolation method for filling in the empty regions between the CT slices. To do so, considering the high similarity between two consecutive real slices, the first step of the proposed method is to create the virtual slices. The virtual slices contain all the similarity between the intercalated slices and, where there are no similarities between real slices, the virtual slices will contain undefined portions. In the second step of the proposed method, the created virtual slices are used together with the real slice images in the reconstruction of the structure in three dimensions mapped by the exam. The proposed method is capable of reconstructing the curvatures of the patient's internal structures without using slices that are close to one another. The experiments carried out show the proposed method's efficiency. (author)
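
    As a toy illustration of the two-step idea, building a virtual slice from the regions where consecutive slices agree and then filling the undefined regions with a PDE, the sketch below uses a plain linear diffusion (heat-equation) fill as a stand-in for the authors' non-linear PDE inpainting; the agreement threshold, the iteration count, and the periodic boundaries implied by np.roll are simplifying assumptions.

        import numpy as np

        def virtual_slice(s1, s2, agree_tol=20.0, n_iter=200):
            """Create an intermediate slice between CT slices s1 and s2 (HU arrays).
            Pixels where the slices agree are averaged; the rest are filled by
            iterating a simple diffusion step (a linear stand-in for PDE inpainting)."""
            mid = 0.5 * (s1 + s2)
            known = np.abs(s1 - s2) < agree_tol        # similar regions define the virtual slice
            out = np.where(known, mid, 0.0)
            for _ in range(n_iter):
                # 4-neighbour average (diffusion step), keeping known pixels fixed.
                avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                              np.roll(out, 1, 1) + np.roll(out, -1, 1))
                out = np.where(known, mid, avg)
            return out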

  2. Blind Authentication Using Periodic Properties of Interpolation

    Czech Academy of Sciences Publication Activity Database

    Mahdian, Babak; Saic, Stanislav

    2008-01-01

    Roč. 3, č. 3 (2008), s. 529-538 ISSN 1556-6013 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : image forensics * digital forgery * image tampering * interpolation detection * resampling detection Subject RIV: IN - Informatics, Computer Science Impact factor: 2.230, year: 2008

  3. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Directory of Open Access Journals (Sweden)

    Mathieu Lepot

    2017-10-01

    Full Text Available A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are many methods and criteria to estimate efficiencies of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, while they are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
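
    A minimal example of the kind of gap filling the review surveys, using time-weighted linear interpolation from pandas on a series with a simulated gap; the series is synthetic and the method is only one of the many options discussed above.

        import numpy as np
        import pandas as pd

        # A 5-minute series with a gap, interpolated linearly in time.
        idx = pd.date_range("2017-01-01", periods=12, freq="5min")
        s = pd.Series(np.arange(12, dtype=float), index=idx)
        s.iloc[4:7] = np.nan                       # simulate missing records
        filled = s.interpolate(method="time")      # one of many methods reviewed above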

  4. Comparison of spatial interpolation techniques to predict soil properties in the colombian piedmont eastern plains

    Directory of Open Access Journals (Sweden)

    Mauricio Castro Franco

    2017-07-01

    Full Text Available Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly variable and complex nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information on these effects, the soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indices calculated from a digital elevation model (DEM). The "Random forest" algorithm was used for selecting the most important terrain index for each soil property. Error metrics were used to validate interpolations against cross validation. Results: The results support the underlying assumption that the conditioned Latin hypercube (HCLc) adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in the prediction of most of the evaluated soil properties. Conclusions: Mixed interpolation techniques that use auxiliary soil information and terrain indices provided a significant improvement in the prediction of soil properties, in comparison with the other techniques.

  5. Excitatory and inhibitory pathways modulate kainate excitotoxicity in hippocampal slice cultures

    DEFF Research Database (Denmark)

    Casaccia-Bonnefil, P; Benedikz, Eirikur; Rai, R

    1993-01-01

    In organotypic hippocampal slice cultures, kainate (KA) specifically induces cell loss in the CA3 region while N-methyl-D-aspartate induces cell loss in the CA1 region. The sensitivity of slice cultures to KA toxicity appears only after 2 weeks in vitro which parallels the appearance of mossy...... fibers. KA toxicity is potentiated by co-application with the GABA-A antagonist, picrotoxin. These data suggest that the excitotoxicity of KA in slice cultures is modulated by both excitatory and inhibitory synapses....

  6. 64-slice multidetector coronary CT angiography: in vitro evaluation of 68 different stents

    International Nuclear Information System (INIS)

    Maintz, David; Seifarth, Harald; Rink, Michael; Oezguen, Murat; Heindel, Walter; Fischbach, Roman; Raupach, Rainer; Flohr, Thomas; Sommer, Torsten

    2006-01-01

    The purpose of this study was to test a large sample of different coronary artery stents using four image reconstruction approaches with respect to lumen visualization, lumen attenuation, and image noise in 64-slice multidetector-row computed tomography (MDCT) in vitro and to provide a catalogue of currently used coronary artery stents when imaged with state-of-the-art MDCT. We examined 68 different coronary artery stents (57 stainless steel, four cobalt-chromium, one cobalt-alloy, two nitinol, four tantalum) in a coronary artery phantom (vessel diameter 3 mm, intravascular attenuation 250 HU, extravascular density -70). Stents were imaged in axial orientation with standard parameters: 32x0.6 collimation, pitch 0.24, 680 mAs, 120 kV, rotation time 0.37 s. Four different image reconstructions were obtained with varying convolution kernels and section thicknesses: (1) soft, 0.6 mm, (2) soft, 0.75 mm, (3) medium soft, 0.6 mm, and (4) stent-optimized sharp, 0.6 mm. To evaluate the visualization characteristics of the stents, the lumen diameter, intraluminal density and noise were measured. The high-resolution kernel offered significantly better average lumen visualization (57% ±10%) and more realistic lumen attenuation (222 HU ±66 HU) at the expense of increased noise (15.3 HU ±3.7 HU) compared with the soft and medium-soft CT angiography (CTA) protocol (p<0.001 for all). Stents with a lumen visibility of more than 66% were: Arthos pico, Driver, Flex, Nexus2, S7, Tenax complete, Vision (all 67%), Symbiot, Teneo (70%), and Radius (73%). Only ten stents showed a lumen visibility of less than 50%. Stent lumen visibility largely varies depending on the stent type. Even with the improved spatial resolution of 64-slice CT, a stent-optimized kernel remains beneficial for stent visualization when compared with the standard medium-soft CTA protocol. Using 64-slice CT and a high-resolution kernel, the majority of stent products show a lumen visibility of more than 50% of the stent

  7. The diagnostic value of multi-slice spiral CT virtual bronchoscopy in tracheal and bronchial disease

    International Nuclear Information System (INIS)

    Han Ying; Ma Daqing

    2006-01-01

    Objective: To assess the diagnostic value of multi-slice spiral CT virtual bronchoscopy (CTVB) in tracheal and bronchial disease. Methods: Forty-two patients, including central lung cancer (n=35), endobronchial tuberculosis (n=3), intrabronchial benign tumor (n=3), and intrabronchial foreign body (n=1), were examined with multi-slice spiral CT. All final diagnoses were proved by pathology, except for 1 patient with an endoluminal foreign body, which was proved clinically. All patients were scanned on a GE Lightspeed 99 scanner, using 10 mm collimation and a pitch of 1.35, and images were reconstructed at 1 mm intervals with 1.25 mm thickness. The transverse CT and virtual bronchoscopy images of the chest were reviewed by two separate radiologists who were familiar with the tracheal and bronchial anatomy. Results: Among the 42 patients, tumors of the tracheal and bronchial lumen appeared as masses in 22 of 35 patients with central lung cancer, bronchial stenosis was found in 13 of 35 patients with central lung cancer, and bronchial wall thickening was revealed on transverse CT in all 35 cases. The 3 patients with endobronchial tuberculosis showed bronchial lumen narrowing on CTVB; bronchial wall thickening was revealed on transverse CT, and the thickened segment of the wall was long. The 3 patients with intrabronchial benign tumor showed nodules in the tracheal and bronchial lumen on CTVB, without wall thickening on transverse CT. CTVB could detect the occlusion of the bronchial lumen in the 1 patient with an intrabronchial foreign body and was able to visualize the areas beyond the stenosis; the bronchial wall showed no thickening on transverse CT. Conclusion: Multi-slice spiral CTVB can reflect the morphology of tracheal and bronchial disease. Combined with transverse CT, it can provide diagnostic reference value for bronchial disease. (authors)

  8. Investigation of lung nodule detectability in low-dose 320-slice computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, J. D.; Paul, N. S.; Siewerdsen, J. H. [Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Department of Medical Imaging, Toronto General Hospital, Toronto, Ontario M5G 2C6 (Canada); Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario M5G 2M9 (Canada); Ontario Cancer Institute, Princess Margaret Hospital, Toronto, Ontario M5G 2M9 (Canada) and Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 2M9 (Canada)

    2009-05-15

    Low-dose imaging protocols in chest CT are important in the screening and surveillance of suspicious and indeterminate lung nodules. Techniques that maintain nodule detectability yet permit dose reduction, particularly for large body habitus, were investigated. The objective of this study was to determine the extent to which radiation dose can be minimized while maintaining diagnostic performance through knowledgeable selection of reconstruction techniques. A 320-slice volumetric CT scanner (Aquilion ONE, Toshiba Medical Systems) was used to scan an anthropomorphic phantom at doses ranging from {approx}0.1 mGy up to that typical of low-dose CT (LDCT, {approx}5 mGy) and diagnostic CT ({approx}10 mGy). Radiation dose was measured via Farmer chamber and MOSFET dosimetry. The phantom presented simulated nodules of varying size and contrast within a heterogeneous background, and chest thickness was varied through addition of tissue-equivalent bolus about the chest. Detectability of a small solid lung nodule (3.2 mm diameter, -37 HU, typically the smallest nodule of clinical significance in screening and surveillance) was evaluated as a function of dose, patient size, reconstruction filter, and slice thickness by means of nine-alternative forced-choice (9AFC) observer tests to quantify nodule detectability. For a given reconstruction filter, nodule detectability decreased sharply below a threshold dose level due to increased image noise, especially for large body size. However, nodule detectability could be maintained at lower doses through knowledgeable selection of (smoother) reconstruction filters. For large body habitus, optimal filter selection reduced the dose required for nodule detection by up to a factor of {approx}3 (from {approx}3.3 mGy for sharp filters to {approx}1.0 mGy for the optimal filter). The results indicate that radiation dose can be reduced below the current low-dose (5 mGy) and ultralow-dose (1 mGy) levels with knowledgeable selection of

  9. Generation of nuclear data banks through interpolation

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    1999-01-01

    Nuclear Data Bank generation is a process that requires a great amount of resources, both computing and human. Taking into account that at times it is necessary to create a great number of them, it is convenient to have a reliable tool that generates Data Banks with the fewest resources, in the least possible time and with a very good approximation. In this work the results obtained during the development of the INTPOLBI code are shown; the code generates Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two approaches were developed, applying in both cases the finite element method and using one element with 16 nodes to carry out the interpolation. In the first approach the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation system. In the solution of this system the Gaussian elimination method with partial pivoting was applied. In the second case, the Newton basis was used to obtain the mentioned system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas for the same purpose) and Data Banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation that, even though it does not wholly replace the conventional process, is helpful in cases where it is necessary to create a great number of Data Banks. (Author)

  10. DrawFromDrawings: 2D Drawing Assistance via Stroke Interpolation with a Sketch Database.

    Science.gov (United States)

    Matsui, Yusuke; Shiratori, Takaaki; Aizawa, Kiyoharu

    2017-07-01

    We present DrawFromDrawings, an interactive drawing system that provides users with visual feedback for assistance in 2D drawing using a database of sketch images. Following the traditional imitation and emulation training from art education, DrawFromDrawings enables users to retrieve and refer to a sketch image stored in a database and provides them with various novel strokes as suggestive or deformation feedback. Given regions of interest (ROIs) in the user and reference sketches, DrawFromDrawings detects as-long-as-possible (ALAP) stroke segments and the correspondences between user and reference sketches that are the key to computing seamless interpolations. The stroke-level interpolations are parametrized with the user strokes, the reference strokes, and new strokes created by warping the reference strokes based on the user and reference ROI shapes, and the user study indicated that the interpolation could produce various reasonable strokes varying in shape and complexity. DrawFromDrawings allows users to either replace their strokes with interpolated strokes (deformation feedback) or overlay interpolated strokes onto their strokes (suggestive feedback). The other user studies on the feedback modes indicated that the suggestive feedback enabled drawers to develop and render their ideas using their own stroke style, whereas the deformation feedback enabled them to finish the sketch composition quickly.
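
    The system's stroke interpolation is not published as code here; the sketch below only illustrates the underlying idea of blending two strokes after resampling them to a common arc-length parameterization. The ALAP segmentation and ROI-based warping of the actual system are not reproduced, and the example strokes are invented.

```python
import numpy as np

def resample_polyline(points, n):
    """Resample a 2-D polyline (k x 2 array) to n points, evenly spaced
    along its arc length."""
    points = np.asarray(points, float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    s = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(s, t, points[:, d]) for d in range(2)])

def blend_strokes(user_stroke, ref_stroke, alpha, n=100):
    """Linearly interpolate between a user stroke and a reference stroke.
    alpha = 0 returns the user stroke, alpha = 1 the reference stroke."""
    u = resample_polyline(user_stroke, n)
    r = resample_polyline(ref_stroke, n)
    return (1.0 - alpha) * u + alpha * r

# Hypothetical strokes: a shallow arc (user) and a deeper arc (reference)
user = [(0, 0), (1, 0.2), (2, 0.1), (3, 0)]
ref = [(0, 0), (1, 0.8), (2, 0.9), (3, 0)]
suggestion = blend_strokes(user, ref, alpha=0.5)
```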

  11. Fast dose kernel interpolation using Fourier transform with application to permanent prostate brachytherapy dosimetry.

    Science.gov (United States)

    Liu, Derek; Sloboda, Ron S

    2014-05-01

    Boyer and Mok proposed a fast calculation method employing the Fourier transform (FT), for which calculation time is independent of the number of seeds but seed placement is restricted to calculation grid points. Here, an interpolation method is described that enables unrestricted seed placement while preserving the computational efficiency of the original method. The Iodine-125 seed dose kernel was sampled and selected values were modified to optimize interpolation accuracy for clinically relevant doses. For each seed, the kernel was shifted to the nearest grid point via convolution with a unit impulse, implemented in the Fourier domain. The remaining fractional shift was performed using a piecewise third-order Lagrange filter. Implementation of the interpolation method greatly improved FT-based dose calculation accuracy. The dose distribution was accurate to within 2% beyond 3 mm from each seed. Isodose contours were indistinguishable from explicit TG-43 calculation. Dose-volume metric errors were negligible. Computation time for the FT interpolation method was essentially the same as for Boyer's method. An FT interpolation method for permanent prostate brachytherapy TG-43 dose calculation was developed which expands upon Boyer's original method and enables unrestricted seed placement. The proposed method substantially improves the clinically relevant dose accuracy with negligible additional computation cost, preserving the efficiency of the original method.
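
    A one-dimensional sketch of the two shift steps described above: an integer shift implemented as convolution with a unit impulse in the Fourier domain, followed by a fractional shift with a third-order (four-tap) Lagrange filter. The toy Gaussian kernel and the seed offset are placeholders, not the paper's 3-D TG-43 kernel.

```python
import numpy as np

def integer_shift_fft(kernel, shift):
    """Shift a periodic 1-D kernel by an integer number of samples by
    convolving with a unit impulse in the Fourier domain."""
    n = kernel.size
    impulse = np.zeros(n)
    impulse[shift % n] = 1.0
    return np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(impulse)))

def lagrange3_fractional_shift(kernel, frac):
    """Apply the remaining fractional shift (0 <= frac < 1) with a
    third-order (4-tap) Lagrange interpolation filter."""
    d = frac + 1.0                    # centre the 4-tap filter
    taps = np.array([np.prod([(d - m) / (k - m) for m in range(4) if m != k])
                     for k in range(4)])
    # mode="same" removes one sample of group delay, leaving a shift of frac
    return np.convolve(kernel, taps, mode="same")

# Hypothetical seed position: 7.3 grid cells from the kernel origin
kernel = np.exp(-0.5 * ((np.arange(64) - 32) / 3.0) ** 2)   # toy dose kernel
shifted = lagrange3_fractional_shift(integer_shift_fft(kernel, 7), 0.3)
```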

  12. Anisotropic interpolation theorems of Musielak-Orlicz type

    Directory of Open Access Journals (Sweden)

    Jinxia Li

    2016-10-01

    Anisotropy is a common attribute of Nature, showing different characterizations in different directions for all or part of the physical or chemical properties of an object. In mathematics, the anisotropic property can be expressed by a fairly general discrete group of dilations $\{A^k : k \in \mathbb{Z}\}$, where $A$ is a real $n \times n$ matrix all of whose eigenvalues $\lambda$ satisfy $|\lambda| > 1$. Let $\varphi : \mathbb{R}^n \times [0, \infty) \to [0, \infty)$ be an anisotropic Musielak-Orlicz function such that $\varphi(x, \cdot)$ is an Orlicz function and $\varphi(\cdot, t)$ is a Muckenhoupt $\mathbb{A}_\infty(A)$ weight. The aim of this article is to obtain two anisotropic interpolation theorems of Musielak-Orlicz type, which are weighted anisotropic extensions of the Marcinkiewicz interpolation theorems. The above results are new even for the isotropic weighted setting.

  13. Cochlear Implant Electrode Localization Using an Ultra-High Resolution Scan Mode on Conventional 64-Slice and New Generation 192-Slice Multi-Detector Computed Tomography.

    Science.gov (United States)

    Carlson, Matthew L; Leng, Shuai; Diehn, Felix E; Witte, Robert J; Krecke, Karl N; Grimes, Josh; Koeller, Kelly K; Bruesewitz, Michael R; McCollough, Cynthia H; Lane, John I

    2017-08-01

    A new generation 192-slice multi-detector computed tomography (MDCT) clinical scanner provides enhanced image quality and superior electrode localization over conventional MDCT. Currently, accurate and reliable cochlear implant electrode localization using conventional MDCT scanners remains elusive. Eight fresh-frozen cadaveric temporal bones were implanted with full-length cochlear implant electrodes. Specimens were subsequently scanned with conventional 64-slice and new generation 192-slice MDCT scanners utilizing ultra-high resolution modes. Additionally, all specimens were scanned with micro-CT to provide a reference criterion for electrode position. Images were reconstructed according to routine temporal bone clinical protocols. Three neuroradiologists, blinded to scanner type, reviewed images independently to assess resolution of individual electrodes, scalar localization, and severity of image artifact. Serving as the reference standard, micro-CT identified scalar crossover in one specimen; imaging of all remaining cochleae demonstrated complete scala tympani insertions. The 192-slice MDCT scanner exhibited improved resolution of individual electrodes for cochlear implant imaging compared with conventional MDCT. This technology provides important feedback regarding electrode position and course, which may help in future optimization of surgical technique and electrode design.

  14. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    Science.gov (United States)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  15. The visibility of mandibular canal on orthoradial and oblique CBCT slices at molar implant sites

    International Nuclear Information System (INIS)

    Alkhader, Mustafa; Jarab, Fadi; Shaweesh, Ashraf; Hudieb, Malik

    2016-01-01

    The aim of the present study was to compare the visibility of the mandibular canal on cone beam computed tomography (CBCT)-based orthoradial and oblique slices at molar implant sites. CBCT images for 132 mandibular molar implant sites were selected for the study. After generating orthoradial and oblique slices, two observers evaluated the visibility of the mandibular canal using a three-point scoring scale (1-3, good to excellent). The Wilcoxon signed-rank test was used to compare the visibility scores of the two slice types. Both orthoradial and oblique slices obtained from CBCT had only very good to excellent mandibular canal visibility scores. At 114 mandibular molar implant sites, the visibility score was equal on orthoradial and oblique slices. The visibility score was higher on orthoradial slices for 12 implant sites and higher on oblique slices for six implant sites, and the difference was not significant. Therefore, the visibility of the mandibular canal was excellent and comparable on most orthoradial and oblique slices obtained from CBCT images.

  16. The analysis of decimation and interpolation in the linear canonical transform domain.

    Science.gov (United States)

    Xu, Shuiqing; Chai, Yi; Hu, Youqiang; Huang, Lei; Feng, Li

    2016-01-01

    Decimation and interpolation are the two basic building blocks of multirate digital signal processing systems. As the linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing, it is worthwhile to analyze decimation and interpolation in the LCT domain. In this paper, the definition of the equivalent filter in the LCT domain is given first. Then, by applying this definition, the direct implementation structure and polyphase networks for the decimator and interpolator in the LCT domain are proposed. Finally, perfect reconstruction expressions for differential filters in the LCT domain are presented as an application. The theorems proposed in this study form the basis for generalizing multirate signal processing to the LCT domain and can advance filter bank theory in that domain.
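
    The LCT-domain filters themselves are not reproduced here; the sketch below shows only the classical (ordinary Fourier) special case of the same building blocks: a decimator with an anti-aliasing equivalent filter and an interpolator implemented in polyphase form via scipy's upfirdn. The filter length and decimation factor are arbitrary choices.

```python
import numpy as np
from scipy.signal import firwin, lfilter, upfirdn

M = 4                                    # decimation / interpolation factor
h = firwin(numtaps=64, cutoff=1.0 / M)   # low-pass "equivalent filter"

def decimate(x, M, h):
    """Filter (anti-aliasing) then keep every M-th sample."""
    return lfilter(h, 1.0, x)[::M]

def interpolate(x, M, h):
    """Insert M-1 zeros between samples then filter (anti-imaging);
    upfirdn implements the efficient polyphase form."""
    return M * upfirdn(h, x, up=M)

t = np.arange(1024)
x = np.sin(2 * np.pi * 0.01 * t)
y = interpolate(decimate(x, M, h), M, h)   # approximately recovers x (delayed)
```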

  17. Potential problems with interpolating fields

    Energy Technology Data Exchange (ETDEWEB)

    Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-11-15

    A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)

  18. Spatiotemporal interpolation of elevation changes derived from satellite altimetry for Jakobshavn Isbræ, Greenland

    DEFF Research Database (Denmark)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sørensen, Louise Sandberg

    2012-01-01

    ... In those areas, straightforward interpolation of data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper...

  19. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.
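
    As a small illustration of the rendering step (not the authors' implementation), the sketch below fills a disparity map inside one image-domain triangle by barycentric interpolation of its three vertex disparities; repeating this over the whole tessellation is what yields the piecewise-linear disparity map described above. The triangle coordinates and disparities are invented.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of point p with respect to triangle (a, b, c)."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]], dtype=float)
    l1, l2 = np.linalg.solve(m, np.asarray(p, float) - np.asarray(a, float))
    return np.array([1.0 - l1 - l2, l1, l2])

def rasterize_disparity(tri, disp, shape):
    """Fill a disparity map with the linear interpolation of the three
    vertex disparities inside one image-domain triangle."""
    out = np.full(shape, np.nan)
    a, b, c = tri
    xs = range(int(min(a[0], b[0], c[0])), int(max(a[0], b[0], c[0])) + 1)
    ys = range(int(min(a[1], b[1], c[1])), int(max(a[1], b[1], c[1])) + 1)
    for x in xs:
        for y in ys:
            w = barycentric((x, y), a, b, c)
            if np.all(w >= 0):                     # pixel inside the triangle
                out[y, x] = w @ np.asarray(disp)
    return out

tri = [(10, 10), (60, 20), (30, 70)]               # hypothetical triangle
dmap = rasterize_disparity(tri, disp=[5.0, 8.0, 6.5], shape=(100, 100))
```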

  20. On removing interpolation and resampling artifacts in rigid image registration.

    Science.gov (United States)

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  1. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    International Nuclear Information System (INIS)

    Buzzard, Gregery T.

    2012-01-01

    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. Highlights: efficient estimation of variance-based sensitivity coefficients; efficient estimation of derivative-based sensitivity coefficients; use of homotopy methods for approximation of local maxima and minima.
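
    The sketch below illustrates only the gPC step: once a function has an expansion in orthonormal (here Legendre) polynomials, first-order Sobol' indices follow directly from the squared coefficients. For brevity it fits the surrogate by least squares on random samples rather than by sparse grid interpolation, and the test function is a placeholder.

```python
import numpy as np
from numpy.polynomial.legendre import legval
from itertools import product

def legendre_orthonormal(xi, k):
    """Orthonormal Legendre polynomial of degree k for uniform inputs on [-1, 1]."""
    c = np.zeros(k + 1); c[k] = 1.0
    return legval(xi, c) * np.sqrt(2 * k + 1)

def gpc_fit(f, dim, degree, n_samples=2000, seed=0):
    """Least-squares gPC surrogate with a total-degree Legendre basis."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=(n_samples, dim))
    multis = [m for m in product(range(degree + 1), repeat=dim) if sum(m) <= degree]
    basis = np.column_stack([np.prod([legendre_orthonormal(x[:, d], m[d])
                                      for d in range(dim)], axis=0) for m in multis])
    coeffs, *_ = np.linalg.lstsq(basis, f(x), rcond=None)
    return multis, coeffs

def sobol_first_order(multis, coeffs, dim):
    """First-order Sobol' indices from orthonormal gPC coefficients."""
    var = sum(c**2 for m, c in zip(multis, coeffs) if any(m))   # total variance
    s = np.zeros(dim)
    for m, c in zip(multis, coeffs):
        active = [d for d in range(dim) if m[d] > 0]
        if len(active) == 1:            # terms involving exactly one input
            s[active[0]] += c**2
    return s / var

# Hypothetical test function: one strong input, one weak input, weak interaction
f = lambda x: 5 * x[:, 0] + x[:, 1] + 0.5 * x[:, 0] * x[:, 1]
multis, coeffs = gpc_fit(f, dim=2, degree=3)
print(sobol_first_order(multis, coeffs, dim=2))
```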

  2. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management.

    Science.gov (United States)

    Yuval, Yuval; Rimon, Yaara; Graber, Ellen R; Furman, Alex

    2014-08-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanisation often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data is thus an important tool for supplementing monitoring observations. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range of values (up to a few orders of magnitude) in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineates the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the coastal aquifer along the Israeli
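
    A minimal sketch of the local interpolation idea, assuming a circular inclusion zone: only observations within a given radius of a grid point contribute to its inverse-distance-weighted estimate, and grid points with no observation inside their zone are left unassigned, which is the accuracy-versus-coverage trade-off discussed above. The well coordinates and concentrations are invented.

```python
import numpy as np

def idw_with_inclusion_zone(obs_xy, obs_val, grid_xy, radius, power=2.0):
    """Local inverse distance weighting: only observations within `radius`
    of a grid point contribute; grid points with no observation inside the
    inclusion zone are left unassigned (NaN)."""
    obs_xy = np.asarray(obs_xy, float)
    obs_val = np.asarray(obs_val, float)
    out = np.full(len(grid_xy), np.nan)
    for i, g in enumerate(np.asarray(grid_xy, float)):
        d = np.linalg.norm(obs_xy - g, axis=1)
        inside = d <= radius
        if not np.any(inside):
            continue                       # no coverage at this grid point
        if np.any(d[inside] == 0):         # grid point coincides with a well
            out[i] = obs_val[inside][d[inside] == 0][0]
            continue
        w = 1.0 / d[inside] ** power
        out[i] = np.sum(w * obs_val[inside]) / np.sum(w)
    return out

# Hypothetical monitoring wells (x, y) and concentrations; many clean (zero) wells
wells = [(0, 0), (1, 0), (2, 2), (3, 1), (5, 5)]
conc = [0.0, 0.0, 12.0, 150.0, 0.0]
grid = [(x, y) for x in range(6) for y in range(6)]
z = idw_with_inclusion_zone(wells, conc, grid, radius=2.0)
```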

  3. Interpolation of extensive routine water pollution monitoring datasets: methodology and discussion of implications for aquifer management

    Science.gov (United States)

    Yuval; Rimon, Y.; Graber, E. R.; Furman, A.

    2013-07-01

    A large fraction of the fresh water available for human use is stored in groundwater aquifers. Since human activities such as mining, agriculture, industry and urbanization often result in incursion of various pollutants to groundwater, routine monitoring of water quality is an indispensable component of judicious aquifer management. Unfortunately, groundwater pollution monitoring is expensive and usually cannot cover an aquifer with the spatial resolution necessary for making adequate management decisions. Interpolation of monitoring data between points is thus an important tool for supplementing measured data. However, interpolating routine groundwater pollution data poses a special problem due to the nature of the observations. The data from a producing aquifer usually includes many zero pollution concentration values from the clean parts of the aquifer but may span a wide range (up to a few orders of magnitude) of values in the polluted areas. This manuscript presents a methodology that can cope with such datasets and use them to produce maps that present the pollution plumes but also delineates the clean areas that are fit for production. A method for assessing the quality of mapping in a way which is suitable to the data's dynamic range of values is also presented. A local variant of inverse distance weighting is employed to interpolate the data. Inclusion zones around the interpolation points ensure that only relevant observations contribute to each interpolated concentration. Using inclusion zones improves the accuracy of the mapping but results in interpolation grid points which are not assigned a value. The inherent trade-off between the interpolation accuracy and coverage is demonstrated using both circular and elliptical inclusion zones. A leave-one-out cross testing is used to assess and compare the performance of the interpolations. The methodology is demonstrated using groundwater pollution monitoring data from the Coastal aquifer along the Israeli

  4. Comparison of different wind data interpolation methods for a region with complex terrain in Central Asia

    Science.gov (United States)

    Reinhardt, Katja; Samimi, Cyrus

    2018-01-01

    While climatological data of high spatial resolution are largely available in most developed countries, the network of climatological stations in many other regions of the world still shows large gaps. Especially for those regions, interpolation methods are important tools to fill these gaps and to improve the database that is indispensable for climatological research. Over recent years, new hybrid methods combining machine learning and geostatistics have been developed which provide innovative prospects for spatial predictive modelling. This study focuses on evaluating the performance of 12 different interpolation methods for the wind components u and v in a mountainous region of Central Asia, with a special focus on applying new hybrid methods to the spatial interpolation of wind data. This study is the first to evaluate and compare the performance of several of these hybrid methods. The overall aim is to determine whether an optimal interpolation method exists that can be applied equally to all pressure levels, or whether different interpolation methods have to be used for different pressure levels. Deterministic (inverse distance weighting) and geostatistical (ordinary kriging) interpolation methods, which take into account only the initial values of u and v, were explored. In addition, more complex methods (generalized additive model, support vector machine and neural networks as single methods and as hybrid methods, as well as regression-kriging) that consider additional variables were applied. The analysis of the error indices revealed that regression-kriging provided the most accurate interpolation results for both wind components and all pressure heights. At 200 and 500 hPa, regression-kriging is followed by the different kinds of neural networks and support vector machines, and for 850 hPa it is followed by the different types of support vector machine and

  5. Magnetohydrodynamics of unsteady viscous fluid on boundary layer past a sliced sphere

    Science.gov (United States)

    Nursalim, Rahmat; Widodo, Basuki; Imron, Chairul

    2017-10-01

    Magnetohydrodynamics (MHD) is an important subject in engineering and industrial fields. By studying MHD, we can obtain the fluid flow characteristics that can be used to minimize its negative effects on an object. Over the decades, MHD has been widely studied for various geometries and fluid types. The sliced sphere is a geometry that has not been investigated. In this paper we study the magnetohydrodynamics of an unsteady viscous fluid in the boundary layer past a sliced sphere. It is assumed that the fluid is incompressible, that there is no induced magnetic field and no applied electrical voltage, that the sliced sphere is fixed, and that there is no barrier around the object. We focus on the velocity profile at the stagnation point (x = 0°). The mathematical model is governed by the continuity and momentum equations, which are converted to non-dimensional form and then to stream-function and similarity equations. The model is solved with the Keller-Box numerical method. Simulations were run for various slicing angles and values of the magnetic parameter. The results show that increasing the slicing angle causes the velocity profile to become steeper, as does increasing the value of the magnetic parameter. At large slicing angles the magnetic parameter has no significant effect on the velocity profile, and at high values of the magnetic parameter the slicing angle has no significant effect on the velocity profile.

  6. [Anatomy of the skull base and the cranial nerves in slice imaging].

    Science.gov (United States)

    Bink, A; Berkefeld, J; Zanella, F

    2009-07-01

    Computed tomography (CT) and magnetic resonance imaging (MRI) are suitable methods for examination of the skull base. Whereas CT is used mainly to evaluate bone destruction, e.g. for planning surgical therapy, MRI is used to show soft tissue pathologies and bone invasion. High resolution and thin slice thickness are indispensable for both modalities of skull base imaging. Detailed anatomical knowledge is necessary even for correct planning of the examination procedures and is a prerequisite for recognizing and interpreting pathologies. MRI is the method of choice for examining the cranial nerves. The entire course of a cranial nerve can be visualized by choosing different sequences that take into account the tissue surrounding the nerve. This article summarizes examination methods of the skull base in CT and MRI, gives a detailed description of the anatomy and illustrates it with image examples.

  7. Anatomy of the skull base and the cranial nerves in slice imaging

    International Nuclear Information System (INIS)

    Bink, A.; Berkefeld, J.; Zanella, F.

    2009-01-01

    Computed tomography (CT) and magnetic resonance imaging (MRI) are suitable methods for examination of the skull base. Whereas CT is used mainly to evaluate bone destruction, e.g. for planning surgical therapy, MRI is used to show soft tissue pathologies and bone invasion. High resolution and thin slice thickness are indispensable for both modalities of skull base imaging. Detailed anatomical knowledge is necessary even for correct planning of the examination procedures and is a prerequisite for recognizing and interpreting pathologies. MRI is the method of choice for examining the cranial nerves. The entire course of a cranial nerve can be visualized by choosing different sequences that take into account the tissue surrounding the nerve. This article summarizes examination methods of the skull base in CT and MRI, gives a detailed description of the anatomy and illustrates it with image examples. (orig.) [de]

  8. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Air pollution is increasing rapidly in almost all cities around the world due to population growth. Mumbai in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies aimed at reducing pollution levels. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of a Geographical Information System (GIS) has been used to perform interpolation with the help of air quality concentration data at three locations in Mumbai for the year 2008. The spatial and temporal variations in air quality levels were classified for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentrations always exceeded the permissible limit of the National Ambient Air Quality Standards. Seasonal SPM levels were lower during the monsoon due to rainfall. The findings of this study will help formulate control strategies for the rational management of air pollution, and the approach can be applied to many other regions.

  9. MR-based three-dimensional presentation of cartilage thickness in the femoral head

    International Nuclear Information System (INIS)

    Nakanishi, Katsuyuki; Tanaka, Hisashi; Nakamura, Hironobu; Sato, Yoshinobu; Kubota, Tetsuya; Tamura, Shinichi; Ueguchi, Takashi

    2001-01-01

    The purpose of our study was to visualize the hyaline cartilage of the femoral head and to evaluate the distribution of its thickness by three-dimensional reconstruction of MRI data. MRI was performed in 10 normal volunteers, 1 patient with osteonecrosis and 4 patients with advanced osteoarthritis. A fast 3D spoiled gradient-recalled acquisition in the steady state pulse sequence (TR 22 ms/TE 5.6 ms/no. of excitations 2) with fat suppression was used for data collection. Coronal and sagittal images were obtained with 3-mm effective slice thickness, 16-cm field of view (FOV) and 256 x 192 matrix. The MR images were reconstructed in three dimensions for evaluating the distribution of the cartilage thickness. In all normal volunteers, the patient with osteonecrosis and three of the patients with advanced osteoarthritis, 3D reconstruction was successful, but in one case of osteoarthritis it failed because of the narrow joint space. In normal volunteers, the cartilage is thickest in the central portion around the ligamentum teres (mean 2.8 mm). The medial portion and the lateral portion are almost of the same thickness (medial 1.3 mm, lateral 1.1 mm). In 3 cases of osteoarthritis, the cartilage became thinner in the lateral portions (<0.6 mm), but was unchanged in the central and medial portions. Three-dimensional reconstruction of MRI data is useful for evaluating the distribution of the cartilage thickness of the femoral head objectively. (orig.)

  10. MR-based three-dimensional presentation of cartilage thickness in the femoral head

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, Katsuyuki [Dept. of Radiology, Osaka Seamen's Insurance Hospital (Japan); Tanaka, Hisashi; Nakamura, Hironobu [Osaka Univ. (Japan). Dept. of Radiology; Sugano, Nobuhiko [Dept. of Orthopedic Surgery, Osaka University Medical School (Japan); Sato, Yoshinobu; Kubota, Tetsuya; Tamura, Shinichi [Div. of Functional Imaging, Osaka University Medical School (Japan); Ueguchi, Takashi [Dept. of Radiology, Osaka University Medical Hospital (Japan)

    2001-11-01

    The purpose of our study was to visualize the hyaline cartilage of the femoral head and to evaluate the distribution of its thickness by three-dimensional reconstruction of MRI data. MRI was performed in 10 normal volunteers, 1 patient with osteonecrosis and 4 patients with advanced osteoarthritis. A fast 3D spoiled gradient-recalled acquisition in the steady state pulse sequence (TR 22 ms/TE 5.6 ms/no. of excitations 2) with fat suppression was used for data collection. Coronal and sagittal images were obtained with 3-mm effective slice thickness, 16-cm field of view (FOV) and 256 x 192 matrix. The MR images were reconstructed in three dimensions for evaluating the distribution of the cartilage thickness. In all normal volunteers, the patient with osteonecrosis and three of the patients with advanced osteoarthritis, 3D reconstruction was successful, but in one case of osteoarthritis it failed because of the narrow joint space. In normal volunteers, the cartilage is thickest in the central portion around the ligamentum teres (mean 2.8 mm). The medial portion and the lateral portion are almost of the same thickness (medial 1.3 mm, lateral 1.1 mm). In 3 cases of osteoarthritis, the cartilage became thinner in the lateral portions (<0.6 mm), but was unchanged in the central and medial portions. Three-dimensional reconstruction of MRI data is useful for evaluating the distribution of the cartilage thickness of the femoral head objectively. (orig.)

  11. Slice image pretreatment for cone-beam computed tomography based on adaptive filter

    International Nuclear Information System (INIS)

    Huang Kuidong; Zhang Dinghua; Jin Yanfang

    2009-01-01

    Based on the noise properties and the serial slice image characteristics of a cone-beam computed tomography (CBCT) system, a slice image pretreatment for CBCT based on adaptive filtering is proposed. A criterion for classifying the noise is established first. All pixels are divided into two classes: an adaptive center-weighted modified trimmed mean (ACWMTM) filter is used for pixels corrupted by Gaussian noise, and an adaptive median (AM) filter for pixels corrupted by impulse noise. In the ACWMTM algorithm, the Gaussian noise standard deviation estimated in an offset window of the current slice is replaced by the standard deviation estimated in the corresponding window of the adjacent slice, which improves the filtering accuracy across the serial images. A pretreatment experiment on CBCT slice images of a wax model of a hollow turbine blade shows that the method performs well at both suppressing noise and preserving detail. (authors)
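
    The sketch below is a simplified stand-in for the pixel-classification idea (it is not the ACWMTM/AM algorithm of the paper): pixels that deviate strongly from their local median are treated as impulse noise and replaced by the median, while the remaining pixels receive a trimmed-mean filter. The cross-slice variance estimate and the centre weighting are not reproduced, and the threshold and window size are arbitrary.

```python
import numpy as np
from scipy.ndimage import median_filter

def adaptive_pretreatment(img, impulse_thresh=60.0, trim_frac=0.2, size=3):
    """Toy adaptive slice pretreatment: impulse-like pixels get a median
    filter, the rest a trimmed-mean filter over a size x size window."""
    img = img.astype(float)
    med = median_filter(img, size=size)
    impulse = np.abs(img - med) > impulse_thresh      # impulse-noise mask

    # trimmed mean: sort each neighbourhood and average its central values
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (size, size))
    flat = np.sort(windows.reshape(*img.shape, -1), axis=-1)
    k = int(trim_frac * size * size)
    trimmed = flat[..., k: size * size - k].mean(axis=-1)

    return np.where(impulse, med, trimmed)

noisy = np.random.default_rng(0).normal(100, 5, (64, 64))
noisy[10, 10] = 255.0                                  # a salt-noise pixel
clean = adaptive_pretreatment(noisy)
```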

  12. Energy band structure of Cr by the Slater-Koster interpolation scheme

    International Nuclear Information System (INIS)

    Seifu, D.; Mikusik, P.

    1986-04-01

    The matrix elements of the Hamiltonian between nine localized wave functions are derived in the tight-binding formalism. The symmetry-adapted wave functions and the secular equations are formed by group theory methods for high-symmetry points in the Brillouin zone. A set of interaction integrals is chosen on physical grounds and fitted via the Slater-Koster interpolation scheme to the ab initio band structure of chromium calculated by the Green function method. The energy band structure of chromium is then interpolated and extrapolated in the Brillouin zone. (author)

  13. Drying characteristics of pumpkin ( Cucurbita moschata) slices in convective and freeze dryer

    Science.gov (United States)

    Caliskan, Gulsah; Dirim, Safiye Nur

    2017-06-01

    This study determined the drying and rehydration kinetics of convective and freeze dried pumpkin slices (0.5 × 3.5 × 0.5 cm). A pilot-scale tray drier (80 ± 2 °C inlet temperature, 1 m s⁻¹ air velocity) and a freeze drier (13.33 kPa absolute pressure, condenser temperature of -48 ± 2 °C) were used for the drying experiments. Drying curves were fitted to six well-known thin-layer drying models. Nonlinear regression analysis was used to evaluate the parameters of the selected models using the statistical software SPSS 16.0 (SPSS Inc., USA). For both the convective and freeze drying processes of pumpkin slices, the highest R² values and the lowest RMSE and χ² values were obtained with the Page model. The effective moisture diffusivities (Deff) of the convective and freeze dried pumpkin slices were obtained from Fick's diffusion model and found to be 2.233 × 10⁻⁷ and 3.040 × 10⁻⁹ m² s⁻¹, respectively. Specific moisture extraction rate, moisture extraction rate, and specific energy consumption values were almost twice as high in the freeze drying process. Based on the results, the moisture contents and water activity values of the pumpkin slices were within acceptable limits for safe storage. The rehydration behaviour of the dried pumpkin slices [at 18 ± 2 and 100 ± 2 °C for 1:25, 1:50, 1:75, 1:100, and 1:125 solid:liquid ratios (w:w)] was best described by Peleg's model (highest R²). The highest total soluble solid loss was observed for the rehydration experiment performed at a 1:25 solid:liquid ratio (w:w). The rehydration ratio of freeze dried slices was 2-3 times higher than that of convective dried slices.
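
    As an illustration of the model-fitting step (with placeholder data, not the paper's measurements), the sketch below fits the Page thin-layer model MR = exp(-k t^n) to a moisture-ratio curve and reports R² and RMSE, the statistics used above to select the best model.

```python
import numpy as np
from scipy.optimize import curve_fit

def page_model(t, k, n):
    """Page thin-layer drying model: moisture ratio MR = exp(-k * t**n)."""
    return np.exp(-k * t ** n)

# Hypothetical drying data: time in minutes, dimensionless moisture ratio
t = np.array([10, 30, 60, 90, 120, 180, 240, 300], dtype=float)
mr = np.array([0.93, 0.78, 0.58, 0.43, 0.31, 0.16, 0.08, 0.04])

(k, n), _ = curve_fit(page_model, t, mr, p0=(0.01, 1.0))
residuals = mr - page_model(t, k, n)
r2 = 1 - np.sum(residuals**2) / np.sum((mr - mr.mean())**2)
rmse = np.sqrt(np.mean(residuals**2))
print(f"k = {k:.4f}, n = {n:.3f}, R^2 = {r2:.4f}, RMSE = {rmse:.4f}")
```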

  14. Seismic Experiment at North Arizona To Locate Washington Fault - 3D Data Interpolation

    KAUST Repository

    Hanafy, Sherif M.

    2008-10-01

    The recorded data are interpolated using the sinc technique to create the following two data sets. 1. Data Set #1: here, we interpolated only in the receiver direction to regularize the receiver interval to 1 m; the source locations are the same as in the original data (2 and 4 m source intervals). The data now contain 6 lines, each with 121 receivers, and a total of 240 shot gathers. 2. Data Set #2: here, we used the result from the previous step and interpolated only in the shot direction to regularize the shot interval to 1 m. Both shots and receivers now have a 1 m interval. The data contain 6 lines, each with 121 receivers, and a total of 726 shot gathers.
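
    A minimal sketch of sinc (Whittaker-Shannon) interpolation used to regularize a 2 m receiver spacing to 1 m; the trace amplitudes below are random placeholders, and the actual survey geometry, tapering and anti-alias handling are not reproduced.

```python
import numpy as np

def sinc_interpolate(values, dx_in, x_out):
    """Whittaker-Shannon interpolation of uniformly sampled `values`
    (sample spacing dx_in) onto new positions x_out."""
    x_in = np.arange(len(values)) * dx_in
    # each output sample is a sinc-weighted sum of all input samples
    weights = np.sinc((x_out[:, None] - x_in[None, :]) / dx_in)
    return weights @ values

# Hypothetical single time sample along a receiver line sampled every 2 m
amplitudes = np.random.default_rng(1).normal(size=61)     # receivers at 0..120 m
x_new = np.arange(0.0, 120.0 + 0.5, 1.0)                   # regularized 1 m spacing
interp = sinc_interpolate(amplitudes, dx_in=2.0, x_out=x_new)
```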

  15. A shape-based statistical method to retrieve 2D TRUS-MR slice correspondence for prostate biopsy

    Science.gov (United States)

    Mitra, Jhimli; Srikantha, Abhilash; Sidibé, Désiré; Martí, Robert; Oliver, Arnau; Lladó, Xavier; Ghose, Soumya; Vilanova, Joan C.; Comet, Josep; Meriaudeau, Fabrice

    2012-02-01

    This paper presents a method based on shape context and statistical measures to match an interventional 2D transrectal ultrasound (TRUS) slice acquired during prostate biopsy to a 2D magnetic resonance (MR) slice of a pre-acquired prostate volume. Accurate biopsy tissue sampling requires translation of the MR slice information onto the TRUS-guided biopsy slice. However, this translation or fusion requires knowledge of the spatial position of the TRUS slice, which is only possible with an electromagnetic (EM) tracker attached to the TRUS probe. Since the use of an EM tracker is not common in clinical practice and 3D TRUS is not used during biopsy, we propose an analysis based on shape and information theory to come close enough to the actual MR slice, as validated by experts. The Bhattacharyya distance is used to find point correspondences between shape-context representations of the prostate contours. Thereafter, the Chi-square distance is used to find those MR slices in which the prostate shape closely matches that of the TRUS slice. Normalized Mutual Information (NMI) values of the TRUS slice with each of the axial MR slices are computed after rigid alignment, and subsequently a strategic elimination based on a set of rules combining the Chi-square distances and the NMI leads to the required MR slice. We validated our method on TRUS axial slices from 15 patients, of which 11 results matched at least one expert's validation and the remaining 4 were at most one slice away from the expert validations.
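
    The sketch below computes the normalized mutual information NMI = (H(A) + H(B)) / H(A, B) from a joint grey-level histogram, the measure used above to rank candidate MR slices after rigid alignment. The two input arrays are random stand-ins for the TRUS and MR slices.

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(a, b) = (H(a) + H(b)) / H(a, b) from a joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)
    py = pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

rng = np.random.default_rng(0)
trus = rng.normal(size=(128, 128))
mr_similar = trus + 0.3 * rng.normal(size=(128, 128))     # related slice
mr_unrelated = rng.normal(size=(128, 128))                # unrelated slice
print(normalized_mutual_information(trus, mr_similar))    # noticeably above 1
print(normalized_mutual_information(trus, mr_unrelated))  # close to 1
```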

  16. A comparative risk assessment for Listeria monocytogenes in prepackaged versus retail-sliced deli meat.

    Science.gov (United States)

    Endrikat, Sarah; Gallagher, Daniel; Pouillot, Régis; Hicks Quesenberry, Heather; Labarre, David; Schroeder, Carl M; Kause, Janell

    2010-04-01

    Deli meat was ranked as the highest-risk ready-to-eat food vehicle of Listeria monocytogenes within the 2003 U.S. Food and Drug Administration and U.S. Department of Agriculture, Food Safety and Inspection Service risk assessment. The comparative risk of L. monocytogenes in retail-sliced versus prepackaged deli meats was evaluated with a modified version of this model. Other research has found that retail-sliced deli meats have both a higher prevalence and higher levels of L. monocytogenes than product sliced and packaged at the manufacturer level. The updated risk assessment model considered slicing location as well as the use of growth inhibitors. The per annum comparative risk ratio for the number of deaths from retail-sliced versus prepackaged deli meats was found to be 4.89, and the per-serving comparative risk ratio was 4.27. There was a significant interaction between the use of growth inhibitors and slicing location. Almost 70% of the estimated deaths occurred from retail-sliced product that did not possess a growth inhibitor. A sensitivity analysis, assessing the effect of the model's consumer storage time and shelf life assumptions, found that even if retail-sliced deli meats were stored for a quarter of the time prepackaged deli meats were stored, retail-sliced product is 1.7 times more likely to result in death from listeriosis. Sensitivity analysis also showed that the shelf life assumption had little effect on the comparative risk ratio.

  17. Fluidic system for long-term in vitro culturing and monitoring of organotypic brain slices

    DEFF Research Database (Denmark)

    Bakmand, Tanya; Troels-Smith, Ane R.; Dimaki, Maria

    2015-01-01

    Brain slice preparations cultured in vitro have long been used as a simplified model for studying brain development, electrophysiology, neurodegeneration and neuroprotection. In this paper an open fluidic system developed for improved long-term culturing of organotypic brain slices is presented. ... The positive effect of continuous flow of growth medium, and thus stability of the glucose concentration and waste removal, is simulated and compared to the effect of the stagnant medium that is most often used in tissue culturing. Furthermore, placement of the tissue slices in the developed device was studied by numerical simulations in order to optimize the nutrient distribution. The device was tested by culturing transverse hippocampal slices from 7-day-old NMRI mice for a duration of 14 days. The slices were inspected visually, and the slices cultured in the fluidic system appeared to have preserved...

  18. Neutron fluence-to-dose equivalent conversion factors: a comparison of data sets and interpolation methods

    International Nuclear Information System (INIS)

    Sims, C.S.; Killough, G.G.

    1983-01-01

    Various segments of the health physics community advocate the use of different sets of neutron fluence-to-dose equivalent conversion factors as a function of energy and different methods of interpolation between discrete points in those data sets. The major data sets and interpolation methods are used to calculate the spectrum average fluence-to-dose equivalent conversion factors for five spectra associated with the various shielded conditions of the Health Physics Research Reactor. The results obtained by use of the different data sets and interpolation methods are compared and discussed. (author)
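
    As an illustration of how a spectrum-average conversion factor is formed (all numbers below are placeholders, not any of the published data sets), the sketch interpolates a tabulated fluence-to-dose-equivalent factor onto the spectrum's energy grid in log-log space and takes the fluence-weighted average; swapping the interpolation rule or the tabulated data set is what produces the differences discussed above.

```python
import numpy as np

def loglog_interp(e, e_tab, c_tab):
    """Log-log interpolation of conversion factors c_tab tabulated at e_tab."""
    return np.exp(np.interp(np.log(e), np.log(e_tab), np.log(c_tab)))

def spectrum_average(e, phi, e_tab, c_tab):
    """Fluence-weighted average conversion factor for a spectrum phi(e)."""
    c = loglog_interp(e, e_tab, c_tab)
    return np.trapz(c * phi, e) / np.trapz(phi, e)

# Placeholder table (energy in MeV, factor in arbitrary units) and spectrum
e_tab = np.array([1e-8, 1e-6, 1e-4, 1e-2, 1e-1, 1.0, 10.0])
c_tab = np.array([1.0, 1.1, 1.3, 3.0, 9.0, 35.0, 45.0])
e = np.logspace(-8, 1, 200)
phi = e * np.exp(-e / 1.0)          # crude fission-like shape, not real data
print(spectrum_average(e, phi, e_tab, c_tab))
```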

  19. Brain slice on a chip: opportunities and challenges of applying microfluidic technology to intact tissues.

    Science.gov (United States)

    Huang, Yu; Williams, Justin C; Johnson, Stephen M

    2012-06-21

    Isolated brain tissue, especially brain slices, are valuable experimental tools for studying neuronal function at the network, cellular, synaptic, and single channel levels. Neuroscientists have refined the methods for preserving brain slice viability and function and converged on principles that strongly resemble the approach taken by engineers in developing microfluidic devices. With respect to brain slices, microfluidic technology may 1) overcome the traditional limitations of conventional interface and submerged slice chambers and improve oxygen/nutrient penetration into slices, 2) provide better spatiotemporal control over solution flow/drug delivery to specific slice regions, and 3) permit successful integration with modern optical and electrophysiological techniques. In this review, we highlight the unique advantages of microfluidic devices for in vitro brain slice research, describe recent advances in the integration of microfluidic devices with optical and electrophysiological instrumentation, and discuss clinical applications of microfluidic technology as applied to brain slices and other non-neuronal tissues. We hope that this review will serve as an interdisciplinary guide for both neuroscientists studying brain tissue in vitro and engineers as they further develop microfluidic chamber technology for neuroscience research.

  20. Pricing and simulation for real estate index options: Radial basis point interpolation

    Science.gov (United States)

    Gong, Pu; Zou, Dong; Wang, Jiayue

    2018-06-01

    This study employs meshfree radial basis point interpolation (RBPI) for pricing real estate derivatives contingent on a real estate index. The method combines radial and polynomial basis functions, which guarantees that the interpolation scheme possesses the Kronecker delta property and effectively improves accuracy. An exponential change of variables, a mesh refinement algorithm and Richardson extrapolation are employed to implement the RBPI. Numerical results are presented to examine the computational efficiency and accuracy of the method.
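
    A one-dimensional sketch of radial basis point interpolation, assuming a multiquadric radial basis augmented with a linear polynomial: the augmented system forces the interpolant through the nodal values exactly (the Kronecker delta property mentioned above). The option-pricing PDE, change of variables and Richardson extrapolation are not included, and the node values are placeholders.

```python
import numpy as np

def rbpi_fit(x_nodes, u_nodes, c=1.0):
    """1-D radial basis point interpolation: multiquadric radial basis
    augmented with a linear polynomial, solved so the interpolant passes
    exactly through the nodal values."""
    x = np.asarray(x_nodes, float)
    n = len(x)
    R = np.sqrt((x[:, None] - x[None, :]) ** 2 + c ** 2)   # multiquadric
    P = np.column_stack([np.ones(n), x])                   # polynomial terms 1, x
    G = np.block([[R, P], [P.T, np.zeros((2, 2))]])
    rhs = np.concatenate([np.asarray(u_nodes, float), np.zeros(2)])
    coef = np.linalg.solve(G, rhs)
    return x, coef[:n], coef[n:]

def rbpi_eval(x_eval, x_nodes, a, b, c=1.0):
    x_eval = np.atleast_1d(np.asarray(x_eval, float))
    R = np.sqrt((x_eval[:, None] - x_nodes[None, :]) ** 2 + c ** 2)
    P = np.column_stack([np.ones(len(x_eval)), x_eval])
    return R @ a + P @ b

# Hypothetical option values sampled on an index grid
nodes = np.linspace(80.0, 120.0, 9)
values = np.maximum(nodes - 100.0, 0.0)                    # payoff-like data
xs, a, b = rbpi_fit(nodes, values)
print(rbpi_eval([95.0, 102.5], xs, a, b))
```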