WorldWideScience

Sample records for thick slice interpolation

  1. Thick Slice and Thin Slice Teaching Evaluations

    Science.gov (United States)

    Tom, Gail; Tong, Stephanie Tom; Hesse, Charles

    2010-01-01

    Student-based teaching evaluations are an integral component of institutions of higher education. Previous work on student-based teaching evaluations suggests that evaluations of instructors based upon "thin slice" 30-s video clips of them in the classroom correlate strongly with their end-of-term "thick slice" student evaluations. This study's…

  2. Influence of the slice thickness in CT to clinical effect

    International Nuclear Information System (INIS)

    Kimura, Kazue; Katakura, Toshihiko; Ito, Masami; Okuaki, Okihisa; Suzuki, Kenji

    1980-01-01

    CT is a kind of tomography; what thickness of tissue is being observed in the picture is therefore important in the clinical application of CT. The influence of slice thickness on the pictures, and especially its clinical effect, was examined. The apparatus used was an EMI CT 5005. The slice thickness cannot be made larger than the thickness inherent to the apparatus; therefore, to make it thinner than the inherent 14 mm, collimators were specially prepared and mounted on the X-ray tube and detector sides. As basic observations, the ability to reveal form as a function of slice thickness was examined using acrylic pipes, as was the ability to reveal the slice plane as a function of slice thickness. For clinical observation, the results for certain cases of cancer were compared between CT images at the 14 mm slice thickness inherent to the EMI CT 5005 and at the 7 mm slice thickness achieved by means of the collimators. (J.P.N.)

  3. The effects of slice thickness and reconstructive parameters on VR image quality in multi-slice CT

    International Nuclear Information System (INIS)

    Gao Zhenlong; Wang Qiang; Liu Caixia

    2005-01-01

    Objective: To explore the effects of slice thickness, reconstructive thickness and reconstructive interval on VR image quality in multi-slice CT, in order to select the best slice thickness and reconstructive parameters for imaging. Methods: Multi-slice CT scans of a rubber dinosaur model were acquired with different slice thicknesses. VR images were reconstructed with different reconstructive thicknesses and reconstructive intervals. Five radiologists, blinded to the parameters, were invited to evaluate the quality of the images. Results: Slice thickness, reconstructive thickness and reconstructive interval all affected VR image quality, to different degrees; the effect coefficients were V1 = 1413.033, V2 = 563.733 and V3 = 390.533, respectively. The parameters interacted with one another (P<0.05). The smaller these parameters, the better the image quality. With a small slice thickness and a reconstructive thickness equal to the slice thickness, image quality showed no obvious difference when the reconstructive interval was 1/2, 1/3 or 1/4 of the slice thickness. Conclusion: For the best VR image quality, a relatively small scan slice thickness, a reconstructive thickness equal to the slice thickness and a reconstructive interval of 1/2 the slice thickness should be selected. The image quality depends mostly on the slice thickness. (authors)

  4. Shape determinative slice localization for patient-specific masseter modeling using shape-based interpolation

    Energy Technology Data Exchange (ETDEWEB)

    Ng, H.P. [NUS Graduate School for Integrative Sciences and Engineering (Singapore); Biomedical Imaging Lab., Agency for Science Technology and Research (Singapore); Foong, K.W.C. [NUS Graduate School for Integrative Sciences and Engineering (Singapore); Dept. of Preventive Dentistry, National Univ. of Singapore (Singapore); Ong, S.H. [Dept. of Electrical and Computer Engineering, National Univ. of Singapore (Singapore); Div. of Bioengineering, National Univ. of Singapore (Singapore); Liu, J.; Nowinski, W.L. [Biomedical Imaging Lab., Agency for Science Technology and Research (Singapore); Goh, P.S. [Dept. of Diagnostic Radiology, National Univ. of Singapore (Singapore)

    2007-06-15

    The masseter plays a critical role in the mastication system. A hybrid shape-based interpolation method is used to build the masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, where clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate candidates for determinative slices, and a fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work, and the average overlap index (κ) achieved was 85.2%. This indicates that there is good agreement between the models and the manual contour tracings. In addition, the time taken is significantly less than that of manually segmenting all the slices. (orig.)

  5. Shape determinative slice localization for patient-specific masseter modeling using shape-based interpolation

    International Nuclear Information System (INIS)

    Ng, H.P.; Foong, K.W.C.; Ong, S.H.; Liu, J.; Nowinski, W.L.; Goh, P.S.

    2007-01-01

    The masseter plays a critical role in the mastication system. A hybrid shape-based interpolation method is used to build the masseter model from magnetic resonance (MR) data sets. The main contribution here is the localization of determinative slices in the data sets, where clinicians are required to perform manual segmentations in order for an accurate model to be built. Shape-based criteria were used to locate candidates for determinative slices, and a fuzzy c-means (FCM) clustering technique was used to establish the determinative slices. Five masseter models were built in our work, and the average overlap index (κ) achieved was 85.2%. This indicates that there is good agreement between the models and the manual contour tracings. In addition, the time taken is significantly less than that of manually segmenting all the slices. (orig.)
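
The record above pairs a shape criterion with fuzzy c-means clustering to pick determinative slices. As a hedged illustration of that second step (not the authors' implementation; the per-slice shape-change score is a hypothetical feature), a minimal FCM in NumPy might look like:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, tol=1e-6, seed=0):
    """Minimal fuzzy c-means (Bezdek). X: (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                  # memberships sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)    # standard FCM update
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical per-slice shape-change scores: slices in the high-score
# cluster would be flagged as determinative and segmented manually.
scores = np.array([[0.10], [0.15], [0.20], [2.0], [2.1]])
centers, U = fuzzy_c_means(scores)
labels = U.argmax(axis=1)
```

All names and the one-dimensional score feature here are illustrative; the paper's criteria operate on segmented masseter contours.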

  6. Effects of Temperature and Slice Thickness on Drying Kinetics of Pumpkin Slices

    OpenAIRE

    Kongdej LIMPAIBOON

    2011-01-01

    Dried pumpkin slice is an alternative crisp food product. In this study, the effects of temperature and slice thickness on the drying characteristics of pumpkin were studied in a lab-scale tray dryer, using hot air temperatures of 55, 60 and 65 °C and slice thicknesses of 2, 3 and 4 mm at a constant air velocity of 1.5 m/s. The initial moisture content of the pumpkin samples was 900.5 % (wb). The drying process was carried out until the final moisture content of the product was 100.5 % (wb). The resul...

  7. Fourier-based approach to interpolation in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2001-01-01

    It has recently been shown that longitudinal aliasing can be a significant and detrimental presence in reconstructed single-slice helical computed tomography (CT) volumes. This aliasing arises because the directly measured data in helical CT are generally undersampled by a factor of at least 2 in the longitudinal direction and because the exploitation of the redundancy of fanbeam data acquired over 360° to generate additional longitudinal samples does not automatically eliminate the aliasing. In this paper we demonstrate that for pitches near 1 or lower, the redundant fanbeam data, when used properly, can provide sufficient information to satisfy a generalized sampling theorem and thus to eliminate aliasing. We develop and evaluate a Fourier-based algorithm, called 180FT, that accomplishes this. As background we present a second Fourier-based approach, called 360FT, that makes use only of the directly measured data. Both Fourier-based approaches exploit the fast Fourier transform and the Fourier shift theorem to generate from the helical projection data a set of fanbeam sinograms corresponding to equispaced transverse slices. Slice-by-slice reconstruction is then performed by use of two-dimensional fanbeam algorithms. The proposed approaches are compared to their counterparts based on the use of linear interpolation - the 360LI and 180LI approaches. The aliasing suppression property of the 180FT approach is a clear advantage of the approach and represents a step toward the desirable goal of achieving uniform longitudinal resolution properties in reconstructed helical CT volumes.
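
The 180FT/360FT approaches rest on the Fourier shift theorem: a longitudinal shift of uniformly sampled data becomes a linear phase ramp in frequency space. A minimal, hedged sketch of that core operation for a 1-D periodic signal (not the full helical-CT algorithm):

```python
import numpy as np

def fourier_shift(signal, shift):
    """Resample a uniformly sampled periodic signal at positions shifted
    by `shift` samples, via the Fourier shift theorem: mapping
    x[n] -> x[n - shift] multiplies spectrum bin k by exp(-2j*pi*k*shift/N)."""
    k = np.fft.fftfreq(signal.size)                # k/N, in cycles per sample
    phase = np.exp(-2j * np.pi * k * shift)
    return np.fft.ifft(np.fft.fft(signal) * phase).real
```

In the helical setting, the same phase-ramp operation would move each detector channel's longitudinal samples onto a common set of equispaced transverse slice positions, which is exact for bandlimited data where linear interpolation is only approximate.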

  8. Filter and slice thickness selection in SPECT image reconstruction

    International Nuclear Information System (INIS)

    Ivanovic, M.; Weber, D.A.; Wilson, G.A.; O'Mara, R.E.

    1985-01-01

    The choice of filter and slice thickness in SPECT image reconstruction, as a function of activity and linear and angular sampling, was investigated in phantom and patient imaging studies. Reconstructed transverse and longitudinal spatial resolution of the system were measured using a line source in a water-filled phantom. Phantom studies included measurements of the Data Spectrum phantom; clinical studies included tomographic procedures in 40 patients undergoing imaging of the temporomandibular joint. Slices of the phantom and patient images were evaluated for spatial resolution, noise, and image quality. Major findings include: spatial resolution and image quality improved with increasing linear sampling frequency over the range of 4-8 mm/p in the phantom images; the best spatial resolution and image quality in clinical images were observed at a linear sampling frequency of 6 mm/p; the Shepp and Logan filter gave the best spatial resolution for phantom studies at the lowest linear sampling frequency; the smoothed Shepp and Logan filter provided the best quality images without loss of resolution at higher frequencies; and spatial resolution and image quality improved with increased angular sampling frequency in the phantom at 40 c/p but appeared to be independent of angular sampling frequency at 400 c/p.
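
The Shepp and Logan filter compared above is, in its textbook form, a ramp filter apodized by a sinc window, which damps high-frequency noise relative to the plain ramp. A hedged sketch of that frequency response (a standard formulation, not necessarily the vendor implementation used in the study):

```python
import numpy as np

def shepp_logan_response(n, spacing=1.0):
    """Frequency response H(f) = |f| * sinc(f * spacing) of the
    Shepp-Logan reconstruction filter on an n-point FFT grid.
    np.sinc(x) = sin(pi x)/(pi x), so the window falls off smoothly
    toward the Nyquist frequency 0.5/spacing."""
    f = np.fft.fftfreq(n, d=spacing)
    return np.abs(f) * np.sinc(f * spacing)
```

A "smoothed" variant, as in the abstract, would multiply this response by an additional low-pass window; the trade-off between resolution and noise is exactly what the study evaluates.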

  9. Inter-slice motion correction using spatiotemporal interpolation for functional magnetic resonance imaging of the moving fetus

    OpenAIRE

    Limperopoulos, Catherine; You, Wonsang

    2017-01-01

    Fetal motion continues to be one of the major artifacts in in-utero functional MRI, yet few methods have been developed to address fetal motion correction. In this study, we propose a robust method for motion correction in fetal fMRI in which both inter-slice and inter-volume motion artifacts are jointly corrected. To accomplish this, an original volume is temporally split into odd and even slices, and then voxel intensities are spatially and temporally interpolated in the process o...

  10. Favorable noise uniformity properties of Fourier-based interpolation and reconstruction approaches in single-slice helical computed tomography

    International Nuclear Information System (INIS)

    La Riviere, Patrick J.; Pan Xiaochuan

    2002-01-01

    Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their root in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to that in conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, also leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches.

  11. Impact of Different CT Slice Thickness on Clinical Target Volume for 3D Conformal Radiation Therapy

    International Nuclear Information System (INIS)

    Prabhakar, Ramachandran; Ganesh, Tharmar; Rath, Goura K.; Julka, Pramod K.; Sridhar, Pappiah S.; Joshi, Rakesh C.; Thulkar, Sanjay

    2009-01-01

    The purpose of this study was to present the variation of clinical target volume (CTV) with different computed tomography (CT) slice thicknesses and the impact of CT slice thickness on 3-dimensional (3D) conformal radiotherapy treatment planning. Fifty patients with brain tumors were selected, and CT scans with 2.5-, 5-, and 10-mm slice thicknesses were performed with non-ionic contrast enhancement. The patients were selected with tumor volumes ranging from 2.54 cc to 222 cc. Three-dimensional treatment planning was performed for all three CT datasets. The target coverage and the isocenter shift between the treatment plans for the different slice thicknesses were correlated with the tumor volume. An important observation from our study was that for volume 25 cc, the target underdosage was less than 6.7% for 5-mm slice thickness and 8% for 10-mm slice thickness. For 3D conformal radiotherapy treatment planning (3DCRT), a CT slice thickness of 2.5 mm is optimum for tumor volume 25 cc.

  12. Evaluation of the possibility to use thick slabs of reconstructed outer breast tomosynthesis slice images

    Science.gov (United States)

    Petersson, Hannie; Dustler, Magnus; Tingberg, Anders; Timberg, Pontus

    2016-03-01

    The large image volumes in breast tomosynthesis (BT) have led to large amounts of data and a heavy workload for breast radiologists. The number of slice images can be decreased by combining adjacent image planes (slabbing), but the resulting decrease in depth resolution can considerably affect the detection of lesions. The aim of this work was to assess whether thicker slabbing of the outer slice images (where lesions are seldom present) could be a viable alternative for reducing the number of slice images in BT image volumes. The suggested slabbing (an image volume with thick outer slabs and thin slices between) was evaluated in two steps. Firstly, a survey of the depth of 65 cancer lesions within the breast was performed to estimate how many lesions would be affected by outer slabs of different thicknesses. Secondly, a selection of 24 lesions was reconstructed with 2, 6 and 10 mm slab thickness to evaluate how the appearance of lesions located in the thicker slabs would be affected. The results show that few malignant breast lesions are located at a depth of less than 10 mm from the surface (especially for breast thicknesses of 50 mm and above). Reconstruction of BT volumes with 6 mm slab thickness yields an image quality that is sufficient for lesion detection in a majority of the investigated cases. Together, this indicates that thicker slabbing of the outer slice images is a promising option for reducing the number of slice images in BT image volumes.
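
The proposed volume layout (thick averaged slabs at the outer depths, thin slices in between) can be sketched as a simple post-processing step. This is a hedged illustration assuming slab formation by plane averaging; the paper's reconstruction-based slabbing may differ:

```python
import numpy as np

def slab_outer(volume, n_outer):
    """Average the first and last n_outer planes of `volume` (depth on
    axis 0) into single thick slabs; keep interior planes as thin slices.
    Returns an array of shape (depth - 2*n_outer + 2, rows, cols)."""
    top = volume[:n_outer].mean(axis=0, keepdims=True)
    bottom = volume[-n_outer:].mean(axis=0, keepdims=True)
    middle = volume[n_outer:-n_outer]
    return np.concatenate([top, middle, bottom], axis=0)
```

For a typical 1 mm plane spacing, `n_outer=6` would correspond roughly to the 6 mm outer slabs evaluated in the study, cutting ten planes from each volume while leaving the lesion-rich interior at full depth resolution.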

  13. Influence of image slice thickness on rectal dose–response relationships following radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Olsson, C; Thor, M; Apte, A; Deasy, J O; Liu, M; Moissenko, V; Petersen, S E; Høyer, M

    2014-01-01

    When pooling retrospective data from different cohorts, slice thicknesses of acquired computed tomography (CT) images used for treatment planning may vary between cohorts. It is, however, not known if varying slice thickness influences derived dose–response relationships. We investigated this for rectal bleeding using dose–volume histograms (DVHs) of the rectum and rectal wall for dose distributions superimposed on images with varying CT slice thicknesses. We used dose and endpoint data from two prostate cancer cohorts treated with three-dimensional conformal radiotherapy to either 74 Gy (N = 159) or 78 Gy (N = 159) at 2 Gy per fraction. The rectum was defined as the whole organ with content, and the morbidity cut-off was Grade ≥2 late rectal bleeding. Rectal walls were defined as 3 mm inner margins added to the rectum. DVHs for simulated slice thicknesses from 3 to 13 mm were compared to DVHs for the originally acquired slice thicknesses at 3 and 5 mm. Volumes, mean, and maximum doses were assessed from the DVHs, and generalized equivalent uniform dose (gEUD) values were calculated. For each organ and each of the simulated slice thicknesses, we performed predictive modeling of late rectal bleeding using the Lyman–Kutcher–Burman (LKB) model. For the coarsest slice thickness, rectal volumes increased (≤18%), whereas maximum and mean doses decreased (≤0.8 and ≤4.2 Gy, respectively). For all a values, the gEUD values for the simulated DVHs differed by ≤1.9 Gy from those for the original DVHs. The best-fitting LKB model parameter values with 95% CIs were consistent between all DVHs. In conclusion, we found that the investigated slice thickness variations had minimal impact on rectal dose–response estimations. From the perspective of predictive modeling, our results suggest that variations within 10 mm in slice thickness between cohorts are unlikely to be a limiting factor when pooling multi-institutional rectal dose data that include slice thickness

  14. Influence of image slice thickness on rectal dose-response relationships following radiotherapy of prostate cancer

    Science.gov (United States)

    Olsson, C.; Thor, M.; Liu, M.; Moissenko, V.; Petersen, S. E.; Høyer, M.; Apte, A.; Deasy, J. O.

    2014-07-01

    When pooling retrospective data from different cohorts, slice thicknesses of acquired computed tomography (CT) images used for treatment planning may vary between cohorts. It is, however, not known if varying slice thickness influences derived dose-response relationships. We investigated this for rectal bleeding using dose-volume histograms (DVHs) of the rectum and rectal wall for dose distributions superimposed on images with varying CT slice thicknesses. We used dose and endpoint data from two prostate cancer cohorts treated with three-dimensional conformal radiotherapy to either 74 Gy (N = 159) or 78 Gy (N = 159) at 2 Gy per fraction. The rectum was defined as the whole organ with content, and the morbidity cut-off was Grade ≥2 late rectal bleeding. Rectal walls were defined as 3 mm inner margins added to the rectum. DVHs for simulated slice thicknesses from 3 to 13 mm were compared to DVHs for the originally acquired slice thicknesses at 3 and 5 mm. Volumes, mean, and maximum doses were assessed from the DVHs, and generalized equivalent uniform dose (gEUD) values were calculated. For each organ and each of the simulated slice thicknesses, we performed predictive modeling of late rectal bleeding using the Lyman-Kutcher-Burman (LKB) model. For the coarsest slice thickness, rectal volumes increased (≤18%), whereas maximum and mean doses decreased (≤0.8 and ≤4.2 Gy, respectively). For all a values, the gEUD values for the simulated DVHs differed by ≤1.9 Gy from those for the original DVHs. The best-fitting LKB model parameter values with 95% CIs were consistent between all DVHs. In conclusion, we found that the investigated slice thickness variations had minimal impact on rectal dose-response estimations. From the perspective of predictive modeling, our results suggest that variations within 10 mm in slice thickness between cohorts are unlikely to be a limiting factor when pooling multi-institutional rectal dose data that include slice thickness
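
The gEUD values compared in this record follow the standard definition gEUD = (Σᵢ vᵢ Dᵢᵃ)^(1/a) over a differential DVH with fractional bin volumes vᵢ. A minimal sketch (function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def geud(bin_doses, bin_volumes, a):
    """Generalized equivalent uniform dose from a differential DVH.
    bin_doses: dose per bin (Gy); bin_volumes: volume per bin
    (normalized internally); a: tissue-specific volume-effect parameter."""
    v = np.asarray(bin_volumes, float)
    v = v / v.sum()                        # fractional volumes
    d = np.asarray(bin_doses, float)
    return float((v * d ** a).sum() ** (1.0 / a))
```

Note that a = 1 reduces gEUD to the mean dose, while large a approaches the maximum dose; serial-behaving structures such as the rectum are typically modeled with larger a, which is why the study checks consistency across a range of a values.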

  15. Influence of slice thickness on the determination of left ventricular wall thickness and dimension by magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Ohnishi, Shusaku; Fukui, Sugao; Atsumi, Chisato and others

    1989-02-01

    Wall thickness of the ventricular septum and left ventricle, and left ventricular cavity dimension, were determined on magnetic resonance (MR) images with slices 5 mm and 10 mm in thickness. Subjects were 3 healthy volunteers and 7 patients with hypertension (4), hypertrophic cardiomyopathy (1) or valvular heart disease (2). In visualizing cardiac structures such as the left ventricular papillary muscle and the right and left ventricles, 5 mm-thick images were better than 10 mm-thick images. Edges of the ventricular septum and left ventricular wall were more clearly visualized on 5 mm-thick images than on 10 mm-thick images. Two mm-thick MR images obtained from 2 patients yielded the best visualization in end-systole, but failed to reveal cardiac structures in detail in end-diastole. Phantom studies revealed no significant differences in image quality between 10 mm and 5 mm slice thicknesses in the axial view 80° to the long axis. In the axial view 45° to the long axis, 10 mm-thick images were inferior to 5 mm-thick images in detecting the edge of the septum and the left ventricular wall. These results indicate that the selection of slice thickness is one of the most important determinant factors in the measurement of left ventricular wall thickness and cavity dimension. (Namekawa, K).

  16. Influence of slice thickness on the determination of left ventricular wall thickness and dimension by magnetic resonance imaging

    International Nuclear Information System (INIS)

    Ohnishi, Shusaku; Fukui, Sugao; Atsumi, Chisato

    1989-01-01

    Wall thickness of the ventricular septum and left ventricle, and left ventricular cavity dimension, were determined on magnetic resonance (MR) images with slices 5 mm and 10 mm in thickness. Subjects were 3 healthy volunteers and 7 patients with hypertension (4), hypertrophic cardiomyopathy (1) or valvular heart disease (2). In visualizing cardiac structures such as the left ventricular papillary muscle and the right and left ventricles, 5 mm-thick images were better than 10 mm-thick images. Edges of the ventricular septum and left ventricular wall were more clearly visualized on 5 mm-thick images than on 10 mm-thick images. Two mm-thick MR images obtained from 2 patients yielded the best visualization in end-systole, but failed to reveal cardiac structures in detail in end-diastole. Phantom studies revealed no significant differences in image quality between 10 mm and 5 mm slice thicknesses in the axial view 80° to the long axis. In the axial view 45° to the long axis, 10 mm-thick images were inferior to 5 mm-thick images in detecting the edge of the septum and the left ventricular wall. These results indicate that the selection of slice thickness is one of the most important determinant factors in the measurement of left ventricular wall thickness and cavity dimension. (Namekawa, K).

  17. Feasibility study of 2D thick-slice MR digital subtraction angiography

    International Nuclear Information System (INIS)

    Ishimori, Yoshiyuki; Takeuchi, Miho; Higashimura, Kyouji; Komuro, Hiroyuki

    2000-01-01

    The conditions required to perform contrast MR digital subtraction angiography using a two-dimensional thick-slice high-speed gradient echo were investigated. The conditions examined in the phantom experiment included: slice profile, flip angle, imaging matrix, fat suppression, duration of the IR pulse and its frequency selectivity, flip angle of the IR pulse, and inversion time. Based on the results of the experiment, 2D thick-slice MRDSA was performed in volunteers. Under TR/TE = 5.3-9/1.3-1.8 ms conditions, the requirements were a slice thick enough to include the target region, a flip angle of 10 degrees, and a phase matrix of 96 or more. Fat suppression was required for adipose-tissue-rich regions, such as the abdomen. The optimal conditions for applying the IR preparation pulse of the IR-prepped fast gradient recalled echo as spectrally selective inversion recovery appeared to be: IR pulse duration of 20 ms, flip angle of 100 degrees, and inversion time of 40 ms. The authors concluded that it was feasible to perform 2D thick-slice MRDSA with a time resolution within 1 second. (K.H.)

  18. A Comparative Study of Spiral Tomograms with Different Slice Thicknesses in Dental Implant Planning

    International Nuclear Information System (INIS)

    Yoon Sook Ja

    1999-01-01

    To determine whether there would be a difference among spiral tomograms of different slice thicknesses in the measurement of distances used for dental implant planning, 10 dry mandibles and 40 metal balls were used to obtain a total of 120 Scanora tomograms with slice thicknesses of 2, 4 and 8 mm. Three oral radiologists interpreted each tomogram to measure the distances from the mandibular canal to the alveolar crest and to the buccal, lingual and inferior borders of the mandible. The 3 observers recorded grades of 0, 1 or 2 to evaluate the perceptibility of the alveolar crest and the superior border of the mandibular canal. For statistical analysis, repeated-measures ANOVA, Chi-square tests and the intraclass correlation coefficient (R2, alpha) were used. There was no statistically significant difference among spiral tomograms with different slice thicknesses in the measurement of the distances or in the perceptibility of the alveolar crest and mandibular canal (p>0.05). All of them showed good agreement in the reliability analysis. The perceptibility of the alveolar crest and mandibular canal was almost identical, and an excellent relationship was seen for all of them. No significant difference would be expected no matter which slice thickness is used for spiral tomography in dental implant planning, considering the thickness of the dental implant fixture.

  19. Image quality dependence on thickness of sliced rat kidney taken by a simplest DEI construction

    Energy Technology Data Exchange (ETDEWEB)

    Li Gang [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China)]. E-mail: lig@ihep.ac.cn; Chen Zhihua [China-Japan Friendship Institute of Clinical Medical Science, Yinghua Rd., Beijing 100029 (China); Wu Ziyu [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China); Ando, M. [Photon Factory, KEK, Oho 1-1, Tsukuba, Ibaraki 305-0801 (Japan); Pan Lin [China-Japan Friendship Institute of Clinical Medical Science, Yinghua Rd., Beijing 100029 (China); Wang, J.Y. [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China); Jiang, X.M. [Institute of High Energy Physics, Chinese Academy of Science, Yuquan Rd. No 19, Beijing 100039 (China)

    2005-08-11

    Excised rat kidney slices were investigated using a simplified diffraction-enhanced imaging (DEI) configuration with only two crystals, the first working as monochromator and the second working as analyzer in the Bragg geometry, developed at Beijing Synchrotron Radiation Facility (BSRF). Many fine anatomic structures of the sliced rat kidneys, with thicknesses of 2 mm and 120 μm, can be distinguished clearly in the DEI images obtained at the shoulder of a rocking curve. The authors would like to emphasize that thick and thin slices provide very different DEI images: in the thick sample, only structures with a large density gradient, or those near the surface where the X-ray exits, can be distinguished, while in the thin ones some fine structures, which cannot be distinguished in the thick sample under the same conditions, can be seen very clearly. The reason, related to the counteraction of the δ(x,y,z) gradient in the integration along the X-ray path inside the thick sample, is discussed.

  20. Image quality dependence on thickness of sliced rat kidney taken by a simplest DEI construction

    International Nuclear Information System (INIS)

    Li Gang; Chen Zhihua; Wu Ziyu; Ando, M.; Pan Lin; Wang, J.Y.; Jiang, X.M.

    2005-01-01

    Excised rat kidney slices were investigated using a simplified diffraction-enhanced imaging (DEI) configuration with only two crystals, the first working as monochromator and the second working as analyzer in the Bragg geometry, developed at Beijing Synchrotron Radiation Facility (BSRF). Many fine anatomic structures of the sliced rat kidneys, with thicknesses of 2 mm and 120 μm, can be distinguished clearly in the DEI images obtained at the shoulder of a rocking curve. The authors would like to emphasize that thick and thin slices provide very different DEI images: in the thick sample, only structures with a large density gradient, or those near the surface where the X-ray exits, can be distinguished, while in the thin ones some fine structures, which cannot be distinguished in the thick sample under the same conditions, can be seen very clearly. The reason, related to the counteraction of the δ(x,y,z) gradient in the integration along the X-ray path inside the thick sample, is discussed.

  1. Slices

    KAUST Repository

    McCrae, James

    2011-01-01

    Minimalist object representations or shape-proxies that spark and inspire human perception of shape remain an incompletely understood, yet powerful, aspect of visual communication. We explore the use of planar sections, i.e., the contours of intersection of planes with a 3D object, for creating shape abstractions, motivated by their popularity in art and engineering. We first perform a user study to show that humans do define consistent and similar planar section proxies for common objects. Interestingly, we observe a strong correlation between user-defined planes and geometric features of objects. Further, we show that the problem of finding the minimum set of planes that capture a set of 3D geometric shape features is NP-hard, and that its solution is not always the proxy a user would pick. Guided by the principles inferred from our user study, we present an algorithm that progressively selects planes to maximize feature coverage, with each selection in turn influencing the choice of subsequent planes. The algorithmic framework easily incorporates various shape features, while their relative importance values are computed and validated from the user study data. We use our algorithm to compute planar slices for various objects, validate their utility towards object abstraction using a second user study, and conclude by showing potential applications of the extracted planar slice shape proxies. © 2011 ACM.
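
The progressive plane-selection idea described above can be illustrated as a greedy set-cover heuristic over point features. This is a hedged toy version under simplified assumptions (point features, a finite candidate-plane set, uniform feature weights), not the authors' algorithm with its study-derived importance values:

```python
import numpy as np

def greedy_planes(points, candidates, eps, max_planes):
    """Greedily pick planes (unit normal n, offset d; plane n.x = d)
    that each cover the most not-yet-covered feature points within
    distance eps. Classic greedy heuristic for set cover."""
    pts = np.asarray(points, float)
    uncovered = np.ones(len(pts), dtype=bool)
    chosen = []
    for _ in range(max_planes):
        gains = []
        for n, d in candidates:
            dist = np.abs(pts @ np.asarray(n, float) - d)
            gains.append(int((uncovered & (dist <= eps)).sum()))
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break                                  # nothing left to cover
        chosen.append(best)
        n, d = candidates[best]
        uncovered &= np.abs(pts @ np.asarray(n, float) - d) > eps
    return chosen
```

Since exact minimum plane cover is NP-hard (as the paper notes), a greedy scheme of this flavor trades optimality for tractability; the paper further biases selection toward planes users would actually pick.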

  2. Study of imaging time shortening in Whole Heart MRCA. Evaluation of slice thickness

    International Nuclear Information System (INIS)

    Iwai, Mitsuhiro; Tateishi, Toshiki; Takeda, Soji; Hayashi, Ryuji

    2005-01-01

    A series of examinations in cardiac MR imaging, such as cine, perfusion, MR coronary angiography (MRCA) and viability, is generally known as a One Stop Cardiac Examination. It takes about 40 to 60 minutes to perform a One Stop Cardiac Examination, and Whole Heart MRCA accounts for most of the examination time. We therefore aimed to shorten the imaging time of Whole Heart MRCA, especially for a large imaging area such as that required for postoperative evaluation of a bypass graft, by investigating the depiction of mimic blood vessels of varying diameter while changing the slice thickness of Whole Heart MRCA. The results showed that a maximum slice thickness of about 1 mm performed well, considering that the diameters of actual coronary arteries are about 3 mm. In this study, we characterized the relationships among the slice thickness of Whole Heart MRCA, blood vessel diameter, and the reduction in examination time, which should be useful for selecting a suitable sequence depending on the patient's condition. (author)

  3. Comparing electron tomography and HRTEM slicing methods as tools to measure the thickness of nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Alloyeau, D., E-mail: alloyeau.damien@gmail.com [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Laboratoire d' Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France); Ricolleau, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Oikawa, T. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); JEOL (Europe) SAS, Espace Claude Monet, 1 Allee de Giverny, 78290 Croissy-sur-Seine (France); Langlois, C. [Laboratoire Materiaux et Phenomenes Quantiques, Universite Paris 7/CNRS, UMR 7162, 2 Place Jussieu, 75251 Paris (France); Le Bouar, Y.; Loiseau, A. [Laboratoire d' Etude des Microstructures - ONERA/CNRS, UMR 104, B.P. 72, 92322 Chatillon (France)

    2009-06-15

    Nanoparticle morphology is a key parameter in the understanding of their thermodynamical, optical, magnetic and catalytic properties. In general, nanoparticles observed in transmission electron microscopy (TEM) are viewed in projection, so that the determination of their thickness (along the projection direction) with respect to their projected lateral size is highly questionable. To date, the widely used methods to measure nanoparticle thickness in a transmission electron microscope are cross-section images or focal series in high-resolution transmission electron microscopy imaging (HRTEM 'slicing'). In this paper, we compare the focal series method with the electron tomography method to show that both techniques yield similar particle thicknesses in a size range from 1 to 5 nm, but the electron tomography method provides better statistics since more particles can be analyzed at one time. For this purpose, we have compared, on the same samples, the nanoparticle thickness measurements obtained from focal series with those determined from cross-section profiles of tomograms (tomogram slicing) perpendicular to the plane of the substrate supporting the nanoparticles. The methodology is finally applied to the comparison of CoPt nanoparticles annealed ex situ at two different temperatures, to illustrate the accuracy of the techniques in detecting small particle thickness changes.

  4. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    International Nuclear Information System (INIS)

    Um, Ki Doo; Lee, Byung Do

    2004-01-01

    The aim of this study was to evaluate the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken with a multi-detector spiral CT scanner at slice thicknesses of 1, 2, 3 and 4 mm. Three-dimensional image models were reconstructed with 3-D visualization medical software (V-works 3.0), and RP models were then fabricated. The two RP model types were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithography apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were compared according to slice thickness and RP model type. The mean absolute relative errors between linear measurements of the dry skull and the image models at 1, 2, and 3 mm slice thickness were 0.97%, 1.98%, and 3.83%, respectively. The mean absolute relative error between the dry skull and the SLA model was 0.79%, versus 2.52% for the 3D printing model. These results indicate that 3-dimensional image models reconstructed at thin slice thickness and stereolithographic RP models show relatively high accuracy.

  5. Influence of slice thickness of computed tomography and type of rapid prototyping on the accuracy of 3-dimensional medical model

    Energy Technology Data Exchange (ETDEWEB)

    Um, Ki Doo; Lee, Byung Do [Wonkwang University College of Medicine, Iksan (Korea, Republic of)

    2004-03-15

    The aim of this study was to evaluate the influence of the slice thickness of computed tomography (CT) and the type of rapid prototyping (RP) on the accuracy of 3-dimensional medical models. Transaxial CT data of a human dry skull were taken with a multi-detector spiral CT scanner at slice thicknesses of 1, 2, 3 and 4 mm. Three-dimensional image models were reconstructed with 3-D visualization medical software (V-works 3.0), and RP models were then fabricated. The two RP model types were a 3D printing model (Z402, Z Corp., Burlington, USA) and a stereolithography apparatus (SLA) model. Linear measurements of anatomical landmarks on the dry skull, the 3-D image models, and the two RP models were compared according to slice thickness and RP model type. The mean absolute relative errors between linear measurements of the dry skull and the image models at 1, 2, and 3 mm slice thickness were 0.97%, 1.98%, and 3.83%, respectively. The mean absolute relative error between the dry skull and the SLA model was 0.79%, versus 2.52% for the 3D printing model. These results indicate that 3-dimensional image models reconstructed at thin slice thickness and stereolithographic RP models show relatively high accuracy.
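The accuracy figures above are absolute relative errors expressed as percentages. As a reminder of the computation (the measurement values below are invented for illustration, not taken from the study):

```python
def relative_error_pct(measured_mm, reference_mm):
    """Absolute relative error, in percent, of a measurement vs. a reference."""
    return abs(measured_mm - reference_mm) / reference_mm * 100.0

# Hypothetical landmark distance: 100.0 mm on the dry skull, 103.8 mm on a model.
err = relative_error_pct(103.8, 100.0)
```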

  6. Does slice thickness affect diagnostic performance of 64-slice CT coronary angiography in stable and unstable angina patients with a positive calcium score?

    Energy Technology Data Exchange (ETDEWEB)

    Meijs, Matthijs F.L.; Vos, Alexander M. de; Cramer, Maarten J.; Doevendans, Pieter A.; Vries, Jan J.J. de; Rutten, Annemarieke; Budde, Ricardo P.J.; Prokop, Mathias (Dept. of Radiology, Univ. Medical Center Utrecht, Utrecht (Netherlands)), e-mail: m.meijs@umcutrecht.nl; Meijboom, W. Bob; Feyter, Pim J. de (Dept. of Cardiology, Erasmus Medical Center, Rotterdam (Netherlands))

    2010-05-15

    Background: Coronary calcification can lead to over-estimation of the degree of coronary stenosis. Purpose: To evaluate whether thinner reconstruction thickness improves the diagnostic performance of 64-slice CT coronary angiography (CTCA) in angina patients with a positive calcium score. Material and Methods: We selected 20 scans from a clinical study comparing CTCA to conventional coronary angiography (CCA) in stable and unstable angina patients, based on a low number of motion artifacts and a positive calcium score. All images were acquired at 64 x 0.625 mm, and each CTCA scan was reconstructed at slice thickness/increment 0.67 mm/0.33 mm, 0.9 mm/0.45 mm, and 1.4 mm/0.7 mm. Two reviewers blinded to the CCA results independently evaluated the scans for the presence of significant coronary artery disease (CAD) in three randomly composed series, with ≥2 weeks between series. The diagnostic performance of CTCA was compared for the different slice thicknesses using a pooled analysis of both reviewers. Significant CAD was defined as >50% diameter narrowing on quantitative CCA. Image noise (standard deviation of CT numbers) was measured in all scans. Inter-observer variability was assessed with kappa. Results: Significant CAD was present in 8% of 304 available segments. Median total Agatston calcium score was 181.8 (interquartile range 34.9-815.6). Sensitivity at 0.67 mm, 0.9 mm, and 1.4 mm slice thickness was 70% (95% confidence interval 57-83%), 74% (62-86%), and 70% (57-83%), respectively. Specificity was 85% (82-88%), 84% (81-87%), and 84% (81-87%), respectively. The positive predictive value was 30% (21-38%), 29% (21-37%), and 28% (20-36%), respectively. The negative predictive value was 97% (95-98%), 97% (96-99%), and 97% (96-99%), respectively. Kappa for inter-observer agreement was 0.56, 0.58, and 0.59. Noise decreased from 32.9 HU at 0.67 mm to 23.2 HU at 1.4 mm (P<0.001). Conclusion: Diagnostic performance of CTCA in angina patients with a positive calcium score

  7. Diagnostic limitations of 10 mm thickness single-slice computed tomography for patients with suspected appendicitis

    International Nuclear Information System (INIS)

    Kaidu, Motoki; Oyamatu, Manabu; Sato, Kenji; Saitou, Akira; Yamamoto, Satoshi; Yoshimura, Norihiko; Sasai, Keisuke

    2008-01-01

    The aim of this retrospective analysis was to evaluate the accuracy of 10-mm-thickness single helical computed tomography (CT) examination for confirming the diagnosis of appendicitis or providing a diagnosis other than appendicitis, including underlying periappendiceal neoplasms. From April 1, 2001 to March 30, 2005, a total of 272 patients with suspected appendicitis underwent CT examinations. Of the 272 patients, 106 (39%) underwent surgery. Seven CT examinations for seven patients were excluded because of inconsistency of the CT protocol. We therefore reviewed 99 CT images (99 patients) with correlation to surgical-pathological findings to clarify the diagnostic accuracy of CT examinations. We compared the postoperative diagnosis with the preoperative CT report. The final diagnoses were confirmed by macroscopic findings at surgery and pathological evaluations if necessary. Of the 99 patients, 87 had acute appendicitis at surgery. The sensitivity, specificity, and accuracy of CT were 98.9%, 75.0%, and 96.0%, respectively. The positive predictive value and negative predictive value were 96.6% and 90.0%, respectively. Among nine patients in the true-negative category, five had colon cancers, and among three patients in the false-positive category, two had cancer of the cecal-appendiceal region as the underlying disease. CT examination is useful for patients with suspected appendicitis, but radiologists should be aware of the limitations of thick-slice single helical CT. They should also be aware of the possibility of other diseases, including coincident abdominal neoplasms and underlying cecal-appendiceal cancer. (author)
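The reported sensitivity, specificity, accuracy, PPV and NPV all follow from the standard 2x2 confusion table. With 87 surgically confirmed cases among 99 patients and the 9 true negatives and 3 false positives stated in the abstract, the counts below reproduce the reported figures (the true-positive and false-negative counts are inferred from the 98.9% sensitivity, so treat them as an assumption):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening-test metrics, in percent, from confusion-table counts."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "accuracy": 100.0 * (tp + tn) / (tp + fn + tn + fp),
        "ppv": 100.0 * tp / (tp + fp),
        "npv": 100.0 * tn / (tn + fn),
    }

# 87 appendicitis cases (86 detected, 1 missed -- inferred) and 12 non-cases
# (9 true negatives, 3 false positives -- as stated in the abstract).
m = diagnostic_metrics(tp=86, fn=1, tn=9, fp=3)
```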

  8. The accuracy of ventricular volume measurement and the optimal slice thickness by using multislice helical computed tomography

    International Nuclear Information System (INIS)

    Cui Wei; Guo Yuyin

    2005-01-01

    Objective: To determine the optimal slice thickness for ventricular volume measurement by the tomographic multislice Simpson's method and to evaluate the accuracy of ventricular volume measured by multislice helical computed tomography (MSCT) in human ventricular casts. Methods: Fourteen human left ventricular (LV) and 15 right ventricular (RV) casts were scanned with an MSCT scanner using a scanning protocol similar to clinical practice. A series of LV and RV short-axis images was reconstructed with slice thicknesses of 2 mm, 3.5 mm, 5 mm, 7 mm, and 10 mm, respectively. The multislice Simpson's method was used to calculate LV and RV volumes, and true cast volume was determined by water displacement. Results: The true LV and RV volumes were (55.57 ± 28.91) ml and (64.23 ± 24.51) ml, respectively. The calculated volumes for the different slice thicknesses ranged from (58.78 ± 28.93) ml to (68.15 ± 32.57) ml for LV casts, and from (74.45 ± 27.81) ml to (88.14 ± 32.91) ml for RV casts. Both the calculated LV and RV volumes correlated closely with the corresponding true volumes (all r > 0.95, P<0.001), but overestimated the true volume by (3.21 ± 5.95) to (12.58 ± 8.56) ml for LV and (10.22 ± 8.45) to (23.91 ± 12.24) ml for RV (all P<0.01). There was a close correlation between the overestimation and the selected slice thickness for both LV and RV volume measurements (r=0.998 and 0.996, P<0.001). However, when slice thickness was reduced to 5.0 mm or less (2.0-5.0 mm), the overestimation became nonsignificant for both LV and RV volume measurements. Conclusion: Both LV and RV volumes can be accurately calculated with MSCT. A 5 mm slice thickness is sufficient and most efficient for accurate measurement of LV and RV volume. (authors)
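The multislice Simpson's (disc-summation) method referred to above computes a ventricular volume as the sum of the cross-sectional cavity areas multiplied by the slice thickness. A minimal sketch (the area values are invented for illustration):

```python
def simpson_volume_ml(areas_cm2, thickness_cm):
    """Disc-summation ('multislice Simpson') volume:
    sum of slice cavity areas times slice thickness (1 cm^3 == 1 ml)."""
    return sum(areas_cm2) * thickness_cm

# Hypothetical short-axis cavity areas (cm^2) on contiguous 0.5 cm slices.
areas = [2.0, 4.0, 5.0, 4.0, 2.0]
volume = simpson_volume_ml(areas, 0.5)
```

The study's finding that thicker slices overestimate volume is consistent with this scheme: coarser sampling along the long axis handles the tapering base and apex less accurately.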

  9. Influence of detector collimation on SNR in four different MDCT scanners using a reconstructed slice thickness of 5 mm

    International Nuclear Information System (INIS)

    Verdun, F.R.; Pachoud, M.; Monnin, P.; Valley, J.-F.; Noel, A.; Meuli, R.; Schnyder, P.; Denys, A.

    2004-01-01

    The purpose of this paper is to compare the influence of detector collimation on the signal-to-noise ratio (SNR) for a 5.0 mm reconstructed slice thickness for four multi-detector row CT (MDCT) units. SNRs were measured on Catphan test phantom images from four MDCT units: a GE LightSpeed QX/I, a Marconi MX 8000, a Toshiba Aquilion and a Siemens Volume Zoom. Five-millimetre-thick reconstructed slices were obtained from acquisitions performed using detector collimations of 2.0-2.5 mm and 5.0 mm, 120 kV, a 360° tube rotation time of 0.5 s, a wide range of mA values, and pitch values in the ranges of 0.75-0.85 and 1.25-1.5. For each set of acquisition parameters, a Wiener spectrum was also calculated. Statistical differences in SNR for the different acquisition parameters were evaluated using Student's t-test (P<0.05). The influence of detector collimation on the SNR for a 5.0-mm reconstructed slice thickness differs between MDCT scanners. At pitch values lower than unity, the use of a small detector collimation to produce 5.0-mm-thick slices is beneficial for one unit and detrimental for another. At pitch values higher than unity, using a small detector collimation is beneficial for two units. One manufacturer uses different reconstruction filters when switching from a 2.5- to a 5.0-mm detector collimation. For a comparable reconstructed slice thickness, using a smaller detector collimation does not always reduce image noise. Thus, the impact of detector collimation on image noise should be determined not only by standard deviation calculations, but also by assessing the power spectra of the noise. (orig.)
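In such phantom measurements, SNR is commonly taken as the mean CT number in a uniform region of interest divided by its standard deviation; the abstract does not spell out its exact definition, so this conventional form is an assumption. A minimal sketch with invented HU samples:

```python
from statistics import mean, pstdev

def roi_snr(hu_values):
    """Signal-to-noise ratio of a uniform ROI: mean HU / standard deviation."""
    return mean(hu_values) / pstdev(hu_values)

# Hypothetical HU samples from a uniform region of a Catphan phantom image.
snr = roi_snr([100, 102, 98, 100, 101, 99])
```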

  10. Spatial interpolation and simulation of post-burn duff thickness after prescribed fire

    Science.gov (United States)

    Peter R. Robichaud; S. M. Miller

    1999-01-01

    Prescribed fire is used as a site treatment after timber harvesting. These fires result in spatial patterns with some portions consuming all of the forest floor material (duff) and others consuming little. Prior to the burn, spatial sampling of duff thickness and duff water content can be used to generate geostatistical spatial simulations of these characteristics....

  11. The impact of computed tomography slice thickness on the assessment of stereotactic, 3D conformal and intensity-modulated radiotherapy of brain tumors.

    Science.gov (United States)

    Caivano, R; Fiorentino, A; Pedicini, P; Califano, G; Fusco, V

    2014-05-01

    To evaluate radiotherapy treatment planning accuracy by varying computed tomography (CT) slice thickness and tumor size. CT datasets from patients with primary brain disease and metastatic brain disease were selected. Tumor volumes ranging from about 2.5 to 100 cc and CT scans at different slice thicknesses (1, 2, 4, 6 and 10 mm) were used to perform treatment planning (1-, 2-, 4-, 6- and 10-CT, respectively). For each slice thickness, a conformity index (CI) referring to the 100, 98, 95 and 90 % isodoses and the tumor size was computed. All the CIs and volumes obtained were compared to evaluate the impact of CT slice thickness on treatment plans. The smallest volumes are significantly reduced when defined on 1-CT with respect to 4- and 6-CT, while CT slice thickness does not affect target definition for the largest volumes. The mean CI for all the considered isodoses and CT slice thicknesses shows no statistical differences when 1-CT is compared to 2-CT. Comparing the mean CI of 1- with 4-CT and of 1- with 6-CT, statistical differences appear only for the smallest volumes with respect to the 100, 98 and 95 % isodoses; the difference in CI for the 90 % isodose was not statistically significant for any of the considered planning target volumes (PTVs). The accuracy of radiotherapy tumor volume definition depends on CT slice thickness. To achieve better tumor definition and dose coverage, 1- and 2-CT would be suitable for small targets, while 4- and 6-CT are suitable for the other volumes.
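A conformity index compares how well a prescription isodose volume matches the target. Definitions vary in the literature; the abstract does not state which one was used, so the simple RTOG-style ratio (reference isodose volume divided by target volume) sketched below on voxel ID sets is purely an illustrative assumption:

```python
def conformity_index(isodose_voxels, target_voxels):
    """RTOG-style CI: volume enclosed by the reference isodose / target volume.
    CI == 1 is ideal; > 1 indicates over-irradiation, < 1 under-coverage."""
    return len(isodose_voxels) / len(target_voxels)

# Hypothetical voxel ID sets: 110 voxels inside the 95% isodose, 100 in the PTV.
ci = conformity_index(set(range(110)), set(range(100)))
```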

  12. Clinical lymph node staging-Influence of slice thickness and reconstruction kernel on volumetry and RECIST measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fabel, M., E-mail: m.fabel@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Wulff, A., E-mail: a.wulff@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Heckel, F., E-mail: frank.heckel@mevis.fraunhofer.de [Fraunhofer MeVis, Universitaetsallee 29, 28359 Bremen (Germany); Bornemann, L., E-mail: lars.bornemann@mevis.fraunhofer.de [Fraunhofer MeVis, Universitaetsallee 29, 28359 Bremen (Germany); Freitag-Wolf, S., E-mail: freitag@medinfo.uni-kiel.de [Institute of Medical Informatics and Statistics, Brunswiker Strasse 10, 24105 Kiel (Germany); Heller, M., E-mail: martin.heller@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Biederer, J., E-mail: juergen.biederer@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Bolte, H., E-mail: hendrik.bolte@ukmuenster.de [Department of Nuclear Medicine, University Hospital Muenster, Albert-Schweitzer-Campus 1, Gebaeude A1, D-48149 Muenster (Germany)

    2012-11-15

    Purpose: Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. Today, the common standard for measuring tumour growth or shrinkage is one-dimensional RECIST 1.1. A proposed alternative method for therapy monitoring is computer-aided volumetric analysis. High reliability and accuracy of volumetry in lung metastases have been proven in experimental studies. However, other metastatic lesions, such as enlarged lymph nodes, are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) as well as short axis diameters (SAD) were compared to automated RECIST measurements. Materials and methods: Multislice CT of the chest, abdomen and pelvis of 15 patients with lymph node metastases of malignant melanoma was included. Raw data were reconstructed using different slice thicknesses (1-5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and were compared to a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences; precision of measurements was computed as relative measurement deviation. Results: Mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant; however, a trend towards the middle soft tissue kernel could be observed. Between automated and manual short axis diameter (SAD, RECIST 1.1) and long axis diameter (LAD, RECIST 1.0) no

  13. Clinical lymph node staging—Influence of slice thickness and reconstruction kernel on volumetry and RECIST measurements

    International Nuclear Information System (INIS)

    Fabel, M.; Wulff, A.; Heckel, F.; Bornemann, L.; Freitag-Wolf, S.; Heller, M.; Biederer, J.; Bolte, H.

    2012-01-01

    Purpose: Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. Today, the common standard for measuring tumour growth or shrinkage is one-dimensional RECIST 1.1. A proposed alternative method for therapy monitoring is computer-aided volumetric analysis. High reliability and accuracy of volumetry in lung metastases have been proven in experimental studies. However, other metastatic lesions, such as enlarged lymph nodes, are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) as well as short axis diameters (SAD) were compared to automated RECIST measurements. Materials and methods: Multislice CT of the chest, abdomen and pelvis of 15 patients with lymph node metastases of malignant melanoma was included. Raw data were reconstructed using different slice thicknesses (1–5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and were compared to a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences; precision of measurements was computed as relative measurement deviation. Results: Mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant; however, a trend towards the middle soft tissue kernel could be observed. Between automated and manual short axis diameter (SAD, RECIST 1.1) and long axis diameter (LAD, RECIST 1.0) no
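The absolute percentage error (APE) used in these two records compares each measured volume to its reference. A sketch of the calculation (the volume values below are invented):

```python
def ape(measured_ml, reference_ml):
    """Absolute percentage error of a volume measurement vs. its reference."""
    return abs(measured_ml - reference_ml) / reference_ml * 100.0

# Hypothetical lymph node: reference volume 2.00 ml, measured volume 2.15 ml.
error = ape(2.15, 2.00)
```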

  14. The influence of secondary reconstruction slice thickness on NewTom 3G cone beam computed tomography-based radiological interpretation of sheep mandibular condyle fractures.

    Science.gov (United States)

    Sirin, Yigit; Guven, Koray; Horasan, Sinan; Sencan, Sabri; Bakir, Baris; Barut, Oya; Tanyel, Cem; Aral, Ali; Firat, Deniz

    2010-11-01

    The objective of this study was to examine the diagnostic accuracy of different secondary reconstruction slice thicknesses of cone beam computed tomography (CBCT) for artificially created mandibular condyle fractures. A total of 63 sheep heads with or without condylar fractures were scanned with a NewTom 3G CBCT scanner. Multiplanar reformatted (MPR) views at 0.2-mm, 1-mm, 2-mm, and 3-mm secondary reconstruction slice thicknesses were evaluated by 7 observers. Inter- and intraobserver agreements were calculated with weighted kappa statistics. Receiver operating characteristic (ROC) curve analysis was used to statistically compare the area under the curve (AUC) for each slice thickness. The kappa coefficients varied from fair to excellent. The AUCs of the 0.2-mm and 1-mm slice thicknesses were found to be significantly higher than those of 2 mm and 3 mm for some types of fractures. CBCT was found to be accurate in detecting all variants of fractures at 0.2 mm and 1 mm. However, 2-mm and 3-mm slices were not suitable for detecting fissure, complete, and comminuted types of mandibular condyle fractures. Copyright © 2010 Mosby, Inc. All rights reserved.
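Inter-observer agreement above was quantified with weighted kappa. The unweighted form, which the weighted statistic generalizes, can be sketched as follows (a minimal illustration of the chance-corrected agreement idea, not the study's exact weighted computation; the ratings are invented):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement of two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical fracture (1) / no-fracture (0) calls by two observers on 4 scans.
kappa = cohens_kappa([1, 1, 0, 0], [1, 0, 0, 0])
```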

  15. CT liver volumetry using three-dimensional image data in living donor liver transplantation: Effects of slice thickness on volume calculation

    Science.gov (United States)

    Hori, Masatoshi; Suzuki, Kenji; Epstein, Mark L.; Baron, Richard L.

    2011-01-01

    The purpose was to evaluate the relationship between slice thickness and calculated volume in CT liver volumetry by comparing the results for images with various slice thicknesses, including three-dimensional images. Twenty adult potential liver donors (12 men, 8 women; mean age, 39 years; range, 24–64) underwent CT with a 64-section multi-detector row CT scanner after intravenous injection of contrast material. Four image sets with slice thicknesses of 0.625 mm, 2.5 mm, 5 mm, and 10 mm were used. First, a program developed in our laboratory for automated liver extraction was applied to the CT images, and the liver boundary was obtained automatically. Then, an abdominal radiologist reviewed all images on which the automatically extracted boundaries were superimposed, and edited the boundary on each slice to enhance the accuracy. Liver volumes were determined by counting the voxels within the liver boundary. Mean whole liver volumes estimated with CT were 1322.5 cm3 on 0.625-mm, 1313.3 cm3 on 2.5-mm, 1310.3 cm3 on 5-mm, and 1268.2 cm3 on 10-mm images. Volumes calculated for three-dimensional (0.625-mm-thick) images were significantly larger than those for thicker images. It would therefore be desirable to use thin-slice images for CT liver volumetry; if not, three-dimensional images could be essential. PMID:21850689
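Volume determination by "counting the voxels within the liver boundary" amounts to multiplying the voxel count by the volume of a single voxel. A sketch of that conversion (the voxel count and spacings below are illustrative assumptions, not the study's data):

```python
def voxel_volume_cm3(n_voxels, spacing_mm):
    """Volume of a segmented region: voxel count x per-voxel volume (mm^3 -> cm^3)."""
    sx, sy, sz = spacing_mm
    return n_voxels * sx * sy * sz / 1000.0

# Hypothetical liver mask: 2.6 million voxels of 0.7 x 0.7 x 1.0 mm.
liver_cm3 = voxel_volume_cm3(2_600_000, (0.7, 0.7, 1.0))
```

At larger slice spacing sz there are fewer, thicker voxels, and partial-volume effects at the liver boundary grow, which is one way to read the volume differences reported above.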

  16. Emergency department CT screening of patients with nontraumatic neurological symptoms referred to the posterior fossa: comparison of thin versus thick slice images.

    Science.gov (United States)

    Kamalian, Shervin; Atkinson, Wendy L; Florin, Lauren A; Pomerantz, Stuart R; Lev, Michael H; Romero, Javier M

    2014-06-01

    Evaluation of the posterior fossa (PF) on 5-mm-thick helical CT images (the current default) has improved diagnostic accuracy compared to 5-mm sequential CT images; however, 5-mm-thick images may not be ideal for PF pathology due to volume averaging of rapid changes in anatomy in the Z-direction. Therefore, we sought to determine if routine review of 1.25-mm-thin helical CT images has superior accuracy in screening for nontraumatic PF pathology. MRI proof of diagnosis was obtained within 6 h of helical CT acquisition for 90 consecutive ED patients with, and 88 without, posterior fossa lesions. Helical CT images were post-processed at 1.25- and 5-mm axial slice thickness. Two neuroradiologists blinded to the clinical/MRI findings reviewed both image sets. Interobserver agreement and accuracy were rated using kappa statistics and ROC analysis, respectively. Of the 90/178 (51 %) who were MR positive, 60/90 (66 %) had stroke and 30/90 (33 %) had other etiologies. There was excellent interobserver agreement (κ > 0.97) for both thick and thin slice assessments. The accuracy, sensitivity, and specificity for 1.25-mm images were 65, 44, and 84 %, respectively, and for 5-mm images were 67, 45, and 85 %, respectively. The diagnostic accuracy was not significantly different (p > 0.5). In this cohort of patients with nontraumatic neurological symptoms referred to the posterior fossa, 1.25-mm-thin slice CT reformatted images do not have superior accuracy compared to 5-mm-thick images. This information has implications for optimizing resource utilization and efficiency in a busy emergency room. Review of 1.25-mm-thin images may help diagnostic accuracy only when review of the default 5-mm-thick images is inconclusive.
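The ROC analysis mentioned above reduces, for a single set of reader confidence scores, to the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation of AUC). A sketch with invented scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    ties count one half (Mann-Whitney formulation)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical reader confidence scores for MR-positive and MR-negative cases.
auc = roc_auc([0.9, 0.8, 0.6], [0.7, 0.3, 0.2])
```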

  17. Multivariate interpolation

    Directory of Open Access Journals (Sweden)

    Pakhnutov I.A.

    2017-04-01

    The paper deals with iterative interpolation methods in the form of recursive procedures defined by simple basis functions (the interpolation basis), which need not be real-valued. These basis functions are essentially arbitrary, chosen at the user's discretion. The resulting interpolant construction is notably versatile: it can be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis functions is as wide as possible, subject only to nonessential restrictions. In particular, the method coincides with traditional polynomial interpolation (mimicking the Lagrange method) in the real one-dimensional case, or with rational, exponential, etc. interpolation in other cases. As an iterative process, the interpolation is quite flexible and allows a single procedure to change the type of interpolation depending on the node number in a given set. Linear (and some nonlinear) choices of interpolation basis allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices, and the interpolated data can themselves be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.
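An iterative recursive interpolation that coincides with Lagrange polynomial interpolation in the real one-dimensional case is well illustrated by Neville's algorithm, which builds the interpolant's value at a point by recursively combining lower-order interpolants. This standard sketch is an illustration of that family of methods, not the paper's general construction:

```python
def neville(xs, ys, x):
    """Evaluate at x the polynomial interpolating the points (xs[i], ys[i]),
    via Neville's recursive scheme (equivalent to Lagrange interpolation)."""
    p = list(ys)
    n = len(xs)
    for k in range(1, n):        # order of the partial interpolants
        for i in range(n - k):
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]

# The parabola y = x^2 through three nodes, evaluated at x = 3.
value = neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 3.0)
```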

  18. Automated pulmonary nodule volumetry with an optimized algorithm - accuracy at different slice thicknesses compared to unidimensional and bidimentional measurements

    International Nuclear Information System (INIS)

    Vogel, M.N.; Schmuecker, S.; Maksimovich, O.; Claussen, C.D.; Horger, M.; Vonthein, R.; Bethge, W.; Dicken, V.

    2008-01-01

    Purpose: This in-vivo study quantifies the accuracy of automated pulmonary nodule volumetry in reconstructions with different slice thicknesses (ST) of clinical routine CT scans. The accuracy of volumetry is compared to that of unidimensional and bidimensional measurements. Materials and Methods: 28 patients underwent contrast-enhanced 64-row CT scans of the chest and abdomen obtained in the clinical routine. All scans were reconstructed with 1, 3, and 5 mm ST. Volume, maximum axial diameter, and areas following the guidelines of Response Evaluation Criteria in Solid Tumors (RECIST) and the World Health Organization (WHO) were measured in all 101 lesions located in the overlap region of both scans using the new software tool OncoTreat (MeVis, Germany). The accuracy of quantifications in both scans was evaluated using the Bland-Altman method. The reproducibility of measurements in dependence on the ST was compared using the likelihood ratio Chi-squared test. Results: A total of 101 nodules were identified in all patients. Segmentation was considered successful in 88.1% of the cases without local manual correction, which was deliberately not employed in this study. For 80 nodules all 6 measurements were successful; these were statistically evaluated. The volumes were in the range 0.1 to 15.6 ml. Of all 80 lesions, 34 (42%) had direct contact with the parietal or diaphragmatic pleura and were termed parapleural, 32 (40%) were paravascular, 7 (9%) were both parapleural and paravascular, and the remaining 21 (27%) were free-standing in the lung. The trueness differed significantly (Chi-square 7.22, p value 0.027) and was best at an ST of 3 mm and worst at 5 mm. Differences in precision were not significant (Chi-square 5.20, p value 0.074). The limits of agreement for an ST of 3 mm were ± 17.5% of the mean volume for volumetry, ± 1.3 mm for maximum diameters, and ± 31.8% for the calculated areas. Conclusion: Automated volumetry of pulmonary nodules using Onco

  19. A Unified Approach to Diffusion Direction Sensitive Slice Registration and 3-D DTI Reconstruction From Moving Fetal Brain Anatomy

    DEFF Research Database (Denmark)

    Hansen, Mads Fogtmann; Seshamani, Sharmishtaa; Kroenke, Christopher

    2014-01-01

    to the underlying anatomy. Previous image registration techniques have been described to estimate the between-slice fetal head motion, allowing the reconstruction of a 3D diffusion estimate on a regular grid using interpolation. We propose an Approach to Unified Diffusion Sensitive Slice Alignment and Reconstruction (AUDiSSAR) that explicitly formulates a process for diffusion direction sensitive DW-slice-to-DTI-volume alignment. This also incorporates image resolution modeling to iteratively deconvolve the effects of the imaging point spread function using the multiple views provided by thick slices acquired...

  20. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are

  1. Efficacy of UV-C irradiation for inactivation of food-borne pathogens on sliced cheese packaged with different types and thicknesses of plastic films.

    Science.gov (United States)

    Ha, Jae-Won; Back, Kyeong-Hwan; Kim, Yoon-Hee; Kang, Dong-Hyun

    2016-08-01

    In this study, the efficacy of UV-C light for inactivating Escherichia coli O157:H7, Salmonella Typhimurium, and Listeria monocytogenes on sliced cheese packaged with 0.07 mm films of polyethylene terephthalate (PET), polyvinylchloride (PVC), polypropylene (PP), and polyethylene (PE) was investigated. The results show that, compared with PET and PVC, the PP and PE films yielded significantly greater reductions of the three pathogens relative to inoculated but non-treated controls. Therefore, PP and PE films of different thicknesses (0.07 mm, 0.10 mm, and 0.13 mm) were then evaluated for pathogen reduction on inoculated sliced cheese samples. Unlike the 0.10 and 0.13 mm films, the 0.07 mm thick PP and PE films showed no statistically significant difference in pathogen reduction from non-packaged treated samples. Moreover, there were no statistically significant differences between the efficacy of PP and PE films. These results suggest that adjusted PP or PE film packaging in conjunction with UV-C radiation can be applied to control foodborne pathogens in the dairy industry. Copyright © 2016. Published by Elsevier Ltd.

  2. Three-dimensional image analysis of the skull using variable CT scanning protocols-effect of slice thickness on measurement in the three-dimensional CT images

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Ho Gul; Kim, Kee Deog; Park, Hyok; Kim, Dong Ook; Jeong, Hai Jo; Kim, Hee Joung; Yoo, Sun Kook; Kim, Yong Oock; Park, Chang Seo [Yonsei University College of Medicine, Seoul (Korea, Republic of)

    2004-07-15

    To evaluate the quantitative accuracy of three-dimensional (3D) images by means of comparing distance measurements on the 3D images with direct measurements of a dry human skull according to slice thickness and scanning mode. An observer directly measured the distances of 21 line items between 12 orthodontic landmarks on the skull surface using a digital vernier caliper, each repeated five times. The dry human skull was scanned with a helical CT with various slice thicknesses (3, 5, 7 mm) and acquisition modes (Conventional and Helical). The same observer measured the corresponding distances of the same items on reconstructed 3D images with the internal program of V-works 4.0 (Cybermed Inc., Seoul, Korea). The quantitative accuracy of the distance measurements was statistically evaluated with Wilcoxon's two-sample test. 11 line items in Conventional 3 mm, 8 in Helical 3 mm, 11 in Conventional 5 mm, 10 in Helical 5 mm, 5 in Conventional 7 mm and 9 in Helical 7 mm showed no statistically significant difference. The average difference between direct measurements and measurements on 3D CT images was within 2 mm in 19 line items of Conventional 3 mm, 20 of Helical 3 mm, 15 of Conventional 5 mm, 18 of Helical 5 mm, 11 of Conventional 7 mm and 16 of Helical 7 mm. Considering image quality and the patient's exposure time, a scanning protocol of Helical 5 mm is recommended for 3D image analysis of the skull in CT.

  3. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  4. Computer-assisted detection of pulmonary nodules: evaluation of diagnostic performance using an expert knowledge-based detection system with variable reconstruction slice thickness settings

    International Nuclear Information System (INIS)

    Marten, Katharina; Grillhoesl, Andreas; Seyfarth, Tobias; Rummeny, Ernst J.; Engelke, Christoph; Obenauer, Silvia

    2005-01-01

    The purpose of this study was to evaluate the performance of a computer-assisted diagnostic (CAD) tool using various reconstruction slice thicknesses (RST). Image data of 20 patients undergoing multislice CT for pulmonary metastasis were reconstructed at 4.0, 2.0 and 0.75 mm RST and assessed by two blinded radiologists (R1 and R2) and CAD. Data were compared against an independent reference standard. Nodule subgroups (diameter >10, 4-10, <4 mm) were assessed separately. Statistical methods were ROC analysis and the Mann-Whitney U test. CAD was outperformed by readers at 4.0 mm (Az = 0.18, 0.62 and 0.69 for CAD, R1 and R2, respectively; P<0.05), comparable at 2.0 mm (Az = 0.57, 0.70 and 0.69 for CAD, R1 and R2, respectively), and superior using 0.75 mm RST (Az = 0.80, 0.70 and 0.70 and sensitivity = 0.74, 0.53 and 0.53 for CAD, R1 and R2, respectively; P<0.05). Reader performances were significantly enhanced by CAD (Az = 0.93 and 0.95 for R1 + CAD and R2 + CAD, respectively, P<0.05). The CAD advantage was best for nodules <10 mm (detection rates = 93.3, 89.9, 47.9 and 47.9% for R1 + CAD, R2 + CAD, R1 and R2, respectively). CAD using 0.75 mm RST outperformed radiologists in nodules below 10 mm in diameter and should be used to replace a second radiologist. CAD is not recommended for 4.0 mm RST. (orig.)

  5. A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY

    Directory of Open Access Journals (Sweden)

    Hussein Atoui

    2011-05-01

    Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieved performance similar to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).

  6. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects that are very different from the opposite background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
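The shape-based idea described above can be sketched for the binary case it builds on: each segmented slice is converted to a signed distance map, the maps are blended, and the blend is thresholded to produce the intermediate slice. This is a minimal illustration of the general principle, not the authors' polygon-based method; the function names are ours.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed Euclidean distance map of a binary mask
    (negative inside the object, positive outside)."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    inside = pts[mask.ravel() > 0]
    outside = pts[mask.ravel() == 0]
    # distance from every pixel to the nearest pixel of the other phase
    d_to_in = np.sqrt(((pts[:, None, :] - inside[None, :, :]) ** 2).sum(-1)).min(1)
    d_to_out = np.sqrt(((pts[:, None, :] - outside[None, :, :]) ** 2).sum(-1)).min(1)
    sd = np.where(mask.ravel() > 0, -d_to_out, d_to_in)
    return sd.reshape(mask.shape)

def shape_interp(mask_a, mask_b, t):
    """Interpolate an intermediate binary slice between two segmented
    slices by blending their signed distance maps (t in [0, 1])."""
    sd = (1 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return sd <= 0
```

At t = 0 and t = 1 the scheme reproduces the input slices exactly; in between, the zero level set of the blended distance map morphs one shape into the other.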

  7. Interpolation functors and interpolation spaces

    CERN Document Server

    Brudnyi, Yu A

    1991-01-01

    The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...

  8. Architectural slicing

    DEFF Research Database (Denmark)

    Christensen, Henrik Bærbak; Hansen, Klaus Marius

    2013-01-01

    Architectural prototyping is a widely used practice, concerned with taking architectural decisions through experiments with lightweight implementations. However, many architectural decisions are only taken when systems are already (partially) implemented. This is problematic in the context...... of architectural prototyping since experiments with full systems are complex and expensive and thus architectural learning is hindered. In this paper, we propose a novel technique for harvesting architectural prototypes from existing systems, "architectural slicing", based on dynamic program slicing. Given...... a system and a slicing criterion, architectural slicing produces an architectural prototype that contains the elements in the architecture that are dependent on the elements in the slicing criterion. Furthermore, we present an initial design and implementation of an architectural slicer for Java....

  9. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the process of display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a whole concept for 3D medical image interpolation based on cubic convolution, and the six methods, with different sharpness control parameters, are formulated in detail. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we end with a recommendation for 3D medical images under different situations.
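The cubic convolution family referred to above is built on Keys' piecewise-cubic kernel with a sharpness control parameter a. A minimal 1-D sketch (the record's methods are 3-D; a = -0.5 is the common third-order-accurate choice, and the function names are ours):

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel; `a` is the sharpness control
    parameter (a = -0.5 gives third-order accuracy)."""
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    near = s <= 1
    far = (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    out[far] = a * s[far]**3 - 5*a * s[far]**2 + 8*a * s[far] - 4*a
    return out

def cubic_interp(samples, x, a=-0.5):
    """Interpolate uniformly spaced 1-D samples at position x
    (in sample-index units) by cubic convolution."""
    n = len(samples)
    k = int(np.floor(x))
    val = 0.0
    for i in range(k - 1, k + 3):          # four nearest samples
        j = min(max(i, 0), n - 1)          # clamp indices at the borders
        val += samples[j] * cubic_kernel(np.array([x - i]), a)[0]
    return val
```

Because the kernel satisfies u(0) = 1 and u(±1) = u(±2) = 0, the interpolant passes exactly through the samples; with a = -0.5 it also reproduces quadratic data exactly in the interior.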

  10. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
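Of the linear methods listed, bilinear interpolation is the simplest genuinely two-dimensional one, and because it is separable it reduces to 1-D linear interpolation along each axis in turn. A minimal sketch (the function name is ours):

```python
import numpy as np

def bilinear(img, y, x):
    """Bilinear interpolation of a 2-D array at real-valued (y, x):
    interpolate along x on the two bracketing rows, then along y."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)      # clamp at the image border
    x1 = min(x0 + 1, img.shape[1] - 1)
    dy, dx = y - y0, x - x0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot
```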

  11. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed, followed by its extension to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution increase are proposed. Finally, experimental results of computer simulations are presented.

  12. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  13. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

  14. CMB anisotropies interpolation

    NARCIS (Netherlands)

    Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

    2010-01-01

    We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging

  15. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation to data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and first mixed partial (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
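The univariate algorithm that the paper extends limits Hermite tangents so that the interpolant stays monotone wherever the data are. A 1-D sketch using the circle form of the Fritsch-Carlson criterion (an illustration of the idea, not the paper's bicubic algorithm; names are ours):

```python
import numpy as np

def monotone_cubic(x, y, xq):
    """Piecewise-cubic Hermite interpolation with Fritsch-Carlson
    slope limiting, preserving monotonicity of the data (1-D sketch)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = np.diff(x)
    delta = np.diff(y) / h                    # secant slopes
    m = np.zeros_like(y)
    m[1:-1] = (delta[:-1] + delta[1:]) / 2    # initial tangents
    m[0], m[-1] = delta[0], delta[-1]
    for i in range(len(delta)):               # limit the tangents
        if delta[i] == 0:
            m[i] = m[i + 1] = 0.0
        else:
            a, b = m[i] / delta[i], m[i + 1] / delta[i]
            r = np.hypot(a, b)
            if r > 3:                         # circle criterion a^2+b^2 <= 9
                m[i], m[i + 1] = 3 * a / r * delta[i], 3 * b / r * delta[i]
    out = []
    for q in xq:                              # cubic Hermite evaluation
        i = min(np.searchsorted(x, q, 'right') - 1, len(h) - 1)
        i = max(i, 0)
        t = (q - x[i]) / h[i]
        h00 = 2*t**3 - 3*t**2 + 1; h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2;    h11 = t**3 - t**2
        out.append(h00*y[i] + h10*h[i]*m[i] + h01*y[i+1] + h11*h[i]*m[i+1])
    return np.array(out)
```

Scaling the tangent pair back onto the radius-3 disk keeps each cubic piece monotone between its knots, which is the property the bicubic extension generalizes to rectangular meshes.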

  16. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  17. Feature displacement interpolation

    DEFF Research Database (Denmark)

    Nielsen, Mads; Andresen, Per Rønsholt

    1998-01-01

    Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Also estimation of missing image data may be phrased in this framework. Since the features...... often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (Kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularizations, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface....
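The Kriging estimator referred to above amounts to a covariance-weighted linear predictor. A minimal 1-D simple-kriging sketch with a Gaussian covariance (the covariance model and parameters are illustrative, not those of the paper):

```python
import numpy as np

def simple_krige(xs, ys, xq, length=1.0, var=1.0):
    """Simple kriging (zero mean, Gaussian covariance) in one dimension:
    predict at query points xq from observations (xs, ys)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    cov = lambda a, b: var * np.exp(-((a[:, None] - b[None, :]) / length) ** 2)
    K = cov(xs, xs) + 1e-10 * np.eye(len(xs))   # jitter for numerical stability
    weights = np.linalg.solve(K, cov(xs, np.asarray(xq, float)))
    return weights.T @ ys
```

The predictor honors the data exactly and relaxes back to the (zero) mean far from all observations, which is the behavior the record contrasts with Gaussian and Tikhonov smoothing.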

  18. Extension Of Lagrange Interpolation

    Directory of Open Access Journals (Sweden)

    Mousa Makey Krady

    2015-01-01

    Full Text Available In this paper we present a generalization of Lagrange interpolation polynomials in higher dimensions by using Cramer's formula. The aim of this paper is to construct polynomials in space whose error tends to zero.
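For reference, classical one-dimensional Lagrange interpolation, which the paper generalizes, evaluates the unique degree-(n-1) polynomial through n nodes via the basis polynomials L_i. A minimal sketch (the function name is ours):

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the
    points (xs[i], ys[i]) at x, using the basis-polynomial formula."""
    xs = np.asarray(xs, float)
    total = 0.0
    for i, yi in enumerate(ys):
        others = np.delete(xs, i)
        li = np.prod((x - others) / (xs[i] - others))  # basis polynomial L_i(x)
        total += yi * li
    return total
```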

  19. Endorectal 3D T2-weighted 1 mm-slice thickness MRI for prostate cancer staging at 1.5 Tesla: Should we reconsider the indirect signs of extracapsular extension according to the D’Amico tumor risk criteria?

    International Nuclear Information System (INIS)

    Cornud, F.; Rouanne, M.; Beuvon, F.; Eiss, D.; Flam, T.; Liberatore, M.; Zerbib, M.; Delongchamps, N.B.

    2012-01-01

    Purpose: To evaluate the accuracy of a 3D endorectal 1 mm-thick-slice MRI acquisition for local staging of low, intermediate and high D’Amico risk prostate cancer (PCa). Materials and methods: 178 consecutive patients underwent a multiparametric MRI protocol prior to radical prostatectomy (RP). T2W images were acquired with the 3D sampling perfection with application optimized contrasts using different flip angle evolutions (SPACE) sequence (5 min acquisition time). Direct and indirect MRI signs of extracapsular extension (ECE) were evaluated to predict the pT stage. The likelihood of SVI (seminal vesicle invasion) was also assessed. Results: Histology showed ECE and SVI in 38 (21%) and 12 (7%) cases, respectively. MRI sensitivity and specificity to detect ECE were 55 and 96% if direct signs of ECE were used and 84 and 89% (p < 0.05) if both direct and indirect signs were combined. D’Amico criteria did not influence MRI performance. Sensitivity and specificity for SVI detection were 83% and 99%. Conclusions: 3D data sets acquired with the SPACE sequence provide a high accuracy for local staging of prostate cancer. The use of indirect signs of ECE may be recommended in low D’Amico risk tumors to optimise patient selection for active surveillance or focal therapy.

  20. Digital time-interpolator

    International Nuclear Information System (INIS)

    Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report presents a description of the design of a digital time meter. This time meter should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated on a computer in Pascal. On the basis of this, the best method was chosen and used in the design. In order to test the principal operation of the circuit, a part of the circuit was constructed with which the interpolation could be tested. The remainder of the circuit was simulated with a computer, so no data are available about the operation of the complete circuit in practice. The interpolation part, however, is the most critical part; the remainder of the circuit is more or less simple logic. The report also gives a description of the principle of interpolation and the design of the circuit. The measurement results for the prototype are presented finally. (author). 3 refs.; 37 figs.; 2 tabs

  1. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of the PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which cubic spline interpolation is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used in reconstruction to recover the missing information between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.

  2. Multivariate Birkhoff interpolation

    CERN Document Server

    Lorentz, Rudolph A

    1992-01-01

    The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

  3. The bases for the use of interpolation in helical computed tomography: an explanation for radiologists

    International Nuclear Information System (INIS)

    Garcia-Santos, J. M.; Cejudo, J.

    2002-01-01

    In contrast to conventional computed tomography (CT), helical CT requires the application of interpolators to achieve image reconstruction. This is because the projections processed by the computer are not situated in the same plane. Since the introduction of helical CT, a number of interpolators have been designed in an attempt to maintain the thickness of the reconstructed section as close as possible to the thickness of the X-ray beam. The purpose of this article is to discuss the function of these interpolators, stressing the advantages and considering the possible inconveniences of high-grade curved interpolators with respect to standard linear interpolators. (Author) 7 refs
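The simplest such interpolator, 360° linear interpolation, forms a synthetic projection in the reconstruction plane by linearly weighting the two projections measured at the same tube angle on adjacent rotations. A minimal sketch of the weighting (illustrative only, not tied to any scanner; names are ours):

```python
def z_interpolate(p_low, p_high, z_low, z_high, z_recon):
    """360-degree linear z-interpolation for helical CT: blend two
    projections acquired at the same tube angle but different table
    positions into a synthetic projection in the plane z_recon."""
    w = (z_recon - z_low) / (z_high - z_low)   # 0 at z_low, 1 at z_high
    return [(1 - w) * a + w * b for a, b in zip(p_low, p_high)]
```

Higher-order ("curved") interpolators discussed in the article replace this linear weight with smoother weighting functions over more than two rotations.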

  4. Time-interpolator

    International Nuclear Information System (INIS)

    Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It concerns a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty accompanying the use of ECL logic is keeping the interconnections as short as possible and terminating the outputs properly in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into start and stop signals. The analog part consists of a Time to Amplitude Converter (TAC) and an analog-to-digital converter. (author). 3 refs.; 30 figs

  5. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by the introduction of gradation in this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame. Consequently, the validity of the model’s dynamics is not secured. The aim of this paper is to present the Boolean consistent generalization of Boolean networks, interpolative Boolean networks (IBN). The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and it offers greater descriptive power as compared with traditional models. For illustrative purposes, IBN is compared to the models based on existing real-valued approaches. Due to the complexity of most systems to be analyzed and the characteristics of interpolative Boolean algebra, software support is developed to provide graphical and numerical tools for complex system modeling and analysis.

  6. Influence of γ-irradiation on drying of slice potato

    International Nuclear Information System (INIS)

    Wang Jun; Chao Yan; Fu Junjie; Wang Jianping

    2001-01-01

    A new technology is introduced in which food products are dried with hot air after pretreatment by irradiation. The influence of irradiation dose, hot-air temperature and thickness of the potato slices on the dehydration rate and temperature of the irradiated potato was studied. It is concluded that all three factors (irradiation dose, hot-air temperature and slice thickness) affect the dehydration rate and the temperature of the potato slices: the higher the dose, the greater the dehydration rate of the potato and the higher the temperature of the slices. (authors)

  7. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. 
In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

  8. Interpolating string field theories

    International Nuclear Information System (INIS)

    Zwiebach, B.

    1992-01-01

    This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles

  9. Effect of simultaneous infrared dry-blanching and dehydration on quality characteristics of carrot slices

    Science.gov (United States)

    This study investigated the effects of various processing parameters on carrot slices exposed to infrared (IR) radiation heating for achieving simultaneous infrared dry-blanching and dehydration (SIRDBD). The investigated parameters were product surface temperature, slice thickness and processing ti...

  10. Generalized Fourier slice theorem for cone-beam image reconstruction.

    Science.gov (United States)

    Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang

    2015-01-01

    The cone-beam reconstruction theory was proposed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956 and leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory and the Fourier slice theory of fan-beam geometry, the Fourier slice theorem in cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain, in which the value at the origin of Fourier space takes the indeterminate form 0/0, has been overcome; the 0/0 type of limit is properly handled. As examples, the implementation results for the single-circle and two-perpendicular-circle source orbits are shown. If an interpolation process is used in the cone-beam reconstruction, the number of calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to that of the filtered back-projection method, where N is the one-dimensional image size. However, the interpolation process can be avoided, in which case the number of calculations is O(N^5).
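The parallel-beam Fourier slice theorem underlying these generalizations is easy to verify numerically: the 1-D FFT of a projection equals the central line of the image's 2-D FFT. A minimal check with NumPy (a self-contained illustration, not the article's cone-beam algorithm):

```python
import numpy as np

# Numerical check of the parallel-beam Fourier slice theorem: the 1-D FFT
# of a projection equals the central (zero-frequency) line of the 2-D FFT.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

projection = image.sum(axis=0)            # parallel projection onto the x-axis
slice_1d = np.fft.fft(projection)         # 1-D transform of the projection
central_row = np.fft.fft2(image)[0, :]    # k_y = 0 line of the 2-D transform

assert np.allclose(slice_1d, central_row)
```

For projections at other angles the sampled Fourier line no longer falls on the Cartesian grid, which is exactly where the interpolation (and its O(N^4) cost) discussed above enters.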

  11. Image Interpolation with Contour Stencils

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise between artifacts like jagged edges, blurring, and overshoot (halo) artifacts. More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.

  12. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  13. Design and Development of a tomato Slicing Machine

    OpenAIRE

    Kamaldeen Oladimeji Salaudeen; Awagu E. F.

    2012-01-01

    The principle of slicing was reviewed and a tomato slicing machine was developed based on appropriate technology. Locally available materials such as wood, stainless steel and mild steel were used in the fabrication. The machine cuts tomatoes into slices 2 cm thick. Its capacity is 540.09 g per minute and its performance efficiency is 70%.

  14. Pixel Interpolation Methods

    OpenAIRE

    Mintěl, Tomáš

    2009-01-01

    This master's thesis deals with the acceleration of pixel interpolation methods using the GPU and the NVIDIA (R) CUDA TM architecture. The graphical output is a demonstration application that transforms an image or video using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video processing.

  15. Fan beam image reconstruction with generalized Fourier slice theorem.

    Science.gov (United States)

    Zhao, Shuangren; Yang, Kang; Yang, Kevin

    2014-01-01

    For parallel-beam geometry, Fourier reconstruction works via the Fourier slice theorem (also called the central slice theorem or projection slice theorem). For the fan-beam situation, the Fourier slice theorem can be extended to a generalized Fourier slice theorem (GFST) for fan-beam image reconstruction. We briefly introduced this method in a conference publication; this paper reintroduces the GFST method for fan-beam geometry in detail. The GFST method can be described as follows: the Fourier plane is filled by adding up the contributions from all fan-beam projections individually; the values in the Fourier plane are thereby calculated directly on Cartesian coordinates, avoiding the interpolation from polar to Cartesian coordinates in the Fourier domain; an inverse fast Fourier transform is then applied to the image in the Fourier plane, yielding the reconstructed image in the spatial domain. The reconstructed image is compared between the GFST method and the filtered backprojection (FBP) method. The major differences between the GFST and FBP methods are: (1) the interpolation is applied to different data sets - the GFST method interpolates the projection data, whereas the FBP method interpolates the filtered projection data; (2) the filtering is done in different places - the GFST method filters in the Fourier domain, whereas the FBP method applies the ramp filter to the projections. The resolution of the ramp filter varies with location, but the filter in the Fourier domain yields a resolution that is invariant with location. One advantage of the GFST method over the FBP method is that in the short-scan situation an exact solution can be obtained with the GFST method, but it cannot be obtained with the FBP method. The computational cost of both the GFST and FBP methods is O(N^3), where N is the number of pixels in one dimension.
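The parallel-beam Fourier slice theorem that the GFST generalizes is easy to verify numerically: the 1-D FFT of a projection equals the central slice of the image's 2-D FFT. A minimal sketch (not the authors' implementation; it uses only the zero-angle projection, where the "slice" is simply a row of the 2-D spectrum):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# Parallel-beam projection at angle 0: integrate (sum) along the y-axis.
projection = image.sum(axis=0)

# 1-D Fourier transform of the projection ...
proj_ft = np.fft.fft(projection)

# ... equals the ky = 0 central slice of the 2-D Fourier transform.
central_slice = np.fft.fft2(image)[0, :]

assert np.allclose(proj_ft, central_slice)
```

For other angles the slice runs through the origin of Fourier space at the projection angle, which is why resampling (or a method such as the GFST that avoids it) is needed to fill a Cartesian grid.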

  16. Fuzzy linguistic model for interpolation

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Adabitabar Firozja, M.

    2007-01-01

    In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolating real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method

  17. A disposition of interpolation techniques

    NARCIS (Netherlands)

    Knotters, M.; Heuvelink, G.B.M.

    2010-01-01

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists in choosing an appropriate technique a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

  18. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating the nearby non-edge pixels of each detected edge for possible re-classification as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields, using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs; decision bands with variable widths are thereby created on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, 2-D directionless or isotropic filtering is used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results clearly show that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low compared with existing methods; hence, it is fairly attractive for real-time image applications.

  19. Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data

    International Nuclear Information System (INIS)

    Casa, L D C; Krueger, P S

    2013-01-01

    Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
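A minimal sketch of Gaussian RBF interpolation of scattered 3-D data, in the spirit of the least-squares collocation described above (synthetic stand-in data; the shape parameter `eps` and the test function are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scattered 3-D sample points and values (stand-ins for PTV velocity data).
points = rng.random((40, 3))
values = np.sin(points[:, 0]) + points[:, 1] * points[:, 2]

eps = 4.0  # Gaussian shape parameter (a hypothetical choice)

def gaussian_rbf_matrix(a, b):
    # Pairwise Gaussian kernel exp(-(eps * r)^2) between point sets a and b.
    r2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-(eps ** 2) * r2)

# RBF weights from least-squares collocation at the data points.
weights = np.linalg.lstsq(gaussian_rbf_matrix(points, points), values,
                          rcond=None)[0]

def interpolate(query):
    # Evaluate the RBF expansion at arbitrary query points.
    return gaussian_rbf_matrix(query, points) @ weights

# With one RBF per sample, the interpolant reproduces the data exactly.
assert np.allclose(interpolate(points), values, atol=1e-5)
```

In the paper's setting, additional rows (boundary conditions, continuity constraints) are appended to the least-squares system rather than solving a square collocation problem as here.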

  20. Slice hyperholomorphic Schur analysis

    CERN Document Server

    Alpay, Daniel; Sabadini, Irene

    2016-01-01

    This book defines and examines the counterpart of Schur functions and Schur analysis in the slice hyperholomorphic setting. It is organized into three parts: the first introduces readers to classical Schur analysis, while the second offers background material on quaternions, slice hyperholomorphic functions, and quaternionic functional analysis. The third part represents the core of the book and explores quaternionic Schur analysis and its various applications. The book includes previously unpublished results and provides the basis for new directions of research.

  1. Interpolation for de-Dopplerisation

    Science.gov (United States)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising - - piecewise-polynomial, windowed-sinc, and B-spline-based - - are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
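The over-sampling remedy mentioned above can be illustrated with a small experiment (a sketch, not the paper's benchmark): linear-interpolation error on a smooth signal shrinks roughly quadratically with sampling rate, which is why over-sampling works but is expensive compared to a better kernel.

```python
import numpy as np

def resample_error(samples_per_period):
    # Sample one period of a sine, then linearly interpolate at off-grid
    # points (the midpoints, which are the worst case for linear kernels).
    t = np.arange(0, 1, 1 / samples_per_period)
    x = np.sin(2 * np.pi * t)
    t_query = t[:-1] + 0.5 / samples_per_period
    x_lin = np.interp(t_query, t, x)
    return np.max(np.abs(x_lin - np.sin(2 * np.pi * t_query)))

# Quadrupling the sampling rate cuts the error by roughly a factor of 16.
e8, e32 = resample_error(8), resample_error(32)
assert e32 < e8 / 10
```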

  2. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Janusz Konrad

    2009-01-01

    Full Text Available View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  3. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Ince Serdar

    2008-01-01

    Full Text Available Abstract View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  4. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation of data on a rectangular mesh which preserves the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating-surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data

  5. The research on NURBS adaptive interpolation technology

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

    2017-04-01

    To address the problems of NURBS adaptive interpolation technology, such as long interpolation times, complicated calculations, and a NURBS curve step error that is not easily controlled, this paper proposes and simulates an adaptive interpolation algorithm for NURBS curves. The NURBS adaptive interpolation computes the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems and that the algorithm is correct.

  6. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    G. Garnero

    2014-01-01

    In the present study different algorithms are analysed in order to identify an optimal interpolation methodology. The availability of the recent digital model produced by the Regione Piemonte with airborne LIDAR, the presence of test sections realized at higher resolutions and the presence of independent digital models on the same territory allow a series of analyses to be set up, with consequent determination of the best interpolation methodologies. The analysis of the residuals on the test sites allows calculation of the descriptive statistics of the computed values: all the algorithms furnished interesting results; most notably for dense models, the IDW (Inverse Distance Weighting) algorithm gave the best results in this study case. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density below which the quality of the final output degrades in the interpolation phase.
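For reference, a minimal sketch of the IDW (Inverse Distance Weighting) estimator compared in the study, on a hypothetical 2-D point set (the power and the data are illustrative, not from the paper):

```python
import numpy as np

def idw(known_xy, known_z, query_xy, power=2.0):
    # Inverse Distance Weighting: weights fall off as 1 / distance^power,
    # so each estimate is a convex combination of the known values.
    d = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=-1)
    d = np.maximum(d, 1e-12)          # guard against division by zero
    w = 1.0 / d ** power
    return (w * known_z).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.0, 2.0, 3.0, 4.0])

# The centre of the square is equidistant from all four points,
# so IDW returns the plain mean of the data there.
est = idw(pts, z, np.array([[0.5, 0.5]]))
assert np.isclose(est[0], z.mean())
```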

  7. Interpolation in Spaces of Functions

    Directory of Open Access Journals (Sweden)

    K. Mosaleheh

    2006-03-01

    Full Text Available In this paper we consider interpolation by certain functions, such as trigonometric and rational functions, for a finite-dimensional linear space X. We then extend this to infinite-dimensional linear spaces

  8. Study of Energy Consumption of Potato Slices During Drying Process

    Directory of Open Access Journals (Sweden)

    Hafezi Negar

    2015-06-01

    Full Text Available One of the new methods of food drying, infrared heating under vacuum, increases the drying rate and maintains the quality of the dried product. In this study, potato slices were dried using a vacuum-infrared dryer. Experiments were performed at infrared lamp power levels of 100, 150 and 200 W, absolute pressure levels of 20, 80, 140 and 760 mmHg, and slice thicknesses of 1, 2 and 3 mm, in three repetitions. The results showed that infrared lamp power, absolute pressure and slice thickness have important effects on the drying of potato. With increasing radiation power, decreasing absolute pressure (i.e., stronger vacuum in the dryer chamber) and decreasing potato slice thickness, the drying time and the amount of energy consumed are reduced. Regarding thermal utilization efficiency, the results indicated that thermal efficiency increased with increasing infrared radiation power and decreasing absolute pressure.

  9. The virtual slice setup.

    Science.gov (United States)

    Lytton, William W; Neymotin, Samuel A; Hines, Michael L

    2008-06-30

    In an effort to design a simulation environment that is more similar to that of neurophysiology, we introduce a virtual slice setup in the NEURON simulator. The virtual slice setup runs continuously and permits parameter changes, including changes to synaptic weights and time course and to intrinsic cell properties. The virtual slice setup permits shocks to be applied at chosen locations and activity to be sampled intra- or extracellularly from chosen locations. By default, a summed population display is shown during a run to indicate the level of activity, and no states are saved. Simulations can run for hours of model time, so it is not practical to save all of the state variables. These, in any case, are primarily of interest at discrete times when experiments are being run: the simulation can be stopped momentarily at such times to save activity patterns. The virtual slice setup maintains an automated notebook recording shocks and parameter changes as well as user comments. We demonstrate how interaction with a continuously running simulation encourages experimental prototyping and can suggest additional dynamical features such as ligand wash-in and wash-out, alternatives to typical instantaneous parameter changes. The virtual slice setup currently uses event-driven cells and runs at approximately 2 min/h on a laptop.

  10. Permanently calibrated interpolating time counter

    International Nuclear Information System (INIS)

    Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

    2015-01-01

    We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)

  11. Portable Device Slices Thermoplastic Prepregs

    Science.gov (United States)

    Taylor, Beverly A.; Boston, Morton W.; Wilson, Maywood L.

    1993-01-01

    Prepreg slitter designed to slit various widths rapidly by use of slicing bar holding several blades, each capable of slicing strip of preset width in single pass. Produces material evenly sliced and does not contain jagged edges. Used for various applications in such batch processes involving composite materials as press molding and autoclaving, and in such continuous processes as pultrusion. Useful to all manufacturers of thermoplastic composites, and in slicing B-staged thermoset composites.

  12. The interpolation method based on endpoint coordinate for CT three-dimensional image

    International Nuclear Information System (INIS)

    Suto, Yasuzo; Ueno, Shigeru.

    1997-01-01

    Image interpolation is frequently used to improve slice resolution so that it approaches the in-plane spatial resolution; improved quality of reconstructed three-dimensional images can be attained with this technique as a result. Linear interpolation is a well-known and widely used method. The distance-image method, a non-linear interpolation technique, is also used, converting CT-value images to distance images. This paper describes a newly developed method that makes use of endpoint coordinates: CT-value images are first converted to binary images by thresholding, and sequences of 1-valued pixels arranged in the vertical or horizontal direction are identified. Each such sequence is defined as a line segment with a start point and an end point. For each pair of adjacent line segments, another line segment is composed by spatial interpolation of the start and end points. Binary slice images are constructed from the composed line segments. Three-dimensional images were reconstructed from clinical X-ray CT images using the three different interpolation methods, and their quality and processing speed were evaluated and compared. (author)
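The endpoint-coordinate idea can be sketched for the simplest case of one run of 1-pixels per binary row; a middle row is composed by averaging the start and end points of corresponding runs (an illustration under that single-run assumption, not the authors' implementation):

```python
import numpy as np

def run_endpoints(row):
    # Start and end indices of the (single) run of 1-pixels in a binary row.
    idx = np.flatnonzero(row)
    return idx[0], idx[-1]

def interpolate_row(row_a, row_b):
    # Endpoint-coordinate interpolation: average the start and end points
    # of the corresponding runs, then fill the new run with 1s.
    (s0, e0), (s1, e1) = run_endpoints(row_a), run_endpoints(row_b)
    s, e = round((s0 + s1) / 2), round((e0 + e1) / 2)
    out = np.zeros_like(row_a)
    out[s:e + 1] = 1
    return out

a = np.array([0, 1, 1, 1, 0, 0, 0, 0])   # run from index 1 to 3
b = np.array([0, 0, 0, 1, 1, 1, 1, 0])   # run from index 3 to 6
mid = interpolate_row(a, b)
assert run_endpoints(mid) == (2, 4)       # endpoints averaged
```

Unlike linear interpolation of grey values, this interpolates the object's boundary coordinates, so the intermediate contour stays sharp.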

  13. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.

  14. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
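As a concrete instance of cubic convolution, Keys' kernel with a = -0.5 can be implemented and checked in a few lines (a sketch following the standard signal-processing convention; not necessarily one of the actuarial schemes the note discusses):

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    # Keys' cubic convolution kernel; a = -0.5 gives third-order accuracy.
    x = np.abs(np.asarray(x, dtype=float))
    out = np.zeros_like(x)
    near = x <= 1
    far = (x > 1) & (x < 2)
    out[near] = (a + 2) * x[near] ** 3 - (a + 3) * x[near] ** 2 + 1
    out[far] = a * x[far] ** 3 - 5 * a * x[far] ** 2 + 8 * a * x[far] - 4 * a
    return out

def cubic_interp(samples, t):
    # Interpolate uniformly spaced samples at position t using the four
    # nearest samples (valid for 1 <= t <= len(samples) - 3).
    i = int(np.floor(t))
    offs = np.arange(-1, 3)
    return float(np.dot(samples[i + offs], keys_kernel(t - (i + offs))))

sig = np.array([0.0, 1.0, 4.0, 9.0, 16.0, 25.0])   # samples of n**2
# The kernel interpolates: at sample positions it returns the samples.
assert np.isclose(cubic_interp(sig, 3.0), 9.0)
# With a = -0.5 it reproduces quadratics exactly: 2.5**2 = 6.25.
assert np.isclose(cubic_interp(sig, 2.5), 6.25)
```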

  15. Node insertion in Coalescence Fractal Interpolation Function

    International Nuclear Information System (INIS)

    Prasad, Srijanani Anurag

    2013-01-01

    The Iterated Function System (IFS) used in the construction of a Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point into a given set of interpolation data is called the problem of node insertion. In this paper, the effect of inserting a new point on the related IFS and on the Coalescence Fractal Interpolation Function is studied. The smoothness and fractal dimension of a CHFIF obtained after node insertion are also discussed

  16. Bayer Demosaicking with Polynomial Interpolation.

    Science.gov (United States)

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process that reconstructs full-color digital images from the incomplete color samples output by an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g., mobile phones and tablets). In this paper, we introduce a new demosaicking algorithm, polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted-sum strategy. Our new predictors are generated on the basis of polynomial interpolation and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. We show how our predictors can be combined according to the proposed edge classifier. After populating the three color channels, a refinement stage is applied to enhance image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB ΔE, and FSIM) and visual performance.
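The bilinear baseline mentioned above can be sketched as normalized convolution over an assumed RGGB Bayer layout (an illustration, not the PID algorithm): each channel is filled in by averaging its known samples over each pixel's neighbourhood.

```python
import numpy as np

def box_sum(a):
    # Sum over each pixel's 3x3 neighbourhood (zero padding at the border).
    p = np.pad(a, 1)
    h, w = a.shape
    return sum(p[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1))

def demosaic_bilinear(mosaic):
    # Bilinear demosaicking of an RGGB Bayer mosaic via normalised
    # convolution: known-sample sum divided by known-sample count.
    h, w = mosaic.shape
    yy, xx = np.mgrid[0:h, 0:w]
    masks = {'R': (yy % 2 == 0) & (xx % 2 == 0),
             'G': (yy + xx) % 2 == 1,
             'B': (yy % 2 == 1) & (xx % 2 == 1)}
    out = np.empty((h, w, 3))
    for i, c in enumerate('RGB'):
        m = masks[c].astype(float)
        out[..., i] = box_sum(mosaic * m) / box_sum(m)
    return out

# Sanity check: a flat grey scene survives the round trip exactly,
# since its Bayer mosaic is the flat image itself.
flat = np.full((8, 8), 0.5)
rec = demosaic_bilinear(flat)
assert np.allclose(rec, 0.5)
```

On real images this baseline blurs edges and produces the color-fringing artifacts that edge-classifying methods such as PID aim to suppress.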

  17. NMR surprizes with thin slices and strong gradients

    Energy Technology Data Exchange (ETDEWEB)

    Gaedke, Achim; Kresse, Benjamin [Institute of Condensed Matter Physics, Technische Universitaet Darmstadt (Germany); Nestle, Nikolaus

    2008-07-01

    In the context of our work on diffusion-relaxation coupling in thin excited slices, we perform NMR experiments in static magnetic field gradients of up to 200 T/m. For slice thicknesses in the range of 10 µm, the frequency bandwidth of the excited slices becomes sufficiently narrow that free induction decays (FIDs) become observable despite the presence of the strong static gradient. The observed FIDs were also simulated using standard methods from MRI physics. Possible effects of diffusion during the FID are still minor at this slice thickness in water but might become dominant for thinner slices or more diffusive media. Furthermore, the detailed excitation structure of the RF pulses was studied in profiling experiments over the edge of a plane liquid cell. Side-lobe effects on the slices are discussed along with approaches to control them. The spatial resolution achieved in the profiling experiments furthermore allows the identification of thermal expansion phenomena in the NMR magnet. Measures to reduce the temperature drift problems are presented.

  18. Slice sensitivity profiles and pixel noise of multi-slice CT in comparison with single-slice CT

    International Nuclear Information System (INIS)

    Schorn, C.; Obenauer, S.; Funke, M.; Hermann, K.P.; Kopka, L.; Grabbe, E.

    1999-01-01

    Purpose: Presentation and evaluation of the slice sensitivity profile and pixel noise of multi-slice CT in comparison to single-slice CT. Methods: Slice sensitivity profiles and pixel noise of a multi-slice CT scanner equipped with a 2D matrix detector array and of a single-slice CT scanner were evaluated in phantom studies. Results: For single-slice CT, the width of the slice sensitivity profile increased with increasing pitch. In spite of a much higher table speed, the slice sensitivity profiles of multi-slice CT were narrower and did not widen at higher pitch. Noise in single-slice CT was independent of pitch. For multi-slice CT, noise increased at higher pitch and, at the higher pitch, decreased slightly with higher detector-row collimation. Conclusions: Multi-slice CT provides superior z-resolution and higher volume coverage speed. These qualities fulfill one of the prerequisites for improved 3D postprocessing. (orig.)

  19. Scanning and contrast enhancement protocols for multi-slice CT in evaluation of the upper abdomen

    International Nuclear Information System (INIS)

    Awai, Kazuo; Onishi, Hiromitsu; Takada, Koichi; Yamaguchi, Yasuo; Eguchi, Nobuko; Hiraishi, Kumiko; Hori, Shinichi

    2000-01-01

    The advent of multi-slice CT is one of the quantum leaps in computed tomography since the introduction of helical CT. Multi-slice CT can rapidly scan a large longitudinal (z-axis) volume with high longitudinal resolution and low image artifacts. The rapid volume coverage of multi-slice CT can make it more difficult to optimize the delay time between the beginning of contrast material injection and the acquisition of images, so accurate knowledge of the optimal temporal window for adequate contrast enhancement is needed. The high z-axis resolution of multi-slice CT can improve the quality of three-dimensional and MPR images, and an adequate slice thickness and slice interval must be selected in each case. We discuss basic considerations for adequate contrast enhancement and scanning protocols with a multi-slice CT scanner in the upper abdomen. (author)

  20. Improving the visualization of electron-microscopy data through optical flow interpolation

    KAUST Repository

    Carata, Lucian

    2013-01-01

    Technical developments in neurobiology have reached a point where the acquisition of high resolution images representing individual neurons and synapses becomes possible. For this, the brain tissue samples are sliced using a diamond knife and imaged with electron microscopy (EM). However, the technique achieves a low resolution in the cutting direction, due to limitations of the mechanical process, making a direct visualization of a dataset difficult. We aim to increase the depth resolution of the volume by adding new image slices interpolated from the existing ones, without requiring modifications to the EM image-capturing method. As classical interpolation methods do not provide satisfactory results on this type of data, the current paper proposes a re-framing of the problem in terms of motion volumes, considering the depth axis as a temporal axis. An optical flow method is adapted to estimate the motion vectors of pixels in the EM images, and this information is used to compute and insert multiple new images at certain depths in the volume. We evaluate the visualization results in comparison with interpolation methods currently used on EM data, transforming the highly anisotropic original dataset into a dataset with a larger depth resolution. The interpolation based on optical flow better reveals neurite structures with realistic undistorted shapes, and makes neuronal connections easier to map. © 2011 ACM.
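The flow-based insertion of an intermediate slice can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a precomputed per-pixel flow field mapping one slice onto the next (which the paper estimates with an adapted optical flow method) and uses nearest-neighbour sampling for brevity.

```python
import numpy as np

def interpolate_slice(slice_a, slice_b, flow, t=0.5):
    """Synthesize an intermediate slice at fractional depth t in [0, 1].

    flow[y, x] = (dy, dx) maps pixels of slice_a onto slice_b -- the
    motion field an optical-flow method estimates once the depth axis
    is treated as a temporal axis. Both neighbouring slices are warped
    toward depth t and blended, so structures travel along the flow
    vectors instead of simply cross-fading.
    """
    h, w = slice_a.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Backward warping: the pixel at (y, x) in the new slice originated
    # near (y, x) - t*flow in slice_a and (y, x) + (1 - t)*flow in slice_b.
    ya = np.clip(np.rint(ys - t * flow[..., 0]), 0, h - 1).astype(int)
    xa = np.clip(np.rint(xs - t * flow[..., 1]), 0, w - 1).astype(int)
    yb = np.clip(np.rint(ys + (1 - t) * flow[..., 0]), 0, h - 1).astype(int)
    xb = np.clip(np.rint(xs + (1 - t) * flow[..., 1]), 0, w - 1).astype(int)
    # Nearest-neighbour sampling keeps the sketch short; bilinear
    # sampling would be used in practice.
    return (1 - t) * slice_a[ya, xa] + t * slice_b[yb, xb]
```

With a uniform rightward flow of 4 pixels, a feature at column 5 in one slice and column 9 in the next appears at column 7 in the t = 0.5 slice, instead of ghosting at both locations as plain averaging would produce.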

  1. Precipitation interpolation in mountainous areas

    Science.gov (United States)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all, simply averaging all 60 stations for each day, actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly the cross-validation technique reduces the effective gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
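The cross-validation comparison can be sketched as below. This is an illustrative leave-one-out setup on hypothetical coordinates and daily values, pitting plain inverse distance weighting against the 'no interpolation' all-station-mean benchmark; it is not the study's code, and the station layout and power parameter are invented for the example.

```python
import numpy as np

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse-distance-weighted estimate at one target location."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(np.sum(w * values) / np.sum(w))

def loo_errors(xy, daily, power=2.0):
    """Leave-one-out RMSEs for IDW vs. the all-station-mean benchmark.

    xy: (stations, 2) coordinates; daily: (days, stations) precipitation.
    Each station is withheld in turn and predicted from the rest.
    """
    n = xy.shape[0]
    err_idw, err_mean = [], []
    for day in daily:
        for i in range(n):
            rest = np.arange(n) != i
            err_idw.append(idw(xy[rest], day[rest], xy[i], power) - day[i])
            err_mean.append(day[rest].mean() - day[i])
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rmse(err_idw), rmse(err_mean)
```

On a smooth synthetic field IDW beats the mean benchmark, which highlights how striking the study's opposite finding on real daily precipitation is.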

  2. Potential problems with interpolating fields

    Energy Technology Data Exchange (ETDEWEB)

    Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-11-15

    A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)

  3. Strip interpolation in silicon and germanium strip detectors

    International Nuclear Information System (INIS)

    Wulf, E. A.; Phlips, B. F.; Johnson, W. N.; Kurfess, J. D.; Lister, C. J.; Kondev, F.; Physics; Naval Research Lab.

    2004-01-01

    The position resolution of double-sided strip detectors is limited by the strip pitch, and a reduction in strip pitch necessitates more electronics. Improved position resolution would improve the imaging capabilities of Compton telescopes and PET detectors. Digitizing the preamplifier waveform yields more information than can be extracted with regular shaping electronics. In addition to the energy, depth of interaction, and which strip was hit, the digitized preamplifier signals can locate the interaction position to less than the strip pitch of the detector by looking at induced signals in neighboring strips. This allows the position of the interaction to be interpolated in three dimensions and improves the imaging capabilities of the system. In a 2 mm thick silicon strip detector with a strip pitch of 0.891 mm, strip interpolation located the interaction of 356 keV gamma rays to 0.3 mm FWHM. In a 2 cm thick germanium detector with a strip pitch of 5 mm, strip interpolation of 356 keV gamma rays yielded a position resolution of 1.5 mm FWHM.

  4. Interpolation of rational matrix functions

    CERN Document Server

    Ball, Joseph A; Rodman, Leiba

    1990-01-01

    This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...

  5. Interpolation method by whole body computed tomography, Artronix 1120

    International Nuclear Information System (INIS)

    Fujii, Kyoichi; Koga, Issei; Tokunaga, Mitsuo

    1981-01-01

    Reconstruction of whole-body CT images by an interpolation method with rapid scanning was investigated. An Artronix 1120 with a fixed collimator was used to obtain CT images every 5 mm. The X-ray source was circularly movable to keep the beam perpendicular to the detector. A length of 150 mm was scanned in about 15 min with a slice width of 5 mm. The images were reproduced every 7.5 mm, which could be reduced to every 1.5 mm when necessary. Out of 420 inspections of the chest, abdomen, and pelvis, 5 representative cases for which this method was valuable are described. The cases were fibrous histiocytoma of the upper mediastinum, left adrenal adenoma, left ureter fibroma, recurrence of colon cancer in the pelvis, and abscess around the rectum. This method improved the image quality of lesions in the vicinity of the ureters, main artery, and rectum. The time required and the exposure dose were reduced to 50% by this method. (Nakanishi, T.)

  6. Evaluation of various interpolants available in DICE

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.
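What an interpolant supplies to the correlation optimizer — an intensity value and its x/y gradients at a sub-pixel location — can be illustrated with the simplest possible basis. The bilinear example below is a sketch only; DICe's actual interpolants (e.g. Keys cubic) are higher order and are not reproduced here.

```python
import numpy as np

def bilinear(img, x, y):
    """Intensity and (dI/dx, dI/dy) at the non-pixel location (x, y).

    Assumes (x, y) lies strictly inside the image so that the 2x2
    neighbourhood exists. The derivatives are the analytic gradients
    of the bilinear surface, the quantities that guide the optimizer.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    p00, p10 = img[y0, x0], img[y0, x0 + 1]
    p01, p11 = img[y0 + 1, x0], img[y0 + 1, x0 + 1]
    value = (p00 * (1 - fx) * (1 - fy) + p10 * fx * (1 - fy)
             + p01 * (1 - fx) * fy + p11 * fx * fy)
    dx = (p10 - p00) * (1 - fy) + (p11 - p01) * fy   # dI/dx
    dy = (p01 - p00) * (1 - fx) + (p11 - p10) * fx   # dI/dy
    return value, dx, dy
```

For the 2x2 patch [[0, 1], [2, 3]], the centre (0.5, 0.5) evaluates to 1.5 with gradients (1, 2), matching the row and column steps of the patch.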

  7. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be realized using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation, and so on. In this paper, the influences of piecewise linear, quadratic, and cubic spline interpolation on the data synchronization of electronic transformers are computed; then the computational complexity, synchronization precision, reliability, and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
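The kind of comparison the paper performs can be sketched numerically: resample a power-frequency waveform onto time instants that fall between samples (the alignment task when channels must share a common time base) and measure the error of two of the interpolation orders. The sample rate, signal frequency, and fractional shift below are illustrative values, not the paper's test conditions; cubic splines are omitted for brevity.

```python
import numpy as np

# A 50 Hz waveform sampled at 4 kHz, to be re-evaluated at instants
# shifted by 0.4 of a sampling period.
fs, f = 4000.0, 50.0
t = np.arange(64) / fs
x = np.sin(2 * np.pi * f * t)
tq = t[1:-2] + 0.4 / fs                 # target instants between samples

# Piecewise-linear interpolation.
lin = np.interp(tq, t, x)

# Three-point Lagrange quadratic through samples i-1, i, i+1,
# evaluated at fractional offset u from sample i (nodes -1, 0, 1).
i = np.arange(1, len(t) - 2)
u = (tq - t[i]) * fs                    # here u = 0.4 everywhere
quad = (x[i - 1] * u * (u - 1) / 2
        + x[i] * (1 - u * u)
        + x[i + 1] * (u + 1) * u / 2)

truth = np.sin(2 * np.pi * f * tq)
err_lin = np.max(np.abs(lin - truth))
err_quad = np.max(np.abs(quad - truth))
```

As expected from the error orders O(h^2) vs. O(h^3), the quadratic scheme is markedly more accurate on this smooth signal, at the cost of one extra sample and more arithmetic per output point.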

  8. The time slice system

    International Nuclear Information System (INIS)

    DeWitt, J.

    1990-01-01

    We have designed a fast readout system for silicon microstrip detectors which could be used at HERA, LHC, and SSC. The system consists of an analog amplifier-comparator chip (AACC) and a digital time slice chip (DTSC). The analog chip is designed in dielectrically isolated bipolar technology for low noise and potential radiation hardness. The DTSC is built in CMOS for low power use and high circuit density. The main implementation aims are low power consumption and compactness. The architectural goals are automatic data reduction and ease of external interface. The pipelining of event information is done digitally in the DTSC. It has a 64-word-deep level 1 buffer acting as a FIFO, and a 16-word-deep level 2 buffer acting as a dequeue. The DTSC also includes an asynchronous bus interface. We are first building a scaled-up (100 μm instead of 25 μm pitch) and slower (10 MHz instead of 60 MHz) version in 2 μm CMOS and plan to test the principle of operation of this system in the Leading Proton Spectrometer (LPS) of the ZEUS detector at HERA. Another very important development will be tested there: the radiation hardening of the chips. We have started a collaboration with a rad-hard foundry and with Los Alamos National Laboratories to test and evaluate rad-hard processes and the final rad-hard product. Initial data are very promising, because radiation resistance of up to many Mrad has been achieved. (orig.)

  9. Dosimetric variation due to CT inter-slice spacing in four-dimensional carbon beam lung therapy

    International Nuclear Information System (INIS)

    Kumagai, Motoki; Mori, Shinichiro; Kandatsu, Susumu; Baba, Masayuki; Sharp, Gregory C; Asakura, Hiroshi; Endo, Masahiro

    2009-01-01

    When CT data with a large slice thickness are used in treatment planning, geometrical uncertainty may induce dosimetric errors. We evaluated carbon ion dose variations due to different CT slice thicknesses using a four-dimensional (4D) carbon ion beam dose calculation, and compared results between ungated and gated respiratory strategies. Seven lung patients were scanned in 4D mode with a 0.5 mm slice thickness using a 256-multi-slice CT scanner. CT images were averaged with various numbers of images to simulate reconstructed images with various slice thicknesses (0.5-5.0 mm). Two scenarios were studied (respiratory-ungated and -gated strategies). Range compensators were designed for each of the CT volumes with coarse inter-slice spacing to cover the internal target volume (ITV), as defined from 4DCT. Carbon ion dose distribution was computed for each resulting ITV on the 0.5 mm slice 4DCT data. The accumulated dose distribution was then calculated using deformable registration for 4D dose assessment. The magnitude of over- and under-dosage was found to be larger with range compensators designed with a coarser inter-slice spacing than with those obtained at a 0.5 mm slice thickness. Although no under-dosage was observed within the clinical target volume (CTV) region, D95 remained at over 97% of the prescribed dose for the ungated strategy and 95% for the gated strategy for all slice thicknesses. An inter-slice spacing of less than 3 mm may be able to minimize dose variation between the ungated and gated strategies. Although volumes with increased inter-slice spacing may reduce geometrical accuracy at a certain respiratory phase, this does not significantly affect delivery of the accumulated dose to the target during the treatment course.

  10. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

    Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are surveyed first, but their interpolation effects need to be further improved. When analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed. Compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have a relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation; however, they are very time-consuming. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations offer a certain superiority; but considering processing efficiency, the asymmetrical interpolations are better.
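The symmetric cubic B-spline kernel singled out above can be written down directly. A minimal sketch follows; note that, because this kernel does not pass through the sample values, true B-spline interpolation additionally requires a prefiltering step that is omitted here.

```python
import numpy as np

def cubic_bspline(s):
    """Symmetric cubic B-spline kernel B3(s), with support |s| < 2.

    B3(s) = (4 - 6 s^2 + 3 |s|^3) / 6          for |s| < 1
          = (2 - |s|)^3 / 6                    for 1 <= |s| < 2
          = 0                                  otherwise
    The integer shifts of B3 form a partition of unity, which keeps
    flat image regions exactly flat after resampling.
    """
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near = s < 1
    far = (s >= 1) & (s < 2)
    out[near] = (4 - 6 * s[near] ** 2 + 3 * s[near] ** 3) / 6
    out[far] = (2 - s[far]) ** 3 / 6
    return out
```

At the origin the kernel evaluates to 2/3 rather than 1 — the smoothing that makes the prefilter necessary — while the four shifted copies covering any inter-sample position always sum to exactly 1.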

  11. [Standardization of production of process Notopterygii Rhizoma et Radix slices].

    Science.gov (United States)

    Sun, Zhen-Yang; Wang, Ying-Zi; Nie, Rui-Jie; Zhang, Jing-Zhen; Wang, Si-Yu

    2017-12-01

    Notopterol, isoimperatorin, volatile oil and extract (water and ethanol) were used as the research objects in this study to investigate the effects of different softening method, slice thickness and drying methods on the quality of Notopterygii Rhizoma et Radix slices, and the experimental data were analyzed by homogeneous distance evaluation method. The results showed that different softening, cutting and drying processes could affect the content of five components in Notopterygii Rhizoma et Radix incisum. The best processing technology of Notopterygii Rhizoma et Radix slices was as follows: non-medicinal parts were removed; mildewed and rot as well as moth-eaten parts were removed; washed by the flowing drinking water; stacked in the drug pool; moistening method was used for softening, where 1/8 volume of water was sprayed for every 1 kg of herbs every 2 h; upper part of herbs covered with clean and moist cotton, and cut into thick slices (2-4 mm) after 12 h moistening until appropriate softness, then received blast drying for 4 h at 50 ℃, and turned over for 2 times during the drying. The process is practical and provides the experimental basis for the standardization of the processing of Notopterygii Rhizoma et Radix, with great significance to improve the quality of Notopterygii Rhizoma et Radix slices. Copyright© by the Chinese Pharmaceutical Association.

  12. Cardiac tissue slices: preparation, handling, and successful optical mapping.

    Science.gov (United States)

    Wang, Ken; Lee, Peter; Mirams, Gary R; Sarathchandra, Padmini; Borg, Thomas K; Gavaghan, David J; Kohl, Peter; Bollensdorff, Christian

    2015-05-01

    Cardiac tissue slices are becoming increasingly popular as a model system for cardiac electrophysiology and pharmacology research and development. Here, we describe in detail the preparation, handling, and optical mapping of transmembrane potential and intracellular free calcium concentration transients (CaT) in ventricular tissue slices from guinea pigs and rabbits. Slices cut in the epicardium-tangential plane contained well-aligned in-slice myocardial cell strands ("fibers") in subepicardial and midmyocardial sections. Cut with a high-precision slow-advancing microtome at a thickness of 350 to 400 μm, tissue slices preserved essential action potential (AP) properties of the precutting Langendorff-perfused heart. We identified the need for a postcutting recovery period of 36 min (guinea pig) and 63 min (rabbit) to reach 97.5% of final steady-state values for AP duration (APD) (identified by exponential fitting). There was no significant difference between the postcutting recovery dynamics in slices obtained using 2,3-butanedione 2-monoxime or blebbistatin as electromechanical uncouplers during the cutting process. A rapid increase in APD, seen after cutting, was caused by exposure to ice-cold solution during the slicing procedure, not by tissue injury, differences in uncouplers, or pH-buffers (bicarbonate; HEPES). To characterize intrinsic patterns of CaT, AP, and conduction, a combination of multipoint and field stimulation should be used to avoid misinterpretation based on source-sink effects. In summary, we describe in detail the preparation, mapping, and data analysis approaches for reproducible cardiac tissue slice-based investigations into AP and CaT dynamics. Copyright © 2015 the American Physiological Society.

  13. Flat slices in Minkowski space

    Science.gov (United States)

    Murchadha, Niall Ó.; Xie, Naqing

    2015-03-01

    Minkowski space, flat spacetime, with a distance measure in natural units of ds² = -dt² + dx² + dy² + dz², or equivalently, with spacetime metric diag(-1, +1, +1, +1), is recognized as a fundamental arena for physics. The Poincaré group, the set of all rigid spacetime rotations and translations, is the symmetry group of Minkowski space. The action of this group preserves the form of the spacetime metric. Each t = constant slice of each preferred coordinate system is flat. We show that there are also nontrivial non-singular representations of Minkowski space with complete flat slices. If the embedding of the flat slices decays appropriately at infinity, the only flat slices are the standard ones. However, if we remove the decay condition, we find non-trivial flat slices with non-vanishing extrinsic curvature. We write out explicitly the coordinate transformation to a frame with such slices.

  14. Imaging skeletal anatomy of injured cervical spine specimens: comparison of single-slice vs multi-slice helical CT

    Energy Technology Data Exchange (ETDEWEB)

    Obenauer, S.; Alamo, L.; Herold, T.; Funke, M.; Kopka, L.; Grabbe, E. [Department of Radiology, Georg August-University Goettingen, Robert-Koch-Strasse 40, 37075 Goettingen (Germany)

    2002-08-01

    Our objective was to compare a single-slice CT (SS-CT) scanner with a multi-slice CT (MS-CT) scanner in the depiction of osseous anatomic structures and fractures of the upper cervical spine. Two cervical spine specimens with artificial trauma were scanned with a SS-CT scanner (HighSpeed, CT/i, GE, Milwaukee, Wis.) by using various collimations (1, 3, 5 mm) and pitch factors (1, 1.5, 2, 3) and a four-slice helical CT scanner (LightSpeed, QX/i, GE, Milwaukee, Wis.) by using various table speeds ranging from 3.75 to 15 mm/rotation for a pitch of 0.75 and from 7.5 to 30 mm/rotation for a pitch of 1.5. Images were reconstructed with an interval of 1 mm. Sagittal and coronal multiplanar reconstructions of the primary and reconstructed data set were performed. For MS-CT a tube current resulting in equivalent image noise as with SS-CT was used. All images were judged by two observers using a 4-point scale. The best image quality for SS-CT was achieved with the smallest slice thickness (1 mm) and a pitch smaller than 2, resulting in a table speed of up to 2 mm per gantry rotation (4 points). A reduction of the slice thickness rather than of the table speed proved to be beneficial at MS-CT. Therefore, the optimal scan protocol in MS-CT included a slice thickness of 1.25 mm with a table speed of 7.5 mm per 360° rotation using a pitch of 1.5 (4 points), resulting in a faster scan time than when a pitch of 0.75 (4 points) was used. This study indicates that MS-CT could provide equivalent image quality at approximately four times the volume coverage speed of SS-CT. (orig.)

  15. Differential Interpolation Effects in Free Recall

    Science.gov (United States)

    Petrusic, William M.; Jamieson, Donald G.

    1978-01-01

    Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…

  16. Transfinite C2 interpolant over triangles

    International Nuclear Information System (INIS)

    Alfeld, P.; Barnhill, R.E.

    1984-01-01

    A transfinite C² interpolant on a general triangle is created. The required data are essentially C², no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C² interpolants in an appendix.

  17. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity-planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the curve parameter of the NURBS parametric representation. The velocity-planning scheme is then combined with NURBS curve interpolation. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
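The second-order Taylor parameter update that such interpolators rely on can be sketched generically. For a parametric curve C(u) traversed at constant feedrate V with interpolation period T, the chain rule gives u' = V/|C'(u)| and u'' = -V² (C'·C'')/|C'|⁴, so the next parameter value is u + u'T + ½u''T². The derivative callables below stand in for the NURBS curve's actual derivatives; this is a sketch of the update rule, not the paper's algorithm.

```python
import numpy as np

def taylor2_step(u, V, T, dC, ddC):
    """Advance the curve parameter one interpolation period by a
    second-order Taylor expansion (constant feedrate V, period T).

    dC, ddC: callables returning the first and second derivative
    vectors C'(u) and C''(u) of the parametric curve.
    """
    c1, c2 = dC(u), ddC(u)
    n = np.linalg.norm(c1)               # |C'(u)|
    du = V / n                           # u' for constant feedrate
    ddu = -V**2 * np.dot(c1, c2) / n**4  # u'' (V constant)
    return u + du * T + 0.5 * ddu * T**2
```

On a circle of radius R the speed |C'| is constant, so the update reduces to exactly VT/R per period, a convenient sanity check for an implementation.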

  18. An Improved Rotary Interpolation Based on FPGA

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2014-08-01

    Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. It was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: first, fewer arithmetic terms are needed per interpolation operation; second, the computing time is only two clock cycles of the FPGA. Simulations and actual tests prove the high accuracy and efficiency of the algorithm, showing that it is highly suited for real-time applications.

  19. Demonstration of the pulmonary interlobar fissures on multiplanar reformatted images with 64-slices spiral CT

    International Nuclear Information System (INIS)

    Wang Yafei; Chen Yerong; Shan Xiuhong; Tang Zhiyang; Ni Enzhen; Huang Hao; Wu Shuchun

    2009-01-01

    Objective: To determine the optimal orientation and slice thickness of reformatted images to visualize the interlobar fissures on multiplanar reformation (MPR) images and to recommend an MPR imaging protocol for visualizing interlobar fissures in clinical practice. Methods: 64-slice CT scans of the chest were obtained in 300 patients without pulmonary diseases. Axial, sagittal and coronal images were reformatted at 1, 2, 3, and 7 mm slice thickness respectively from the raw volume data. Three experienced radiologists evaluated all of the MPR images in the lung window and compared the differences in visualization of the interlobar fissures among the three reformatted orientations and at the different slice thicknesses with the Fisher test and Friedman test. Results: Fissures on sagittal MPR images using 1, 2, 3, and 7 mm reformatted slice thickness appeared as a fine line, and the preference value analysis showed that the MPR images with a 3 mm reformatted slice thickness are the best for visualizing the interlobar fissures. Compared to the sagittal orientation, the coronal was not as good and the axial was the worst among the three orientations. The coronal images with a 3 mm reformatted slice thickness were slightly inferior to sagittal images. The right horizontal fissures were observed as a fine line in all coronal images in 94.0% (282) of cases and in some of the images in 6.0% (18) of cases, the right oblique fissures were displayed as a fine line in coronal images in 2.3% (7) of cases and in some images in 85.0% (255) of cases, and the left oblique fissures were displayed as a fine line in some coronal images in 35.7% (107) of cases and as a coarse line in 64.3% (193) of cases. On axial MPR images using 3 mm reformatted slice thickness, the right oblique fissures and the left oblique fissures were displayed as a fine line in some axial images in 79.3% (238) and 81.0% (243) of cases respectively; none of the images showed horizontal fissures as a fine line. There was

  20. Localizing gravity on exotic thick 3-branes

    International Nuclear Information System (INIS)

    Castillo-Felisola, Oscar; Melfo, Alejandra; Pantoja, Nelson; Ramirez, Alba

    2004-01-01

    We consider localization of gravity on thick branes with a nontrivial structure. Double walls that generalize the thick Randall-Sundrum solution, and asymmetric walls that arise from a Z₂-symmetric scalar potential, are considered. We present a new asymmetric solution: a thick brane interpolating between two AdS₅ spacetimes with different cosmological constants, which can be derived from a 'fake supergravity' superpotential, and show that it is possible to confine gravity on such branes.

  1. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

    We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profile data. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  2. Improved biochemical preservation of heart slices during cold storage.

    Science.gov (United States)

    Bull, D A; Reid, B B; Connors, R C; Albanil, A; Stringham, J C; Karwande, S V

    2000-01-01

    Development of myocardial preservation solutions requires the use of whole-organ models which are animal and labor intensive. These models rely on physiologic rather than biochemical endpoints, making accurate comparison of the relative efficacy of individual solution components difficult. We hypothesized that myocardial slices could be used to assess preservation of biochemical function during cold storage. Whole rat hearts were precision cut into slices with a thickness of 200 microm and preserved at 4 degrees C in one of the following solutions: Columbia University (CU), University of Wisconsin (UW), D5 0.2% normal saline with 20 meq/l KCL (QNS), normal saline (NS), or a novel cardiac preservation solution (NPS) developed using this model. Myocardial biochemical function was assessed by ATP content (ηmol ATP/mg wet weight) and capacity for protein synthesis (counts per minute (cpm)/mg protein) immediately following slicing (0 hours), and at 6, 12, 18, and 24 hours of cold storage. Six slices were assayed at each time point for each solution. The data were analyzed using analysis of variance and are presented as the mean +/- standard deviation. ATP content was higher in the heart slices stored in the NPS compared to all other solutions at 6, 12, 18 and 24 hours of cold storage (p < 0.05).

  3. Improved biochemical preservation of lung slices during cold storage.

    Science.gov (United States)

    Bull, D A; Connors, R C; Reid, B B; Albanil, A; Stringham, J C; Karwande, S V

    2000-05-15

    Development of lung preservation solutions typically requires whole-organ models which are animal and labor intensive. These models rely on physiologic rather than biochemical endpoints, making accurate comparison of the relative efficacy of individual solution components difficult. We hypothesized that lung slices could be used to assess preservation of biochemical function during cold storage. Whole rat lungs were precision cut into slices with a thickness of 500 microm and preserved at 4 degrees C in the following solutions: University of Wisconsin (UW), Euro-Collins (EC), low-potassium-dextran (LPD), Kyoto (K), normal saline (NS), or a novel lung preservation solution (NPS) developed using this model. Lung biochemical function was assessed by ATP content (ηmol ATP/mg wet wt) and capacity for protein synthesis (cpm/mg protein) immediately following slicing (0 h) and at 6, 12, 18, and 24 h of cold storage. Six slices were assayed at each time point for each solution. The data were analyzed using analysis of variance and are presented as means +/- SD. ATP content was significantly higher in the lung slices stored in NPS compared with all other solutions at each time point (P < 0.05) during cold storage. Copyright 2000 Academic Press.

  4. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
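The core machinery described above is a Viterbi search over a chain of interpolation-function states. A minimal sketch of that search, assuming a per-pixel cost table for each candidate interpolation function and a constant state-switch penalty (both hypothetical stand-ins for the paper's probabilistic transition model):

```python
import numpy as np

def viterbi(cost, switch_penalty):
    """Minimum-cost sequence of states (interpolation functions).

    cost[t, j]: local cost of using interpolation function j at missing
    pixel t. switch_penalty: constant cost for changing function between
    consecutive pixels (a stand-in for the paper's transition model).
    """
    n, k = cost.shape
    total = cost[0].astype(float)         # best cost ending in each state
    back = np.zeros((n, k), dtype=int)    # backpointers
    for t in range(1, n):
        # trans[i, j]: cost of being in state i at t-1 and moving to j
        trans = total[:, None] + switch_penalty * (1.0 - np.eye(k))
        back[t] = np.argmin(trans, axis=0)
        total = trans[back[t], np.arange(k)] + cost[t]
    # backtrack the optimal state sequence
    path = [int(np.argmin(total))]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

A large switch penalty favors keeping one interpolation direction along the trellis; a small one lets the state track local image structure, which is the soft-decision behaviour the abstract describes.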

  5. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  6. Record of the Month: Interpol "Antics". Records from the Lasering store

    Index Scriptorium Estoniae

    2005-01-01

    On the records: Interpol "Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"

  7. Revisiting Veerman’s interpolation method

    DEFF Research Database (Denmark)

    Christiansen, Peter; Bay, Niels Oluf

    2016-01-01

    This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.

  8. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  9. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
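The frequency-domain integration method mentioned above amounts to dividing the spectrum by jω. A minimal numpy sketch for a periodic sampled waveform (the zeroing of the DC bin is an assumption about handling the undetermined constant of integration; the noise filtering and the Schafer-Rabiner interpolator are not reproduced here):

```python
import numpy as np

def integrate_freq(x, dt):
    """Integrate a sampled waveform by dividing its spectrum by j*omega.

    The DC bin is zeroed (the constant of integration is undetermined),
    so the result is the zero-mean antiderivative of the periodic signal.
    """
    n = len(x)
    X = np.fft.fft(x)
    w = 2 * np.pi * np.fft.fftfreq(n, d=dt)   # angular frequency per bin
    with np.errstate(divide="ignore", invalid="ignore"):
        Y = X / (1j * w)
    Y[0] = 0.0                                # discard the undefined DC term
    return np.fft.ifft(Y).real
```

For example, integrating cos(t) sampled over one full period returns sin(t) to machine precision, since each frequency bin is scaled exactly.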

  10. Wideband DOA Estimation through Projection Matrix Interpolation

    OpenAIRE

    Selva, J.

    2017-01-01

    This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

  11. Interpolation for a subclass of H∞

    Indian Academy of Sciences (India)

    |g(z_m)| ≤ c |z_m − z̄_m|, ∀m ∈ ℕ. Thus it is natural to pose the following interpolation problem for H∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H∞ if given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z*_n) |z_n − z̄_n|, ∀n ∈ ℕ, (4) there exists a product fg ∈ H∞.

  12. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

    Purpose Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
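The difference between the Euclidean (EU) and log-Euclidean (LE) baselines compared above can be sketched for a pair of symmetric positive-definite tensors. This illustrates only those two baselines, not the proposed LI or GL methods; the matrix log/exp are computed by eigendecomposition, which is valid for SPD matrices:

```python
import numpy as np

def _logm_spd(A):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def _expm_sym(A):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def interp_spd(A, B, t, mode="log-euclidean"):
    """Interpolate between two SPD tensors.

    Euclidean interpolation averages components directly and tends to
    inflate tensor size (determinant); log-Euclidean interpolates in the
    matrix-log domain and avoids that swelling effect.
    """
    if mode == "euclidean":
        return (1 - t) * A + t * B
    return _expm_sym((1 - t) * _logm_spd(A) + t * _logm_spd(B))
```

At the midpoint between I and 4I, the log-Euclidean result is 2I (geometric behaviour) while the Euclidean result is 2.5I, which is the kind of shape bias the study quantifies.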

  13. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material of electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameter of mixed media. In order to accurately predict the electromagnetic parameter of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixture of paraffin base, this paper studied two different interpolation methods: Lagrange interpolation and Hermite interpolation of electromagnetic parameters. The results showed that Hermite interpolation is more accurate than the Lagrange interpolation, and the reflectance calculated with the electromagnetic parameter obtained by interpolation is consistent with that obtained through experiment on the whole. - Highlights: • We use interpolation algorithm on calculation of EM-parameter with limited samples. • Interpolation method can predict EM-parameter well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
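Both schemes compared in the study are classical. A self-contained sketch of Lagrange polynomial interpolation and piecewise cubic Hermite interpolation follows; the node data are generic placeholders, not the paper's measured electromagnetic parameters, and the Hermite variant assumes derivative values are available at the nodes:

```python
import numpy as np

def lagrange_eval(xk, yk, x):
    """Evaluate the Lagrange interpolating polynomial through (xk, yk) at x."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    for i in range(len(xk)):
        li = np.ones_like(x)                  # i-th Lagrange basis polynomial
        for j in range(len(xk)):
            if j != i:
                li *= (x - xk[j]) / (xk[i] - xk[j])
        out += yk[i] * li
    return out

def hermite_eval(xk, yk, dyk, x):
    """Piecewise cubic Hermite interpolation from values and derivatives."""
    x = np.asarray(x, dtype=float)
    i = np.clip(np.searchsorted(xk, x) - 1, 0, len(xk) - 2)
    h = xk[i + 1] - xk[i]
    s = (x - xk[i]) / h                       # local coordinate in [0, 1]
    h00 = 2 * s**3 - 3 * s**2 + 1             # cubic Hermite basis functions
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * yk[i] + h10 * h * dyk[i] + h01 * yk[i + 1] + h11 * h * dyk[i + 1]
```

Hermite interpolation matches both values and slopes at the nodes, which is one plausible reason it tracked the measured electromagnetic parameters more accurately than Lagrange interpolation in the study.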

  14. Thin slices and Sherlock Holmes

    African Journals Online (AJOL)

    based on very little information, and often in a matter of seconds. This is partly based on very narrow slices of our experience, and involves pattern recognition, as well as the memory banks of our senses. It is also partly a heuristic process whereby one rapidly discards ideas or notions, or promotes other hypotheses, as one.

  15. Analysis of aliasing artifacts in 16-slice helical CT

    International Nuclear Information System (INIS)

    Chen Wei; Liu Jingkang; Ou Xiaoguang; Li Wenzheng; Liao Weihua; Yan Ang

    2006-01-01

    Objective: To recognize the features of aliasing artifacts on CT images, and to investigate the effects of imaging parameters on the magnitude of these artifacts. Methods: An adult dry skull was placed in a plastic water-filled container and scanned with a PHILIPS 16-slice helical CT. All transaxial images acquired with several different acquisition or reconstruction parameters were examined for comparative assessment of the aliasing artifacts. Results: The aliasing artifacts could be seen in most instances and were characterized as spoke-like patterns emanating from the edges of a high-contrast structure where its radius varies sharply in the longitudinal direction. The images scanned with pitches of 0.3, 0.6 and 0.9 all showed aliasing artifacts, and their severity increased as the pitch increased (detector combination 16 x 1.5, reconstruction thickness 2 mm); there were more significant aliasing artifacts on the images reconstructed with 0.8-mm slice width compared with 1-mm slice width, and no aliasing artifacts were observed on the images reconstructed with 2-mm slice width (detector combination 16 x 0.75, pitch 0.6); no artifacts were perceived on the images scanned with detector combination 16 x 0.75, while they were clearly present with detector combination 16 x 1.5 (pitch 0.6, reconstruction thickness 2 mm); the degree of aliasing artifacts was unaltered when the reconstruction interval and tube current were changed. Conclusions: Aliasing artifacts are caused by undersampling. When the operator judiciously chooses a thinner sampling thickness, a lower pitch and a much wider reconstruction thickness, aliasing artifacts can be effectively mitigated or suppressed. (authors)

  16. On the way to isotropic spatial resolution: technical principles and applications of 16-slice CT

    International Nuclear Information System (INIS)

    Flohr, T.; Ohnesorge, B.; Stierstorfer, K.

    2005-01-01

    The broad introduction of multi-slice CT by all major vendors in 1998 was a milestone with regard to extended volume coverage, improved axial resolution and better utilization of the tube output. New clinical applications such as CT examinations of the heart and the coronary arteries became possible. Despite all promising advances, some limitations remain for 4-slice CT systems. They come close to isotropic resolution, but do not fully reach it in routine clinical applications. Cardiac CT examinations require careful patient selection. The new generation of multi-slice CT systems offers simultaneous acquisition of up to 16 sub-millimeter slices and improved temporal resolution for cardiac examinations by means of reduced gantry rotation time (0.4 s). In this overview article we present the basic technical principles and potential applications of 16-slice technology using the example of a 16-slice CT system (SOMATOM Sensation 16, Siemens AG, Forchheim). We discuss detector design and dose efficiency as well as spiral scan and reconstruction techniques. At comparable slice thickness, 16-slice CT systems have a better dose efficiency than 4-slice CT systems. The cone-beam geometry of the measurement rays requires new reconstruction approaches; an example is the adaptive multiple plane reconstruction (AMPR). First clinical experience indicates that sub-millimeter slice width in combination with reduced gantry rotation time improves the clinical stability of cardiac examinations and expands the spectrum of patients accessible to cardiac CT. 16-slice CT systems have the potential to cover even large scan ranges with sub-millimeter slices at considerably reduced examination times, thus approaching the goal of routine isotropic imaging.

  17. Exposure (mAs) optimisation of a multi-detector CT protocol for hepatic lesion detection: are thinner slices better?

    International Nuclear Information System (INIS)

    Dobeli, Karen L.; Lewis, Sarah J.; Meikle, Steven R.; Brennan, Patrick C.; Thiele, David L.

    2014-01-01

    The purpose of this work was to determine the exposure-optimised slice thickness for hepatic lesion detection with CT. A phantom containing spheres (diameter 9.5, 4.8 and 2.4mm) with CT density 10 HU below the background (50 HU) was scanned at 125, 100, 75 and 50 mAs. Data were reconstructed at 5-, 3- and 1-mm slice thicknesses. Noise, contrast-to-noise ratio (CNR), area under the curve (AUC) as calculated using receiver operating characteristic analysis and sensitivity representing lesion detection were calculated and compared. Compared with the 125 mAs/5mm slice thickness setting, significant reductions in AUC were found for 75 mAs (P<0.01) and 50 mAs (P<0.05) at 1- and 3-mm thicknesses, respectively; sensitivity for the 9.5-mm sphere was significantly reduced for 75 (P<0.05) and 50 mAs (P<0.01) at 1-mm thickness; sensitivity for the 4.8-mm sphere was significantly lower for 100, 75 and 50 mAs at all three slice thicknesses (P<0.05). The 2.4-mm sphere was rarely detected. At each slice thickness, noise at 100, 75 and 50 mAs exposures was approximately 10, 30 and 50% higher, respectively, than that at 125 mAs exposure. CNRs decreased in an irregular manner with reductions in exposure and slice thickness. This study demonstrated no advantage to using slices below 5mm thickness, and consequently thinner slices are not necessarily better.
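The contrast-to-noise ratio reported above is typically computed from lesion and background regions of interest. One common definition, given here as an assumption since the abstract does not spell out its exact formula:

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| over background noise SD."""
    noise = background_roi.std()
    return float(abs(lesion_roi.mean() - background_roi.mean()) / noise)
```

Since thinner slices average fewer photons per voxel, noise rises as slice thickness drops, which is why CNR fell with both reduced exposure and reduced slice thickness in the study.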

  18. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of direct digital periapical images by means of edge-detect interpolation. Image processing was performed on 20 digital periapical images using pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The results were as follows: 1. Pixel replication showed blocking artifacts and serious image distortion. 2. Linear interpolation showed a smoothing effect at the edges. 3. Edge-sensitive interpolation overcame the smoothing effect at the edges and produced a better image.
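The first two methods compared, pixel replication and linear (bilinear) interpolation, can be sketched directly; the blocking versus edge-smoothing behaviour reported above follows from these definitions. The edge-sensitive method itself is not reproduced here:

```python
import numpy as np

def replicate(img, factor):
    """Pixel replication: each pixel becomes a factor x factor block (blocky)."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def bilinear(img, factor):
    """Bilinear interpolation to factor times the original size (smooths edges)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.clip(ys.astype(int), 0, h - 2)
    x0 = np.clip(xs.astype(int), 0, w - 2)
    dy = (ys - y0)[:, None]
    dx = (xs - x0)[None, :]
    a = img[np.ix_(y0, x0)]          # four neighbouring sample grids
    b = img[np.ix_(y0, x0 + 1)]
    c = img[np.ix_(y0 + 1, x0)]
    d = img[np.ix_(y0 + 1, x0 + 1)]
    return a * (1 - dy) * (1 - dx) + b * (1 - dy) * dx + c * dy * (1 - dx) + d * dy * dx
```

An edge-sensitive scheme would switch between such interpolants based on a local gradient test instead of always averaging across the edge.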

  19. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st and 2nd order interpolation. We present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation capabilities of multilayer perceptrons. We use various configurations of neural networks for 1st and 2nd order interpolation. The results are compared by means of tables.

  20. New families of interpolating type IIB backgrounds

    Science.gov (United States)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T^2 fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N=2 or N=1 supersymmetry. In the N=2 case it can be shown that the solutions are regular.

  1. Interpolation of quasi-Banach spaces

    International Nuclear Information System (INIS)

    Tabacco Vignati, A.M.

    1986-01-01

    This dissertation presents a method of complex interpolation for families of quasi-Banach spaces. This method generalizes the theory for families of Banach spaces, introduced by others. Intermediate spaces in several particular cases are characterized using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L/sup P/ spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces

  2. Quadratic Interpolation and Linear Lifting Design

    Directory of Open Access Journals (Sweden)

    Joel Solé

    2007-03-01

    A quadratic image interpolation method is stated. The formulation is connected to the optimization of lifting steps. This relation triggers the exploration of several interpolation possibilities within the same context, which uses the theory of convex optimization to minimize quadratic functions with linear constraints. The methods consider possible knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR and also, coding results are given for the new update lifting steps.

  3. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.
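The kernel-plus-digital-filter structure described above can be illustrated with the classical, non-optimized cubic B-spline quasi-interpolant. The three-tap prefilter [-1/6, 8/6, -1/6] is a standard textbook choice, not one of the filters optimized in the paper:

```python
import numpy as np

def bspline3(x):
    """Centered cubic B-spline kernel (support |x| < 2)."""
    x = np.abs(x)
    return np.where(x < 1.0, 2.0 / 3.0 - x**2 + x**3 / 2.0,
                    np.where(x < 2.0, (2.0 - x)**3 / 6.0, 0.0))

PREFILTER = np.array([-1.0, 8.0, -1.0]) / 6.0   # small FIR correction filter

def quasi_interp(samples, x):
    """Evaluate the prefiltered cubic B-spline quasi-interpolant at x."""
    c = np.convolve(samples, PREFILTER, mode="same")   # digital filtering step
    k = np.arange(len(samples))
    return float(np.sum(c * bspline3(x - k)))          # kernel reconstruction
```

Away from the boundaries, this scheme reproduces linear signals exactly; the paper's contribution is to jointly optimize both the kernel and the digital filter over the whole Nyquist range rather than fixing them as done here.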

  4. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully-resolved fine scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
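Empirical interpolation evaluates the nonlinear function at a few well-chosen "magic" points and interpolates the rest from a basis. As a minimal illustration of the point-selection idea only, using the closely related DEIM greedy rule rather than the authors' multiscale construction:

```python
import numpy as np

def deim_indices(U):
    """Greedy (DEIM-style) selection of interpolation 'magic points'.

    U: n x m matrix whose columns are basis vectors for the nonlinear term.
    Returns the m row indices at which the nonlinear function should be
    sampled so the remaining entries can be interpolated from the basis.
    """
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # interpolate the j-th basis vector at the points chosen so far ...
        c = np.linalg.solve(U[np.ix_(p, list(range(j)))], U[p, j])
        # ... and pick the location where the interpolation error is largest
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return p
```

With the indices in hand, each Newton iteration only evaluates the nonlinear residual at those rows, which is where the coarse-scale cost saving described above comes from.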

  5. Slices

    KAUST Repository

    McCrae, James; Singh, Karan; Mitra, Niloy J.

    2011-01-01

    Minimalist object representations or shape-proxies that spark and inspire human perception of shape remain an incompletely understood, yet powerful aspect of visual communication. We explore the use of planar sections, i.e., the contours

  6. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    This paper discusses positivity-preserving interpolation for positive surface data by extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters in its description, 8 of which are free parameters. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. A detailed numerical comparison with existing schemes has also been carried out. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  7. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data with constant and linear sections and a weighted overlap of signals transformed step-by-step into the spectral domain improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  8. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  9. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while requiring only 0.3% of its computation time.

  10. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  11. Technique for image interpolation using polynomial transforms

    NARCIS (Netherlands)

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

    1993-01-01

    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

  12. Low-dose ECG-gated 64-slices helical CT angiography of the chest: evaluation of image quality in 105 patients

    International Nuclear Information System (INIS)

    D'Agostino, A.G.; Remy-Jardin, M.; Khalil, C.; Remy, J.; Delannoy-Deken, V.; Duhamel, A.; Flohr, T.

    2006-01-01

    The purpose of this study was to evaluate the image quality of low-dose electrocardiogram (ECG)-gated multislice helical computed tomography (CT) angiograms of the chest. One hundred and five consecutive patients with a regular sinus rhythm (72 men; 33 women) underwent ECG-gated CT angiographic examination of the chest without administration of beta blockers using the following parameters: (a) collimation 32 x 0.6 mm with z-flying focal spot for the acquisition of 64 overlapping 0.6-mm slices, rotation time 0.33 s, pitch 0.3; (b) 120 kV, 200 mAs; (c) use of two dose modulation systems, including adjustment of the mAs setting to the patient's size and anatomical shape and an ECG-controlled tube current. Subjective and objective image quality was evaluated by two radiologists in consensus on 3-mm-thick scans reconstructed at 55% of the R-R interval. The population and protocol characteristics included: (a) a mean [±standard deviation (SD)] body mass index (BMI) of 24.47 (±4.64); (b) a mean (±SD) heart rate of 72.04 (±15.76) bpm; (c) a mean (±SD) scanning time of 18.3 (±2.73) s; (d) a mean (±SD) dose-length product (DLP) value of 260.57 (±83.67) mGy·cm; (e) an estimated average effective dose of 4.95 (±1.59) mSv. Subjective noise was depicted in a total of nine examinations (8.5%), always rated as mild. Objective noise was assessed by measuring the standard deviation of pixel values in a homogeneous region of interest within the trachea and descending aorta; the SD was 15.91 HU in the trachea and 22.16 HU in the descending aorta, with no significant difference in the mean value of the standard deviations between the four categories of BMI except for obese patients, who had a higher mean SD within the aorta. Interpolation artefacts were depicted in 22 patients, whose mean heart rate was significantly lower than that of patients without interpolation artefacts; they were rated as mild in 11 patients and severe in 11 patients. The severity of interpolation artefacts

  13. Data-adapted moving least squares method for 3-D image interpolation

    International Nuclear Information System (INIS)

    Jang, Sumi; Lee, Yeon Ju; Jeong, Byeongseon; Nam, Haewon; Lee, Rena; Yoon, Jungho

    2013-01-01

    In this paper, we present a nonlinear three-dimensional interpolation scheme for gray-level medical images. The scheme is based on the moving least squares method but introduces a fundamental modification. For a given evaluation point, the proposed method finds the local best approximation by reproducing polynomials of a certain degree. In particular, in order to obtain a better match to the local structures of the given image, we employ locally data-adapted least squares methods that can improve the classical one. Some numerical experiments are presented to demonstrate the performance of the proposed method. Five types of data sets are used: MR brain, MR foot, MR abdomen, CT head, and CT foot. From each of the five types, we choose five volumes. The scheme is compared with some well-known linear methods and other recently developed nonlinear methods. For quantitative comparison, we follow the paradigm proposed by Grevera and Udupa (1998). (Each slice is first assumed to be unknown then interpolated by each method. The performance of each interpolation method is assessed statistically.) The PSNR results for the estimated volumes are also provided. We observe that the new method generates better results in both quantitative and visual quality comparisons. (paper)

  14. Temporal interpolation alters motion in fMRI scans: Magnitudes and consequences for artifact detection.

    Directory of Open Access Journals (Sweden)

    Jonathan D Power

    Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).

  15. Usefulness of thin slice target CT scan in detecting mediastinal and hilar lymphadenopathy

    International Nuclear Information System (INIS)

    Yoshida, Shoji; Maeda, Tomoho; Nishioka, Masatoshi

    1986-01-01

    A comparative study of target scans with different slice thicknesses and scan modes was performed to evaluate mediastinal and hilar lymphadenopathy. Twenty control cases and 35 cases with lymphadenopathy were examined. The standard target scan mode was the most useful in contrast and sharpness for delineating mediastinal and hilar lymphadenopathy. A thin slice thickness of 5 mm was necessary for detecting small lymph nodes and the contour and internal structure of enlarged lymph nodes. Contiguous 5 mm target scans gave valuable assessment of the subaortic node (no. 5), tracheobronchial node (no. 4), precarinal and subcarinal node (no. 7) and right hilar node (no. 12). (author)

  16. Radiation sterilization and identification of gizzard slices

    International Nuclear Information System (INIS)

    Zhu, S.; Fu, C.; Jiang, W.; Yao, D.; Zhao, K.; Zhang, Y.

    1998-01-01

    An orthogonal test of four factors (radiation dose, storage temperature, storage time, and sanitation of the cutting places) was carried out to optimize the conditions for disinfection of gizzard slices. Under the optimized conditions, both the sanitary quality and the shelf-life of gizzard slices were improved. To identify irradiated gizzard slices, the sensory changes and the levels of water-soluble nitrogen, amino acids, total volatile basic nitrogen, peroxide value, vitamin C consumption and KMnO₄ consumption were determined. No significant change was observed except for the color, which was light brown on the surface of irradiated slices.

  17. SAR image formation with azimuth interpolation after azimuth transform

    Science.gov (United States)

    Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.

  18. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

    African Journals Online (AJOL)

    Considering the many applications of mathematical functions in different ways, it is essential to have a defining function. In this study, we used Fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate the fuzzy data. In the current world and in the field of science and technology, interpolation issues ...

  19. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

    Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal... As for validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical... Interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data.

  20. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  2. Slice of LHC dipole wiring

    CERN Multimedia

    Dipole model slice made in 1994 by Ansaldo. The high magnetic fields needed for guiding particles around the Large Hadron Collider (LHC) ring are created by passing 12’500 amps of current through coils of superconducting wiring. At very low temperatures, superconductors have no electrical resistance and therefore no power loss. The LHC is the largest superconducting installation ever built. The magnetic field must also be extremely uniform. This means the current flowing in the coils has to be very precisely controlled. Indeed, nowhere before has such precision been achieved at such high currents. 50’000 tonnes of steel sheets are used to make the magnet yokes that keep the wiring firmly in place. The yokes constitute approximately 80% of the accelerator's weight and, placed side by side, stretch over 20 km!

  3. Quadratic polynomial interpolation on triangular domain

    Science.gov (United States)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, sample points are not always mutually consistent in continuity, and traditional interpolation methods often cannot faithfully reflect the shape information carried by the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible under given accuracy and continuity requirements, without becoming too convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and so on. The resulting surface is shown in experiments.

  4. Trace interpolation by slant-stack migration

    International Nuclear Information System (INIS)

    Novotny, M.

    1990-01-01

    The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip-reject filtering. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period.

  5. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We consider the image interpolation problem: given an image with uniformly-sampled pixels v_{m,n} and point spread function h, the goal is to find a function u(x,y) satisfying v_{m,n} = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.

  6. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
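
A toy version of the GIE surface can be built as the abstract describes: a Gaussian kernel is centred on each species' centroid, scaled by the centroid-to-farthest-occurrence distance, and the kernels are summed. The occurrence data below are invented for illustration.

```python
import numpy as np

# hypothetical occurrences for three species (lon, lat); two share a region
species = {
    "sp_A": np.array([[0.0, 0.0], [1.0, 0.5], [0.5, 1.0]]),
    "sp_B": np.array([[0.4, 0.4], [1.1, 0.2], [0.6, 1.2]]),
    "sp_C": np.array([[5.0, 5.0], [6.0, 5.5]]),
}

# grid over the study area
gx, gy = np.meshgrid(np.linspace(-1, 7, 81), np.linspace(-1, 7, 81))
surface = np.zeros_like(gx)

for pts in species.values():
    c = pts.mean(axis=0)                          # centroid of the species' records
    r = np.linalg.norm(pts - c, axis=1).max()     # distance to farthest occurrence
    d2 = (gx - c[0]) ** 2 + (gy - c[1]) ** 2
    surface += np.exp(-d2 / (2 * r ** 2))         # kernel sized by the species' range

peak = np.unravel_index(surface.argmax(), surface.shape)
print(gx[peak], gy[peak])   # the peak falls where sp_A and sp_B overlap
```

The maximum of the summed surface marks the candidate area of endemism: it lies where the ranges of sp_A and sp_B overlap, not at the isolated sp_C.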

  8. Thick Toenails

    Science.gov (United States)

    Thick toenails may occur in individuals with nail fungus (onychomycosis), psoriasis and hypothyroidism. Those who have problems with the thickness of their toenails should consult a foot and ankle surgeon for proper diagnosis and treatment.

  9. Is correction necessary when clinically determining quantitative cerebral perfusion parameters from multi-slice dynamic susceptibility contrast MR studies?

    International Nuclear Information System (INIS)

    Salluzzi, M; Frayne, R; Smith, M R

    2006-01-01

    Several groups have modified the standard singular value decomposition (SVD) algorithm to produce delay-insensitive cerebral blood flow (CBF) estimates from dynamic susceptibility contrast (DSC) perfusion studies. However, new dependences of CBF estimates on bolus arrival times and slice position in multi-slice studies have been recently recognized. These conflicting findings can be reconciled by accounting for several experimental and algorithmic factors. Using simulation and clinical studies, the non-simultaneous measurement of arterial and tissue concentration curves (relative slice position) in a multi-slice study is shown to affect time-related perfusion parameters, e.g. arterial-tissue-delay measurements. However, the current clinical impact of relative slice position on amplitude-related perfusion parameters, e.g. CBF, can be expected to be small unless any of the following conditions are present individually or in combination: (a) high concentration curve signal-to-noise ratios, (b) small tissue mean transit times, (c) narrow arterial input functions or (d) low temporal resolution of the DSC image sequence. Recent improvements in magnetic resonance (MR) technology can easily be expected to lead to scenarios where these effects become increasingly important sources of inaccuracy for all perfusion parameter estimates. We show that using Fourier interpolated (high temporal resolution) residue functions reduces the systematic error of the perfusion parameters obtained from multi-slice studies

  10. A z-gradient array for simultaneous multi-slice excitation with a single-band RF pulse.

    Science.gov (United States)

    Ertan, Koray; Taraghinia, Soheil; Sadeghi, Alireza; Atalar, Ergin

    2018-07-01

    Multi-slice radiofrequency (RF) pulses have higher specific absorption rates, more peak RF power, and longer pulse durations than single-slice RF pulses. Gradient field design techniques using a z-gradient array are investigated for exciting multiple slices with a single-band RF pulse. Two different field design methods are formulated to solve for the required current values of the gradient array elements for the given slice locations. The method requirements are specified, optimization problems are formulated for the minimum current norm and an analytical solution is provided. A 9-channel z-gradient coil array driven by independent, custom-designed gradient amplifiers is used to validate the theory. Performance measures such as normalized slice thickness error, gradient strength per unit norm current, power dissipation, and maximum amplitude of the magnetic field are provided for various slice locations and numbers of slices. Two and 3 slices are excited by a single-band RF pulse in simulations and phantom experiments. The possibility of multi-slice excitation with a single-band RF pulse using a z-gradient array is validated in simulations and phantom experiments. Magn Reson Med 80:400-412, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
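
The current solve for a target multi-slice field can be sketched as an ordinary least-squares problem. The coil geometry, field profile, and slice positions below are illustrative assumptions, not the published design, and we simply fit the target field in a least-squares sense rather than reproducing the paper's minimum-norm formulation.

```python
import numpy as np

# 9 loop elements evenly spaced along z (cm)
z_coils = np.linspace(-8, 8, 9)
a = 3.0                                           # element radius (cm), assumed

def profile(z, zc):
    # on-axis field shape of a circular current loop centred at zc (arb. units)
    return 1.0 / (1.0 + ((z - zc) / a) ** 2) ** 1.5

# constrain the field only near the two slice centres at z = -5 and z = +5 cm
z_lo = np.linspace(-6, -4, 21)
z_hi = np.linspace(4, 6, 21)
z_pts = np.concatenate([z_lo, z_hi])
A = np.stack([profile(z_pts, zc) for zc in z_coils], axis=1)   # field = A @ currents

# target: both slices see the same local gradient through zero offset,
# so one single-band RF pulse excites both locations
target = np.concatenate([z_lo + 5.0, z_hi - 5.0])

currents, *_ = np.linalg.lstsq(A, target, rcond=None)
rel_err = np.linalg.norm(A @ currents - target) / np.linalg.norm(target)
print(rel_err)    # small residual: the array reproduces the two-slice field well
```

Nine independent channels give enough freedom to impose matching slice-select ramps at two separated locations, which is the core idea behind exciting multiple slices with a single-band pulse.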

  11. Dried fruit breadfruit slices by Refractive Window™ technique

    Directory of Open Access Journals (Sweden)

    Diego F. Tirado

    2016-01-01

    A large number of products are dried for reasons such as preservation, weight reduction and improved stability. However, the market does not offer products that are simultaneously low-cost and high-quality. Although there are effective food dehydration methods such as freeze drying, which preserves flavor, color and vitamins, these technologies are poorly accessible, so alternative processes that are efficient and economical are required. The aim of this research was to compare the drying kinetics of breadfruit slices (Artocarpus communis) dried by the Refractive Window™ (VR) technique with tray drying. Slices 1 and 2 mm thick were used. Refractive Window drying was performed with the water bath at 92 °C, and tray drying at 62 °C with an air velocity of 0.52 m/s. During Refractive Window drying, the moisture content reached lower levels than in tray drying. Likewise, the 1 mm samples, being thinner, reached lower moisture levels than the 2 mm samples. The highest diffusivities were obtained during VR drying of the 1 and 2 mm slices, with coefficients of 6.13×10⁻⁹ and 3.90×10⁻⁹ m²/s respectively.

  12. Development of an electrically operated cassava slicing machine

    Directory of Open Access Journals (Sweden)

    I. S. Aji

    2013-08-01

    Labor input in manual cassava chip processing is very high and product quality is low. This paper presents the design and construction of an electrically operated cassava slicing machine that requires only one person to operate. Efficiency, portability, ease of operation, corrosion prevention for the slicing components, the force required to slice a cassava tuber, a capacity of 10 kg/min and uniformity in the size of the cassava chips were considered in the design and fabrication of the machine. The performance of the machine was evaluated with cassava of average length 253 mm and average diameter 60 mm at an average speed of 154 rpm. The machine produced 5.3 kg of chips of 10 mm length and 60 mm diameter in 1 minute. The efficiency of the machine was 95.6% with respect to the quantity of input cassava. The chips were well cut to the designed thickness and shape and were of generally similar size. Galvanized steel sheets were used in the cutting section to avoid corrosion of components. The machine is portable, easy to operate, and can be adopted for cassava processing in a medium-size industry.

  13. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained using these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.

  14. Interpolation functions and the Lions-Peetre interpolation construction

    International Nuclear Information System (INIS)

    Ovchinnikov, V I

    2014-01-01

    The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generalization is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X₀,X₁)_{p₀,p₁} considered here have three parameters: two positive numerical parameters p₀ and p₁ of equal standing, and a function parameter φ. For p₀ ≠ p₁ these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X₀,X₁)_{p₀,p₁}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces Lₚ and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles.

  15. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion is reliable to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.

  16. Integrating interface slicing into software engineering processes

    Science.gov (United States)

    Beck, Jon

    1993-01-01

    Interface slicing is a tool which was developed to facilitate software engineering. As previously presented, it was described in terms of its techniques and mechanisms. The integration of interface slicing into specific software engineering activities is considered by discussing a number of potential applications of interface slicing. The applications discussed specifically address the problems, issues, or concerns raised in a previous project. Because a complete interface slicer is still under development, these applications must be phrased in future tenses. Nonetheless, the interface slicing techniques which were presented can be implemented using current compiler and static analysis technology. Whether implemented as a standalone tool or as a module in an integrated development or reverse engineering environment, they require analysis no more complex than that required for current system development environments. By contrast, conventional slicing is a methodology which, while showing much promise and intuitive appeal, has yet to be fully implemented in a production language environment despite 12 years of development.

  17. Research progress and hotspot analysis of spatial interpolation

    Science.gov (United States)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and visualization analysis is carried out on the co-country network, co-category network, co-citation network and keyword co-occurrence network. It is found that spatial interpolation has experienced three stages: slow development, steady development and rapid development. Eleven clustering groups interact with one another, and the main themes are the convergence of spatial interpolation theory research, practical applications and case studies of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research system framework; it is strongly interdisciplinary and is widely applied in various fields.

  18. Measurement of slice sensitivity profile for a 64-slice spiral CT system

    International Nuclear Information System (INIS)

    Liu Chuanya; Qin Weichang; Wang Wei; Lu Chuanyou

    2006-01-01

    Objective: To measure and evaluate the slice sensitivity profile (SSP) and the full width at half maximum (FWHM) for a 64-slice spiral CT system. Methods: Using the same CT technique and body mode as used clinically, a delta phantom was scanned with a Somatom Sensation 64-slice spiral CT. SSPs and FWHM were measured with a reconstruction slice width of 0.6 mm at pitch = 0.50, 0.75, 1.00, 1.25 and 1.50, and with reconstruction slice widths of 0.6, 1.0 and 1.5 mm at pitch = 1. Results: For the nominal slice width of 0.6 mm, the measured FWHM, i.e. the effective slice width, was 0.67, 0.67, 0.66, 0.69 and 0.69 mm at the different pitches. All measured FWHM values deviated less than 0.1 mm from the nominal slice width. The measured SSPs were symmetrical, bell-shaped curves without far-reaching tails, and showed only slight variation as a function of the spiral pitch. As the reconstruction slice width increased, the SSP became correspondingly wider. Conclusions: Variation of pitch has hardly any effect on the SSP, effective slice width, or z-direction spatial resolution of the Sensation 64-slice spiral CT system, which is helpful for optimizing CT scanning protocols. (authors)
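
The FWHM of a measured SSP can be extracted from sampled data by locating the half-maximum crossings with linear interpolation. A sketch with a synthetic Gaussian profile of nominal 0.67 mm width (the measurement procedure here is generic, not the authors' exact protocol):

```python
import numpy as np

def fwhm(z, ssp):
    """Full width at half maximum of a sampled slice sensitivity profile,
    with linear interpolation between samples at the half-maximum crossings."""
    half = ssp.max() / 2.0
    above = np.where(ssp >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right crossings (flanks are monotone there)
    left = np.interp(half, [ssp[i0 - 1], ssp[i0]], [z[i0 - 1], z[i0]])
    right = np.interp(half, [ssp[i1 + 1], ssp[i1]], [z[i1 + 1], z[i1]])
    return right - left

# Gaussian-like SSP whose true FWHM is 0.67 mm
z = np.linspace(-2, 2, 401)            # mm
sigma = 0.67 / 2.355                   # FWHM = 2*sqrt(2 ln 2) * sigma ≈ 2.355 * sigma
ssp = np.exp(-z**2 / (2 * sigma**2))
print(fwhm(z, ssp))                    # ≈ 0.67
```

Applied to a profile measured with a delta phantom, the same routine yields the effective slice width reported in the abstract.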

  19. Comparison of sliced lungs with whole lung sets for a torso phantom measured with Ge detectors using Monte Carlo simulations (MCNP).

    Science.gov (United States)

    Kramer, Gary H; Guerriere, Steven

    2003-02-01

    Lung counters are generally used to measure low energy photons (<100 keV). They are usually calibrated with lung sets that are manufactured from a lung tissue substitute material that contains homogeneously distributed activity; however, it is difficult to verify either the activity in the phantom or the homogeneity of the activity distribution without destructive testing. Lung sets can have activities that are as much as 25% different from the expected value. An alternative method to using whole lungs to calibrate a lung counter is to use a sliced lung with planar inserts. Experimental work has already indicated that this alternative method of calibration can be a satisfactory substitute. This work has extended the experimental study by the use of Monte Carlo simulation to validate that sliced and whole lungs are equivalent. It also has determined the optimum slice thicknesses that separate the planar sources in the sliced lung. Slice thicknesses have been investigated in the range of 0.5 cm to 9.0 cm and at photon energies from 17 keV to 1,000 keV. Results have shown that there is little difference between sliced and whole lungs at low energies providing that the slice thickness is 2.0 cm or less. As the photon energy rises the slice thickness can increase substantially with no degradation on equivalence.
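
The equivalence argument can be illustrated with a simple one-dimensional attenuation model (no scatter, no detector geometry; the attenuation coefficient is an assumed value): the detector response per unit activity of a homogeneous source column is compared with planar sources placed at the centre of each slab of thickness dz.

```python
import numpy as np

mu = 0.7   # linear attenuation coefficient (1/cm), assumed low-energy value
L = 10.0   # source-column depth toward the detector (cm)

# homogeneous volume source: depth-averaged transmission per unit activity
whole = (1 - np.exp(-mu * L)) / (mu * L)

def sliced(dz):
    # planar sources at the centre of each slab of thickness dz
    z = np.arange(dz / 2, L, dz)
    return np.exp(-mu * z).mean()

print(whole, sliced(0.5), sliced(2.0), sliced(9.0))
```

With dz = 0.5 cm the planar model agrees with the homogeneous one to well under 1%, at 2 cm the discrepancy grows to a few percent, and a single 9 cm slab is far off, mirroring the finding that thin slices (roughly 2 cm or less at low energy) are needed for equivalence.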

  20. Generation of nuclear data banks through interpolation

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    1999-01-01

    Nuclear Data Bank generation is a process that requires a great amount of resources, both computing and human. Considering that at times a great number of Data Banks must be created, it is convenient to have a reliable tool that generates them with the fewest resources, in the least possible time and with a very good approximation. This work shows the results obtained during the development of the INTPOLBI code, used to generate Nuclear Data Banks employing bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two proposals were developed, applying in both cases the finite element method, using one element with 16 nodes to carry out the interpolation. In the first proposal the canonical basis was employed to obtain the interpolating polynomial and, later, the corresponding linear equation system, which was solved by Gaussian elimination with partial pivoting. In the second, the Newton basis was used to obtain the mentioned system, resulting in a lower triangular matrix whose structure, after elementary operations, yields a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI code, the INTERTEG code (created at the Instituto de Investigaciones Electricas for the same purpose), and Data Banks created through the conventional process, that is, with the nuclear codes normally used. It is concluded that the Nuclear Data Banks generated with the INTPOLBI code constitute a very good approximation that, even though it does not wholly replace the conventional process, is helpful when a great number of Data Banks must be created. (Author)
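
The 16-node bicubic element amounts to solving a 16×16 linear system for the coefficients of the basis u^i g^j (i, j = 0..3) over a 4×4 node grid. A sketch with toy node values and an invented cross-section function (the canonical-basis variant; the Newton-basis variant would triangularize the same system):

```python
import numpy as np

# 4 x 4 node grid (the 16-node element); toy values standing in for
# uranium and gadolinium percentages
u_nodes = np.array([0.0, 1.0, 2.0, 3.0])
g_nodes = np.array([0.0, 1.0, 2.0, 3.0])

def k_inf(u, g):
    # stand-in for a lattice-code result; any polynomial of degree <= 3
    # per variable is reproduced exactly by the bicubic interpolant
    return 1.1 + 0.08 * u - 0.03 * g + 0.004 * u * g - 0.002 * g ** 2

U, G = np.meshgrid(u_nodes, g_nodes, indexing="ij")
# 16 x 16 system: one row per node, one column per basis monomial u^i g^j
A = np.array([[uu ** i * gg ** j for i in range(4) for j in range(4)]
              for uu, gg in zip(U.ravel(), G.ravel())])
c = np.linalg.solve(A, k_inf(U, G).ravel())

def interp(u, g):
    basis = np.array([u ** i * g ** j for i in range(4) for j in range(4)])
    return basis @ c

print(interp(1.3, 2.4), k_inf(1.3, 2.4))  # interpolant matches the test polynomial
```

Each stored quantity in the Data Bank gets its own coefficient vector c, after which evaluating the bank at any intermediate (u, g) is a 16-term dot product.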

  2. Calculation of reactivity without Lagrange interpolation

    International Nuclear Information System (INIS)

    Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.

    2015-09-01

    A new method to solve numerically the inverse equation of point kinetics without using the Lagrange interpolating polynomial is formulated; this method uses a polynomial approximation with N points based on a recurrence process for simulating different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision it can be used to implement a digital reactivity meter operating in real time. (Author)
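
A minimal sketch of the underlying inverse point kinetics computation, with one delayed-neutron group and a trapezoidal recurrence for the precursor convolution (this is a generic formulation for illustration, not the authors' Lagrange-free polynomial scheme; the kinetics constants are illustrative):

```python
import numpy as np

# one delayed-neutron group (illustrative constants)
beta, lam, Lam = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), generation time (s)

def reactivity(t, n):
    """Inverse point kinetics: rho(t) from a measured power history n(t)."""
    rho = np.zeros_like(n)
    dt = t[1] - t[0]
    dn = np.gradient(n, dt)
    conv = 0.0                         # running value of  ∫ e^{-lam (t - t')} n(t') dt'
    for k in range(len(t)):
        if k > 0:                      # trapezoidal update of the convolution
            conv = conv * np.exp(-lam * dt) + 0.5 * dt * (
                n[k] + n[k - 1] * np.exp(-lam * dt))
        # equilibrium precursors at t = 0 contribute n(0) e^{-lam t} / lam
        delayed = conv + n[0] * np.exp(-lam * t[k]) / lam
        rho[k] = beta + Lam * dn[k] / n[k] - beta * lam * delayed / n[k]
    return rho

t = np.arange(0.0, 60.0, 0.1)
omega = 0.01
n = np.exp(omega * t)                  # exponential power rise
rho = reactivity(t, n)
expected = Lam * omega + beta * omega / (omega + lam)   # asymptotic inhour relation
print(rho[-1], expected)
```

For a steady power the routine returns essentially zero reactivity, and for an exponential rise it approaches the inhour-relation value, which is the usual sanity check for a digital reactivity meter.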

  3. Solving the Schroedinger equation using Smolyak interpolants

    International Nuclear Information System (INIS)

    Avila, Gustavo; Carrington, Tucker Jr.

    2013-01-01

    In this paper, we present a new collocation method for solving the Schroedinger equation. Collocation has the advantage that it obviates integrals. All previous collocation methods have, however, the crucial disadvantage that they require solving a generalized eigenvalue problem. By combining Lagrange-like functions with a Smolyak interpolant, we devise a collocation method that does not require solving a generalized eigenvalue problem. We exploit the structure of the grid to develop an efficient algorithm for evaluating the matrix-vector products required to compute energy levels and wavefunctions. Energies systematically converge as the number of points and basis functions is increased

  4. Topics in multivariate approximation and interpolation

    CERN Document Server

    Jetter, Kurt

    2005-01-01

    This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as in Computer Aided Geometric Design, in Mathematical Modelling, in Signal and Image Processing and in Machine Learning, to mention a few. The book aims at giving comprehensive information leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr

  5. Effectiveness of thin-slice axial images of multidetector row CT for visualization of bronchial artery before bronchial arterial embolization

    International Nuclear Information System (INIS)

    Shida, Yoshitaka; Hasuo, Kanehiro; Aibe, Hitoshi; Kubo, Yuko; Terashima, Kotaro; Kinjo, Maya; Kamano, H.; Yoshida, Atsuko

    2008-01-01

    We assessed the visualization of the bronchial artery (BA) on thin-slice axial images from 4-detector multidetector row CT in 65 patients with hemoptysis. In all patients, the origins of the BA were well identified by observing consecutive axial images of 1 mm thickness with the paging method, and bronchial arterial embolization (BAE) was performed successfully. Thin-slice axial images were considered useful for recognizing the BA and performing BAE in patients with hemoptysis. (author)

  6. The cause of the artifact in 4-slice helical computed tomography

    International Nuclear Information System (INIS)

    Taguchi, Katsuyuki; Aradate, Hiroshi; Saito, Yasuo; Zmora, Ilan; Han, Kyung S.; Silver, Michael D.

    2004-01-01

    The causes of the image artifacts in a 4-slice helical computed tomography have been discussed as follows: (1) changeover in pairs of data used in z interpolation, (2) sampling interval in z, and (3) the cone angle. This study analyzes the first two causes of the artifact and describes how the current algorithm [K. Taguchi and H. Aradate, Radiology 205P, 390 (1997); 205P, 618 (1997); Med. Phys. 25, 550-561 (1998); H. Hu, ibid. 26, 5-18 (1999); S. Schaller et al., IEEE Trans. Med. Imaging 19, 822-834 (2000); K. Taguchi, Ph.D. thesis, University of Tsukuba, 2002] solves the problem. An interpolated sinogram for a slice at the edge of a ball phantom shows discontinuity caused by the changeover. If we extend the streak artifact in the reconstructed image, it crosses the focus orbit at the corresponding projection angle. Applying z filtering can reduce such causes by its feathering effect and mixing data obtained by different cone angles; the best results are provided when z filtering is applied to densely sampled helical data

  7. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to increases in population. Mumbai city in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to regulate air pollution control strategies to reduce the air pollution level. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information System (GIS) has been used to perform interpolation with the help of concentration data on air quality at three locations in Mumbai for the year 2008. Classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, the seasonal level of SPM was low in the monsoon due to rainfall. The findings of this study will help to formulate control strategies for rational management of air pollution and can be used for many other regions.
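
    The Inverse Distance Weighting scheme used above is straightforward to sketch. The station coordinates and concentration values below are invented for illustration, not the Mumbai monitoring data:

```python
import math

def idw(stations, values, query, power=2):
    """Inverse Distance Weighting: estimate the value at `query` from
    measurements at `stations` (list of (x, y) tuples). Nearer stations
    get weight 1/d**power; a query exactly at a station returns its value."""
    num, den = 0.0, 0.0
    for (x, y), v in zip(stations, values):
        d = math.hypot(query[0] - x, query[1] - y)
        if d == 0.0:
            return v
        w = 1.0 / d ** power
        num += w * v
        den += w
    return num / den

# Three hypothetical monitoring stations with SPM concentrations (ug/m3).
stations = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
spm = [180.0, 240.0, 210.0]
estimate = idw(stations, spm, (5.0, 3.0))
```

    Because IDW is a convex combination of the station values, every interpolated estimate lies between the minimum and maximum measured concentrations.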

  8. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
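
    The matrix-level building block — an interpolative decomposition that expresses all columns of a low-rank matrix as combinations of a selected column subset — can be sketched as follows. This uses simple greedy column pivoting as a stand-in for the randomized projection scheme of the paper:

```python
import numpy as np

def matrix_id(A, k):
    """Rank-k interpolative decomposition A ~= A[:, idx] @ P.

    Columns are chosen by greedy pivoting on residual norms (a simple
    deterministic stand-in for the randomized approach); P is then the
    least-squares coefficient matrix expressing all columns of A in
    terms of the selected ones."""
    R = A.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.sum(R * R, axis=0)))   # largest residual column
        idx.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)                  # deflate chosen direction
    P = np.linalg.lstsq(A[:, idx], A, rcond=None)[0]
    return idx, P

# Rank-2 test matrix: every column is a combination of two generators,
# so a rank-2 ID should reconstruct it to machine precision.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
idx, P = matrix_id(A, 2)
err = np.linalg.norm(A - A[:, idx] @ P)
```
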

  9. Size-Dictionary Interpolation for Robot's Adjustment

    Directory of Open Access Journals (Sweden)

    Morteza Daneshmand

    2015-05-01

    Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner to be used in a realistic virtual fitting room, where automatic activation of the chosen mannequin robot, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, is also considered to make it mimic body shapes and sizes instantly. The classification process consists of two layers, dealing, respectively, with gender and size. The interpolation procedure seeks the set of positions of the biologically inspired actuators that yields the closest possible resemblance to the shape of the scanned person's body. It does so by linearly mapping the distances between subsequent size templates to the corresponding position sets of the bioengineered actuators and then calculating the control measures that maintain the same distance proportions, where minimizing the Euclidean distance between the size-dictionary template vectors and the vector of the desired body sizes determines the mathematical description. In this research work, the experimental results of the implementation of the proposed method on Fits.me's mannequin robots are visually illustrated, and an explanation of the remaining steps towards completion of the whole realistic online fitting package is provided.

  10. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.

  11. Outline and handling manual of experimental data time slice monitoring software 'SLICE'

    International Nuclear Information System (INIS)

    Shirai, Hiroshi; Hirayama, Toshio; Shimizu, Katsuhiro; Tani, Keiji; Azumi, Masafumi; Hirai, Ken-ichiro; Konno, Satoshi; Takase, Keizou.

    1993-02-01

    We have developed software, 'SLICE', which maps various kinds of plasma experimental data measured at different geometrical positions in JT-60U and JFT-2M onto the equilibrium magnetic configuration and treats them as functions of the volume-averaged minor radius ρ. Experimental data can be handled uniformly by using 'SLICE'. A rich set of 'SLICE' commands makes it easy to process the mapped data. Experimental data measured as line-integrated values are also transformed by Abel inversion. The mapped data are fitted to a functional form and saved to the database 'MAPDB'. 'SLICE' can read the data back from 'MAPDB' and re-display and transform them. In addition, 'SLICE' creates run data for the orbit-following Monte Carlo code 'OFMC' and the tokamak predictive and interpretation code system 'TOPICS'. This report summarizes the outline and usage of 'SLICE'. (author)

  12. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  14. Visual patch clamp recording of neurons in thick portions of the adult spinal cord

    DEFF Research Database (Denmark)

    Munch, Anders Sonne; Smith, Morten; Moldovan, Mihai

    2010-01-01

    The study of visually identified neurons in slice preparations from the central nervous system offers considerable advantages over in vivo preparations including high mechanical stability in the absence of anaesthesia and full control of the extracellular medium. However, because of their relative...... remain alive and capable of generating action potentials. By stimulating the lateral funiculus we can evoke intense synaptic activity associated with large increases in conductance of the recorded neurons. The conductance increases substantially more in neurons recorded in thick slices suggesting...... that the size of the network recruited with the stimulation increases with the thickness of the slices. We also find that the number of spontaneous excitatory postsynaptic currents (EPSCs) is higher in thick slices compared with thin slices while the number of spontaneous inhibitory postsynaptic currents...

  15. A simple water-immersion condenser for imaging living brain slices on an inverted microscope.

    Science.gov (United States)

    Prusky, G T

    1997-09-05

    Due to some physical limitations of conventional condensers, inverted compound microscopes are not optimally suited for imaging living brain slices with transmitted light. Herein is described a simple device that converts an inverted microscope into an effective tool for this application by utilizing an objective as a condenser. The device is mounted on a microscope in place of the condenser, is threaded to accept a water immersion objective, and has a slot for a differential interference contrast (DIC) slider. When combined with infrared video techniques, this device allows an inverted microscope to effectively image living cells within thick brain slices in an open perfusion chamber.

  16. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long distance interpolation methods with the low complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and in combination with complexity reducing methods, such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers

  17. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  18. Clinical usefulness of facial soft tissues thickness measurement using 3D computed tomographic images

    International Nuclear Information System (INIS)

    Jeong, Ho Gul; Kim, Kee Deog; Hu, Kyung Seok; Lee, Jae Bum; Park, Hyok; Han, Seung Ho; Choi, Seong Ho; Kim, Chong Kwan; Park, Chang Seo

    2006-01-01

    To evaluate the clinical usefulness of facial soft tissue thickness measurement using 3D computed tomographic images. One cadaver with sound facial soft tissues was chosen for the study. The cadaver was scanned with a helical CT under the following scanning protocols for slice thickness and table speed: 3 mm and 3 mm/sec, 5 mm and 5 mm/sec, 7 mm and 7 mm/sec. The acquired data were reconstructed at 1.5, 2.5, and 3.5 mm reconstruction intervals, respectively, and the images were transferred to a personal computer. The facial soft tissue thickness was measured using a program developed to measure it on 3D images. After repeating the measurement ten times, repeated-measures analysis of variance (ANOVA) was used to compare and analyze the measurements obtained under the three scanning protocols. Comparison according to area was analyzed by the Mann-Whitney test. There were no statistically significant intraobserver differences in the measurements of facial soft tissue thickness under the three scanning protocols (p>0.05). There were no statistically significant differences between the measurements at 3 mm slice thickness and those at 5 mm and 7 mm slice thickness (p>0.05). There were statistically significant differences in 14 of the total 30 measured points at 5 mm slice thickness and in 22 at 7 mm slice thickness. Facial soft tissue thickness measurement using 3D images at 7 mm slice thickness is clinically acceptable, but 5 mm slice thickness is recommended for more accurate measurement.

  19. Optimum and robust 3D facies interpolation strategies in a heterogeneous coal zone (Tertiary As Pontes basin, NW Spain)

    Energy Technology Data Exchange (ETDEWEB)

    Falivene, Oriol; Cabrera, Lluis; Saez, Alberto [Geomodels Institute, Group of Geodynamics and Basin Analysis, Department of Stratigraphy, Paleontology and Marine Geosciences, Universitat de Barcelona, c/ Marti i Franques s/n, Facultat de Geologia, 08028 Barcelona (Spain)

    2007-07-02

    Coal exploration and mining in extensively drilled and sampled coal zones can benefit from 3D statistical facies interpolation. Starting from closely spaced core descriptions, and using interpolation methods, a 3D optimum and robust facies distribution model was obtained for a thick, heterogeneous coal zone deposited in the non-marine As Pontes basin (Oligocene-Early Miocene, NW Spain). Several grid layering styles, interpolation methods (truncated inverse squared distance weighting, truncated kriging, truncated kriging with an areal trend, indicator inverse squared distance weighting, indicator kriging, and indicator kriging with an areal trend) and searching conditions were compared. Facies interpolation strategies were evaluated using visual comparison and cross validation. Moreover, robustness of the resultant facies distribution with respect to variations in interpolation method input parameters was verified by taking into account several scenarios of uncertainty. The resultant 3D facies reconstruction improves the understanding of the distribution and geometry of the coal facies. Furthermore, since some coal quality properties (e.g. calorific value or sulphur percentage) display a good statistical correspondence with facies, predicting the distribution of these properties using the reconstructed facies distribution as a template proved to be a powerful approach, yielding more accurate and realistic reconstructions of these properties in the coal zone. (author)

  20. Organotypic brain slice cultures of adult transgenic P301S mice--a model for tauopathy studies.

    Directory of Open Access Journals (Sweden)

    Agneta Mewes

    Full Text Available BACKGROUND: Organotypic brain slice cultures represent an excellent compromise between single cell cultures and complete animal studies, in this way replacing and reducing the number of animal experiments. Organotypic brain slices are widely applied to model neuronal development and regeneration as well as neuronal pathology concerning stroke, epilepsy and Alzheimer's disease (AD). AD is characterized by two protein alterations, namely tau hyperphosphorylation and excessive amyloid β deposition, both causing microglia and astrocyte activation. Deposits of hyperphosphorylated tau, called neurofibrillary tangles (NFTs), surrounded by activated glia are modeled in transgenic mice, e.g. the tauopathy model P301S. METHODOLOGY/PRINCIPAL FINDINGS: In this study we explore the benefits and limitations of organotypic brain slice cultures made of mature adult transgenic mice as a potential model system for the multifactorial phenotype of AD. First, neonatal (P1) and adult organotypic brain slice cultures from 7- to 10-month-old transgenic P301S mice have been compared with regard to vitality, which was monitored with the lactate dehydrogenase (LDH) and MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) assays over 15 days. Neonatal slices displayed a constant high vitality level, while the vitality of adult slice cultures decreased significantly upon cultivation. Various preparation and cultivation conditions were tested to augment the vitality of adult slices and improvements were achieved with a reduced slice thickness, a mild hypothermic cultivation temperature and a cultivation CO2 concentration of 5%. Furthermore, we present a substantial immunohistochemical characterization analyzing the morphology of neurons, astrocytes and microglia in comparison to neonatal tissue. CONCLUSION/SIGNIFICANCE: Until now only adolescent animals with a maximum age of two months have been used to prepare organotypic brain slices. The current study

  1. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...

  2. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been made in detail. From all presented numerical results the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis for the case where the function to be interpolated is f(t) ∈ C^3[t_0, t_n] is also investigated in detail.

  3. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  4. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard of High-Efficiency Video Coding (HEVC) is especially efficient for coding high-resolution video such as 8K-ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. Firstly, a new interpolation filter algorithm based on the 8-pixel interpolation unit is proposed; it can save 19.7 % of processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation only requires 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which reduces the area cost by about 131,040 bits of RAM. The processing latency of our proposed VLSI architecture can support the real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
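
    The arithmetic behind such an interpolation filter can be sketched in one dimension. The 8-tap coefficient set below is the HEVC luma half-sample filter from the standard; the border clamping and the sample values are simplifying assumptions for illustration, not the paper's hardware data path:

```python
# 1-D half-pixel interpolation with the HEVC 8-tap luma filter
# (coefficients sum to 64, hence the round-and-shift by 6).
# Border samples are clamped, a simplification of real edge handling.

HEVC_HALF = [-1, 4, -11, 40, 40, -11, 4, -1]

def half_pel(row):
    """Interpolate the half-sample positions between adjacent pixels."""
    n = len(row)
    out = []
    for i in range(n - 1):                      # position between i and i+1
        acc = 0
        for k, c in enumerate(HEVC_HALF):
            j = min(max(i + k - 3, 0), n - 1)   # clamp at the borders
            acc += c * row[j]
        out.append((acc + 32) >> 6)             # round and normalize by 64
    return out

row = [100, 102, 104, 106, 108, 110, 112, 114]
half = half_pel(row)
```

    On a linear ramp the symmetric filter reproduces the exact midpoints away from the borders, a quick sanity check that the coefficients are normalized correctly.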

  5. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition defining term. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter ...

  6. Convergence of trajectories in fractal interpolation of stochastic processes

    International Nuclear Information System (INIS)

    Małysz, Robert

    2006-01-01

    The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88.], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28.], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8.]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation
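
    The deterministic building block behind these results — an affine fractal interpolation function through given data points — can be sketched as follows. The data points and vertical scaling factors are illustrative; the stochastic construction in the paper replaces them with process-dependent quantities:

```python
# Affine fractal interpolation function (FIF) through points (x_i, y_i)
# with vertical scaling factors d_i in (-1, 1). Iterating the IFS maps
# on any initial curve converges to the FIF graph, which passes through
# every data point.

def fif_maps(xs, ys, d):
    """Build the affine maps w_i(x, y) = (a x + e, c x + d y + f)."""
    N = len(xs) - 1
    maps = []
    for i in range(1, N + 1):
        a = (xs[i] - xs[i - 1]) / (xs[N] - xs[0])
        e = xs[i - 1] - a * xs[0]
        c = (ys[i] - ys[i - 1] - d[i - 1] * (ys[N] - ys[0])) / (xs[N] - xs[0])
        f = ys[i - 1] - c * xs[0] - d[i - 1] * ys[0]
        maps.append((a, e, c, d[i - 1], f))
    return maps

def fif_graph(xs, ys, d, iterations=8):
    """Approximate the FIF graph by iterating the maps on the data polygon."""
    maps = fif_maps(xs, ys, d)
    pts = list(zip(xs, ys))
    for _ in range(iterations):
        pts = [(a * x + e, c * x + dd * y + f)
               for (a, e, c, dd, f) in maps for (x, y) in pts]
    return pts

xs, ys = [0.0, 0.4, 1.0], [0.0, 0.8, 0.3]   # hypothetical data points
graph = fif_graph(xs, ys, [0.3, 0.3])
```

    Each map sends the whole interval onto one subinterval while matching its endpoints, so the attractor interpolates the data; larger |d_i| makes the graph rougher (higher box dimension).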

  7. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....

  8. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58 % and 75 %, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.

  9. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

    Directory of Open Access Journals (Sweden)

    Qiang DU

    2018-04-01

    Full Text Available Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, which is based on Shannon sampling and utilizes Hermite interpolation with a digital differentiator, will lead to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with a Caratheodory representation. The samples of the differential signal are obtained by the Caratheodory representation from the samples of the original signal only. Therefore, only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
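
    The baseline the letter improves upon — Hermite interpolation with derivatives from a digital differentiator — can be sketched directly. Central differences stand in for the differentiator here; this is the simple scheme, not the Caratheodory method itself:

```python
import math

def hermite_frac_delay(x, delay):
    """Delay a discrete signal by a fractional number of samples
    (delay >= ~2 for a causal stencil) using cubic Hermite interpolation.
    Derivatives come from a central-difference digital differentiator;
    out-of-range samples are zero-padded."""
    n_int = int(math.floor(delay))
    mu = delay - n_int

    def s(k):                              # zero-padded sample access
        return x[k] if 0 <= k < len(x) else 0.0

    y = []
    for n in range(len(x)):
        i = n - n_int - 1                  # interpolate on segment [i, i+1]
        p0, p1 = s(i), s(i + 1)
        m0 = (s(i + 1) - s(i - 1)) / 2.0   # central-difference derivatives
        m1 = (s(i + 2) - s(i)) / 2.0
        t = 1.0 - mu                       # fractional position in segment
        h = ((2 * t - 3) * t * t + 1) * p0 \
            + (((t - 2) * t + 1) * t) * m0 \
            + ((3 - 2 * t) * t * t) * p1 \
            + ((t - 1) * t * t) * m1       # cubic Hermite basis
        y.append(h)
    return y

sig = [float(k) for k in range(10)]        # a linear ramp
delayed = hermite_frac_delay(sig, 2.5)
```

    On a linear ramp the scheme is exact away from the zero-padded edges, since the central differences recover the true slope; its error on near-Nyquist sinusoids is what motivates the Caratheodory approach.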

  10. SU-E-I-10: Investigation On Detectability of a Small Target for Different Slice Direction of a Volumetric Cone Beam CT Image

    International Nuclear Information System (INIS)

    Lee, C; Han, M; Baek, J

    2015-01-01

    Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON 1), Hanning-weighted ramp filter with Fourier interpolation (RECON 2), ramp filter with linear interpolation (RECON 3), and ramp filter with Fourier interpolation (RECON 4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: Detection-SNR of coronal images varies for the different backprojection methods, while axial images have a similar detection-SNR. Detection-SNR² ratios of coronal and axial images in RECON 1 and RECON 2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON 1 and RECON 2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON 1 and RECON 2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201

  11. SU-E-I-10: Investigation On Detectability of a Small Target for Different Slice Direction of a Volumetric Cone Beam CT Image

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C; Han, M; Baek, J [Yonsei University, Incheon (Korea, Republic of)

    2015-06-15

    Purpose: To investigate the detectability of a small target for different slice directions of a volumetric cone beam CT image and its impact on dose reduction. Methods: Analytic projection data of a sphere object (1 mm diameter, 0.2/cm attenuation coefficient) were generated and reconstructed by the FDK algorithm. In this work, we compared the detectability of the small target for four different backprojection methods: Hanning-weighted ramp filter with linear interpolation (RECON1), Hanning-weighted ramp filter with Fourier interpolation (RECON2), ramp filter with linear interpolation (RECON3), and ramp filter with Fourier interpolation (RECON4). For noise simulation, 200 photons per measurement were used, and the noise-only data were reconstructed using the FDK algorithm. For each reconstructed volume, axial and coronal slices were extracted and detection-SNR was calculated using a channelized Hotelling observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. Results: The detection-SNR of the coronal images varies with the backprojection method, while the axial images have similar detection-SNR. The detection-SNR² ratios of coronal to axial images in RECON1 and RECON2 are 1.33 and 1.15, implying that the coronal image has better detectability than the axial image. In other words, using coronal slices for small target detection can reduce the patient dose by about 33% and 15% compared to using axial slices in RECON1 and RECON2. Conclusion: In this work, we investigated the slice-direction-dependent detectability of a volumetric cone beam CT image. RECON1 and RECON2 produced the highest detection-SNR, with better detectability in coronal slices. These results indicate that it is more beneficial to use coronal slices to improve the detectability of a small target in a volumetric cone beam CT image. This research was supported by the MSIP (Ministry of Science, ICT and Future Planning), Korea, under the IT Consilience Creative Program (NIPA-2014-H0201

  12. Imaging by the SSFSE single slice method at different viscosities of bile

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Hiroya; Usui, Motoki; Fukunaga, Kenichi; Yamamoto, Naruto; Ikegami, Toshimi [Kawasaki Hospital, Kobe (Japan)

    2001-11-01

    The single shot fast spin echo single thick slice method (single slice method) is a technique that visualizes the water component alone using heavy T2 weighting. However, this method is considered to be markedly affected by changes in the viscosity of the material, because a very long TE is used and changes in the T2 value, which are related to viscosity, directly affect imaging. In this study, we evaluated the relationship between the effects of TE and the T2 value of bile in the single slice method, and also examined the relationship between the signal intensity of bile on T1- and T2-weighted images and imaging by MR cholangiography (MRC). It was difficult to image bile with high viscosities at a usual effective TE level of 700-1,500 ms. With regard to the relationship between the signal intensity of bile and MRC imaging, all T2 values of the bile samples showing relatively high signal intensities on the T1-weighted images suggested high viscosities, and MRC imaging of these bile samples was poor. In conclusion, MRC imaging of bile with high viscosities was poor with the single slice method. Imaging by the single slice method alone of bile showing a relatively high signal intensity on T1-weighted images should be avoided, and combination with other MRC sequences should be used. (author)
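The physics behind this finding can be made concrete: with mono-exponential T2 decay, S(TE)/S0 = exp(-TE/T2), the very long effective TE of the single slice method leaves almost no signal from short-T2 (high-viscosity) bile. A minimal sketch, with T2 values assumed for illustration rather than taken from the study:

```python
import math

def relative_signal(te_ms: float, t2_ms: float) -> float:
    """Mono-exponential T2 decay: S/S0 = exp(-TE/T2)."""
    return math.exp(-te_ms / t2_ms)

# Illustrative T2 values (assumptions, not measurements from the study):
# watery bile ~1000 ms, highly viscous bile ~200 ms.
for t2 in (1000.0, 200.0):
    s = relative_signal(700.0, t2)  # 700 ms = short end of the usual effective TE range
    print(f"T2 = {t2:.0f} ms -> S/S0 = {s:.3f}")
```

Even at the short end of the quoted TE range, the short-T2 sample retains only a few percent of its signal, which is consistent with the poor MRC imaging of viscous bile reported above.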

  13. Imaging by the SSFSE single slice method at different viscosities of bile

    International Nuclear Information System (INIS)

    Kubo, Hiroya; Usui, Motoki; Fukunaga, Kenichi; Yamamoto, Naruto; Ikegami, Toshimi

    2001-01-01

    The single shot fast spin echo single thick slice method (single slice method) is a technique that visualizes the water component alone using heavy T2 weighting. However, this method is considered to be markedly affected by changes in the viscosity of the material, because a very long TE is used and changes in the T2 value, which are related to viscosity, directly affect imaging. In this study, we evaluated the relationship between the effects of TE and the T2 value of bile in the single slice method, and also examined the relationship between the signal intensity of bile on T1- and T2-weighted images and imaging by MR cholangiography (MRC). It was difficult to image bile with high viscosities at a usual effective TE level of 700-1,500 ms. With regard to the relationship between the signal intensity of bile and MRC imaging, all T2 values of the bile samples showing relatively high signal intensities on the T1-weighted images suggested high viscosities, and MRC imaging of these bile samples was poor. In conclusion, MRC imaging of bile with high viscosities was poor with the single slice method. Imaging by the single slice method alone of bile showing a relatively high signal intensity on T1-weighted images should be avoided, and combination with other MRC sequences should be used. (author)

  14. Interactive Slice of the CMS detector

    CERN Multimedia

    Davis, Siona Ruth

    2016-01-01

    This slice shows a colorful cross-section of the CMS detector with all parts of the detector labelled. Viewers are invited to click on buttons associated with five types of particles to see what happens when each type interacts with the sections of the detector. The five types of particles users can select to send through the slice are muons, electrons, neutral hadrons, charged hadrons and photons. Supplementary information on each type of particle is given. Useful for inclusion in general talks on CMS, etc. Animated CMS "slice" for PowerPoint (Mac & PC): original version 2004; updated version July 2010. Six slides are required: the first is a set of buttons, and the others are for each particle type (muon, electron, charged/neutral hadron, photon). It is recommended to put slide 1 anywhere in your presentation and the rest at the end.

  15. Brain Slice Staining and Preparation for Three-Dimensional Super-Resolution Microscopy

    Science.gov (United States)

    German, Christopher L.; Gudheti, Manasa V.; Fleckenstein, Annette E.; Jorgensen, Erik M.

    2018-01-01

    Localization microscopy techniques – such as photoactivation localization microscopy (PALM), fluorescent PALM (FPALM), ground state depletion (GSD), and stochastic optical reconstruction microscopy (STORM) – provide the highest precision for single molecule localization currently available. However, localization microscopy has been largely limited to cell cultures due to the difficulties that arise in imaging thicker tissue sections. Sample fixation and antibody staining, background fluorescence, fluorophore photoinstability, light scattering in thick sections, and sample movement create significant challenges for imaging intact tissue. We have developed a sample preparation and image acquisition protocol to address these challenges in rat brain slices. The sample preparation combined multiple fixation steps, saponin permeabilization, and tissue clarification. Together, these preserve intracellular structures, promote antibody penetration, reduce background fluorescence and light scattering, and allow acquisition of images deep in a 30 μm thick slice. Image acquisition challenges were resolved by overlaying samples with a permeable agarose pad and custom-built stainless steel imaging adapter, and sealing the imaging chamber. This approach kept slices flat, immobile, bathed in imaging buffer, and prevented buffer oxidation during imaging. Using this protocol, we consistently obtained single molecule localizations of synaptic vesicle and active zone proteins in three-dimensions within individual synaptic terminals of the striatum in rat brain slices. These techniques may be easily adapted to the preparation and imaging of other tissues, substantially broadening the application of super-resolution imaging. PMID:28924666

  16. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

    This report evaluates progress in testing the feasibility and initiating the design of a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed

  17. Introduction to bit slices and microprogramming

    International Nuclear Information System (INIS)

    Van Dam, A.

    1981-01-01

    Bit-slice logic blocks are fourth-generation LSI components which are natural extensions of traditional multiplexers, registers, decoders, counters, ALUs, etc. Their functionality is controlled by microprogramming, typically to implement CPUs and peripheral controllers where both speed and easy programmability are required for flexibility, ease of implementation and debugging, etc. Processors built from bit-slice logic give the designer an alternative for approaching the programmability of traditional fixed-instruction-set microprocessors with a speed closer to that of hardwired random logic. (orig.)

  18. Slice through an LHC bending magnet

    CERN Multimedia

    Slice through an LHC superconducting dipole (bending) magnet. The slice includes a cut through the magnet wiring (niobium titanium), the beampipe and the steel magnet yokes. Particle beams in the Large Hadron Collider (LHC) have the same energy as a high-speed train, squeezed ready for collision into a space narrower than a human hair. Huge forces are needed to control them. Dipole magnets (2 poles) are used to bend the paths of the protons around the 27 km ring. Quadrupole magnets (4 poles) focus the proton beams and squeeze them so that more particles collide when the beams’ paths cross. There are 1232 15m long dipole magnets in the LHC.

  19. Computing Diffeomorphic Paths for Large Motion Interpolation.

    Science.gov (United States)

    Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C

    2013-06-01

    In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(Ω) is difficult, mainly because of the infinite dimensionality of Diff(Ω). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(Ω) to the quotient space Diff(M)/Diff(M)_μ obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified in the mathematics literature as the unit sphere in a Hilbert space, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms, by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms, and second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.

  20. Functions with disconnected spectrum sampling, interpolation, translates

    CERN Document Server

    Olevskii, Alexander M

    2016-01-01

    The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

  1. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive-format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher-quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation-based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state of the art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.

  2. Ripple artifact reduction using slice overlap in slice encoding for metal artifact correction.

    Science.gov (United States)

    den Harder, J Chiel; van Yperen, Gert H; Blume, Ulrike A; Bos, Clemens

    2015-01-01

    Multispectral imaging (MSI) significantly reduces metal artifacts. Yet, especially in techniques that use gradient selection, such as slice encoding for metal artifact correction (SEMAC), a residual ripple artifact may be prominent. Here, an analysis is presented of the ripple artifact and of slice overlap as an approach to reduce the artifact. The ripple artifact was analyzed theoretically to clarify its cause. Slice overlap, conceptually similar to spectral bin overlap in multi-acquisition with variable resonances image combination (MAVRIC), was achieved by reducing the selection gradient and, thus, increasing the slice profile width. Time domain simulations and phantom experiments were performed to validate the analyses and proposed solution. Discontinuities between slices are aggravated by signal displacement in the frequency encoding direction in areas with deviating B0. Specifically, it was demonstrated that ripple artifacts appear only where B0 varies both in-plane and through-plane. Simulations and phantom studies of metal implants confirmed the efficacy of slice overlap to reduce the artifact. The ripple artifact is an important limitation of gradient selection based MSI techniques, and can be understood using the presented simulations. At a scan-time penalty, slice overlap effectively addressed the artifact, thereby improving image quality near metal implants. © 2014 Wiley Periodicals, Inc.

  3. Subsurface temperature maps in French sedimentary basins: new data compilation and interpolation

    International Nuclear Information System (INIS)

    Bonte, D.; Guillou-Frottier, L.; Garibaldi, C.; Bourgine, B.; Lopez, S.; Bouchot, V.; Garibaldi, C.; Lucazeau, F.

    2010-01-01

    Assessment of the underground geothermal potential requires knowledge of deep temperatures (1-5 km). Here, we present new temperature maps obtained from oil boreholes in the French sedimentary basins. Because of their origin, the data need to be corrected, and their local character necessitates spatial interpolation. Previous maps were obtained in the 1970s using empirical corrections and manual interpolation. In this study, we update the number of measurements by using values collected during the last thirty years, correct the temperatures for transient perturbations, and carry out statistical analyses before modelling the 3D distribution of temperatures. This dataset provides 977 temperatures corrected for transient perturbations in 593 boreholes located in the French sedimentary basins. An average temperature gradient of 30.6 °C/km is obtained for a representative surface temperature of 10 °C. When surface temperature is not accounted for, deep measurements are best fitted with a temperature gradient of 25.7 °C/km. We perform a geostatistical analysis on a residual temperature dataset (using a drift of 25.7 °C/km) to constrain the 3D interpolation kriging procedure with horizontal and vertical models of variograms. The interpolated residual temperatures are added to the country-scale averaged drift in order to get the three-dimensional thermal structure of the French sedimentary basins. The 3D thermal block enables us to extract isothermal surfaces and 2D sections (iso-depth maps and iso-longitude cross-sections). A number of anomalies with a limited depth and spatial extension have been identified, from shallow in the Rhine graben and Aquitanian basin to deep in the Provence basin. Some of these anomalies (Paris basin, Alsace, south of the Provence basin) may be partly related to thick insulating sediments, while for some others (southwestern Aquitanian basin, part of the Provence basin) large-scale fluid circulation may explain superimposed
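The drift-plus-residual construction described in this record can be illustrated for the drift term alone; the kriged residual component is omitted, and the numbers are the averages reported in the abstract:

```python
def drift_temperature(depth_km: float,
                      surface_temp_c: float = 10.0,
                      gradient_c_per_km: float = 30.6) -> float:
    """Country-scale linear drift: T(z) = T_surface + gradient * z.
    Defaults are the averages quoted in the abstract (10 °C surface
    temperature, 30.6 °C/km gradient)."""
    return surface_temp_c + gradient_c_per_km * depth_km

# Expected drift temperature at 3 km depth: 10 + 30.6 * 3 ≈ 101.8 °C.
print(drift_temperature(3.0))
```

In the paper's scheme the mapped temperature would be this drift (with the 25.7 °C/km no-intercept variant) plus a kriged residual interpolated from the 977 corrected borehole measurements.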

  4. Research of Cubic Bezier Curve NC Interpolation Signal Generator

    Directory of Open Access Journals (Sweden)

    Shijun Ji

    2014-08-01

    Full Text Available Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can only achieve circular arc, linear, or parabolic interpolation. For NC machining of parts with complicated surfaces, however, one must establish a mathematical model, generate the curve and surface outline of the part, and then discretize the generated outline into a large number of line or arc segments for processing, which creates complex programs and a large amount of code and inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness, and machining efficiency. The stepless interpolation of cubic Bezier curves controlled by an analog signal is studied in this paper. The tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed to the control motor, which completes the precise feeding along the Bezier curve. This method extends the trajectory control ability of CNC from simple lines and circular arcs to complex engineered curves, and it provides a new way to machine curved-surface parts economically with high quality and high efficiency.
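The cubic Bezier trajectory such an interpolator follows is defined by four control points. A minimal sketch of its evaluation via de Casteljau's algorithm (function and point names are illustrative, not from the paper):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]
    using de Casteljau's algorithm (numerically stable repeated lerp)."""
    lerp = lambda a, b, u: tuple((1 - u) * x + u * y for x, y in zip(a, b))
    a, b, c = lerp(p0, p1, t), lerp(p1, p2, t), lerp(p2, p3, t)
    d, e = lerp(a, b, t), lerp(b, c, t)
    return lerp(d, e, t)

# Sampling the curve densely gives the tool trajectory; moving the
# control points reshapes the path without changing the interpolator.
pts = [cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), i / 100) for i in range(101)]
print(pts[0], pts[50], pts[100])
```

The curve passes through the endpoint control points and is pulled toward the interior ones, which is what lets an operator reshape the machined path by "adjusting control points" as described above.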

  5. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a rapidly developing magnetic resonance imaging technology of recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolation tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. Firstly, we decomposed the diffusion tensors, with the direction of the tensors represented by a quaternion. Then we revised the size and direction of the tensor respectively according to different situations. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on simulated data and real data. The results showed that the improved method could not only keep the monotonicity of the fractional anisotropy (FA) and the determinant of the tensors, but also preserve tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  6. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
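The binary shape-based step that the grey-level method generalizes can be sketched in a few lines: compute signed distance transforms of two binary slices (positive inside, negative outside, following the convention above), interpolate the distances, and threshold at zero. This sketch covers only the binary case, not the (n+1)-D lifting:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed Euclidean distance: positive inside the object, negative outside."""
    inside = distance_transform_edt(mask)     # distance of object pixels to background
    outside = distance_transform_edt(~mask)   # distance of background pixels to object
    return inside - outside

def interpolate_slices(mask_a: np.ndarray, mask_b: np.ndarray, alpha: float = 0.5):
    """Shape-based interpolation between two binary slices:
    linearly interpolate signed distances, then threshold at zero."""
    d = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return d > 0

# Two concentric circles of different radii; the interpolated slice
# is an intermediate circle, influenced by the objects' shapes.
yy, xx = np.mgrid[:64, :64]
small = (xx - 32) ** 2 + (yy - 32) ** 2 < 8 ** 2
large = (xx - 32) ** 2 + (yy - 32) ** 2 < 16 ** 2
mid = interpolate_slices(small, large)
```

Plain grey-level linear interpolation between these two slices would blend intensities rather than shapes; the distance-transform route is what makes the result follow the object boundary, which is the property the grey-level extension above carries over via lifting.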

  7. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

    Full Text Available Recently, many mathematicians have studied various kinds of the q-analogue of Genocchi numbers and polynomials. In the work "New approach to q-Euler, Genocchi numbers and their interpolation functions" (Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009), Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  8. Spectral interpolation - Zero fill or convolution. [image processing

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
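Zero fill itself is easy to demonstrate: padding the input with zeros before the FFT yields spectral samples at proportionally finer spacing, at the cost of a longer transform, which is the motivation for the cheaper convolution-based alternative proposed in the paper (not reproduced here). A minimal sketch:

```python
import numpy as np

def zero_fill_spectrum(signal: np.ndarray, factor: int) -> np.ndarray:
    """Spectral interpolation by zero fill: pad the input with zeros to
    factor * N samples, giving spectral samples factor times closer."""
    n = len(signal)
    padded = np.concatenate([signal, np.zeros((factor - 1) * n)])
    return np.fft.rfft(padded)

# A 16-sample cosine: the padded spectrum has 8x finer frequency spacing,
# computed with an FFT that is 8x longer.
x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
coarse = np.fft.rfft(x)
fine = zero_fill_spectrum(x, 8)
print(len(coarse), len(fine))  # 9 65
```

Every 8th sample of the fine spectrum coincides with the original coarse spectrum; zero fill only interpolates between those samples, it adds no new information.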

  9. Steady State Stokes Flow Interpolation for Fluid Control

    DEFF Research Database (Denmark)

    Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

    2012-01-01

    Fluid control methods often require surface velocities interpolated throughout the interior of a shape, to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics (velocity extrapolation in the normal direction and potential flow) suffer from a common problem: they fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...

  10. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

    Full Text Available A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained, while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is investigated as O(h³). Numerical experiments demonstrate that our method can construct nice shape-preserving interpolation curves efficiently.

  11. Thin-Slice Perception Develops Slowly

    Science.gov (United States)

    Balas, Benjamin; Kanwisher, Nancy; Saxe, Rebecca

    2012-01-01

    Body language and facial gesture provide sufficient visual information to support high-level social inferences from "thin slices" of behavior. Given short movies of nonverbal behavior, adults make reliable judgments in a large number of tasks. Here we find that the high precision of adults' nonverbal social perception depends on the slow…

  12. Adaptive slices for acquisition of anisotropic BRDF

    Czech Academy of Sciences Publication Activity Database

    Vávra, Radomír; Filip, Jiří

    (2018) ISSN 2096-0433 R&D Projects: GA ČR GA17-18407S Institutional support: RVO:67985556 Keywords : anisotropic BRDF * slice * sampling Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2018/RO/vavra-0486116.pdf

  13. Detecting Psychopathy from Thin Slices of Behavior

    Science.gov (United States)

    Fowler, Katherine A.; Lilienfeld, Scott O.; Patrick, Christopher J.

    2009-01-01

    This study is the first to demonstrate that features of psychopathy can be reliably and validly detected by lay raters from "thin slices" (i.e., small samples) of behavior. Brief excerpts (5 s, 10 s, and 20 s) from interviews with 96 maximum-security inmates were presented in video or audio form or in both modalities combined. Forty raters used…

  14. Chiral properties of baryon interpolating fields

    International Nuclear Information System (INIS)

    Nagata, Keitaro; Hosaka, Atsushi; Dmitrasinovic, V.

    2008-01-01

    We study the chiral transformation properties of all possible local (non-derivative) interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We derive and use the relations/identities among the baryon operators with identical quantum numbers that follow from the combined color, Dirac and isospin Fierz transformations. These relations reduce the number of independent baryon operators with any given spin and isospin. The Fierz identities also effectively restrict the allowed baryon chiral multiplets. It turns out that the non-derivative baryons' chiral multiplets have the same dimensionality as their Lorentz representations. For the two independent nucleon operators the only permissible chiral multiplet is the fundamental one, (1/2,0)+(0,1/2). For the Δ, admissible Lorentz representations are (1,1/2)+(1/2,1) and (3/2,0)+(0,3/2). In the case of the (1,1/2)+(1/2,1) chiral multiplet, the I(J)=3/2(3/2) Δ field has one I(J)=1/2(3/2) chiral partner; otherwise it has none. We also consider the Abelian (U_A(1)) chiral transformation properties of the fields and show that each baryon comes in two varieties: (1) with Abelian axial charge +3; and (2) with Abelian axial charge -1. In the case of the nucleon these are the two Ioffe fields; in the case of the Δ, the (1,1/2)+(1/2,1) multiplet has Abelian axial charge -1 and the (3/2,0)+(0,3/2) multiplet has Abelian axial charge +3. (orig.)

  15. MODIS Snow Cover Recovery Using Variational Interpolation

    Science.gov (United States)

    Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Cloud obscuration is one of the major problems that limit the usage of satellite images in general, and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to resolve the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al. (2012), obtains cloud-free dynamic SCA images from MODIS. This method is automatic and robust. However, computational inefficiency is a main drawback that hinders applying the method at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking up when applied to much broader scales. An experiment was done to demonstrate the crash-proof ability of the new algorithm in comparison with the original VI method, an ability that is obtained by maintaining the distribution of the weights set after solving the linear system. After that, the new VI algorithm was applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images have high accuracy in capturing the dynamical changes of snow, in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which provides an overview of snow trends over CONUS for nearly two decades. Acknowledgments: We would like to acknowledge NASA, the NOAA Office of Hydrologic Development (OHD) National Weather Service (NWS), the Cooperative Institute for Climate and Satellites (CICS), the Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.
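The MINRES step can be illustrated with SciPy's solver on a small symmetric stand-in system; the actual VI system matrix from Xia et al. (2012) is not reproduced here, so the matrix below is purely illustrative:

```python
import numpy as np
from scipy.sparse.linalg import minres

# Stand-in symmetric (indefinite) system. In the VI algorithm the weights
# of the interpolating surface solve a symmetric linear system A w = b;
# MINRES handles such systems without requiring positive definiteness.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, -2.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 0.0, 2.0])

w, info = minres(A, b)  # info == 0 indicates convergence
print(info, np.allclose(A @ w, b, atol=1e-3))
```

MINRES only requires matrix-vector products, so for the large sparse systems arising from continent-scale snow-cover interpolation it avoids forming or factoring the full matrix.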

  16. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface with peaks of different heights as the fractal dimension increases, whereas the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method at the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6 and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness takes both positive and negative values, fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is not periodic, with prominent randomness, which makes it suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has
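
    The first of the two compared methods is easy to reproduce. The sketch below generates a 1-D fractal profile by recursive midpoint displacement, halving the displacement scale each level according to a roughness exponent H (fractal dimension D = 2 − H). The parameter choices are illustrative, not taken from the paper:

    ```python
    import numpy as np

    def midpoint_displacement(n_levels, H, rng):
        """1-D fractal profile via midpoint displacement; D = 2 - H."""
        pts = np.zeros(2)       # endpoints of the initial segment
        scale = 1.0
        for _ in range(n_levels):
            # Displace every midpoint by Gaussian noise at the current scale
            mids = 0.5 * (pts[:-1] + pts[1:]) + rng.normal(0.0, scale, pts.size - 1)
            out = np.empty(pts.size + mids.size)
            out[0::2] = pts     # keep old points
            out[1::2] = mids    # interleave new midpoints
            pts = out
            scale *= 0.5 ** H   # reduce roughness at finer scales
        return pts

    profile = midpoint_displacement(8, H=0.8, rng=np.random.default_rng(42))
    print(profile.size)  # 257 points after 8 refinement levels
    ```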

  17. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    ... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

  18. Rhie-Chow interpolation in strong centrifugal fields

    Science.gov (United States)

    Bogovalov, S. V.; Tronin, I. V.

    2015-10-01

    Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10⁶ g) occurring in gas centrifuges.
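
    For context, the standard incompressible Rhie-Chow face interpolation that such work generalizes couples the face velocity to a compact pressure gradient to suppress checkerboard pressure modes on colocated grids. The sketch below is a schematic 1-D version with made-up coefficients (d = V/a_P from the discretized momentum equation); it is not the centrifuge formulation derived in the paper:

    ```python
    import numpy as np

    def rhie_chow_face(u, p, d, dx):
        """Face velocities on a 1-D colocated grid via Rhie-Chow interpolation."""
        u_bar = 0.5 * (u[:-1] + u[1:])          # linear average of cell velocities
        d_bar = 0.5 * (d[:-1] + d[1:])
        dpdx_face = (p[1:] - p[:-1]) / dx       # compact pressure gradient at the face
        dpdx_cell = np.gradient(p, dx)          # cell-centred pressure gradient
        dpdx_bar = 0.5 * (dpdx_cell[:-1] + dpdx_cell[1:])
        # Correction term damps odd-even pressure decoupling
        return u_bar - d_bar * (dpdx_face - dpdx_bar)

    u = np.array([1.0, 1.2, 1.1, 0.9])          # cell-centred velocities
    p = np.array([4.0, 3.0, 3.5, 4.2])          # cell-centred pressures
    faces = rhie_chow_face(u, p, d=np.full(4, 0.1), dx=0.5)
    ```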

  19. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

    Directory of Open Access Journals (Sweden)

    Xiaowei Niu

    2014-05-01

    Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. The polynomial-based interpolation filters can be implemented efficiently by using a modified Farrow structure with an arbitrary frequency response; the filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing the weighted mean squared error function, which is converted into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maximum (minimax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verify that the proposed design method not only reduces hardware cost effectively but also guarantees excellent performance.
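
    A Farrow structure evaluates a small bank of FIR branch filters once per sample and then combines the branch outputs as a polynomial in the fractional delay μ via Horner's rule. The sketch below uses fixed cubic-Lagrange branch coefficients just to show the structure; an optimized design like the paper's would replace this coefficient matrix:

    ```python
    import numpy as np

    # Cubic-Lagrange Farrow coefficient matrix: row k holds the FIR branch
    # filter whose output multiplies mu**k (taps x[n-1], x[n], x[n+1], x[n+2]).
    C = np.array([
        [ 0.0,   1.0,  0.0,   0.0 ],   # mu^0 branch
        [-1/3,  -1/2,  1.0,  -1/6 ],   # mu^1 branch
        [ 1/2,  -1.0,  1/2,   0.0 ],   # mu^2 branch
        [-1/6,   1/2, -1/2,   1/6 ],   # mu^3 branch
    ])

    def farrow_interp(x, n, mu):
        """Interpolate between x[n] and x[n+1] at fractional position mu in [0, 1)."""
        taps = x[n - 1:n + 3]                   # four surrounding samples
        b = C @ taps                            # branch filter outputs
        return ((b[3] * mu + b[2]) * mu + b[1]) * mu + b[0]  # Horner in mu

    # A cubic interpolator reproduces a cubic polynomial exactly:
    t = np.arange(6, dtype=float)
    x = 0.5 * t**3 - t + 2.0
    val = farrow_interp(x, 2, 0.25)             # value at t = 2.25
    exact = 0.5 * 2.25**3 - 2.25 + 2.0
    ```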

  20. A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation

    Directory of Open Access Journals (Sweden)

    Mingzhu Li

    2014-01-01

    The main aim of this work is to consider a meshfree algorithm for solving Burgers' equation with quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in the study of approximation theory and its applications, since it yields solutions directly, without the need to solve any linear system of equations, and overcomes the ill-conditioning problem resulting from using the B-spline as a global interpolant. The numerical scheme is presented, using the derivative of the quasi-interpolant to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its time derivative. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement, and the numerical experiments show that it is feasible and valid.
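
    The time-marching idea can be sketched with an off-the-shelf spline standing in for the quartic B-spline quasi-interpolant (note this stand-in loses the quasi-interpolant's key advantage of avoiding a linear solve): the spline fitted to the current solution supplies the spatial derivatives, and a forward difference advances in time. All parameters here are illustrative:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    nu = 0.1                        # viscosity in Burgers' equation
    x = np.linspace(0.0, 1.0, 101)
    u = np.sin(np.pi * x)           # initial condition with u(0) = u(1) = 0
    dt = 1e-4                       # small step; explicit scheme is conditionally stable

    for _ in range(100):
        s = CubicSpline(x, u)       # spline stands in for the quasi-interpolant
        ux = s(x, 1)                # first spatial derivative from the spline
        uxx = s(x, 2)               # second spatial derivative
        u = u + dt * (nu * uxx - u * ux)   # forward Euler for u_t = nu*u_xx - u*u_x
        u[0] = u[-1] = 0.0          # enforce Dirichlet boundaries
    ```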

  1. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  2. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians are applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training set pruning significantly decreases the computation.
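
    SciPy exposes the same thin-plate spline machinery. The sketch below maps a hypothetical 2-D image feature (not the paper's thumbnail representation) to illuminant chromaticity and checks the defining property used here: the spline reproduces its training data exactly while interpolating smoothly between nonuniform samples:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical setup: each training image reduced to a 2-D feature vector,
    # paired with a known 2-D illuminant chromaticity (synthetic ground truth).
    rng = np.random.default_rng(1)
    features = rng.uniform(0.2, 0.6, size=(40, 2))        # image-derived features
    illum = 0.5 * features + 0.1 * np.sin(6 * features)   # synthetic illuminants

    # Thin-plate spline interpolant over the nonuniform training samples
    tps = RBFInterpolator(features, illum, kernel='thin_plate_spline')
    est = tps(features[:3])    # passes through the training data exactly
    ```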

  3. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high-quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Often interpolation can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
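
    Sub-pixel block matching needs exactly this kind of sampling; bilinear interpolation is the usual baseline (the paper offloads it to graphics hardware, but the arithmetic is the same). A minimal CPU version:

    ```python
    import numpy as np

    def bilinear(img, y, x):
        """Sample an image at sub-pixel position (y, x) by bilinear interpolation."""
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        dy, dx = y - y0, x - x0
        # Weighted blend of the four surrounding pixels
        return ((1 - dy) * (1 - dx) * img[y0, x0] +
                (1 - dy) * dx       * img[y0, x0 + 1] +
                dy       * (1 - dx) * img[y0 + 1, x0] +
                dy       * dx       * img[y0 + 1, x0 + 1])

    img = np.arange(16, dtype=float).reshape(4, 4)
    v = bilinear(img, 1.5, 2.5)   # midpoint of pixels (1,2), (1,3), (2,2), (2,3)
    print(v)                      # 8.5, the average of 6, 7, 10 and 11
    ```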

  4. Interpolation and sampling in spaces of analytic functions

    CERN Document Server

    Seip, Kristian

    2004-01-01

    The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

  5. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: the first is a statistical model adopted to mine underlying information, and the second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation but also reduce noise in the original image. Our experiments show that the proposed algorithm achieves encouraging performance in terms of both image visualization and quantitative measures.
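
    The statistical half of such an algorithm can be sketched as plain Gaussian-process regression on pixel coordinates; the energy-driven component is omitted here, and the kernel length-scale is an arbitrary choice:

    ```python
    import numpy as np

    def rbf(a, b, ell=1.5):
        """Squared-exponential kernel between two sets of 2-D coordinates."""
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * ell ** 2))

    # Known low-resolution pixels on a coarse grid (synthetic smooth image)
    ys, xs = np.meshgrid(np.arange(0, 8, 2), np.arange(0, 8, 2), indexing='ij')
    train = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    vals = np.sin(0.5 * train[:, 0]) + np.cos(0.5 * train[:, 1])

    # GP posterior mean: K^{-1} y precomputed, then a kernel dot per query
    K = rbf(train, train) + 1e-8 * np.eye(len(train))   # jitter for conditioning
    alpha = np.linalg.solve(K, vals)

    query = np.array([[3.0, 3.0]])          # a missing high-resolution pixel
    pred = rbf(query, train) @ alpha
    ```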

  6. Spatial interpolation of point velocities in stream cross-section

    Directory of Open Access Journals (Sweden)

    Hasníková Eliška

    2015-03-01

    The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. The output of measurement with this instrument is a small set of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of such interpolation models.
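
    In SciPy terms, the task amounts to scattered-data interpolation across the cross-section. A minimal sketch with invented current-meter readings (y across the stream, z depth, both in metres; velocities in m/s):

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical point measurements from a propeller current meter
    pts = np.array([[0.5, 0.2], [1.5, 0.2], [2.5, 0.2],
                    [0.5, 0.6], [1.5, 0.6], [2.5, 0.6]])
    v = np.array([0.30, 0.45, 0.28, 0.20, 0.33, 0.18])   # m/s

    # Dense grid spanning the measured region of the cross-section
    gy, gz = np.mgrid[0.5:2.5:21j, 0.2:0.6:9j]
    dense = griddata(pts, v, (gy, gz), method='linear')  # dense velocity profile
    ```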

  7. The Convergence Acceleration of Two-Dimensional Fourier Interpolation

    Directory of Open Access Journals (Sweden)

    Anry Nersessian

    2008-07-01

    The convergence acceleration of two-dimensional trigonometric interpolation of smooth functions on a uniform mesh is considered. Together with theoretical estimates, some numerical results are presented and discussed that reveal the potential of this method for application in image processing. Experiments show that the suggested algorithm accelerates conventional Fourier interpolation even on sparse meshes, which can lead to efficient image compression/decompression algorithms and to applications in image zooming procedures.
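
    The conventional Fourier interpolation that serves as the baseline here is zero-padding in the frequency domain; for a band-limited periodic input it is already exact, and acceleration schemes target the slow convergence for general smooth, non-periodic functions. A 1-D sketch of the baseline:

    ```python
    import numpy as np

    def fourier_upsample(x, factor):
        """Trigonometric interpolation by spectral zero-padding.

        Assumes even length and no energy in the Nyquist bin.
        """
        n = x.size
        X = np.fft.fft(x)
        Y = np.zeros(n * factor, dtype=complex)
        h = n // 2
        Y[:h] = X[:h]          # keep low positive frequencies
        Y[-h:] = X[-h:]        # keep low negative frequencies
        return np.fft.ifft(Y).real * factor   # rescale for the longer transform

    t = np.arange(16) / 16
    x = np.cos(2 * np.pi * 3 * t)      # band-limited: interpolation is exact
    y = fourier_upsample(x, 4)
    t_fine = np.arange(64) / 64        # y matches cos(2*pi*3*t_fine)
    ```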

  8. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods and, as a result of our analysis, try to select one interpolation method as the preferred solution. To compare the performance of the interpolation methods, test images were scaled down and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and the results of quantification performed on the modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is best for resizing whole slide images so that they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
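
    The scale-down/scale-up evaluation protocol is easy to reproduce with scipy.ndimage. The survey compares nine methods on pathology images; this sketch contrasts just nearest-neighbour and cubic interpolation on a synthetic smooth image:

    ```python
    import numpy as np
    from scipy import ndimage

    # Smooth synthetic test image (stands in for a whole slide image tile)
    rng = np.random.default_rng(0)
    img = ndimage.gaussian_filter(rng.random((64, 64)), sigma=3)

    def roundtrip_rmse(img, order):
        """Scale down by 2x and back up with the same spline order, return RMSE."""
        small = ndimage.zoom(img, 0.5, order=order)
        back = ndimage.zoom(small, 2.0, order=order)
        back = back[:img.shape[0], :img.shape[1]]   # guard against off-by-one sizes
        return np.sqrt(np.mean((back - img) ** 2))

    err_nearest = roundtrip_rmse(img, order=0)   # nearest neighbour
    err_cubic = roundtrip_rmse(img, order=3)     # cubic: lower round-trip error
    ```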

  9. Comparing interpolation schemes in dynamic receive ultrasound beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

    2005-01-01

    In medical ultrasound, interpolation schemes are often applied in receive focusing for the reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as the simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...
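
    The kind of comparison reported can be reproduced in miniature: delay a sampled sinusoid by a fraction of a sample with linear vs. cubic-Lagrange interpolation and compare the error powers. The 7 MHz centre frequency echoes the paper's transducer, but the sampling rate and delay are illustrative:

    ```python
    import numpy as np

    fs, f0 = 40e6, 7e6                        # sampling rate and centre frequency
    n = np.arange(64)
    x = np.sin(2 * np.pi * f0 * n / fs)
    mu = 0.37                                  # fractional delay in samples

    # Ideal fractionally delayed signal (interior samples only)
    exact = np.sin(2 * np.pi * f0 * (n[2:-2] + mu) / fs)

    # Linear interpolation between neighbouring samples
    linear = (1 - mu) * x[2:-2] + mu * x[3:-1]

    # 4-point cubic Lagrange interpolation at the same positions
    c = [-mu * (mu - 1) * (mu - 2) / 6, (mu + 1) * (mu - 1) * (mu - 2) / 2,
         -(mu + 1) * mu * (mu - 2) / 2, (mu + 1) * mu * (mu - 1) / 6]
    cubic = c[0] * x[1:-3] + c[1] * x[2:-2] + c[2] * x[3:-1] + c[3] * x[4:]

    err_lin = np.mean((linear - exact) ** 2)   # interpolation error power
    err_cub = np.mean((cubic - exact) ** 2)    # lower for the higher-order scheme
    ```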

  10. Surface interpolation with radial basis functions for medical imaging

    International Nuclear Information System (INIS)

    Carr, J.C.; Beatson, R.K.; Fright, W.R.

    1997-01-01

    Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill.
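
    The defect-repair idea maps directly onto modern SciPy: fit an RBF to the surface points surrounding a hole and evaluate it inside. The sketch below uses a synthetic smooth dome in place of CT-derived depth-maps, so the numbers are purely illustrative:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Synthetic smooth "skull" depth-map on a regular grid
    yy, xx = np.mgrid[0:20, 0:20]
    depth = 50.0 - 0.1 * ((yy - 10.0) ** 2 + (xx - 10.0) ** 2)

    # Simulated defect: a 5x5 hole whose depths are treated as unknown
    hole = (np.abs(yy - 10) <= 2) & (np.abs(xx - 10) <= 2)
    pts = np.stack([yy[~hole], xx[~hole]], axis=1).astype(float)

    # Fit the RBF to the surrounding surface and interpolate across the defect
    rbf = RBFInterpolator(pts, depth[~hole], kernel='thin_plate_spline')
    filled = rbf(np.stack([yy[hole], xx[hole]], axis=1).astype(float))
    ```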

  11. Development of a bread slicing machine from locally sourced ...

    African Journals Online (AJOL)

    This paper presents the development of a bread slicing machine, a mechanical device used for slicing bread in place of the crude, cumbersome and unhygienic method of manual slicing. In an attempt to facilitate the final processing of bread, which is a common daily food requirement of most Nigerians ...

  12. Slice through an LHC focusing magnet

    CERN Multimedia

    Slice through an LHC superconducting quadrupole (focusing) magnet. The slice includes a cut through the magnet wiring (niobium titanium), the beampipe and the steel magnet yokes. Particle beams in the Large Hadron Collider (LHC) have the same energy as a high-speed train, squeezed ready for collision into a space narrower than a human hair. Huge forces are needed to control them. Dipole magnets (2 poles) are used to bend the paths of the protons around the 27 km ring. Quadrupole magnets (4 poles) focus the proton beams and squeeze them so that more particles collide when the beams’ paths cross. Bringing beams into collision requires a precision comparable to making two knitting needles collide, launched from either side of the Atlantic Ocean.

  13. Velocity slice imaging for dissociative electron attachment

    Science.gov (United States)

    Nandi, Dhananjay; Prabhudesai, Vaibhav S.; Krishnakumar, E.; Chatterjee, A.

    2005-05-01

    A velocity slice imaging method is developed for measuring the angular distribution of fragment negative ions arising from dissociative electron attachment (DEA) to molecules. A low-energy pulsed electron gun, pulsed-field ion extraction, and a two-dimensional position-sensitive detector consisting of microchannel plates and a wedge-and-strip anode are used for this purpose. Detection and storage of each ion separately for its position and flight time allow the data to be analysed offline for any given time slice, without resorting to pulsing the detector bias. The performance of the system is evaluated by measuring the angular distribution of O- from O2 and comparing it with existing data obtained using the conventional technique. The capability of this technique in obtaining forward and backward angular distribution data is shown to have helped in resolving one of the existing problems in electron scattering on O2.

  14. 5-D interpolation with wave-front attributes

    Science.gov (United States)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation method uses wave-front attributes, that is, quantities with a specific physical meaning such as the angle of emergence and wave-front curvatures. These attributes include structural information on subsurface features such as the dip and strike of a reflector. The wave-front attributes work in a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. In past work on 3-D partial stacks, two potential problems remained unsolved. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. The conventional 3-D partial CRS method is improved in this work, and we call it wave-front-attribute-based 5-D interpolation (5-D WABI), as the two problems mentioned above are addressed. Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given. The comparison reveals that

  15. An integral conservative gridding--algorithm using Hermitian curve interpolation.

    Science.gov (United States)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
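
    The core trick, interpolating the *cumulative* integral with a shape-preserving Hermite curve and differencing it on the new grid, can be sketched with SciPy's monotone Hermite (PCHIP) interpolant standing in for the paper's parametrized curve; PCHIP corresponds to the no-overshoot end of the paper's tuning parameter:

    ```python
    import numpy as np
    from scipy.interpolate import PchipInterpolator

    edges = np.array([0.0, 1.0, 2.5, 3.0, 5.0])       # irregular source bin edges
    counts = np.array([4.0, 9.0, 2.0, 6.0])           # histogrammed quantity per bin

    # Cumulative integral at the bin edges is monotone by construction
    cum = np.concatenate([[0.0], np.cumsum(counts)])
    F = PchipInterpolator(edges, cum)                 # monotone Hermite curve

    new_edges = np.linspace(0.0, 5.0, 11)             # user-requested finer grid
    new_counts = np.diff(F(new_edges))                # difference on the new edges

    # The total is conserved exactly, and monotonicity of F guarantees
    # that no re-binned value goes negative.
    ```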

  16. An integral conservative gridding-algorithm using Hermitian curve interpolation

    International Nuclear Information System (INIS)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-01-01

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  17. Microfilament Contraction Promotes Rounding of Tunic Slices: An Integumentary Defense System in the Colonial Ascidian Aplidium yamazii.

    Science.gov (United States)

    Hirose, E; Ishii, T

    1995-08-01

    In Aplidium yamazii, when a slice of a live colony (approximately 0.5 mm thick) was incubated in seawater for 12 h, the slice became a round tunic fragment. This tunic rounding was inhibited by freezing of the slices, incubation in Ca2+/Mg2+-free seawater, or addition of cytochalasin B. Staining of microfilaments in the slices with phalloidin-FITC showed the existence of a cellular network in the tunic. Contraction of this cellular network probably promotes rounding of the tunic slice. In electron microscopic observations, a new tunic cuticle regenerated at the surface of the round tunic fragments; the tunic cuticle did not regenerate in newly sliced specimens nor in specimens in which rounding was experimentally inhibited. Based on these results, an integumentary defense system is proposed in this species as follows. (1) When the colony is wounded externally, contraction of the cellular network promotes tunic contraction around the wound. (2) The wound is almost closed by tunic contraction. (3) Tunic contraction increases the density of the filamentous components of the tunic at the wound, and it may accelerate the regeneration of the tunic cuticle there.

  18. Effects of chemical composite, puffing temperature and intermediate moisture content on physical properties of potato and apple slices

    Science.gov (United States)

    Tabtaing, S.; Paengkanya, S.; Tanthong, P.

    2017-09-01

    Puffing is a process that can improve the texture and volume of crisp fruit and vegetable products. However, the effect of the chemical composition of foods on puffing characteristics has received little study. Therefore, potato and apple slices were comparatively studied for their physical properties. Potato and apple were sliced to 2.5 mm thickness and 2.5 cm diameter. The potato slices were treated with hot water for 2 min, while the apple slices were not treated. They were then dried in three steps. First, they were dried with hot air at a temperature of 90°C until their moisture content reached 30, 40 or 50% dry basis. They were then puffed with hot air at temperatures of 130, 150 and 170°C for 2 min. Finally, they were dried again with hot air at 90°C until their final moisture content reached 4% dry basis. The experimental results showed that the chemical composition of the food affected the physical properties of the puffed product. Puffed potato had a higher volume ratio than puffed apple because potato slices contain starch. The higher starch content also gave the potato a harder texture than the apple. Puffing temperature and intermediate moisture content strongly affected the color, volume ratio and textural properties of the puffed potato slices. In addition, a high drying rate of the puffed product was observed at high puffing temperature and higher moisture content.

  19. Evaluation of TSE- and T1-3D-GRE-sequences for focal cartilage lesions in vitro in comparison to ultrahigh resolution multi-slice CT

    International Nuclear Information System (INIS)

    Stork, A.; Schulze, D.; Koops, A.; Kemper, J.; Adam, G.

    2002-01-01

    Purpose: Evaluation of TSE- and T1-3D-GRE-sequences for focal cartilage lesions in vitro in comparison to ultrahigh resolution multi-slice CT. Materials and methods: Forty artificial cartilage lesions in ten bovine patellae were immersed in a solution of iodinated contrast medium and assessed with ultrahigh resolution multi-slice CT. Fat-suppressed TSE images with intermediate- and T2-weighting at a slice thickness of 2, 3 and 4 mm as well as fat-suppressed T1-weighted 3D-FLASH images with an effective slice thickness of 1, 2 and 3 mm were acquired at 1.5 T. After adding Gd-DTPA to the saline solution containing the patellae, the T1-weighted 3D-FLASH imaging was repeated. Results: All cartilage lesions were visualised and graded with ultrahigh resolution multi-slice CT. The TSE images had a higher sensitivity and a higher inter- and intraobserver kappa compared to the FLASH sequences (TSE: 70-95%; 0.82-0.83; 0.85-0.9; FLASH: 57.5-85%; 0.53-0.72; 0.73-0.82, respectively). An increase in slice thickness decreased the sensitivity, although deep lesions were still reliably depicted on TSE images at a slice thickness of 3 and 4 mm. Adding Gd-DTPA to the saline solution increased the sensitivity by 10%, with no detectable advantage over the T2-weighted TSE images. Conclusion: TSE sequences and application of Gd-DTPA seemed to be superior to T1-weighted 3D-FLASH sequences without Gd-DTPA in the detection of focal cartilage lesions. The ultrahigh resolution multi-slice CT can serve as an in vitro reference standard for focal cartilage lesions. (orig.)

  20. A Novel Slicing Method for Thin Supercapacitors.

    Science.gov (United States)

    Sun, Hao; Fu, Xuemei; Xie, Songlin; Jiang, Yishu; Guan, Guozhen; Wang, Bingjie; Li, Houpu; Peng, Huisheng

    2016-08-01

    Thin and flexible supercapacitors with low cost and little device-to-device variation are fabricated by a new and efficient slicing method. Tunable output voltage and energy can be realized with a high specific capacitance of 248.8 F g⁻¹ or 150.8 F cm⁻³, which is well maintained before and after bending. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Slice of a LEP bending magnet

    CERN Multimedia

    This is a slice of a LEP dipole bending magnet, made as a concrete and iron sandwich. The bending field needed in LEP is small (about 1000 Gauss), equivalent to two of the magnets people stick on fridge doors. Because it is very difficult to keep a low field steady, a high field was used in iron plates embedded in concrete. A CERN breakthrough in magnet design, LEP dipoles can be tuned easily and are cheaper than conventional magnets.

  2. Evaluation of the retrospective ECG-gated helical scan using half-second multi-slice CT. Motion phantom study for volumetry

    International Nuclear Information System (INIS)

    Yamamoto, Shuji; Matsumoto, Takashi; Nakanishi, Shohzoh; Hamada, Seiki; Takahei, Kazunari; Naito, Hiroaki; Ogata, Yuji

    2002-01-01

    ECG-synchronized techniques on multi-slice CT provide thinner (less than 2 mm slice thickness) and faster (0.5 s/rotation) scans than single-detector CT and can acquire coverage of the entire heart volume within one breath-hold. However, the temporal resolution of multi-slice CT is insufficient over the practical range of heart rates. The purpose of this study was to evaluate the accuracy of volumetry for cardiac function measurement in retrospective ECG-gated helical scans. We discuss the influence of degraded image quality and the limitation imposed by heart rate on cardiac function measurement (volumetry) using a motion phantom. (author)

  3. Trafficking of astrocytic vesicles in hippocampal slices

    International Nuclear Information System (INIS)

    Potokar, Maja; Kreft, Marko; Lee, So-Young; Takano, Hajime; Haydon, Philip G.; Zorec, Robert

    2009-01-01

    The increasingly appreciated role of astrocytes in neurophysiology dictates a thorough understanding of the mechanisms underlying the communication between astrocytes and neurons. In particular, the uptake and release of signaling substances into/from astrocytes is considered crucial. The release of different gliotransmitters involves regulated exocytosis, consisting of fusion between the vesicle and plasma membranes. After fusion with the plasma membrane, vesicles may be retrieved into the cytoplasm and may continue to recycle. To study the mobility implicated in the retrieval of secretory vesicles, these structures have previously been labeled efficiently and specifically in cultured astrocytes by exposing live cells to primary and secondary antibodies. Since the vesicle labeling and the vesicle mobility properties may be an artifact of cell culture conditions, we here asked whether the retrieving exocytotic vesicles can be labeled in brain tissue slices and whether their mobility differs from that observed in cell cultures. We labeled astrocytic vesicles and recorded their mobility with two-photon microscopy in hippocampal slices from transgenic mice with fluorescently tagged astrocytes (GFP mice) and in wild-type mice with astrocytes labeled by the Fluo-4 fluorescence indicator. Glutamatergic vesicles and peptidergic granules were labeled by anti-vesicular glutamate transporter 1 (vGlut1) and anti-atrial natriuretic peptide (ANP) antibodies, respectively. We report that the vesicle mobility parameters (velocity, maximal displacement and track length) recorded in astrocytes from tissue slices are similar to those reported previously in cultured astrocytes.

  4. Evaluation of solitary pulmonary metastasis of extrathoracic tumor with thin-slice computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Shiotani, Seiji; Yamada, Kouzo; Oshita, Fumihiro; Nomura, Ikuo; Noda, Kazumasa; Yamagata, Tatushi; Tajiri, Michihiko; Ishibashi, Makoto; Kameda, Youichi [Kanagawa Cancer Center, Yokohama (Japan)

    1995-10-01

    Thin-slice computed tomography (CT) images were compared with pathological findings in 9 specimens of solitary pulmonary nodules, which had been pathologically diagnosed as pulmonary metastases of extrathoracic tumors. The thin-slice CT images were 2 mm-thick images reconstructed using a TCT-900S, HELIX (Toshiba, Tokyo) and examined at two different window and level settings. In every case, the surgical specimens were sliced transversely to correlate with the CT images. According to the image findings, the internal structure was of the solid-density type in every case, and the margin showed spiculation in 22%, notching in 67% and pleural indentation in 89%. Regarding the relationship between the pulmonary vessels and tumors, plural vascular involvement was revealed in every case. Thus, it was difficult to distinguish solitary pulmonary metastasis of an extrathoracic tumor from primary lung cancer based on the thin-slice CT images. For some solitary pulmonary metastases of extrathoracic tumors, a comprehensive diagnostic approach taking both the anamnesis and pathological findings into consideration was required. (author).

  5. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

    International Nuclear Information System (INIS)

    Fan, J Y; Wang, F G; W, Y; Zhang, Y L

    2006-01-01

    Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data: interpolation of the data on the stripes, and interpolation of the data between the stripes. A trend interpolation method is applied to the data on the stripes, and a Gauss wavelet interpolation method is applied to the data between the stripes. Experiments prove the regional interpolation method feasible and practical, and show that it also improves speed and precision.

  6. RETROSPECTIVE DETECTION OF INTERLEAVED SLICE ACQUISITION PARAMETERS FROM FMRI DATA

    Science.gov (United States)

    Parker, David; Rotival, Georges; Laine, Andrew; Razlighi, Qolamreza R.

    2015-01-01

    To minimize slice excitation leakage to adjacent slices, interleaved slice acquisition is nowadays performed regularly in fMRI scanners. In interleaved slice acquisition, the number of slices skipped between two consecutive slice acquisitions is often referred to as the ‘interleave parameter’; the loss of this parameter can be catastrophic for the analysis of fMRI data. In this article we present a method to retrospectively detect the interleave parameter and the axis in which it is applied. Our method relies on the smoothness of the temporal-distance correlation function, which becomes disrupted along the axis on which interleaved slice acquisition is applied. We examined this method on simulated and real data in the presence of fMRI artifacts such as physiological noise, motion, etc. We also examined the reliability of this method in detecting different types of interleave parameters and demonstrated an accuracy of about 94% in more than 1000 real fMRI scans. PMID:26161244
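
The temporal-distance correlation idea can be illustrated with a toy simulation (a hypothetical sketch, not the authors' implementation): a smooth "physiological" signal sampled in acquisition order makes slices that are neighbors in acquisition time, rather than in space, correlate most strongly, so the interleave parameter shows up as the spatial slice offset with the highest mean correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_slices, n_vols, interleave = 20, 200, 2

# Acquisition order within one TR for an interleaved sequence: 0, 2, 4, ..., 1, 3, 5, ...
acq = [s for start in range(interleave) for s in range(start, n_slices, interleave)]
order = np.empty(n_slices, dtype=int)
order[acq] = np.arange(n_slices)

# Smooth "physiological" process sampled at every slice-acquisition instant.
phys = np.convolve(rng.normal(size=n_vols * n_slices + 4), np.ones(5) / 5, mode="valid")
series = np.array([[phys[t * n_slices + order[s]] for t in range(n_vols)]
                   for s in range(n_slices)])

def mean_corr(d):
    """Mean correlation between the time series of slices a spatial offset d apart."""
    return np.mean([np.corrcoef(series[s], series[s + d])[0, 1]
                    for s in range(n_slices - d)])

offsets = [1, 2, 3, 4]
detected = offsets[int(np.argmax([mean_corr(d) for d in offsets]))]
```

Slices two apart were acquired back to back here, so the offset-2 correlation dominates while spatially adjacent slices (acquired half a TR apart) barely correlate, and the interleave parameter is recovered.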

  7. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  8. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    Science.gov (United States)

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both methods introduce artifacts into the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function, but to the number of pixels incorporated in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results of the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
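
How PV interpolation builds a joint histogram can be sketched in a minimal, hypothetical form (1-D subpixel shifts and equal-width bins only, not the paper's implementation): instead of interpolating intensities, each reference sample spreads its unit count over the bins of the two moving-image pixels it straddles.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information (nats) of a joint count histogram."""
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def pv_joint_histogram(ref, mov, tx, bins=8):
    """Joint histogram under a horizontal subpixel shift tx, with
    partial-volume weighting of the two straddled moving-image pixels."""
    h, w = ref.shape
    hist = np.zeros((bins, bins))
    rbin = np.clip((ref * bins).astype(int), 0, bins - 1)
    mbin = np.clip((mov * bins).astype(int), 0, bins - 1)
    x0 = int(np.floor(tx))
    f = tx - x0                      # fractional part -> PV weights (1 - f, f)
    for y in range(h):
        for x in range(w):
            xa, xb = x + x0, x + x0 + 1
            if 0 <= xa < w:
                hist[rbin[y, x], mbin[y, xa]] += 1.0 - f
            if 0 <= xb < w and f > 0:
                hist[rbin[y, x], mbin[y, xb]] += f
    return hist

# At integer alignment the histogram of an image with itself is diagonal;
# at a half-pixel shift the PV weights spread mass off the diagonal.
rng = np.random.default_rng(1)
img = rng.random((16, 16))
mi_aligned = mutual_information(pv_joint_histogram(img, img, 0.0))
mi_shifted = mutual_information(pv_joint_histogram(img, img, 0.5))
```

The drop from mi_aligned to mi_shifted is the dip that, repeated at every grid position, produces the artifact pattern discussed in the abstract.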

  9. The crustal thickness of Australia

    Science.gov (United States)

    Clitheroe, G.; Gudmundsson, O.; Kennett, B.L.N.

    2000-01-01

    We investigate the crustal structure of the Australian continent using the temporary broadband stations of the Skippy and Kimba projects and permanent broadband stations. We isolate near-receiver information, in the form of crustal P-to-S conversions, using the receiver function technique. Stacked receiver functions are inverted for S velocity structure using a Genetic Algorithm approach to Receiver Function Inversion (GARFI). From the resulting velocity models we are able to determine the Moho depth and to classify the width of the crust-mantle transition for 65 broadband stations. Using these results and 51 independent estimates of crustal thickness from refraction and reflection profiles, we present a new, improved, map of Moho depth for the Australian continent. The thinnest crust (25 km) occurs in the Archean Yilgarn Craton in Western Australia; the thickest crust (61 km) occurs in Proterozoic central Australia. The average crustal thickness is 38.8 km (standard deviation 6.2 km). Interpolation error estimates are made using kriging and fall into the range 2.5-7.0 km. We find generally good agreement between the depth to the seismologically defined Moho and xenolith-derived estimates of crustal thickness beneath northeastern Australia. However, beneath the Lachlan Fold Belt the estimates are not in agreement, and it is possible that the two techniques are mapping differing parts of a broad Moho transition zone. The Archean cratons of Western Australia appear to have remained largely stable since cratonization, reflected in only slight variation of Moho depth. The largely Proterozoic center of Australia shows relatively thicker crust overall as well as major Moho offsets. We see evidence of the margin of the contact between the Precambrian craton and the Tasman Orogen, referred to as the Tasman Line. Copyright 2000 by the American Geophysical Union.

  10. Calculation of the Scattered Radiation Profile in 64 Slice CT Scanners Using Experimental Measurement

    Directory of Open Access Journals (Sweden)

    Afshin Akbarzadeh

    2009-06-01

    Introduction: One of the most important parameters in x-ray CT imaging is the noise induced by detected scattered radiation. The detected scattered radiation is completely dependent on the scanner geometry as well as the size, shape and material of the scanned object. The magnitude and spatial distribution of the scattered radiation in x-ray CT should be quantified for the development of robust scatter correction techniques. Empirical methods based on blocking the primary photons in a small region are not able to extract scatter in all elements of the detector array, while the scatter profile is required for a scatter correction procedure. In this study, we measured scatter profiles in 64-slice CT scanners using a new experimental measurement. Material and Methods: To measure the scatter profile, a lead block array was inserted under the collimator and the phantom was exposed at the isocenter. The raw data file, which contained detector array readouts, was transferred to a PC and was read using a dedicated GUI running under MATLAB 7.5. The scatter profile was extracted by interpolating the shadowed area. Results: The scatter and SPR profiles were measured. Increasing the tube voltage from 80 to 140 kVp resulted in an 80% fall-off in SPR for a water phantom (d = 210 mm) and 86% for a polypropylene phantom (d = 350 mm). Increasing the air gap to 20.9 cm caused a 30% decrease in SPR. Conclusion: In this study, we presented a novel approach for measurement of the scattered radiation distribution and SPR in a CT scanner with 64-slice capability using a lead block array. The method can also be used on other multi-slice CT scanners. The proposed technique can accurately estimate scatter profiles. It is relatively straightforward, easy to use, and can be used for any related measurement.
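
The interpolation step at the heart of the method can be sketched with entirely synthetic numbers standing in for one detector row: channels shadowed by the lead blocks ideally read scatter only, and interpolating those readings across the array gives the scatter profile, from which the SPR follows.

```python
import numpy as np

# Hypothetical 100-channel detector readout: smooth primary + smooth scatter.
channels = np.arange(100)
primary = 1000.0 * np.exp(-((channels - 50) / 30.0) ** 2)
scatter_true = 80.0 + 40.0 * np.exp(-((channels - 50) / 60.0) ** 2)
total = primary + scatter_true

# Channels shadowed by the lead block array read (ideally) scatter only.
blocked = np.unique(np.r_[channels[::10], 99])
shadow_readings = scatter_true[blocked]

# Interpolate the shadowed readings across the whole array -> scatter profile.
scatter_profile = np.interp(channels, blocked, shadow_readings)

# Scatter-to-primary ratio profile, using the unblocked total signal.
spr = scatter_profile / (total - scatter_profile)
```

Because the scatter distribution is smooth compared with the block spacing, simple linear interpolation of the shadowed readings recovers the full profile accurately.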

  11. Mass transfer characteristics of bisporus mushroom ( Agaricus bisporus) slices during convective hot air drying

    Science.gov (United States)

    Ghanbarian, Davoud; Baraani Dastjerdi, Mojtaba; Torki-Harchegani, Mehdi

    2016-05-01

    An accurate understanding of moisture transfer parameters, including moisture diffusivity and the moisture transfer coefficient, is essential for efficient mass transfer analysis and for designing new dryers or improving existing drying equipment. The main objective of the present study was to carry out an experimental and theoretical investigation of mushroom slice drying and determine the mass transfer characteristics of samples dried under different conditions. The mushroom slices, with two thicknesses of 3 and 5 mm, were dried at air temperatures of 40, 50 and 60 °C and air flow rates of 1 and 1.5 m s-1. The Dincer and Dost model was used to determine the moisture transfer parameters and predict the drying curves. It was observed that the entire drying process took place in the falling drying rate period. The obtained lag factor and Biot number indicated that the moisture transfer in the samples was controlled by both internal and external resistance. The effective moisture diffusivity and the moisture transfer coefficient increased with increasing air temperature, air flow rate and sample thickness, and varied in the ranges of 6.5175 × 10-10 to 1.6726 × 10-9 m2 s-1 and 2.7715 × 10-7 to 3.5512 × 10-7 m s-1, respectively. The validation of the Dincer and Dost model indicated a good capability of the model to describe the drying curves of the mushroom slices.
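
The Dincer and Dost analysis starts from an exponential moisture-ratio model MR(t) = G exp(-S t), so a log-linear least-squares fit recovers the lag factor G and the drying coefficient S; for a slab the effective diffusivity then follows as D = S L^2 / mu1^2, with L the half-thickness and mu1 the first eigenvalue fixed by the Biot number. The sketch below uses made-up data, and the value of mu1 is illustrative only, not one taken from the paper.

```python
import numpy as np

# Hypothetical drying data following MR(t) = G * exp(-S * t).
t = np.linspace(0.0, 10000.0, 40)        # drying time, s
G_true, S_true = 1.08, 3.2e-4            # lag factor, drying coefficient (1/s)
mr = G_true * np.exp(-S_true * t)

# Log-linear least-squares fit: ln MR = ln G - S * t.
slope, intercept = np.polyfit(t, np.log(mr), 1)
S_fit, G_fit = -slope, np.exp(intercept)

# Effective diffusivity for an infinite slab, D = S * L**2 / mu1**2, where
# mu1 is the first root of the transcendental equation set by the Biot
# number. Both L and mu1 below are illustrative assumptions.
L = 1.5e-3        # half of a 3 mm slice, m
mu1 = 1.1         # illustrative eigenvalue only
D_eff = S_fit * L ** 2 / mu1 ** 2
```

With these illustrative numbers D_eff lands near 6e-10 m2 s-1, the same order as the range reported in the abstract.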

  12. A sandwich-like differential B-dot based on EACVD polycrystalline diamond slice

    Science.gov (United States)

    Xu, P.; Yu, Y.; Xu, L.; Zhou, H. Y.; Qiu, C. J.

    2018-06-01

    In this article, we present a method of mass production of a standardized high-performance differential B-dot magnetic probe together with the magnetic field measurement in a pulsed current device with the current up to hundreds of kilo-Amperes. A polycrystalline diamond slice produced in an Electron Assisted Chemical Vapor Deposition device is used as the base and insulating material to imprint two symmetric differential loops for the magnetic field measurement. The SP3 carbon bond in the cubic lattice structure of diamond is confirmed by Raman spectra. The thickness of this slice is 20 μm. A gold loop is imprinted onto each surface of the slice by using the photolithography technique. The inner diameter, width, and thickness of each loop are 0.8 mm, 50 μm, and 1 μm, respectively. It provides a way of measuring the pulsed magnetic field with a high spatial and temporal resolution, especially in limited space. This differential magnetic probe has demonstrated a very good common-mode rejection rate through the pulsed magnetic field measurement.

  13. Fresh Slice Self-Seeding and Fresh Slice Harmonic Lasing at LCLS

    Energy Technology Data Exchange (ETDEWEB)

    Amann, J.W. [SLAC National Accelerator Lab., Menlo Park, CA (United States)

    2018-04-01

    We present results from the successful demonstration of fresh slice self-seeding at the Linac Coherent Light Source (LCLS). The performance is compared with SASE and regular self-seeding at a photon energy of 5.5 keV, resulting in relative average brightness increases of a factor of 12 and a factor of 2, respectively. Following this proof-of-principle we discuss the forthcoming plans to use the same technique for fresh slice harmonic lasing in an upcoming experiment. The demonstration of fresh slice harmonic lasing provides an attractive solution for future XFELs aiming to achieve high efficiency, high brightness X-ray pulses at high photon energies (>12 keV).

  14. Experimental demonstration of spectrum-sliced elastic optical path network (SLICE).

    Science.gov (United States)

    Kozicki, Bartłomiej; Takara, Hidehiko; Tsukishima, Yukio; Yoshimatsu, Toshihide; Yonenaga, Kazushige; Jinno, Masahiko

    2010-10-11

    We describe experimental demonstration of spectrum-sliced elastic optical path network (SLICE) architecture. We employ optical orthogonal frequency-division multiplexing (OFDM) modulation format and bandwidth-variable optical cross-connects (OXC) to generate, transmit and receive optical paths with bandwidths of up to 1 Tb/s. We experimentally demonstrate elastic optical path setup and spectrally-efficient transmission of multiple channels with bit rates ranging from 40 to 140 Gb/s between six nodes of a mesh network. We show dynamic bandwidth scalability for optical paths with bit rates of 40 to 440 Gb/s. Moreover, we demonstrate multihop transmission of a 1 Tb/s optical path over 400 km of standard single-mode fiber (SMF). Finally, we investigate the filtering properties and the required guard band width for spectrally-efficient allocation of optical paths in SLICE.

  15. Induction of Gynogenesis through Multi Ovule Slice Culture and Ovary Slice Culture of Dianthus chinensis

    Directory of Open Access Journals (Sweden)

    Suskandari Kartikaningrum

    2013-10-01

    Callus induction was studied in five genotypes of Dianthus chinensis using 2,4-D and NAA. Calluses can be obtained from unfertilized ovule culture and ovary culture. The aim of the research was to study the gynogenic potential and response of Dianthus chinensis through ovule slice and ovary slice culture for obtaining haploid plants. Five genotypes of Dianthus chinensis and five media were used in ovule slice culture, and two genotypes and three media were used in ovary culture. Flower buds at the 7th stage were given a dark pre-treatment at 4 °C for one day. Ovules and ovaries were isolated and cultured in induction medium. Cultures were given a dark pre-treatment at 4 °C for seven days, followed by light incubation at 25 °C. The results showed that 2,4-D was better than NAA in inducing callus. Regenerated calluses were produced in the V11, V13 and V15 genotypes in M7 medium (MS + 2 mg L-1 2,4-D + 1 mg L-1 BAP + 30 g L-1 sucrose) and M10 medium (MS + 1 mg L-1 2,4-D + 1 mg L-1 BAP + 20 g L-1 sucrose). All calluses originating from ovule and ovary cultures flowered prematurely. Double haploids (V11-34) were obtained from ovule slice culture based on PER (peroxidase) and EST (esterase) isozyme markers. Keywords: ovule slice culture, ovary slice culture, callus, Dianthus sp., haploid

  16. Interpolant tree automata and their application in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2016-01-01

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. Evaluation on Horn clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  17. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
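
The thin plate spline model used for reconstruction can be sketched in its standard 2-D scalar form, with radial kernel U(r) = r^2 log r plus an affine part and the usual orthogonality conditions; this is the generic TPS fit, not the authors' full denoising pipeline.

```python
import numpy as np

def tps_kernel(d):
    # U(r) = r^2 log r, with U(0) = 0 by continuity.
    return np.where(d > 0, d ** 2 * np.log(np.maximum(d, 1e-300)), 0.0)

def tps_fit(pts, vals):
    """Solve for weights w and affine part a in
    f(x, y) = sum_i w_i U(|x - p_i|) + a0 + a1*x + a2*y,
    subject to the orthogonality conditions P^T w = 0."""
    n = len(pts)
    K = tps_kernel(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), pts])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    sol = np.linalg.solve(A, np.r_[vals, np.zeros(3)])
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, q):
    K = tps_kernel(np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1))
    return K @ w + a[0] + a[1] * q[:, 0] + a[2] * q[:, 1]

# Affine data is reproduced exactly: the spline weights vanish.
rng = np.random.default_rng(2)
pts = rng.random((12, 2))
vals = 2.0 + 3.0 * pts[:, 0] - pts[:, 1]
w, a = tps_fit(pts, vals)
```

The same system, fitted per vector component, is what reconstructs removed noise-corrupted vectors from their neighbors.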

  18. Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.

    Science.gov (United States)

    Pansky, Ainat; Tenenboim, Einat

    2011-01-01

    In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.

  19. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computed tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation.

  20. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
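
How the Gaussian process posterior quantifies interpolation uncertainty can be sketched in 1-D: the posterior mean interpolates the samples, while the posterior variance collapses at the sample locations and grows between them. The squared-exponential kernel, length scale and grids below are illustrative choices, not those of the paper.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """Squared-exponential covariance between two sets of 1-D locations."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Samples of a 1-D "image row" on the base grid; query points in between.
xt = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
yt = np.sin(xt)
xq = np.linspace(0.0, 2.0, 41)

jitter = 1e-8                              # small diagonal term for stability
K = rbf(xt, xt) + jitter * np.eye(len(xt))
Ks = rbf(xq, xt)
mean = Ks @ np.linalg.solve(K, yt)         # posterior mean: the interpolant
cov = rbf(xq, xq) - Ks @ np.linalg.solve(K, Ks.T)
var = np.clip(np.diag(cov), 0.0, None)     # interpolation uncertainty
```

The variance profile is exactly the quantity the paper folds into the similarity measure: near zero on the base grid, largest midway between grid points.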

  1. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
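
The compress/restore round trip can be sketched with a synthetic fringe-like image standing in for the encoded range data: store it at reduced resolution, then scale it back up by separable linear interpolation when the range data are needed. This is a generic sketch, not the paper's virtual fringe-projection codec.

```python
import numpy as np

def upsample_bilinear(img, shape):
    """Scale img up to `shape` by separable linear interpolation."""
    h, w = img.shape
    H, W = shape
    ys = np.linspace(0.0, h - 1.0, H)
    xs = np.linspace(0.0, w - 1.0, W)
    tmp = np.array([np.interp(xs, np.arange(w), row) for row in img])   # rows
    return np.array([np.interp(ys, np.arange(h), tmp[:, j])            # cols
                     for j in range(W)]).T

# Hypothetical smooth "fringe" image standing in for encoded range data.
y, x = np.mgrid[0:64, 0:64]
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * x / 64.0) * np.cos(2 * np.pi * y / 64.0)

small = fringe[::2, ::2]                   # stored at a quarter of the size
restored = upsample_bilinear(small, fringe.shape)
err = np.abs(restored - fringe).max()
```

Because the fringe pattern varies slowly relative to the downsampling step, the restored image stays close to the original, which is what keeps the recovered range data's error rate low.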

  2. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  3. Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures

    Directory of Open Access Journals (Sweden)

    Daniel Carl Miner

    2014-11-01

    The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.

  4. Slicing, sampling, and distance-dependent effects affect network measures in simulated cortical circuit structures.

    Science.gov (United States)

    Miner, Daniel C; Triesch, Jochen

    2014-01-01

    The neuroanatomical connectivity of cortical circuits is believed to follow certain rules, the exact origins of which are still poorly understood. In particular, numerous nonrandom features, such as common neighbor clustering, overrepresentation of reciprocal connectivity, and overrepresentation of certain triadic graph motifs have been experimentally observed in cortical slice data. Some of these data, particularly regarding bidirectional connectivity are seemingly contradictory, and the reasons for this are unclear. Here we present a simple static geometric network model with distance-dependent connectivity on a realistic scale that naturally gives rise to certain elements of these observed behaviors, and may provide plausible explanations for some of the conflicting findings. Specifically, investigation of the model shows that experimentally measured nonrandom effects, especially bidirectional connectivity, may depend sensitively on experimental parameters such as slice thickness and sampling area, suggesting potential explanations for the seemingly conflicting experimental results.

  5. An adaptive interpolation scheme for molecular potential energy surfaces

    Science.gov (United States)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  6. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point-based interpolation to estimate the temperature at unallocated meteorological stations in Peninsular Malaysia, using data for 2010 collected from the Malaysian Meteorology Department. Two point-based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable for the rest of the months.
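
Of the two estimators, IDW is simple enough to sketch directly, together with a leave-one-out RMSE of the kind used to compare the methods; the station grid and temperature trend below are invented for illustration.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=-1)
    w = 1.0 / (d + eps) ** power          # huge weight at coincident points
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

def loo_rmse(xy, z, power=2.0):
    """Leave-one-out RMSE over the stations."""
    errs = [idw(np.delete(xy, i, axis=0), np.delete(z, i), xy[i:i + 1], power)[0] - z[i]
            for i in range(len(z))]
    return float(np.sqrt(np.mean(np.square(errs))))

# Hypothetical 5 x 5 station grid with a simple temperature trend (deg C).
gx, gy = np.meshgrid(np.linspace(0, 100, 5), np.linspace(0, 100, 5))
xy = np.c_[gx.ravel(), gy.ravel()]
temp = 20.0 + 0.05 * xy[:, 0] + 0.02 * xy[:, 1]
```

IDW is an exact interpolator at the stations, and every estimate is a convex combination of the known values, so predictions never leave the observed range.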

  7. Multi-dimensional cubic interpolation for ICF hydrodynamics simulation

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Yabe, Takashi.

    1991-04-01

    A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) scheme is greatly improved by assuming continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. In order to evaluate the new method, Zalesak's example is tested, and we successfully obtain good results. (author)
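
The interpolation kernel at the heart of CIP can be sketched for a single cell: a unique cubic is fixed by the values and first derivatives at the two cell ends, so both the profile and its slope stay continuous across cells. (The full CIP scheme also advects the derivatives, which is omitted in this sketch.)

```python
def cip_cell(f0, f1, g0, g1, h, xi):
    """Cubic on one cell [x0, x0 + h] built from the values (f0, f1) and
    spatial derivatives (g0, g1) at the two cell ends; xi in [0, 1] is the
    normalized position inside the cell."""
    a = (g0 + g1) * h - 2.0 * (f1 - f0)         # xi**3 coefficient
    b = 3.0 * (f1 - f0) - (2.0 * g0 + g1) * h   # xi**2 coefficient
    return ((a * xi + b) * xi + g0 * h) * xi + f0

# Any cubic is reproduced exactly from its endpoint values and slopes.
f = lambda x: x ** 3 - 2.0 * x ** 2 + x
g = lambda x: 3.0 * x ** 2 - 4.0 * x + 1.0
val = cip_cell(f(1.0), f(1.5), g(1.0), g(1.5), 0.5, 0.4)   # f at x = 1.2
```

Matching derivatives at the cell ends is what lets CIP keep profiles sharp during advection where plain linear interpolation would diffuse them.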

  8. Oversampling of digitized images. [effects on interpolation in signal processing

    Science.gov (United States)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
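
The recommended alternative to oversampling can be sketched directly: Whittaker-Shannon (sinc) interpolation reconstructs any desired intermediate values from critically sampled data, exactly for bandlimited signals up to truncation of the infinite sum. The sampling rate and test signal below are illustrative.

```python
import numpy as np

def sinc_reconstruct(t, samples, fs):
    """Whittaker-Shannon interpolation from uniform samples x[n] = x(n / fs)."""
    n = np.arange(len(samples))
    t = np.atleast_1d(np.asarray(t, dtype=float))
    # np.sinc is the normalized sinc: sin(pi x) / (pi x).
    return np.array([np.dot(samples, np.sinc(fs * ti - n)) for ti in t])

fs, f0, N = 8.0, 1.0, 512            # sample rate well above Nyquist for f0
n = np.arange(N)
samples = np.sin(2 * np.pi * f0 * n / fs)

mid = 256.5 / fs                     # a query instant between two samples
```

At the sample instants the sinc terms reduce to a Kronecker delta, so the samples are returned unchanged; between them the reconstruction is accurate up to the truncated tails of the sum.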

  9. Scientific data interpolation with low dimensional manifold model

    Science.gov (United States)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  10. Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)

    Directory of Open Access Journals (Sweden)

    Maria Cristina Floreno

    1996-05-01

    Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is in a preliminary stage. What has already been implemented consists of a set of routines for elementary operations, optimized function evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with a graphical user interface allowing the use of such routines.

  11. Scientific data interpolation with low dimensional manifold model

    International Nuclear Information System (INIS)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

    2017-01-01

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  12. Evaluation of spinal cord vessels using multi-slice CT angiography

    International Nuclear Information System (INIS)

    Chen Shuang; Zhu Ruijiang; Feng Xiaoyuan

    2006-01-01

    Objective: To evaluate the value of multi-slice spiral CT angiography for imaging spinal cord vessels. Methods: 11 adult subjects with suspected myelopathy underwent multi-slice spiral CT angiography. An iodine contrast agent was injected at 3.5 ml/s, for a total of 100 ml. The scan parameters were axial 16-slice mode, 0.625 mm slice thickness, 0.8 s rotation, a delay time determined by SmartPrep (15-25 s), and multi-phase scanning. Coronal and sagittal MPR and SSD images were generated on a workstation and compared with spinal digital subtraction angiography (DSA) to analyze normal or abnormal spinal cord vessels. Results: Spinal CTA and DSA findings were normal in six adult subjects, and spinal cord vascular malformations (1 intradural extramedullary AVF, 4 dural AVFs) were found in five cases. Recognizable intradural vessels corresponding to anterior median (midline) veins and/or anterior spinal arteries were shown in the six normal subjects. Abnormal intradural vessels were detected with CT angiography in all five spinal cord vascular malformations; in comparison with DSA, these vessels were primarily enlarged veins of the coronal venous plexus on the cord surface. Radiculomedullary-dural arteries could not be clearly shown in the four dural AVFs; an anterior spinal artery was detected only in the one patient with an intradural medullary AVF, which showed a direct shunt between the anterior spinal artery and the perimedullary vein with a tortuous draining vessel. Conclusion: Multi-slice CT angiography is able to visualize normal and abnormal spinal cord vessels. It could be used as a noninvasive method to screen for spinal cord vascular disease. (authors)

  13. Plastination of whole-body slices: a new aid in cross-sectional anatomy, demonstrated for thoracic organs in dogs

    International Nuclear Information System (INIS)

    Polgar, M.; Probst, A.; Koenig, H.E.; Sora, M.-C.

    2003-01-01

    Plastic-embedded, transparent serially sectioned slices from the canine thorax were compared with cross-sections made with the commonly used technique and with computed tomograms. Three Beagles, at the age of seven months, were cut into 4 mm thick slices and plastinated with the epoxy resin Biodur E12. The area of the thorax was examined macroscopically and scrutinized closely. Survey and magnification photographs were taken. Compared with conventionally prepared sections, the E12 slices proved to be transparent, hard, dry, odourless, and resistant, and showed unlimited durability. Good color retention of the specimens makes differentiation of the organs easy. The course of the blood vessels, nerves and other structures of the thoracic cavity can be followed from section to section. The colorful images help to interpret CT and MRI and provide good learning aids for clinicians and students. (author)

  14. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
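    A minimal 1D illustration of why the interpolator choice matters for registered images: resampling a smooth profile at a sub-sample shift with nearest-neighbour versus linear interpolation. The study's comparison is 3D and includes B-splines; this sketch, with invented data, only demonstrates the error ordering for two simple schemes.

```python
import numpy as np

def shift_nearest(y, s):
    """Resample y at fractional positions i - s with nearest-neighbour interpolation."""
    idx = np.clip(np.round(np.arange(len(y)) - s), 0, len(y) - 1).astype(int)
    return y[idx]

def shift_linear(y, s):
    """Resample y at fractional positions i - s with linear interpolation."""
    i = np.arange(len(y))
    return np.interp(i - s, i, y)

spacing = 0.1
x = np.arange(200) * spacing
f = lambda t: np.sin(t) + 0.5 * np.sin(3 * t)    # smooth synthetic profile
y = f(x)

s = 0.37                                         # sub-sample registration shift
truth = f(x - s * spacing)                       # analytically shifted profile
interior = slice(5, -5)                          # ignore boundary effects
err_nn = np.max(np.abs(shift_nearest(y, s)[interior] - truth[interior]))
err_lin = np.max(np.abs(shift_linear(y, s)[interior] - truth[interior]))
print(err_nn, err_lin)                           # linear error is much smaller
```

    Higher-order interpolators such as B-splines reduce the resampling error further, which is consistent with the abstract's finding that BSPs gave the lowest interpolation error.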

  15. Mixed time slicing in path integral simulations

    International Nuclear Information System (INIS)

    Steele, Ryan P.; Zwickl, Jill; Shushkov, Philip; Tully, John C.

    2011-01-01

    A simple and efficient scheme is presented for using different time slices for different degrees of freedom in path integral calculations. This method bridges the gap between full quantization and the standard mixed quantum-classical (MQC) scheme and, therefore, still provides quantum mechanical effects in the less-quantized variables. Underlying the algorithm is the notion that time slices (beads) may be 'collapsed' in a manner that preserves quantization in the less quantum mechanical degrees of freedom. The method is shown to be analogous to multiple-time step integration techniques in classical molecular dynamics. The algorithm and its associated error are demonstrated on model systems containing coupled high- and low-frequency modes; results indicate that convergence of quantum mechanical observables can be achieved with disparate bead numbers in the different modes. Cost estimates indicate that this procedure, much like the MQC method, is most efficient for only a relatively few quantum mechanical degrees of freedom, such as proton transfer. In this regime, however, the cost of a fully quantum mechanical simulation is determined by the quantization of the least quantum mechanical degrees of freedom.

  16. Thin-Slice Measurement of Wisdom

    Directory of Open Access Journals (Sweden)

    Chao S. Hu

    2017-08-01

    Full Text Available Objective measurement of wisdom within a short period of time is vital for both the public interest (e.g., understanding a presidential election) and research (e.g., testing factors that facilitate wisdom development). A measurement of emotion associated with wisdom would be especially informative; therefore, a novel Thin-Slice measurement of wisdom was developed based on the Berlin Paradigm. For about 2 min, participants imagined the lens of a camera as the eyes of their friend/teacher whom they advised about a life dilemma. Verbal response and facial expression were both recorded by a camera: verbal responses were then rated on both the Berlin Wisdom criteria and newly developed Chinese wisdom criteria; facial expressions were analyzed by the software iMotion FACET module. Results showed acceptable inter-rater and inter-item reliability for this novel paradigm. Moreover, both wisdom ratings were not significantly correlated with social desirability, and the Berlin wisdom rating was significantly negatively correlated with Neuroticism; feeling of surprise was significantly positively correlated with both wisdom criteria ratings. Our results provide the first evidence of this Thin-Slice Wisdom Paradigm’s reliability, its immunity to social desirability, and its validity for assessing candidates’ wisdom within a short timeframe. Although still awaiting further development, this novel paradigm contributes to an emerging Universal Wisdom Paradigm applicable across cultures.

  17. Comparison of 640-Slice Multidetector Computed Tomography Versus 32-Slice MDCT for Imaging of the Osteo-odonto-keratoprosthesis Lamina.

    Science.gov (United States)

    Norris, Joseph M; Kishikova, Lyudmila; Avadhanam, Venkata S; Koumellis, Panos; Francis, Ian S; Liu, Christopher S C

    2015-08-01

    To investigate the efficacy of 640-slice multidetector computed tomography (MDCT) for detecting osteo-odonto laminar resorption in the osteo-odonto-keratoprosthesis (OOKP) compared with the current standard 32-slice MDCT. Explanted OOKP laminae and bone-dentine fragments were scanned using 640-slice MDCT (Aquilion ONE; Toshiba) and 32-slice MDCT (LightSpeed Pro32; GE Healthcare). Pertinent comparisons including image quality, radiation dose, and scanning parameters were made. Benefits of 640-slice MDCT over 32-slice MDCT were shown. Key comparisons of 640-slice MDCT versus 32-slice MDCT included the following: percentage difference and correlation coefficient between radiological and anatomical measurements, 1.35% versus 3.67% and 0.9961 versus 0.9882, respectively; dose-length product, 63.50 versus 70.26; rotation time, 0.175 seconds versus 1.000 seconds; and detector coverage width, 16 cm versus 2 cm. Resorption of the osteo-odonto lamina after OOKP surgery can result in potentially sight-threatening complications, hence it warrants regular monitoring and timely intervention. MDCT remains the gold standard for radiological assessment of laminar resorption, which facilitates detection of subtle laminar changes earlier than the onset of clinical signs, thus indicating when preemptive measures can be taken. The 640-slice MDCT exhibits several advantages over traditional 32-slice MDCT. However, such benefits may not offset cost implications, except in rare cases, such as in young patients who might undergo years of radiation exposure.

  18. A novel method for oxygen glucose deprivation model in organotypic spinal cord slices.

    Science.gov (United States)

    Liu, Jing-Jie; Ding, Xiao-Yan; Xiang, Li; Zhao, Feng; Huang, Sheng-Li

    2017-10-01

    This study aimed to establish a model to closely mimic spinal cord hypoxic-ischemic injury with high production and high reproducibility. Fourteen-day cultured organotypic spinal cord slices were divided into 4 groups: control (Ctrl), oxygen-glucose deprived for 30 min (OGD 30 min), OGD 60 min, and OGD 120 min. The Ctrl slices were incubated with 1 ml propidium iodide (PI) solution (5 μg/ml) for 30 min. The OGD groups were incubated with 1 ml glucose-free DMEM/F12 medium and 5 μl PI solution (1 mg/ml) for 30 min, 60 min and 120 min, respectively. A positive control slice was fixed in 4% paraformaldehyde for 20 min. The culture medium in each group was then collected and the lactate dehydrogenase (LDH) level in the medium was tested using Multi-Analyte ELISArray kits. Structure and refraction of the spinal cord slices were observed by light microscope. Fluorescence intensity of PI was examined by fluorescence microscopy and quantified with IPP software. Morphology of astrocytes was observed by immunofluorescence histochemistry. Caspase 3 and active caspase 3 in the different groups were tested by Western blot. In the OGD groups, the refraction of spinal cord slices decreased and the structure was unclear. The changes of refraction and structure in the OGD 120 min group were similar to those in the positive control slice. Astrocyte morphology changed significantly: with increasing OGD time, processes became thick and twisted, and nuclear condensation became more apparent. Obvious changes in morphology were observed in the OGD 60 min group, and normal morphology disappeared in the OGD 120 min group. Fluorescence intensity of PI increased along with the extension of OGD time. The difference was significant between 30 min and 60 min, but not between 60 min and 120 min. The intensity at OGD 120 min was close to that in the positive control. Compared with the Ctrl group, the OGD groups had significantly higher LDH levels and caspase 3 active/caspase 3 ratios. The values increased

  19. Tumor Slice Culture: A New Avatar in Personalized Oncology

    Science.gov (United States)

    2017-09-01

    AWARD NUMBER: W81XWH-16-1-0149. TITLE: Tumor Slice Culture: A New Avatar in Personalized Oncology. PRINCIPAL INVESTIGATOR: Raymond Yeung. From the 2017 annual report introduction: The goal of this research is to advance our

  20. Biased motion vector interpolation for reduced video artifacts.

    NARCIS (Netherlands)

    2011-01-01

    In a video processing system where motion vectors are estimated for a subset of the blocks of data forming a video frame, and motion vectors are interpolated for the remainder of the blocks of the frame, a method includes determining, for at least one block of the current frame for which a

  1. A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...

    African Journals Online (AJOL)

    Interpolation procedures tailored for gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable case with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...

  2. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

  3. Hybrid vehicle optimal control : Linear interpolation and singular control

    NARCIS (Netherlands)

    Delprat, S.; Hofman, T.

    2015-01-01

    Hybrid vehicle energy management can be formulated as an optimal control problem. Considering that the fuel consumption is often computed using linear interpolation over lookup table data, a rigorous analysis of the necessary conditions provided by the Pontryagin Minimum Principle is conducted. For

  4. Fast interpolation for Global Positioning System (GPS) satellite orbits

    OpenAIRE

    Clynch, James R.; Sagovac, Christopher Patrick; Danielson, D. A. (Donald A.); Neta, Beny

    1995-01-01

    In this report, we discuss and compare several methods for polynomial interpolation of Global Positioning Systems ephemeris data. We show that the use of difference tables is more efficient than the method currently in use to construct and evaluate the Lagrange polynomials.
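    The difference-table idea mentioned in the abstract can be sketched with Newton divided differences, which build the interpolating polynomial incrementally and evaluate it by nested multiplication. This is a generic sketch, not the report's implementation:

```python
import numpy as np

def newton_coeffs(x, y):
    """Divided-difference coefficients c[k] = f[x0, ..., xk], built in place."""
    x = np.asarray(x, dtype=float)
    c = np.array(y, dtype=float)
    n = len(x)
    for j in range(1, n):
        c[j:] = (c[j:] - c[j - 1:n - 1]) / (x[j:] - x[:n - j])
    return c

def newton_eval(c, x, t):
    """Evaluate the Newton-form interpolant at t by nested multiplication."""
    result = c[-1]
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x[k]) + c[k]
    return result

nodes = [0.0, 1.0, 2.0, 3.0]
values = [1.0, 0.0, 5.0, 22.0]      # samples of f(x) = x**3 - 2*x + 1
c = newton_coeffs(nodes, values)
print(newton_eval(c, nodes, 1.5))   # -> 1.375, exact for a cubic
```

    Unlike the Lagrange form, the table needs to be built only once per set of nodes, and each evaluation is then a cheap Horner-style pass, which is the efficiency argument made in the report.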

  5. Interpolation in computing science : the semantics of modularization

    NARCIS (Netherlands)

    Renardel de Lavalette, Gerard R.

    2008-01-01

    The Interpolation Theorem, first formulated and proved by W. Craig fifty years ago for predicate logic, has been extended to many other logical frameworks and is being applied in several areas of computer science. We give a short overview, and focus on the theory of software systems and modules. An

  6. Parallel optimization of IDW interpolation algorithm on multicore platform

    Science.gov (United States)

    Guan, Xuefeng; Wu, Huayi

    2009-10-01

    Due to increasing power consumption, heat dissipation, and other physical issues, central processing unit (CPU) architecture has been turning rapidly to multicore in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance but also presents significant challenges to application developers. In fact, most current GIS algorithms were implemented serially and cannot fully exploit the parallelism potential of such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize serial GIS algorithms on a multicore platform in order to maximize the performance speedup. With the help of OpenMP, threading is introduced to split and share the interpolation work among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and a good speedup is achieved. For example, the speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. An additional output comparison between pre-optimization and post-optimization shows that parallel optimization does not affect the final interpolation result.
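    The work-splitting strategy described above can be mimicked in Python: a simple IDW routine is applied to chunks of the query points by a thread pool, analogous to the OpenMP loop partitioning (NumPy releases the GIL for large array operations, so threads can help; function names and data are illustrative, not the paper's code):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def idw_chunk(qx, qy, px, py, pz, power=2.0, eps=1e-12):
    """IDW estimate at query points (qx, qy) from scattered samples (px, py, pz)."""
    d2 = (qx[:, None] - px) ** 2 + (qy[:, None] - py) ** 2
    w = 1.0 / (d2 ** (power / 2.0) + eps)        # eps guards exact sample hits
    return (w @ pz) / w.sum(axis=1)

def idw_parallel(qx, qy, px, py, pz, workers=4):
    """Split the query points among threads, mirroring the OpenMP loop split."""
    chunks = np.array_split(np.arange(len(qx)), workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda idx: idw_chunk(qx[idx], qy[idx], px, py, pz),
                         chunks)
        return np.concatenate(list(parts))

rng = np.random.default_rng(0)
px, py = rng.random(100), rng.random(100)        # scattered sample locations
pz = np.sin(px) + py                              # sampled field values
qx, qy = rng.random(1000), rng.random(1000)       # interpolation targets
z = idw_parallel(qx, qy, px, py, pz)
```

    Because each query point is independent, the partitioning changes only the execution schedule, not the result — the same property the paper verifies by comparing pre- and post-optimization output.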

  7. LIP: The Livermore Interpolation Package, Version 1.6

    Energy Technology Data Exchange (ETDEWEB)

    Fritsch, F. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-04

    This report describes LIP, the Livermore Interpolation Package. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since it is a general-purpose package that need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature).

  8. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, which is introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves generating progressively the decoded image by means of an interpolation iterative procedure with a constant parameter. It is well-known that the majority of image details are added at the first steps of iterations in the conventional fractal decoding; hence the constant parameter for the interpolation decoding method must be set as a smaller value in order to achieve a better progressive decoding. However, it needs to take an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process at the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve the goal, this paper proposed an interpolation decoding scheme with variable (iteration-dependent) parameters and proved the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme has really achieved the above-mentioned goal

  9. Functional Commutant Lifting and Interpolation on Generalized Analytic Polyhedra

    Czech Academy of Sciences Publication Activity Database

    Ambrozie, Calin-Grigore

    2008-01-01

    Roč. 34, č. 2 (2008), s. 519-543 ISSN 0362-1588 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : intertwining lifting * interpolation * analytic functions Subject RIV: BA - General Mathematics Impact factor: 0.327, year: 2008

  10. Interpolation solution of the single-impurity Anderson model

    International Nuclear Information System (INIS)

    Kuzemsky, A.L.

    1990-10-01

    The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong- and weak-correlation limits, is obtained. The unified concept of relevant mean-field renormalizations is indispensable in the strong correlation limit. (author). 21 refs

  11. Interpolant Tree Automata and their Application in Horn Clause Verification

    Directory of Open Access Journals (Sweden)

    Bishoksan Kafle

    2016-07-01

    Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  12. Two-dimensional interpolation with experimental data smoothing

    International Nuclear Information System (INIS)

    Trejbal, Z.

    1989-01-01

    A method of two-dimensional interpolation with smoothing of statistically deflected points is developed for the processing of magnetic field measurements at the U-120M cyclotron. The mathematical statement of the initial requirements and the final result of the relevant algebraic transformations are given. 3 refs

  13. Data interpolation for vibration diagnostics using two-variable correlations

    International Nuclear Information System (INIS)

    Branagan, L.

    1991-01-01

    This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, are dependent both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options to develop accurate correlations. The examples illustrate the appropriate application of these options

  14. Recent developments in free-viewpoint interpolation for 3DTV

    NARCIS (Netherlands)

    Zinger, S.; Do, Q.L.; With, de P.H.N.

    2012-01-01

    Current development of 3D technologies brings 3DTV within reach for the customers. We discuss in this article the recent advancements in free-viewpoint interpolation for 3D video. This technology is still a research topic and many efforts are dedicated to creation, evaluation and improvement of new

  15. A temporal interpolation approach for dynamic reconstruction in perfusion CT

    International Nuclear Information System (INIS)

    Montes, Pau; Lauritsch, Guenter

    2007-01-01

    This article presents a dynamic CT reconstruction algorithm for objects with time dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain quality in the reconstruction than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slow rotating scanners for perfusion imaging purposes

  16. Twitch interpolation technique in testing of maximal muscle strength

    DEFF Research Database (Denmark)

    Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B

    1993-01-01

    The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...

  17. Limiting reiteration for real interpolation with slowly varying functions

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Opic, Bohumír; Trebels, W.

    2005-01-01

    Roč. 278, 1-2 (2005), s. 86-107 ISSN 0025-584X R&D Projects: GA ČR(CZ) GA201/01/0333 Institutional research plan: CEZ:AV0Z10190503 Keywords : real interpolation * K-functional * limiting reiteration Subject RIV: BA - General Mathematics Impact factor: 0.465, year: 2005

  18. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…
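    The comparison the article draws can be reproduced numerically: a degree-4 interpolating polynomial for e^x on [0, 1] versus the degree-4 Taylor polynomial at 0 (the equally spaced node choice is our own illustration):

```python
import math
import numpy as np

# degree-4 interpolating polynomial for e^x on five equally spaced nodes
nodes = np.linspace(0.0, 1.0, 5)
coef = np.polyfit(nodes, np.exp(nodes), 4)       # exact fit at the nodes

t = np.linspace(0.0, 1.0, 1001)
interp_err = np.max(np.abs(np.polyval(coef, t) - np.exp(t)))

# degree-4 Taylor polynomial of e^x expanded at 0
taylor = sum(t ** k / math.factorial(k) for k in range(5))
taylor_err = np.max(np.abs(taylor - np.exp(t)))
print(interp_err, taylor_err)    # interpolation beats Taylor over the interval
```

    The Taylor polynomial is accurate only near the expansion point, while the interpolant distributes its error across the whole interval, which is the pedagogical point the article makes.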

  19. Blind Authentication Using Periodic Properties of Interpolation

    Czech Academy of Sciences Publication Activity Database

    Mahdian, Babak; Saic, Stanislav

    2008-01-01

    Roč. 3, č. 3 (2008), s. 529-538 ISSN 1556-6013 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : image forensics * digital forgery * image tampering * interpolation detection * resampling detection Subject RIV: IN - Informatics, Computer Science Impact factor: 2.230, year: 2008

  20. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    Science.gov (United States)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  1. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    Full Text Available In this paper the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. In addition, the interpolation errors of the linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and computational cost of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in the data synchronization application of electronic transformers.
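    The error ordering analyzed in the paper can be checked on a sampled harmonic: reconstructing mid-sample values of a tone with order-1 and order-3 Lagrange interpolation (the sampling rate and tone frequency are our own illustrative choices):

```python
import numpy as np

def lagrange_eval(xs, ys, t):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at t."""
    total = 0.0
    for i in range(len(xs)):
        li = np.prod([(t - xs[j]) / (xs[i] - xs[j])
                      for j in range(len(xs)) if j != i])
        total += ys[i] * li
    return total

# 50 Hz tone sampled at 1 kHz (20 samples per cycle), reconstructed
# halfway between samples with order-1 (linear) and order-3 (cubic) schemes
fs, f0 = 1000.0, 50.0
n = np.arange(64)
x = np.sin(2 * np.pi * f0 * n / fs)

err1, err3 = 0.0, 0.0
for k in range(2, 60):
    t = k + 0.5
    exact = np.sin(2 * np.pi * f0 * t / fs)
    err1 = max(err1, abs(lagrange_eval(n[k:k + 2], x[k:k + 2], t) - exact))
    err3 = max(err3, abs(lagrange_eval(n[k - 1:k + 3], x[k - 1:k + 3], t) - exact))
print(err1, err3)
```

    For a smooth harmonic component the cubic error decays two orders faster in the sample spacing than the linear error, matching the paper's accuracy ranking; the trade-off is the larger per-point computation the paper also quantifies.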

  2. Single-slice epicardial fat area measurement. Do we need to measure the total epicardial fat volume?

    International Nuclear Information System (INIS)

    Oyama, Noriko; Goto, Daisuke; Ito, Yoichi M.

    2011-01-01

    The aim of this study was to assess a method for measuring epicardial fat volume (EFV) by means of a single-slice area measurement. We investigated the relation between a single-slice fat area measurement and total EFV. A series of 72 consecutive patients (ages 65±11 years; 36 men) who had undergone cardiac computed tomography (CT) on a 64-slice multidetector scanner with prospective electrocardiographic triggering were retrospectively reviewed. Pixels in the pericardium with a density range from -230 to -30 Hounsfield units were considered fat, giving the per-slice epicardial fat area (EFA). The EFV was estimated by the summation of EFAs multiplied by the slice thickness. We investigated the relation between total EFV and each EFA. EFAs measured at several anatomical landmarks - right pulmonary artery, origins of the left main coronary artery, right coronary artery, coronary sinus - all correlated with the EFV (r=0.77-0.92). The EFA at the LMCA level was highly reproducible and showed an excellent correlation with the EFV (r=0.92). The EFA is significantly correlated with the EFV. The EFA is a simple, quick method for representing the time-consuming EFV, which has been used as a predictive indicator of cardiovascular diseases. (author)
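    The EFV definition used above — the sum of per-slice areas multiplied by the slice thickness — amounts to the following arithmetic (the numbers are invented for illustration, not taken from the study):

```python
# per-slice epicardial fat areas (cm^2); values are illustrative only
efa_cm2 = [2.1, 3.4, 5.0, 6.2, 6.8, 6.5, 5.1, 3.0]
slice_thickness_cm = 0.3          # 3 mm slices

# EFV is the summation of EFAs multiplied by the slice thickness
efv_cm3 = sum(efa_cm2) * slice_thickness_cm
print(round(efv_cm3, 2))          # -> 11.43
```

    The study's point is that a single well-chosen EFA (e.g., at the LMCA level) correlates strongly enough with this sum that the full per-slice measurement can often be skipped.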

  3. Gravitational collapse of charged dust shell and maximal slicing condition

    International Nuclear Information System (INIS)

    Maeda, Keiichi

    1980-01-01

    The maximal slicing condition is qualitatively a good time-coordinate condition when following gravitational collapse by numerical calculation. The analytic solution of gravitational collapse under the maximal slicing condition is given for the case of a spherical charged dust shell, and the behavior of time slices with this coordinate condition is investigated. It is concluded that under the maximal slicing condition the gravitational collapse can be followed until the radius of the shell decreases to about 0.7 x (the radius of the event horizon). (author)

  4. Thin slices of child personality: Perceptual, situational, and behavioral contributions.

    Science.gov (United States)

    Tackett, Jennifer L; Herzhoff, Kathrin; Kushner, Shauna C; Rule, Nicholas

    2016-01-01

    The present study examined whether thin-slice ratings of child personality serve as a resource-efficient and theoretically valid measurement of child personality traits. We extended theoretical work on the observability, perceptual accuracy, and situational consistency of childhood personality traits by examining intersource and interjudge agreement, cross-situational consistency, and convergent, divergent, and predictive validity of thin-slice ratings. Forty-five unacquainted independent coders rated 326 children's (ages 8-12) personality in 1 of 15 thin-slice behavioral scenarios (i.e., 3 raters per slice, for over 14,000 independent thin-slice ratings). Mothers, fathers, and children rated children's personality, psychopathology, and competence. We found robust evidence for correlations between thin-slice and mother/father ratings of child personality, within- and across-task consistency of thin-slice ratings, and convergent and divergent validity with psychopathology and competence. Surprisingly, thin-slice ratings were more consistent across situations in this child sample than previously found for adults. Taken together, these results suggest that thin slices are a valid and reliable measure to assess child personality, offering a useful method of measurement beyond questionnaires, helping to address novel questions of personality perception and consistency in childhood. (c) 2016 APA, all rights reserved).

  5. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    Science.gov (United States)

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compared and contrasted the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, and the amount of precipitation is then estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, the LWP input showed the least streamflow error in the Alapaha basin and the CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
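    The two-step scheme described above can be sketched as a logistic occurrence model followed by a separate wet-day amount regression. The predictors and every coefficient below are hypothetical placeholders, not values from the study.

```python
import math

# Sketch of the two-step daily precipitation estimation described above:
# step 1, a logistic regression decides precipitation occurrence;
# step 2, a separate regression estimates the amount on wet days only.
# All predictors and coefficients here are hypothetical.

def occurrence_probability(predictors, beta):
    """Step 1: logistic regression for precipitation occurrence."""
    z = beta[0] + sum(b * x for b, x in zip(beta[1:], predictors))
    return 1.0 / (1.0 + math.exp(-z))

def wet_day_amount(predictors, theta):
    """Step 2: log-linear regression for the amount, wet days only."""
    z = theta[0] + sum(t * x for t, x in zip(theta[1:], predictors))
    return math.exp(z)  # amounts are positive, so model log(amount)

def estimate_precip(predictors, beta, theta, threshold=0.5):
    p_wet = occurrence_probability(predictors, beta)
    return wet_day_amount(predictors, theta) if p_wet >= threshold else 0.0

beta = [-1.0, 0.002, 1.5]   # hypothetical occurrence coefficients
theta = [0.5, 0.001, 0.8]   # hypothetical amount coefficients
print(estimate_precip([1200.0, 0.9], beta, theta))  # wet grid point
print(estimate_precip([200.0, 0.1], beta, theta))   # dry grid point -> 0.0
```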

  6. Assessment of sphenoid sinus volume in order to determine sexual identity, using multi-slice CT images

    Directory of Open Access Journals (Sweden)

    Habibeh Farazdaghi

    2017-02-01

    Full Text Available Background and Aims: Gender determination is an important step in identification. For gender determination, anthropometric evaluation is one of the main forensic evaluations. The aim of this study was the assessment of sphenoid sinus volume in order to determine sexual identity, using multi-slice CT images. Materials and Methods: For volumetric analysis, axial paranasal sinus CT scans with 3-mm slice thickness were used. For this study, 80 images (40 women and 40 men older than 18 years) were selected. For the assessment of sphenoid sinus volume, Digimizer software was used. The volume of the sphenoid sinus was calculated using the following equation: v=∑ (area of each slice × thickness of each slice). Statistical analysis was performed by independent t-test. Results: The mean volume of the sphenoid sinus was significantly greater in males (P=0.01). The assessed cut-off point was 9.35 cm3: 63.4% of volume assessments greater than the cut-off point were classified as male, and 64.1% of volumes below the cut-off point were classified as female. Conclusion: According to the area under the ROC curve (65.1%), sphenoid sinus volume is not an appropriate factor for differentiating males from females, which means the predictability of the cut-off point (9.35 cm3) is 65.1% close to reality.

  7. Gluteal fat thickness in pelvic CT

    International Nuclear Information System (INIS)

    Park, Jeong Mi; Jung, Se Young; Lee, Jae Mun; Park, Seog Hee; Kim, Choon Yul; Bahk, Yong Whee

    1986-01-01

    Many calcifications due to fat necrosis in the buttocks detected on pelvis roentgenograms suggest that the majority of injections intended to be intramuscular are actually delivered into fat. We measured the thickness of adult gluteal fat to determine whether an injection using a needle of the usual length reaches fat or muscle. We measured the vertical thickness of the subcutaneous fat on the CT slice 2-3 cm above the femoral head in 116 randomly collected adult cases in the Department of Radiology, St. Mary's Hospital, Catholic Medical College. We found that 32% of the female cases might actually receive an intra-adipose injection when a needle of maximum 3.8 cm length is inserted into the buttock. If deposition into muscle is desirable, the needle length should be chosen appropriately for the site of injection and the patient's deposits of fat.

  8. The use of maxillary sinus dimensions in gender determination: a thin-slice multidetector computed tomography assisted morphometric study.

    Science.gov (United States)

    Ekizoglu, Oguzhan; Inci, Ercan; Hocaoglu, Elif; Sayin, Ibrahim; Kayhan, Fatma Tulin; Can, Ismail Ozgur

    2014-05-01

    Gender determination is an important step in identification. For gender determination, anthropometric evaluation is one of the main forensic evaluations. In the present study, morphometric analysis of the maxillary sinuses was performed to determine gender. For morphometric analysis, coronal and axial paranasal sinus computed tomography (CT) scans with 1-mm slice thickness were used. For this study, 140 subjects (70 women and 70 men) were enrolled (ages ranged between 18 and 63). Each subject's maxillary sinuses were measured in the anteroposterior, transverse, and cephalocaudal directions, and their volumes were determined. In each measurement, the maxillary sinus was significantly smaller in the female group. When discriminant analysis was performed, the accuracy rate was 80% for women and 74.3% for men, with an overall rate of 77.15%. With the use of 1-mm slice thickness CT, morphometric analysis of the maxillary sinuses will be helpful for gender determination.

  9. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin; Weidendorfer, Josef

    2012-01-01

    bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation

  10. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    Science.gov (United States)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data) to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data are increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained, and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and for Apple Macintosh computers running MacOS.
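    The resampling idea behind DATASPACE can be sketched as follows. The program uses a cubic spline; this sketch substitutes piecewise-linear interpolation to stay short, and the relaxation-like data are made up, but the key step (taking logs of the abscissa, resampling at evenly spaced log values, and exponentiating back) is the one described above.

```python
import math

# Sketch of the DATASPACE resampling idea: take logs of the abscissa,
# resample at evenly spaced log values, and map back, so the output is
# increasingly spaced in t. DATASPACE uses a cubic spline; piecewise-
# linear interpolation stands in here to keep the sketch short.

def interp_linear(xs, ys, x):
    """Piecewise-linear interpolation; xs must be ascending."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            w = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] * (1 - w) + ys[i + 1] * w
    raise ValueError("x outside data range")

def log_respace(t, y, n):
    """Resample (t, y) onto n points evenly spaced in log(t)."""
    logt = [math.log(ti) for ti in t]
    lo, hi = logt[0], logt[-1]
    new_logt = [lo + k * (hi - lo) / (n - 1) for k in range(n)]
    new_logt[-1] = hi  # guard against rounding past the last point
    new_y = [interp_linear(logt, y, lt) for lt in new_logt]
    return [math.exp(lt) for lt in new_logt], new_y

# Variably spaced stress-relaxation-like data (hypothetical)
t = [0.1, 0.15, 0.4, 1.0, 3.0, 10.0]
y = [5.0, 4.8, 4.0, 3.1, 2.2, 1.5]
t2, y2 = log_respace(t, y, 5)
print(t2)  # closely spaced at short times, widely spaced at long times
```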

  11. Computation of a voxelized anthropomorphic phantom from Computer Tomography slices and 3D dose distribution calculation utilizing the MCNP5 Code

    International Nuclear Information System (INIS)

    Abella, V.; Miro, R.; Juste, B.; Verdu, G.

    2008-01-01

    Full text: The purpose of this work is to voxelize a series of tomography slices in order to provide a voxelized human phantom through a MatLab algorithm, and subsequently to simulate the irradiation of this phantom with the photon beam generated in a Theratron 780 (MDS Nordion) 60Co radiotherapy unit, using the Monte Carlo transport code MCNP (Monte Carlo N-Particle), version 5. The project provides, as results, dose mapping calculations inside the voxelized anthropomorphic phantom. Prior works validated the cobalt therapy model using a simple heterogeneous water cube-shaped phantom. The reference phantom model used in this work is the Zubal phantom, which consists of a group of pre-segmented CT slices of a human body. The CT slices are input into the MatLab program, which computes the voxelization by means of two-dimensional pixel and material identification on each slice and three-dimensional interpolation, in order to depict the phantom geometry via small cubic cells. Each slice is divided into squares of the desired voxel size, and the program then searches for the pixel intensity of a predefined material in each square, followed by a three-dimensional interpolation. At the end of this process, the program produces a voxelized phantom in which each voxel defines the mixture of the different materials that compose it. In the case of the Zubal phantom, the voxels contain pure organ materials because the phantom is pre-segmented. The output of this code follows the MCNP input deck format and is integrated into a full input model including the 60Co radiotherapy unit. Dose rates are calculated using the MCNP5 FMESH feature, a superimposed mesh tally. This feature allows particles to be tallied on an independent mesh over the problem geometry and provides a track-length estimate of the particle flux, in units of particles/cm2 (tally F4). Furthermore, the particle flux is transformed into dose by
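    The per-slice voxelization step can be sketched for the presegmented case, where each pixel already carries a pure material label. The material IDs and the 4x4 slice below are hypothetical, a simple majority vote per square stands in for the program's pixel/material identification, and the three-dimensional interpolation between slices mentioned above is omitted.

```python
from collections import Counter

# Sketch of the slice-voxelization step described above: each presegmented
# CT slice (a 2-D grid of material IDs) is divided into squares of the
# desired voxel size, and each square is assigned its dominant material.
# Material labels and the 4x4 slice are hypothetical; the Zubal case,
# where every pixel holds a pure organ label, is assumed.

def voxelize_slice(slice_ids, voxel_px):
    """Downsample a 2-D material-ID grid by majority vote per voxel."""
    ny, nx = len(slice_ids), len(slice_ids[0])
    out = []
    for y0 in range(0, ny, voxel_px):
        row = []
        for x0 in range(0, nx, voxel_px):
            block = [slice_ids[y][x]
                     for y in range(y0, min(y0 + voxel_px, ny))
                     for x in range(x0, min(x0 + voxel_px, nx))]
            row.append(Counter(block).most_common(1)[0][0])
        out.append(row)
    return out

# 4x4 slice; material 1 = soft tissue, 2 = bone (hypothetical labels)
slice_ids = [[1, 1, 2, 2],
             [1, 1, 2, 1],
             [1, 2, 2, 2],
             [1, 1, 2, 2]]
print(voxelize_slice(slice_ids, 2))  # [[1, 2], [1, 2]]
```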

  12. Use of 60Co gamma radiation to extend the shelf life of packaged sliced loaves

    International Nuclear Information System (INIS)

    Nazato, R.E.S.

    1991-11-01

    The evaluation of the conservation of sliced loaves (bread cut into slices) baked by five bakeries of Piracicaba, after gamma irradiation and storage in low-density polyethylene bags of 47.5 and 85 μm thickness, is presented. The sliced loaves were put into the bags and thermo-sealed by hand, as they would be handled by the bakers. They were then irradiated with doses of 0.0, 2.0, 4.0, 6.0, 8.0 and 10.0 kGy of gamma radiation in a cobalt-60 irradiation chamber at a dose rate of 2.68 kGy per hour, at room temperature (28 °C). After irradiation, the samples were kept at room temperature (26-34 °C) and at humidity as similar as possible to the conditions of the markets, bakeries and shops where they were sold. The samples were evaluated every day, and any that presented signs of contamination were thrown away as inappropriate for human consumption. (author)

  13. Mathematical Modeling of Thin Layer Microwave Drying of Taro Slices

    Science.gov (United States)

    Kumar, Vivek; Sharma, H. K.; Singh, K.

    2016-03-01

    The present study investigated the drying kinetics of taro slices precooked in different media, viz. water (WC), steam (SC) and lemon solution (LC), and dried at different microwave powers of 360, 540 and 720 W. Drying curves of all precooked slices at all microwave powers showed a falling-rate period along with a very short accelerating period at the beginning of drying. At all microwave powers, a higher drying rate was observed for LC slices compared to WC and SC slices. To select a suitable drying curve, seven thin-layer drying models were fitted to the experimental data. The data revealed that the Page model was the most adequate for describing the microwave drying behavior of taro slices precooked in different media. The highest effective moisture diffusivity value of 2.11 × 10-8 m2/s was obtained for LC samples, while the lowest, 0.83 × 10-8 m2/s, was obtained for WC taro slices. The activation energy (Ea) of LC taro slices was lower than that of WC and SC taro slices.
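    The Page model named above, MR = exp(-k t^n), can be fitted by linearization: ln(-ln MR) = ln k + n ln t is a straight line in ln t, so ordinary least squares recovers k and n. A sketch with synthetic data follows (the abstract's taro measurements are not reproduced):

```python
import math

# Sketch of fitting the Page thin-layer model MR = exp(-k * t^n) by
# linearisation: ln(-ln MR) = ln k + n * ln t is linear in ln t, so
# ordinary least squares on (ln t, ln(-ln MR)) recovers k and n.
# The data below are synthetic, not the abstract's measurements.

def fit_page(t, mr):
    X = [math.log(ti) for ti in t]
    Y = [math.log(-math.log(m)) for m in mr]
    npts = len(X)
    xbar, ybar = sum(X) / npts, sum(Y) / npts
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))
             / sum((x - xbar) ** 2 for x in X))
    intercept = ybar - slope * xbar
    return math.exp(intercept), slope  # (k, n)

# Synthetic drying curve generated with k = 0.05, n = 1.3
k_true, n_true = 0.05, 1.3
t = [1, 2, 5, 10, 20, 40]
mr = [math.exp(-k_true * ti ** n_true) for ti in t]
k, n = fit_page(t, mr)
print(round(k, 4), round(n, 4))  # -> 0.05 1.3
```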

  14. Effect of interpolation on parameters extracted from seating interface pressure arrays

    OpenAIRE

    Michael Wininger, PhD; Barbara Crane, PhD, PT

    2015-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...
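    Of the two paradigms compared above, the bilinear one can be sketched in a few lines; a bicubic spline would draw on 4x4 neighbourhoods instead of 2x2. The 3x3 pressure values below are hypothetical.

```python
# Minimal sketch of one of the two interpolation paradigms compared in
# the abstract: bilinear interpolation of a coarse pressure map onto a
# finer grid. (A bicubic spline would use 4x4 neighbourhoods instead of
# the 2x2 used here.) The 3x3 pressure values are hypothetical.

def bilinear_upsample(grid, factor):
    ny, nx = len(grid), len(grid[0])
    out_ny, out_nx = (ny - 1) * factor + 1, (nx - 1) * factor + 1
    out = []
    for j in range(out_ny):
        y = j / factor
        y0 = min(int(y), ny - 2)   # cell row, clamped at the edge
        dy = y - y0
        row = []
        for i in range(out_nx):
            x = i / factor
            x0 = min(int(x), nx - 2)
            dx = x - x0
            v = (grid[y0][x0] * (1 - dx) * (1 - dy) +
                 grid[y0][x0 + 1] * dx * (1 - dy) +
                 grid[y0 + 1][x0] * (1 - dx) * dy +
                 grid[y0 + 1][x0 + 1] * dx * dy)
            row.append(v)
        out.append(row)
    return out

pressure = [[0.0, 2.0, 4.0],
            [2.0, 4.0, 6.0],
            [4.0, 6.0, 8.0]]
fine = bilinear_upsample(pressure, 2)
print(fine[1][1])  # midpoint of the first cell -> 2.0
```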

  15. Thermoluminescence results on slices from a Hiroshima tile UHFSFT03

    International Nuclear Information System (INIS)

    Stoneham, Doreen

    1987-01-01

    As was reported at the May 1984 Utah thermoluminescence (TL) workshop, high-fired tiles and porcelain fragments can be sliced into 200 μm sections of constant surface area. When conventional pre-dose measurements were carried out on these slices, the doses evaluated were in good agreement with results obtained by other workers using conventional quartz separation techniques. There are several advantages in using slices. First, less sample is needed, as about 50 consecutive slices can be cut from a block measuring typically 1 cm2 in cross section and 2 cm in length. There are no problems with securing grains to the plate or losing grains during measurement. Hypothetically, there is less damage to the grains when they are cut slowly under cold water than when they are crushed. The disadvantage is that minerals other than quartz are present in the slice, and the signal is weaker than that obtained using quartz inclusions.

  16. Correlation of NTD-silicon rod and slice resistivity

    International Nuclear Information System (INIS)

    Wolverton, W.M.

    1984-01-01

    Neutron transmutation doped silicon is an electronic material which presents an opportunity to explore a high level of resistivity characterization. This is due to its excellent uniformity of dopant concentration. Appropriate resistivity measurements on the ingot raw material can be used as a predictor of slice resistivity. Correlation of finished NTD rod (i.e. ingot) resistivity to as-cut slice resistivity (after the sawing process) is addressed in the scope of this paper. Empirical data show that the shift of slice-center resistivity compared to rod-end center resistivity is a function of a new kind of rod radial-resistivity gradient. This function has two domains, and most rods are in domain "A". Correlating equations show how to significantly improve the prediction of slice resistivity of rods in domain "A". The new rod resistivity specifications have resulted in manufacturing economies in the production of NTD silicon slices.

  17. A survey of program slicing for software engineering

    Science.gov (United States)

    Beck, Jon

    1993-01-01

    This research concerns program slicing, which is used as a tool for program maintenance of software systems. Program slicing decreases the level of effort required to understand and maintain complex software systems. It was first designed as a debugging aid, but it has since been generalized into various tools and extended to include program comprehension, module cohesion estimation, requirements verification, dead code elimination, and maintenance tasks, including reverse engineering, parallelization, portability, and reuse component generation. This paper seeks to address and define terminology, theoretical concepts, program representation, different program graphs, developments in static slicing, dynamic slicing, and semantics and mathematical models. Applications for conventional slicing are presented, along with a prognosis of future work in this field.

  18. RF slice profile effects in magnetic resonance fingerprinting.

    Science.gov (United States)

    Hong, Taehwa; Han, Dongyeob; Kim, Min-Oh; Kim, Dong-Hyun

    2017-09-01

    The radio frequency (RF) slice profile effects on T1 and T2 estimation in magnetic resonance fingerprinting (MRF) are investigated with respect to time-bandwidth product (TBW), flip angle (FA) level and field inhomogeneities. Signal evolutions are generated incorporating the non-ideal slice-selective excitation process using Bloch simulation and matched to the original dictionary with and without the non-ideal slice profile taken into account. For validation, phantom and in vivo experiments are performed at 3T. Both simulation and experimental results show that the T1 and T2 errors from a non-ideal slice profile increase with increasing FA level, greater off-resonance, and lower TBW values. Therefore, RF slice profile effects should be compensated for accurate determination of the MR parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
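    The decomposition the article builds on (a cubic B-spline evaluation rewritten as linear interpolations, which on a GPU map to hardware texture fetches) can be sketched in 1-D. The coefficient array below is hypothetical, and the prefiltering step that turns data values into B-spline coefficients is assumed to have been done already.

```python
# Sketch of the two-fetch trick behind GPU B-spline filtering: a 1-D
# cubic B-spline evaluation (a 4-tap weighted sum) is rewritten as two
# linear interpolations at shifted positions. Coefficients are
# hypothetical; prefiltering (data -> B-spline coefficients) is assumed.

def bspline_weights(a):
    """Cubic B-spline basis weights for fractional position a in [0,1)."""
    w0 = (1 - a) ** 3 / 6
    w1 = (4 - 6 * a ** 2 + 3 * a ** 3) / 6
    w2 = (1 + 3 * a + 3 * a ** 2 - 3 * a ** 3) / 6
    w3 = a ** 3 / 6
    return w0, w1, w2, w3

def lerp(c, x):
    """Linear interpolation of coefficient array c at position x."""
    i = int(x)
    t = x - i
    return c[i] * (1 - t) + c[i + 1] * t

def cubic_direct(c, x):
    i, a = int(x), x - int(x)
    w0, w1, w2, w3 = bspline_weights(a)
    return w0 * c[i - 1] + w1 * c[i] + w2 * c[i + 1] + w3 * c[i + 2]

def cubic_two_lerps(c, x):
    i, a = int(x), x - int(x)
    w0, w1, w2, w3 = bspline_weights(a)
    g0, g1 = w0 + w1, w2 + w3
    h0 = (i - 1) + w1 / g0   # shifted fetch position, left pair
    h1 = (i + 1) + w3 / g1   # shifted fetch position, right pair
    return g0 * lerp(c, h0) + g1 * lerp(c, h1)

coeffs = [0.0, 1.0, 3.0, 2.0, 5.0, 4.0]
x = 2.3
print(abs(cubic_direct(coeffs, x) - cubic_two_lerps(coeffs, x)) < 1e-12)
```

On a GPU the two `lerp` calls become single linearly-filtered texture fetches, so the 4-tap filter costs two fetches instead of four.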

  20. A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Komareji, Mohammad

    2009-01-01

    This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...

  1. Dynamic Stability Analysis Using High-Order Interpolation

    Directory of Open Access Journals (Sweden)

    Juarez-Toledo C.

    2012-10-01

    Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The high-order interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area, 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the high-order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the high-order technique to isolate and extract temporal modal behavior.
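    Newton's divided differences, one of the two interpolation schemes named above, can be sketched as follows; the sample points are hypothetical and unrelated to the power-system model.

```python
# Sketch of Newton divided-difference interpolation as named in the
# abstract. The sample points are hypothetical; the polynomial through
# them is built once and then evaluated in nested (Horner-like) form.

def divided_differences(xs, ys):
    """Return the coefficients f[x0], f[x0,x1], ... computed in place."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial at x by nested multiplication."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

# Interpolate y = x^2 - 1 through four points; the cubic term vanishes.
xs = [0.0, 1.0, 3.0, 4.0]
ys = [x * x - 1 for x in xs]
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 2.0))  # -> 3.0
```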

  2. LINTAB, Linear Interpolable Tables from any Continuous Variable Function

    International Nuclear Information System (INIS)

    1988-01-01

    1 - Description of program or function: LINTAB is designed to construct linearly interpolable tables from any function. The program will start from any function of a single continuous variable... FUNKY(X). By user input the function can be defined, (1) Over 1 to 100 X ranges. (2) Within each X range the function is defined by 0 to 50 constants. (3) At boundaries between X ranges the function may be continuous or discontinuous (depending on the constants used to define the function within each X range). 2 - Method of solution: LINTAB will construct a table of X and Y values where the tabulated (X,Y) pairs will be exactly equal to the function (Y=FUNKY(X)) and linear interpolation between the tabulated pairs will be within any user specified fractional uncertainty of the function for all values of X within the requested X range
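    The core idea of LINTAB (tabulate (X, Y) pairs densely enough that linear interpolation stays within a user-specified fractional uncertainty) can be sketched with interval bisection. The function, range and tolerance below are hypothetical, and only the midpoint of each interval is tested, a simplification of guaranteeing the tolerance for all X in the range.

```python
import math

# Sketch of LINTAB's core function: build an (X, Y) table such that
# linear interpolation between adjacent entries reproduces Y = FUNKY(X)
# within a user-specified fractional tolerance. Intervals are bisected
# until the midpoint of each interval passes the test; checking only the
# midpoint is a simplification. Function and tolerance are hypothetical.

def build_table(f, x_lo, x_hi, frac_tol):
    def subdivide(a, b, out):
        m = 0.5 * (a + b)
        exact = f(m)
        interp = 0.5 * (f(a) + f(b))  # linear interpolation at midpoint
        if exact != 0 and abs(interp - exact) / abs(exact) > frac_tol:
            subdivide(a, m, out)
            subdivide(m, b, out)
        else:
            out.append(b)
    xs = [x_lo]
    subdivide(x_lo, x_hi, xs)
    return [(x, f(x)) for x in xs]

table = build_table(math.exp, 0.0, 5.0, 0.001)  # FUNKY(X) = exp(X), say
print(len(table))  # number of entries needed to meet the 0.1% tolerance
```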

  3. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach to this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method which combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms and demonstrated to achieve state-of-the-art results.

  4. Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.

    Science.gov (United States)

    Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott

    2009-05-25

    Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.

  5. Bi-local baryon interpolating fields with two flavors

    Energy Technology Data Exchange (ETDEWEB)

    Dmitrasinovic, V. [Belgrade University, Institute of Physics, Pregrevica 118, Zemun, P.O. Box 57, Beograd (RS); Chen, Hua-Xing [Institutos de Investigacion de Paterna, Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Valencia (Spain); Peking University, Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)

    2011-02-15

    We construct bi-local interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We use the restrictions following from the Pauli principle to derive relations/identities among the baryon operators with identical quantum numbers. Such relations that follow from the combined spatial, Dirac, color, and isospin Fierz transformations may be called the (total/complete) Fierz identities. These relations reduce the number of independent baryon operators with any given spin and isospin. We also study the Abelian and non-Abelian chiral transformation properties of these fields and place them into baryon chiral multiplets. Thus we derive the independent baryon interpolating fields with given values of spin (Lorentz group representation), chiral symmetry (U{sub L}(2) x U{sub R}(2) group representation) and isospin appropriate for the first angular excited states of the nucleon. (orig.)

  6. Kriging for interpolation of sparse and irregularly distributed geologic data

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, K.

    1986-12-31

    For many geologic problems, subsurface observations are available only from a small number of irregularly distributed locations, for example from a handful of drill holes in the region of interest. These observations will be interpolated one way or another, for example by hand-drawn stratigraphic cross-sections, by trend-fitting techniques, or by simple averaging which ignores spatial correlation. In this paper we consider an interpolation technique for such situations which provides, in addition to point estimates, the error estimates which are lacking from other ad hoc methods. The proposed estimator is like a kriging estimator in form, but because direct estimation of the spatial covariance function is not possible the parameters of the estimator are selected by cross-validation. Its use in estimating subsurface stratigraphy at a candidate site for geologic waste repository provides an example.

  7. The modal surface interpolation method for damage localization

    Science.gov (United States)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has been previously proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in the original and in the damaged configurations. If this is not the case, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is herein investigated. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequency values, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate modal shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to the estimation of the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error relevant only to the modal shapes rather than to all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the modal shapes) is reported herein.

  8. Reconstruction of reflectance data using an interpolation technique.

    Science.gov (United States)

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as the source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The resultant spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light source, in comparison with those obtained from the standard PCA technique.

  9. Direct Trajectory Interpolation on the Surface using an Open CNC

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2014-01-01

    International audience; Free-form surfaces are used for many industrial applications from aeronautical parts, to molds or biomedical implants. In the common machining process, computer-aided manufacturing (CAM) software generates approximated tool paths because of the limitation induced by the input tool path format of the industrial CNC. Then, during the tool path interpolation, marks on finished surfaces can appear induced by non smooth feedrate planning. Managing the geometry of the tool p...

  10. Image interpolation via graph-based Bayesian label propagation.

    Science.gov (United States)

    Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

    2014-03-01

    In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices; the problem of interpolation then becomes how to effectively propagate the label information from known points to unknown ones. This process can be posed as a Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specifically, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term is minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels; such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function to combine the global loss of the locally linear regression, the squared error of prediction bias on the available LR samples, and the manifold regularization term. It can be solved in closed form as a convex optimization problem. Experimental results demonstrate that the proposed method achieves competitive performance with state-of-the-art image interpolation algorithms.
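
    As a toy illustration of the Laplacian-regularized formulation (a drastic 1-D simplification, not the authors' algorithm, and without their local linear regression models), the closed-form solve below recovers a full signal from low-resolution samples by balancing a data-fidelity loss against a graph-smoothness penalty; the signal, sizes, and the lambda value are arbitrary assumptions.

```python
import numpy as np

# 1-D analogue of Laplacian-regularized interpolation: LR samples y are
# known at even positions of a length-n signal; the full signal x is
# recovered by minimizing  ||S x - y||^2 + lam * x^T L x,
# where S selects the known positions and L is the path-graph Laplacian.
n = 17
known = np.arange(0, n, 2)                  # indices of LR samples
t = np.linspace(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * t[known])          # observed LR values

# path-graph Laplacian: degree matrix minus adjacency matrix
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A

S = np.zeros((known.size, n))
S[np.arange(known.size), known] = 1.0

lam = 1e-2
x = np.linalg.solve(S.T @ S + lam * L, S.T @ y)  # closed-form solution
```

    The unknown interior pixels end up as averages of their neighbors (the pure smoothness solution), while the known pixels stay close to the observations; this is the same fidelity-versus-smoothness trade-off the unified objective in the abstract formalizes.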

  11. Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis

    Science.gov (United States)

    Roth, Don J.

    2013-01-01

    A software method has been developed that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography (CT). This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2D sheets (or flattened 'onion skins') in addition to a series of top-view slices and a 3D volume rendering. The advantages of viewing the data in this fashion are as follows: (1) the use of standard and specialized image processing and analysis methods is facilitated by having 2D array data versus a volume rendering; (2) accurate lateral dimensional analysis of flaws is possible in the unwrapped sheets versus a volume rendering; (3) flaws in the part jump out at the inspector with the proper contrast expansion settings in the unwrapped sheets; and (4) it is much easier for the inspector to locate flaws in the unwrapped sheets versus top-view slices for very thin cylinders. The method is fully automated and requires no input from the user except the proper voxel dimension from the CT experiment and the wall thickness of the part. The software is available in 32-bit and 64-bit versions, and can be used with binary data (8- and 16-bit) and BMP-type CT image sets. The software has memory (RAM) and hard-drive based modes. The advantage of the (64-bit) RAM-based mode is speed (and it is very practical for users of 64-bit Windows operating systems and computers having 16 GB or more RAM). The advantage of the hard-drive based analysis is that one can work with essentially unlimited-sized data sets. Separate windows are spawned for the unwrapped/re-sliced data view and any interactive image processing capability. Individual unwrapped images and unwrapped image series can be saved in common image formats. More information is available at http://www.grc.nasa.gov/WWW/OptInstr/ NDE_CT_CylinderUnwrapper.html.
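
    The unwrap/re-slice step can be sketched as a polar resampling: sample each CT slice along circles through the wall and lay the samples out as rows of an "onion skin" sheet. This is a hedged, minimal reconstruction of the idea (not NASA's software); the phantom, center, and radius values are made up.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_cylinder_slice(ct_slice, center, radius, n_theta=360):
    """Sample one CT slice along a circle of the given radius, producing
    one row of an 'unwrapped' sheet; stacking rows over radii (through
    the wall thickness) and over slices yields the flattened views."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rows = center[0] + radius * np.sin(theta)   # y (row) coordinates
    cols = center[1] + radius * np.cos(theta)   # x (col) coordinates
    # bilinear interpolation at the sub-pixel circle positions
    return map_coordinates(ct_slice, [rows, cols], order=1)

# demo: a ring-shaped phantom with a bright 'flaw' near theta = 0
img = np.zeros((129, 129))
yy, xx = np.mgrid[0:129, 0:129]
r = np.hypot(yy - 64, xx - 64)
img[(r > 40) & (r < 50)] = 1.0
img[60:68, 108:118] = 2.0                       # flaw on the right side

row = unwrap_cylinder_slice(img, (64.0, 64.0), 45.0)
```

    In the unwrapped row the flaw appears as a localized bright run against the uniform wall value, which is why defects "jump out" in this representation.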

  12. Effect of ultrasound and centrifugal force on carambola (Averrhoa carambola L.) slices during osmotic dehydration.

    Science.gov (United States)

    Barman, Nirmali; Badwaik, Laxmikant S

    2017-01-01

    Osmotic dehydration (OD) of carambola slices was carried out using glucose, sucrose, fructose and glycerol as osmotic agents with 70°Bx solute concentration, at 50°C, for 180 min. Glycerol and sucrose were selected on the basis of their higher water loss and weight reduction and lower solid gain. Further optimization of OD of carambola slices (5 mm thick) was carried out under different process conditions of temperature (40-60°C), concentration of sucrose and glycerol (50-70°Bx), time (180 min) and fruit-to-solution ratio (1:10) against various responses, viz. water loss, solid gain, texture, rehydration ratio and sensory score, according to a composite design. The optimized values for temperature and the concentrations of sucrose and glycerol were found to be 50°C, 66°Bx and 66°Bx, respectively. Under optimized conditions, the effects of ultrasound for 10, 20 and 30 min and of centrifugal force (2800 rpm) for 15, 30, 45 and 60 min on OD of carambola slices were examined. The control samples showed 68.14% water loss and 13.05% solid gain. The sample given 30 min of ultrasonic treatment showed 73.76% water loss and 9.79% solid gain, and the sample treated with centrifugal force for 60 min showed 75.65% water loss and 6.76% solid gain. The results showed that with increasing treatment time, water loss and rehydration ratio increased while solid gain and texture decreased. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Importance of interpolation and coincidence errors in data fusion

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
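
    In the simplest special case (two profiles on the same vertical grid, with averaging kernels ignored), fusion reduces to an inverse-covariance-weighted mean. The sketch below shows only that core combination rule, not the complete data fusion method with its averaging kernels and interpolation/coincidence error terms; all values are illustrative.

```python
import numpy as np

def fuse_profiles(x1, S1, x2, S2):
    """Inverse-covariance-weighted combination of two profiles given on
    the same vertical grid (a stripped-down special case; the complete
    data fusion method additionally propagates averaging kernels and
    the interpolation/coincidence error terms discussed in the paper)."""
    W1, W2 = np.linalg.inv(S1), np.linalg.inv(S2)
    S_f = np.linalg.inv(W1 + W2)                 # fused covariance
    x_f = S_f @ (W1 @ x1 + W2 @ x2)              # fused profile
    return x_f, S_f

# two noisy measurements of the same 3-level profile
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([1.2, 1.8, 3.4])
S1 = np.diag([0.1, 0.1, 0.1])
S2 = np.diag([0.1, 0.1, 0.1])
x_f, S_f = fuse_profiles(x1, S1, x2, S2)         # equal weights here
```

    With equal covariances the fused profile is the plain mean and the fused covariance is halved; unequal covariances pull the result toward the better-constrained measurement.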

  14. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.

    2014-06-11

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consist of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, placed in different locations than the manual rainfall gauges. Root mean squared errors by month and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1 km² grid for the whole region. © 2014 Royal Meteorological Society.
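
    For contrast with the spatiotemporal models, the purely spatial kind of solution the abstract alludes to (interpolation without temporal dependence) can be sketched as inverse-distance weighting; the gauge coordinates and rainfall values below are invented.

```python
import numpy as np

def idw(obs_xy, obs_val, grid_xy, power=2.0):
    """Inverse-distance-weighted spatial interpolation -- the kind of
    purely spatial baseline the spatiotemporal models are compared
    against (no temporal dependence)."""
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)              # guard exact coincidences
    w = 1.0 / d ** power
    return (w @ obs_val) / w.sum(axis=1)

# daily rainfall (mm) at four hypothetical gauges
gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([10.0, 12.0, 8.0, 14.0])
targets = np.array([[0.5, 0.5], [0.0, 0.0]])
est = idw(gauges, rain, targets)
```

    A target equidistant from all gauges gets their plain mean, and a target coinciding with a gauge reproduces its value; unlike the spatiotemporal models in the paper, this baseline ignores the day-to-day correlation of rainfall.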

  16. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    International Nuclear Information System (INIS)

    Buzzard, Gregery T.

    2012-01-01

    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. Highlights: efficient estimation of variance-based sensitivity coefficients; efficient estimation of derivative-based sensitivity coefficients; use of homotopy methods for approximation of local maxima and minima.
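
    The gPC-to-Sobol step the abstract describes can be illustrated in two dimensions: once a function is written in orthogonal (Legendre) polynomials, variance-based indices follow directly from the squared coefficients and the polynomial norms. A minimal hand-built example follows, with the coefficients entered directly rather than obtained from a sparse grid as in the paper.

```python
import numpy as np

def sobol_first_order(c):
    """First-order Sobol indices from a 2-D Legendre PCE coefficient
    matrix c[i, j] (inputs uniform on [-1, 1]).  Since E[P_i^2] =
    1/(2i+1), each term c_ij * P_i(x) * P_j(y) contributes
    c_ij^2 / ((2i+1)(2j+1)) to the variance."""
    i = np.arange(c.shape[0])[:, None]
    j = np.arange(c.shape[1])[None, :]
    contrib = c ** 2 / ((2 * i + 1) * (2 * j + 1))
    contrib[0, 0] = 0.0                       # mean term carries no variance
    total = contrib.sum()
    s_x = contrib[1:, 0].sum() / total        # terms depending on x only
    s_y = contrib[0, 1:].sum() / total        # terms depending on y only
    return s_x, s_y

# f(x, y) = x + y^2 = P1(x) + (2*P2(y) + P0(y)) / 3
c = np.zeros((3, 3))
c[1, 0] = 1.0        # x
c[0, 0] = 1.0 / 3.0  # constant part of y^2
c[0, 2] = 2.0 / 3.0  # P2(y) part of y^2
s_x, s_y = sobol_first_order(c)
```

    For this additive test function the two first-order indices sum to one (no interaction terms), and the exact values 15/19 and 4/19 drop out of the coefficient arithmetic.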

  17. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  18. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

  19. On removing interpolation and resampling artifacts in rigid image registration.

    Science.gov (United States)

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  20. High-resolution ex vivo imaging of coronary artery stents using 64-slice computed tomography - initial experience

    International Nuclear Information System (INIS)

    Rist, Carsten; Nikolaou, Konstantin; Wintersperger, Bernd J.; Reiser, Maximilian F.; Becker, Christoph R.; Flohr, Thomas

    2006-01-01

    The aim of the study was to evaluate the potential of new-generation multi-slice computed tomography (CT) scanner technology for the delineation of coronary artery stents in an ex vivo setting. Nine stents of various diameters (seven 3-mm stents, two 2.5-mm stents) were implanted into the coronary arteries of ex vivo porcine hearts and filled with an iodine-containing contrast agent mixture. Specimens were scanned with a 16-slice CT scanner (16SCT; Somatom Sensation 16, Siemens Medical Solutions), slice thickness 0.75 mm, and a 64-slice CT scanner (64SCT; Somatom Sensation 64), slice thickness 0.6 mm. Stent diameters as well as contrast densities were measured on both the 16SCT and 64SCT images. No significant differences in CT densities were observed between the 16SCT and 64SCT images outside the stent lumen: 265 ± 25 HU and 254 ± 16 HU (P=0.33), respectively. CT densities within the stent lumen derived from the 64SCT and 16SCT images were 367 ± 36 HU versus 402 ± 28 HU (P<0.05), respectively. Inner and outer stent diameters as measured from 16SCT and 64SCT images were 2.68 ± 0.08 mm versus 2.81 ± 0.07 mm and 3.29 ± 0.06 mm versus 3.18 ± 0.07 mm (P<0.05), respectively. The new 64SCT scanner proved superior to the conventional 16SCT scanner in the ex vivo assessment of coronary artery stents. Increased spatial resolution allows for improved assessment of the coronary artery stent lumen. (orig.)

  1. Influence of 60Co γ irradiation pre-treatment on characteristics of hot air drying sweet potato slices

    International Nuclear Information System (INIS)

    Jiang Ning; Liu Chunquan; Li Dajing; Liu Xia; Yan Qimei

    2012-01-01

    The influence of irradiation dose, hot air temperature and slice thickness on the dehydration characteristics and surface temperature of 60Co γ-irradiated sweet potato slices was investigated. Microscopic observation and determination of the water activity of the irradiated sweet potato were also conducted. The results show that the drying rate and the surface temperature rose with increasing irradiation dose. At a dry-basis moisture content of 150%, the drying rates of the samples were 1.92, 1.97, 2.05, 2.28 and 3.12%/min at irradiation doses of 0, 2, 5, 8 and 10 kGy, and the surface temperatures were 48.5 ℃, 46.3 ℃, 44.5 ℃, 42.2 ℃ and 41.5 ℃, respectively. The higher the air temperature and the thinner the slices, the faster the irradiated sweet potato slices dehydrated: drying at 85 ℃ finished 170 min sooner than at 65 ℃, and drying of 3 mm slices finished 228 min sooner than that of 7 mm slices. The cell walls and vacuoles of the sweet potato slices were broken after irradiation, and water activity increased with irradiation dose: the water activities of the irradiated samples were 0.92, 0.945, 0.958, 0.969 and 0.979 at doses of 0, 2, 5, 8 and 10 kGy, respectively. The hot air drying rate, surface temperature and water activity of sweet potato are thus significantly affected by irradiation. These conclusions provide a theoretical foundation for further development of combined irradiation and hot air drying of sweet potato. (authors)

  2. The hypothalamic slice approach to neuroendocrinology.

    Science.gov (United States)

    Hatton, G I

    1983-07-01

    The magnocellular peptidergic cells of the supraoptic and paraventricular nuclei comprise much of what is known as the hypothalamo-neurohypophysial system and is involved in several functions, including body fluid balance, parturition and lactation. While we have learned much from experiments in vivo, they have not produced a clear understanding of some of the crucial features associated with the functioning of this system. In particular, questions relating to the osmosensitivity of magnocellular neurones and the mechanism(s) by which their characteristic firing patterns are generated have not been answered using the older approaches. Electrophysiological studies with brain slices present direct evidence for osmosensitivity, and perhaps even osmoreceptivity, of magnocellular neurones. Other evidence indicates that the phasic bursting patterns of activity associated with vasopressin-releasing neurones (a) occur in the absence of patterned chemical synaptic input, (b) may be modulated by electrotonic conduction across gap junctions connecting magnocellular neurones and (c) are likely to be generated by endogenous membrane currents. These results make untenable the formerly held idea that phasic bursting activity is dependent upon recurrent synaptic inhibition.

  3. Microtome Sliced Block Copolymers and Nanoporous Polymers as Masks for Nanolithography

    DEFF Research Database (Denmark)

    Shvets, Violetta; Schulte, Lars; Ndoni, Sokol

    2014-01-01

    Introduction. The self-assembly of block copolymers is commonly used for the creation of very fine nanostructures [1]. The goal of our project is to test new methods of block-copolymer lithography mask preparation: macroscopic pieces of block copolymers or nanoporous polymers with cross...... PDMS can be chemically etched from the PB matrix with tetrabutylammonium fluoride in tetrahydrofuran, and a macroscopic nanoporous PB piece is obtained. Both the block-copolymer piece and the nanoporous polymer piece were sliced with a cryomicrotome perpendicular to the axis of cylinder alignment, and flakes...... of etching patterns appear only under certain parts of the thick flakes and are not continuous. Although the flakes from the block copolymer are thinner and more uniform in thickness than the flakes from the nanoporous polymer, the quality of the patterns under the nanoporous flakes appeared to be better than under the block copolymer...

  4. Thin-layer catalytic far-infrared radiation drying and flavour of tomato slices

    Directory of Open Access Journals (Sweden)

    Ernest Ekow Abano

    2014-06-01

    A far-infrared radiation (FIR) catalytic laboratory dryer was designed by us and used to dry tomato. The kinetics of drying of tomato slices with FIR energy was dependent on both the distance from the heat source and the sample thickness. Numerical evaluation of the simplified Fick's law for the Fourier number showed that the effective moisture diffusivity increased from 0.193×10⁻⁹ to 1.893×10⁻⁹ m²/s, from 0.059×10⁻⁹ to 2.885×10⁻⁹ m²/s, and from 0.170×10⁻⁹ to 4.531×10⁻⁹ m²/s for the 7, 9 and 11 mm thick slices as moisture content decreased. Application of FIR enhanced the flavour of the dried tomatoes by 36.6% when compared with the raw ones. The results demonstrate that in addition to shorter drying times, the flavour of the products can be enhanced with FIR. Therefore, FIR drying should be considered an efficient drying method for tomato with respect to minimization of processing time, enhancement of flavour, and improvements in the quality and functional properties of dried tomatoes.
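
    Effective moisture diffusivities of the kind quoted above typically come from the slope method applied to the first term of the series solution of Fick's second law for an infinite slab. A sketch with assumed numbers (the half-thickness, diffusivity, and drying times below are hypothetical, not the paper's data):

```python
import numpy as np

# First-term series solution of Fick's second law for a slab dried from
# both faces (half-thickness L): MR(t) = (8/pi^2) * exp(-pi^2 D t / (4 L^2)).
# Fitting a line to ln(MR) versus t therefore gives D from the slope.
L = 3.5e-3                                  # half of a 7 mm slice, in m
D_true = 1.0e-9                             # assumed diffusivity, m^2/s
t = np.linspace(600.0, 7200.0, 12)          # drying times, s
mr = (8.0 / np.pi**2) * np.exp(-np.pi**2 * D_true * t / (4.0 * L**2))

slope = np.polyfit(t, np.log(mr), 1)[0]     # d(ln MR)/dt
D_eff = -slope * 4.0 * L**2 / np.pi**2      # recovered diffusivity
```

    With real drying curves, the fit is restricted to the falling-rate period where the single-term approximation holds; here the synthetic data follow the model exactly, so the assumed diffusivity is recovered.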

  5. Study on the algorithm for Newton-Rapson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iterative interpolation method for NURBS curves suffers from long interpolation times, complicated calculation, and a step error that is difficult to control. This paper studies an algorithm for Newton-Raphson iterative interpolation of NURBS curves, together with its simulation. Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems, and that the algorithm is correct and consistent with the requirements of NURBS curve interpolation.
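
    The Newton-Raphson step at the heart of such interpolators can be sketched as follows: given the previous parameter value and a desired chord step ds (feedrate times the interpolation period), solve |C(u) − C(u_prev)| = ds for the next parameter. This is a hedged illustration, not the paper's algorithm; for brevity it uses a plain (non-rational) B-spline via SciPy, and all curve data are invented. A NURBS adds weights, but the iteration has the same form.

```python
import numpy as np
from scipy.interpolate import BSpline

# cubic B-spline toy path (a NURBS with all weights equal to 1)
knots = np.array([0, 0, 0, 0, 1, 2, 3, 3, 3, 3], dtype=float)
ctrl = np.array([[0, 0], [1, 2], [2, -1], [3, 3], [4, 0], [5, 1]], float)
curve = BSpline(knots, ctrl, 3)
dcurve = curve.derivative()

def next_parameter(u_prev, ds, tol=1e-10, max_iter=20):
    """Newton-Raphson solve of g(u) = |C(u) - C(u_prev)| - ds = 0,
    seeded with the first-order guess u_prev + ds / |C'(u_prev)|."""
    p0 = curve(u_prev)
    u = u_prev + ds / np.linalg.norm(dcurve(u_prev))
    for _ in range(max_iter):
        diff = curve(u) - p0
        dist = np.linalg.norm(diff)
        g = dist - ds
        if abs(g) < tol:
            break
        dg = diff @ dcurve(u) / dist        # derivative of |C(u) - p0|
        u -= g / dg
    return u

u1 = next_parameter(0.0, 0.05)              # one interpolation period
```

    Because the first-order guess is already close for small ds, the Newton correction typically converges in a few iterations, which is what makes the method attractive for real-time CNC interpolation despite its per-step cost.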

  6. The relationship between image quality and CT dose index of multi-slice low-dose chest CT

    International Nuclear Information System (INIS)

    Zhu Xiaohua; Shao Jiang; Shi Jingyun; You Zhengqian; Li Shijun; Xue Yongming

    2003-01-01

    Objective: To explore the rationality and feasibility of multi-slice low-dose CT scanning in examination of the chest. Methods: (1) X-ray dose index measurement: 120 kV tube voltage, 0.75 s rotation, 8 mm and 3 mm slice thickness, and tube current settings of 115.0, 40.0, 25.0, and 7.5 mAs were employed for every section. The X-ray radiation dose was measured and compared statistically. (2) Phantom measurement of homogeneity and noise: the technical parameters were 120 kV, 0.75 s, 8 mm and 3 mm sections, and every slice was scanned using tube currents of 115.0, 40.0, 25.0, and 7.5 mAs. Five identical regions of interest were measured on every image, and the homogeneity and noise level of the CT images were appraised. (3) Multi-slice low-dose CT in patients: 30 patients with a mass and 30 with patchy shadowing in the lung were selected randomly. The technical parameters were 120 kV, 0.75 s, and 8 mm and 3 mm slice thickness; tube currents of 115.0, 40.0, 25.0, 15.0, and 7.5 mAs were employed for each identical slice. In addition, 15 cases were examined with helical scans using tube currents of 190, 150, 40, 25, and 15 mAs. Reconstructed images with MIP, MPR, CVR, HRCT, 3D, CT virtual endoscopy, and various reconstruction intervals were compared. (4) Evaluation of image quality: CT images were evaluated by four doctors using a single-blind method, graded into 3 degrees (normal image, image with few artifacts, and image with excessive artifacts) and analyzed statistically. Results: (1) The CT dose index with 115.0 mAs tube current exceeded those of 40.0, 25.0, and 7.5 mAs by about 60%, 70%, and 85%, respectively. (2) The phantom measurements showed that the lower the CT dose, the lower the homogeneity and the higher the noise level. (3) Image quality evaluation: the percentage of normal images showed no significant difference between 8 and 3 mm at 115, 40, and 25 mAs (P>0.05). Conclusion: Multi-slice low-dose chest CT technology may protect the patients and guarantee the

  7. Evaluation of the relative biological effectiveness of carbon ion beams in the cerebellum using the rat organotypic slice culture system

    International Nuclear Information System (INIS)

    Yoshida, Yukari; Katoh, Hiroyuki; Nakano, Takashi; Suzuki, Yoshiyuki; Al-Jahdari, Wael S.; Shirai, Katsuyuki; Hamada, Nobuyuki; Funayama, Tomoo; Sakashita, Tetsuya; Kobayashi, Yasuhiko

    2012-01-01

    To clarify the relative biological effectiveness (RBE) values of carbon ion (C) beams in normal brain tissue, a rat organotypic slice culture system was used. The cerebellum was dissected from 10-day-old Wistar rats, cut parasagittally into approximately 600-μm-thick slices and cultivated using a membrane-based culture system with a liquid-air interface. Slices were irradiated with 140 kV X-rays and 18.3 MeV/amu C-beams (linear energy transfer = 108 keV/μm). After irradiation, the slices were evaluated histopathologically using hematoxylin and eosin staining, and apoptosis was quantified using the TdT-mediated dUTP-biotin nick-end labeling (TUNEL) assay. Disorganization of the external granule cell layer (EGL) and apoptosis of the external granule cells (EGCs) were induced within 24 h after exposure to doses of more than 5 Gy from C-beams and X-rays. In the early postnatal cerebellum, morphological changes following exposure to C-beams were similar to those following exposure to X-rays. The RBE values of C-beams using the EGL disorganization and the EGC TUNEL index endpoints ranged from 1.4 to 1.5. This system represents a useful model for assaying the biological effects of radiation on the brain, especially physiological and time-dependent phenomena. (author)

  8. Assessment of Myocardial Bridge and Mural Coronary Artery Using ECG-Gated 256-Slice CT Angiography: A Retrospective Study

    Directory of Open Access Journals (Sweden)

    En-sen Ma

    2013-01-01

    Recent clinical reports have indicated that the myocardial bridge and mural coronary artery complex (MB-MCA) might cause major adverse cardiac events. 256-slice CT angiography (256-slice CTA) is a newly developed CT system with faster scanning and lower radiation dose compared with other CT systems. The objective of this study is to evaluate the morphological features of MB-MCA and determine its changes from the diastole to the systole phase using 256-slice CTA. The imaging data of 2462 patients were collected retrospectively. Two independent radiologists reviewed the collected images, and the diagnosis of MB-MCA was confirmed when consistency was obtained. The length, diameter, and thickness of MB-MCA in the diastole and systole phases were recorded, and the changes of MB-MCA were calculated. Our results showed that among the 2462 patients examined, 336 (13.6%) had one or multiple MB-MCA. Out of 389 MB-MCA segments, 235 sites were located in LAD2 (60.41%). The average diameter change of MCA in LAD2 from the systole phase to the diastole phase was  mm, and 34.9% of MCAs had more than 50% diameter stenosis in the systole phase. This study suggests that the 256-slice CTA multiple-phase reconstruction technique is a reliable method to determine the changes of MB-MCA from the diastole to the systole phase.

  9. Lead Thickness Measurements

    International Nuclear Information System (INIS)

    Rucinski, R.

    1998-01-01

    The preshower lead thickness applied to the outside of D-Zero's superconducting solenoid vacuum shell was measured at the time of application. This engineering note documents those thickness measurements. The lead was ordered in sheets 0.09375-inch and 0.0625-inch thick. The tolerance on thickness was specified to be +/- 0.003-inch. The sheets were all within that thickness tolerance. The nomenclature for each sheet was designated 1T, 1B, 2T, 2B, where the numeral designates its location in the wrap and 'T' or 'B' is short for the 'top' or 'bottom' half of the solenoid. Micrometer measurements were taken at six locations around the perimeter of each sheet. The width, length, and weight of each piece were then measured. Using an assumed pure lead density of 0.40974 lb/in³, an average sheet thickness was calculated and compared to the perimeter thickness measurements. In every case, the calculated average thickness was a few mils thinner than the perimeter measurements. The ratio was constant, 0.98. This discrepancy is likely due to the assumed pure lead density. It is not felt that the perimeter is thicker than the center regions. The data suggest that the physical thickness of the sheets is uniform to +/- 0.0015-inch.
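
    The cross-check described above (average thickness from weight, footprint area, and an assumed pure-lead density) is one line of arithmetic. The sheet dimensions and weight below are hypothetical, chosen only to illustrate a ratio near the 0.98 the note reports:

```python
# Average-thickness cross-check described in the note: divide a sheet's
# weight by (density * width * length).  The dimensions and weight are
# hypothetical; the density is the pure-lead value the note assumes.
DENSITY = 0.40974          # lb/in^3, pure lead

def average_thickness(weight_lb, width_in, length_in):
    return weight_lb / (DENSITY * width_in * length_in)

# e.g. a hypothetical 36 in x 60 in sheet of nominal 0.09375 in stock
t_avg = average_thickness(81.3, 36.0, 60.0)
ratio = t_avg / 0.09375    # compare with the nominal micrometer value
```

    A ratio consistently below 1 with an assumed density is exactly the signature the note describes: real rolled lead is slightly less dense than the pure-lead value, so the weight-derived thickness comes out a few mils thin.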

  10. The Slice Algorithm For Irreducible Decomposition of Monomial Ideals

    DEFF Research Database (Denmark)

    Roune, Bjarke Hammersholt

    2009-01-01

    Irreducible decomposition of monomial ideals has an increasing number of applications from biology to pure math. This paper presents the Slice Algorithm for computing irreducible decompositions, Alexander duals and socles of monomial ideals. The paper includes experiments showing good performance...

  11. Optimal interpolation method for intercomparison of atmospheric measurements.

    Science.gov (United States)

    Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno

    2006-04-01

    Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.

  12. Advantage of Fast Fourier Interpolation for laser modeling

    International Nuclear Information System (INIS)

    Epatko, I.V.; Serov, R.V.

    2006-01-01

    The abilities of a new algorithm, the 2-dimensional Fast Fourier Interpolation (FFI) with magnification factor (zoom) 2ⁿ, whose purpose is to improve the spatial resolution when necessary, are analyzed in detail. The FFI procedure is useful when the diaphragm/aperture size is less than half of the current simulation scale. The computational noise due to the FFI procedure is less than 10⁻⁶. The additional time for FFI is approximately equal to one Fast Fourier Transform execution time. For some applications using the FFI procedure, the execution time decreases by a factor of 10⁴ compared with other laser simulation codes. (authors)
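
    A zoom-by-2 Fourier interpolation of the kind described amounts to zero-padding the centered spectrum and inverse-transforming onto the finer grid; below is a generic sketch of that operation (not the authors' code), with the standard zoom² amplitude rescaling.

```python
import numpy as np

def fft_interpolate_2d(a, zoom=2):
    """Fourier (trigonometric) interpolation: zero-pad the centered 2-D
    spectrum, inverse-transform onto a grid `zoom` times finer, and
    rescale by zoom**2 so sample values are preserved."""
    n = a.shape[0]                       # assumes a square n x n array
    m = n * zoom
    spec = np.fft.fftshift(np.fft.fft2(a))
    pad = (m - n) // 2
    spec = np.pad(spec, pad)             # zeros for the new high frequencies
    return np.fft.ifft2(np.fft.ifftshift(spec)).real * zoom ** 2

# demo on a periodic field: the upsampled grid passes through
# the original samples exactly
x = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
X, Y = np.meshgrid(x, x)
field = np.cos(X) + np.sin(Y)
fine = fft_interpolate_2d(field)         # 16x16 -> 32x32
```

    Because the added spectral bins are all zero, the interpolant coincides with the original data at the original sample positions, and the extra cost is essentially one forward and one (larger) inverse FFT, consistent with the timing remark in the abstract.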

  13. Rate of convergence of Bernstein quasi-interpolants

    International Nuclear Information System (INIS)

    Diallo, A.T.

    1995-09-01

    We show that if f ∈ C[0,1] and B_n^{(2r-1)} f (r integer ≥ 1) is the Bernstein quasi-interpolant defined by Sablonnière, then ‖B_n^{(2r-1)} f − f‖_{C[0,1]} ≤ ω_φ^{2r}(f, 1/√n), where ω_φ^{2r} is the Ditzian-Totik modulus of smoothness with φ(x) = √(x(1−x)), x ∈ [0,1]. (author). 6 refs
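    For r = 1 the family reduces to the classical Bernstein operator B_n, which is easy to sketch numerically (an illustration of the baseline operator only, not Sablonnière's higher-order construction; the check relies on the fact that B_n reproduces affine functions exactly):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Classical Bernstein operator B_n f, the r = 1 member of the family B_n^(2r-1)."""
    k = np.arange(n + 1)
    binom = np.array([comb(n, j) for j in k], dtype=float)
    # Bernstein basis polynomials evaluated at every point of x.
    basis = binom * x[:, None]**k * (1.0 - x[:, None])**(n - k)
    return basis @ f(k / n)

x = np.linspace(0.0, 1.0, 11)
# B_n reproduces affine functions exactly, so the error is only rounding noise.
print(np.allclose(bernstein(lambda t: 2 * t + 1, 50, x), 2 * x + 1))  # True
```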

  14. Data mining techniques in sensor networks summarization, interpolation and surveillance

    CERN Document Server

    Appice, Annalisa; Fumarola, Fabio; Malerba, Donato

    2013-01-01

    Sensor networks comprise a number of sensors installed across a spatially distributed network, which gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. But the limited storage capacity of a server may reduce the amount of data stored on the disk. One solution is to compute summaries of the data as it arrives, and to use these summaries to interpolate the real data.

  15. Hörmander spaces, interpolation, and elliptic problems

    CERN Document Server

    Mikhailets, Vladimir A; Malyshev, Peter V

    2014-01-01

    The monograph gives a detailed exposition of the theory of general elliptic operators (scalar and matrix) and elliptic boundary value problems in Hilbert scales of Hörmander function spaces. This theory was constructed by the authors in a number of papers published in 2005-2009. It is distinguished by a systematic use of the method of interpolation with a functional parameter of abstract Hilbert spaces and Sobolev inner product spaces. This method, the theory and their applications are expounded for the first time in the monographic literature. The monograph is written in detail and in a

  16. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

  17. Calibration method of microgrid polarimeters with image interpolation.

    Science.gov (United States)

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and because they have no moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.

  18. Cardinal Basis Piecewise Hermite Interpolation on Fuzzy Data

    Directory of Open Access Journals (Sweden)

    H. Vosoughi

    2016-01-01

    Full Text Available A numerical method, along with an explicit construction, for interpolating fuzzy data through the extension principle by means of the widely used fuzzy-valued piecewise Hermite polynomial in the general case, based on the cardinal basis functions, which satisfy a vanishing property on the successive intervals, is introduced here. We provide the numerical method in full detail, using linear-space notions for the calculations. To illustrate the method with computational examples, we take recourse to three prime cases: linear, cubic, and quintic.
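    The crisp (non-fuzzy) cardinal-basis Hermite construction underlying such methods can be sketched on a single interval; the function names below are hypothetical and this is only the classical building block, not the paper's fuzzy-valued extension:

```python
import numpy as np

def hermite_segment(t, p0, p1, m0, m1):
    """Cubic Hermite interpolant on [0, 1] in cardinal-basis form: each basis
    function matches exactly one of the four data (values p0, p1; slopes m0, m1)
    and vanishes on the remaining three conditions."""
    h00 = 2 * t**3 - 3 * t**2 + 1      # value basis at t = 0
    h10 = t**3 - 2 * t**2 + t          # slope basis at t = 0
    h01 = -2 * t**3 + 3 * t**2         # value basis at t = 1
    h11 = t**3 - t**2                  # slope basis at t = 1
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

# f(t) = t**2 has values 0, 1 and slopes 0, 2 at the interval ends; the cubic
# Hermite data of a quadratic determines it uniquely, so recovery is exact.
t = np.linspace(0.0, 1.0, 5)
print(np.allclose(hermite_segment(t, 0.0, 1.0, 0.0, 2.0), t**2))  # True
```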

  19. New extended interpolating operators for hadron correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Scardino, Francesco; Papinutto, Mauro [Roma "Sapienza" Univ. (Italy). Dipt. di Fisica; INFN, Sezione di Roma (Italy); Schaefer, Stefan [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2016-12-22

    New extended interpolating operators made of quenched three dimensional fermions are introduced in the context of lattice QCD. The mass of the 3D fermions can be tuned in a controlled way to find a better overlap of the extended operators with the states of interest. The extended operators have good renormalisation properties and are easy to control when taking the continuum limit. Moreover the short distance behaviour of the two point functions built from these operators is greatly improved. The operators have been numerically implemented and a comparison to point sources and Jacobi smeared sources has been performed on the new CLS configurations.

  20. New extended interpolating operators for hadron correlation functions

    International Nuclear Information System (INIS)

    Scardino, Francesco; Papinutto, Mauro; Schaefer, Stefan

    2016-01-01

    New extended interpolating operators made of quenched three dimensional fermions are introduced in the context of lattice QCD. The mass of the 3D fermions can be tuned in a controlled way to find a better overlap of the extended operators with the states of interest. The extended operators have good renormalisation properties and are easy to control when taking the continuum limit. Moreover the short distance behaviour of the two point functions built from these operators is greatly improved. The operators have been numerically implemented and a comparison to point sources and Jacobi smeared sources has been performed on the new CLS configurations.

  1. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    Science.gov (United States)

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  2. Geometries and interpolations for symmetric positive definite matrices

    DEFF Research Database (Denmark)

    Feragen, Aasa; Fuster, Andrea

    2017-01-01

    In light of the simulation results, we discuss the mathematical and qualitative properties of these new metrics in comparison with the classical ones. Finally, we explore the nonlinear variation of properties such as shape and scale throughout principal geodesics in different metrics, which affects the visualization of scale and shape variation in tensorial data. With the paper, we will release a software package with Matlab scripts for computing the interpolations and statistics used for the experiments in the paper (code is available at https://sites.google.com/site/aasaferagen/home/software).

  3. Trends in Continuity and Interpolation for Computer Graphics.

    Science.gov (United States)

    Gonzalez Garcia, Francisco

    2015-01-01

    In every computer graphics oriented application today, it is a common practice to texture 3D models as a way to obtain realistic material. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields in computer graphics. The article presents techniques that improve on existing state-of-the-art approaches related to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).

  4. Resource slicing in virtual wireless networks: a survey

    OpenAIRE

    Richart, Matias; Baliosian De Lazzari, Javier Ernesto; Serrat Fernández, Juan; Gorricho Moreno, Juan Luis

    2016-01-01

    New architectural and design approaches for radio access networks have appeared with the introduction of network virtualization in the wireless domain. One of these approaches splits the wireless network infrastructure into isolated virtual slices under their own management, requirements, and characteristics. Despite the advances in wireless virtualization, there are still many open issues regarding the resource allocation and isolation of wireless slices. Because of the dynamics and share...

  5. Geometry Processing of Conventionally Produced Mouse Brain Slice Images.

    Science.gov (United States)

    Agarwal, Nitin; Xu, Xiangmin; Gopi, M

    2018-04-21

    Brain mapping research in most neuroanatomical laboratories relies on conventional processing techniques, which often introduce histological artifacts such as tissue tears and tissue loss. In this paper we present techniques and algorithms for automatic registration and 3D reconstruction of conventionally produced mouse brain slices in a standardized atlas space. This is achieved first by constructing a virtual 3D mouse brain model from annotated slices of the Allen Reference Atlas (ARA). Virtual re-slicing of the reconstructed model generates ARA-based slice images corresponding to the microscopic images of histological brain sections. These image pairs are aligned using a geometric approach through contour images. Histological artifacts in the microscopic images are detected and removed using Constrained Delaunay Triangulation before performing global alignment. Finally, non-linear registration is performed by solving Laplace's equation with Dirichlet boundary conditions. Our methods provide significant improvements over previously reported registration techniques for the tested slices in 3D space, especially on slices with significant histological artifacts. Further, as one application, we count the number of neurons in various anatomical regions using a dataset of 51 microscopic slices from a single mouse brain. To the best of our knowledge, the presented work is the first that automatically registers both clean and highly damaged high-resolution histological slices of mouse brain to a 3D annotated reference atlas space. This work represents a significant contribution to this subfield of neuroscience, as it provides tools to neuroanatomists for analyzing and processing histological data. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. A COMPARATIVE STUDY OF TYMPANOPLASTY USING SLICED CARTILAGE GRAFT VS. TEMPORALIS FASCIA GRAFT

    Directory of Open Access Journals (Sweden)

    Rahul Ashok Telang

    2018-02-01

    Full Text Available BACKGROUND The objective of the study was to compare the hearing improvement after using a sliced cartilage graft with that of temporalis fascia, and to compare the graft take-up between the two graft materials. MATERIALS AND METHODS In a prospective clinical study, 60 patients with chronic mucosal otitis media, who were selected randomly from the outpatient department after obtaining their consent, were divided into 2 groups of 30 each and evaluated according to the study protocol. Their pre-operative audiometry was recorded, and both groups of patients underwent surgery with one of the graft materials: temporalis fascia or sliced tragal cartilage with a thickness of 0.5 mm. All patients were regularly followed up, and post-operative audiometry was done at 3 months. The hearing improvement in the form of closure of the air-bone gap and the graft take-up were analysed statistically. RESULTS The temporalis fascia graft group had a pre-operative ABG of 22.33 ± 6.24 dB and a post-operative ABG of 12.33 ± 4.72 dB, with a hearing improvement of 10.00 dB. The sliced cartilage graft group had a pre-operative ABG of 20.77 ± 5.75 dB and a post-operative ABG of 10.50 ± 4.46 dB, with a hearing improvement of 10.27 dB. In the temporalis fascia group, 28 (93.3%) patients had good graft take-up, and in the sliced cartilage group 29 (96.7%) had good graft take-up. There was statistically significant hearing improvement in both of our study groups, but there was no statistically significant difference between the two groups. There was no statistically significant difference in graft take-up either. CONCLUSION Sliced cartilage graft is a good auto-graft material in tympanoplasty, which can give good hearing improvement and has good graft take-up, comparable with that of temporalis fascia.

  7. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    Science.gov (United States)

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, analysis of the effect of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times sampling density), was undertaken. The following recommendations are made regarding approaches that minimized distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) interpolation degrees of 2, 4, and 8 times differ only nominally (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
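    Recommendation (1), filtering before interpolating, can be sketched with SciPy on a synthetic pressure map; the array size, smoothing sigma, and upsampling factor below are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
pressure = rng.uniform(0.0, 200.0, size=(16, 16))   # synthetic 16x16 pressure map

# Recommended order of operations: low-pass filter first, then interpolate.
smoothed = ndimage.gaussian_filter(pressure, sigma=1.0)

bilinear = ndimage.zoom(smoothed, 4, order=1)   # order=1: bilinear
bicubic = ndimage.zoom(smoothed, 4, order=3)    # order=3: cubic spline
print(bilinear.shape, bicubic.shape)            # (64, 64) (64, 64)
```

    Swapping the two paradigms is then a one-argument change (`order=1` versus `order=3`), which makes this kind of benchmark easy to reproduce on one's own data.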

  8. A simple method for multiday imaging of slice cultures.

    Science.gov (United States)

    Seidl, Armin H; Rubel, Edwin W

    2010-01-01

    The organotypic slice culture (Stoppini et al. A simple method for organotypic cultures of nervous tissue. 1991;37:173-182) has become the method of choice to answer a variety of questions in neuroscience. For many experiments, however, it would be beneficial to image or manipulate a slice culture repeatedly, for example, over the course of many days. We prepared organotypic slice cultures of the auditory brainstem of P3 and P4 mice and kept them in vitro for up to 4 weeks. Single cells in the auditory brainstem were transfected with plasmids expressing fluorescent proteins by way of electroporation (Haas et al. Single-cell electroporation for gene transfer in vivo. 2001;29:583-591). The culture was then placed in a chamber perfused with oxygenated ACSF and the labeled cell imaged with an inverted wide-field microscope repeatedly for multiple days, recording several time-points per day, before returning the slice to the incubator. We describe a simple method to image a slice culture preparation during the course of multiple days and over many continuous hours, without noticeable damage to the tissue or photobleaching. Our method uses a simple, inexpensive custom-built insulator constructed around the microscope to maintain controlled temperature and uses a perfusion chamber as used for in vitro slice recordings. (c) 2009 Wiley-Liss, Inc.

  9. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    Science.gov (United States)

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effect of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, then several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
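    A minimal IDW interpolator of the kind compared here (power = 2) might look as follows; the sample layout and concentration values are hypothetical, not the study's data:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2):
    """Inverse distance weighting: each sample's weight falls off as 1/d**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # guard against division by zero at samples
    w = 1.0 / d**power
    return (w @ z_known) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
conc = np.array([10.0, 20.0, 30.0, 40.0])            # hypothetical Pb values (mg/kg)
centre = idw(pts, conc, np.array([[0.5, 0.5]]))
print(centre)  # equidistant from all four samples -> their mean, [25.]
```

    The `power` parameter corresponds to the IDW power 1, 2, 3 varied in the study: a higher power concentrates the weight on the nearest samples.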

  10. Education and "Thick" Epistemology

    Science.gov (United States)

    Kotzee, Ben

    2011-01-01

    In this essay Ben Kotzee addresses the implications of Bernard Williams's distinction between "thick" and "thin" concepts in ethics for epistemology and for education. Kotzee holds that, as in the case of ethics, one may distinguish between "thick" and "thin" concepts of epistemology and, further, that this distinction points to the importance of…

  11. Thick film hydrogen sensor

    Science.gov (United States)

    Hoffheins, Barbara S.; Lauf, Robert J.

    1995-01-01

    A thick film hydrogen sensor element includes an essentially inert, electrically-insulating substrate having deposited thereon a thick film metallization forming at least two resistors. The metallization is a sintered composition of Pd and a sinterable binder such as glass frit. An essentially inert, electrically insulating, hydrogen impermeable passivation layer covers at least one of the resistors.

  12. The effect of slicing type on drying kinetics and quality of dried carrot

    Directory of Open Access Journals (Sweden)

    M Naghipour zadeh mahani

    2016-04-01

    Full Text Available Introduction: Carrot is one of the most common vegetables used for human nutrition because of its high vitamin and fiber content. Drying improves the product shelf life without the addition of any chemical preservative and reduces both the size of the package and the transport cost. Drying also helps to reduce postharvest losses of fruits and vegetables, which can be as high as 70%. Dried carrots are used in dehydrated soups and, in the form of powder, in pastries and sauces. The main aim of drying agricultural products is to decrease the moisture content to a level which allows safe storage over an extended period. Many fruits and vegetables can be sliced before drying. Because of the different tissues of a fruit or vegetable, cutting them in different directions and shapes creates different tissue slices. Since drying is the process by which moisture exits the internal tissue, different tissue slices cause different drying kinetics. Therefore, studying the effect of cutting parameters on drying is necessary. Materials and Methods: Carrots (Daucus carota L.) were purchased from the local market (Kerman, Iran) and stored in a refrigerator at 5°C. The initial moisture content of the carrot samples was determined by the oven drying method: the samples were dried in an oven at 105±2°C for about 24 hours. The carrots were cut with 3 blade models in 3 directions. The samples were dried in an oven at 70°C. The moisture content of the carrot slices was determined by weighing the samples during drying. Volume changes due to sample shrinkage were measured by a water displacement method. A rehydration experiment was performed by immersing a weighed amount of dried samples in hot water at 50 °C for 30 min. In this study, the effect of some cutting parameters on carrot drying and on the quality of the final dried product was considered. The tests were performed as a completely randomized design. The effects of carrot thickness at two levels (3 and 6 mm, blade in 3 models (flat blade

  13. THE EFFECT OF STIMULUS ANTICIPATION ON THE INTERPOLATED TWITCH TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Duane C. Button

    2008-12-01

    Full Text Available The objective of this study was to investigate the effect of expected and unexpected interpolated stimuli (IT) during a maximum voluntary contraction on quadriceps force output and activation. Two groups of male subjects who were either inexperienced (MI: no prior experience with IT tests) or experienced (ME: previously experienced 10 or more series of IT tests) received an expected or unexpected IT while performing quadriceps isometric maximal voluntary contractions (MVCs). Measurements included MVC force, quadriceps and hamstrings electromyographic (EMG) activity, and quadriceps inactivation as measured by the interpolated twitch technique (ITT). When performing MVCs with the expectation of an IT, the knowledge or lack of knowledge of an impending IT occurring during a contraction did not result in significant overall differences in force, ITT inactivation, quadriceps or hamstrings EMG activity. However, the expectation of an IT significantly (p < 0.0001) reduced MVC force (9.5%) and quadriceps EMG activity (14.9%) when compared to performing MVCs with prior knowledge that stimulation would not occur. While ME exhibited non-significant decreases when expecting an IT during a MVC, MI force and EMG activity significantly decreased by 12.4% and 20.9%, respectively. Overall, ME had significantly (p < 0.0001) higher force (14.5%) and less ITT inactivation (10.4%) than MI. The expectation of the noxious stimuli may account for the significant decrements in force and activation during the ITT.

  14. Flip-avoiding interpolating surface registration for skull reconstruction.

    Science.gov (United States)

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  15. Optimal Interpolation scheme to generate reference crop evapotranspiration

    Science.gov (United States)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variance in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. To compute ETo we used five OI schemes to generate grids for the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network reduce substantially the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
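    The core OI update, blending a model background with observations according to their error covariances, can be sketched on a toy grid; every matrix below is an illustrative assumption, not the paper's configuration:

```python
import numpy as np

# One optimal-interpolation analysis step on a toy 3-node grid.
xb = np.array([10.0, 12.0, 11.0])      # background (model) values at 3 grid nodes
B = 2.0 * np.eye(3)                    # background error covariance
H = np.array([[1.0, 0.0, 0.0]])        # observation operator: one obs at node 0
y = np.array([13.0])                   # observed value
R = np.array([[1.0]])                  # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal gain
xa = xb + K @ (y - H @ xb)                     # analysis: background + weighted innovation
print(xa)  # node 0 moves 2/3 of the way to the observation: [12. 12. 11.]
```

    With a diagonal `B` the correction stays at the observed node; spatial correlations in `B` (here absent) are what would spread the innovation to neighbouring nodes, which is how the background model supplies prior information in under-observed regions.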

  16. A New Interpolation Approach for Linearly Constrained Convex Optimization

    KAUST Repository

    Espinoza, Francisco

    2012-08-01

    In this thesis we propose a new class of Linearly Constrained Convex Optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove the properties of the surface, such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of Quadratic Programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.

  17. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. There are discontinuities among neighboring CT images due to the beating of the heart if no special techniques are used in taking CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are captured. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fit to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting best to the standard heart. Since correct transformation of images is required, an area-oriented interpolation method that we proposed is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, applying the same geometrical transformation method to the original projection images is proposed as a more advanced method.

  18. Interpolation methods for creating a scatter radiation exposure map

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil); Gomes, Celio S.; Lopes, Ricardo T. [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F. [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Instituto de Física

    2017-07-01

    A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. Such a map is made by measuring the exposure at regularly spaced points; that is, measurement locations are chosen by adding regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is impaired. This work uses interpolation techniques that work with irregular steps and compares their results and their limits. Interpolation was first done in angular coordinates and tested with some points missing. Delaunay tessellation interpolation was then performed on the same data for comparison. Computational and graphic treatments were done with the GNU OCTAVE software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
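    Delaunay-based interpolation over irregularly spaced points is available off the shelf; here is a sketch with scipy.interpolate.griddata on a synthetic exposure field (the measurement positions and the linear field are made up for the illustration, not the bunker data):

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
pts = np.vstack([rng.uniform(0.0, 1.0, size=(30, 2)),   # irregular measurement positions
                 [[0, 0], [0, 1], [1, 0], [1, 1]]])     # corners close the convex hull
dose = pts[:, 0] + 2.0 * pts[:, 1]                      # synthetic linear exposure field

gx, gy = np.mgrid[0.05:0.95:10j, 0.05:0.95:10j]         # regular map grid inside the hull
grid = griddata(pts, dose, (gx, gy), method='linear')   # barycentric on the Delaunay mesh
# Piecewise-linear interpolation on the triangulation is exact for a linear field.
print(np.allclose(grid, gx + 2.0 * gy))  # True
```

    Outside the convex hull of the samples `griddata` returns NaN, which is a useful reminder that a scatter map extrapolated beyond the measured region is not trustworthy.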

  19. Interpolation on the manifold of K component GMMs.

    Science.gov (United States)

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof of principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, which may be of independent interest.

  20. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

    Science.gov (United States)

    Allred, Joel

    2012-01-01

    Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool, which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.

  1. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the interpolation kernels most frequently used in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Interpolation methods for creating a scatter radiation exposure map

    International Nuclear Information System (INIS)

    Gonçalves, Elicardo A. de S.; Gomes, Celio S.; Lopes, Ricardo T.; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F.

    2017-01-01

    A well-known way to better understand radiation scattering during a radiographic exposure is to map the exposure over the space around the source and sample. Such a map is built by measuring exposure at regularly spaced points, i.e., measurement locations are chosen by advancing in regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the accuracy of the steps throughout the entire space, and regions of difficult access may compromise the regularity of the steps. This work used interpolation techniques that accommodate irregular steps and compared their results and their limits. Interpolation was first performed in angular coordinates and tested with some points missing; Delaunay tessellation interpolation was then applied to the same data for comparison. Computational and graphical processing was done with the GNU OCTAVE software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)
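The same workflow can be mirrored in Python (the abstract uses GNU Octave): SciPy's `griddata` interpolates scattered measurements on a Delaunay triangulation, exactly the tessellation-based approach the abstract compares. The exposure values below are synthetic toy data, not the bunker measurements.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# hypothetical exposure readings at irregularly spaced (x, y) positions
pts = rng.uniform(0.0, 10.0, size=(200, 2))
exposure = np.exp(-0.1 * np.hypot(pts[:, 0] - 5.0, pts[:, 1] - 5.0))

# regular grid on which the exposure map is wanted
gx, gy = np.meshgrid(np.linspace(1.0, 9.0, 50), np.linspace(1.0, 9.0, 50))

# method="linear" interpolates on the Delaunay triangulation of the points;
# grid nodes outside the convex hull of the measurements become NaN
exposure_map = griddata(pts, exposure, (gx, gy), method="linear")
```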

  3. Motion compensated frame interpolation with a symmetric optical flow constraint

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Roholm, Lars; Bruhn, Andrés

    2012-01-01

    We consider the problem of interpolating frames in an image sequence. For this purpose accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by reparametrizing the optical flow objective function. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640 × 480 video footage at 30 fps, i.e. to perform frame doubling in real time.
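The symmetric idea can be sketched as follows, assuming the flow field parametrised at the unknown middle frame is already known (the paper estimates it with a TV-L1 method; here it is simply given): each middle-frame pixel samples the two surrounding frames half a displacement backwards and forwards.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def midpoint_frame(f0, f1, flow):
    """Interpolate the frame halfway between f0 and f1.

    flow[..., 0] and flow[..., 1] hold (row, col) displacements from f0 to
    f1, parametrised at the unknown middle frame: the middle-frame pixel x
    samples f0 at x - 0.5*flow and f1 at x + 0.5*flow.
    """
    rows, cols = np.mgrid[0:f0.shape[0], 0:f0.shape[1]].astype(float)
    back = [rows - 0.5 * flow[..., 0], cols - 0.5 * flow[..., 1]]
    fwd = [rows + 0.5 * flow[..., 0], cols + 0.5 * flow[..., 1]]
    g0 = map_coordinates(f0, back, order=1, mode="nearest")
    g1 = map_coordinates(f1, fwd, order=1, mode="nearest")
    return 0.5 * (g0 + g1)

# a bright square moving 2 px to the right; the interpolated middle frame
# shows it shifted by 1 px
f0 = np.zeros((20, 20)); f0[8:12, 4:8] = 1.0
f1 = np.zeros((20, 20)); f1[8:12, 6:10] = 1.0
flow = np.zeros((20, 20, 2)); flow[..., 1] = 2.0
mid = midpoint_frame(f0, f1, flow)
```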

  4. Some observations on interpolating gauges and non-covariant gauges

    International Nuclear Information System (INIS)

    Joglekar, Satish D.

    2003-01-01

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the term defining the boundary condition. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter θ varies depends very sensitively on the parameter variation. We do this with a gauge used by Doust. We also consider the Lagrangian path-integrals in Minkowski space for gauges with a residual gauge-invariance. We point out the necessity of including an ε-term (even) in the formal treatments, without which one may reach incorrect conclusions. We further point out that the ε-term can contribute to the BRST WT-identities in a non-trivial way (even as ε → 0). These contributions lead to additional constraints on Green's functions that are not normally taken into account in the BRST formalism that ignores the ε-term, and they are characteristic of the way the singularities in propagators are handled. We argue that such a prescription will, in general, require renormalization if it is to be viable at all. (author)

  5. Anisotropic interpolation theorems of Musielak-Orlicz type

    Directory of Open Access Journals (Sweden)

    Jinxia Li

    2016-10-01

    Full Text Available Abstract Anisotropy is a common attribute of Nature, manifested as directional dependence in all or part of the physical or chemical properties of an object. In mathematics, the anisotropic property can be expressed by a fairly general discrete group of dilations $\{A^{k}: k\in\mathbb{Z}\}$, where $A$ is a real $n\times n$ matrix all of whose eigenvalues $\lambda$ satisfy $|\lambda|>1$. Let $\varphi: \mathbb{R}^{n}\times[0,\infty)\to[0,\infty)$ be an anisotropic Musielak-Orlicz function such that $\varphi(x,\cdot)$ is an Orlicz function and $\varphi(\cdot,t)$ is a Muckenhoupt $\mathbb{A}_{\infty}(A)$ weight. The aim of this article is to obtain two anisotropic interpolation theorems of Musielak-Orlicz type, which are weighted anisotropic extensions of the Marcinkiewicz interpolation theorems. The above results are new even in the isotropic weighted setting.

  6. Radiation exposure in multi-slice versus single-slice spiral CT: results of a nationwide survey

    International Nuclear Information System (INIS)

    Brix, G.; Nagel, H.D.; Stamm, G.; Veit, R.; Lechel, U.; Griebel, J.; Galanski, M.

    2003-01-01

    Multi-slice (MS) technology increases the efficacy of CT procedures and offers promising new applications. The expanding use of MSCT, however, may result in an increase in both the frequency of procedures and the levels of patient exposure. It was, therefore, the aim of this study to gain an overview of MSCT examinations conducted in Germany in 2001. All MSCT facilities were requested to provide information about 14 standard examinations with respect to scan parameters and frequency. Based on these data, dosimetric quantities were estimated using an experimentally validated formalism. Results are compared with those of a previous survey of single-slice (SS) spiral CT scanners. According to the data provided for 39 dual- and 73 quad-slice systems, the average annual number of patients examined on MSCT scanners is markedly higher than on SSCT scanners (5500 vs 3500). The average effective dose to patients changed from 7.4 mSv at single-slice scanners to 5.5 mSv and 8.1 mSv at dual- and quad-slice scanners, respectively. There is considerable potential for dose reduction at quad-slice systems through optimisation of scan protocols and better education of the personnel. To avoid an increase in the collective effective dose from CT procedures, a clear medical justification is required in each case. (orig.)

  7. Relationships of clinical protocols and reconstruction kernels with image quality and radiation dose in a 128-slice CT scanner: Study with an anthropomorphic and water phantom

    International Nuclear Information System (INIS)

    Paul, Jijo; Krauss, B.; Banckwitz, R.; Maentele, W.; Bauer, R.W.; Vogl, T.J.

    2012-01-01

    Research highlights: ► Clinical protocol, reconstruction kernel, reconstructed slice thickness, phantom diameter and the density of the material it contains directly affect the image quality of DSCT. ► The dual-energy protocol shows the lowest DLP of all protocols examined. ► Dual-energy fused images show excellent image quality, with noise equal to that of single-source or high-pitch mode protocol images. ► Advanced CT technology improves image quality and considerably reduces radiation dose. ► An important finding is the comparatively higher DLP of the dual-source high-pitch protocol compared to other single- or dual-energy protocols. - Abstract: Purpose: The aim of this study was to explore the relationship of scanning parameters (clinical protocols), reconstruction kernels and slice thickness with image quality and radiation dose in a DSCT. Materials and methods: The chest of an anthropomorphic phantom was scanned on a DSCT scanner (Siemens Somatom Definition Flash) using different clinical protocols, including single- and dual-energy modes. Four scan protocols were investigated: 1) single-source 120 kV, 110 mA s; 2) single-source 100 kV, 180 mA s; 3) high-pitch 120 kV, 130 mA s; and 4) dual-energy 100/Sn140 kV, eff. mA s 89, 76. The automatic exposure control was switched off for all scans and the CTDIvol selected was between 7.12 and 7.37 mGy. The raw data were reconstructed using the reconstruction kernels B31f, B80f and B70f, with slice thicknesses of 1.0 mm and 5.0 mm. Finally, the same parameters and procedures were used for scanning the water phantom. The Friedman test and the Wilcoxon matched-pairs test were used for statistical analysis. Results: The DLP based on the given CTDIvol values showed significantly lower exposure for protocol 4 when compared to protocol 1 (percent difference 5.18%), protocol 2 (percent diff. 4.51%), and protocol 3 (percent diff. 8.81%). The highest change in Hounsfield Units was observed with dual

  8. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    In the OCT scans considered, values are given along the lines of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve the known values along the grid lines, whereas the proposed weighted method combines the accuracy of the transfinite interpolation near the grid lines and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant differences in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...
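Transfinite interpolation from the four edges of a square can be sketched with the classic Coons construction (one plausible reading of the "transfinite" method; the paper's exact formulation may differ): blend the two opposite-edge interpolants and subtract the bilinear corner term. The construction reproduces any function of the form g(x) + h(y) exactly.

```python
import numpy as np

def coons_fill(edge_vals):
    """Transfinite (Coons) interpolation of the interior of a square from
    its four edges. edge_vals is an (n, m) array of which only the first
    and last rows and columns are used."""
    n, m = edge_vals.shape
    y = np.linspace(0.0, 1.0, n)[:, None]   # row parameter
    x = np.linspace(0.0, 1.0, m)[None, :]   # column parameter
    top, bottom = edge_vals[0], edge_vals[-1]
    left, right = edge_vals[:, :1], edge_vals[:, -1:]
    blend_rows = (1 - y) * top[None, :] + y * bottom[None, :]
    blend_cols = (1 - x) * left + x * right
    corners = ((1 - y) * (1 - x) * edge_vals[0, 0]
               + (1 - y) * x * edge_vals[0, -1]
               + y * (1 - x) * edge_vals[-1, 0]
               + y * x * edge_vals[-1, -1])
    return blend_rows + blend_cols - corners

# additive test function: Coons interpolation recovers it exactly from
# its edge values alone
yy, xx = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21),
                     indexing="ij")
f = xx**2 + np.sin(yy)
err = float(np.max(np.abs(coons_fill(f) - f)))
```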

  9. Multi-slice computed tomography-assisted endoscopic transsphenoidal surgery for pituitary macroadenoma: a comparison with conventional microscopic transsphenoidal surgery.

    Science.gov (United States)

    Tosaka, Masahiko; Nagaki, Tomohito; Honda, Fumiaki; Takahashi, Katsumasa; Yoshimoto, Yuhei

    2015-11-01

    Intraoperative computed tomography (iCT) is a reliable method for the detection of residual tumour, but previous single-slice low-resolution computed tomography (CT) without coronal or sagittal reconstructions was not of adequate quality for clinical use. The present study evaluated the results of multi-slice iCT-assisted endoscopic transsphenoidal surgery for pituitary macroadenoma. This retrospective study included 30 consecutive patients with newly diagnosed or recurrent pituitary macroadenoma with supradiaphragmatic extension who underwent endoscopic transsphenoidal surgery using iCT (eTSS+iCT group), and a control group of 30 consecutive patients who underwent conventional endoscope-assisted transsphenoidal surgery (cTSS group). The tumour volume was calculated by multiplying the tumour area by the slice thickness. Visual acuity and visual field were estimated by the visual impairment score (VIS). The resection extent, (preoperative tumour volume - postoperative residual tumour volume)/preoperative tumour volume, was 98.9% (median) in the eTSS+iCT group and 91.7% in the cTSS group, a significant difference between the groups (P = 0.04). Removal rates above 95% and above 90% were significantly more frequent in the eTSS+iCT group than in the cTSS group (P = 0.02 and P = 0.001, respectively). However, improvement in VIS showed no significant difference between the groups, and neither did the rate of complications. Multi-slice iCT-assisted endoscopic transsphenoidal surgery may improve the resection extent of pituitary macroadenoma. Multi-slice iCT may have advantages over intraoperative magnetic resonance imaging in terms of lower cost, shorter acquisition time, and no need for special protection against magnetic fields.

  10. Tumor tissue slice cultures as a platform for analyzing tissue-penetration and biological activities of nanoparticles.

    Science.gov (United States)

    Merz, Lea; Höbel, Sabrina; Kallendrusch, Sonja; Ewe, Alexander; Bechmann, Ingo; Franke, Heike; Merz, Felicitas; Aigner, Achim

    2017-03-01

    The success of therapeutic nanoparticles depends, among other factors, on their ability to penetrate a tissue for actually reaching the target cells, and on their efficient cellular uptake in the context of intact tissue and stroma. Various nanoparticle modifications have been implemented for altering physicochemical and biological properties. Their analysis, however, has so far mainly relied on cell culture experiments, which only poorly reflect the in vivo situation, or on in vivo experiments that are often complicated by whole-body pharmacokinetics and are rather tedious, especially when analyzing larger nanoparticle sets. For the more precise analysis of nanoparticle properties at their desired site of action, efficient ex vivo systems closely mimicking in vivo tissue properties are needed. In this paper, we describe the setup of organotypic tumor tissue slice cultures for the analysis of the tissue-penetrating properties and biological activities of nanoparticles. As a model system, we employ 350 μm-thick slice cultures from different tumor xenograft tissues, and analyze modified or non-modified polyethylenimine (PEI) complexes as well as their lipopolyplex derivatives for siRNA delivery. The described conditions for tissue slice preparation and culture ensure excellent tissue preservation for at least 14 days, thus allowing for prolonged experimentation and analysis. When using fluorescently labeled siRNA for complex visualization, fluorescence microscopy of cryo-sectioned tissue slices reveals different degrees of nanoparticle tissue penetration, dependent on their surface charge. More importantly, the determination of siRNA-mediated knockdown efficacies of an endogenous target gene, the oncogenic survival factor Survivin, reveals the possibility to accurately assess biological nanoparticle activities in situ, i.e. in living cells in their original environment. Taken together, we establish tumor (xenograft) tissue slices for the accurate and facile ex vivo assessment of

  11. An application of gain-scheduled control using state-space interpolation to hydroactive gas bearings

    DEFF Research Database (Denmark)

    Theisen, Lukas Roy Svane; Camino, Juan F.; Niemann, Hans Henrik

    2016-01-01

    This paper presents a gain-scheduling strategy using state-space interpolation, which avoids both the performance loss and the increase of controller order associated with the Youla parametrisation. The proposed state-space interpolation for gain-scheduling is applied to mass imbalance rejection for a controllable gas bearing scheduled in two parameters. Comparisons against the Youla-based scheduling demonstrate the superiority of the state-space interpolation.
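The core idea of interpolating controllers directly in state space can be sketched as entry-wise blending of two fixed-order realizations (hypothetical code; it presupposes that both realizations use consistent state coordinates, which is the crux of the actual method):

```python
import numpy as np

def interp_ss(ss0, ss1, alpha):
    """Entry-wise linear interpolation between two state-space controllers
    given as (A, B, C, D) tuples. The controller order stays fixed, unlike
    Youla-based scheduling; consistent state coordinates are assumed."""
    return tuple((1.0 - alpha) * M0 + alpha * M1 for M0, M1 in zip(ss0, ss1))

# two hypothetical first-order vertex controllers and their midpoint
ss0 = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[2.0]]), np.array([[0.0]]))
ss1 = (np.array([[-3.0]]), np.array([[1.0]]), np.array([[4.0]]), np.array([[0.0]]))
A, B, C, D = interp_ss(ss0, ss1, 0.5)
```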

  12. Convergence acceleration of quasi-periodic and quasi-periodic-rational interpolations by polynomial corrections

    OpenAIRE

    Lusine Poghosyan

    2014-01-01

    The paper considers convergence acceleration of the quasi-periodic and the quasi-periodic-rational interpolations by application of polynomial corrections. We investigate the convergence of the resultant quasi-periodic-polynomial and quasi-periodic-rational-polynomial interpolations and derive the exact constants of the main terms of the asymptotic errors in the regions away from the endpoints. Results of numerical experiments clarify the behavior of the corresponding interpolations for a moderate number of in...

  13. A map for the thick beam-beam interaction

    International Nuclear Information System (INIS)

    Irwin, J.; Chen, T.

    1995-01-01

    The authors give a closed-form expression for the thick beam-beam interaction for a small disruption parameter, as typical in electron-positron storage rings. The dependence on transverse angle and position of the particle trajectory as well as the longitudinal position of collision and the waist-modified shape of the beam distribution are included. Large incident angles, as are present for beam-halo particles or for large crossing-angle geometry, are accurately represented. The closed-form expression is well approximated by polynomials times the complex error function. Comparisons with multi-slice representations show even the first order terms are more accurate than a five slice representation, saving a factor of 5 in computation time

  14. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Directory of Open Access Journals (Sweden)

    Mathieu Lepot

    2017-10-01

    Full Text Available A thorough review has been performed of interpolation methods to fill gaps in time series, of efficiency criteria, and of uncertainty quantification. On the one hand, numerous methods are available: interpolation, regression, autoregressive and machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, when such uncertainties are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
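The simplest of the reviewed approaches, linear interpolation across gaps, takes only a few lines (NumPy sketch; the review's point is that richer methods and uncertainty estimates are often warranted):

```python
import numpy as np

def fill_gaps_linear(t, y):
    """Fill NaN gaps in a time series by linear interpolation between the
    nearest known samples."""
    y = np.asarray(y, dtype=float)
    known = ~np.isnan(y)
    filled = y.copy()
    filled[~known] = np.interp(t[~known], t[known], y[known])
    return filled

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, np.nan, 4.0, np.nan, 8.0])
filled = fill_gaps_linear(t, y)  # → [0. 2. 4. 6. 8.]
```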

  15. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application to image interpolation. To overcome this problem, a method for two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. Following the single instruction multiple threads (SIMT) execution model of CUDA, optimization measures such as coalesced access and shared memory are adopted in this study. To eliminate edge distortion in image interpolation, a natural suture algorithm is utilized in overlapping regions, while a data-space strategy separates 2D images into blocks or divides 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved substantial acceleration in each basic computing step. The experiments showed that the efficiency of image GRBF interpolation on the CUDA platform was markedly improved compared with CPU computation. The present method is a useful reference for applications of image interpolation.
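The CPU reference computation that such a CUDA implementation accelerates can be sketched with SciPy's `RBFInterpolator` using a Gaussian kernel (illustrative toy data; `epsilon` is the kernel shape parameter):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 1.0, size=(40, 2))           # scattered sample positions
vals = np.sin(2.0 * np.pi * pts[:, 0]) * pts[:, 1]  # toy intensities

# exact (smoothing=0) Gaussian RBF interpolant through the samples
interp = RBFInterpolator(pts, vals, kernel="gaussian", epsilon=10.0)

recon = interp(pts)                      # reproduces the sample values
query = interp(np.array([[0.5, 0.5]]))   # evaluates anywhere in between
```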

  16. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non-… to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super-…; the polar interpolation increases the estimation precision.

  17. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative technique for imaging biological tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO)-optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, or cubic convolution interpolation, providing higher imaging quality with significantly fewer measurement positions or scanning times.

  18. The diagnostic value of multi-slice spiral CT virtual bronchoscopy in tracheal and bronchial disease

    International Nuclear Information System (INIS)

    Han Ying; Ma Daqing

    2006-01-01

    Objective: To assess the diagnostic value of multi-slice spiral CT virtual bronchoscopy (CTVB) in tracheal and bronchial disease. Methods: Forty-two patients with central lung cancer (n=35), endobronchial tuberculosis (n=3), benign intrabronchial tumor (n=3), or an intrabronchial foreign body (n=1) underwent multi-slice spiral CT examination. All final diagnoses were proved by pathology, except for the patient with an endoluminal foreign body, whose diagnosis was established clinically. All patients were scanned on a GE Lightspeed 99 scanner using 10 mm collimation and a pitch of 1.35, with images reconstructed at 1 mm intervals and 1.25 mm thickness. The transverse CT and virtual bronchoscopy images of the chest were reviewed by two separate radiologists familiar with tracheal and bronchial anatomy. Results: Among the 35 patients with central lung cancer, tumor of the tracheal or bronchial lumen appeared as a mass in 22 and as bronchial stenosis in 13, and bronchial wall thickening was revealed on transverse CT in all 35 cases. The 3 patients with endobronchial tuberculosis showed bronchial lumen narrowing on CTVB; transverse CT revealed bronchial wall thickening extending over a long segment. The 3 patients with benign intrabronchial tumor showed nodules in the tracheal or bronchial lumen on CTVB, without wall thickening on transverse CT. In the patient with an intrabronchial foreign body, CTVB detected occlusion of the bronchial lumen and was able to visualize the airway beyond the stenosis; the bronchial wall showed no thickening on transverse CT. Conclusion: Multi-slice spiral CTVB can depict the morphology of tracheal and bronchial disease. Combined with transverse CT, it provides useful diagnostic reference for bronchial disease. (authors)

  19. ANGELO-LAMBDA, Covariance matrix interpolation and mathematical verification

    International Nuclear Information System (INIS)

    Kodeli, Ivo

    2007-01-01

    1 - Description of program or function: The codes ANGELO-2.3 and LAMBDA-2.3 are used for the interpolation of cross section covariance data from the original to a user-defined energy group structure, and for mathematical tests of the matrices, respectively. The LAMBDA-2.3 code calculates the eigenvalues of the matrices (either the original or the converted ones) and classifies them accordingly into positive and negative matrices. This verification is strongly recommended before using any covariance matrices. These versions of the two codes are extended versions of the previous codes available in the package NEA-1264 - ZZ-VITAMIN-J/COVA. They were specifically developed for the purposes of the OECD LWR UAM benchmark, in particular for processing the ZZ-SCALE5.1/COVA-44G cross section covariance matrix library retrieved from the SCALE-5.1 package. Either the original SCALE-5.1 libraries or the libraries separated into several files by nuclide can (in principle) be processed by the ANGELO/LAMBDA codes, but the use of the one-nuclide data is strongly recommended. Due to large deviations of the correlation matrix terms from unity observed in some SCALE-5.1 covariance matrices, the previously more severe acceptance condition in the ANGELO-2.3 code was relaxed. In case the correlation coefficients exceed 1.0, only a warning message is issued and the coefficients are replaced by 1.0. 2 - Methods: ANGELO-2.3 interpolates the covariance matrices to a union grid using flat weighting. The LAMBDA-2.3 code includes the mathematical routines to calculate the eigenvalues of the covariance matrices. 3 - Restrictions on the complexity of the problem: The algorithm used in ANGELO is relatively simple, therefore interpolations involving an energy group structure very different from the original (e.g. a large difference in the number of energy groups) may not be accurate. In particular, in the case of the MT=1018 data (fission spectra covariances) the algorithm may not be
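Flat-weight interpolation onto a finer group structure amounts to a piecewise-constant lookup: each destination group inherits the covariance of the source group containing it. A small sketch of this idea with hypothetical group boundaries (ANGELO's actual handling of group structures is more involved):

```python
import numpy as np

def regrid_cov_flat(cov, src_edges, dst_edges):
    """Map a multigroup covariance matrix onto a new group structure by
    flat weighting: each destination group takes the value of the source
    group containing its midpoint. dst_edges must lie inside the range
    spanned by src_edges."""
    dst_edges = np.asarray(dst_edges, dtype=float)
    mid = 0.5 * (dst_edges[:-1] + dst_edges[1:])
    idx = np.searchsorted(src_edges, mid, side="right") - 1
    idx = np.clip(idx, 0, len(src_edges) - 2)
    return cov[np.ix_(idx, idx)]

src = np.array([0.0, 1.0, 10.0])            # 2 coarse groups
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])
dst = np.array([0.0, 0.5, 1.0, 5.0, 10.0])  # 4 finer groups
fine_cov = regrid_cov_flat(cov, src, dst)
```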

  20. Ocean Sediment Thickness Contours

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Ocean sediment thickness contours in 200 meter intervals for water depths ranging from 0 - 18,000 meters. These contours were derived from a global sediment...

  1. Facial soft tissue thickness in North Indian adult population

    Directory of Open Access Journals (Sweden)

    Tanushri Saxena

    2012-01-01

    Full Text Available Objectives: Forensic facial reconstruction is an attempt to reproduce a likeness of the facial features of an individual, based on characteristics of the skull, for the purpose of individual identification. The aim of this study was to determine the soft tissue thickness values of individuals of the Bareilly population, Uttar Pradesh, India, and to evaluate whether these values can help in forensic identification. Study design: A total of 40 individuals (19 males, 21 females) were evaluated using spiral computed tomographic (CT) scans with 2 mm slice thickness in axial sections, and soft tissue thickness was measured at seven midfacial anthropological landmarks. Results: Facial soft tissue thickness values decreased with age. Soft tissue thickness values were lower in females than in males, except at the ramus region. Differences between the left and right values within individuals were not significant. Conclusion: Soft tissue thickness values are an important factor in facial reconstruction and also help in the forensic identification of an individual. CT gives a good representation of these values and is hence considered an important tool in facial reconstruction. This study was conducted in a North Indian population, and further studies with larger sample sizes can add to the data regarding soft tissue thickness.

  2. Diabat Interpolation for Polymorph Free-Energy Differences.

    Science.gov (United States)

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  3. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)

  4. Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme

    Energy Technology Data Exchange (ETDEWEB)

    Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations; the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
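As a rough sketch of the underlying idea (not the JAERI code itself), a single CIP advection step for df/dt + u df/dx = 0 evaluates the Hermite cubic of the upwind cell, built from both f and its spatial derivative g, at the departure point, and updates g from the derivative of the same cubic. Grid spacing, Courant number, and the Gaussian initial profile below are arbitrary illustration values:

```python
import numpy as np

def cip_step(f, g, c):
    """One Cubic Interpolated Propagation step for df/dt + u df/dx = 0 on a
    periodic unit-spaced grid, with 0 < c = u*dt/dx < 1. Both the profile f
    and its derivative g are advected: each is evaluated from the Hermite
    cubic constrained by (f, g) at the two ends of the upwind cell."""
    f0, g0 = np.roll(f, 1), np.roll(g, 1)       # upwind node i-1 (u > 0)
    t = 1.0 - c                                  # departure point, local coords
    # Hermite basis functions and their derivatives at t
    h00, h10 = 2*t**3 - 3*t**2 + 1, t**3 - 2*t**2 + t
    h01, h11 = -2*t**3 + 3*t**2, t**3 - t**2
    d00, d10 = 6*t**2 - 6*t, 3*t**2 - 4*t + 1
    d01, d11 = -6*t**2 + 6*t, 3*t**2 - 2*t
    f_new = f0*h00 + g0*h10 + f*h01 + g*h11
    g_new = f0*d00 + g0*d10 + f*d01 + g*d11
    return f_new, g_new

x = np.arange(64)
f = np.exp(-0.1 * (x - 20.0) ** 2)               # Gaussian pulse at cell 20
g = -0.2 * (x - 20.0) * f                        # its analytic derivative
for _ in range(40):                              # advect by 40 * 0.5 = 20 cells
    f, g = cip_step(f, g, 0.5)
print(int(np.argmax(f)))                         # peak should move to ~40
```

Because the subgrid profile carries the derivative information, the pulse translates with very little numerical diffusion compared to a plain upwind scheme.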

  5. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The Discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important part of single sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above a threshold value, it will lie very close to the Cramer-Rao lower bound (CRLB), which is dependent on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) can not only retain excellent capabilities for generalizing and fitting but also exhibit lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate on the Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
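The paper's LS-SVR interpolator is not reproduced here, but the classic baseline it is compared against, parabolic interpolation of the DFT magnitude peak, can be sketched in a few lines. The window choice and signal parameters below are illustrative assumptions:

```python
import numpy as np

def estimate_frequency(x, fs):
    """Estimate a single sinusoid's frequency by fitting a parabola through
    the log-magnitudes of the three DFT bins around the spectral peak."""
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spectrum[1:-1])) + 1        # peak bin (skip edge bins)
    a, b, c = np.log(spectrum[k - 1 : k + 2])
    delta = 0.5 * (a - c) / (a - 2 * b + c)       # fractional bin offset
    return (k + delta) * fs / n

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 123.4 * t)                 # true frequency: 123.4 Hz
f_hat = estimate_frequency(x, fs)
print(round(f_hat, 1))
```

The true frequency of 123.4 Hz falls between bins (bin spacing is about 0.49 Hz), so the sub-bin offset recovered by the parabola is what makes the estimate accurate.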

  6. Finite element analysis of rotating beams physics based interpolation

    CERN Document Server

    Ganguli, Ranjan

    2017-01-01

    This book addresses the solution of rotating beam free-vibration problems using the finite element method. It provides an introduction to the governing equation of a rotating beam, before outlining the solution procedures using Rayleigh-Ritz, Galerkin and finite element methods. The possibility of improving the convergence of finite element methods through a judicious selection of interpolation functions, which are closer to the problem physics, is also addressed. The book offers a valuable guide for students and researchers working on rotating beam problems – important engineering structures used in helicopter rotors, wind turbines, gas turbines, steam turbines and propellers – and their applications. It can also be used as a textbook for specialized graduate and professional courses on advanced applications of finite element analysis.

  7. Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia

    Directory of Open Access Journals (Sweden)

    Hassan Zulkarnain

    2018-01-01

    Full Text Available The inconsistency in inter-seasonal rainfall due to climate change will cause a different pattern in the rainfall characteristics and distribution. Peninsular Malaysia is not an exception to this inconsistency, which results in extreme events such as floods and water scarcity. This study evaluates the seasonal patterns in rainfall indices such as total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. 40 years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the formation of rainfall characteristics is significant during the Northeast monsoon (NEM), as compared to the Southwest monsoon (SWM). Also, there is high rainfall intensity and frequency related to extremes over the eastern coasts of the Peninsula during the NEM season.
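A minimal sketch of the Inverse Distance Weighted method used in the study: each unsampled point is a weighted average of station values, with weights proportional to inverse distance raised to a power. The rain-gauge coordinates and seasonal totals below are hypothetical illustration values, not data from the paper:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse Distance Weighted interpolation: weights ~ 1/distance**power,
    normalised so they sum to one at each query point."""
    d = np.linalg.norm(xy_known[None, :, :] - xy_query[:, None, :], axis=2)
    d = np.where(d == 0, 1e-12, d)          # a query at a station gets its value
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

# Hypothetical rain-gauge stations (x, y in km) and seasonal rainfall totals (mm)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
totals   = np.array([1200.0, 1500.0, 1100.0, 1600.0])
grid     = np.array([[5.0, 5.0], [0.0, 0.0]])
est = idw(stations, totals, grid)
print(est)
```

The centre point, equidistant from all four gauges, gets the plain mean (1350 mm), while a query coinciding with a station reproduces that station's total.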

  8. Perbaikan Metode Penghitungan Debit Sungai Menggunakan Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Budi I. Setiawan

    2007-09-01

    Full Text Available This paper presents an improved method for computing river discharge using cubic spline interpolation. The spline function is used to continuously describe the river cross-section profile formed from measurements of distance and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is available via the Newton-Raphson method, which simplifies the calculation of area and perimeter when the river water level is known. The new method can directly compute river discharge using the Manning formula and produce a rating curve. This paper presents an example of discharge measurement on the Rudeng River, Aceh. The river is about 120 m wide and 7 m deep; at the time of measurement it had a discharge of 41.3 m³/s, and its rating curve follows the formula Q = 0.1649 × H^2.884, where Q is the discharge (m³/s) and H is the water level above the river bed (m).

  9. An algorithm for centerline extraction using natural neighbour interpolation

    DEFF Research Database (Denmark)

    Mioc, Darka; Antón Castro, Francesc/François; Dharmaraj, Girija

    2004-01-01

    , especially due to the lack of explicit topology in commercial GIS systems. Indeed, each map update might require the batch processing of the whole map. Currently, commercial GIS do not offer completely automatic raster/vector conversion even for simple scanned black and white maps. Various commercial raster...... they need user-defined tolerance settings, which causes difficulties in the extraction of complex spatial features, for example: road junctions, curved or irregular lines and complex intersections of linear features. The approach we use here is based on image processing filtering techniques to extract...... to the improvement of data capture and conversion in GIS and to develop a software toolkit for automated raster/vector conversion. The approach is based on computing the skeleton from Voronoi diagrams using natural neighbour interpolation. In this paper we present the algorithm for skeleton extraction from scanned...

  10. Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia

    Science.gov (United States)

    Hassan, Zulkarnain; Haidir, Ahmad; Saad, Farah Naemah Mohd; Ayob, Afizah; Rahim, Mustaqqim Abdul; Ghazaly, Zuhayr Md.

    2018-03-01

    The inconsistency in inter-seasonal rainfall due to climate change will cause a different pattern in the rainfall characteristics and distribution. Peninsular Malaysia is not an exception to this inconsistency, which results in extreme events such as floods and water scarcity. This study evaluates the seasonal patterns in rainfall indices such as total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. 40 years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the formation of rainfall characteristics is significant during the Northeast monsoon (NEM), as compared to the Southwest monsoon (SWM). Also, there is high rainfall intensity and frequency related to extremes over the eastern coasts of the Peninsula during the NEM season.

  11. On the exact interpolating function in ABJ theory

    Energy Technology Data Exchange (ETDEWEB)

    Cavaglià, Andrea [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); Gromov, Nikolay [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); St. Petersburg INP,Gatchina, 188 300, St.Petersburg (Russian Federation); Levkovich-Maslyuk, Fedor [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); Nordita, KTH Royal Institute of Technology and Stockholm University,Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)

    2016-12-16

    Based on the recent indications of integrability in the planar ABJ model, we conjecture an exact expression for the interpolating function h(λ₁, λ₂) in this theory. Our conjecture is based on the observation that the integrability structure of the ABJM theory given by its Quantum Spectral Curve is very rigid and does not allow for a simple consistent modification. Under this assumption, we revised the previous comparison of localization results and exact all loop integrability calculations done for the ABJM theory by one of the authors and Grigory Sizov, fixing h(λ₁, λ₂). We checked our conjecture against various weak coupling expansions, at strong coupling and also demonstrated its invariance under the Seiberg-like duality. This match also gives further support to the integrability of the model. If our conjecture is correct, it extends all the available integrability results in the ABJM model to the ABJ model.

  12. A fast and accurate dihedral interpolation loop subdivision scheme

    Science.gov (United States)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
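The adaptive-refinement criterion described above hinges on the dihedral angle parameter, i.e. the angle between the normals of a triangular face and its neighbour. A minimal sketch with hypothetical triangle coordinates (not from the paper):

```python
import numpy as np

def face_normal(a, b, c):
    """Unnormalised normal of triangle (a, b, c) via the cross product."""
    return np.cross(b - a, c - a)

def dihedral_angle(n1, n2):
    """Angle in degrees between two face normals, used as the refinement
    threshold: a flat region (small angle) needs no further subdivision."""
    cosang = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Two triangles sharing edge b-c: a flat base and a tilted neighbour
a, b, c, d = map(np.array, ([0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [1.0, 1, 1]))
n1 = face_normal(a, b, c)
n2 = face_normal(b, d, c)
ang = dihedral_angle(n1, n2)
print(round(ang, 1))
```

Faces whose angle exceeds a chosen threshold would be subdivided further, while flat neighbourhoods are left coarse, which is the cost-saving idea of the scheme.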

  13. Differential maps, difference maps, interpolated maps, and long term prediction

    International Nuclear Information System (INIS)

    Talman, R.

    1988-06-01

    Mapping techniques may be thought to be attractive for the long term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solvable, example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of ''appreciable'' nonlinearity, it is shown to be impractical to achieve ''accurate'' prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the ''leading'' deviation from linearity. 41 refs., 6 figs

  14. Modelling and experimental validation of thin layer indirect solar drying of mango slices

    Energy Technology Data Exchange (ETDEWEB)

    Dissa, A.O.; Bathiebo, J.; Kam, S.; Koulidiati, J. [Laboratoire de Physique et de Chimie de l' Environnement (LPCE), Unite de Formation et de Recherche en Sciences Exactes et Appliquee (UFR/SEA), Universite de Ouagadougou, Avenue Charles de Gaulle, BP 7021 Kadiogo (Burkina Faso); Savadogo, P.W. [Laboratoire Sol Eau Plante, Institut de l' Environnement et de Recherches Agricoles, 01 BP 476, Ouagadougou (Burkina Faso); Desmorieux, H. [Laboratoire d' Automatisme et de Genie des Procedes (LAGEP), UCBL1-CNRS UMR 5007-CPE Lyon, Bat.308G, 43 bd du 11 Nov. 1918 Villeurbanne, Universite Claude Bernard Lyon1, Lyon (France)

    2009-04-15

    The thin layer solar drying of mango slices 8 mm thick was simulated and experimented using a solar dryer designed and constructed in the laboratory. Under meteorological conditions of the mango harvest period, the results showed that 3 'typical days' of drying were necessary to reach the range of preservation water contents. During these 3 days of solar drying, 50%, 40% and 5% of unbound water were eliminated, respectively, on the first, second and third day. The final water content obtained was about 16 ± 1.33% d.b. (13.79% w.b.). This final water content and the corresponding water activity (0.6 ± 0.02) were in accordance with previous work. The drying rates with correction for shrinkage and the critical water content were experimentally determined. The critical water content was close to 70% of the initial water content and the drying rates were reduced to almost 6% of their maximum value at night. The thin layer drying model made it possible to suitably simulate the solar drying kinetics of mango slices with a correlation coefficient of r² = 0.990. This study thus contributed to the setting of the solar drying time of mango and to the establishment of solar drying rate curves for this fruit. (author)

  15. Optimum slicing of radical prostatectomy specimens for correlation between histopathology and medical images

    International Nuclear Information System (INIS)

    Chen, Li Hong; Ng, Wan Sing; Ho, Henry; Yuen, John; Cheng, Chris; Lazaro, Richie; Thng, Choon Hua

    2010-01-01

    There is a need for methods which enable precise correlation of histologic sections with in vivo prostate images. Such methods would allow direct comparison between imaging features and functional or histopathological heterogeneity of tumors. Correlation would be particularly useful for validating the accuracy of imaging modalities, developing imaging techniques, assessing image-guided therapy, etc. An optimum prostate slicing method for accurate correlation between the histopathological and medical imaging planes in terms of section angle, thickness and level was sought. Literature review (51 references from 1986-2009 were cited) was done on the various sectioning apparatus or techniques used to slice the prostate specimen for accurate correlation between histopathological data and medical imaging. Technology evaluation was performed with review and discussion of various methods used to section other organs and their possible applications for sectioning prostatectomy specimens. No consensus has been achieved on how the prostate should be dissected to achieve a good correlation. Various customized sectioning instruments and techniques working with different mechanism are used in different research institutes to improve the correlation. Some of the methods have convincingly shown significant potential for improving image-specimen correlation. However, the semisolid consistent property of prostate tissue and the lack of identifiable landmarks remain challenges to be overcome, especially for fresh prostate sectioning and microtomy without external fiducials. A standardized optimum protocol to dissect prostatectomy specimens is needed for the validation of medical imaging modalities by histologic correlation. These standards can enhance disease management by improving the comparability between different modalities. (orig.)

  16. Hybrid kriging methods for interpolating sparse river bathymetry point data

    Directory of Open Access Journals (Sweden)

    Pedro Velloso Gomes Batista

    Full Text Available Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK and co-kriging (CK employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of a (x, y point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate if the use of the proposed covariate improves the spatial prediction of riverbed topography. In order to assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK. The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.

  17. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes such as estimating transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps is highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because of the averaging of under- and over-estimates (e.g. when aggregating cattle number estimates from subcounty to district level). When using interpolation to fill in missing values in non-sampled areas, accuracy is improved remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g. P <0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).

  18. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2014-10-01

    Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematical equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI satellite images using data interpolating empirical orthogonal functions (DINEOF with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.

  19. SPATIOTEMPORAL VISUALIZATION OF TIME-SERIES SATELLITE-DERIVED CO2 FLUX DATA USING VOLUME RENDERING AND GPU-BASED INTERPOLATION ON A CLOUD-DRIVEN DIGITAL EARTH

    Directory of Open Access Journals (Sweden)

    S. Wu

    2017-10-01

    Full Text Available The ocean carbon cycle has a significant influence on global climate, and is commonly evaluated using time-series satellite-derived CO2 flux data. Location-aware and globe-based visualization is an important technique for analyzing and presenting the evolution of climate change. To achieve realistic simulation of the spatiotemporal dynamics of ocean carbon, a cloud-driven digital earth platform is developed to support the interactive analysis and display of multi-geospatial data, and an original visualization method based on our digital earth is proposed to demonstrate the spatiotemporal variations of carbon sinks and sources using time-series satellite data. Specifically, a volume rendering technique using half-angle slicing and a particle system is implemented to dynamically display the released or absorbed CO2 gas. To enable location-aware visualization within the virtual globe, we present a 3D particle-mapping algorithm to render particle-slicing textures onto geospace. In addition, a GPU-based interpolation framework using CUDA during real-time rendering is designed to obtain smooth effects in both spatial and temporal dimensions. To demonstrate the capabilities of the proposed method, a series of satellite data is applied to simulate the air-sea carbon cycle in the China Sea. The results show that the suggested strategies provide realistic simulation effects and acceptable interactive performance on the digital earth.

  20. Color changes and acrylamide formation in fried potato slices

    DEFF Research Database (Denmark)

    Pedreschi, Franco; Moyano, Pedro; Kaack, Karl

    2005-01-01

    The objective of this work was to study the kinetics of browning during deep-fat frying of blanched and unblanched potato chips by using the dynamic method and to find a relationship between browning development and acrylamide formation. Prior to frying, potato slices were blanched in hot water...... at 85 °C for 3.5 min. Unblanched slices were used as the control. Control and blanched potato slices (Panda variety, diameter: 37 mm, width: 2.2 mm) were fried at 120, 150 and 180 °C until reaching moisture contents of ~1.8% (total basis) and their acrylamide content and final color...... were measured. Color changes were recorded at different sampling times during frying at the three mentioned temperatures using the chromatic redness parameter a(*). Experimental data of surface temperature, moisture content and color change in potato chips during frying were fit to empirical...

  1. An overview of 5G network slicing architecture

    Science.gov (United States)

    Chen, Qiang; Wang, Xiaolei; Lv, Yingying

    2018-05-01

    With the development of mobile communication technology, the traditional single network model has been unable to meet the needs of users, and the demand for differentiated services is increasing. To solve this problem, the fifth generation of mobile communication technology came into being. As one of the key technologies of 5G, network slicing builds on network virtualization and software-defined networking, enabling network slices to flexibly provide one or more network services according to users' needs [1]. Each slice can independently tailor its network functions according to the requirements of the business scenario and the traffic model, and manage the layout of the corresponding network resources, to improve the flexibility of network services and the utilization of resources, and to enhance the robustness and reliability of the whole network [2].

  2. Preservation of low slice emittance in bunch compressors

    Directory of Open Access Journals (Sweden)

    S. Bettoni

    2016-03-01

    Full Text Available Minimizing the dilution of the electron beam emittance is crucial for the performance of accelerators, in particular for free electron laser facilities, where the length of the machine and the efficiency of the lasing process depend on it. Measurements performed at the SwissFEL Injector Test Facility revealed an increase in slice emittance after compressing the bunch even for moderate compression factors. The phenomenon was experimentally studied by characterizing the dependence of the effect on beam and machine parameters relevant for the bunch compression. The reproduction of these measurements in simulation required the use of a 3D beam dynamics model along the bunch compressor that includes coherent synchrotron radiation. Our investigations identified transverse effects, such as coherent synchrotron radiation and transverse space charge as the sources of the observed emittance dilution, excluding other effects, such as chromatic effects on single slices or spurious dispersion. We also present studies, both experimental and simulation based, on the effect of the optics mismatch of the slices on the variation of the slice emittance along the bunch. After a corresponding reoptimization of the beam optics in the test facility we reached slice emittances below 200 nm for the central slices along the longitudinal dimension with a moderate increase up to 300 nm in the head and tail for a compression factor of 7.5 and a bunch charge of 200 pC, equivalent to a final current of 150 A, at about 230 MeV energy.

  3. Error analysis of the microradiographical determination of mineral content in mineralised tissue slices

    International Nuclear Information System (INIS)

    Jong, E. de J. de; Bosch, J.J. ten

    1985-01-01

    The microradiographic method, used to measure the mineral content in slices of mineralised tissues as a function of position, is analysed. The total error in the measured mineral content is split into systematic errors per microradiogram and random noise errors. These errors are measured quantitatively. Predominant contributions to systematic errors appear to be x-ray beam inhomogeneity, the determination of the step wedge thickness and stray light in the densitometer microscope, while noise errors are under the influence of the choice of film, the value of the optical film transmission of the microradiographic image and the area of the densitometer window. Optimisation criteria are given. The authors used these criteria, together with the requirement that the method be fast and easy to build an optimised microradiographic system. (author)

  4. [Anatomy of the skull base and the cranial nerves in slice imaging].

    Science.gov (United States)

    Bink, A; Berkefeld, J; Zanella, F

    2009-07-01

    Computed tomography (CT) and magnetic resonance imaging (MRI) are suitable methods for examination of the skull base. Whereas CT is used to evaluate mainly bone destruction e.g. for planning surgical therapy, MRI is used to show pathologies in the soft tissue and bone invasion. High resolution and thin slice thickness are indispensable for both modalities of skull base imaging. Detailed anatomical knowledge is necessary even for correct planning of the examination procedures. This knowledge is a requirement to be able to recognize and interpret pathologies. MRI is the method of choice for examining the cranial nerves. The total path of a cranial nerve can be visualized by choosing different sequences taking into account the tissue surrounding this cranial nerve. This article summarizes examination methods of the skull base in CT and MRI, gives a detailed description of the anatomy and illustrates it with image examples.

  5. Anatomy of the skull base and the cranial nerves in slice imaging

    International Nuclear Information System (INIS)

    Bink, A.; Berkefeld, J.; Zanella, F.

    2009-01-01

    Computed tomography (CT) and magnetic resonance imaging (MRI) are suitable methods for examination of the skull base. Whereas CT is used to evaluate mainly bone destruction e.g. for planning surgical therapy, MRI is used to show pathologies in the soft tissue and bone invasion. High resolution and thin slice thickness are indispensable for both modalities of skull base imaging. Detailed anatomical knowledge is necessary even for correct planning of the examination procedures. This knowledge is a requirement to be able to recognize and interpret pathologies. MRI is the method of choice for examining the cranial nerves. The total path of a cranial nerve can be visualized by choosing different sequences taking into account the tissue surrounding this cranial nerve. This article summarizes examination methods of the skull base in CT and MRI, gives a detailed description of the anatomy and illustrates it with image examples. (orig.) [de

  6. (Non)perturbative gravity, nonlocality, and nice slices

    International Nuclear Information System (INIS)

    Giddings, Steven B.

    2006-01-01

    Perturbative dynamics of gravity is investigated for high-energy scattering and in black hole backgrounds. In the latter case, a straightforward perturbative analysis fails, in a close parallel to the failure of the former when the impact parameter reaches the Schwarzschild radius. This suggests a flaw in a semiclassical description of physics on spatial slices that intersect both outgoing Hawking radiation and matter that has carried information into a black hole; such slices are instrumental in a general argument for black hole information loss. This indicates a possible role for the proposal that nonperturbative gravitational physics is intrinsically nonlocal

  7. Verification-Driven Slicing of UML/OCL Models

    DEFF Research Database (Denmark)

    Shaikh, Asadullah; Clarisó Viladrosa, Robert; Wiil, Uffe Kock

    2010-01-01

    computational complexity can limit their scalability. In this paper, we consider a specific static model (UML class diagrams annotated with unrestricted OCL constraints) and a specific property to verify (satisfiability, i.e., “is it possible to create objects without violating any constraint?”). Current...... approaches to this problem have an exponential worst-case runtime. We propose a technique to improve their scalability by partitioning the original model into submodels (slices) which can be verified independently and where irrelevant information has been abstracted. The definition of the slicing procedure...

  8. Novel culturing platform for brain slices and neuronal cells

    DEFF Research Database (Denmark)

    Svendsen, Winnie Edith; Al Atraktchi, Fatima Al-Zahraa; Bakmand, Tanya

    2015-01-01

    In this paper we demonstrate a novel culturing system for brain slices and neuronal cells, which can control the concentration of nutrients and the waste removal from the culture by adjusting the fluid flow within the device. The entire system can be placed in an incubator. The system has been...... tested successfully with brain slices and PC12 cells. The culture substrate can be modified using metal electrodes and/or nanostructures for conducting electrical measurements while culturing and for better mimicking the in vivo conditions....

  9. Can a polynomial interpolation improve on the Kaplan-Yorke dimension?

    International Nuclear Information System (INIS)

    Richter, Hendrik

    2008-01-01

    The Kaplan-Yorke dimension can be derived using a linear interpolation between an h-dimensional Lyapunov exponent λ^(h) > 0 and an h+1-dimensional Lyapunov exponent λ^(h+1) < 0. In this Letter, we use a polynomial interpolation to obtain generalized Lyapunov dimensions and study the relationships among them for higher-dimensional systems
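
    The linear interpolation underlying the Kaplan-Yorke dimension can be sketched in a few lines. The function below is a generic illustration (not Richter's polynomial generalization), using the standard definition D = h + (λ_1 + … + λ_h)/|λ_{h+1}|, where h is the largest index for which the partial sum of the sorted exponents is still non-negative:

```python
def kaplan_yorke_dimension(exponents):
    """Kaplan-Yorke (Lyapunov) dimension: linear interpolation at the
    index h where the running sum of the sorted exponents changes sign:
    D = h + (lambda_1 + ... + lambda_h) / |lambda_{h+1}|."""
    lam = sorted(exponents, reverse=True)
    s = 0.0
    for h, l in enumerate(lam):
        if s + l < 0:
            return h + s / abs(l)
        s += l
    return float(len(lam))  # running sum never goes negative

# Lorenz-like spectrum (0.906, 0.0, -14.572) gives D ~ 2.06.
```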

  10. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

    2010-01-01

    Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using for example linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linke...

  11. Application of ordinary kriging for interpolation of micro-structured technical surfaces

    International Nuclear Information System (INIS)

    Raid, Indek; Kusnezowa, Tatjana; Seewig, Jörg

    2013-01-01

    Kriging is an interpolation technique used in geostatistics. In this paper we present kriging applied in the field of three-dimensional optical surface metrology. Technical surfaces are not always optically cooperative, meaning that measurements of technical surfaces contain invalid data points because of different effects. These data points need to be interpolated to obtain a complete area in order to fulfil further processing. We present an elementary type of kriging, known as ordinary kriging, and apply it to interpolate measurements of different technical surfaces containing different kinds of realistic defects. The result of the interpolation with kriging is compared to six common interpolation techniques: nearest neighbour, natural neighbour, inverse distance to a power, triangulation with linear interpolation, modified Shepard's method and radial basis function. In order to quantify the results of different interpolations, the topographies are compared to defect-free reference topographies. Kriging is derived from a stochastic model that suggests providing an unbiased, linear estimation with a minimized error variance. The estimation with kriging is based on a preceding statistical analysis of the spatial structure of the surface. This comprises the choice and adaptation of specific models of spatial continuity. In contrast to common methods, kriging furthermore considers specific anisotropy in the data and adopts the interpolation accordingly. The gained benefit requires some additional effort in preparation and makes the overall estimation more time-consuming than common methods. However, the adaptation to the data makes this method very flexible and accurate. (paper)
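
    As a concrete illustration of the ordinary-kriging estimator discussed above, the sketch below solves the standard kriging system with a Lagrange multiplier for a single estimation point. The exponential variogram and its range are placeholder assumptions; in practice, as the abstract notes, the variogram model is fitted to the spatial structure of the measured surface first:

```python
import numpy as np

def ordinary_kriging(xy, z, xy0, variogram=lambda h: 1.0 - np.exp(-h / 0.5)):
    """Ordinary kriging at point xy0 from samples (xy, z): solve
    [gamma(xi,xj) 1; 1 0] [w; mu] = [gamma(xi,x0); 1], so the
    weights w sum to one (unbiasedness).  The exponential variogram
    and its range 0.5 are illustrative assumptions."""
    xy, z = np.asarray(xy, float), np.asarray(z, float)
    n = len(z)
    # pairwise distances between samples, and sample-to-target distances
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(xy - np.asarray(xy0, float), axis=-1))
    w = np.linalg.solve(A, b)[:n]
    return float(w @ z)
```

With a variogram that is zero at lag zero and no nugget effect, the estimator is exact at the sample locations, which is a quick sanity check for any implementation.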

  12. Application of Time-Frequency Domain Transform to Three-Dimensional Interpolation of Medical Images.

    Science.gov (United States)

    Lv, Shengqing; Chen, Yimin; Li, Zeyu; Lu, Jiahui; Gao, Mingke; Lu, Rongrong

    2017-11-01

    Medical image three-dimensional (3D) interpolation is an important means of improving image quality in 3D reconstruction, and in image processing the time-frequency domain transform is an efficient tool. In this article, several time-frequency domain transform methods are applied and compared for 3D interpolation, and a Sobel edge detection and 3D matching interpolation method based on the wavelet transform is proposed. The algorithm combines the wavelet transform, traditional matching interpolation methods, and Sobel edge detection, exploiting the characteristics of the wavelet transform and the Sobel operator to process the sub-images of the wavelet decomposition separately. The Sobel edge detection 3D matching interpolation method is applied to the low-frequency sub-images while ensuring that the high-frequency content remains undistorted. The target interpolated image is then obtained through wavelet reconstruction. In this article, we perform 3D interpolation on real computed tomography (CT) images. Compared with other interpolation methods, the proposed method is verified to be effective and superior.

  13. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    Science.gov (United States)

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
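
    For reference, trilinear interpolation of a rectilinear volume reduces to a weighted sum over the eight voxels surrounding the query point; a minimal generic sketch (not the authors' optimized implementation) is:

```python
import numpy as np

def trilinear(vol, x, y, z):
    """Trilinear interpolation in a volume indexed vol[z, y, x]:
    weighted sum of the 8 voxels around the fractional coordinate."""
    x0, y0, z0 = int(x), int(y), int(z)
    dx, dy, dz = x - x0, y - y0, z - z0
    value = 0.0
    for k in (0, 1):
        for j in (0, 1):
            for i in (0, 1):
                # weight is the product of the 1-D linear weights
                w = (dx if i else 1 - dx) * (dy if j else 1 - dy) * (dz if k else 1 - dz)
                value += w * vol[z0 + k, y0 + j, x0 + i]
    return value
```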

  14. Okounkov's BC-Type Interpolation Macdonald Polynomials and Their q=1 Limit

    NARCIS (Netherlands)

    Koornwinder, T.H.

    2015-01-01

    This paper surveys eight classes of polynomials associated with A-type and BC-type root systems: Jack, Jacobi, Macdonald and Koornwinder polynomials and interpolation (or shifted) Jack and Macdonald polynomials and their BC-type extensions. Among these the BC-type interpolation Jack polynomials were

  15. Interpolation in Time Series : An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    NARCIS (Netherlands)

    Lepot, M.J.; Aubin, Jean Baptiste; Clemens, F.H.L.R.

    2017-01-01

    A thorough review has been performed on interpolation methods to fill gaps in time-series, efficiency criteria, and uncertainty quantifications. On one hand, there are numerous available methods: interpolation, regression, autoregressive, machine learning methods, etc. On the other hand, there are
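
    The simplest member of the interpolation family surveyed here, linear gap filling, can be sketched as follows (a generic NumPy illustration, not the review's own code):

```python
import numpy as np

def fill_gaps(t, y):
    """Fill NaN gaps in series y sampled at times t by linear
    interpolation between the nearest valid neighbours."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    ok = ~np.isnan(y)
    return np.interp(t, t[ok], y[ok])
```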

  16. Spatiotemporal interpolation of elevation changes derived from satellite altimetry for Jakobshavn Isbræ, Greenland

    DEFF Research Database (Denmark)

    Hurkmans, R.T.W.L.; Bamber, J.L.; Sørensen, Louise Sandberg

    2012-01-01

    . In those areas, straightforward interpolation of data is unlikely to reflect the true patterns of dH/dt. Here, four interpolation methods are compared and evaluated over Jakobshavn Isbræ, an outlet glacier for which widespread airborne validation data are available from NASA's Airborne Topographic Mapper...

  17. Interaction-Strength Interpolation Method for Main-Group Chemistry : Benchmarking, Limitations, and Perspectives

    NARCIS (Netherlands)

    Fabiano, E.; Gori-Giorgi, P.; Seidl, M.W.J.; Della Sala, F.

    2016-01-01

    We have tested the original interaction-strength-interpolation (ISI) exchange-correlation functional for main group chemistry. The ISI functional is based on an interpolation between the weak and strong coupling limits and includes exact-exchange as well as the Görling–Levy second-order energy. We

  18. Researches Regarding The Circular Interpolation Algorithms At CNC Laser Cutting Machines

    Science.gov (United States)

    Tîrnovean, Mircea Sorin

    2015-09-01

    This paper presents an integrated simulation approach for studying the circular interpolation regime of CNC laser cutting machines. The circular interpolation algorithm is studied, taking into consideration the numerical character of the system. A simulation diagram that is able to generate the kinematic inputs for the feed drives of the CNC laser cutting machine is also presented.
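
    An idealised circular interpolator advances the arc angle in increments that keep each chord close to a commanded step length. The sketch below illustrates the principle only; real CNC interpolators, including those studied here, quantise motion to integer machine steps per servo period:

```python
import math

def circular_interpolation(cx, cy, r, a0, a1, step_len):
    """Points along the arc of radius r about (cx, cy) from angle a0
    to a1, with roughly step_len of arc between successive points."""
    n = max(1, math.ceil(abs(a1 - a0) * r / step_len))
    return [(cx + r * math.cos(a0 + (a1 - a0) * i / n),
             cy + r * math.sin(a0 + (a1 - a0) * i / n))
            for i in range(n + 1)]
```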

  19. Interpolation of polytopic control Lyapunov functions for discrete–time linear systems

    NARCIS (Netherlands)

    Nguyen, T.T.; Lazar, M.; Spinu, V.; Boje, E.; Xia, X.

    2014-01-01

    This paper proposes a method for interpolating two (or more) polytopic control Lyapunov functions (CLFs) for discrete-time linear systems subject to polytopic constraints, thereby combining different control objectives. The corresponding interpolated CLF is used for synthesis of a stabilizing

  20. Abstract interpolation in vector-valued de Branges-Rovnyak spaces

    NARCIS (Netherlands)

    Ball, J.A.; Bolotnikov, V.; ter Horst, S.

    2011-01-01

    Following ideas from the Abstract Interpolation Problem of Katsnelson et al. (Operators in spaces of functions and problems in function theory, vol 146, pp 83–96, Naukova Dumka, Kiev, 1987) for Schur class functions, we study a general metric constrained interpolation problem for functions from a

  1. Investigation of the slice sensitivity profile for step-and-shoot mode multi-slice computed tomography

    International Nuclear Information System (INIS)

    Hsieh Jiang

    2001-01-01

    Multislice computed tomography (MCT) is one of the recent technology advancements in CT. Compared to single slice CT, MCT significantly shortens examination time and improves x-ray tube efficiency and contrast material utilization. Although the scan mode of MCT is predominantly helical, step-and-shoot (axial) scans continue to be an important part of routine clinical protocols. In this paper, we present a detailed investigation of the slice sensitivity profile (SSP) of MCT in the step-and-shoot mode. Our investigation shows that, unlike single slice CT, the SSP for MCT exhibits multiple peaks and valleys resulting from intercell gaps between detector rows. To fully understand the characteristics of the SSP, we developed an analytical model to predict the behavior of MCT. We propose a simple experimental technique that can quickly and accurately measure the SSP. The impact of the SSP on image artifacts and low contrast detectability is also investigated

  2. Monitoring production target thickness

    International Nuclear Information System (INIS)

    Oothoudt, M.A.

    1993-01-01

    Pion and muon production targets at the Clinton P. Anderson Meson Physics Facility consist of rotating graphite wheels. The previous target-thickness monitoring procedure scanned the target across a reduced-intensity beam to determine the beam center. The fractional loss in current across the centered target gave a measure of target thickness. This procedure, however, required interruption of beam delivery to experiments and frequently indicated a different fractional loss than at normal beam currents. The new monitoring procedure compares integrated upstream and downstream toroid current monitor readings. The current monitors are read once per minute and the integrals of the readings are logged once per eight-hour shift. Changes in the upstream-to-downstream fractional difference provide a nonintrusive, continuous measurement of target thickness under nominal operational conditions. Target scans are now done only when new targets are installed or when unexplained changes in the current monitor data are observed
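
    The fractional-loss figure behind this monitoring scheme is simple arithmetic on the integrated current-monitor readings; a hypothetical sketch (function and parameter names are illustrative, not from the facility's software):

```python
def fractional_loss(i_upstream, i_downstream):
    """Fractional beam loss across the target from integrated
    upstream/downstream toroid current-monitor readings."""
    return (i_upstream - i_downstream) / i_upstream
```

A drift in this ratio between shifts, at fixed beam conditions, is what signals a change in target thickness.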

  3. A Hybrid Interpolation Method for Geometric Nonlinear Spatial Beam Elements with Explicit Nodal Force

    Directory of Open Access Journals (Sweden)

    Huiqing Fang

    2016-01-01

    Full Text Available Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometric nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline is used to calculate nodal curvatures at the two ends. Then, the internal curvatures of the beam are interpolated with a second interpolation. At this point, C1 continuity is satisfied and nodal strain measures can be consistently derived from nodal displacement and rotation parameters. The explicit expression of the nodal force, as a function of global parameters and without integration, is derived using the hybrid interpolation. Furthermore, the proposed beam element degenerates into a linear beam element under the condition of small deformation. Objectivity of the strain measures and patch tests are also discussed. Finally, four numerical examples are discussed to prove the validity and effectiveness of the proposed beam element.
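
    The first stage of such a hybrid scheme, Hermitian interpolation between nodal values and tangents, uses the standard cubic Hermite basis. A scalar-form sketch (a generic illustration, not the element formulation of the paper):

```python
def hermite(p0, p1, m0, m1, t):
    """Cubic Hermite interpolation between values p0, p1 with end
    tangents m0, m1 over t in [0, 1], using the standard basis
    h00, h10, h01, h11."""
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

The basis reproduces the end values and tangents exactly, and reduces to linear interpolation when the tangents match the chord slope.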

  4. Fast digital zooming system using directionally adaptive image interpolation and restoration.

    Science.gov (United States)

    Kang, Wonseok; Jeon, Jaehwan; Yu, Soohwan; Paik, Joonki

    2014-01-01

    This paper presents a fast digital zooming system for mobile consumer cameras using directionally adaptive image interpolation and restoration methods. The proposed interpolation algorithm performs edge refinement along the initially estimated edge orientation using directionally steerable filters. Either the directionally weighted linear or adaptive cubic-spline interpolation filter is then selectively used according to the refined edge orientation for removing jagged artifacts in the slanted edge region. A novel image restoration algorithm is also presented for removing blurring artifacts caused by the linear or cubic-spline interpolation using the directionally adaptive truncated constrained least squares (TCLS) filter. Both proposed steerable filter-based interpolation and the TCLS-based restoration filters have a finite impulse response (FIR) structure for real time processing in an image signal processing (ISP) chain. Experimental results show that the proposed digital zooming system provides high-quality magnified images with FIR filter-based fast computational structure.

  5. Conformal Interpolating Algorithm Based on Cubic NURBS in Aspheric Ultra-Precision Machining

    International Nuclear Information System (INIS)

    Li, C G; Zhang, Q R; Cao, C G; Zhao, S L

    2006-01-01

    Numerically controlled machining and on-line compensation for aspheric surfaces are key techniques in ultra-precision machining. In this paper, a conformal cubic NURBS interpolating curve is applied to fit the characteristic curve of an aspheric surface. Its algorithm and process are also proposed and simulated in Matlab 7.0. To evaluate the performance of the conformal cubic NURBS interpolation, we compare it with linear interpolation. The results verify that this method ensures the smoothness of the interpolating spline curve and preserves the original shape characteristics. The surface quality obtained with cubic NURBS interpolation is higher than with linear interpolation. The algorithm thus benefits the surface form precision of workpieces in ultra-precision machining
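
    For orientation, a generic (non-conformal) cubic NURBS curve evaluation via the Cox-de Boor recursion is sketched below; the paper's conformal fitting algorithm itself is not reproduced. Note the half-open basis convention, so the curve should be evaluated for parameters strictly below the last knot:

```python
import numpy as np

def nurbs_point(ctrl, w, knots, u, p=3):
    """Evaluate a degree-p NURBS curve at parameter u from control
    points ctrl, weights w and a clamped knot vector, using the
    Cox-de Boor recursion for the B-spline basis."""
    def N(i, q, u):
        if q == 0:
            return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
        a = ((u - knots[i]) / (knots[i + q] - knots[i]) * N(i, q - 1, u)
             if knots[i + q] != knots[i] else 0.0)
        b = ((knots[i + q + 1] - u) / (knots[i + q + 1] - knots[i + 1]) * N(i + 1, q - 1, u)
             if knots[i + q + 1] != knots[i + 1] else 0.0)
        return a + b
    basis = [N(i, p, u) * w[i] for i in range(len(ctrl))]
    num = sum(bi * np.asarray(ctrl[i], float) for i, bi in enumerate(basis))
    return num / sum(basis)  # rational (projective) weighting
```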

  6. TU-EF-204-11: Impact of Using Multi-Slice Training Sets On the Performance of a Channelized Hotelling Observer in a Low-Contrast Detection Task in CT

    Energy Technology Data Exchange (ETDEWEB)

    Favazza, C; Yu, L; Leng, S; McCollough, C [Mayo Clinic, Rochester, MN (United States)

    2015-06-15

    Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model to reduce the number of repeated scans for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ∼5 cm). The rods were submerged in a 35 × 25 cm² iodine-doped water filled phantom, yielding −15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO’s detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed to 100 images. Results: Artificially shifting the object’s center position by as much as 3 pixels in any direction relative to the Gabor channel filters had insignificant impact on object detectability. An inter-slice pixel correlation of >∼0.2 yielded positive bias in the model’s performance. Incorporating multi-slice image data yielded slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions. This project

  7. TU-EF-204-11: Impact of Using Multi-Slice Training Sets On the Performance of a Channelized Hotelling Observer in a Low-Contrast Detection Task in CT

    International Nuclear Information System (INIS)

    Favazza, C; Yu, L; Leng, S; McCollough, C

    2015-01-01

    Purpose: To investigate using multiple CT image slices from a single acquisition as independent training images for a channelized Hotelling observer (CHO) model to reduce the number of repeated scans for CHO-based CT image quality assessment. Methods: We applied a previously validated CHO model to detect low contrast disk objects formed from cross-sectional images of three epoxy-resin-based rods (diameters: 3, 5, and 9 mm; length: ∼5 cm). The rods were submerged in a 35 × 25 cm² iodine-doped water filled phantom, yielding −15 HU object contrast. The phantom was scanned 100 times with and without the rods present. Scan and reconstruction parameters include: 5 mm slice thickness at 0.5 mm intervals, 120 kV, 480 Quality Reference mAs, and a 128-slice scanner. The CHO’s detectability index was evaluated as a function of factors related to incorporating multi-slice image data: object misalignment along the z-axis, inter-slice pixel correlation, and number of unique slice locations. In each case, the CHO training set was fixed to 100 images. Results: Artificially shifting the object’s center position by as much as 3 pixels in any direction relative to the Gabor channel filters had insignificant impact on object detectability. An inter-slice pixel correlation of >∼0.2 yielded positive bias in the model’s performance. Incorporating multi-slice image data yielded slight negative bias in detectability with increasing number of slices, likely due to physical variations in the objects. However, inclusion of image data from up to 5 slice locations yielded detectability indices within measurement error of the single slice value. Conclusion: For the investigated model and task, incorporating image data from 5 different slice locations at intervals of at least 5 mm into the CHO model yielded detectability indices within measurement error of the single slice value. Consequently, this methodology would result in a 5-fold reduction in the number of image acquisitions. This project
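
    The detectability index referred to here follows the standard Hotelling form on channel outputs, d'² = Δv̄ᵀ S⁻¹ Δv̄, with S the average class covariance. The sketch below is a generic illustration, not the validated model itself, and uses synthetic two-channel Gaussian data in place of real Gabor channel responses:

```python
import numpy as np

def cho_detectability(sig_feats, bkg_feats):
    """Hotelling detectability index from channelized features of
    signal-present and signal-absent images:
    d'^2 = dm^T S^{-1} dm, with S the average class covariance."""
    dm = sig_feats.mean(axis=0) - bkg_feats.mean(axis=0)
    S = 0.5 * (np.cov(sig_feats, rowvar=False) + np.cov(bkg_feats, rowvar=False))
    return float(np.sqrt(dm @ np.linalg.solve(S, dm)))

# Synthetic stand-in for channel outputs: two channels, true d' = 2.
rng = np.random.default_rng(0)
sig = rng.normal([2.0, 0.0], 1.0, size=(4000, 2))
bkg = rng.normal([0.0, 0.0], 1.0, size=(4000, 2))
```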

  8. A slice through a prototype LHC bending magnet

    CERN Multimedia

    Laurent Guiraud

    1998-01-01

    This slice through a prototype LHC magnet clearly shows the superconducting cable in several blocks around the central hole – the beam pipe in which the LHC’s accelerated beams will travel. Magnet design is crucial to the LHC’s success and this sample is among the first to be built to the final cable configuration.

  9. Continuous Slice Functional Calculus in Quaternionic Hilbert Spaces

    Science.gov (United States)

    Ghiloni, Riccardo; Moretti, Valter; Perotti, Alessandro

    2013-04-01

    The aim of this work is to define a continuous functional calculus in quaternionic Hilbert spaces, starting from basic issues regarding the notion of spherical spectrum of a normal operator. As properties of the spherical spectrum suggest, the class of continuous functions to consider in this setting is that of slice quaternionic functions. Slice functions generalize the concept of slice regular function, which comprises power series with quaternionic coefficients on one side and which can be seen as an effective generalization to quaternions of holomorphic functions of one complex variable. The notion of slice function allows us to introduce suitable classes of real, complex and quaternionic C*-algebras and to define, on each of these C*-algebras, a functional calculus for quaternionic normal operators. In particular, we establish several versions of the spectral map theorem. Some of the results are proved also for unbounded operators. However, the mentioned continuous functional calculi are defined only for bounded normal operators. Some comments on the physical significance of our work are included.

  10. Blanching, salting and sun drying of different pumpkin fruit slices.

    Science.gov (United States)

    Workneh, T S; Zinash, A; Woldetsadik, K

    2014-11-01

    The study was aimed at assessing the quality of pumpkin (Cucurbita spp.) fruit slices from different accessions that were subjected to pre-drying treatments and dried using two methods (uncontrolled sun and oven). Pre-drying treatment had a significant (P ≤ 0.05) effect on the quality of dried pumpkin slices. Pumpkin fruit slices dipped in 10% salt solution had good chemical quality. The two-way interaction between drying method and pre-drying treatment had a significant (P ≤ 0.05) effect on chemical quality. Pumpkin subjected to the salt-solution dipping treatment and oven dried had higher chemical concentrations. Among the pumpkin fruit accessions, accession 8007 had superior TSS, total sugar and sugar-to-acid ratio after drying. Among the three pre-drying treatments, salt-solution dipping had a significant (P ≤ 0.05) effect and was the most efficient pre-drying treatment for retaining the quality of dried pumpkin fruit without significant chemical deterioration. Salt dipping combined with low-temperature (60 °C) oven air-circulation drying is recommended to maintain the quality of dried pumpkin slices. However, since direct sun drying needs extended drying time due to fluctuations in temperature, it is recommended to develop or select the best solar dryer for use in combination with pre-drying salt dipping or blanching treatments.

  11. A slicing-based approach for locating type errors

    NARCIS (Netherlands)

    T.B. Dinesh; F. Tip (Frank)

    1998-01-01

    The effectiveness of a type checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice P_e of the program P being type checked. We

  13. Bacteriological Quality of Dried Sliced Beef (Kilishi) Sold In Ilorin ...

    African Journals Online (AJOL)

    DR. MIKE HORSFALL

    ABSTRACT: The bacteriological quality of dried sliced beef (kilishi) obtained from three selling points in Ilorin metropolis was determined in order to ascertain its safety. The total bacterial count, Enterobacteriaceae count, Staphylococcus aureus count and E. coli counts were used as indices of bacteriological quality. Samples.

  14. Thin slice impressions : How advertising evaluation depends on exposure duration

    NARCIS (Netherlands)

    Pieters, Rik; Elsen, M.; Wedel, M.

    The duration of exposures to advertising is often brief. Then, consumers can only obtain “thin slices” of information from the ads, such as which product and brand are being promoted. This research is the first to examine the influence that such thin slices of information have on ad and brand

  15. A novel lung slice system with compromised antioxidant defenses

    Energy Technology Data Exchange (ETDEWEB)

    Hardwick, S.J.; Adam, A.; Cohen, G.M. (Univ. of London (England)); Smith, L.L. (Imperial Chemical Industries PLC, Cheshire (England))

    1990-04-01

    In order to facilitate the study of oxidative stress in lung tissue, rat lung slices with impaired antioxidant defenses were prepared and used. Incubation of lung slices with the antineoplastic agent 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) (100 μM) in an amino acid-rich medium for 45 min produced a near-maximal (approximately 85%), irreversible inhibition of glutathione reductase, accompanied by only a modest (approximately 15%) decrease in pulmonary nonprotein sulfhydryls (NPSH) and no alteration in intracellular ATP, NADP⁺, and NADPH levels. The amounts of NADP(H), ATP, and NPSH were stable over a 4-hr incubation period following the removal from BCNU. The viability of the system was further evaluated by measuring the rate of evolution of ¹⁴CO₂ from D-[¹⁴C(U)]-glucose. The rates of evolution were almost identical in the compromised system when compared with control slices over a 4-hr time period. By using slices with compromised oxidative defenses, preliminary results have been obtained with paraquat, nitrofurantoin, and 2,3-dimethoxy-1,4-naphthoquinone.

  16. Automatic Solitary Lung Nodule Detection in Computed Tomography Images Slices

    Science.gov (United States)

    Sentana, I. W. B.; Jawas, N.; Asri, S. A.

    2018-01-01

    A lung nodule is an early indicator of some lung diseases, including lung cancer. In Computed Tomography (CT) images, a nodule appears as a shape that is brighter than the surrounding lung. This research aims to develop an application that automatically detects lung nodules in CT images. The algorithm comprises several steps: image acquisition and conversion, image binarization, lung segmentation, blob detection, and classification. Data acquisition takes the images slice by slice from the original *.dicom format, and each image slice is then converted into the *.tif image format. Binarization, using the Otsu algorithm, separates the background and foreground parts of each image slice. After removing the background, the next step is to segment the lung region only, so that nodules can be localized more easily. The Otsu algorithm is used again to detect nodule blobs in the localized lung area. The final step applies a Support Vector Machine (SVM) to classify the nodules. The application has succeeded in detecting nearly round nodules above a certain size threshold. The results show drawbacks in the thresholding of nodule size and shape that need to be addressed in the next part of the research. The algorithm also cannot detect nodules attached to the lung wall and lung channel, since the search depends only on colour differences.
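
    The Otsu binarization step used twice in such a pipeline picks the threshold that maximises the between-class variance of the grey-level histogram. A minimal self-contained sketch (a generic implementation, not the paper's code):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: return the grey level that maximises the
    between-class variance of the image histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()                      # normalised histogram
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * centers)                # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

Thresholding an image with the returned value then yields the foreground/background separation the pipeline feeds into segmentation and blob detection.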

  17. The Sliced Pineapple Grid Feature for Predicting Grasping Affordances

    DEFF Research Database (Denmark)

    Thomsen, Mikkel Tang; Kraft, Dirk; Krüger, Norbert

    2017-01-01

    The problem of grasping unknown objects utilising vision is addressed in this work by introducing a novel feature, the Sliced Pineapple Grid Feature (SPGF). The SPGF encode semi-local surfaces and allows for distinguishing structures such as “walls”,“edges” and “rims”. These structures are shown...

  18. Water-activity of dehydrated guava slices sweeteners

    International Nuclear Information System (INIS)

    Ayub, M.; Zeb, A.; Ullah, J.

    2005-01-01

    A study was carried out to investigate the individual and combined effects of caloric sweeteners (sucrose, glucose and fructose) and non-caloric sweeteners (saccharin, cyclamate and aspartame), along with antioxidants (citric acid and ascorbic acid) and chemical preservatives (potassium metabisulphite and potassium sorbate), on the water activity (a_w) of dehydrated guava slices. Different dilutions of caloric sweeteners (20, 30, 40 and 50 °Brix) and non-caloric sweeteners (equivalent to sucrose sweetness) were used. Guava slices were osmotically dehydrated in these solutions and then dehydrated, initially at 0 and then at 60 °C, to a final moisture content of 20-25%. Guava slices prepared with sucrose:glucose 7:3, potassium metabisulphite, ascorbic acid and citric acid produced the best quality products, with minimum a_w and the best overall sensory characteristics. The analysis showed that the treatments and their various concentrations had a significant effect (p = 0.05) on the a_w of dehydrated guava slices. (author)

  19. Colour behaviour on mango ( Mangifera indica ) slices self ...

    African Journals Online (AJOL)

    The effect of the syrup composition on behaviour colour of self stabilized mango slices in glass jars by hurdle technology during 180 days of storage was studied through 26-2 fractional factorial design. L* (lightness), a* (redness and greenness), and b* (yellowness and blueness) values were measured with a colorimeter ...

  20. A novel lung slice system with compromised antioxidant defenses

    International Nuclear Information System (INIS)

    Hardwick, S.J.; Adam, A.; Cohen, G.M.; Smith, L.L.

    1990-01-01

    In order to facilitate the study of oxidative stress in lung tissue, rat lung slices with impaired antioxidant defenses were prepared and used. Incubation of lung slices with the antineoplastic agent 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU) (100 μM) in an amino acid-rich medium for 45 min produced a near-maximal (approximately 85%), irreversible inhibition of glutathione reductase, accompanied by only a modest (approximately 15%) decrease in pulmonary nonprotein sulfhydryls (NPSH) and no alteration in intracellular ATP, NADP⁺, and NADPH levels. The amounts of NADP(H), ATP, and NPSH were stable over a 4-hr incubation period following the removal from BCNU. The viability of the system was further evaluated by measuring the rate of evolution of ¹⁴CO₂ from D-[¹⁴C(U)]-glucose. The rates of evolution were almost identical in the compromised system when compared with control slices over a 4-hr time period. By using slices with compromised oxidative defenses, preliminary results have been obtained with paraquat, nitrofurantoin, and 2,3-dimethoxy-1,4-naphthoquinone

  1. Three-dimensional electrode array for brain slice culture

    DEFF Research Database (Denmark)

    Vazquez Rodriguez, Patricia

    Multielectrode arrays (MEAs) are arrays of mostly micrometer-sized electrodes that have been used extensively to stimulate and record electrical activity from neuronal networks. Using them to analyze brain slices can provide insight into the interactions between neurons, e...

  2. Gravitational clustering of galaxies in the CfA slice

    International Nuclear Information System (INIS)

    Crane, P.; Saslaw, W.C.

    1988-01-01

    The clustering properties of the galaxies in the CfA slice have been analyzed by comparing the properties of the neighbor distributions with the predictions of gravitational clustering theory. The agreement is excellent and implies that the observed structures can be explained by gravitational effects alone and do not require exotic explanations.

  3. Long-term brain slice culturing in a microfluidic platform

    DEFF Research Database (Denmark)

    Vedarethinam, Indumathi; Avaliani, N.; Tønnesen, J.

    2011-01-01

    In this work, we present the development of a transparent poly(methyl methacrylate) (PMMA) based microfluidic culture system for handling long-term brain slice cultures independent of an incubator. The different stages of system development have been validated by culturing GFP producing brain sli...

  4. Pyrethroid insecticides evoke neurotransmitter release from rabbit striatal slices

    International Nuclear Information System (INIS)

    Eells, J.T.; Dubocovich, M.L.

    1988-01-01

    The effects of the synthetic pyrethroid insecticide fenvalerate ([R,S]-alpha-cyano-3-phenoxybenzyl[R,S]-2-(4-chlorophenyl)-3-methylbutyrate) on neurotransmitter release in rabbit brain slices were investigated. Fenvalerate evoked a calcium-dependent release of [3H]dopamine and [3H]acetylcholine from rabbit striatal slices that was concentration-dependent and specific for the toxic stereoisomer of the insecticide. The release of [3H]dopamine and [3H]acetylcholine by fenvalerate was modulated by D2 dopamine receptor activation and antagonized completely by the sodium channel blocker, tetrodotoxin. These findings are consistent with an action of fenvalerate on the voltage-dependent sodium channels of the presynaptic membrane resulting in membrane depolarization, and the release of dopamine and acetylcholine by a calcium-dependent exocytotic process. In contrast to results obtained in striatal slices, fenvalerate did not elicit the release of [3H]norepinephrine or [3H]acetylcholine from rabbit hippocampal slices, indicative of regional differences in sensitivity to type II pyrethroid actions.

  5. The effect of propofol on CA1 pyramidal cell excitability and GABAA-mediated inhibition in the rat hippocampal slice.

    Science.gov (United States)

    Albertson, T E; Walby, W F; Stark, L G; Joy, R M

    1996-05-24

    An in vitro paired-pulse orthodromic stimulation technique was used to examine the effects of propofol on excitatory afferent terminals, CA1 pyramidal cells and recurrent collateral evoked inhibition in the rat hippocampal slice. Hippocampal slices 400 microns thick were perfused with oxygenated artificial cerebrospinal fluid, and electrodes were placed in the CA1 region to record extracellular field population spike (PS) or excitatory postsynaptic potential (EPSP) responses to stimulation of Schaffer collateral/commissural fibers. Gamma-aminobutyric acid (GABA)-mediated recurrent inhibition was measured using a paired-pulse technique. The major effect of propofol (7-28 microM) was a dose- and time-dependent increase in the intensity and duration of GABA-mediated inhibition. This propofol effect could be rapidly and completely reversed by exposure to known GABAA antagonists, including picrotoxin, bicuculline and pentylenetetrazol. It was also reversed by the chloride channel antagonist 4,4'-diisothiocyanostilbene-2,2'-disulfonic acid (DIDS). It was not antagonized by central (flumazenil) or peripheral (PK11195) benzodiazepine antagonists. Reversal of endogenous inhibition was also noted with the antagonists picrotoxin and pentylenetetrazol. In input/output curves constructed over a range of stimulus intensities, propofol caused only a small enhancement of EPSPs at the higher intensities and had no effect on PS amplitudes. These studies are consistent with propofol acting through a GABAA-chloride channel mechanism to produce its effect on recurrent collateral evoked inhibition in the rat hippocampal slice.

  6. Coating thickness measurement

    International Nuclear Information System (INIS)

    1976-12-01

    The standard specifies measurements of the coating thickness, which make use of beta backscattering and/or x-ray fluorescence. For commonly used combinations of coating material and base material the appropriate measuring ranges and radionuclides to be used are given for continuous as well as for discontinuous measurements

  7. Comparison of iterative model, hybrid iterative, and filtered back projection reconstruction techniques in low-dose brain CT: impact of thin-slice imaging

    Energy Technology Data Exchange (ETDEWEB)

    Nakaura, Takeshi; Iyama, Yuji; Kidoh, Masafumi; Yokoyama, Koichi [Amakusa Medical Center, Diagnostic Radiology, Amakusa, Kumamoto (Japan); Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Oda, Seitaro; Yamashita, Yasuyuki [Kumamoto University, Department of Diagnostic Radiology, Graduate School of Medical Sciences, Kumamoto (Japan); Tokuyasu, Shinichi [Philips Electronics, Kumamoto (Japan); Harada, Kazunori [Amakusa Medical Center, Department of Surgery, Kumamoto (Japan)

    2016-03-15

    The purpose of this study was to evaluate the utility of iterative model reconstruction (IMR) in brain CT, especially with thin-slice images. This prospective study received institutional review board approval, and prior informed consent to participate was obtained from all patients. We enrolled 34 patients who underwent brain CT and reconstructed axial images with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and IMR at 1 and 5 mm slice thicknesses. The CT number, image noise, contrast, and contrast-to-noise ratio (CNR) between the thalamus and internal capsule, and the rate of increase of image noise between 1 and 5 mm thickness images, were assessed for each reconstruction method. Two independent radiologists assessed image contrast, image noise, image sharpness, and overall image quality on a 4-point scale. The CNRs at 1 and 5 mm slice thickness were significantly higher with IMR (1.2 ± 0.6 and 2.2 ± 0.8, respectively) than with FBP (0.4 ± 0.3 and 1.0 ± 0.4, respectively) and HIR (0.5 ± 0.3 and 1.2 ± 0.4, respectively) (p < 0.01). The mean rate of increase in noise from 5 to 1 mm thickness images was significantly lower with IMR (1.7 ± 0.3) than with FBP (2.3 ± 0.3) and HIR (2.3 ± 0.4) (p < 0.01). There were no significant differences between the reconstruction techniques in the qualitative analysis of unfamiliar image texture. IMR offers significant noise reduction and higher contrast and CNR in brain CT, especially for thin-slice images, when compared to FBP and HIR. (orig.)
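    The contrast-to-noise ratio compared above can be illustrated with a minimal sketch. It assumes the common definition of CNR, the absolute difference of the two region-of-interest means divided by a pooled noise estimate; the study's exact noise term is not stated in the abstract.

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two regions of interest.

    Uses |mean(a) - mean(b)| over the pooled standard deviation;
    the noise term actually used in the study may differ.
    """
    contrast = abs(np.mean(roi_a) - np.mean(roi_b))
    noise = np.sqrt((np.var(roi_a) + np.var(roi_b)) / 2.0)
    return contrast / noise

# Synthetic ROIs: thalamus ~35 HU, internal capsule ~30 HU, noise sigma 4 HU
rng = np.random.default_rng(0)
thalamus = rng.normal(35.0, 4.0, 500)
capsule = rng.normal(30.0, 4.0, 500)
print(round(cnr(thalamus, capsule), 2))
```

    Lower image noise at a fixed contrast (as reported for IMR) raises this ratio directly, which is why thin slices, whose noise grows fastest under FBP and HIR, benefit most.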

  8. Exploring a new SU(4) symmetry of meson interpolators

    Science.gov (United States)

    Glozman, L. Ya.; Pak, M.

    2015-07-01

    In recent lattice calculations it has been discovered that mesons, upon truncation of the quasizero modes of the Dirac operator, obey a symmetry larger than the SU(2)_L × SU(2)_R × U(1)_A symmetry of the QCD Lagrangian. This symmetry has been suggested to be SU(4) ⊃ SU(2)_L × SU(2)_R × U(1)_A, which mixes not only the u- and d-quarks of a given chirality, but also the left- and right-handed components. Here it is demonstrated that bilinear q̄q interpolating fields of a given spin J ≥ 1 transform into each other according to irreducible representations of SU(4) or, in general, SU(2N_F). This fact, together with the coincidence of the correlation functions, establishes SU(4) as a symmetry of the J ≥ 1 mesons upon quasizero-mode reduction. It is shown that this symmetry is a symmetry of the confining instantaneous charge-charge interaction in QCD. Different subgroups of SU(4), as well as the SU(4) algebra, are explored.

  9. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    Energy Technology Data Exchange (ETDEWEB)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine, E-mail: adam.bouland@aya.yale.edu, E-mail: richard.easther@yale.edu, E-mail: krosenfeld@cfa.harvard.edu [Department of Physics, Yale University, New Haven CT 06520 (United States)

    2011-05-01

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.

  10. Caching and interpolated likelihoods: accelerating cosmological Monte Carlo Markov chains

    International Nuclear Information System (INIS)

    Bouland, Adam; Easther, Richard; Rosenfeld, Katherine

    2011-01-01

    We describe a novel approach to accelerating Monte Carlo Markov Chains. Our focus is cosmological parameter estimation, but the algorithm is applicable to any problem for which the likelihood surface is a smooth function of the free parameters and computationally expensive to evaluate. We generate a high-order interpolating polynomial for the log-likelihood using the first points gathered by the Markov chains as a training set. This polynomial then accurately computes the majority of the likelihoods needed in the latter parts of the chains. We implement a simple version of this algorithm as a patch (InterpMC) to CosmoMC and show that it accelerates parameter estimation by a factor of between two and four for well-converged chains. The current code is primarily intended as a "proof of concept", and we argue that there is considerable room for further performance gains. Unlike other approaches to accelerating parameter fits, we make no use of precomputed training sets or special choices of variables, and InterpMC is almost entirely transparent to the user.
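    The core idea, fitting an interpolating polynomial to the log-likelihood over early chain samples and reusing it as a cheap surrogate later, can be sketched in one dimension. The Gaussian toy likelihood and all parameter values here are illustrative assumptions; InterpMC itself works in the full cosmological parameter space with accuracy checks against the true likelihood.

```python
import numpy as np

def expensive_loglike(theta):
    # Stand-in for a costly likelihood evaluation (assumed 1-D Gaussian toy model).
    return -0.5 * (theta - 1.3) ** 2 / 0.4 ** 2

# Training set: the first points visited by the chain.
train_x = np.linspace(-1.0, 3.0, 25)
train_y = np.array([expensive_loglike(x) for x in train_x])

# High-order interpolating polynomial for the log-likelihood.
coeffs = np.polyfit(train_x, train_y, deg=6)
surrogate = np.poly1d(coeffs)

# Later chain steps evaluate the cheap surrogate instead of the full likelihood.
theta = 0.7
print(abs(surrogate(theta) - expensive_loglike(theta)) < 1e-6)
```

    Because the toy log-likelihood is smooth, the polynomial reproduces it essentially exactly inside the training range; the smoothness assumption is exactly the condition the abstract states for the method to apply.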

  11. Statistical analysis and interpolation of compositional data in materials science.

    Science.gov (United States)

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
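    A minimal sketch of the simplex-aware processing the abstract describes: move compositions into Euclidean space with the centred log-ratio (CLR) transform, operate there (here, a simple interpolation between two compositions), and close the result back onto the simplex. The paper's actual workflow and choice of log-ratio transform may differ.

```python
import numpy as np

def clr(x):
    """Centred log-ratio transform: maps a (strictly positive) composition
    from the simplex to Euclidean space, where ordinary statistics apply."""
    x = np.asarray(x, dtype=float)
    g = np.exp(np.mean(np.log(x)))  # geometric mean of the parts
    return np.log(x / g)

def clr_inv(y):
    """Inverse CLR: exponentiate and re-close so the parts sum to 1."""
    e = np.exp(y)
    return e / e.sum()

# Interpolate between two measured compositions in CLR space,
# then close the result back onto the simplex.
a = np.array([0.6, 0.3, 0.1])
b = np.array([0.2, 0.5, 0.3])
mid = clr_inv(0.5 * (clr(a) + clr(b)))
print(mid.sum())  # the interpolated point is again a valid composition
```

    Interpolating the raw fractions instead would also sum to 1 here, but it is not subcomposition-coherent; the log-ratio route is what keeps the statistics invariant under the number of reported elements.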

  12. Insect brains use image interpolation mechanisms to recognise rotated objects.

    Directory of Open Access Journals (Sweden)

    Adrian G Dyer

    Full Text Available Recognising complex three-dimensional objects presents significant challenges to visual systems when these objects are rotated in depth. The image processing requirements for reliable individual recognition under these circumstances are computationally intensive, since local features and their spatial relationships may change significantly as an object is rotated in the horizontal plane. Visual experience is known to be important for primate brains learning to recognise rotated objects, but it is currently unknown how animals with comparatively simple brains deal with the problem of reliably recognising objects seen from different viewpoints. We show that the miniature brain of honeybees initially demonstrates a low tolerance for novel views of complex shapes (e.g. human faces), but can learn to recognise novel views of stimuli by interpolating between, or 'averaging', views they have experienced. The finding that visual experience is also important for bees has important implications for understanding how three-dimensional, biologically relevant objects like flowers are recognised in complex environments, and for how machine vision might be taught to solve related visual problems.

  13. Combining the Hanning windowed interpolated FFT in both directions

    Science.gov (United States)

    Chen, Kui Fu; Li, Yan Feng

    2008-06-01

    The interpolated fast Fourier transform (IFFT) has been proposed as a way to eliminate the picket fence effect (PFE) of the fast Fourier transform. The modulus-based IFFT, cited in most relevant references, makes use of only the 1st and 2nd highest spectral lines. An approach using three principal spectral lines is proposed. This new approach combines both directions of the complex-spectrum-based IFFT with the Hanning window. The optimal weight to minimize the estimation variance is established from a first-order Taylor series expansion of the noise interference. A numerical simulation is carried out, and the results are compared with the Cramer-Rao bound. It is demonstrated that the proposed approach has a lower estimation variance than the two-spectral-line approach. The improvement depends on how far the sampling deviates from the coherent condition, the best case being a variance reduction of 2/7. However, it is also shown that the estimation variance of the Hanning-windowed IFFT is significantly higher than that without windowing.
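    For reference, the classical two-spectral-line Hanning-windowed IFFT that the three-line approach improves upon can be sketched as follows. The closed form δ = (2r − 1)/(r + 1), with r the ratio of the two highest windowed spectral magnitudes, follows from the Hann window's transform; the signal parameters below are illustrative.

```python
import numpy as np

def hann_ifft_freq(x, fs):
    """Estimate a single sinusoid's frequency with the two-line
    Hanning-windowed interpolated FFT (the baseline the proposed
    three-line, both-directions approach improves on)."""
    n = len(x)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = int(np.argmax(spec[1:-1])) + 1  # peak bin, skipping DC and Nyquist
    # Interpolate toward the larger of the two neighbouring lines.
    if spec[k + 1] >= spec[k - 1]:
        r = spec[k + 1] / spec[k]
        delta = (2.0 * r - 1.0) / (r + 1.0)
    else:
        r = spec[k - 1] / spec[k]
        delta = -(2.0 * r - 1.0) / (r + 1.0)
    return (k + delta) * fs / n

fs, n = 1000.0, 1024
t = np.arange(n) / fs
f_true = 123.4  # deliberately off-bin, so the raw FFT peak suffers the PFE
x = np.sin(2 * np.pi * f_true * t)
print(round(hann_ifft_freq(x, fs), 2))  # close to 123.4
```

    Without the interpolation step the estimate would be quantized to the bin spacing fs/n ≈ 0.98 Hz, which is exactly the picket fence effect the IFFT removes.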

  14. Radon-domain interferometric interpolation for reconstruction of the near-offset gap in marine seismic data

    Science.gov (United States)

    Xu, Zhuo; Sopher, Daniel; Juhlin, Christopher; Han, Liguo; Gong, Xiangbo

    2018-04-01

    In towed marine seismic data acquisition, a gap between the source and the nearest recording channel is typical. Therefore, extrapolation of the missing near-offset traces is often required to avoid unwanted effects in subsequent data processing steps. However, most existing interpolation methods perform poorly when extrapolating traces. Interferometric interpolation methods are one particular method that have been developed for filling in trace gaps in shot gathers. Interferometry-type interpolation methods differ from conventional interpolation methods as they utilize information from several adjacent shot records to fill in the missing traces. In this study, we aim to improve upon the results generated by conventional time-space domain interferometric interpolation by performing interferometric interpolation in the Radon domain, in order to overcome the effects of irregular data sampling and limited source-receiver aperture. We apply both time-space and Radon-domain interferometric interpolation methods to the Sigsbee2B synthetic dataset and a real towed marine dataset from the Baltic Sea with the primary aim to improve the image of the seabed through extrapolation into the near-offset gap. Radon-domain interferometric interpolation performs better at interpolating the missing near-offset traces than conventional interferometric interpolation when applied to data with irregular geometry and limited source-receiver aperture. We also compare the interferometric interpolated results with those obtained using solely Radon transform (RT) based interpolation and show that interferometry-type interpolation performs better than solely RT-based interpolation when extrapolating the missing near-offset traces. After data processing, we show that the image of the seabed is improved by performing interferometry-type interpolation, especially when Radon-domain interferometric interpolation is applied.

  15. Linear and Quadratic Interpolators Using Truncated-Matrix Multipliers and Squarers

    Directory of Open Access Journals (Sweden)

    E. George Walters III

    2015-11-01

    Full Text Available This paper presents a technique for designing linear and quadratic interpolators for function approximation using truncated multipliers and squarers. Initial coefficient values are found using a Chebyshev-series approximation and then adjusted through exhaustive simulation to minimize the maximum absolute error of the interpolator output. This technique is suitable for any function and any precision up to 24 bits (IEEE single precision). Designs for linear and quadratic interpolators that implement the 1/x, 1/√x, log2(1+2^x), log2(x) and 2^x functions are presented and analyzed as examples. Results show that a proposed 24-bit interpolator computing 1/x with a design specification of ±1 unit in the last place (ulp) of the product uses 16.4% less area and 15.3% less power than a comparable standard interpolator with the same error specification. Sixteen-bit linear interpolators for other functions are shown to use up to 17.3% less area and 12.1% less power, and 16-bit quadratic interpolators are shown to use up to 25.8% less area and 24.7% less power.
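    The table-based linear interpolation such hardware implements can be sketched in software, here for 1/x on [1, 2). As a simplifying assumption the per-segment coefficients are taken directly from the segment endpoints rather than via the paper's Chebyshev-plus-exhaustive-adjustment procedure, and no multiplier truncation is modelled.

```python
import numpy as np

def build_table(f, n_seg, lo, hi):
    """Per-segment (offset, slope) coefficients for a piecewise-linear
    approximation of f on [lo, hi): the software analogue of the
    coefficient ROM in a hardware interpolator."""
    edges = np.linspace(lo, hi, n_seg + 1)
    c0 = f(edges[:-1])                                    # value at segment start
    c1 = (f(edges[1:]) - f(edges[:-1])) / (edges[1:] - edges[:-1])  # slope
    return edges, c0, c1

def interp(x, edges, c0, c1):
    """Evaluate c0[i] + c1[i] * (x - edges[i]) on the segment containing x."""
    i = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, len(c0) - 1)
    return c0[i] + c1[i] * (x - edges[i])

edges, c0, c1 = build_table(lambda v: 1.0 / v, 256, 1.0, 2.0)
xs = np.linspace(1.0, 2.0 - 1e-9, 10001)
err = np.max(np.abs(interp(xs, edges, c0, c1) - 1.0 / xs))
print(err < 2 ** -16)  # 256 segments already beat 16-bit accuracy for 1/x
```

    The standard error bound max|f''| · h²/8 with h = 1/256 gives about 3.8e-6 here, which is why a modest table suffices; the paper's coefficient adjustment then squeezes the error budget enough to tolerate truncated multipliers.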

  16. EBSDinterp 1.0: A MATLAB® Program to Perform Microstructurally Constrained Interpolation of EBSD Data.

    Science.gov (United States)

    Pearce, Mark A

    2015-08-01

    EBSDinterp is a graphical user interface (GUI)-based MATLAB® program to perform microstructurally constrained interpolation of nonindexed electron backscatter diffraction data points. The area available for interpolation is restricted using variations in pattern quality or band contrast (BC). Areas of low BC are not available for interpolation, and therefore cannot be erroneously filled by adjacent grains "growing" into them. Points with the most indexed neighbors are interpolated first, and the required number of neighbors is reduced with each successive round until a minimum number of neighbors is reached. Further iterations allow more data points to be filled by reducing the BC threshold. This method ensures that the best quality points (those with high BC and most neighbors) are interpolated first, and that the interpolation is restricted to grain interiors before adjacent grains are grown together to produce a complete microstructure. The algorithm is implemented through a GUI, taking advantage of MATLAB®'s parallel processing toolbox to perform the interpolations rapidly so that a variety of parameters can be tested to ensure that the final microstructures are robust and artifact-free. The software is freely available through the CSIRO Data Access Portal (doi:10.4225/08/5510090C6E620) as both a compiled Windows executable and as source code.
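    The neighbour-count-ordered fill can be sketched on a plain 2-D label grid. This is a simplified stand-in for the published algorithm: integer grain labels instead of crystal orientations, 0 marking non-indexed points, and a boolean mask standing in for the band-contrast threshold.

```python
import numpy as np

def fill_nonindexed(labels, allowed, min_neighbours=4):
    """Fill non-indexed points (label 0) with the modal label of their
    indexed 8-neighbours, most-constrained points first: the required
    neighbour count starts at 8 and is lowered each round, as in the
    EBSDinterp strategy (sketch only, not the published implementation)."""
    lab = labels.copy()
    for need in range(8, min_neighbours - 1, -1):
        changed = True
        while changed:
            changed = False
            new = lab.copy()
            for r in range(1, lab.shape[0] - 1):
                for c in range(1, lab.shape[1] - 1):
                    if lab[r, c] != 0 or not allowed[r, c]:
                        continue  # already indexed, or below the BC threshold
                    neigh = lab[r - 1:r + 2, c - 1:c + 2].ravel()
                    vals = neigh[neigh != 0]
                    if len(vals) >= need:
                        new[r, c] = np.bincount(vals).argmax()
                        changed = True
            lab = new
    return lab

grid = np.array([
    [1, 1, 1, 2, 2],
    [1, 0, 1, 0, 2],
    [1, 1, 1, 2, 2],
    [1, 1, 1, 2, 2],
    [1, 1, 1, 2, 2],
])
allowed = np.ones_like(grid, dtype=bool)
filled = fill_nonindexed(grid, allowed)
print(filled[1, 1], filled[1, 3])  # each hole takes its surrounding grain's label
```

    Requiring many indexed neighbours first keeps the fill inside grain interiors; only later, permissive rounds let adjacent grains grow together, mirroring the ordering the abstract describes.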

  17. Short-term prediction method of wind speed series based on fractal interpolation

    International Nuclear Information System (INIS)

    Xiu, Chunbo; Wang, Tiantian; Tian, Meng; Li, Yanqing; Cheng, Yi

    2014-01-01

    Highlights: • An improved fractal interpolation prediction method is proposed. • The chaos optimization algorithm is used to obtain the iterated function system. • The fractal extrapolate interpolation prediction of wind speed series is performed. - Abstract: In order to improve the prediction performance of the wind speed series, the rescaled range analysis is used to analyze the fractal characteristics of the wind speed series. An improved fractal interpolation prediction method is proposed to predict the wind speed series whose Hurst exponents are close to 1. An optimization function which is composed of the interpolation error and the constraint items of the vertical scaling factors in the fractal interpolation iterated function system is designed. The chaos optimization algorithm is used to optimize the function to resolve the optimal vertical scaling factors. According to the self-similarity characteristic and the scale invariance, the fractal extrapolate interpolation prediction can be performed by extending the fractal characteristic from internal interval to external interval. Simulation results show that the fractal interpolation prediction method can get better prediction result than others for the wind speed series with the fractal characteristic, and the prediction performance of the proposed method can be improved further because the fractal characteristic of its iterated function system is similar to that of the predicted wind speed series
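    The object this predictor is built on, a fractal interpolation function defined by an affine iterated function system through the data points, can be generated with the chaos game. The vertical scaling factors d[i] are fixed constants here; the paper instead resolves them by chaos optimization, and the data values below are illustrative.

```python
import random

def fif_points(xs, ys, d, n_iter=20000, seed=1):
    """Sample points on the fractal interpolation function through
    (xs[i], ys[i]) with vertical scaling factors d[i] (|d[i]| < 1),
    by iterating randomly chosen maps of the affine IFS
    w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i)."""
    rng = random.Random(seed)
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    span = xN - x0
    maps = []
    for i in range(1, len(xs)):
        # Coefficients chosen so w_i maps (x0, y0) -> (xs[i-1], ys[i-1])
        # and (xN, yN) -> (xs[i], ys[i]).
        a = (xs[i] - xs[i - 1]) / span
        e = (xN * xs[i - 1] - x0 * xs[i]) / span
        c = (ys[i] - ys[i - 1] - d[i - 1] * (yN - y0)) / span
        f = (xN * ys[i - 1] - x0 * ys[i] - d[i - 1] * (xN * y0 - x0 * yN)) / span
        maps.append((a, e, c, d[i - 1], f))
    x, y = x0, y0
    pts = []
    for _ in range(n_iter):
        a, e, c, dd, f = rng.choice(maps)
        x, y = a * x + e, c * x + dd * y + f
        pts.append((x, y))
    return pts

pts = fif_points([0.0, 1.0, 2.0], [0.0, 1.0, 0.5], d=[0.3, 0.3], n_iter=5000)
print(all(0.0 <= x <= 2.0 for x, _ in pts))  # the attractor stays over [x0, xN]
```

    Extrapolated prediction then amounts to extending this self-similar structure beyond the fitted interval, which is why the method works best when the series' Hurst analysis confirms a fractal character.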

  18. Coating thickness measuring device

    International Nuclear Information System (INIS)

    Joffe, B.B.; Sawyer, B.E.; Spongr, J.J.

    1984-01-01

    A device especially adapted for measuring the thickness of coatings on small, complexly-shaped parts, such as, for example, electronic connectors, electronic contacts, or the like. The device includes a source of beta radiation and a radiation detector whereby backscatter of the radiation from the coated part can be detected and the thickness of the coating ascertained. The radiation source and detector are positioned in overlying relationship to the coated part and a microscope is provided to accurately position the device with respect to the part. Means are provided to control the rate of descent of the radiation source and radiation detector from its suspended position to its operating position and the resulting impact it makes with the coated part to thereby promote uniformity of readings from operator to operator, and also to avoid excessive impact with the part, thereby improving accuracy of measurement and eliminating damage to the parts

  19. Thick melanoma in Tuscany.

    Science.gov (United States)

    Chiarugi, Alessandra; Nardini, Paolo; Borgognoni, Lorenzo; Brandani, Paola; Gerlini, Gianni; Rubegni, Pietro; Lamberti, Arianna; Salvini, Camilla; Lo Scocco, Giovanni; Cecchi, Roberto; Sirna, Riccardo; Lorenzi, Stefano; Gattai, Riccardo; Battistini, Silvio; Crocetti, Emanuele

    2017-03-14

    The epidemiologic trends of cutaneous melanoma are similar in several countries with a Western-type lifestyle: a progressively increasing incidence and a low but non-decreasing mortality, in some places even an increase, especially in the older age groups. In Tuscany, too, there is a steady rise in incidence, with a prevalence of in situ and invasive thin melanomas but also an increase in thick melanomas. Reducing the frequency of thick melanomas is necessary to reduce specific mortality. The objective of the current survey was to compare, in the Tuscan population, thin and thick melanoma cases in a case-case study, seeking the personal and tumour characteristics that may help to customize preventive interventions. RESULTS The results confirmed that older age and lower education level are associated with later detection. The habit of performing skin self-examination proved protective against thick melanoma, as did diagnosis by a doctor. The elements emerging from the survey point to a group of subjects at higher risk of late diagnosis: those aged over 50 carrying fewer constitutional and environmental risk factors (few total and few atypical nevi, lower sun exposure and fewer burns). It is plausible that some people are not reached by prevention messages because they do not recognize themselves in the categories of people at risk for skin cancers described in educational campaigns. Better results in the diagnosis of skin melanoma will require a new strategy: at minimum, rethinking the educational messages to distinguish people at higher risk of developing melanoma from people at higher risk of dying from it, and renewing the active involvement of General Practitioners.

  20. Thick brane solutions

    International Nuclear Information System (INIS)

    Dzhunushaliev, Vladimir; Minamitsuji, Masato; Folomeev, Vladimir

    2010-01-01

    This paper gives a comprehensive review on thick brane solutions and related topics. Such models have attracted much attention from many aspects since the birth of the brane world scenario. In many works, it has been usually assumed that a brane is an infinitely thin object; however, in more general situations, one can no longer assume this. It is also widely considered that more fundamental theories such as string theory would have a minimal length scale. Many multidimensional field theories coupled to gravitation have exact solutions of gravitating topological defects, which can represent our brane world. The inclusion of brane thickness can realize a variety of possible brane world models. Given our understanding, the known solutions can be classified into topologically non-trivial solutions and trivial ones. The former class contains solutions of a single scalar (domain walls), multi-scalar, gauge-Higgs (vortices), Weyl gravity and so on. As an example of the latter class, we consider solutions of two interacting scalar fields. Approaches to obtain cosmological equations in the thick brane world are reviewed. Solutions with spatially extended branes (S-branes) and those with an extra time-like direction are also discussed.