WorldWideScience

Sample records for selective ct algorithm

  1. Parameter selection in limited data cone-beam CT reconstruction using edge-preserving total variation algorithms

    Science.gov (United States)

    Lohvithee, Manasavee; Biguri, Ander; Soleimani, Manuchehr

    2017-12-01

There are a number of powerful total variation (TV) regularization methods that show great promise for limited-data cone-beam CT reconstruction with enhanced image quality. These TV methods require careful selection of the image reconstruction parameters, for which there are no well-established criteria. This paper presents a comprehensive evaluation of parameter selection in a number of major TV-based reconstruction algorithms, and an appropriate way of selecting the values for each individual parameter is suggested. Finally, a new adaptive-weighted projection-controlled steepest descent (AwPCSD) algorithm is presented, which implements an edge-preserving function for CBCT reconstruction with limited data. The proposed algorithm shows significant robustness compared to three other existing algorithms: ASD-POCS, AwASD-POCS and PCSD. The proposed AwPCSD algorithm preserves the edges of the reconstructed images better while having fewer sensitive parameters to tune.
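
    The adaptive-weighted, edge-preserving TV idea can be made concrete with a small sketch. This is not the authors' AwPCSD implementation; the weighting function, the `delta` parameter and the step size are illustrative assumptions.

    ```python
    import numpy as np

    def aw_tv_gradient(img, delta=0.005, eps=1e-8):
        """Subgradient of an edge-preserving (adaptive-weighted) TV term.

        Weights w = exp(-(grad/delta)**2) shrink the smoothing force across
        strong edges so they survive the TV descent. Parameter names are
        hypothetical, chosen only to illustrate the idea.
        """
        gx = np.diff(img, axis=0, append=img[-1:, :])   # forward differences
        gy = np.diff(img, axis=1, append=img[:, -1:])
        wx = np.exp(-(gx / delta) ** 2)                 # edge-preserving weights
        wy = np.exp(-(gy / delta) ** 2)
        mag = np.sqrt(wx * gx ** 2 + wy * gy ** 2 + eps)
        px, py = wx * gx / mag, wy * gy / mag
        # negative divergence of the weighted, normalised gradient field
        div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
        return -div

    # One TV-smoothing step, as interleaved between data-consistency updates:
    img = np.random.rand(64, 64)
    img -= 0.02 * aw_tv_gradient(img)
    ```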

  2. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    Science.gov (United States)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained `One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
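
    The "one-step" framing rests on a polychromatic forward model that maps basis-material maps directly to expected detector counts. A minimal sketch of that model (toy spectra and attenuation values; real coefficients come from tabulated data):

    ```python
    import numpy as np

    def poly_sinogram(A, maps, mu, spectrum):
        """Polychromatic Beer-Lambert forward model for spectral CT.

        A        : (n_rays, n_pix) system matrix
        maps     : (n_mat, n_pix) basis-material maps (e.g. water, bone, metal)
        mu       : (n_mat, n_energies) attenuation of each basis material
        spectrum : (n_energies,) normalised source spectrum
        Returns the expected transmitted intensity per ray.
        """
        line_ints = A @ maps.T          # (n_rays, n_mat) path integrals
        atten = line_ints @ mu          # (n_rays, n_energies)
        return np.exp(-atten) @ spectrum

    # Toy numbers only, to show the shapes involved.
    rng = np.random.default_rng(0)
    A = rng.random((10, 16))
    maps = rng.random((2, 16)) * 0.01
    mu = rng.random((2, 5))
    spectrum = np.full(5, 0.2)
    print(poly_sinogram(A, maps, mu, spectrum).shape)   # (10,)
    ```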

  3. An algorithm for 4D CT image sorting using spatial continuity.

    Science.gov (United States)

    Li, Chen; Liu, Jie

    2013-01-01

4D CT, which can locate the position of a moving tumor throughout the entire respiratory cycle and effectively reduce image artifacts, has been widely used in radiation therapy of tumors. Current 4D CT methods require external surrogates of respiratory motion obtained from extra instruments. However, respiratory signals recorded by these external markers may not always accurately represent internal tumor and organ movements, especially when irregular breathing patterns occur. In this paper we propose a novel automatic 4D CT sorting algorithm that performs without these external surrogates. The sorting algorithm requires the image data to be collected with a cine scan protocol. Beginning with the first couch position, images from the adjacent couch position are selected according to spatial continuity. The process is continued until images from all couch positions are sorted and the entire 3D volume is produced. The algorithm was verified with respiratory phantom image data and clinical image data. The preliminary test results show that the 4D CT images created by our algorithm effectively eliminate motion artifacts and clearly demonstrate the movement of tumor and organs over the breathing period.

  4. First-order convex feasibility algorithms for x-ray CT

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2013-01-01

Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Oftentimes … problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms, which may be particularly useful for CT with limited …

  5. A fully automated non-external marker 4D-CT sorting algorithm using a serial cine scanning protocol.

    Science.gov (United States)

    Carnes, Greg; Gaede, Stewart; Yu, Edward; Van Dyk, Jake; Battista, Jerry; Lee, Ting-Yim

    2009-04-07

Current 4D-CT methods require external marker data to retrospectively sort image data and generate CT volumes. In this work we develop an automated 4D-CT sorting algorithm that performs without the aid of data collected from an external respiratory surrogate. The sorting algorithm requires an overlapping cine scan protocol. The overlapping protocol provides a spatial link between couch positions. Beginning with a starting scan position, images from the adjacent scan position (which spatially match the starting scan position) are selected by maximizing the normalized cross correlation (NCC) of the images at the overlapping slice position. The process is continued by 'daisy chaining' all couch positions using the selected images until an entire 3D volume is produced. The algorithm produced 16 phase volumes to complete a 4D-CT dataset. Additional 4D-CT datasets were also produced using external marker amplitude and phase angle sorting methods. The image quality of the volumes produced by the different methods was quantified by calculating the mean difference of the sorted overlapping slices from adjacent couch positions. The NCC-sorted images showed a significant decrease in the mean difference (p < 0.01) for the five patients.
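
    A minimal sketch of the daisy-chaining step described above: for each new couch position, the candidate cine image whose overlapping slice best matches the current edge slice is kept. Function names and data layout are assumptions, not the authors' code.

    ```python
    import numpy as np

    def ncc(a, b):
        """Normalized cross correlation between two overlap slices."""
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float((a * b).mean())

    def best_match(ref_slice, candidate_slices):
        """Index of the cine image at the next couch position whose
        overlapping slice maximizes NCC with the current one."""
        return int(np.argmax([ncc(ref_slice, c) for c in candidate_slices]))
    ```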

  6. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    International Nuclear Information System (INIS)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias; Scholz, Bernhard; Royalty, Kevin

    2017-01-01

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  7. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    Energy Technology Data Exchange (ETDEWEB)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias [University of Erlangen-Nuremberg, Department of Neuroradiology, Erlangen (Germany); Scholz, Bernhard [Siemens Healthcare GmbH, Forchheim (Germany); Royalty, Kevin [Siemens Medical Solutions, USA, Inc., Hoffman Estates, IL (United States)

    2017-01-15

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  8. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction, which is determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction can provide more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm can achieve better image quality for gantry-tilted CT image reconstruction.

  9. Frequency Selective Non-Linear Blending to Improve Image Quality in Liver CT.

    Science.gov (United States)

    Bongers, M N; Bier, G; Kloth, C; Schabel, C; Fritz, J; Nikolaou, K; Horger, M

    2016-12-01

Purpose: To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Materials and Methods: Our local ethics committee approved this retrospective study; the informed consent requirement was waived. CT exams of 25 patients (60% female, mean age 65 ± 16 years) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma in standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. Results: The post-processing settings for the visualization of the hepatic vasculature were optimal at a center of 115 HU, a delta of 25 HU, and a slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma was significantly increased for liver veins (CNR Standard 1.62 ± 1.10, CNR NLB 3.6 ± 2.94, p = 0.0002) and portal veins (CNR Standard 1.31 ± 0.85, CNR NLB 2.42 ± 3.03, p = 0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26 ± 3.16, SNR Standard 8.85 ± 2.27, p = 0.008). The overall image quality and depiction of the HV were significantly higher on post-processed images (NLB DHV: 4 [3-4.75], Standard DHV: 2 [1.3-2.5], p = …). The frequency selective NLB algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma. Key Points: • Using the new frequency selective non-linear blending algorithm is feasible in contrast …
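
    The center/delta/slope settings suggest a sigmoid weight, driven by the low-frequency image content, that blends the original image with a contrast-boosted version. The sketch below illustrates that reading; it is an assumption-laden illustration, not the published or vendor implementation.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def nlb(img_hu, center=115.0, delta=25.0, slope=5.0):
        """Hypothetical frequency selective non-linear blending sketch.

        A smoothed (low-frequency) copy of the image drives a sigmoid
        weight centred at `center` HU with half-width `delta`; voxels in
        that band are pushed towards a steeper HU ramp, boosting
        vessel-to-parenchyma contrast.
        """
        low = uniform_filter(img_hu, size=5)            # frequency selection
        w = 1.0 / (1.0 + np.exp(-slope * (1 - np.abs(low - center) / delta)))
        enhanced = center + slope * (img_hu - center)   # steeper HU ramp
        return w * enhanced + (1 - w) * img_hu
    ```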

  10. An algorithm for intelligent sorting of CT-related dose parameters

    Science.gov (United States)

    Cook, Tessa S.; Zimmerman, Stefan L.; Steingal, Scott; Boonn, William W.; Kim, Woojin

    2011-03-01

Imaging centers nationwide are seeking innovative means to record and monitor CT-related radiation dose in light of multiple instances of patient over-exposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival and reporting of CT-related dose parameters. Estimation of whole-body effective dose from the CT dose-length product (DLP), an indirect estimate of radiation dose, requires anatomy-specific conversion factors that cannot be applied to the total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., chest CT followed by abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, 'Arterial' could refer to the arterial phase of a triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired, and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series names and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.

  11. An algorithm for intelligent sorting of CT-related dose parameters.

    Science.gov (United States)

    Cook, Tessa S; Zimmerman, Stefan L; Steingall, Scott R; Boonn, William W; Kim, Woojin

    2012-02-01

Imaging centers nationwide are seeking innovative means to record and monitor computed tomography (CT)-related radiation dose in light of multiple instances of patient overexposure to medical radiation. As a solution, we have developed RADIANCE, an automated pipeline for extraction, archival, and reporting of CT-related dose parameters. Estimation of whole-body effective dose from the CT dose length product (DLP), an indirect estimate of radiation dose, requires anatomy-specific conversion factors that cannot be applied to the total DLP, but instead necessitate individual anatomy-based DLPs. A challenge exists because the total DLP reported on a dose sheet often includes multiple separate examinations (e.g., chest CT followed by abdominopelvic CT). Furthermore, the individual reported series DLPs may not be clearly or consistently labeled. For example, "arterial" could refer to the arterial phase of a triple liver CT or the arterial phase of a CT angiogram. To address this problem, we have designed an intelligent algorithm to parse dose sheets for multi-series CT examinations and correctly separate the total DLP into its anatomic components. The algorithm uses information from the departmental PACS to determine how many distinct CT examinations were concurrently performed. Then, it matches the number of distinct accession numbers to the series that were acquired and anatomically matches individual series DLPs to their appropriate CT examinations. This algorithm allows for more accurate dose analytics, but there remain instances where automatic sorting is not feasible. To ultimately improve radiology patient care, we must standardize series names and exam names to unequivocally sort exams by anatomy and correctly estimate whole-body effective dose.
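
    A toy sketch of the anatomy-matching step: series-level DLPs are binned into the concurrent exams indicated by the PACS accession data via a keyword table. The keyword mapping and function names are hypothetical, not RADIANCE's actual logic; ambiguous labels such as "arterial" fail to match and fall back to manual review, mirroring the limitation noted above.

    ```python
    # Illustrative keyword table -- not RADIANCE's actual mapping.
    ANATOMY_KEYWORDS = {
        "chest": ["chest", "thorax", "lung"],
        "abdomen_pelvis": ["abd", "pelvis", "liver", "venous"],
    }

    def sort_series(series, exams):
        """series: list of (description, dlp); exams: anatomy names from PACS.
        Returns ({exam: summed DLP}, [unmatched series for manual review])."""
        totals = {e: 0.0 for e in exams}
        unmatched = []
        for desc, dlp in series:
            hit = next((e for e in exams
                        if any(k in desc.lower() for k in ANATOMY_KEYWORDS[e])),
                       None)
            if hit is None:
                unmatched.append((desc, dlp))   # automatic sorting not feasible
            else:
                totals[hit] += dlp
        return totals, unmatched

    totals, pending = sort_series(
        [("CHEST ROUTINE", 230.0), ("Pelvis venous", 410.0), ("Arterial", 300.0)],
        ["chest", "abdomen_pelvis"])
    print(totals, pending)   # the bare "Arterial" series lands in `pending`
    ```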

  12. Parametric boundary reconstruction algorithm for industrial CT metrology application.

    Science.gov (United States)

    Yin, Zhye; Khare, Kedar; De Man, Bruno

    2009-01-01

High-energy X-ray computed tomography (CT) systems have recently been used to produce high-resolution images in various nondestructive testing and evaluation (NDT/NDE) applications. The accuracy of the dimensional information extracted from CT images is rapidly approaching the accuracy achieved with a coordinate measuring machine (CMM), the conventional approach to acquiring the metrology information directly. CT systems, on the other hand, generate a sinogram which is transformed mathematically into pixel-based images. The dimensional information of the scanned object is then extracted by performing edge detection on the reconstructed CT images. The dimensional accuracy of this approach is limited by the grid size of the pixel-based representation of the CT images, since the edge detection is performed on the pixel grid. Moreover, reconstructed CT images usually display various artifacts due to the underlying physical process, and the resulting object boundaries from the edge detection fail to represent the true boundaries of the scanned object. In this paper, a novel algorithm to reconstruct the boundaries of an object with uniform material composition and uniform density is presented. There are three major benefits in the proposed approach. First, since the boundary parameters are reconstructed instead of image pixels, the complexity of the reconstruction algorithm is significantly reduced. The iterative approach, which can be computationally intensive, becomes practical with the parametric boundary reconstruction. Second, the object of interest in metrology can be represented more directly and accurately by the boundary parameters instead of the image pixels. By eliminating the extra edge detection step, the overall dimensional accuracy and process time can be improved. Third, since the parametric reconstruction approach shares the boundary representation with other conventional metrology modalities such as CMM, boundary information from other modalities can be directly …

  13. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

Recently, due to aging and smoking, the number of emphysema patients has been increasing. The alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  14. Clinical evaluation of 64-slice CT assessment of global left ventricular function using automated cardiac phase selection

    International Nuclear Information System (INIS)

    Joemai, Raoul M.S.; Geleijns, Joemai; Veldkamp, Wouter J.H.; Kroft, Lucia J.M.

    2008-01-01

Left ventricular (LV) function provides prognostic information regarding the morbidity and mortality of patients. An automated cardiac phase selection algorithm has the potential to support the assessment of LV function with computed tomography (CT). This algorithm is clinically evaluated for 64-slice cardiac CT. Examinations of twenty consecutive patients were selected. Electrocardiogram-gated contrast-enhanced CT was performed. Reconstructions were performed using an automated and a manual method, followed by determination of global LV function. Significance was tested using two-sided Student's t-tests. Reductions in post-processing time and storage capacity were estimated. A slightly smaller mean end-systolic volume was found with the automated method (52±18 ml vs 54±17 ml, p=0.02, r=0.99). The mean LV ejection fraction was slightly larger with the automated method (65±8% vs 64±8%, p=0.004, r=0.99). The estimated reduction in post-processing time was up to 5 min per patient, with a potential 80% reduction in data storage. Results of the automated phase selection algorithm are similar to those of the manual method. The automated tool reduces post-processing time, reconstruction time and transfer time. (author)

  15. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifact; (b) it uses a view weight in the back-projection process to reduce motion artifact. To confirm the improvement of our proposed algorithm over existing algorithms, such as the Feldkamp-Davis-Kress (FDK) or SPS algorithm, we compared motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  16. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

Although digital imaging techniques in radiology are developing rapidly, problems arise in the archival storage and communication of image data. This paper reports on a new information preserving data compression algorithm for computed tomographic (CT) images. The algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data obtained with the above process are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
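
    Steps 1, 2, 4 and 5 of this pipeline fit in a short sketch on a single image row (step 3, the custom difference code, is the paper's own design and is omitted; plain binary bit planes are used instead). The 16-bit plane count and function names are assumptions.

    ```python
    import numpy as np

    def run_length(bits):
        """Run lengths of a 0/1 sequence: [0,0,1,1,1] -> [(0,2),(1,3)]."""
        out, cur, n = [], int(bits[0]), 0
        for b in bits:
            if int(b) == cur:
                n += 1
            else:
                out.append((cur, n))
                cur, n = int(b), 1
        out.append((cur, n))
        return out

    def compress_row(row, floor=-900):
        """Threshold air outside the body (step 1), difference-encode along
        the matrix line (step 2), split into bit planes (step 4) and
        run-length encode each plane (step 5)."""
        row = np.maximum(row, floor).astype(np.int32)
        diffs = np.diff(row, prepend=row[:1])
        planes = [(diffs >> b) & 1 for b in range(16)]  # two's-complement planes
        return [run_length(p) for p in planes]
    ```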

  17. Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm

    Science.gov (United States)

    Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin

    2011-01-01

Objective: This article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: The scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding permeation of the separation boundary into normal lung tissue. Conclusions: The proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing. PMID:21412104
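
    The core of such a scheme is the classic minimal-cost-path recurrence, with the guiding realised by pinning the path to the automatically selected start and end points. A generic sketch (the cost map, the pinning convention and the one-column step limit are assumptions, not the authors' exact formulation):

    ```python
    import numpy as np

    def dp_separation_path(cost, start_col, end_col):
        """Minimal-cost top-to-bottom path through `cost` (rows x cols),
        moving at most one column per row, pinned at start/end columns."""
        rows, cols = cost.shape
        acc = np.full((rows, cols), np.inf)
        acc[0, start_col] = cost[0, start_col]
        for r in range(1, rows):
            for c in range(cols):
                prev = acc[r - 1, max(c - 1, 0):min(c + 2, cols)]
                acc[r, c] = cost[r, c] + prev.min()
        path, c = [end_col], end_col            # backtrack from the end point
        for r in range(rows - 1, 0, -1):
            lo = max(c - 1, 0)
            c = lo + int(np.argmin(acc[r - 1, lo:min(c + 2, cols)]))
            path.append(c)
        return path[::-1]
    ```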

  18. Effects of defect pixel correction algorithms for x-ray detectors on image quality in planar projection and volumetric CT data sets

    International Nuclear Information System (INIS)

    Kuttig, Jan; Steiding, Christian; Hupfer, Martin; Karolczak, Marek; Kolditz, Daniel

    2015-01-01

In this study we compared various defect pixel correction methods for reducing artifact appearance within projection images used for computed tomography (CT) reconstructions. Defect pixel correction algorithms were examined with respect to their artifact behaviour within planar projection images as well as in volumetric CT reconstructions. We investigated four algorithms: nearest neighbour, linear and adaptive linear interpolation, and a frequency-selective spectral-domain approach. To characterise the quality of each algorithm in planar image data, we inserted line defects of varying widths and orientations into images. The structure preservation of each algorithm was analysed by corrupting and correcting the image of a slit phantom pattern and by evaluating its line spread function (LSF). The noise preservation was assessed by interpolating corrupted flat images and estimating the noise power spectrum (NPS) of the interpolated region. For the volumetric investigations, we examined the structure and noise preservation within a structured aluminium foam, a mid-contrast cone-beam phantom and a homogeneous polyurethane (PUR) cylinder. The frequency-selective algorithm showed the best structure and noise preservation for planar data of the correction methods tested. For volumetric data it still showed the best noise preservation, whereas its structure preservation was outperformed by the linear interpolation. The frequency-selective spectral-domain approach is recommended for the correction of line defects in planar image data, but its abilities within high-contrast volumes are restricted. In that case, the application of a simple linear interpolation might be the better choice to correct line defects within projection images used for CT. (paper)
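
    For reference, the simplest of the four schemes, linear interpolation across a line defect, fits in a few lines; a minimal sketch (column-defect geometry assumed):

    ```python
    import numpy as np

    def correct_line_defect(proj, cols):
        """Linearly interpolate a vertical defect of one or more detector
        columns from the nearest intact columns, per detector row. The
        frequency-selective method instead restores the band in the
        spectral domain, which preserved noise texture best above."""
        proj = proj.astype(float).copy()
        good = np.setdiff1d(np.arange(proj.shape[1]), cols)
        for row in proj:                 # row views are modified in place
            row[cols] = np.interp(cols, good, row[good])
        return proj
    ```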

  19. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    Science.gov (United States)

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing the radiotherapy treatment plan and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d_MH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of d_MH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons …
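
    The evaluation metric is standard and easy to reproduce: the modified Hausdorff distance of Dubuisson and Jain takes the larger of the two mean nearest-neighbour distances. A sketch over two small landmark point sets (brute-force distances are fine at this scale):

    ```python
    import numpy as np

    def modified_hausdorff(A, B):
        """d_MH between point sets A (n,d) and B (m,d)."""
        d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())
    ```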

  20. Optimization of Proton CT Detector System and Image Reconstruction Algorithm for On-Line Proton Therapy.

    Directory of Open Access Journals (Sweden)

    Chae Young Lee

The purposes of this study were to optimize a proton computed tomography (pCT) system for proton range verification and to confirm the pCT image reconstruction algorithm based on projection images generated with the optimized parameters. For this purpose, we developed a new pCT scanner using the Geometry and Tracking (GEANT) 4.9.6 simulation toolkit. GEANT4 simulations were performed to optimize the geometric parameters representing the detector thickness and the distance between the detectors for pCT. The system consisted of four silicon strip detectors for particle tracking and a calorimeter to measure the residual energies of the individual protons. The optimized pCT system design was then adjusted to ensure that the solution to a compressed sensing (CS)-based convex optimization problem would converge to yield the desired pCT images after a reasonable number of iterative corrections. In particular, we used a total variation-based formulation that has been useful in exploiting prior knowledge about the minimal variations of proton attenuation characteristics in the human body. Examinations performed using our CS algorithm showed that high-quality pCT images could be reconstructed using sets of 72 projections within 20 iterations and without any streaks or noise, which can be caused by under-sampling and proton starvation. Moreover, the images yielded by this CS algorithm were of higher quality than those obtained using other reconstruction algorithms. The optimized pCT scanner system demonstrated the potential to perform high-quality pCT during on-line image-guided proton therapy, without increasing the imaging dose, by applying our CS-based proton CT reconstruction algorithm. Further, the optimized detector system and the CS-based proton CT reconstruction algorithm are potentially useful in on-line proton therapy.

  1. Algorithm of pulmonary emphysema extraction using thoracic 3-D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2008-03-01

The number of emphysema patients tends to increase due to aging and smoking. Emphysema destroys the alveoli and repair is impossible, thus early detection is essential. The CT value of lung tissue decreases with the destruction of lung structure, becoming lower than that of normal lung; this low density absorption region is referred to as a Low Attenuation Area (LAA). So far, the conventional way of extracting the LAA by simple thresholding has been proposed. However, the CT values of a CT image fluctuate with the measurement conditions, with various bias components such as inspiration, expiration and congestion. It is therefore necessary to consider these bias components in the extraction of the LAA. We propose an LAA extraction algorithm that removes these bias components. The algorithm was applied to a phantom image. Then, using low dose CT (normal: 30 cases, obstructive lung disease: 26 cases), we extracted early stage LAA and quantitatively analyzed the lung lobes using lung structure.

  2. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Science.gov (United States)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

Recently, due to aging and smoking, the number of emphysema patients has been increasing. The alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. Applying the algorithm to 100 thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  3. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminating detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r, E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no hardware changes to a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.

  4. A trial to reduce cardiac motion artifact on HR-CT images of the lung with the use of subsecond scan and special cine reconstruction algorithm

    Energy Technology Data Exchange (ETDEWEB)

Sakai, Fumikazu; Tsuuchi, Yasuhiko; Suzuki, Keiko; Ueno, Keiko; Yamada, Takayuki; Okawa, Tomohiko [Tokyo Women's Medical Coll. (Japan); Yun, Shen; Horiuchi, Tetsuya; Kimura, Fumiko

    1998-05-01

We describe our trial to reduce cardiac motion artifacts on HR-CT images caused by cardiac pulsation by combining the use of subsecond CT (scan time 0.8 s) and a special cine reconstruction algorithm (a cine reconstruction algorithm with 180-degree helical interpolation). Eleven to 51 HR-CT images were reconstructed with the special cine reconstruction algorithm at a pitch of 0.1 (0.08 s) from the data obtained by two to six contiguous rotation scans at the same level. Images with the fewest cardiac motion artifacts were selected for evaluation. These images were compared with those reconstructed with a conventional cine reconstruction algorithm and step-by-step scanning. In spite of its increased radiation exposure, technical complexity and slight degradation of spatial resolution, our method was useful in reducing cardiac motion artifacts on HR-CT images in regions adjacent to the heart. (author)

  5. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images

    Science.gov (United States)

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing the radiotherapy treatment plan and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Registration (alignment) of whole-body PET and CT images is therefore a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (d_MH) was used to evaluate the registration accuracy of the two algorithms. Over all patients, the mean values and standard deviations (SDs) of d_MH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI …

  6. Segmentation of Lung Structures in CT

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau

This thesis proposes and evaluates new algorithms for segmenting various lung structures in computed tomography (CT) images, namely the lungs, airway trees and vessel trees. The main objective of these algorithms is to facilitate a better platform for studying Chronic Obstructive Pulmonary Disease … 200 randomly selected CT scans were manually evaluated by medical experts, and only negligible or minor errors were found in nine scans. The proposed algorithm has been used to study how changes in smoking behavior affect CT-based emphysema quantification. The algorithms for segmenting the airway …

  7. Improving Polyp Detection Algorithms for CT Colonography: Pareto Front Approach.

    Science.gov (United States)

    Huang, Adam; Li, Jiang; Summers, Ronald M; Petrick, Nicholas; Hara, Amy K

    2010-03-21

We investigated a Pareto front approach to improving polyp detection algorithms for CT colonography (CTC). A dataset of 56 CTC colon surfaces with 87 proven positive detections of 53 polyps sized 4 to 60 mm was used to evaluate the performance of a one-step and a two-step curvature-based region growing algorithm. The algorithmic performance was statistically evaluated and compared based on the Pareto optimal solutions from 20 experiments by evolutionary algorithms. The false positive rate was lower (p …). The Pareto optimization process can effectively help in fine-tuning and redesigning polyp detection algorithms.
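
    Extracting the non-dominated operating points from repeated experiments is straightforward. A sketch, assuming each point is stored as (false-positive rate, 1 - sensitivity) so that smaller is better on both axes:

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the non-dominated subset: a point is dropped if some other
        point is <= in every coordinate and < in at least one."""
        pts = np.asarray(points, dtype=float)
        keep = [i for i, p in enumerate(pts)
                if not np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))]
        return pts[keep]

    print(pareto_front([(0.2, 0.10), (0.3, 0.05), (0.3, 0.12)]))
    # the dominated (0.3, 0.12) point is removed
    ```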

  8. TV-constrained incremental algorithms for low-intensity CT image reconstruction

    DEFF Research Database (Denmark)

    Rose, Sean D.; Andersen, Martin S.; Sidky, Emil Y.

    2015-01-01

… constraint can be guided by an image reconstructed by filtered backprojection (FBP). We apply our algorithm to low-dose synchrotron X-ray CT data from the Advanced Photon Source (APS) at Argonne National Labs (ANL) to demonstrate its potential utility. We find that the algorithm provides a means of edge-preserving …

  9. Convergence of SART + OS + TV iterative reconstruction algorithm for optical CT imaging of gel dosimeters

    International Nuclear Information System (INIS)

    Du, Yi; Yu, Gongyi; Xiang, Xincheng; Wang, Xiangang; De Deene, Yves

    2017-01-01

Computational simulations are used to investigate the convergence of a hybrid iterative algorithm for optical CT reconstruction, i.e. the simultaneous algebraic reconstruction technique (SART) integrated with ordered subsets (OS) iteration and total variation (TV) minimization regularization, or SART+OS+TV for short. The influence of parameter selection on convergence, spatial dose gradient integrity, MTF and convergence speed is discussed. It is shown that the results of the SART+OS+TV algorithm converge to the true values without significant bias, and that the MTF and convergence speed are affected by the different parameter sets used for the iterative calculation. In conclusion, the performance of SART+OS+TV depends on parameter selection, which implies that careful parameter tuning is necessary for proper spatial performance and fast convergence. (paper)
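
    A bare-bones illustration of the hybrid structure, one SART pass followed by a few TV-descent steps, on a 1-D toy system (dense nonnegative matrix, no ordered subsets; step sizes and iteration counts are arbitrary assumptions):

    ```python
    import numpy as np

    def sart_tv(A, b, n_iter=20, tv_steps=5, alpha=0.02):
        """SART updates interleaved with subgradient descent on 1-D TV."""
        x = np.zeros(A.shape[1])
        row_sum = np.where(A.sum(axis=1) == 0, 1, A.sum(axis=1))
        col_sum = np.where(A.sum(axis=0) == 0, 1, A.sum(axis=0))
        for _ in range(n_iter):
            x += (A.T @ ((b - A @ x) / row_sum)) / col_sum   # SART update
            for _ in range(tv_steps):                        # TV minimisation
                g = np.sign(np.diff(x, prepend=x[:1]))
                x -= alpha * (g - np.append(g[1:], 0.0))
        return x

    rng = np.random.default_rng(1)
    A = rng.random((40, 20))
    x_true = np.repeat([0.0, 1.0], 10)     # piecewise-constant "dose" profile
    print(np.round(sart_tv(A, A @ x_true), 2))
    ```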

  10. Fourier rebinning algorithm for inverse geometry CT.

    Science.gov (United States)

Mazin, Samuel R; Pelc, Norbert J

    2008-11-01

    Inverse geometry computed tomography (IGCT) is a new type of volumetric CT geometry that employs a large array of x-ray sources opposite a smaller detector array. Volumetric coverage and high isotropic resolution produce very large data sets and therefore require a computationally efficient three-dimensional reconstruction algorithm. The purpose of this work was to adapt and evaluate a fast algorithm based on Defrise's Fourier rebinning (FORE), originally developed for positron emission tomography. The results were compared with the average of FDK reconstructions from each source row. The FORE algorithm is an order of magnitude faster than the FDK-type method for the case of 11 source rows. In the center of the field-of-view both algorithms exhibited the same resolution and noise performance. FORE exhibited some resolution loss (and less noise) in the periphery of the field-of-view. FORE appears to be a fast and reasonably accurate reconstruction method for IGCT.

  11. Algorithms of CT value correction for reconstructing a radiotherapy simulation image through axial CT images

    International Nuclear Information System (INIS)

    Ogino, Takashi; Egawa, Sunao

    1991-01-01

New algorithms of CT value correction for reconstructing a radiotherapy simulation image from axial CT images were developed. One, designated the plane weighting method, corrects the CT value in proportion to the position of the beam element passing through the voxel. The other, designated the solid weighting method, corrects the CT value in proportion to the length of the beam element passing through the voxel and the volume of the voxel. Phantom experiments showed fair spatial resolution in the transverse direction. In the longitudinal direction, however, spatial resolution below the slice thickness could not be obtained. Contrast resolution was equivalent for both methods. In patient studies, the reconstructed radiotherapy simulation image was quite similar in visual perception of density resolution to a simulation film taken with an X-ray simulator. (author)

  12. Optimization of CT image reconstruction algorithms for the Lung Tissue Research Consortium (LTRC)

    Science.gov (United States)

    McCollough, Cynthia; Zhang, Jie; Bruesewitz, Michael; Bartholmai, Brian

    2006-03-01

To create a repository of clinical data, CT images and tissue samples and to more clearly understand the pathogenetic features of pulmonary fibrosis and emphysema, the National Heart, Lung, and Blood Institute (NHLBI) launched a cooperative effort known as the Lung Tissue Research Consortium (LTRC). The CT images for the LTRC effort must contain accurate CT numbers in order to characterize tissues, and must have high spatial resolution to show fine anatomic structures. This study was performed to optimize the CT image reconstruction algorithms to achieve these criteria. Quantitative analyses of phantom and clinical images were conducted. The ACR CT accreditation phantom, containing five regions of distinct CT attenuations (CT numbers of approximately -1000 HU, -80 HU, 0 HU, 130 HU and 900 HU) and a high-contrast spatial resolution test pattern, was scanned using CT systems from two manufacturers (General Electric (GE) Healthcare and Siemens Medical Solutions). Phantom images were reconstructed using all relevant reconstruction algorithms. Mean CT numbers and image noise (standard deviation) were measured and compared for the five materials. Clinical high-resolution chest CT images acquired on a GE CT system for a patient with diffuse lung disease were reconstructed using the BONE and STANDARD algorithms and evaluated by a thoracic radiologist in terms of image quality and disease extent. The clinical BONE images were processed with a 3 x 3 x 3 median filter to simulate a thicker slice reconstructed with smoother algorithms, which have traditionally been proven to provide an accurate estimation of emphysema extent in the lungs. Using a threshold technique, the volume of emphysema (defined as the percentage of lung voxels having a CT number lower than -950 HU) was computed for the STANDARD, BONE, and filtered BONE images. The CT numbers measured in the ACR CT phantom images were accurate for all reconstruction kernels for both manufacturers. As expected, visual evaluation of the …
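
    The threshold-based emphysema measure used here is simple to reproduce. A sketch, with the 3 x 3 x 3 median filtering mirroring the preprocessing applied to the BONE reconstructions (array shapes and mask handling are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def emphysema_index(ct_hu, lung_mask, threshold=-950):
        """Percentage of lung voxels below `threshold` HU (%LAA-950)."""
        smoothed = median_filter(ct_hu, size=3)      # 3x3x3 on a 3-D volume
        lung = smoothed[lung_mask]
        return 100.0 * np.count_nonzero(lung < threshold) / lung.size
    ```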

  13. New reconstruction algorithm in helical-volume CT

    International Nuclear Information System (INIS)

    Toki, Y.; Rifu, T.; Aradate, H.; Hirao, Y.; Ohyama, N.

    1990-01-01

This paper reports on helical scanning, an application of continuous-rotation CT used to acquire volume data in a short time for three-dimensional studies. In a helical scan, the patient couch moves during continuous-rotation scanning, and the acquired data are then processed to synthesize a projection data set of a vertical section by interpolation. However, the synthesized section is not thin enough, and the image may have artifacts caused by couch movement. A new reconstruction algorithm that helps resolve these problems has been developed and compared with the ordinary algorithm. The authors constructed a helical scan system based on the TCT-900S, which can perform 1-second rotations continuously for 30 seconds. They measured section thickness using both algorithms on an AAPM phantom, and also compared the degree of artifacts on clinical data.

  14. Performances of new reconstruction algorithms for CT-TDLAS (computer tomography-tunable diode laser absorption spectroscopy)

    International Nuclear Information System (INIS)

    Jeon, Min-Gyu; Deguchi, Yoshihiro; Kamimoto, Takahiro; Doh, Deog-Hee; Cho, Gyeong-Rae

    2017-01-01

Highlights: • The measured data were successfully used for generating absorption spectra. • Four different reconstruction algorithms, ART, MART, SART and SMART, were evaluated. • The calculation speed of convergence was fastest for the SMART algorithm. • SMART was the most reliable algorithm for reconstructing the multiple signals. - Abstract: The recent advent of tunable lasers has made it possible to measure temperature and concentration fields of gases simultaneously. CT-TDLAS (computed tomography-tunable diode laser absorption spectroscopy) is one of the leading techniques for measuring the temperature and concentration fields of gases. In CT-TDLAS, the accuracy of the measurement results depends strongly on the reconstruction algorithm. In this study, four different reconstruction algorithms were tested numerically using experimental data sets measured by thermocouples in combustion fields. Three reconstruction algorithms, the MART (multiplicative algebraic reconstruction technique), SART (simultaneous algebraic reconstruction technique) and SMART (simultaneous multiplicative algebraic reconstruction technique) algorithms, are newly proposed for CT-TDLAS in this study. The calculation results obtained by the three algorithms were compared with the previous algorithm, the ART (algebraic reconstruction technique) algorithm. Phantom data sets were generated from thermocouple data obtained in an actual experiment. Data from the HITRAN table, which lists the thermodynamic properties and absorption spectra of H2O, were used for the numerical test. The reconstructed temperature and concentration fields were compared with the original HITRAN data, through which the reconstruction methods were validated. The performances of the four reconstruction algorithms were demonstrated. This method is expected to enhance the practicality of CT-TDLAS.
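
    The additive/multiplicative distinction between these algebraic families is easy to state in code. A generic single-ray sketch (relaxation handling is an assumption; SART and SMART average such updates over all rays of a subset rather than applying them ray by ray):

    ```python
    import numpy as np

    def art_update(x, a, b_i, lam=1.0):
        """Additive ART: project x towards the hyperplane a.x = b_i."""
        return x + lam * (b_i - a @ x) * a / (a @ a)

    def mart_update(x, a, b_i, lam=1.0):
        """Multiplicative MART: scale (positive) voxels by the
        measured/modelled ratio, weighted by the ray coefficients."""
        ratio = b_i / max(a @ x, 1e-12)
        return x * ratio ** (lam * a / max(a.max(), 1e-12))
    ```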

  15. Automated selection of the optimal cardiac phase for single-beat coronary CT angiography reconstruction

    International Nuclear Information System (INIS)

    Stassi, D.; Ma, H.; Schmidt, T. G.; Dutta, S.; Soderman, A.; Pazzani, D.; Gros, E.; Okerlund, D.

    2016-01-01

Purpose: Reconstructing a low-motion cardiac phase is expected to improve coronary artery visualization in coronary computed tomography angiography (CCTA) exams. This study developed an automated algorithm for selecting the optimal cardiac phase for CCTA reconstruction. The algorithm uses prospectively gated, single-beat, multiphase data made possible by wide cone-beam imaging. The proposed algorithm differs from previous approaches, which included motion estimation and interphase processing, because the optimal phase is identified based on vessel image quality (IQ) directly. Because there is no processing of interphase information, the algorithm can be applied to any sampling of image phases, making it suited for prospectively gated studies where only a subset of phases is available. Methods: An automated algorithm was developed to select the optimal phase based on quantitative IQ metrics. For each reconstructed slice at each reconstructed phase, an image quality metric was calculated based on measures of circularity and edge strength of through-plane vessels. The image quality metric was aggregated across slices, while a metric of vessel-location consistency was used to ignore slices that did not contain through-plane vessels. The algorithm performance was evaluated using two observer studies. Fourteen single-beat cardiac CT exams (Revolution CT, GE Healthcare, Chalfont St. Giles, UK) reconstructed at 2% intervals were evaluated for best systolic (1), diastolic (6), or systolic and diastolic phases (7) by three readers and the algorithm. Pairwise inter-reader and reader-algorithm agreement was evaluated using the mean absolute difference (MAD) and concordance correlation coefficient (CCC) between the reader- and algorithm-selected phases. A reader-consensus best phase was determined and compared to the algorithm-selected phase. In cases where the algorithm and consensus best phases differed by more than 2%, IQ was scored by three …
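
    One of the per-slice metrics named above, circularity of a through-plane vessel cross-section, has the standard 4*pi*area/perimeter^2 form. A crude sketch (boundary-pixel counting as the perimeter estimate; wrap-around edges are ignored for brevity):

    ```python
    import numpy as np

    def circularity(mask):
        """~1.0 for a circular binary cross-section, smaller when motion
        smears the vessel; perimeter is approximated by boundary pixels."""
        area = np.count_nonzero(mask)
        interior = (mask
                    & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                    & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
        perimeter = max(area - np.count_nonzero(interior), 1)
        return 4.0 * np.pi * area / perimeter ** 2
    ```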

  16. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    Science.gov (United States)

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). Subjective and objective image quality was assessed. Averaged over all artefacts, artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) compared with WFBP (91.6 ± 81.6 HU, p …). iMAR-Algo2 and iMAR-Algo3 reconstructions decreased mild and moderate artefacts compared with WFBP and iMAR-Algo1 (p …). All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts; iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting iMAR algorithms to patients' metallic implants can help to improve image quality in CT.

  17. Comparative study between ultrahigh spatial frequency algorithm and high spatial frequency algorithm in high-resolution CT of the lungs

    International Nuclear Information System (INIS)

    Oh, Yu Whan; Kim, Jung Kyuk; Suh, Won Hyuck

    1994-01-01

To date, the high spatial frequency algorithm (HSFA), which reduces image smoothing and increases spatial resolution, has been used for the evaluation of parenchymal lung diseases in thin-section high-resolution CT. In this study, we compared the ultrahigh spatial frequency algorithm (UHSFA) with the high spatial frequency algorithm in the assessment of thin-section images of the lung parenchyma. Three radiologists compared UHSFA and HSFA on identical CT images of a line-pair resolution phantom, one lung specimen, two patients with normal lungs and 18 patients with abnormal lung parenchyma. Scanning of the line-pair resolution phantom demonstrated no difference in resolution between the two techniques, but showed that the outer lines of the line pairs at maximal resolution looked thicker with UHSFA than with HSFA. Lung parenchymal detail with UHSFA was judged equal or superior to HSFA in 95% of images. Lung parenchymal sharpness was improved with UHSFA in all images. Although UHSFA resulted in an increase in visible noise, observers did not find that image noise interfered with image interpretation. The visual CT attenuation of normal lung parenchyma was minimally increased in images with HSFA. The overall visual preference for the images reconstructed with UHSFA was judged equal to or greater than that for those reconstructed with HSFA in 78% of images. The ultrahigh spatial frequency algorithm improved the overall visual quality of the images in pulmonary parenchymal high-resolution CT.

  18. Correction of oral contrast artifacts in CT-based attenuation correction of PET images using an automated segmentation algorithm

    International Nuclear Information System (INIS)

    Ahmadian, Alireza; Ay, Mohammad R.; Sarkar, Saeed; Bidgoli, Javad H.; Zaidi, Habib

    2008-01-01

    Oral contrast is usually administered in most X-ray computed tomography (CT) examinations of the abdomen and the pelvis, as it allows more accurate identification of the bowel and facilitates the interpretation of abdominal and pelvic CT studies. However, the misclassification of contrast medium as high-density bone in CT-based attenuation correction (CTAC) is known to generate artifacts in the attenuation map (μmap), thus resulting in overcorrection for attenuation of positron emission tomography (PET) images. In this study, we developed an automated algorithm for segmentation and classification of regions containing oral contrast medium to correct for artifacts in CT-attenuation-corrected PET images using the segmented contrast correction (SCC) algorithm. The proposed algorithm consists of two steps: first, high-CT-number object segmentation using combined region- and boundary-based segmentation, and second, object classification into bone and contrast agent using a knowledge-based nonlinear fuzzy classifier. Thereafter, the CT numbers of pixels belonging to the region classified as contrast medium are substituted with their equivalent effective bone CT numbers using the SCC algorithm. The generated CT images are then down-sampled, followed by Gaussian smoothing to match the resolution of PET images. A piecewise calibration curve was then used to convert CT pixel values to linear attenuation coefficients at 511 keV. Visual assessment of segmented regions performed by an experienced radiologist confirmed the accuracy of the segmentation and classification algorithms for delineation of contrast-enhanced regions in clinical CT images. The quantitative analysis of generated μmaps of 21 clinical CT colonoscopy datasets showed an overestimation ranging between 24.4% and 37.3% in the 3D-classified regions, depending on their volume and the concentration of contrast medium. Two PET/CT studies known to be problematic demonstrated the applicability of the technique.
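
    The final step described above, converting CT numbers to 511 keV linear attenuation coefficients via a piecewise calibration curve, is commonly implemented as a bilinear mapping. A sketch under assumed, representative coefficients (not the paper's exact calibration):

    ```python
    import numpy as np

    MU_WATER_511 = 0.096  # water LAC at 511 keV, 1/cm (approximate)
    MU_BONE_511 = 0.172   # cortical-bone LAC at 511 keV, 1/cm (approximate)

    def hu_to_mu511(hu):
        """Piecewise (bilinear) conversion of CT numbers to 511 keV LACs.

        Below the breakpoint, voxels are treated as air/water mixtures;
        above it, as water/bone mixtures. Breakpoint and coefficients are
        representative values, not the study's exact calibration curve.
        """
        hu = np.asarray(hu, dtype=float)
        mu = np.where(
            hu <= 0,
            MU_WATER_511 * (hu + 1000.0) / 1000.0,
            MU_WATER_511 + hu * (MU_BONE_511 - MU_WATER_511) / 1000.0,
        )
        return np.clip(mu, 0.0, None)
    ```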

  19. Acute appendicitis: prospective evaluation of a diagnostic algorithm integrating ultrasound and low-dose CT to reduce the need of standard CT

    International Nuclear Information System (INIS)

    Poletti, Pierre-Alexandre; Platon, Alexandra; Perrot, Thomas de; Becker, Christoph D.; Sarasin, Francois; Rutschmann, Olivier; Andereggen, Elisabeth; Dupuis-Lozeron, Elise; Perneger, Thomas; Gervaz, Pascal

    2011-01-01

    To evaluate an algorithm integrating ultrasound and low-dose unenhanced CT with oral contrast medium (LDCT) in the assessment of acute appendicitis, in order to reduce the need for conventional CT. Ultrasound was performed upon admission in 183 consecutive adult patients (111 women, 72 men; mean age 32) with suspected acute appendicitis and a BMI between 18.5 and 30 (step 1). No further examination was recommended when ultrasound was positive for appendicitis, negative with low clinical suspicion, or demonstrated an alternative diagnosis. All other patients underwent LDCT (30 mAs) (step 2). Standard intravenously enhanced CT (180 mAs) was performed after indeterminate LDCT (step 3). No further imaging was recommended after ultrasound in 84 (46%) patients; LDCT was obtained in 99 (54%). LDCT was positive or negative for appendicitis in 81 (82%) of these 99 patients and indeterminate in 18 (18%), who underwent standard CT. Eighty-six (47%) of the 183 patients had surgically proven appendicitis. The sensitivity and specificity of the algorithm were 98.8% and 96.9%. The proposed algorithm achieved high sensitivity and specificity for detection of acute appendicitis, while reducing the need for standard CT and thus limiting exposure to radiation and to intravenous contrast media. (orig.)
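
    The three-step triage logic lends itself to a compact decision routine; the following sketch mirrors the published algorithm, with illustrative string codes for the imaging findings:

    ```python
    def appendicitis_workup(us_result, clinical_suspicion, ldct_result=None):
        """Stepwise imaging algorithm from the study, as decision logic.

        us_result          -- 'positive', 'negative', 'alternative' or
                              'indeterminate' (ultrasound finding)
        clinical_suspicion -- 'low' or 'high'
        ldct_result        -- low-dose CT result when step 2 is reached
        """
        # Step 1: ultrasound on admission
        if us_result in ('positive', 'alternative'):
            return 'stop: diagnosis made on ultrasound'
        if us_result == 'negative' and clinical_suspicion == 'low':
            return 'stop: appendicitis ruled out'
        # Step 2: low-dose unenhanced CT with oral contrast (30 mAs)
        if ldct_result in ('positive', 'negative'):
            return f'stop: LDCT {ldct_result}'
        # Step 3: standard IV-enhanced CT (180 mAs) after indeterminate LDCT
        return 'proceed to standard contrast-enhanced CT'
    ```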

  20. Acute appendicitis: prospective evaluation of a diagnostic algorithm integrating ultrasound and low-dose CT to reduce the need of standard CT

    Energy Technology Data Exchange (ETDEWEB)

    Poletti, Pierre-Alexandre; Platon, Alexandra [University Hospital of Geneva, Department of Radiology, Geneva (Switzerland); University Hospital of Geneva, Emergency Center, Geneva (Switzerland); Perrot, Thomas de; Becker, Christoph D. [University Hospital of Geneva, Department of Radiology, Geneva (Switzerland); Sarasin, Francois; Rutschmann, Olivier [University Hospital of Geneva, Emergency Center, Geneva (Switzerland); Andereggen, Elisabeth [University Hospital of Geneva, Emergency Center, Geneva (Switzerland); University Hospital of Geneva, Department of Surgery, Geneva (Switzerland); Dupuis-Lozeron, Elise; Perneger, Thomas [University Hospital of Geneva, Division of Clinical Epidemiology, Geneva (Switzerland); Gervaz, Pascal [University Hospital of Geneva, Department of Surgery, Geneva (Switzerland)

    2011-12-15

    To evaluate an algorithm integrating ultrasound and low-dose unenhanced CT with oral contrast medium (LDCT) in the assessment of acute appendicitis, in order to reduce the need for conventional CT. Ultrasound was performed upon admission in 183 consecutive adult patients (111 women, 72 men; mean age 32) with suspected acute appendicitis and a BMI between 18.5 and 30 (step 1). No further examination was recommended when ultrasound was positive for appendicitis, negative with low clinical suspicion, or demonstrated an alternative diagnosis. All other patients underwent LDCT (30 mAs) (step 2). Standard intravenously enhanced CT (180 mAs) was performed after indeterminate LDCT (step 3). No further imaging was recommended after ultrasound in 84 (46%) patients; LDCT was obtained in 99 (54%). LDCT was positive or negative for appendicitis in 81 (82%) of these 99 patients and indeterminate in 18 (18%), who underwent standard CT. Eighty-six (47%) of the 183 patients had surgically proven appendicitis. The sensitivity and specificity of the algorithm were 98.8% and 96.9%. The proposed algorithm achieved high sensitivity and specificity for detection of acute appendicitis, while reducing the need for standard CT and thus limiting exposure to radiation and to intravenous contrast media. (orig.)

  1. Frequency selective non-linear blending to improve image quality in liver CT

    International Nuclear Information System (INIS)

    Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M.; Fritz, J.

    2016-01-01

    To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study; the informed consent requirement was waived. CT exams of 25 patients (60% female; mean age 65 ± 16 years) with late-phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma for standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of the hepatic vasculature were optimal at a center of 115 HU, a delta of 25 HU, and a slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma was significantly increased for liver veins (CNR Standard: 1.62 ± 1.10; CNR NLB: 3.6 ± 2.94; p = 0.0002) and portal veins (CNR Standard: 1.31 ± 0.85; CNR NLB: 2.42 ± 3.03; p = 0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB: 11.26 ± 3.16; SNR Standard: 8.85 ± 2.27; p = 0.008). The overall image quality and depiction of the HV were significantly higher on post-processed images (depiction of HV: NLB 4 [3-4.75] vs. Standard 2 [1.3-2.5], p < 0.0001; image quality: NLB 4 [4-4] vs. Standard 2 [2-3], p < 0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma.
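
    The exact transfer function behind the center/delta/slope parameters is not spelled out in the abstract; one plausible reading, sketched below, applies a slope-controlled sigmoid remapping around the center HU to the low-frequency image component only, which is what would make the blending "frequency selective" (the parameter names follow the abstract, but the tanh form and the Gaussian split are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def nlb_sigmoid(image_hu, center=115.0, delta=25.0, slope=5.0, sigma=1.5):
        """Illustrative frequency-selective non-linear blending sketch."""
        low = gaussian_filter(image_hu.astype(float), sigma)
        high = image_hu - low                  # preserved detail/noise band
        # Sigmoid remap: HU near `center` are contrast-stretched; `delta`
        # sets the input window and `slope` the steepness at the center.
        remapped = center + delta * np.tanh(slope * (low - center) / delta)
        return remapped + high
    ```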

  2. Frequency selective non-linear blending to improve image quality in liver CT

    Energy Technology Data Exchange (ETDEWEB)

    Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M. [University Hospital of Tuebingen (Germany). Dept. of Diagnostic and Interventional Radiology; Fritz, J. [Johns Hopkins University School of Medicine, Baltimore, MD (United States). Russell H. Morgan Dept. of Radiology and Radiological Science

    2016-12-15

    To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study; the informed consent requirement was waived. CT exams of 25 patients (60% female; mean age 65 ± 16 years) with late-phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma for standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of the hepatic vasculature were optimal at a center of 115 HU, a delta of 25 HU, and a slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma was significantly increased for liver veins (CNR Standard: 1.62 ± 1.10; CNR NLB: 3.6 ± 2.94; p = 0.0002) and portal veins (CNR Standard: 1.31 ± 0.85; CNR NLB: 2.42 ± 3.03; p = 0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB: 11.26 ± 3.16; SNR Standard: 8.85 ± 2.27; p = 0.008). The overall image quality and depiction of the HV were significantly higher on post-processed images (depiction of HV: NLB 4 [3-4.75] vs. Standard 2 [1.3-2.5], p < 0.0001; image quality: NLB 4 [4-4] vs. Standard 2 [2-3], p < 0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma.

  3. Superiorized algorithm for reconstruction of CT images from sparse-view and limited-angle polyenergetic data

    Science.gov (United States)

    Humphries, T.; Winn, J.; Faridani, A.

    2017-08-01

    Recent work in CT image reconstruction has seen increasing interest in the use of total variation (TV) and related penalties to regularize problems involving reconstruction from undersampled or incomplete data. Superiorization is a recently proposed heuristic which provides an automatic procedure to ‘superiorize’ an iterative image reconstruction algorithm with respect to a chosen objective function, such as TV. Under certain conditions, the superiorized algorithm is guaranteed to find a solution that is as satisfactory as any found by the original algorithm with respect to satisfying the constraints of the problem; this solution is also expected to be superior with respect to the chosen objective. Most work on superiorization has used reconstruction algorithms which assume a linear measurement model, which in the case of CT corresponds to data generated from a monoenergetic x-ray beam. Many CT systems generate x-rays from a polyenergetic spectrum, however, in which the measured data represent an integral of object attenuation over all energies in the spectrum. This inconsistency with the linear model produces the well-known beam hardening artifacts, which impair analysis of CT images. In this work we superiorize an iterative algorithm for reconstruction from polyenergetic data, using both TV and an anisotropic TV (ATV) penalty. We apply the superiorized algorithm in numerical phantom experiments modeling both sparse-view and limited-angle scenarios. In our experiments, the superiorized algorithm successfully finds solutions which are as constraints-compatible as those found by the original algorithm, with significantly reduced TV and ATV values. The superiorized algorithm thus produces images with greatly reduced sparse-view and limited angle artifacts, which are also largely free of the beam hardening artifacts that would be present if a superiorized version of a monoenergetic algorithm were used.
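
    The superiorization idea itself is compact: between iterations of the basic (constraints-seeking) algorithm, take small, summable steps down the gradient of the chosen objective such as TV. A generic sketch, not the authors' code:

    ```python
    import numpy as np

    def tv_gradient(x, eps=1e-8):
        """Gradient of a smoothed isotropic TV of a 2D image
        (forward differences; a rough discretization, adequate here)."""
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        div_x = np.diff(dx / mag, axis=0, prepend=(dx / mag)[:1, :])
        div_y = np.diff(dy / mag, axis=1, prepend=(dy / mag)[:1, :])
        return -(div_x + div_y)

    def superiorize(x, basic_step, n_iter=50, gamma=0.99):
        """Skeleton of a superiorized iteration (illustrative).

        basic_step -- one iteration of the underlying reconstruction
                      algorithm, e.g. an ART/SART sweep: x = basic_step(x)
        """
        beta = 1.0
        for _ in range(n_iter):
            g = tv_gradient(x)
            norm = np.linalg.norm(g)
            if norm > 0:
                x = x - beta * g / norm    # objective-reducing perturbation
            beta *= gamma                  # summable step sizes
            x = basic_step(x)              # restore constraint compatibility
        return x
    ```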

  4. A fast iterative soft-thresholding algorithm for few-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng; Mou, Xuanqin; Zhang, Yanbo [Jiaotong Univ., Xi'an (China). Inst. of Image Processing and Pattern Recognition

    2011-07-01

    Iterative soft-thresholding algorithms with total variation regularization can produce high-quality reconstructions from few views and even in the presence of noise. However, these algorithms are known to converge quite slowly, with a theoretically proven global convergence rate of O(1/k), where k is the iteration number. In this paper, we present a fast iterative soft-thresholding algorithm for few-view fan-beam CT reconstruction with a global convergence rate of O(1/k²), which is significantly faster than the basic iterative soft-thresholding algorithm. Simulation results demonstrate the superior performance of the proposed algorithm in terms of convergence speed and reconstruction quality. (orig.)
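
    The O(1/k²) rate comes from Nesterov-style momentum added to plain soft-thresholding, as in FISTA. A generic sketch of that acceleration (the CT system matrix enters only through grad_f):

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista(grad_f, L, penalty_weight, x0, n_iter=100):
        """Generic accelerated soft-thresholding (illustrative).

        grad_f         -- gradient of the smooth data-fidelity term
        L              -- Lipschitz constant of grad_f (step size 1/L)
        penalty_weight -- weight of the sparsity penalty
        """
        x = x0.copy()
        y = x0.copy()
        t = 1.0
        for _ in range(n_iter):
            x_new = soft_threshold(y - grad_f(y) / L, penalty_weight / L)
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            # Momentum extrapolation: this step lifts the O(1/k) rate of
            # plain soft-thresholding (ISTA) to O(1/k^2)
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)
            x, t = x_new, t_new
        return x
    ```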

  5. Spiral-CT-angiography of acute pulmonary embolism: factors that influence the implementation into standard diagnostic algorithms

    International Nuclear Information System (INIS)

    Bankier, A.; Herold, C.J.; Fleischmann, D.; Janata-Schwatczek, K.

    1998-01-01

    Purpose: Debate about the potential implementation of Spiral-CT in diagnostic algorithms for pulmonary embolism is often focused on sensitivity and specificity in the context of comparative methodologic studies. We intend to investigate whether additional factors might influence this debate. Results: The factors availability, acceptance, patient outcome, and cost-effectiveness studies have a substantial influence on the implementation of Spiral-CT in diagnostic algorithms for pulmonary embolism. Incorporating these factors into the discussion might lead to more flexible and more patient-oriented algorithms for the diagnosis of pulmonary embolism. Conclusion: Availability of equipment, acceptance among clinicians, patient outcome, and cost-effectiveness evaluations should be included in the debate about the potential implementation of Spiral-CT in routine diagnostic imaging algorithms for pulmonary embolism. (orig./AJ) [de]

  6. A combination-weighted Feldkamp-based reconstruction algorithm for cone-beam CT

    International Nuclear Information System (INIS)

    Mori, Shinichiro; Endo, Masahiro; Komatsu, Shuhei; Kandatsu, Susumu; Yashiro, Tomoyasu; Baba, Masayuki

    2006-01-01

    The combination-weighted Feldkamp algorithm (CW-FDK) was developed and tested in a phantom in order to reduce cone-beam artefacts and enhance cranio-caudal reconstruction coverage in an attempt to improve image quality when utilizing cone-beam computed tomography (CBCT). Using a 256-slice cone-beam CT (256CBCT), image quality (CT-number uniformity and geometrical accuracy) was quantitatively evaluated in phantom and clinical studies, and the results were compared to those obtained with the original Feldkamp algorithm. A clinical study was done in lung cancer patients under breath holding and free breathing. Image quality for the original Feldkamp algorithm is degraded at the edge of the scan region due to the missing volume, commensurate with the cranio-caudal distance between the reconstruction and central planes. The CW-FDK extended the reconstruction coverage to equal the scan coverage and improved reconstruction accuracy, unaffected by the cranio-caudal distance. The extended reconstruction coverage with good image quality provided by the CW-FDK will be clinically investigated for improving diagnostic and radiotherapy applications. In addition, this algorithm can also be adapted for use in relatively wide cone-angle CBCT such as with a flat-panel detector CBCT

  7. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    The adrenal tumor disturbs the secreting function of adrenocortical cells, leading to many diseases. Different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type relies heavily on the doctor's experience in reading hundreds of CT images. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification. It consists of automatic segmentation algorithms, feature extraction and classification algorithms. These algorithms were integrated into a system and operated through a graphical interface built with the MATLAB graphical user interface (GUI) tools. The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer-aided analysis system.
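
    The abstract does not name the specific features or classifier, so the following stand-in pipeline is purely illustrative of the segmentation-features-classification structure (scikit-learn used for brevity; all names are hypothetical):

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def region_features(region_hu):
        """Toy feature vector for one segmented adrenal region:
        intensity statistics plus a crude size measure."""
        return np.array([region_hu.mean(), region_hu.std(),
                         region_hu.min(), region_hu.max(), region_hu.size])

    # X = np.stack([region_features(r) for r in segmented_regions])
    # clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    # clf.fit(X, tumor_type_labels)       # labels from expert readings
    # predictions = clf.predict(X_new)
    ```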

  8. MO-E-17A-05: Individualized Patient Dosimetry in CT Using the Patient Dose (PATDOSE) Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez, A; Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2014-06-15

    Purpose: Radiation dose to the patient undergoing a CT examination has been the focus of many recent studies. While CTDIvol- and SSDE-based methods are important tools for patient dose management, the CT image data themselves provide important information with respect to CT dose and its distribution. Coupled with the known geometry and output factors (kV, mAs, pitch, etc.) of the CT scanner, the CT dataset can be used directly for computing absorbed dose. Methods: The HU numbers in a patient's CT data set can be converted to linear attenuation coefficients (LACs) with some assumptions. With this (PAT-DOSE) method, which is not Monte Carlo-based, the primary and scatter dose are computed separately. The primary dose is computed directly from the geometry of the scanner, the x-ray spectrum, and the known patient LACs. Once the primary dose has been computed for all voxels in the patient, the scatter dose algorithm redistributes a fraction of the absorbed primary dose (based on the HU number of each source voxel); the method invokes tissue attenuation and absorption as well as solid-angle geometry. The scatter dose algorithm can be run N times to include Nth-scatter redistribution. PAT-DOSE was deployed using simple PMMA phantoms to validate its performance against Monte Carlo-derived dose distributions. Results: Comparison between PAT-DOSE and MCNPX primary dose distributions showed excellent agreement for several scan lengths. The 1st-scatter dose distributions showed relatively higher-amplitude, long-range scatter tails for the PAT-DOSE algorithm than for the MCNPX simulations. Conclusion: The PAT-DOSE algorithm provides a fast, deterministic assessment of the 3D dose distribution in CT, making use of the scanner geometry and the patient image data set. The preliminary implementation of the algorithm produces accurate primary dose distributions; achieving agreement in the scatter distribution is more challenging, and handling the polyenergetic x-ray spectrum and spatially varying scatter remains to be addressed.

  9. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    Science.gov (United States)

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from low-dose and few-view datasets in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is the l0-norm, which cannot guarantee the global convergence of the algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into the SIR framework for low-dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms the CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on the balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion study and then using numerical simulations based on a sheep lung CT image and a chest image. Both visual assessment and quantitative comparison in terms of root mean square error (RMSE) and the structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded performance similar to the l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
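
    The alternation described above can be sketched generically: with the dictionary fixed, the sparse coding subproblem is an l1-regularized least-squares solve (here by ISTA), and the image update folds the sparse approximation back into a data-fidelity step. Names and the update structure below are illustrative, not the authors' exact scheme:

    ```python
    import numpy as np

    def sparse_code_l1(patches, dictionary, lam, n_iter=20):
        """Sparse coding subproblem: l1-regularized least squares via ISTA."""
        DtD = dictionary.T @ dictionary
        L = np.linalg.norm(DtD, 2)                 # Lipschitz constant
        alpha = np.zeros((dictionary.shape[1], patches.shape[1]))
        for _ in range(n_iter):
            grad = DtD @ alpha - dictionary.T @ patches
            z = alpha - grad / L
            alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        return alpha

    def alternating_minimization(image, extract, assemble, dictionary, lam,
                                 fidelity_update, n_outer=10):
        """Outer loop alternating the two subproblems (illustrative).

        extract/assemble     -- image <-> patch-matrix conversions
        fidelity_update      -- SIR data-fidelity step for the image
        """
        for _ in range(n_outer):
            alpha = sparse_code_l1(extract(image), dictionary, lam)
            denoised = assemble(dictionary @ alpha)   # patch-wise estimate
            image = fidelity_update(image, denoised)  # image update step
        return image
    ```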

  10. Coronary CT angiography-derived fractional flow reserve correlated with invasive fractional flow reserve measurements - initial experience with a novel physician-driven algorithm

    International Nuclear Information System (INIS)

    Baumann, Stefan; Wang, Rui; Schoepf, U.J.; Steinberg, Daniel H.; Spearman, James V.; Bayer, Richard R.; Hamm, Christian W.; Renker, Matthias

    2015-01-01

    The present study aimed to determine the feasibility of a novel fractional flow reserve (FFR) algorithm based on coronary CT angiography (cCTA) that permits point-of-care assessment, without data transfer to core laboratories, for the evaluation of potentially ischemia-causing stenoses. To obtain CT-based FFR, anatomical coronary information and ventricular mass extracted from cCTA datasets were integrated with haemodynamic parameters. CT-based FFR was assessed for 36 coronary artery stenoses in 28 patients in a blinded fashion and compared to catheter-based FFR. Haemodynamically relevant stenoses were defined by an invasive FFR ≤0.80. Time was measured for the processing of each cCTA dataset and CT-based FFR computation. Assessment of cCTA image quality was performed using a 5-point scale. Mean total time for CT-based FFR determination was 51.9 ± 9.0 min. Per-vessel analysis for the identification of lesion-specific myocardial ischemia demonstrated good correlation (Pearson's product-moment r = 0.74, p < 0.0001) between the prototype CT-based FFR algorithm and invasive FFR. Subjective image quality analysis resulted in a median score of 4 (interquartile range, 3-4). Our initial data suggest that the CT-based FFR method for the detection of haemodynamically significant stenoses, evaluated in the selected population, correlates well with invasive FFR and renders time-efficient point-of-care assessment possible. (orig.)

  11. A new approximate algorithm for image reconstruction in cone-beam spiral CT at small cone-angles

    International Nuclear Information System (INIS)

    Schaller, S.; Flohr, T.; Steffen, P.

    1996-01-01

    This paper presents a new approximate algorithm for image reconstruction with cone-beam spiral CT data at relatively small cone angles. Based on the algorithm of Wang et al., our method combines a special complementary interpolation with filtered backprojection. The presented algorithm has three main advantages over Wang's algorithm: (1) it overcomes the pitch limitation of Wang's algorithm; (2) it significantly improves z-resolution when suitable sampling schemes are applied; (3) it avoids the waste of applied radiation dose inherent in Wang's algorithm. Usage of the total applied dose is an important requirement in medical imaging. Our method has been implemented on a standard workstation. Reconstructions of computer-simulated data of different phantoms, assuming sampling conditions and image quality requirements typical of medical CT, show encouraging results.

  12. Automatic Algorithm Selection for Complex Simulation Problems

    CERN Document Server

    Ewald, Roland

    2012-01-01

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and the runtime environment, which may strongly affect the overall performance. An automated selection of simulation algorithms supports users in setting up simulation experiments without demanding expert knowledge of simulation. Roland Ewald analyzes and discusses existing approaches to solving the algorithm selection problem in the context of simulation, and introduces a framework for automatic simulation algorithm selection.

  13. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection.

    Science.gov (United States)

    Zhuang, Xiahai; Bai, Wenjia; Song, Jingjing; Zhan, Songhua; Qian, Xiaohua; Shi, Wenzhe; Lian, Yanyun; Rueckert, Daniel

    2015-07-01

    Cardiac computed tomography (CT) is widely used in the clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and the size of the atlas database on segmentation performance. Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion according to the authors' proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. The MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve with higher WHS Dice scores compared to the conventional ranking schemes. With the proposed ranking algorithm and joint label fusion, the MAS scheme is able to generate accurate segmentations.
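
    The ranking criterion can be written down directly: estimate H(target intensity | propagated atlas label) from a joint histogram and prefer atlases with lower conditional entropy. A minimal numpy sketch (binning choices are illustrative):

    ```python
    import numpy as np

    def conditional_entropy(target, atlas_labels, n_bins=64):
        """Estimate H(target intensity | propagated atlas label).

        Lower conditional entropy = the atlas labeling explains the
        target image better, so the atlas ranks higher.
        """
        edges = np.histogram_bin_edges(target, n_bins)
        t = np.digitize(target.ravel(), edges[1:-1])      # 0 .. n_bins-1
        l = atlas_labels.ravel().astype(int)
        joint = np.zeros((n_bins, l.max() + 1))
        np.add.at(joint, (t, l), 1.0)
        p_joint = joint / joint.sum()
        p_label = p_joint.sum(axis=0, keepdims=True)
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = p_joint / np.where(p_label > 0, p_label, 1.0)
            h = -np.nansum(p_joint * np.log(ratio))
        return h

    # ranking = sorted(range(len(atlases)),
    #                  key=lambda i: conditional_entropy(target_img, atlases[i]))
    ```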

  14. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yuan, E-mail: yuan.lin@duke.edu; Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Duke University Medical Center, 2424 Erwin Road, Suite 302, Durham, North Carolina 27705 (United States)

    2014-02-15

    Purpose: In quantitative myocardial CT perfusion imaging, the beam hardening effect due to dense bone and highly concentrated iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic Simultaneous Algebraic Reconstruction Technique (pSART) is presented to eliminate the beam hardening artifacts and to improve quantitative CT imaging. Methods: Our algorithm makes three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., non-bone regions and non-iodine regions; and (3) each voxel can be decomposed into a mixture of the two most suitable base materials according to its attenuation value and its corresponding region type. Based on the above assumptions, energy-independent accumulated effective lengths of all base materials can be computed quickly in the forward ray-tracing process and used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backprojection process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources, and it further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energies. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was performed on a clinical CT scanner to further quantitatively validate the proposed algorithm. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced.
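
    The core trick, accumulating energy-independent effective lengths per base material and reusing them to evaluate polyenergetic projections, can be sketched in a few lines (the array layout is illustrative, not the paper's implementation):

    ```python
    import numpy as np

    def polyenergetic_projection(eff_lengths, mu, spectrum):
        """Polyenergetic projections from accumulated effective lengths.

        eff_lengths -- (n_rays, n_materials) energy-independent effective
                       intersection lengths, accumulated during ray tracing
        mu          -- (n_energies, n_materials) attenuation coefficients
                       of the base materials at each spectrum energy
        spectrum    -- (n_energies,) normalized photon-weight spectrum
        """
        line_integrals = eff_lengths @ mu.T             # (n_rays, n_energies)
        intensity = np.exp(-line_integrals) @ spectrum  # detected fraction
        return -np.log(intensity)                       # polyenergetic sinogram
    ```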

  15. An efficient polyenergetic SART (pSART) reconstruction algorithm for quantitative myocardial CT perfusion

    International Nuclear Information System (INIS)

    Lin, Yuan; Samei, Ehsan

    2014-01-01

    Purpose: In quantitative myocardial CT perfusion imaging, the beam hardening effect due to dense bone and highly concentrated iodinated contrast agent can result in visible artifacts and inaccurate CT numbers. In this paper, an efficient polyenergetic Simultaneous Algebraic Reconstruction Technique (pSART) is presented to eliminate the beam hardening artifacts and to improve quantitative CT imaging. Methods: Our algorithm makes three a priori assumptions: (1) the human body is composed of several base materials (e.g., fat, breast, soft tissue, bone, and iodine); (2) images can be coarsely segmented into two types of regions, i.e., non-bone regions and non-iodine regions; and (3) each voxel can be decomposed into a mixture of the two most suitable base materials according to its attenuation value and its corresponding region type. Based on the above assumptions, energy-independent accumulated effective lengths of all base materials can be computed quickly in the forward ray-tracing process and used repeatedly to obtain accurate polyenergetic projections, with which a SART-based equation can correctly update each voxel in the backprojection process to iteratively reconstruct artifact-free images. This approach effectively reduces the influence of polyenergetic x-ray sources, and it further enables monoenergetic images to be reconstructed at any arbitrarily preselected target energies. A series of simulation tests were performed on a size-variable cylindrical phantom and a realistic anthropomorphic thorax phantom. In addition, a phantom experiment was performed on a clinical CT scanner to further quantitatively validate the proposed algorithm. Results: The simulations with the cylindrical phantom and the anthropomorphic thorax phantom showed that the proposed algorithm completely eliminated beam hardening artifacts and enabled quantitative imaging across different materials, phantom sizes, and spectra, as the absolute relative errors were reduced.

  16. Image processing algorithm of computer-aided diagnosis in lung cancer screening by CT

    International Nuclear Information System (INIS)

    Yamamoto, Shinji

    2004-01-01

    In this paper, an image processing algorithm for computer-aided diagnosis of lung cancer by X-ray CT is described, which has been developed by my research group over the past 10 years or so. CT lung images gathered at the mass screening stage are almost all normal, and lung cancer nodules are found at a rate of less than 10%. To pick up such very rare nodules with high accuracy, a very sensitive detection algorithm is required, one that can detect local and very slight variations in the image. On the other hand, such a sensitive detection algorithm has the adverse effect that many normal shadows are detected as abnormal. In this paper I describe how to reconcile these conflicting requirements and realize a practical computer-aided diagnosis tool with the image processing algorithm developed by my research group. In particular, I focus on the principle and characteristics of the Quoit filter, which was newly developed by my group as a highly sensitive filter. (author)
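
    Published descriptions of the Quoit filter differ in detail; the sketch below captures only the general ring-versus-centre idea, in which an isolated round shadow yields a high centre response relative to its surrounding ring while elongated vessels do not:

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def ring_centre_response(image, r_inner=4, r_outer=7):
        """Ring-versus-centre contrast in the spirit of the Quoit filter
        (the published definition differs in detail)."""
        y, x = np.ogrid[-r_outer:r_outer + 1, -r_outer:r_outer + 1]
        d2 = x * x + y * y
        centre = d2 <= r_inner ** 2                        # central disk
        ring = (d2 <= r_outer ** 2) & (d2 > r_inner ** 2)  # surrounding ring
        return (maximum_filter(image, footprint=centre)
                - maximum_filter(image, footprint=ring))
    ```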

  17. SU-F-J-88: Comparison of Two Deformable Image Registration Algorithms for CT-To-CT Contour Propagation

    Energy Technology Data Exchange (ETDEWEB)

    Gopal, A; Xu, H; Chen, S [University of Maryland School of Medicine, Columbia, MD (United States)

    2016-06-15

    Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system: the “Hybrid” algorithm, based on image intensities and anatomical information, and the “Biomechanical” algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients who underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (0 < RQS < 1). Results: Both algorithms showed similar ICE (< 1.5 mm), but the hybrid DIR (TRE = 3.2 mm) performed better than the biomechanical DIR (TRE = 4.3 mm) with landmark tracking. Both algorithms had comparable DSC (DSC > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
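
    TRE and ICE are simple to state: TRE is the residual distance of deformed landmarks from their targets, and ICE measures how far a point drifts after a forward-then-inverse round trip. A minimal sketch:

    ```python
    import numpy as np

    def tre(deformed_landmarks, target_landmarks):
        """Target registration error: mean distance (mm) between
        corresponding landmarks after applying the deformation."""
        d = np.linalg.norm(deformed_landmarks - target_landmarks, axis=1)
        return d.mean()

    def ice(points, forward, inverse):
        """Inverse consistency error: map points forward then back and
        measure how far they land from where they started (mm).

        forward/inverse -- callables mapping (N, 3) point arrays
                           between the two images
        """
        round_trip = inverse(forward(points))
        return np.linalg.norm(round_trip - points, axis=1).mean()
    ```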

  18. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    Energy Technology Data Exchange (ETDEWEB)

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J [Lehigh Valley Health Network, Allentown, PA (United States)

    2014-06-01

    Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs, using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning.
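
    The pixel-by-pixel tolerance comparison reported above reduces to counting pixels whose HU difference stays within a band; a minimal sketch:

    ```python
    import numpy as np

    def within_tolerance_pct(img_a, img_b, tol_hu):
        """Percentage of pixels whose HU difference is within +/- tol_hu."""
        diff = np.abs(img_a.astype(float) - img_b.astype(float))
        return 100.0 * np.count_nonzero(diff <= tol_hu) / diff.size

    # e.g. within_tolerance_pct(standard_img, mar_img, 35.0)  # ~95.0 here
    #      within_tolerance_pct(standard_img, mar_img, 85.0)  # ~98.0 here
    ```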

  19. SU-E-J-218: Evaluation of CT Images Created Using a New Metal Artifact Reduction Reconstruction Algorithm for Radiation Therapy Treatment Planning

    International Nuclear Information System (INIS)

    Niemkiewicz, J; Palmiotti, A; Miner, M; Stunja, L; Bergene, J

    2014-01-01

    Purpose: Metal in patients creates streak artifacts in CT images. When used for radiation treatment planning, these artifacts make it difficult to identify internal structures and affect radiation dose calculations, which depend on HU numbers for inhomogeneity correction. This work quantitatively evaluates a new metal artifact reduction (MAR) CT image reconstruction algorithm (GE Healthcare CT-0521-04.13-EN-US DOC1381483) when metal is present. Methods: A Gammex Model 467 Tissue Characterization phantom was used. CT images were taken of this phantom on a GE Optima580RT CT scanner with and without steel and titanium plugs, using both the standard and MAR reconstruction algorithms. HU values were compared pixel by pixel to determine if the MAR algorithm altered the HUs of normal tissues when no metal is present, and to evaluate the effect of using the MAR algorithm when metal is present. Also, CT images of patients with internal metal objects reconstructed using the standard and MAR algorithms were compared. Results: Comparing the standard and MAR reconstructed images of the phantom without metal, 95.0% of pixels were within ±35 HU and 98.0% of pixels were within ±85 HU. The MAR reconstruction algorithm also showed significant improvement in maintaining the HUs of non-metallic regions in images taken of the phantom with metal. HU gamma analysis (2%, 2 mm) of metal vs. non-metal phantom imaging using standard reconstruction resulted in an 84.8% pass rate, compared to 96.6% for the MAR reconstructed images. CT images of patients with metal show significant artifact reduction when reconstructed with the MAR algorithm. Conclusion: CT imaging using the MAR reconstruction algorithm provides improved visualization of internal anatomy and more accurate HUs when metal is present compared to the standard reconstruction algorithm. MAR reconstructed CT images provide qualitative and quantitative improvements over current reconstruction algorithms, thus improving radiation treatment planning.

  20. Improved adaptive genetic algorithm with sparsity constraint applied to thermal neutron CT reconstruction of two-phase flow

    Science.gov (United States)

    Yan, Mingfei; Hu, Huasi; Otake, Yoshie; Taketani, Atsushi; Wakabayashi, Yasuo; Yanagimachi, Shinzo; Wang, Sheng; Pan, Ziheng; Hu, Guang

    2018-05-01

    Thermal neutron computed tomography (CT) is a useful tool for visualizing two-phase flow due to its high imaging contrast and the strong penetrability of neutrons through tube walls constructed of metallic material. A novel approach for two-phase flow CT reconstruction based on an improved adaptive genetic algorithm with sparsity constraint (IAGA-SC) is proposed in this paper. In the algorithm, a neighborhood mutation operator is used to ensure the continuity of the reconstructed object. The adaptive crossover probability P_c and mutation probability P_m are improved to help the adaptive genetic algorithm (AGA) achieve the global optimum. The reconstructed results for projection data obtained from Monte Carlo simulation indicate that the comprehensive performance of the IAGA-SC algorithm exceeds that of the adaptive steepest descent-projection onto convex sets (ASD-POCS) algorithm in restoring typical and complex flow regimes. It shows particular advantages in restoring simply connected flow regimes and the shape of objects. In addition, a CT experiment with two-phase flow phantoms was conducted on an accelerator-driven neutron source to verify the performance of the developed IAGA-SC algorithm.
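
    The abstract does not give the exact adaptation rule for P_c and P_m; a classic fitness-dependent scheme in the same spirit (Srinivas-Patnaik style, offered here only as an assumption) looks like this:

    ```python
    def adaptive_probabilities(f, f_max, f_avg, k1=1.0, k2=0.5, k3=1.0, k4=0.5):
        """Fitness-dependent crossover (Pc) and mutation (Pm) probabilities;
        the paper's exact adaptation rule may differ.

        f     -- fitness of the individual (the fitter parent, for crossover)
        f_max -- best fitness in the current population
        f_avg -- mean fitness of the current population
        """
        spread = max(f_max - f_avg, 1e-12)
        if f >= f_avg:             # above-average individuals: disturb less
            pc = k1 * (f_max - f) / spread
            pm = k2 * (f_max - f) / spread
        else:                      # below-average individuals: keep exploring
            pc, pm = k3, k4
        return pc, pm
    ```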

  1. Evaluation of the OSC-TV iterative reconstruction algorithm for cone-beam optical CT.

    Science.gov (United States)

    Matenine, Dmitri; Mascolo-Fortin, Julia; Goussard, Yves; Després, Philippe

    2015-11-01

    The present work evaluates an iterative reconstruction approach, namely, the ordered subsets convex (OSC) algorithm with regularization via total variation (TV) minimization, in the field of cone-beam optical computed tomography (optical CT). One of the uses of optical CT is gel-based 3D dosimetry for radiation therapy, where it is employed to map dose distributions in radiosensitive gels. Model-based iterative reconstruction may improve optical CT image quality and contribute to a wider use of optical CT in clinical gel dosimetry. This algorithm was evaluated using experimental data acquired by a cone-beam optical CT system, as well as complementary numerical simulations. A fast GPU implementation of OSC-TV was used to achieve reconstruction times comparable to those of conventional filtered backprojection. Images obtained via OSC-TV were compared with the corresponding filtered backprojections. Spatial resolution and uniformity phantoms were scanned and the respective reconstructions were subject to evaluation of the modulation transfer function, image uniformity, and accuracy. The artifacts due to refraction and total signal loss from opaque objects were also studied. The cone-beam optical CT data reconstructions showed that OSC-TV outperforms filtered backprojection in terms of image quality, thanks to a model-based simulation of the photon attenuation process. It was shown to significantly improve image spatial resolution and reduce image noise. The accuracy of the estimation of linear attenuation coefficients remained similar to that obtained via filtered backprojection. Certain image artifacts due to opaque objects were reduced; nevertheless, the common artifact due to the gel container walls could not be eliminated. The use of iterative reconstruction improves cone-beam optical CT image quality in many ways, and the comparisons between OSC-TV and filtered backprojection presented in this paper demonstrate that OSC-TV can potentially improve the rendering of optical CT images.

  2. CT liver volumetry using geodesic active contour segmentation with a level-set algorithm

    Science.gov (United States)

    Suzuki, Kenji; Epstein, Mark L.; Kohlbrenner, Ryan; Obajuluwa, Ademola; Xu, Jianwu; Hori, Masatoshi; Baron, Richard

    2010-03-01

    Automatic liver segmentation on CT images is challenging because the liver often abuts other organs of similar density. Our purpose was to develop an accurate automated liver segmentation scheme for measuring liver volumes. We developed an automated volumetry scheme for the liver in CT based on a five-step schema. First, an anisotropic smoothing filter was applied to portal-venous-phase CT images to remove noise while preserving the liver structure, followed by an edge enhancer to enhance the liver boundary. By using the boundary-enhanced image as a speed function, a fast-marching algorithm generated an initial surface that roughly estimated the liver shape. A geodesic-active-contour segmentation algorithm coupled with level-set contour evolution refined the initial surface so as to more precisely fit the liver boundary. The liver volume was calculated based on the refined liver surface. Hepatic CT scans of eighteen prospective liver donors were obtained under a liver transplant protocol with a multi-detector CT system. Automated liver volumes obtained were compared with those manually traced by a radiologist, used as the "gold standard." The mean liver volume obtained with our scheme was 1,520 cc, whereas the mean manual volume was 1,486 cc, with a mean absolute difference of 104 cc (7.0%). CT liver volumetrics based on the automated scheme agreed excellently with gold-standard manual volumetrics (intra-class correlation coefficient 0.95), with no statistically significant difference (p(F ≤ f) = 0.32), and required substantially less completion time. Our automated scheme provides an efficient and accurate way of measuring liver volumes.
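
    Once the refined surface is voxelized, the volume computation itself is trivial: count voxels inside the mask and scale by the voxel volume. A minimal sketch:

    ```python
    import numpy as np

    def mask_volume_cc(mask, spacing_mm):
        """Volume of a binary segmentation in cubic centimetres.

        mask       -- boolean 3D array (True inside the liver surface)
        spacing_mm -- (dz, dy, dx) voxel spacing in millimetres
        """
        voxel_mm3 = float(np.prod(spacing_mm))
        return np.count_nonzero(mask) * voxel_mm3 / 1000.0  # 1 cc = 1000 mm^3
    ```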

  3. Chronic thromboembolic pulmonary hypertension: diagnostic impact of multislice-CT and selective pulmonary-DSA

    International Nuclear Information System (INIS)

    Pitton, M.B.; Kemmerich, G.; Herber, S.; Schweden, F.; Thelen, M.; Mayer, E.

    2002-01-01

    Purpose: To evaluate the diagnostic impact of multislice CT and selective pulmonary DSA in chronic thromboembolic pulmonary hypertension (CTEPH). Methods: 994 vessel segments in 14 consecutive patients with CTEPH were investigated with multislice CT (slice thickness 3 mm, collimation 2.5 mm, reconstruction interval 2 mm) and selective pulmonary DSA in posterior-anterior, 45° oblique, and lateral projections. Analysis was performed independently by 2 investigators for CT and DSA. Diagnostic criteria were occlusions and non-occlusive changes such as webs and bands, irregularities of the vessel wall, diameter reduction, and thromboembolic deposits at different levels from the central pulmonary arteries to the subsegmental arteries. The reference diagnosis was made by synopsis of CT and DSA by consensus. Results: Concerning patency, CT and DSA showed concordant findings in 88.9% overall, 92.9% for segmental arteries and 85.4% for subsegmental arteries. Concerning any thromboembolic changes, multislice CT was significantly inferior to selective DSA (concordance 67.0% overall, 70.4% for segments and 63.6% for subsegments). Non-occlusive changes of the vessels were significantly underdiagnosed by CT (concordance of CT versus DSA: 23.1%). Conclusion: Multislice CT and selective pulmonary DSA are equivalent for the diagnosis of vessel occlusions at the level of segmental and subsegmental arteries. However, for visualization of non-occlusive thromboembolic changes of the vessel wall, selective pulmonary DSA remains superior to multislice CT. Multislice CT and selective pulmonary DSA are complementary tools for the diagnosis and treatment planning of chronic thromboembolic pulmonary hypertension (CTEPH). (orig.) [de]

  4. SU-F-J-88: Comparison of Two Deformable Image Registration Algorithms for CT-To-CT Contour Propagation

    International Nuclear Information System (INIS)

    Gopal, A; Xu, H; Chen, S

    2016-01-01

    Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system: the “Hybrid” algorithm, based on image intensities and anatomical information, and the “Biomechanical” algorithm, based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients who underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (0 < RQS < 1). Results: Both algorithms showed similar ICE (< 1.5 mm), but the hybrid DIR (TRE = 3.2 mm) performed better than the biomechanical DIR (TRE = 4.3 mm) with landmark tracking. Both algorithms had comparable DSC (DSC > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE, while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.

  5. Impact of Reconstruction Algorithms on CT Radiomic Features of Pulmonary Tumors: Analysis of Intra- and Inter-Reader Variability and Inter-Reconstruction Algorithm Variability.

    Science.gov (United States)

    Kim, Hyungjin; Park, Chang Min; Lee, Myunghee; Park, Sang Joon; Song, Yong Sub; Lee, Jong Hyuk; Hwang, Eui Jin; Goo, Jin Mo

    2016-01-01

    To identify the impact of reconstruction algorithms on CT radiomic features of pulmonary tumors and to reveal and compare the intra-reader, inter-reader, and inter-reconstruction-algorithm variability of each feature. Forty-two patients (M:F = 19:23; mean age, 60.43 ± 10.56 years) with 42 pulmonary tumors (22.56 ± 8.51 mm) underwent contrast-enhanced CT scans, which were reconstructed with filtered back projection and a commercial iterative reconstruction algorithm (levels 3 and 5). Two readers independently segmented the whole tumor volume. Fifteen radiomic features were extracted and compared among reconstruction algorithms. Intra- and inter-reader variability and inter-reconstruction-algorithm variability were calculated using coefficients of variation (CVs) and then compared. Among the 15 features, 5 first-order tumor intensity features and 4 gray level co-occurrence matrix (GLCM)-based features showed significant differences among the reconstruction algorithms. As for the variability, effective diameter, sphericity, entropy, and GLCM entropy were the most robust features (CV ≤ 5%). Inter-reader variability was larger than intra-reader or inter-reconstruction-algorithm variability for 9 features. However, for entropy, homogeneity, and 4 GLCM-based features, inter-reconstruction-algorithm variability was significantly greater than inter-reader variability. Radiomic features are thus affected by the choice of reconstruction algorithm, and inter-reconstruction-algorithm variability was greater than inter-reader variability for entropy, homogeneity, and GLCM-based features.
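
    The robustness grading rests on the coefficient of variation of repeated measurements of a feature; a minimal sketch of the metric:

    ```python
    import numpy as np

    def coefficient_of_variation(values):
        """CV in percent for repeated measurements of one radiomic feature;
        features with CV <= 5% were graded 'most robust' in the study."""
        values = np.asarray(values, dtype=float)
        return 100.0 * values.std(ddof=1) / values.mean()

    # Per feature and per tumor, `values` might hold the two readers'
    # measurements (inter-reader CV) or the measurements across the
    # reconstruction algorithms (inter-reconstruction-algorithm CV).
    ```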

  6. Ultra-low dose CT attenuation correction for PET/CT: analysis of sparse view data acquisition and reconstruction algorithms

    Science.gov (United States)

    Rui, Xue; Cheng, Lishui; Long, Yong; Fu, Lin; Alessio, Adam M.; Asma, Evren; Kinahan, Paul E.; De Man, Bruno

    2015-01-01

    For PET/CT systems, PET image reconstruction requires corresponding CT images for anatomical localization and attenuation correction. In the case of PET respiratory gating, multiple gated CT scans can offer phase-matched attenuation and motion correction, at the expense of increased radiation dose. We aim to minimize the dose of the CT scan, while preserving adequate image quality for the purpose of PET attenuation correction, by introducing sparse view CT data acquisition. Methods: We investigated sparse view CT acquisition protocols resulting in ultra-low dose CT scans designed for PET attenuation correction. We analyzed the tradeoffs between the number of views and the integrated tube current per view for a given dose using CT and PET simulations of a 3D NCAT phantom with lesions inserted into liver and lung. We simulated seven CT acquisition protocols with {984, 328, 123, 41, 24, 12, 8} views per rotation at a gantry speed of 0.35 seconds. One standard dose and four ultra-low dose levels, namely, 0.35 mAs, 0.175 mAs, 0.0875 mAs, and 0.04375 mAs, were investigated. Both the analytical FDK algorithm and the Model Based Iterative Reconstruction (MBIR) algorithm were used for CT image reconstruction. We also evaluated the impact of sinogram interpolation to estimate the missing projection measurements due to sparse view data acquisition. For MBIR, we used a penalized weighted least squares (PWLS) cost function with an approximate total-variation (TV) regularizing penalty function. We compared a tube pulsing mode and a continuous exposure mode for sparse view data acquisition. Global PET ensemble root-mean-squares-error (RMSE) and local ensemble lesion activity error were used as quantitative evaluation metrics for PET image quality. Results: With sparse view sampling, it is possible to greatly reduce the CT scan dose when it is primarily used for PET attenuation correction, with little or no measurable effect on the PET image.
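
    The PET-side figure of merit, global ensemble RMSE, is a one-liner over a stack of noise realizations; a minimal sketch:

    ```python
    import numpy as np

    def ensemble_rmse(recons, reference):
        """Global ensemble root-mean-square error over noise realizations.

        recons    -- (n_realizations, ...) stack of PET reconstructions
        reference -- noise-free reference image matching recons[0].shape
        """
        err = recons - reference[None, ...]
        return np.sqrt(np.mean(err ** 2))
    ```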

  7. Investigating the generalisation of an atlas-based synthetic-CT algorithm to another centre and MR scanner for prostate MR-only radiotherapy

    Science.gov (United States)

    Wyatt, Jonathan J.; Dowling, Jason A.; Kelly, Charles G.; McKenna, Jill; Johnstone, Emily; Speight, Richard; Henry, Ann; Greer, Peter B.; McCallum, Hazel M.

    2017-12-01

    There is increasing interest in MR-only radiotherapy planning since it provides superb soft-tissue contrast without the registration uncertainties inherent in a CT-MR registration. However, MR images cannot readily provide the electron density information necessary for radiotherapy dose calculation. An algorithm which generates synthetic CTs for dose calculations from MR images of the prostate using an atlas of 3 T MR images has been previously reported by two of the authors. This paper aimed to evaluate this algorithm using MR data acquired at a different field strength and a different centre to the algorithm atlas. Twenty-one prostate patients received planning 1.5 T MR and CT scans with routine immobilisation devices on a flat-top couch set-up using external lasers. The MR receive coils were supported by a coil bridge. Synthetic CTs were generated from the planning MR images with (sCT1V) and without (sCT) a one-voxel body contour expansion included in the algorithm. This was to test whether this expansion was required for 1.5 T images. Both synthetic CTs were rigidly registered to the planning CT (pCT). A 6 MV volumetric modulated arc therapy plan was created on the pCT and recalculated on the sCT and sCT1V. The synthetic CTs' dose distributions were compared to the dose distribution calculated on the pCT. The percentage dose difference at the isocentre was ΔD_sCT = (0.9 ± 0.8)% without the body contour expansion (sCT-pCT) and ΔD_sCT1V = (-0.7 ± 0.7)% with it (sCT1V-pCT) (mean ± one standard deviation). The sCT1V result was within one standard deviation of zero and agreed with the result reported previously using 3 T MR data; the sCT dose difference only agreed within two standard deviations. The mean ± one standard deviation gamma pass rate was Γ_sCT = (96.1 ± 2.9)% for the sCT and Γ_sCT1V = (98.8 ± 0.5)% for the sCT1V (with 2% global dose difference and 2 mm distance-to-agreement gamma criteria). The one-voxel body contour expansion therefore appears to be required at 1.5 T.

  8. Neural Network Blind Equalization Algorithm Applied in Medical CT Image Restoration

    Directory of Open Access Journals (Sweden)

    Yunshan Sun

    2013-01-01

    A new algorithm for iterative blind image restoration is presented in this paper. The method extends blind equalization from the one-dimensional signal case to images. A neural network blind equalization algorithm is derived and used in conjunction with zigzag coding to restore the original image. As a result, the effect of the PSF can be removed by using the proposed algorithm, which helps to eliminate intersymbol interference (ISI). To obtain an estimate of the original image, the method optimizes a constant modulus blind equalization cost function applied to the grayscale CT image using the conjugate gradient method. Analysis of the convergence performance of the algorithm verifies the feasibility of this method theoretically; meanwhile, simulation results and performance evaluations with recent image quality metrics are provided to assess the effectiveness of the proposed method.
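
    Zigzag coding turns the 2D image into a 1D sequence so that one-dimensional blind equalization can be applied; a generic JPEG-style zigzag traversal (illustrative, the paper's exact scan may differ):

    ```python
    import numpy as np

    def zigzag_order(n):
        """JPEG-style zigzag traversal order for an n x n image."""
        return sorted(
            ((i, j) for i in range(n) for j in range(n)),
            key=lambda ij: (ij[0] + ij[1],
                            ij[0] if (ij[0] + ij[1]) % 2 else -ij[0]),
        )

    def zigzag_scan(image):
        """Flatten a square image into a 1D sequence along the zigzag path."""
        idx = zigzag_order(image.shape[0])
        rows = [i for i, _ in idx]
        cols = [j for _, j in idx]
        return image[rows, cols]

    # img = np.arange(16).reshape(4, 4)
    # seq = zigzag_scan(img)   # 1D input sequence for the blind equalizer
    ```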

  9. A backprojection-filtration algorithm for nonstandard spiral cone-beam CT with an n-PI-window

    International Nuclear Information System (INIS)

    Yu Hengyong; Ye Yangbo; Zhao Shiying; Wang Ge

    2005-01-01

    For applications in bolus-chasing computed tomography (CT) angiography and electron-beam micro-CT, the backprojection-filtration (BPF) formula developed by Zou and Pan was recently generalized by Ye et al to reconstruct images from cone-beam data collected along a rather flexible scanning locus, including a nonstandard spiral. A major implication of the generalized BPF formula is that it can be applied for n-PI-window-based reconstruction in the nonstandard spiral scanning case. In this paper, we design an n-PI-window-based BPF algorithm and report numerical simulation results with the 3D Shepp-Logan phantom and the Defrise disk phantom. The proposed BPF algorithm consists of three steps: cone-beam data differentiation, weighted backprojection, and inverse Hilbert filtration. Our simulated results demonstrate the feasibility and merits of the proposed algorithm.
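
    Schematically, and suppressing the exact weighting (which depends on the scanning locus and the n-PI window), the three steps amount to

        g'(\lambda, \hat{\theta}) = \frac{\partial}{\partial \lambda}\, g(\lambda, \hat{\theta})\,\Big|_{\hat{\theta}\ \mathrm{fixed}}, \qquad b(\mathbf{x}) = \int \frac{g'\big(\lambda, \hat{\theta}(\mathbf{x}, \lambda)\big)}{\|\mathbf{x} - \mathbf{a}(\lambda)\|}\, \mathrm{d}\lambda, \qquad f = \mathcal{H}^{-1}_{\mathrm{finite}}\, b

    where g is the cone-beam data, a(λ) the source position along the spiral, b the weighted backprojection of the differentiated data, and the finite inverse Hilbert transform is applied along PI-line (chord) segments to recover the image f.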

  10. Application of 2 mm thin-slice scanning with bone algorithm on conventional CT in diagnosis of the pulmonary diseases

    International Nuclear Information System (INIS)

    Zhang Xianheng; Li Xiuhua; Wang Fenghua

    2004-01-01

    Objective: To evaluate the value of 2 mm thin-slice conventional CT scanning with a bone algorithm in the diagnosis and differential diagnosis of pulmonary diseases. Methods: A total of 135 cases of pulmonary disease were routinely scanned by conventional CT at 10 mm per slice with a standard algorithm; a 2 mm thin-slice scan with a bone algorithm was then performed over the region of interest in the lungs. Results: In a comparative study of the CT signs, the 2 mm thin-slice scan with the bone algorithm was better than the 10 mm slice scan with the standard algorithm at displaying the pulmonary axial interstitium, intralobular septa, subpleural lines, honeycombing, 2-5 mm nodules, and anomalies of the bronchial wall. Conclusion: In this study of 135 cases, the 2 mm thin-slice scan with a bone algorithm was superior to the 10 mm slice scan with a standard algorithm in demonstrating pulmonary lesions. It has a value similar to that of high-resolution spiral CT in the diagnosis of solitary or diffuse pulmonary nodules, diffuse pulmonary interstitial lesions, and lesions of the airway. It is practical and advisable in the community hospital setting.

  11. A faster ordered-subset convex algorithm for iterative reconstruction in a rotation-free micro-CT system

    International Nuclear Information System (INIS)

    Quan, E; Lalush, D S

    2009-01-01

    We present a faster iterative reconstruction algorithm based on the ordered-subset convex (OSC) algorithm for transmission CT. The OSC algorithm was modified such that it calculates the normalization term before the iterative process in order to save computational cost. The modified version requires only one backprojection per iteration, as compared to the two required by the original OSC. We applied the modified OSC (MOSC) algorithm to a rotation-free micro-CT system that we proposed previously, observed its performance, and compared it with the OSC algorithm for 3D cone-beam reconstruction. Measurements on the reconstructed images as well as the point spread functions show that MOSC is quite similar to OSC; in the noise-resolution trade-off, MOSC is comparable with OSC in a regular-noise situation and slightly worse than OSC in an extremely high-noise situation. The timing record shows that MOSC saves 25-30% of CPU time, depending on the number of iterations used. We conclude that the MOSC algorithm is more efficient than OSC and provides comparable images.
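
    The key economy — computing the normalization backprojections once, before iterating — can be sketched as follows. This is a schematic ordered-subsets transmission loop with placeholder projector callbacks, not a faithful reproduction of the OSC update in the paper:

        import numpy as np

        def mosc_like(y, subsets, forward_project, back_project, mu0, n_iters=10):
            """Schematic ordered-subsets transmission loop illustrating the
            MOSC bookkeeping change: the normalization backprojections are
            computed once, up front, rather than inside every sub-iteration.
            forward_project(mu, views) and back_project(sino, views) are
            placeholder projector callbacks; y[s] holds measured counts for
            subset s, normalized by the blank scan."""
            mu = mu0.copy()
            # Precomputed normalization (the MOSC modification): one
            # backprojection per subset, done before iterating.
            norms = [back_project(np.ones_like(y[s]), s) for s in subsets]
            for _ in range(n_iters):
                for k, s in enumerate(subsets):
                    expected = np.exp(-forward_project(mu, s))   # predicted counts
                    grad = back_project(expected - y[s], s)      # mismatch backprojection
                    mu = np.maximum(mu + grad / np.maximum(norms[k], 1e-12), 0.0)
            return mu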

  12. SU-E-I-82: Improving CT Image Quality for Radiation Therapy Using Iterative Reconstruction Algorithms and Slightly Increasing Imaging Doses

    International Nuclear Information System (INIS)

    Noid, G; Chen, G; Tai, A; Li, X

    2014-01-01

    Purpose: Iterative reconstruction (IR) algorithms are developed to improve CT image quality (IQ) by reducing noise without diminishing spatial resolution or contrast. For CT in radiation therapy (RT), slightly increasing imaging dose to improve IQ may be justified if it can substantially enhance structure delineation. The purpose of this study is to investigate and to quantify the IQ enhancement resulting from increased imaging doses and the use of IR algorithms. Methods: CT images were acquired for phantoms, built to evaluate IQ metrics including spatial resolution, contrast and noise, with a variety of imaging protocols using a CT scanner (Definition AS Open, Siemens) installed inside a Linac room. Representative patients were scanned once the protocols were optimized. Both phantom and patient scans were reconstructed using the Sinogram Affirmed Iterative Reconstruction (SAFIRE) and the Filtered Back Projection (FBP) methods. IQ metrics of the obtained CTs were compared. Results: IR techniques are demonstrated to preserve spatial resolution as measured by the point spread function and to reduce noise in comparison to traditional FBP. Driven by the reduction in noise, the contrast-to-noise ratio is doubled by adopting the highest SAFIRE strength. As expected, increasing imaging dose reduces noise for both SAFIRE and FBP reconstructions. The contrast-to-noise ratio increases from 3 to 5 when the dose is increased by a factor of 4. Similar IQ improvement was observed on the CTs for selected patients with pancreas and prostate cancers. Conclusion: The IR techniques produce a measurable enhancement to CT IQ by reducing the noise. Increasing imaging dose further reduces noise independent of the IR techniques. The improved CT enables more accurate delineation of tumors and/or organs at risk during RT planning and delivery guidance.

  13. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning

    International Nuclear Information System (INIS)

    Tang Xiangyang; Hsieh Jiang; Nilsen, Roy A; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-01-01

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays but an unfavourable weight to the ray with the larger cone angle out of the conjugate ray pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry that can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploring the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angle and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated by computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4 deg.) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK
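
    The record states the weighting principle (favour the member of a conjugate ray pair with the smaller cone angle) without giving its functional form; a generic normalized weight with that behaviour, shown purely for illustration, is

        w(\beta_a) = \frac{\tan^{2} \beta_b}{\tan^{2} \beta_a + \tan^{2} \beta_b}, \qquad w(\beta_a) + w(\beta_b) = 1

    where β_a and β_b are the cone angles of the two conjugate rays: as β_a shrinks relative to β_b, its weight approaches one. The published 3D weighting additionally depends on the view and fan angles.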

  14. A Fast Algorithm of Cartographic Sounding Selection

    Institute of Scientific and Technical Information of China (English)

    SUI Haigang; HUA Li; ZHAO Haitao; ZHANG Yongli

    2005-01-01

    An effective strategy and framework that integrates the automated and manual processes for fast cartographic sounding selection is presented. Important submarine topographic features are extracted for the selection of important soundings, and an improved "influence circle" algorithm is introduced for sounding selection. For automatic configuration of the sounding distribution pattern, a special algorithm considering multiple factors is employed. A semi-automatic method for resolving ambiguous conflicts is described. On the basis of these algorithms and strategies, a system named HGIS for fast cartographic sounding selection was developed and applied at the Chinese Marine Safety Administration Bureau (CMSAB). The application experiments show that the system is effective and reliable. Finally, some conclusions and directions for future work are given.

  15. Application of a kernel-based online learning algorithm to the classification of nodule candidates in computer-aided detection of CT lung nodules

    International Nuclear Information System (INIS)

    Matsumoto, S.; Ohno, Y.; Takenaka, D.; Sugimura, K.; Yamagata, H.

    2007-01-01

    Classification of the nodule candidates in computer-aided detection (CAD) of lung nodules in CT images was addressed by constructing a nonlinear discriminant function using a kernel-based learning algorithm called the kernel recursive least-squares (KRLS) algorithm. Using the nodule candidates derived from the processing by a CAD scheme of 100 CT datasets containing 253 non-calcified nodules of 3 mm or larger, as determined by the consensus of two thoracic radiologists, the following trial was carried out 100 times: by randomly selecting 50 datasets for training, a nonlinear discriminant function was obtained using the nodule candidates in the training datasets and tested with the remaining candidates; for comparison, a rule-based classification was tested in a similar manner. At about five false positives per case, the nonlinear classification method showed an improved sensitivity of 80% (mean over the 100 trials) compared with 74% for the rule-based method. (orig.)

  16. Automatic spectral imaging protocol selection and iterative reconstruction in abdominal CT with reduced contrast agent dose: initial experience.

    Science.gov (United States)

    Lv, Peijie; Liu, Jie; Chai, Yaru; Yan, Xiaopeng; Gao, Jianbo; Dong, Junqiang

    2017-01-01

    To evaluate the feasibility, image quality, and radiation dose of automatic spectral imaging protocol selection (ASIS) and adaptive statistical iterative reconstruction (ASIR) with reduced contrast agent dose in abdominal multiphase CT. One hundred and sixty patients were randomly divided into two scan protocols (n = 80 each): protocol A, 120 kVp/450 mgI/kg with the filtered back projection (FBP) algorithm; protocol B, spectral CT imaging with ASIS and 40 to 70 keV monochromatic images generated at 300 mgI/kg with the ASIR algorithm. Quantitative parameters (image noise and contrast-to-noise ratios [CNRs]) and qualitative visual parameters (image noise, small structures, organ enhancement, and overall image quality) were compared. Monochromatic images at 50 keV and 60 keV provided similar or lower image noise, but higher contrast and overall image quality, compared with 120-kVp images. Despite the higher image noise, 40-keV images showed overall image quality similar to 120-kVp images. Radiation dose did not differ between the two protocols, while contrast agent dose in protocol B was reduced by 33%. Application of ASIR and ASIS to monochromatic imaging from 40 to 60 keV allowed contrast agent dose reduction with adequate image quality and without increasing radiation dose compared to 120 kVp with FBP. • Automatic spectral imaging protocol selection provides appropriate scan protocols. • Abdominal CT is feasible using spectral imaging and 300 mgI/kg contrast agent. • 50-keV monochromatic images with 50% ASIR provide optimal image quality.

  17. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging.

    Science.gov (United States)

    Yan, Hao; Zhen, Xin; Folkerts, Michael; Li, Yongbao; Pan, Tinsu; Cervino, Laura; Jiang, Steve B; Jia, Xun

    2014-07-01

    4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3-0.5 mm are observed for patients 1-3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1-1.5 min per phase. High-quality 4D-CBCT imaging based

  18. A hybrid reconstruction algorithm for fast and accurate 4D cone-beam CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Hao; Folkerts, Michael; Jiang, Steve B., E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu; Jia, Xun, E-mail: xun.jia@utsouthwestern.edu, E-mail: steve.jiang@UTSouthwestern.edu [Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 (United States); Zhen, Xin [Department of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515 (China); Li, Yongbao [Department of Radiation Oncology, The University of Texas, Southwestern Medical Center, Dallas, Texas 75390 and Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Pan, Tinsu [Department of Imaging Physics, The University of Texas, MD Anderson Cancer Center, Houston, Texas 77030 (United States); Cervino, Laura [Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92093 (United States)

    2014-07-15

    Purpose: 4D cone beam CT (4D-CBCT) has been utilized in radiation therapy to provide 4D image guidance in the lung and upper abdomen. However, clinical application of 4D-CBCT is currently limited due to the long scan time and low image quality. The purpose of this paper is to develop a new 4D-CBCT reconstruction method that restores volumetric images based on the 1-min scan data acquired with a standard 3D-CBCT protocol. Methods: The model optimizes a deformation vector field that deforms a patient-specific planning CT (p-CT), so that the calculated 4D-CBCT projections match measurements. A forward-backward splitting (FBS) method is invented to solve the optimization problem. It splits the original problem into two well-studied subproblems, i.e., image reconstruction and deformable image registration. By iteratively solving the two subproblems, FBS gradually yields correct deformation information, while maintaining high image quality. The whole workflow is implemented on a graphics processing unit to improve efficiency. Comprehensive evaluations have been conducted on a moving phantom and three real patient cases regarding the accuracy and quality of the reconstructed images, as well as the algorithm's robustness and efficiency. Results: The proposed algorithm reconstructs 4D-CBCT images from highly under-sampled projection data acquired with 1-min scans. Regarding anatomical structure location accuracy, an average difference of 0.204 mm and a maximum difference of 0.484 mm are found for the phantom case, and maximum differences of 0.3–0.5 mm are observed for patients 1–3. As for image quality, intensity errors below 5 and 20 HU compared to the planning CT are achieved for the phantom and the patient cases, respectively. Signal-to-noise ratio values are improved by factors of 12.74 and 5.12 compared to results from the FDK algorithm using the 1-min data and 4-min data, respectively. The computation time of the algorithm on a NVIDIA GTX590 card is 1–1.5 min per phase
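
    The forward-backward splitting idea above alternates between a reconstruction-type update and a registration-type update. A minimal sketch of that alternation, with deform, recon_update, and register_update as placeholders for the paper's actual operators, might look like:

        import numpy as np

        def fbs_4dcbct(p_ct, projections, deform, recon_update, register_update,
                       n_outer=20):
            """Schematic forward-backward splitting loop. p_ct: planning CT;
            projections[k]: measured CBCT projections of respiratory phase k;
            deform(vol, dvf), recon_update(image, projs), and
            register_update(p_ct, image, dvf) are placeholders for the paper's
            warping, reconstruction, and registration operators."""
            dvfs = [np.zeros(p_ct.shape + (3,)) for _ in projections]
            for _ in range(n_outer):
                for k, projs in enumerate(projections):
                    # Forward step: reconstruction subproblem -- update the
                    # phase image so its computed projections match the data.
                    image_k = recon_update(deform(p_ct, dvfs[k]), projs)
                    # Backward step: registration subproblem -- refine the DVF
                    # so the warped planning CT matches the updated image.
                    dvfs[k] = register_update(p_ct, image_k, dvfs[k])
            return [deform(p_ct, d) for d in dvfs]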

  19. Early evaluation of coronary artery bypass grafts: CT or selective angiography

    International Nuclear Information System (INIS)

    Wilson, P.C.; Gutierrez, O.; Moss, A.

    1984-01-01

    A prospective study was performed in 27 patients to compare the value of computed tomography and selective angiography in assessing coronary artery graft patency in the early post-operative period. The sensitivity of CT for graft patency was 85%, with no falsely patent determinations. Dynamic CT was not found useful in predicting graft stenosis. There were no complications associated with the CT studies, and two complications related to selective angiography. It is concluded that CT is the procedure of choice for graft evaluation in the early post-operative period, but that angiography is mandatory for the assessment of late symptom recurrence. A review is made of the results described in previous series. (orig.)

  20. Reducing 4D CT artifacts using optimized sorting based on anatomic similarity.

    Science.gov (United States)

    Johnston, Eric; Diehn, Maximilian; Murphy, James D; Loo, Billy W; Maxim, Peter G

    2011-05-01

    Four-dimensional (4D) computed tomography (CT) has been widely used as a tool to characterize respiratory motion in radiotherapy. The two most commonly used 4D CT algorithms sort images by the associated respiratory phase or displacement into a predefined number of bins, and are prone to image artifacts at transitions between bed positions. The purpose of this work is to demonstrate a method of reducing motion artifacts in 4D CT by incorporating anatomic similarity into phase- or displacement-based sorting protocols. Ten patient datasets were retrospectively sorted using both the displacement- and phase-based sorting algorithms. Conventional sorting methods allow selection of only the nearest-neighbor image in time or displacement within each bin. In our method, for each bed position either the displacement or the phase defines the center of a bin range about which several candidate images are selected. The two-dimensional correlation coefficients between slices bordering the interface between adjacent couch positions are then calculated for all candidate pairings. Two slices have a high correlation if they are anatomically similar. Candidates from each bin are then selected to maximize the slice correlation over the entire dataset using Dijkstra's shortest-path algorithm. To assess the reduction of artifacts, two thoracic radiation oncologists independently compared the resorted 4D datasets pairwise with conventionally sorted datasets, blinded to the sorting method, to choose which had the least motion artifacts. Agreement between reviewers was evaluated using the weighted kappa score. Anatomically based image selection resulted in 4D CT datasets with significantly reduced motion artifacts for both displacement (P = 0.0063) and phase sorting (P = 0.00022). There was good agreement between the two reviewers, with complete agreement 34 times and complete disagreement 6 times. Optimized sorting using anatomic similarity significantly reduces 4D CT motion
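
    A compact way to picture the optimization: each bin at each couch position offers several candidate image stacks; edge weights between adjacent couch positions come from the 2D correlation of their abutting slices, and a best path through the couch positions is extracted. The sketch below uses dynamic programming on this layered graph (equivalent to a shortest-path search) and is an illustration, not the authors' implementation:

        import numpy as np

        def border_corr(img_a, img_b):
            """2D correlation coefficient between the last slice of one couch
            position and the first slice of the next (higher = more similar)."""
            a, b = img_a[-1].ravel(), img_b[0].ravel()
            return np.corrcoef(a, b)[0, 1]

        def sort_one_bin(candidates):
            """candidates[p] = list of 3D candidate stacks for couch position p.
            Returns one candidate index per position maximizing the summed
            correlation across all couch-position interfaces."""
            n = len(candidates)
            score = [np.zeros(len(c)) for c in candidates]   # best total ending here
            back = [np.zeros(len(c), dtype=int) for c in candidates]
            for p in range(1, n):
                for i, cand in enumerate(candidates[p]):
                    corrs = [score[p - 1][j] + border_corr(prev, cand)
                             for j, prev in enumerate(candidates[p - 1])]
                    back[p][i] = int(np.argmax(corrs))
                    score[p][i] = max(corrs)
            # Trace back the best path through the layered graph.
            path = [int(np.argmax(score[-1]))]
            for p in range(n - 1, 0, -1):
                path.append(int(back[p][path[-1]]))
            return path[::-1]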

  1. Enhanced temporal resolution at cardiac CT with a novel CT image reconstruction algorithm: Initial patient experience

    Energy Technology Data Exchange (ETDEWEB)

    Apfaltrer, Paul, E-mail: paul.apfaltrer@medma.uni-heidelberg.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Institute of Clinical Radiology and Nuclear Medicine, Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, D-68167 Mannheim (Germany); Schoendube, Harald, E-mail: harald.schoendube@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Schoepf, U. Joseph, E-mail: schoepf@musc.edu [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Allmendinger, Thomas, E-mail: thomas.allmendinger@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Tricarico, Francesco, E-mail: francescotricarico82@gmail.com [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Department of Bioimaging and Radiological Sciences, Catholic University of the Sacred Heart, “A. Gemelli” Hospital, Largo A. Gemelli 8, Rome (Italy); Schindler, Andreas, E-mail: andreas.schindler@campus.lmu.de [Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, PO Box 250322, 169 Ashley Avenue, Charleston, SC 29425 (United States); Vogt, Sebastian, E-mail: sebastian.vogt@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); Sunnegårdh, Johan, E-mail: johan.sunnegardh@siemens.com [Siemens Healthcare, CT Division, Forchheim Siemens, Siemensstr. 1, 91301 Forchheim (Germany); and others

    2013-02-15

    Objective: To evaluate the effect of a temporal resolution improvement method (TRIM) for cardiac CT on diagnostic image quality for coronary artery assessment. Materials and methods: The TRIM-algorithm employs an iterative approach to reconstruct images from less than 180° of projections and uses a histogram constraint to prevent the occurrence of limited-angle artifacts. This algorithm was applied in 11 obese patients (7 men, 67.2 ± 9.8 years) who had undergone second generation dual-source cardiac CT with 120 kV, 175–426 mAs, and 500 ms gantry rotation. All data were reconstructed with a temporal resolution of 250 ms using traditional filtered-back projection (FBP) and of 200 ms using the TRIM-algorithm. Contrast attenuation and contrast-to-noise-ratio (CNR) were measured in the ascending aorta. The presence and severity of coronary motion artifacts was rated on a 4-point Likert scale. Results: All scans were considered of diagnostic quality. Mean BMI was 36 ± 3.6 kg/m². Average heart rate was 60 ± 9 bpm. Mean effective dose was 13.5 ± 4.6 mSv. When comparing FBP- and TRIM reconstructed series, the attenuation within the ascending aorta (392 ± 70.7 vs. 396.8 ± 70.1 HU, p > 0.05) and CNR (13.2 ± 3.2 vs. 11.7 ± 3.1, p > 0.05) were not significantly different. A total of 110 coronary segments were evaluated. All studies were deemed diagnostic; however, there was a significant (p < 0.05) difference in the severity score distribution of coronary motion artifacts between FBP (median = 2.5) and TRIM (median = 2.0) reconstructions. Conclusion: The algorithm evaluated here delivers diagnostic imaging quality of the coronary arteries despite 500 ms gantry rotation. Possible applications include improvement of cardiac imaging on slower gantry rotation systems or mitigation of the trade-off between temporal resolution and CNR in obese patients.

  2. Clinical Applications of a CT Window Blending Algorithm: RADIO (Relative Attenuation-Dependent Image Overlay).

    Science.gov (United States)

    Mandell, Jacob C; Khurana, Bharti; Folio, Les R; Hyun, Hyewon; Smith, Stacy E; Dunne, Ruth M; Andriole, Katherine P

    2017-06-01

    A methodology is described using Adobe Photoshop and Adobe ExtendScript to process DICOM images with a Relative Attenuation-Dependent Image Overlay (RADIO) algorithm to visualize the full dynamic range of CT in one view, without requiring a change in window and level settings. The potential clinical uses for such an algorithm are described in a pictorial overview, including applications in emergency radiology, oncologic imaging, and nuclear medicine and molecular imaging.
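
    The record gives no algorithmic detail beyond blending attenuation-dependent renderings, so the following is a generic illustration of the idea rather than the published RADIO pipeline: three standard window/level renderings (hypothetical lung, soft-tissue, and bone settings) are composited with weights that depend on each voxel's HU value.

        import numpy as np

        def window(hu, center, width):
            """Standard CT window/level mapping to [0, 1]."""
            lo, hi = center - width / 2.0, center + width / 2.0
            return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

        def radio_like_blend(hu):
            """Blend lung, soft-tissue, and bone windows into one image whose
            local windowing follows the local attenuation. Window settings and
            the smooth HU-dependent weights are illustrative choices only."""
            renders = [window(hu, -600, 1500),   # lung-type window
                       window(hu, 40, 400),      # soft-tissue-type window
                       window(hu, 500, 2000)]    # bone-type window
            centers = np.array([-600.0, 40.0, 500.0])
            # Soft assignment of each voxel to the window nearest its HU value.
            d = -np.abs(hu[..., None] - centers) / 300.0
            w = np.exp(d - d.max(axis=-1, keepdims=True))
            w /= w.sum(axis=-1, keepdims=True)
            return sum(w[..., k] * renders[k] for k in range(3))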

  3. Naive Bayes-Guided Bat Algorithm for Feature Selection

    Directory of Open Access Journals (Sweden)

    Ahmed Majid Taha

    2013-01-01

    Full Text Available With the amount of data and information said to double every 20 months or so, feature selection has become highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, and signal processing. A bio-inspired method, the Bat Algorithm hybridized with a Naive Bayes classifier (BANB), is presented in this work. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. The discussion focuses on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, hence removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved to be more stable than the other methods and capable of producing more general feature subsets.

  4. Naive Bayes-Guided Bat Algorithm for Feature Selection

    Science.gov (United States)

    Taha, Ahmed Majid; Mustapha, Aida; Chen, Soong-Der

    2013-01-01

    With the amount of data and information said to double every 20 months or so, feature selection has become highly important and beneficial. Further improvements in feature selection will positively affect a wide array of applications in fields such as pattern recognition, machine learning, and signal processing. A bio-inspired method, the Bat Algorithm hybridized with a Naive Bayes classifier (BANB), is presented in this work. The performance of the proposed feature selection algorithm was investigated using twelve benchmark datasets from different domains and was compared to three other well-known feature selection algorithms. The discussion focuses on four perspectives: number of features, classification accuracy, stability, and feature generalization. The results showed that BANB significantly outperformed the other algorithms in selecting a lower number of features, hence removing irrelevant, redundant, or noisy features while maintaining classification accuracy. BANB also proved to be more stable than the other methods and capable of producing more general feature subsets. PMID:24396295

  5. Abdomen disease diagnosis in CT images using flexiscale curvelet transform and improved genetic algorithm.

    Science.gov (United States)

    Sethi, Gaurav; Saini, B S

    2015-12-01

    This paper presents an abdomen disease diagnostic system based on the flexi-scale curvelet transform, which uses different optimal scales for extracting features from computed tomography (CT) images. To optimize the scale of the flexi-scale curvelet transform, we propose an improved genetic algorithm. The conventional genetic algorithm assumes that fit parents will likely produce the healthiest offspring; this leads to the least fit parents accumulating at the bottom of the population, reducing the fitness of subsequent populations and delaying the search for the optimal solution. In our improved genetic algorithm, combining the chromosomes of a low-fitness and a high-fitness individual increases the probability of producing high-fitness offspring. Accordingly, every least fit parent chromosome is combined with a high-fitness parent to produce offspring for the next population. In this way, the leftover weak chromosomes cannot damage the fitness of subsequent populations. To further facilitate the search for the optimal solution, our improved genetic algorithm adopts modified elitism. The proposed method was applied to 120 CT abdominal images: 30 images each of normal subjects, cysts, tumors, and stones. The features extracted by the flexi-scale curvelet transform were more discriminative than those of conventional methods, demonstrating the potential of our method as a diagnostic tool for abdomen diseases.
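
    As a sketch of the pairing rule described above (the encoding, crossover and mutation operators, and elitism fraction are placeholders, not the paper's settings):

        import numpy as np

        def next_generation(pop, fitness, crossover, mutate, elite_frac=0.1):
            """Pair low-fitness parents with high-fitness parents, in the
            spirit of the improved GA above. pop: array of chromosomes;
            fitness: array, higher is better; crossover/mutate: placeholder
            problem-specific operators."""
            order = np.argsort(fitness)[::-1]          # best first
            n = len(pop)
            n_elite = max(1, int(elite_frac * n))
            children = [pop[i].copy() for i in order[:n_elite]]   # elitism
            top = order[: n // 2]                      # strong half
            bottom = order[n // 2:][::-1]              # weak half, weakest first
            k = 0
            while len(children) < n:
                # Each weak parent is mated with a strong one, so weak
                # chromosomes cannot drag down the next population's fitness.
                strong, weak = pop[top[k % len(top)]], pop[bottom[k % len(bottom)]]
                children.append(mutate(crossover(strong, weak)))
                k += 1
            return np.array(children)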

  6. Impact of respiratory-correlated CT sorting algorithms on the choice of margin definition for free-breathing lung radiotherapy treatments.

    Science.gov (United States)

    Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-06-01

    To investigate the impact of the Toshiba phase- and amplitude-sorting algorithms on margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations. A 4D CT of a sphere inside a dynamic thorax phantom was acquired. The 4D CT was reconstructed according to the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude variations, frequency variations, and a mix of amplitude and frequency variations. Artefact analysis was performed for the Mid-Ventilation and ITV-based strategies on the images reconstructed by the phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts for amplitude-only variations, while the phase-sorting algorithm does so for frequency-only variations. For combined amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference in the surface distortion value, which ranged between a maximum of 14.6% and a minimum of 4.1%. The amplitude-sorting algorithm is superior to the phase-sorting algorithm in reducing motion artefacts for amplitude variations, while the phase-sorting algorithm is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important in order to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Preliminary study on helical CT algorithms for patient motion estimation and compensation

    International Nuclear Information System (INIS)

    Wang, G.; Vannier, M.W.

    1995-01-01

    Helical computed tomography (helical/spiral CT) has replaced conventional CT in many clinical applications. In current helical CT, a patient is assumed to be rigid and motionless during scanning, and planar projection sets are produced from raw data via longitudinal interpolation. However, rigid patient motion is a problem in some cases (such as in skull base and temporal bone imaging). The motion artifacts thus generated in reconstructed images can prevent accurate diagnosis. Modeling a uniform translational movement, the authors address how patient motion is ascertained and how it may be compensated. First, the mismatch between adjacent fan-beam projections of the same orientation is determined via classical correlation; this mismatch is approximately proportional to the patient displacement projected onto an axis orthogonal to the central ray of the involved fan-beam. Then, the patient motion vector (the patient displacement per gantry rotation) is estimated from its projections using a least-square-root method. To suppress motion artifacts, adaptive interpolation algorithms are developed that synthesize full-scan and half-scan planar projection data sets, respectively. In the adaptive scheme, the interpolation is performed along inclined paths dependent upon the patient motion vector. The simulation results show that the patient motion vector can be accurately and reliably estimated using the correlation and least-square-root algorithm, that patient motion artifacts can be effectively suppressed via adaptive interpolation, and that adaptive half-scan interpolation is advantageous compared with its full-scan counterpart in terms of high-contrast image resolution
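
    To make the two estimation stages concrete, here is a simplified illustration under stated assumptions: uniform in-plane translation, and idealized parallel projections standing in for the fan-beam geometry of the paper.

        import numpy as np

        def projection_shift(p_ref, p_cur):
            """Stage 1: displacement between two projections of the same
            orientation, located at the peak of their cross-correlation
            (the 'classical correlation' of the abstract)."""
            c = np.correlate(p_cur - p_cur.mean(), p_ref - p_ref.mean(),
                             mode="full")
            return np.argmax(c) - (len(p_ref) - 1)

        def motion_vector(shifts, angles_deg):
            """Stage 2: least-squares estimate of the in-plane motion vector
            (dx, dy) from its projections at several view angles."""
            th = np.deg2rad(np.asarray(angles_deg))
            # A translation (dx, dy) shifts the profile at angle theta by
            # approximately dx*cos(theta) + dy*sin(theta).
            A = np.stack([np.cos(th), np.sin(th)], axis=1)
            sol, *_ = np.linalg.lstsq(A, np.asarray(shifts, float), rcond=None)
            return sol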

  8. Portfolio selection using genetic algorithms | Yahaya | International ...

    African Journals Online (AJOL)

    In this paper, one of the nature-inspired evolutionary algorithms – the Genetic Algorithm (GA) – was used in solving the portfolio selection problem (PSP). Based on a real dataset from a popular stock market, the performance of the algorithm in relation to those obtained from one of the popular quadratic programming (QP) ...

  9. [Accurate 3D free-form registration between fan-beam CT and cone-beam CT].

    Science.gov (United States)

    Liang, Yueqiang; Xu, Hongbing; Li, Baosheng; Li, Hongsheng; Yang, Fujun

    2012-06-01

    Because of X-ray scatter, the CT numbers in cone-beam CT cannot exactly correspond to the electron densities. This therefore results in registration errors when an intensity-based registration algorithm is used to register the planning fan-beam CT and the cone-beam CT. In order to reduce the registration error, we have developed an accurate gradient-based registration algorithm. The gradient-based deformable registration problem is described as a minimization of an energy functional. Through the calculus of variations and the Gauss-Seidel finite difference method, we derived the iterative formula of the deformable registration. The algorithm was implemented on the GPU through the OpenCL framework, which greatly reduced the registration time. Our experimental results showed that the proposed gradient-based registration algorithm could register clinical cone-beam CT and fan-beam CT images more accurately than the intensity-based algorithm. The GPU-accelerated algorithm meets the real-time requirement of online adaptive radiotherapy.
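
    The energy-minimization view above generalizes the classic intensity-based demons iteration; for orientation, a minimal demons-style update is sketched below with the standard intensity force (the paper's variant replaces this with a gradient-based force).

        import numpy as np
        from scipy.ndimage import gaussian_filter, map_coordinates

        def demons_step(fixed, moving, u, sigma=2.0):
            """One demons-style update of the 2D displacement field u
            (shape HxWx2). The force is the classic intensity demons force,
            shown for reference only."""
            grid = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
            warped = map_coordinates(moving, grid + np.moveaxis(u, -1, 0),
                                     order=1)
            gy, gx = np.gradient(fixed)
            diff = warped - fixed
            denom = gx**2 + gy**2 + diff**2 + 1e-9
            u[..., 0] -= diff * gy / denom      # row-direction force
            u[..., 1] -= diff * gx / denom      # column-direction force
            # Gaussian regularization of the field (diffusion-like smoothing).
            for k in range(2):
                u[..., k] = gaussian_filter(u[..., k], sigma)
            return u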

  10. Ad Hoc Access Gateway Selection Algorithm

    Science.gov (United States)

    Jie, Liu

    With the continuous development of mobile communication technology, the Ad Hoc access network has become a hot research topic. Ad Hoc access network nodes can be used to expand the capacity and multi-hop communication range of a mobile communication system, even into adjacent service areas, improving edge data rates. For mobile nodes in an Ad Hoc network to reach the Internet, communication with peer nodes on the Internet must pass through a gateway. Therefore, the key issues in Ad Hoc access networks are gateway discovery, gateway selection in the multi-gateway case, and handover between different gateways. Considering both the mobile node and the gateway, and based on the average number of hops, the average access time, and the stability of routes, an improved gateway selection algorithm is proposed. The algorithm mainly aims to improve the access time of Ad Hoc nodes and the continuity of communication across gateway handovers, which can improve the quality of communication across the network.

  11. Semiautomated hybrid algorithm for estimation of three-dimensional liver surface in CT using dynamic cellular automata and level-sets.

    Science.gov (United States)

    Dakua, Sarada Prasad; Abinahed, Julien; Al-Ansari, Abdulla

    2015-04-01

    Liver segmentation remains a major challenge, largely due to its intense complexity with surrounding anatomical structures (stomach, kidney, and heart), high noise levels and lack of contrast in pathological computed tomography (CT) data. We present an approach to reconstructing the liver surface in low-contrast CT. The main contributions are: (1) a stochastic resonance-based methodology in the discrete cosine transform domain is developed to enhance the contrast of pathological liver images, (2) a new formulation is proposed to prevent the object boundary, resulting from the cellular automata method, from leaking into surrounding areas of similar intensity, and (3) a level-set method is suggested to generate intermediate segmentation contours from two segmented slices distantly located in a subject sequence. We have tested the algorithm on real datasets obtained from two sources, Hamad General Hospital and the medical image computing and computer-assisted interventions grand challenge workshop. Various parameters in the algorithm, such as [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], play imperative roles, thus their values are precisely selected. Both qualitative and quantitative evaluation performed on liver data show promising segmentation accuracy when compared with ground truth data, reflecting the potential of the proposed method.

  12. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    International Nuclear Information System (INIS)

    Koyama, Hisanobu; Ohno, Yoshiharu; Yamazaki, Youichi; Nogami, Munenobu; Kusaka, Akiko; Murase, Kenya; Sugimura, Kazuro

    2010-01-01

    This study aimed to assess the influence of the reconstruction algorithm on quantitative assessments in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three kinds of reconstruction algorithm (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine the differences in quantitative CT parameters caused by the reconstruction algorithms, these parameters were compared statistically. To determine the relationships with disease severity, these parameters were correlated with PFTs. In the results, all histogram parameter values differed significantly from each other (p < 0.0001), and those of reconstruction algorithm C were the highest. All MLDs had fair or moderate correlation with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Though kurtosis and skewness in high-frequency reconstruction algorithm A had significant correlations with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), there were significant correlations only with diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco, and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments in chest thin-section MDCT examinations of interstitial pneumonia patients.

  13. Quantitatively assessed CT imaging measures of pulmonary interstitial pneumonia: Effects of reconstruction algorithms on histogram parameters

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Hisanobu [Department of Radiology, Hyogo Kaibara Hospital, 5208-1 Kaibara, Kaibara-cho, Tanba 669-3395 (Japan)], E-mail: hisanobu19760104@yahoo.co.jp; Ohno, Yoshiharu [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: yosirad@kobe-u.ac.jp; Yamazaki, Youichi [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: y.yamazk@sahs.med.osaka-u.ac.jp; Nogami, Munenobu [Division of PET, Institute of Biomedical Research and Innovation, 2-2 MInamimachi, Minatojima, Chu0-ku, Kobe 650-0047 (Japan)], E-mail: aznogami@fbri.org; Kusaka, Akiko [Division of Radiology, Kobe University Hospital, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: a.kusaka@hosp.kobe-u.ac.jp; Murase, Kenya [Department of Medical Physics and Engineering, Faculty of Health Sciences, Graduate School of Medicine, Osaka University, 1-7 Yamadaoka, Suita 565-0871 (Japan)], E-mail: murase@sahs.med.osaka-u.ac.jp; Sugimura, Kazuro [Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan)], E-mail: sugimura@med.kobe-u.ac.jp

    2010-04-15

    This study aimed to assess the influence of the reconstruction algorithm on quantitative assessments in interstitial pneumonia patients. A total of 25 collagen vascular disease patients (nine male and 16 female; mean age, 57.2 years; age range, 32-77 years) underwent thin-section MDCT examinations, and the MDCT data were reconstructed with three kinds of reconstruction algorithm (two high-frequency [A and B] and one standard [C]). In reconstruction algorithm B, the effect of low- and middle-frequency space was suppressed compared with reconstruction algorithm A. As quantitative CT parameters, kurtosis, skewness, and mean lung density (MLD) were acquired from a frequency histogram of the whole lung parenchyma for each reconstruction algorithm. To determine the differences in quantitative CT parameters caused by the reconstruction algorithms, these parameters were compared statistically. To determine the relationships with disease severity, these parameters were correlated with PFTs. In the results, all histogram parameter values differed significantly from each other (p < 0.0001), and those of reconstruction algorithm C were the highest. All MLDs had fair or moderate correlation with all parameters of PFT (-0.64 < r < -0.45, p < 0.05). Though kurtosis and skewness in high-frequency reconstruction algorithm A had significant correlations with all parameters of PFT (-0.61 < r < -0.45, p < 0.05), there were significant correlations only with diffusing capacity of carbon monoxide (DLco) and total lung capacity (TLC) in reconstruction algorithm C, and with forced expiratory volume in 1 s (FEV1), DLco, and TLC in reconstruction algorithm B. In conclusion, the reconstruction algorithm influences quantitative assessments in chest thin-section MDCT examinations of interstitial pneumonia patients.

  14. ADAPTIVE SELECTION OF AUXILIARY OBJECTIVES IN MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS

    Directory of Open Access Journals (Sweden)

    I. A. Petrova

    2016-05-01

    Full Text Available Subject of Research. We propose a modification of the EA+RL method, which increases the efficiency of evolutionary algorithms by means of auxiliary objectives. The proposed modification is compared to existing objective selection methods on the travelling salesman problem. Method. In the EA+RL method, a reinforcement learning algorithm is used to select an objective – the target objective or one of the auxiliary objectives – at each iteration of the single-objective evolutionary algorithm. The proposed modification of the EA+RL method adopts this approach for use with a multiobjective evolutionary algorithm. As opposed to the EA+RL method, in this modification one of the auxiliary objectives is selected by reinforcement learning and optimized together with the target objective at each step of the multiobjective evolutionary algorithm. Main Results. The proposed modification of the EA+RL method was compared to existing objective selection methods on the travelling salesman problem. In the EA+RL method and its proposed modification, reinforcement learning algorithms for stationary and non-stationary environments were used. The proposed modification applied with reinforcement learning for a non-stationary environment outperformed the considered objective selection algorithms on most problem instances. Practical Significance. The proposed approach increases the efficiency of evolutionary algorithms, which may be used for solving discrete NP-hard optimization problems, in particular combinatorial path search problems and scheduling problems.
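
    A minimal sketch of reinforcement-learning-driven objective selection (an epsilon-greedy bandit standing in for the paper's RL agent; the evolutionary step and the reward definition are placeholders):

        import numpy as np

        def ea_rl(init_pop, aux_objectives, ea_step, target_fitness,
                  n_gens=100, eps=0.1, seed=0):
            """Schematic EA+RL-style loop: an epsilon-greedy agent picks one
            auxiliary objective per generation to optimize together with the
            target objective. ea_step(pop, aux) and target_fitness(pop) are
            problem-specific placeholders."""
            rng = np.random.default_rng(seed)
            q = np.zeros(len(aux_objectives))       # action-value estimates
            n = np.zeros(len(aux_objectives))
            pop, best = init_pop, target_fitness(init_pop)
            for _ in range(n_gens):
                a = (int(rng.integers(len(aux_objectives)))
                     if rng.random() < eps else int(np.argmax(q)))
                pop = ea_step(pop, aux_objectives[a])
                new_best = target_fitness(pop)
                reward = new_best - best            # reward = target progress
                best = new_best
                n[a] += 1
                q[a] += (reward - q[a]) / n[a]      # incremental mean update
            return pop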

  15. Selection of views to materialize using simulated annealing algorithms

    Science.gov (United States)

    Zhou, Lijuan; Liu, Chi; Wang, Hongfeng; Liu, Daixin

    2002-03-01

    A data warehouse contains many materialized views over the data provided by distributed heterogeneous databases, for the purpose of efficiently implementing decision-support or OLAP queries. It is important to select the right views to materialize that answer a given set of queries. The goal is the minimization of the combined query evaluation and view maintenance costs. In this paper, we have addressed and designed algorithms for selecting a set of views to be materialized so that the sum of the cost of processing a set of queries and the cost of maintaining the materialized views is minimized. We develop an approach using simulated annealing algorithms to solve this problem. First, we explore simulated annealing algorithms to optimize the selection of materialized views. Then we use experiments to demonstrate our approach. We implemented our algorithms, and a performance study shows that the proposed algorithm gives an optimal solution.
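
    A compact sketch of the approach under common textbook assumptions (the cost model, neighborhood move, and cooling schedule are illustrative placeholders):

        import math
        import random

        def anneal_views(n_views, cost, t0=1.0, t_min=1e-3, alpha=0.95,
                         moves=100):
            """Simulated annealing over subsets of candidate views.
            cost(selection): placeholder returning query-evaluation plus
            view-maintenance cost for a frozenset of view indices."""
            current = frozenset(random.sample(range(n_views), n_views // 2))
            best, t = current, t0
            while t > t_min:
                for _ in range(moves):
                    v = random.randrange(n_views)      # flip one view in/out
                    cand = current ^ {v}
                    delta = cost(cand) - cost(current)
                    # Accept improvements always; accept uphill moves with
                    # Boltzmann probability so the search can escape local minima.
                    if delta < 0 or random.random() < math.exp(-delta / t):
                        current = cand
                        if cost(current) < cost(best):
                            best = current
                t *= alpha                             # geometric cooling
            return best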

  16. Multiatlas whole heart segmentation of CT data using conditional entropy for atlas ranking and selection

    Energy Technology Data Exchange (ETDEWEB)

    Zhuang, Xiahai, E-mail: zhuangxiahai@sjtu.edu.cn; Qian, Xiaohua [SJTU-CU International Cooperative Research Center, Department of Engineering Mechanics, School of Naval Architecture Ocean and Civil Engineering, Shanghai Jiao Tong University, Shanghai 200240 (China); Bai, Wenjia; Shi, Wenzhe; Rueckert, Daniel [Biomedical Image Analysis Group, Department of Computing, Imperial College London, 180 Queens Gate, London SW7 2AZ (United Kingdom); Song, Jingjing; Zhan, Songhua [Shuguang Hospital Affiliated to Shanghai University of Traditional Chinese Medicine, Shanghai 201203 (China); Lian, Yanyun [Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai 201210 (China)

    2015-07-15

    Purpose: Cardiac computed tomography (CT) is widely used in clinical diagnosis of cardiovascular diseases. Whole heart segmentation (WHS) plays a vital role in developing new clinical applications of cardiac CT. However, the shape and appearance of the heart can vary greatly across different scans, making the automatic segmentation particularly challenging. The objective of this work is to develop and evaluate a multiatlas segmentation (MAS) scheme using a new atlas ranking and selection algorithm for automatic WHS of CT data. Research on different MAS strategies and their influence on WHS performance is limited. This work provides a detailed comparison study evaluating the impacts of label fusion, atlas ranking, and the size of the atlas database on the segmentation performance. Methods: Atlases in a database were registered to the target image using a hierarchical registration scheme specifically designed for cardiac images. A subset of the atlases was selected for label fusion, according to the authors’ proposed atlas ranking criterion, which evaluated the performance of each atlas by computing the conditional entropy of the target image given the propagated atlas labeling. Joint label fusion was used to combine multiple label estimates to obtain the final segmentation. The authors used 30 clinical cardiac CT angiography (CTA) images to evaluate the proposed MAS scheme and to investigate different segmentation strategies. Results: The mean WHS Dice score of the proposed MAS method was 0.918 ± 0.021, and the mean runtime for one case was 13.2 min on a workstation. This MAS scheme using joint label fusion generated significantly better Dice scores than the other label fusion strategies, including majority voting (0.901 ± 0.276, p < 0.01), locally weighted voting (0.905 ± 0.0247, p < 0.01), and probabilistic patch-based fusion (0.909 ± 0.0249, p < 0.01). In the atlas ranking study, the proposed criterion based on conditional entropy yielded a performance curve
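
    The ranking criterion can be pictured as follows: for each atlas, build the joint histogram of target intensities and propagated labels, and score the atlas by the conditional entropy H(target | labels), lower meaning the labeling explains the target image better. A minimal sketch (the bin count and top-k cut-off are illustrative choices):

        import numpy as np

        def conditional_entropy(target, labels, n_bins=32):
            """H(T | L) from the joint histogram of binned target intensities
            and propagated atlas labels. Lower = better-fitting atlas."""
            t = np.digitize(target.ravel(),
                            np.histogram_bin_edges(target, n_bins))
            l = labels.ravel().astype(int)
            joint = np.zeros((t.max() + 1, l.max() + 1))
            np.add.at(joint, (t, l), 1)
            p_joint = joint / joint.sum()
            p_l = p_joint.sum(axis=0, keepdims=True)
            with np.errstate(divide="ignore", invalid="ignore"):
                h = -np.nansum(p_joint *
                               np.log(p_joint / np.where(p_l > 0, p_l, 1)))
            return h

        def rank_atlases(target, propagated_labelings, top_k=5):
            """Select the top_k atlases with the lowest conditional entropy."""
            scores = [conditional_entropy(target, lab)
                      for lab in propagated_labelings]
            return np.argsort(scores)[:top_k]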

  17. Parameter Selection for Ant Colony Algorithm Based on Bacterial Foraging Algorithm

    Directory of Open Access Journals (Sweden)

    Peng Li

    2016-01-01

    Full Text Available The optimal performance of the ant colony algorithm (ACA mainly depends on suitable parameters; therefore, parameter selection for ACA is important. We propose a parameter selection method for ACA based on the bacterial foraging algorithm (BFA, considering the effects of coupling between different parameters. Firstly, parameters for ACA are mapped into a multidimensional space, using a chemotactic operator to ensure that each parameter group approaches the optimal value, speeding up the convergence for each parameter set. Secondly, the operation speed for optimizing the entire parameter set is accelerated using a reproduction operator. Finally, the elimination-dispersal operator is used to strengthen the global optimization of the parameters, which avoids falling into a local optimal solution. In order to validate the effectiveness of this method, the results were compared with those using a genetic algorithm (GA and a particle swarm optimization (PSO, and simulations were conducted using different grid maps for robot path planning. The results indicated that parameter selection for ACA based on BFA was the superior method, able to determine the best parameter combination rapidly, accurately, and effectively.

  18. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent from the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.

  19. Object-constrained meshless deformable algorithm for high speed 3D nonrigid registration between CT and CBCT

    International Nuclear Information System (INIS)

    Chen Ting; Kim, Sung; Goyal, Sharad; Jabbour, Salma; Zhou Jinghao; Rajagopal, Gunaretnum; Haffty, Bruce; Yue Ning

    2010-01-01

    Purpose: High-speed nonrigid registration between the planning CT and the treatment CBCT data is critical for real time image guided radiotherapy (IGRT) to improve the dose distribution and to reduce the toxicity to adjacent organs. The authors propose a new fully automatic 3D registration framework that integrates object-based global and seed constraints with the grayscale-based "demons" algorithm. Methods: Clinical objects were segmented on the planning CT images and were utilized as meshless deformable models during the nonrigid registration process. The meshless models reinforced a global constraint in addition to the grayscale difference between CT and CBCT in order to maintain the shape and the volume of geometrically complex 3D objects during the registration. To expedite the registration process, the framework was stratified into hierarchies, and the authors used a frequency domain formulation to diffuse the displacement between the reference and the target in each hierarchy. Also during the registration of pelvis images, they replaced the air region inside the rectum with estimated pixel values from the surrounding rectal wall and introduced an additional seed constraint to robustly track and match the seeds implanted into the prostate. The proposed registration framework and algorithm were evaluated on 15 real prostate cancer patients. For each patient, prostate gland, seminal vesicle, bladder, and rectum were first segmented by a radiation oncologist on planning CT images for radiotherapy planning purpose. The same radiation oncologist also manually delineated the tumor volumes and critical anatomical structures in the corresponding CBCT images acquired at treatment. These delineated structures on the CBCT were only used as the ground truth for the quantitative validation, while structures on the planning CT were used both as the input to the registration method and the ground truth in validation. By registering the planning CT to the CBCT, a

  20. Development of a guideline on reading CT images of malignant pleural mesothelioma and selection of the reference CT films

    International Nuclear Information System (INIS)

    Zhou, Huashi; Tamura, Taro; Kusaka, Yukinori; Suganuma, Narufumi; Subhannachart, Ponglada; Vijitsanguan, Chomphunut; Noisiri, Weeraya; Hering, Kurt G.; Akira, Masanori; Itoh, Harumi

    2012-01-01

    Purpose: International experts developed a guideline on reading CT images of malignant pleural mesothelioma for radiologists and physicians. It is intended to act as a supplement to the current International Classification of HRCT for Occupational and Environmental Respiratory Diseases. Methods: The research literature on mesothelioma CT features was systematically reviewed. Ten mesothelioma CT features were adopted into the guideline, which was prepared according to expert opinion. The terminology of mesothelioma CT features and mesothelioma probability was agreed by consensus of the experts. The CT reference films for each mesothelioma feature were selected, based on agreement by the experts, from 22 definite mesothelioma cases confirmed pathologically and immunohistochemically. To support the validity of the mesothelioma probability, four experts’ readings of CT films from 57 cases with or without mesothelioma were analyzed by kappa statistics between the experts; sensitivity and specificity for mesothelioma were also assessed. Results: The mesothelioma CT Guideline was developed, providing the terminology of CT features and the mesothelioma probability, the judgement of severity, the distribution of mesothelioma, and a revised CT reading sheet including mesothelioma items. CT reference films with the ten typical mesothelioma features were selected. The average linearly and quadratically weighted kappa values for agreement on the 4-point-scale mesothelioma probability were 0.58 and 0.71, respectively. The average sensitivity and specificity for mesothelioma were 93.2% and 65.6%, respectively. Conclusion: The evidence-based mesothelioma CT Guideline may serve as a good educational tool to facilitate physicians in recognising mesothelioma and improve their proficiency in the diagnosis of mesothelioma.

  1. Development of a guideline on reading CT images of malignant pleural mesothelioma and selection of the reference CT films

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Huashi, E-mail: zhouhua@u-fukui.ac.jp [Department of Environmental Health, School of Medicine, University of Fukui, 23-3 Shimoaitsuki, Matsuoka, Eihezi-cho, Fukui Prefecture 910-1193 (Japan); Tamura, Taro, E-mail: tarou@u-fukui.ac.jp [Department of Environmental Health, School of Medicine, University of Fukui, 23-3 Shimoaitsuki, Matsuoka, Eihezi-cho, Fukui Prefecture 910-1193 (Japan); Kusaka, Yukinori, E-mail: kusakayk@gmail.com [Department of Environmental Health, School of Medicine, University of Fukui, 23-3 Shimoaitsuki, Matsuoka, Eihezi-cho, Fukui Prefecture 910-1193 (Japan); Suganuma, Narufumi, E-mail: nsuganuma@kochi-u.ac.jp [Department of Environmental Medicine, Kochi University School of Medicine (Japan); Subhannachart, Ponglada, E-mail: pongladas@gmail.com [Central Chest Disease Institute of Thailand, 39 Moo 9, Tiwanon Road, Muang Nonthaburi 11000 (Thailand); Vijitsanguan, Chomphunut, E-mail: Chompoo_vj@yahoo.com [Central Chest Disease Institute of Thailand, 39 Moo 9, Tiwanon Road, Muang Nonthaburi 11000 (Thailand); Noisiri, Weeraya, E-mail: weeraya_tat@yahoo.com [Central Chest Disease Institute of Thailand, 39 Moo 9, Tiwanon Road, Muang Nonthaburi 11000 (Thailand); Hering, Kurt G., E-mail: k.g.hering@t-online.de [Department of Diagnostic Radiology, Radiooncology and Nuclear Medicine, Radiological Clinic, Miner's Hospital, Radiologische Klinik, Knappschaftskrankenhaus Dortmund, Wieckesweg 27, 44309 Dortmund (Germany); Akira, Masanori, E-mail: akira@kch.hosp.go.jp [Department of Radiology, National Hospital Organization Kinki-Chuo Chest Medical Center, 1180 Nagasone-cho, Kita-ku, Sakai, Osaka 591-8555 (Japan); Itoh, Harumi, E-mail: hitoh@fmsrsa.fukui-med.ac.jp [Department of Environmental Health, School of Medicine, University of Fukui, 23-3 Shimoaitsuki, Matsuoka, Eihezi-cho, Fukui Prefecture 910-1193 (Japan); Department of Radiology, School of Medicine, University of Fukui, 23-3 Shimoaitsuki Matsuoka, Eiheizi-cho, Fukui Prefecture 910-1193 (Japan); and others

    2012-12-15

    Purpose: International experts developed a guideline on reading CT images of malignant pleural mesothelioma for radiologists and physicians. It is intended to act as a supplement to the current International Classification of HRCT for Occupational and Environmental Respiratory Diseases. Methods: The research literature on mesothelioma CT features was systematically reviewed. Ten mesothelioma CT features were adopted into the guideline, which was prepared according to the experts' opinions. The terminology of mesothelioma CT features and the mesothelioma probability were agreed upon by consensus of the experts. The CT reference films for each mesothelioma feature were selected, based on agreement among the experts, from 22 definite mesothelioma cases confirmed pathologically and immunohistochemically. To support the validity of the mesothelioma probability, 4 experts' readings of CT films from 57 cases with or without mesothelioma were analyzed by kappa statistics between the experts; sensitivity and specificity for mesothelioma were also assessed. Results: The mesothelioma CT Guideline was developed, providing the terminology of CT features and the mesothelioma probability, the judgement of severity, the distribution of mesothelioma, and a revised CT reading sheet including mesothelioma items. CT reference films with the ten typical mesothelioma features were selected. The average linearly and quadratically weighted kappa values for agreement on the 4-point mesothelioma probability scale were 0.58 and 0.71, respectively. The average sensitivity and specificity for mesothelioma were 93.2% and 65.6%, respectively. Conclusion: The evidence-based mesothelioma CT Guideline may serve as a good educational tool, helping physicians to recognise mesothelioma and improving their proficiency in its diagnosis.

  2. SU-E-J-150: Four-Dimensional Cone-Beam CT Algorithm by Extraction of Physical and Motion Parameter of Mobile Targets Retrospective to Image Reconstruction with Motion Modeling

    International Nuclear Information System (INIS)

    Ali, I; Ahmad, S; Alsbou, N

    2015-01-01

    Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT number level, and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length of a mobile target and the level and gradient of its blurred CT number distribution obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel, which were inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length and the CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual target length and the motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the actual target length, and the motion amplitude. Motion frequency and phase did not affect the elongation or the CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level, and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking or sorting of the images into different breathing phases.
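
    The inversion step, recovering three unknowns from three measured image quantities, can be illustrated with a toy forward model. The model below (apparent elongation by twice the amplitude, plateau dilution, and an edge gradient set by the blur width) is our own simplification for illustration only, not the authors' published equations, and the measured numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import fsolve

def forward(params):
    """Toy forward model mapping (actual length L0, stationary CT number H0,
    amplitude A) to the three measurable image quantities."""
    L0, H0, A = params
    apparent_length = L0 + 2.0 * A          # sinusoidal blur elongation
    ct_level = H0 * L0 / (L0 + 2.0 * A)     # diluted CT number plateau
    ct_gradient = H0 / (2.0 * A + 1e-9)     # edge gradient of the blurred profile
    return np.array([apparent_length, ct_level, ct_gradient])

measured = np.array([30.0, 40.0, 8.0])      # [mm, HU, HU/mm], hypothetical values
L0, H0, A = fsolve(lambda p: forward(p) - measured, x0=[20.0, 60.0, 5.0])
print(f"actual length {L0:.1f} mm, CT number {H0:.0f} HU, amplitude {A:.1f} mm")
```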

  3. SU-E-J-150: Four-Dimensional Cone-Beam CT Algorithm by Extraction of Physical and Motion Parameter of Mobile Targets Retrospective to Image Reconstruction with Motion Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ali, I; Ahmad, S [University of Oklahoma Health Sciences, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)

    2015-06-15

    Purpose: To develop a 4D cone-beam CT (CBCT) algorithm based on motion modeling that extracts the actual length, CT number level, and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length of a mobile target and the level and gradient of its blurred CT number distribution obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel, which were inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length and the CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual target length and the motion amplitude. The gradient of the CT number distribution of the mobile target is dependent on the stationary CT number level, the actual target length, and the motion amplitude. Motion frequency and phase did not affect the elongation or the CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level, and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking or sorting of the images into different breathing phases.

  4. Comprehensive evaluation of ten deformable image registration algorithms for contour propagation between CT and cone-beam CT images in adaptive head & neck radiotherapy.

    Science.gov (United States)

    Li, Xin; Zhang, Yuyu; Shi, Yinghua; Wu, Shuyu; Xiao, Yang; Gu, Xuejun; Zhen, Xin; Zhou, Linghong

    2017-01-01

    Deformable image registration (DIR) is a critical technique in adaptive radiotherapy (ART) for propagating contours between planning computerized tomography (CT) images and treatment CT/cone-beam CT (CBCT) images to account for organ deformation in treatment re-planning. To validate the ability and accuracy of DIR algorithms in organ-at-risk (OAR) contour mapping, ten intensity-based DIR strategies, classified into four categories (optical flow-based, demons-based, level-set-based and spline-based), were tested on planning CT and fractional CBCT images acquired from twenty-one head & neck (H&N) cancer patients who underwent 6- to 7-week intensity-modulated radiation therapy (IMRT). Three similarity metrics, i.e., the Dice similarity coefficient (DSC), the percentage error (PE) and the Hausdorff distance (HD), were employed to measure the agreement between the propagated contours and the physician-delineated ground truths of four OARs: the vertebra (VTB), the vertebral foramen (VF), the parotid gland (PG) and the submandibular gland (SMG). It was found that the DIRs evaluated in this work did not necessarily outperform rigid registration. DIR performed better for bony structures than for soft-tissue organs, and DIR performance tended to vary for different ROIs with different degrees of deformation as the treatment proceeded. Generally, the optical flow-based DIR performed best, while the demons-based DIR usually ranked last, except for a modified demons-based DISC used for CT-CBCT DIR. These experimental results suggest that the choice of a specific DIR algorithm depends on the image modality, anatomic site, magnitude of deformation and application. Therefore, careful examination and modification are required before accepting auto-propagated contours, especially for automatic re-planning ART systems.
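
    Two of the three agreement metrics are straightforward to compute from binary masks; a minimal sketch using SciPy distance transforms (the PE definition varies between papers and is omitted):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance (in mm) between two boolean masks."""
    dist_to_a = distance_transform_edt(~a, sampling=spacing)  # distance to mask a
    dist_to_b = distance_transform_edt(~b, sampling=spacing)  # distance to mask b
    return max(dist_to_b[a].max(), dist_to_a[b].max())
```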

  5. Imaging algorithms and CT protocols in trauma patients: survey of Swiss emergency centers

    International Nuclear Information System (INIS)

    Hinzpeter, R.; Alkadhi, Hatem; Boehm, T.; Boll, D.; Constantin, C.; Del Grande, F.; Fretz, V.; Leschka, S.; Ohletz, T.; Broennimann, M.; Schmidt, S.; Treumann, T.; Poletti, P.A.

    2017-01-01

    To identify imaging algorithms and indications, CT protocols, and radiation doses in polytrauma patients in Swiss trauma centres. An online survey with multiple-choice questions and free-text responses was sent to authorized level-I trauma centres in Switzerland. All centres responded and indicated that they have internal standardized imaging algorithms for polytrauma patients. Nine of 12 centres (75 %) perform whole-body CT (WBCT) after focused assessment with sonography for trauma (FAST) and conventional radiography; 3/12 (25 %) use WBCT for initial imaging. Indications for WBCT were similar across centres, being based on trauma mechanisms, vital signs, and the presence of multiple injuries. Seven of 12 centres (58 %) perform an arterial and venous phase of the abdomen in split-bolus technique. Six of 12 centres (50 %) use multiphase protocols of the head (n = 3) and abdomen (n = 4), whereas 6/12 (50 %) use single-phase protocols for WBCT. Arm position was on the patient's body during scanning (3/12, 25 %), alongside the body (2/12, 17 %), above the head (2/12, 17 %), or was changed during scanning (5/12, 42 %). Radiation doses showed large variations across centres, ranging from 1268 to 3988 mGy·cm (DLP) per WBCT. Imaging algorithms in polytrauma patients are standardized within, but vary across, Swiss trauma centres, as do the individual WBCT protocols, resulting in large variations in associated radiation doses. (orig.)

  6. Imaging algorithms and CT protocols in trauma patients: survey of Swiss emergency centers

    Energy Technology Data Exchange (ETDEWEB)

    Hinzpeter, R.; Alkadhi, Hatem [University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Boehm, T. [Kantonsspital Graubuenden, Department of Radiology, Chur (Switzerland); Boll, D. [University Hospital Basel, Department of Radiology and Nuclear Medicine, Basel (Switzerland); Constantin, C. [Spital Wallis, Department of Radiology, Visp (Switzerland); Del Grande, F. [Ospedale Regionale di Lugano, Department of Radiology, Lugano (Switzerland); Fretz, V. [Kantonsspital Winterthur, Institute of Radiology and Nuclear Medicine, Winterthur (Switzerland); Leschka, S. [Kantonsspital St Gallen, Division of Radiology and Nuclear Medicine, Gallen (Switzerland); Ohletz, T. [Kantonsspital Aarau, Department of Radiology, Aarau (Switzerland); Broennimann, M. [University Hospital Bern, Department of Diagnostic, Interventional and Pediatric Radiology, Bern (Switzerland); Schmidt, S. [Lausanne University Hospital, Department of Diagnostic and Interventional Radiology, Lausanne (Switzerland); Treumann, T. [Luzerner Kantonsspital, Institute of Radiology, Luzern 16 (Switzerland); Poletti, P.A. [Geneva University Hospital, Department of Radiology, Geneve (Switzerland)

    2017-05-15

    To identify imaging algorithms and indications, CT protocols, and radiation doses in polytrauma patients in Swiss trauma centres. An online survey with multiple-choice questions and free-text responses was sent to authorized level-I trauma centres in Switzerland. All centres responded and indicated that they have internal standardized imaging algorithms for polytrauma patients. Nine of 12 centres (75 %) perform whole-body CT (WBCT) after focused assessment with sonography for trauma (FAST) and conventional radiography; 3/12 (25 %) use WBCT for initial imaging. Indications for WBCT were similar across centres, being based on trauma mechanisms, vital signs, and the presence of multiple injuries. Seven of 12 centres (58 %) perform an arterial and venous phase of the abdomen in split-bolus technique. Six of 12 centres (50 %) use multiphase protocols of the head (n = 3) and abdomen (n = 4), whereas 6/12 (50 %) use single-phase protocols for WBCT. Arm position was on the patient's body during scanning (3/12, 25 %), alongside the body (2/12, 17 %), above the head (2/12, 17 %), or was changed during scanning (5/12, 42 %). Radiation doses showed large variations across centres, ranging from 1268 to 3988 mGy·cm (DLP) per WBCT. Imaging algorithms in polytrauma patients are standardized within, but vary across, Swiss trauma centres, as do the individual WBCT protocols, resulting in large variations in associated radiation doses. (orig.)

  7. Image Registration for PET/CT and CT Images with Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Lee, Hak Jae; Kim, Yong Kwon; Lee, Ki Sung; Choi, Jong Hak; Kim, Chang Kyun; Moon, Guk Hyun; Joo, Sung Kwan; Kim, Kyeong Min; Cheon, Gi Jeong

    2009-01-01

    Image registration is a fundamental task in image processing used to match two or more images. It gives new information to radiologists by matching images from different modalities. The objective of this study is to develop a 2D image registration algorithm for PET/CT and CT images acquired by different systems at different times. We first matched the two CT images (one from the standalone CT and the other from the PET/CT), which contain abundant anatomical information. Then, we geometrically transformed the PET image according to the transformation parameters calculated in the previous step. We used an affine transform to match the target and reference images. For the similarity measure, mutual information was explored. Use of a particle swarm algorithm optimized the performance by finding the best-matched parameter set within a reasonable amount of time. The results show good agreement between the PET/CT and CT images. We expect that the proposed algorithm can be used not only for PET/CT and CT image registration but also for other multi-modality imaging systems such as SPECT/CT, MRI/PET and so on.
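
    The mutual information similarity that the particle swarm search maximizes over the affine parameters can be estimated from a joint histogram; a minimal sketch:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Histogram-based mutual information between two same-sized images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of image b
    nz = pxy > 0                                 # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    In a registration loop, each particle encodes a candidate set of affine parameters; its fitness is the mutual information between the reference image and the correspondingly transformed target.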

  8. An Improved User Selection Algorithm in Multiuser MIMO Broadcast with Channel Prediction

    Science.gov (United States)

    Min, Zhi; Ohtsuki, Tomoaki

    In multiuser MIMO-BC (Multiple-Input Multiple-Output Broadcasting) systems, user selection is important for achieving multiuser diversity. The optimal user selection algorithm tries all combinations of users to find the user group that achieves the multiuser diversity. Unfortunately, the high calculation cost of the optimal algorithm prevents its implementation. Thus, instead of the optimal algorithm, suboptimal user selection algorithms based on the semiorthogonality of user channel vectors have been proposed. The purpose of this paper is to achieve multiuser diversity with a small amount of calculation. For this purpose, we propose a user selection algorithm that can improve the orthogonality of a selected user group. We also apply a channel prediction technique to a MIMO-BC system to obtain more accurate channel information at the transmitter. Simulation results show that the channel prediction can improve the accuracy of channel information for user selection, and that the proposed user selection algorithm achieves higher sum rate capacity than the SUS (Semiorthogonal User Selection) algorithm. We also discuss the setting of the algorithm threshold. Using the number of complex multiplications as the measure of calculation complexity, the proposed algorithm is shown to have a calculation complexity almost equal to that of the SUS algorithm; both are much lower than that of the optimal user selection algorithm.
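
    For context, the semiorthogonality idea that underlies SUS-style selection can be written as a greedy loop. The sketch below is a generic textbook version (the threshold alpha and the exact pruning rule are assumptions), not the improved algorithm proposed in the paper.

```python
import numpy as np

def semiorthogonal_selection(H, n_select, alpha=0.3):
    """Greedy semiorthogonal user selection. H: (n_users, n_tx) complex
    channel matrix; alpha: orthogonality threshold for keeping candidates."""
    selected, basis = [], []
    candidates = list(range(H.shape[0]))
    while candidates and len(selected) < n_select:
        # project each candidate channel onto the orthogonal complement
        # of the span of the already-selected channels
        g = {}
        for u in candidates:
            h = H[u].astype(complex)
            for b in basis:
                h = h - (b.conj() @ H[u]) * b
            g[u] = h
        best = max(candidates, key=lambda u: np.linalg.norm(g[u]))
        selected.append(best)
        basis.append(g[best] / np.linalg.norm(g[best]))
        # keep only users sufficiently orthogonal to the newly selected one
        candidates = [u for u in candidates if u != best and
                      abs(H[u].conj() @ basis[-1]) / np.linalg.norm(H[u]) < alpha]
    return selected
```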

  9. Dosimetric Evaluation of Metal Artefact Reduction using Metal Artefact Reduction (MAR) Algorithm and Dual-energy Computed Tomography (CT) Method

    Science.gov (United States)

    Laguda, Edcer Jerecho

    Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT is a fast-acquisition imaging modality with higher spatial resolution and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented through a gray scale of independent values in Hounsfield units (HU). High HU values represent high-density materials. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This problem of increased HU values due to the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of metal objects that are of clinical relevance. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts for better image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on severe artefacts on CT images. This study uses the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method. Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), and the Dual-Energy Imaging Method was developed at Duke University. All three

  10. Low-dose dual-energy cone-beam CT using a total-variation minimization algorithm

    International Nuclear Information System (INIS)

    Min, Jong Hwan

    2011-02-01

    Dual-energy cone-beam CT is an important imaging modality in diagnostic applications, and may also find use in other applications such as therapeutic image guidance. Despite its clinical value, the relatively high radiation dose of a dual-energy scan may pose a challenge to its wide use. In this work, we investigated a low-dose, pre-reconstruction type of dual-energy cone-beam CT (CBCT) using a total-variation minimization algorithm for image reconstruction. An empirical dual-energy calibration method was used to prepare material-specific projection data. Raw data at high and low tube voltages are converted into a set of basis functions which can be linearly combined to produce material-specific data using the coefficients obtained through the calibration process. From many fewer views than are conventionally used, material-specific images are reconstructed by use of the total-variation minimization algorithm. An experimental study was performed to demonstrate the feasibility of the proposed method using a micro-CT system. We reconstructed images of the phantoms from only 90 projections acquired at tube voltages of 40 kVp and 90 kVp each. Aluminum-only and acryl-only images were successfully decomposed. We evaluated the quality of the reconstructed images by use of the contrast-to-noise ratio and detectability. A low-dose dual-energy CBCT can thus be realized via the proposed method by greatly reducing the number of projections.
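
    TV-minimization reconstruction typically alternates data-fidelity updates (e.g., ART/POCS) with descent steps on the image's total variation. Below is a minimal sketch of the TV descent direction for a 2D image, using standard forward differences and their adjoint; this is the generic scheme, not the thesis' exact implementation.

```python
import numpy as np

def tv_descent_direction(f, eps=1e-8):
    """Negative gradient of the smoothed isotropic TV norm of image f."""
    dx = np.zeros_like(f); dx[:-1, :] = f[1:, :] - f[:-1, :]   # forward diffs
    dy = np.zeros_like(f); dy[:, :-1] = f[:, 1:] - f[:, :-1]
    mag = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / mag, dy / mag                 # normalized gradient field
    d = np.zeros_like(f)                        # divergence (adjoint of diffs)
    d[:-1, :] += px[:-1, :]; d[1:, :] -= px[:-1, :]
    d[:, :-1] += py[:, :-1]; d[:, 1:] -= py[:, :-1]
    return d                                    # f += step * d lowers the TV

# after each data-fidelity update, e.g.:
# image += 0.2 * tv_descent_direction(image)
```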

  11. Applying an animal model to quantify the uncertainties of an image-based 4D-CT algorithm

    International Nuclear Information System (INIS)

    Pierce, Greg; Battista, Jerry; Wang, Kevin; Lee, Ting-Yim

    2012-01-01

    The purpose of this paper is to use an animal model to quantify the spatial displacement uncertainties and test the fundamental assumptions of an image-based 4D-CT algorithm in vivo. Six female Landrace cross pigs were ventilated and imaged using a 64-slice CT scanner (GE Healthcare) operating in axial cine mode. The breathing amplitude pattern of the pigs was varied by periodically crimping the ventilator gas return tube during the image acquisition. The image data were used to determine the displacement uncertainties that result from matching CT images at the same respiratory phase using normalized cross correlation (NCC) as the matching criterion. Additionally, the ability to match the respiratory phase of a 4.0 cm subvolume of the thorax to a reference subvolume using only a single overlapping 2D slice from the two subvolumes was tested by varying the location of the overlapping matching image within the subvolume and examining the effect this had on the displacement relative to the reference volume. The displacement uncertainty resulting from matching two respiratory images using NCC ranged from 0.54 ± 0.10 mm per match to 0.32 ± 0.16 mm per match in the lung of the animal. The uncertainty was found to propagate in quadrature, increasing with the number of NCC matches performed. In comparison, the minimum displacement achievable if two respiratory images were matched perfectly in phase ranged from 0.77 ± 0.06 to 0.93 ± 0.06 mm in the lung. The assumption that subvolumes from separate cine scans can be matched by matching a single overlapping 2D image between the two subvolumes was validated. An in vivo animal model was developed to test an image-based 4D-CT algorithm. The uncertainties associated with using NCC to match the respiratory phase of two images were quantified, and the assumption that a 4.0 cm 3D subvolume can be matched in respiratory phase by matching a single 2D image from the 3D subvolume was validated. The work in this paper shows the image-based 4D-CT
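
    The matching criterion itself is compact; a sketch of NCC computation and of picking the cine image that best matches a reference respiratory phase (array names are illustrative):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two same-shaped images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_phase_match(reference, cine_series):
    """Index of the cine image whose respiratory phase best matches the
    reference slice, by maximizing NCC."""
    return int(np.argmax([ncc(reference, img) for img in cine_series]))
```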

  12. Comprehensive evaluation of ten deformable image registration algorithms for contour propagation between CT and cone-beam CT images in adaptive head & neck radiotherapy.

    Directory of Open Access Journals (Sweden)

    Xin Li

    Full Text Available Deformable image registration (DIR) is a critical technique in adaptive radiotherapy (ART) for propagating contours between planning computerized tomography (CT) images and treatment CT/cone-beam CT (CBCT) images to account for organ deformation in treatment re-planning. To validate the ability and accuracy of DIR algorithms in organ-at-risk (OAR) contour mapping, ten intensity-based DIR strategies, classified into four categories (optical flow-based, demons-based, level-set-based and spline-based), were tested on planning CT and fractional CBCT images acquired from twenty-one head & neck (H&N) cancer patients who underwent 6- to 7-week intensity-modulated radiation therapy (IMRT). Three similarity metrics, i.e., the Dice similarity coefficient (DSC), the percentage error (PE) and the Hausdorff distance (HD), were employed to measure the agreement between the propagated contours and the physician-delineated ground truths of four OARs: the vertebra (VTB), the vertebral foramen (VF), the parotid gland (PG) and the submandibular gland (SMG). It was found that the DIRs evaluated in this work did not necessarily outperform rigid registration. DIR performed better for bony structures than for soft-tissue organs, and DIR performance tended to vary for different ROIs with different degrees of deformation as the treatment proceeded. Generally, the optical flow-based DIR performed best, while the demons-based DIR usually ranked last, except for a modified demons-based DISC used for CT-CBCT DIR. These experimental results suggest that the choice of a specific DIR algorithm depends on the image modality, anatomic site, magnitude of deformation and application. Therefore, careful examination and modification are required before accepting auto-propagated contours, especially for automatic re-planning ART systems.

  13. Implementations of PI-line based FBP and BPF algorithms on GPGPU

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Le [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Xing, Yuxiang [Tsinghua Univ., Beijing (China). Dept. of Engineering Physics; Ministry of Education, Beijing (China). Key Lab. of Particle and Radiation Imaging

    2011-07-01

    Exact reconstruction is under the spotlight in cone-beam CT. Katsevich put forward the first exact inversion formula for helical cone-beam CT, which is of the FBP type. Pan Xiaochuan's group proposed another PI-line-based exact reconstruction algorithm, of the BPF type. These two exact reconstruction algorithms and their derivative forms have been widely studied. In this paper, we present a different way of selecting PI-line segments appropriate for both Katsevich's FBP and Pan Xiaochuan's BPF algorithms. As 3D reconstruction involves massive computation and takes a long time, efforts have been made to speed up the algorithms with the help of multi-core CPUs and GPGPUs (general-purpose graphics processing units). In this paper, we also present implementations of these two algorithms on a GPGPU using an innovative way of selecting PI-line segments. Acceleration techniques and implementations are addressed in detail. The methods are tested on the Shepp-Logan phantom. Compared with our CPU implementations, the accelerated algorithms on the GPGPU are tens to hundreds of times faster. (orig.)

  14. How does PET/CT help in selecting therapy for patients with Hodgkin lymphoma?

    DEFF Research Database (Denmark)

    Hutchings, Martin

    2012-01-01

    investigating the use of PET/CT for early response-adapted therapy, with therapeutic stratification based on interim PET/CT results. Posttreatment PET/CT is a cornerstone of the revised response criteria and enables the selection of advanced-stage patients without the need for consolidation radiotherapy. Once...

  15. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the shortest-path-based algorithm does. In this work, we studied virus spreading in a complex network using the efficient-path and the global dynamic routing algorithms, compared with the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeds in eradicating the virus better than a pure random intervention under the performant routing algorithm strategies.
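
    A selective vaccination step can be sketched as immunizing the topologically most important nodes rather than random ones. The centrality measure below is an assumed choice for illustration (the paper's exact selection criterion may differ).

```python
import networkx as nx

def selective_vaccination(G, fraction=0.05, by="betweenness"):
    """Remove (vaccinate) the top `fraction` of nodes ranked by centrality,
    so they can no longer relay traffic-borne infections."""
    if by == "betweenness":
        score = nx.betweenness_centrality(G)
    else:                                    # cheaper alternative: degree
        score = dict(G.degree())
    n = max(1, int(fraction * G.number_of_nodes()))
    vaccinated = sorted(score, key=score.get, reverse=True)[:n]
    H = G.copy()
    H.remove_nodes_from(vaccinated)
    return H, vaccinated
```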

  16. Validation of algorithm used for location of electrodes in CT images

    International Nuclear Information System (INIS)

    Bustos, J; Graffigna, J P; Isoardi, R; Gómez, M E; Romo, R

    2013-01-01

    A noninvasive technique has been implemented to detect and delineate the focus of electric discharge in patients with monofocal epilepsy. For the detection of these sources, an electroencephalogram (EEG) recorded with a 128-electrode cap is used. With the EEG data and the electrode positions, it is possible to locate this focus on MR volumes. The technique locates the electrodes on CT volumes using image processing algorithms to obtain descriptors of the electrodes, such as the centroid, which determines their position in space. Finally, these points are transformed into the coordinate space of the MR through a registration, for better understanding by the physician. Due to the medical implications of this technique, it is of utmost importance to validate the results of the detection of the electrode coordinates. To that end, this paper presents a comparison between the actual values measured physically (measures including electrode size and spatial location) and the values obtained by processing the CT and MR images.

  17. Fully automatic algorithm for segmenting full human diaphragm in non-contrast CT Images

    Science.gov (United States)

    Karami, Elham; Gaede, Stewart; Lee, Ting-Yim; Samani, Abbas

    2015-03-01

    The diaphragm is a sheet of muscle which separates the thorax from the abdomen, and it acts as the most important muscle of the respiratory system. As such, an accurate segmentation of the diaphragm not only provides key information for functional analysis of the respiratory system, but can also be used for locating other abdominal organs such as the liver. However, diaphragm segmentation is extremely challenging in non-contrast CT images due to the diaphragm's similar appearance to other abdominal organs. In this paper, we present a fully automatic algorithm for diaphragm segmentation in non-contrast CT images. The method is mainly based on a priori knowledge of human diaphragm anatomy. The diaphragm domes are in contact with the lungs and the heart, while its circumference runs along the lumbar vertebrae of the spine as well as the inferior border of the ribs and sternum. As such, the diaphragm can be delineated by segmenting these organs and then connecting the relevant parts of their outlines properly. More specifically, the bottom surfaces of the lungs and heart, the spine borders and the ribs are delineated, leading to a set of scattered points which represent the diaphragm's geometry. Next, a B-spline filter is used to find the smoothest surface which passes through these points. This algorithm was tested on a non-contrast CT image of a lung cancer patient. The results indicate an average Hausdorff distance of 2.96 mm between the automatically and manually segmented diaphragms, which implies favourable accuracy.

  18. A Two-Pass Exact Algorithm for Selection on Parallel Disk Systems.

    Science.gov (United States)

    Mi, Tian; Rajasekaran, Sanguthevar

    2013-07-01

    Numerous OLAP queries process selection operations such as "top N", median, and "top 5%" in data warehousing applications. Selection is a well-studied problem that has numerous applications in the management of data and databases since, typically, any complex data query can be reduced to a series of basic operations such as sorting and selection. Parallel selection has also become an important fundamental operation, especially after parallel databases were introduced. In this paper, we present a deterministic algorithm, Recursive Sampling Selection (RSS), to solve the exact out-of-core selection problem, which we show needs no more than (2 + ε) passes (ε being a very small fraction). We have compared our RSS algorithm with two other algorithms in the literature, namely Deterministic Sampling Selection (DSS) and QuickSelect on parallel disk systems. Our analysis shows that DSS is a (2 + ε)-pass algorithm when the total number of input elements N is a polynomial in the memory size M (i.e., N = M^c for some constant c), while our proposed algorithm RSS runs in (2 + ε) passes without any such assumption. Experimental results indicate that both RSS and DSS outperform QuickSelect on parallel disk systems. In particular, the proposed algorithm RSS is more scalable and robust in handling big data when the input size is far greater than the core memory size, including the case of N ≫ M^c.
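
    The flavor of a two-pass, sampling-based out-of-core selection can be sketched as follows. This is a simplified illustration of the DSS/RSS idea (the bracketing slack and the recursive retry on failure are glossed over), not the paper's exact algorithm; `stream_passes` is a hypothetical callable yielding one fresh pass over the data per call.

```python
import random

def sample_select(stream_passes, k, sample_rate=0.01):
    """Return the k-th smallest element (1-indexed) of a huge dataset using
    two sequential passes and a memory-sized random sample."""
    # Pass 1: count elements and draw a random sample
    n, sample = 0, []
    for x in stream_passes():
        n += 1
        if random.random() < sample_rate:
            sample.append(x)
    sample.sort()
    pos = k / n * len(sample)
    slack = 3 * len(sample) ** 0.5          # bracketing margin
    lo = sample[max(0, int(pos - slack))]
    hi = sample[min(len(sample) - 1, int(pos + slack))]
    # Pass 2: keep only the elements falling inside the bracket
    below, kept = 0, []
    for x in stream_passes():
        if x < lo:
            below += 1
        elif x <= hi:
            kept.append(x)
    kept.sort()                             # fits in memory with high probability
    return kept[k - below - 1]
```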

  19. Automatic spectral imaging protocol selection and iterative reconstruction in abdominal CT with reduced contrast agent dose: initial experience

    Energy Technology Data Exchange (ETDEWEB)

    Lv, Peijie; Liu, Jie; Chai, Yaru; Yan, Xiaopeng; Gao, Jianbo; Dong, Junqiang [The First Affiliated Hospital of Zhengzhou University, Department of Radiology, Zhengzhou, Henan Province (China)

    2017-01-15

    To evaluate the feasibility, image quality, and radiation dose of automatic spectral imaging protocol selection (ASIS) and adaptive statistical iterative reconstruction (ASIR) with reduced contrast agent dose in abdominal multiphase CT. One hundred and sixty patients were randomly divided into two scan protocols (n = 80 each): protocol A, 120 kVp at 450 mgI/kg with the filtered back projection (FBP) algorithm; protocol B, spectral CT imaging with ASIS, 40 to 70 keV monochromatic images generated at 300 mgI/kg, and the ASIR algorithm. Quantitative parameters (image noise and contrast-to-noise ratios [CNRs]) and qualitative visual parameters (image noise, small structures, organ enhancement, and overall image quality) were compared. Monochromatic images at 50 keV and 60 keV provided similar or lower image noise, but higher contrast and overall image quality, compared with the 120-kVp images. Despite the higher image noise, 40-keV images showed overall image quality similar to the 120-kVp images. Radiation dose did not differ between the two protocols, while the contrast agent dose in protocol B was reduced by 33 %. Application of ASIR and ASIS to monochromatic imaging from 40 to 60 keV allowed contrast agent dose reduction with adequate image quality and without increased radiation dose compared to 120 kVp with FBP. (orig.)

  20. Automatic spectral imaging protocol selection and iterative reconstruction in abdominal CT with reduced contrast agent dose: initial experience

    International Nuclear Information System (INIS)

    Lv, Peijie; Liu, Jie; Chai, Yaru; Yan, Xiaopeng; Gao, Jianbo; Dong, Junqiang

    2017-01-01

    To evaluate the feasibility, image quality, and radiation dose of automatic spectral imaging protocol selection (ASIS) and adaptive statistical iterative reconstruction (ASIR) with reduced contrast agent dose in abdominal multiphase CT. One hundred and sixty patients were randomly divided into two scan protocols (n = 80 each): protocol A, 120 kVp at 450 mgI/kg with the filtered back projection (FBP) algorithm; protocol B, spectral CT imaging with ASIS, 40 to 70 keV monochromatic images generated at 300 mgI/kg, and the ASIR algorithm. Quantitative parameters (image noise and contrast-to-noise ratios [CNRs]) and qualitative visual parameters (image noise, small structures, organ enhancement, and overall image quality) were compared. Monochromatic images at 50 keV and 60 keV provided similar or lower image noise, but higher contrast and overall image quality, compared with the 120-kVp images. Despite the higher image noise, 40-keV images showed overall image quality similar to the 120-kVp images. Radiation dose did not differ between the two protocols, while the contrast agent dose in protocol B was reduced by 33 %. Application of ASIR and ASIS to monochromatic imaging from 40 to 60 keV allowed contrast agent dose reduction with adequate image quality and without increased radiation dose compared to 120 kVp with FBP. (orig.)

  1. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS

    Directory of Open Access Journals (Sweden)

    Qiang Su

    2017-01-01

    Full Text Available Background. To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm first selects distinguished genes using the K-S test and then uses CFS to select genes from those selected by the K-S test. Results. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of these gene selection algorithms on 5 gene expression datasets demonstrate that, in terms of accuracy, the performance of the new K-S and CFS-based algorithm is better than that of the K-S test, CFS, mRMR, and ReliefF algorithms. Conclusions. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
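
    A two-stage sketch of the idea, K-S filtering followed by a greedy CFS-style search, is shown below. The merit function is the standard CFS formula, but the code is an illustrative simplification, not the paper's implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_filter(X, y, alpha=0.01):
    """Stage 1: keep genes whose expression distributions differ between
    the two classes according to a two-sample K-S test."""
    return [j for j in range(X.shape[1])
            if ks_2samp(X[y == 0, j], X[y == 1, j]).pvalue < alpha]

def cfs_greedy(X, y, candidates, n_final=50):
    """Stage 2: greedily grow a subset maximizing the CFS merit
    k*rcf / sqrt(k + k(k-1)*rff)."""
    selected = []
    def merit(j):
        subset = selected + [j]
        k = len(subset)
        rcf = np.mean([abs(np.corrcoef(X[:, s], y)[0, 1]) for s in subset])
        pairs = [(a, b) for i, a in enumerate(subset) for b in subset[i + 1:]]
        rff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                       for a, b in pairs]) if pairs else 0.0
        return k * rcf / np.sqrt(k + k * (k - 1) * rff)
    candidates = list(candidates)
    while candidates and len(selected) < n_final:
        best = max(candidates, key=merit)
        selected.append(best)
        candidates.remove(best)
    return selected
```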

  2. Adaptive Equalizer Using Selective Partial Update Algorithm and Selective Regressor Affine Projection Algorithm over Shallow Water Acoustic Channels

    Directory of Open Access Journals (Sweden)

    Masoumeh Soflaei

    2014-01-01

    Full Text Available One of the most important problems for reliable communication in shallow water channels is intersymbol interference (ISI), which is due to scattering from the surface and reflection from the bottom. Using adaptive equalizers in the receiver is one of the best-known ways of overcoming this problem. In this paper, we apply the family of selective regressor affine projection algorithms (SR-APA) and the family of selective partial update APA (SPU-APA), which have low computational complexity, an important factor influencing adaptive equalizer performance. We apply experimental data from the Strait of Hormuz to examine the efficiency of the proposed methods over a shallow water channel. We observe that the steady-state mean square error (MSE) values of SR-APA and SPU-APA decrease by 5.8 dB and 5.5 dB, respectively, in comparison with the least mean square (LMS) algorithm. The SPU-APA and SR-APA families also have better convergence speed than LMS-type algorithms.

  3. Filtered backprojection proton CT reconstruction along most likely paths

    Energy Technology Data Exchange (ETDEWEB)

    Rit, Simon; Dedes, George; Freud, Nicolas; Sarrut, David; Letang, Jean Michel [Universite de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Universite Lyon 1, Centre Leon Berard, 69008 Lyon (France)

    2013-03-15

    Purpose: Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods: The energy loss of protons is first binned in several proton radiographs at different distances to the proton source to exploit the depth dependency of the estimate of the most likely path. This process is named distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate into the pCT image the best achievable spatial resolution in the proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms. Results: The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with depth in the scanned object, but it was always better than with previous FBP algorithms assuming straight-line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7-1.6 mm, compared to 1.0-2.4 mm at best with a straight-line path assumption. The improvement was strongly enhanced in shorter 200° scans. Conclusions: Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution, combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms, makes this new algorithm a candidate of choice for clinical pCT.

  4. Enhancement of Selection, Bubble and Insertion Sorting Algorithm

    OpenAIRE

    Muhammad Farooq Umar; Ehsan Ullah Munir; Shafqat Ali Shad; Muhammad Wasif Nisar

    2014-01-01

    In everyday life there is a large amount of data to arrange; sorting removes ambiguities and makes data analysis and data processing easy and efficient, at little cost and effort. In this study a set of improved sorting algorithms is proposed which gives better performance and new design ideas. Five new sorting algorithms (Bi-directional Selection Sort, Bi-directional Bubble Sort, MIDBidirectional Selection Sort, MIDBidirectional Bubble Sort and linear insert...

  5. Effect of CT digital image compression on detection of coronary artery calcification

    International Nuclear Information System (INIS)

    Zheng, L.M.; Sone, S.; Itani, Y.; Wang, Q.; Hanamura, K.; Asakura, K.; Li, F.; Yang, Z.G.; Wang, J.C.; Funasaka, T.

    2000-01-01

    Purpose: To test the effect of digital compression of CT images on the detection of small linear or spotted high-attenuation lesions such as coronary artery calcification (CAC). Material and methods: Fifty cases with and 50 without CAC were randomly selected from a population that had undergone spiral CT of the thorax for lung cancer screening. CT image data were compressed using JPEG (Joint Photographic Experts Group) or wavelet algorithms at ratios of 10:1, 20:1 or 40:1. Five radiologists reviewed the uncompressed and compressed images on a cathode-ray tube. Observer performance was evaluated with receiver operating characteristic analysis. Results: CT images compressed at a ratio as high as 20:1 were acceptable for the primary diagnosis of CAC. There was no significant difference in detection accuracy for CAC between the JPEG and wavelet algorithms at compression ratios up to 20:1. CT images were more vulnerable to image blurring with wavelet compression at relatively low ratios, and 'blocking' artifacts occurred with JPEG compression at relatively high ratios. Conclusion: JPEG and wavelet algorithms allow compression of CT images without compromising their diagnostic value at ratios up to 20:1 in detecting small linear or spotted high-attenuation lesions such as CAC, and there was no difference between the two algorithms in diagnostic accuracy.

  6. Parameters selection in gene selection using Gaussian kernel support vector machines by genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    In microarray-based cancer classification, gene selection is an important issue owing to the large number of variables and the small number of samples, as well as the non-linearity of the problem. It is difficult to obtain satisfying results using conventional linear statistical methods. Recursive feature elimination based on support vector machines (SVM-RFE) is an effective algorithm for gene selection and cancer classification, which are integrated into a consistent framework. In this paper, we propose a new method to select the parameters of the aforementioned algorithm implemented with Gaussian-kernel SVMs, as a better alternative to the common practice of selecting the apparently best parameters, by using a genetic algorithm to search for a pair of optimal parameters. Fast implementation issues for this method are also discussed for pragmatic reasons. The proposed method was tested on two representative datasets, hereditary breast cancer and acute leukaemia. The experimental results indicate that the proposed method performs well in selecting genes and achieves high classification accuracy with these genes.
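
    A genetic search over the two Gaussian-kernel SVM parameters (C, gamma), scored by cross-validated accuracy, can be sketched as below; the population size, parameter ranges, and genetic operators are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def ga_svm_params(X, y, pop=20, gens=15, elite=4):
    """Evolve [log2 C, log2 gamma] pairs toward maximal CV accuracy."""
    P = rng.uniform([-5.0, -15.0], [15.0, 3.0], size=(pop, 2))
    def fitness(ind):
        clf = SVC(C=2.0 ** ind[0], gamma=2.0 ** ind[1])
        return cross_val_score(clf, X, y, cv=5).mean()
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        parents = P[np.argsort(scores)[-elite:]]            # elitism
        children = []
        while len(children) < pop - elite:
            a, b = parents[rng.integers(elite, size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)     # uniform crossover
            children.append(child + rng.normal(0, 0.5, 2))  # Gaussian mutation
        P = np.vstack([parents, children])
    best = P[int(np.argmax([fitness(ind) for ind in P]))]
    return 2.0 ** best                                      # array([C, gamma])
```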

  7. A redundancy-removing feature selection algorithm for nominal data

    Directory of Open Access Journals (Sweden)

    Zhihua Li

    2015-10-01

    Full Text Available No order correlation or similarity metric exists in nominal data, and there is typically more redundancy in a nominal dataset, which means that an efficient mutual information-based feature selection method for nominal data is relatively difficult to find. In this paper, a nominal-data feature selection method based on mutual information without data transformation, called the redundancy-removing more-relevance less-redundancy algorithm, is proposed. By forming several new information-related definitions and the corresponding computational methods, the proposed method can compute the information-related quantities of nominal data directly. Furthermore, by creating a new evaluation function that considers both relevance and redundancy globally, the new feature selection method can evaluate the importance of each nominal-data feature. Although the presented feature selection method takes the commonly used MIFS-like form, it is capable of handling high-dimensional datasets without expensive computations. We perform extensive experimental comparisons of the proposed algorithm and other methods using three benchmark nominal datasets with two different classifiers. The experimental results demonstrate the average advantage of the presented algorithm over the well-known NMIFS algorithm in terms of feature selection and classification accuracy, which indicates that the proposed method has promising performance.

  8. Effective traffic features selection algorithm for cyber-attacks samples

    Science.gov (United States)

    Li, Yihong; Liu, Fangzheng; Du, Zhenyu

    2018-05-01

    By studying defense schemes against network attacks, this paper proposes an effective traffic-feature selection algorithm based on k-means++ clustering to deal with the high dimensionality of the traffic features extracted from cyber-attack samples. First, the algorithm divides the original feature set into an attack-traffic feature set and a background-traffic feature set by clustering. Then, it calculates the variation in clustering performance after removing a certain feature. Finally, the degree of distinctiveness of each feature vector is evaluated according to this result; the effective feature vectors are those whose degree of distinctiveness exceeds a set threshold. The purpose of this paper is to select the effective features from the extracted original feature set. In this way, the dimensionality of the features can be reduced so as to reduce the space-time overhead of subsequent detection. The experimental results show that the proposed algorithm is feasible and has some advantages over other selection algorithms.
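
    A sketch of the core idea, scoring each feature by how much the clustering quality changes when it is removed, is given below; the use of the silhouette score as the clustering-performance measure is our assumption for illustration, not necessarily the paper's measure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def feature_distinctiveness(X, n_clusters=2):
    """Score each feature by the drop in clustering quality when removed."""
    def quality(data):
        labels = KMeans(n_clusters=n_clusters, init="k-means++",
                        n_init=10, random_state=0).fit_predict(data)
        return silhouette_score(data, labels)
    base = quality(X)
    return np.array([base - quality(np.delete(X, j, axis=1))
                     for j in range(X.shape[1])])

# effective features: those whose removal degrades the clustering
# effective = np.where(feature_distinctiveness(X) > threshold)[0]
```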

  9. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Clinical Evaluation.

    Science.gov (United States)

    Hyodo, Tomoko; Yada, Norihisa; Hori, Masatoshi; Maenishi, Osamu; Lamb, Peter; Sasaki, Kosuke; Onoda, Minori; Kudo, Masatoshi; Mochizuki, Teruhito; Murakami, Takamichi

    2017-04-01

    Purpose To assess the clinical accuracy and reproducibility of liver fat quantification with the multimaterial decomposition (MMD) algorithm, comparing the performance of MMD with that of magnetic resonance (MR) spectroscopy by using liver biopsy as the reference standard. Materials and Methods This prospective study was approved by the institutional ethics committee, and patients provided written informed consent. Thirty-three patients suspected of having hepatic steatosis underwent non-contrast material-enhanced and triple-phase dynamic contrast-enhanced dual-energy computed tomography (CT) (80 and 140 kVp) and single-voxel proton MR spectroscopy within 30 days before liver biopsy. Percentage fat volume fraction (FVF) images were generated by using the MMD algorithm on dual-energy CT data to measure hepatic fat content. FVFs determined by using dual-energy CT and percentage fat fractions (FFs) determined by using MR spectroscopy were compared with histologic steatosis grade (0-3, as defined by the nonalcoholic fatty liver disease activity score system) by using Jonckheere-Terpstra trend tests and were compared with each other by using Bland-Altman analysis. Real non-contrast-enhanced FVFs were compared with triple-phase contrast-enhanced FVFs to determine the reproducibility of MMD by using Bland-Altman analyses. Results Both dual-energy CT FVF and MR spectroscopy FF increased with increasing histologic steatosis grade (trend test). Conclusion The MMD algorithm quantifying hepatic fat in dual-energy CT images is accurate and reproducible across imaging phases. © RSNA, 2017 Online supplemental material is available for this article.

  10. Super resolution reconstruction of μ-CT image of rock sample using neighbour embedding algorithm

    Science.gov (United States)

    Wang, Yuzhu; Rahman, Sheik S.; Arns, Christoph H.

    2018-03-01

    X-ray computed micro-tomography (μ-CT) is considered the most effective way to obtain the inner structure of a rock sample without destruction. However, its limited resolution hampers its ability to probe sub-micron structures, which are critical for flow transport in rock samples. In this study, we propose an innovative methodology to improve the resolution of μ-CT images using a neighbour embedding algorithm, where the low-frequency information is provided by the μ-CT image itself while the high-frequency information is supplemented by a high-resolution scanning electron microscopy (SEM) image. To obtain a prior for reconstruction, a large number of image patch pairs, each containing a high- and a low-resolution patch, are extracted from the Gaussian image pyramid generated from the SEM image. These image patch pairs contain abundant information about the tomographic evolution of local porous structures across different resolution spaces. Relying on the assumption of self-similarity of the porous structure, this prior information can be used to supervise the reconstruction of the high-resolution μ-CT image effectively. The experimental results show that the proposed method is able to achieve state-of-the-art performance.
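
    The neighbour-embedding step, estimating each high-resolution patch as the locally linear combination of dictionary patches that best reconstructs the input low-resolution patch, can be sketched as follows (a generic Chang-style formulation; the construction of the dictionaries from the SEM pyramid is omitted):

```python
import numpy as np

def ne_super_resolve_patch(lr_patch, lr_dict, hr_dict, k=5, reg=1e-6):
    """Reconstruct one HR patch from an LR patch via neighbour embedding.
    lr_dict/hr_dict: matching rows of flattened LR/HR training patches."""
    d2 = ((lr_dict - lr_patch) ** 2).sum(axis=1)
    nn = np.argsort(d2)[:k]                 # k nearest LR dictionary patches
    Z = lr_dict[nn] - lr_patch              # centred neighbourhood
    G = Z @ Z.T + reg * np.eye(k)           # regularized local Gram matrix
    w = np.linalg.solve(G, np.ones(k))
    w /= w.sum()                            # LLE reconstruction weights
    return w @ hr_dict[nn]                  # transfer the weights to HR space
```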

  11. Selection method of terrain matching area for TERCOM algorithm

    Science.gov (United States)

    Zhang, Qieqie; Zhao, Long

    2017-10-01

    The performance of terrain-aided navigation is closely related to the selection of the terrain matching area, and different matching algorithms have different adaptability to terrain. This paper mainly studies the adaptability to terrain of the TERCOM algorithm, analyzes the relation between terrain features and terrain characteristic parameters by qualitative and quantitative methods, and then investigates the relation between matching probability and terrain characteristic parameters by the Monte Carlo method. After that, we propose a selection method of terrain matching area for the TERCOM algorithm and verify the correctness of the method with real terrain data by simulation experiment. Experimental results show that the matching area obtained by the proposed method yields good navigation performance and that the matching probability of the TERCOM algorithm is greater than 90%.
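
    The TERCOM matching step itself compares a measured along-track elevation profile with profiles stored in a digital elevation map and picks the candidate position with the smallest mean absolute difference; a minimal sketch (the data layout is hypothetical):

```python
import numpy as np

def tercom_fix(measured_profile, dem, candidate_tracks):
    """Pick the candidate track (a list of (row, col) DEM indices) whose
    stored elevation profile has minimum MAD against the measured profile."""
    def mad(track):
        stored = np.array([dem[i, j] for i, j in track])
        return np.mean(np.abs(stored - measured_profile))
    return min(candidate_tracks, key=mad)
```

    Matching-area selection then amounts to preferring regions whose terrain characteristic parameters (e.g., elevation variance and roughness) make this minimum sharp and unambiguous.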

  12. An algorithm for preferential selection of spectroscopic targets in LEGUE

    International Nuclear Information System (INIS)

    Carlin, Jeffrey L.; Newberg, Heidi Jo; Lépine, Sébastien; Deng Licai; Chen Yuqin; Fu Xiaoting; Gao Shuang; Li Jing; Liu Chao; Beers, Timothy C.; Christlieb, Norbert; Grillmair, Carl J.; Guhathakurta, Puragra; Han Zhanwen; Hou Jinliang; Lee, Hsu-Tai; Liu Xiaowei; Pan Kaike; Sellwood, J. A.; Wang Hongchi

    2012-01-01

    We describe a general target selection algorithm that is applicable to any survey in which the number of available candidates is much larger than the number of objects to be observed. This routine aims to achieve a balance between a smoothly-varying, well-understood selection function and the desire to preferentially select certain types of targets. Some target-selection examples are shown that illustrate different possibilities of emphasis functions. Although it is generally applicable, the algorithm was developed specifically for the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE) survey that will be carried out using the Chinese Guo Shou Jing Telescope. In particular, this algorithm was designed for the portion of LEGUE targeting the Galactic halo, in which we attempt to balance a variety of science goals that require stars at fainter magnitudes than can be completely sampled by LAMOST. This algorithm has been implemented for the halo portion of the LAMOST pilot survey, which began in October 2011.

  13. Liver Segmentation Based on Snakes Model and Improved GrowCut Algorithm in Abdominal CT Image

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2013-01-01

    Full Text Available A novel method based on the Snakes model and the GrowCut algorithm is proposed to segment the liver region in abdominal CT images. First, building on the traditional GrowCut method, a pretreatment step using the K-means algorithm is conducted to reduce the running time. Then, the segmentation result of our improved GrowCut approach is used as an initial contour for subsequent precise segmentation based on the Snakes model. Finally, several experiments are carried out to demonstrate the performance of the proposed approach, including comparisons with the traditional GrowCut algorithm. Experimental results show that the improved approach not only has better robustness and precision but is also more efficient than the traditional GrowCut method.

  14. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Solomon, Justin, E-mail: justin.solomon@duke.edu [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Samei, Ehsan [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 and Departments of Biomedical Engineering and Electrical and Computer Engineering, Pratt School of Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2014-09-15

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were obtained for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogeneous regions. For pixels in uniform regions, noise magnitude was
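
    For readers who want to reproduce the noise measurements, the standard ensemble NPS estimator from repeated acquisitions can be sketched as follows; this follows the common textbook definition, not necessarily the exact implementation of the study, and variable names are illustrative.

    ```python
    import numpy as np

    def nps_2d(rois, pixel_mm):
        """2D noise power spectrum from repeated acquisitions of one ROI.

        rois     : array of shape (n_repeats, ny, nx), same location in every scan
        pixel_mm : pixel size in mm (square pixels assumed)
        Returns the NPS in HU^2 mm^2 plus frequency axes in mm^-1.
        """
        rois = np.asarray(rois, dtype=float)
        n, ny, nx = rois.shape
        noise = rois - rois.mean(axis=0)            # noise-only images
        dft = np.fft.fft2(noise, axes=(-2, -1))
        nps = (pixel_mm ** 2 / (nx * ny)) * (np.abs(dft) ** 2).mean(axis=0)
        nps *= n / (n - 1)                          # correct for mean subtraction
        fx = np.fft.fftfreq(nx, d=pixel_mm)
        fy = np.fft.fftfreq(ny, d=pixel_mm)
        return np.fft.fftshift(nps), np.fft.fftshift(fx), np.fft.fftshift(fy)
    ```

    Mapping the ROI standard deviation across the image gives the spatial distribution of noise magnitude used here to judge stationarity.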

  15. Quantum noise properties of CT images with anatomical textured backgrounds across reconstruction algorithms: FBP and SAFIRE

    International Nuclear Information System (INIS)

    Solomon, Justin; Samei, Ehsan

    2014-01-01

    Purpose: Quantum noise properties of CT images are generally assessed using simple geometric phantoms with uniform backgrounds. Such phantoms may be inadequate when assessing nonlinear reconstruction or postprocessing algorithms. The purpose of this study was to design anatomically informed textured phantoms and use the phantoms to assess quantum noise properties across two clinically available reconstruction algorithms, filtered back projection (FBP) and sinogram affirmed iterative reconstruction (SAFIRE). Methods: Two phantoms were designed to represent lung and soft-tissue textures. The lung phantom included intricate vessel-like structures along with embedded nodules (spherical, lobulated, and spiculated). The soft tissue phantom was designed based on a three-dimensional clustered lumpy background with included low-contrast lesions (spherical and anthropomorphic). The phantoms were built using rapid prototyping (3D printing) technology and, along with a uniform phantom of similar size, were imaged on a Siemens SOMATOM Definition Flash CT scanner and reconstructed with FBP and SAFIRE. Fifty repeated acquisitions were obtained for each background type and noise was assessed by estimating pixel-value statistics, such as standard deviation (i.e., noise magnitude), autocorrelation, and noise power spectrum. Noise stationarity was also assessed by examining the spatial distribution of noise magnitude. The noise properties were compared across background types and between the two reconstruction algorithms. Results: In FBP and SAFIRE images, noise was globally nonstationary for all phantoms. In FBP images of all phantoms, and in SAFIRE images of the uniform phantom, noise appeared to be locally stationary (within a reasonably small region of interest). Noise was locally nonstationary in SAFIRE images of the textured phantoms with edge pixels showing higher noise magnitude compared to pixels in more homogeneous regions. For pixels in uniform regions, noise magnitude was

  16. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ross; Wierzbicki, Marcin [McMaster University / National Guard Health Affairs, Radiation Oncology Department, Riyadh, Saudi Arabia, McMaster University / Juravinski Cancer Centre, McMaster University / Juravinski Cancer Centre, McMaster University / Juravinski Cancer Centre (Saudi Arabia)

    2016-08-15

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters indicating overall prostate and body shape were measured on CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established by brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.
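
    The similarity score described above reduces to a weighted distance in the space of anatomical measurements. A minimal sketch follows, with hypothetical feature scaling and weights (the paper presumably tuned its weights against the brute-force pairings):

    ```python
    import numpy as np

    def best_atlas(target_feats, atlas_feats, weights):
        """Pick the atlas whose anatomical measurements best match the target.

        target_feats : (n_feats,) measurements of the target image
        atlas_feats  : (n_atlases, n_feats) measurements of each atlas candidate
        weights      : (n_feats,) relative importance of each measurement
        The score is a weighted sum of absolute differences; lower is better.
        """
        scale = atlas_feats.std(axis=0) + 1e-12   # put features on comparable scales
        diffs = np.abs(atlas_feats - target_feats) / scale
        scores = diffs @ weights
        return int(np.argmin(scores)), scores
    ```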

  17. Poster - 32: Atlas Selection for Automated Segmentation of Pelvic CT for Prostate Radiotherapy

    International Nuclear Information System (INIS)

    Mallawi, Abrar; Farrell, Tom; Diamond, Kevin-Ross; Wierzbicki, Marcin

    2016-01-01

    Atlas-based segmentation has recently been evaluated for use in prostate radiotherapy. In a typical approach, the essential step is the selection of an atlas from a database that best matches the target image. This work proposes an atlas selection strategy and evaluates its impact on final segmentation accuracy. Several anatomical parameters indicating overall prostate and body shape were measured on CT images. A brute-force procedure was first performed for a training dataset of 20 patients, using image registration to pair subjects with similar contours; each subject served as a target image to which all remaining 19 images were affinely registered. The overlap between the prostate and femoral heads was quantified for each pair using the Dice Similarity Coefficient (DSC). Finally, an atlas selection procedure was designed, relying on the computation of a similarity score defined as a weighted sum of differences between the target and atlas anatomical measurements. The algorithm's ability to predict the most similar atlas was excellent, achieving mean DSCs of 0.78 ± 0.07 and 0.90 ± 0.02 for the CTV and either femoral head. The proposed atlas selection yielded 0.72 ± 0.11 and 0.87 ± 0.03 for the CTV and either femoral head. The DSCs obtained with the proposed selection method were slightly lower than the maximum established by brute force, but this does not include potential improvements expected with deformable registration. The proposed atlas selection method provides reasonable segmentation accuracy.

  18. Metal artifact reduction image reconstruction algorithm for CT of implanted metal orthopedic devices: a work in progress

    International Nuclear Information System (INIS)

    Liu, Patrick T.; Pavlicek, William P.; Peter, Mary B.; Roberts, Catherine C.; Paden, Robert G.; Spangehl, Mark J.

    2009-01-01

    Despite recent advances in CT technology, metal orthopedic implants continue to cause significant artifacts on many CT exams, often obscuring diagnostic information. We performed this prospective study to evaluate the effectiveness of an experimental metal artifact reduction (MAR) image reconstruction program for CT. We examined image quality on CT exams performed in patients with hip arthroplasties as well as other types of implanted metal orthopedic devices. The exam raw data were reconstructed using two different methods, the standard filtered backprojection (FBP) program and the MAR program. Images were evaluated for quality of the metal-cement-bone interfaces, trabeculae ≤1 cm from the metal, trabeculae 5 cm away from the metal, streak artifact, and overall soft tissue detail. The Wilcoxon Rank Sum test was used to compare the image scores for the large and small prostheses. Interobserver agreement was calculated. When all patients were grouped together, the MAR images showed mild to moderate improvement over the FBP images. However, when the cases were divided by implant size, the MAR images consistently received higher image quality scores than the FBP images for large metal implants (total hip prostheses). For small metal implants (screws, plates, staples), conversely, the MAR images received lower image quality scores than the FBP images due to blurring artifacts. The difference in image scores between the large and small implants was significant (p=0.002). Interobserver agreement was found to be high for all measures of image quality (k>0.9). The experimental MAR reconstruction algorithm significantly improved CT image quality for patients with large metal implants. However, the MAR algorithm introduced blurring artifacts that reduced image quality with small metal implants. (orig.)

  19. An evolutionary algorithm for model selection

    Energy Technology Data Exchange (ETDEWEB)

    Bicker, Karl [CERN, Geneva (Switzerland); Chung, Suh-Urk; Friedrich, Jan; Grube, Boris; Haas, Florian; Ketzer, Bernhard; Neubert, Sebastian; Paul, Stephan; Ryabchikov, Dimitry [Technische Univ. Muenchen (Germany)

    2013-07-01

    When performing partial-wave analyses of multi-body final states, the choice of the fit model, i.e. the set of waves to be used in the fit, can significantly alter the results of the partial-wave fit. Traditionally, models were chosen based on physical arguments and by observing the changes in log-likelihood of the fits. To reduce possible bias in the model selection process, an evolutionary algorithm was developed based on a Bayesian goodness-of-fit criterion which takes the model complexity into account. Starting from systematically constructed pools of waves which contain significantly more waves than the typical fit model, the algorithm yields a model with an optimal log-likelihood and with a number of partial waves appropriate for the number of events in the data. Partial waves with small contributions to the total intensity are penalized and likely to be dropped during the selection process, as are models in which excessive correlations between single waves occur. Due to the automated nature of the model selection, a much larger part of the model space can be explored than would be possible in a manual selection. In addition, the method makes it possible to assess the dependence of the fit result on the fit model, which is an important contribution to the systematic uncertainty.
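
    The key ingredient is a goodness-of-fit criterion that trades log-likelihood against the number of waves. A minimal sketch of such a criterion with a generic (mu+lambda) evolution loop is given below; the authors' actual Bayesian criterion and variation operators are not reproduced, and fit_loglik stands in for the full partial-wave fit.

    ```python
    import numpy as np

    def penalized_score(loglik, n_waves, n_events):
        """BIC-like criterion: better likelihood, penalised model complexity."""
        return -2.0 * loglik + n_waves * np.log(n_events)

    def evolve(pool_size, fit_loglik, n_events, pop=20, gens=50, p_flip=0.05, rng=None):
        """Generic (mu+lambda) evolutionary search over wave sets.

        Individuals are boolean masks over the wave pool; fit_loglik(mask)
        must return the log-likelihood of the partial-wave fit for that model.
        """
        rng = np.random.default_rng(rng)
        population = rng.random((pop, pool_size)) < 0.2      # sparse initial models
        for _ in range(gens):
            children = population ^ (rng.random(population.shape) < p_flip)
            everyone = np.vstack([population, children])
            scores = np.array([penalized_score(fit_loglik(m), m.sum(), n_events)
                               if m.any() else np.inf for m in everyone])
            population = everyone[np.argsort(scores)[:pop]]  # keep the best mu
        return population[0]                                 # best surviving model
    ```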

  20. Tag SNP selection via a genetic algorithm.

    Science.gov (United States)

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for human complex diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, algorithms that computationally construct full haplotype patterns from the small amount of available data (the tag SNP selection problem) are convenient and attractive. This problem is proven to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find reasonable solutions within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.

  1. Effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low dose CT colonography

    International Nuclear Information System (INIS)

    Lee, Eun Sun; Kim, Se Hyung; Im, Jong Pil; Kim, Sang Gyun; Shin, Cheong-il; Han, Joon Koo; Choi, Byung Ihn

    2015-01-01

    Highlights: •We assessed the effect of reconstruction algorithms on CAD in ultra-low dose CTC. •30 patients underwent ultra-low dose CTC using 120 and 100 kVp with 10 mAs. •CT was reconstructed with FBP, ASiR and Veo and then, we applied a CAD system. •Per-polyp sensitivity of CAD in ULD CT can be improved with the IR algorithms. •Despite an increase in the number of FPs with IR, the FP rate was still acceptable. -- Abstract: Purpose: To assess the effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low-dose CT colonography (ULD CTC). Materials and methods: IRB approval and informed consents were obtained. Thirty prospectively enrolled patients underwent non-contrast CTC at 120 kVp/10 mAs in supine and 100 kVp/10 mAs in prone positions, followed by same-day colonoscopy. Images were reconstructed with filtered back projection (FBP), 80% adaptive statistical iterative reconstruction (ASIR80), and model-based iterative reconstruction (MBIR). A commercial CAD system was applied and per-polyp sensitivities and numbers of false-positives (FPs) were compared among algorithms. Results: Mean effective radiation dose of CTC was 1.02 mSv. Of 101 polyps detected and removed by colonoscopy, 61 polyps were detected on supine and on prone CTC datasets on consensus unblinded review, resulting in 122 visible polyps (32 polyps <6 mm, 52 polyps 6–9.9 mm, and 38 polyps ≥10 mm). Per-polyp sensitivity of CAD for all polyps was highest with MBIR (56/122, 45.9%), followed by ASIR80 (54/122, 44.3%) and FBP (43/122, 35.2%), with significant differences between FBP and IR algorithms (P < 0.017). Per-polyp sensitivity for polyps ≥10 mm was also higher with MBIR (25/38, 65.8%) and ASIR80 (24/38, 63.2%) than with FBP (20/38, 58.8%), albeit without statistical significance (P > 0.017). Mean number of FPs was significantly different among algorithms (FBP, 1.4; ASIR, 2.1; MBIR, 2.4) (P = 0.011). Conclusion: Although the performance of stand-alone CAD

  2. Effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low dose CT colonography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Sun [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Kim, Se Hyung, E-mail: shkim7071@gmail.com [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of); Im, Jong Pil; Kim, Sang Gyun [Department of Internal Medicine, Seoul National University Hospital (Korea, Republic of); Shin, Cheong-il; Han, Joon Koo; Choi, Byung Ihn [Department of Radiology, Seoul National University Hospital (Korea, Republic of); Institute of Radiation Medicine, Seoul National University Hospital (Korea, Republic of)

    2015-04-15

    Highlights: •We assessed the effect of reconstruction algorithms on CAD in ultra-low dose CTC. •30 patients underwent ultra-low dose CTC using 120 and 100 kVp with 10 mAs. •CT was reconstructed with FBP, ASiR and Veo and then, we applied a CAD system. •Per-polyp sensitivity of CAD in ULD CT can be improved with the IR algorithms. •Despite an increase in the number of FPs with IR, the FP rate was still acceptable. -- Abstract: Purpose: To assess the effect of different reconstruction algorithms on computer-aided diagnosis (CAD) performance in ultra-low-dose CT colonography (ULD CTC). Materials and methods: IRB approval and informed consents were obtained. Thirty prospectively enrolled patients underwent non-contrast CTC at 120 kVp/10 mAs in supine and 100 kVp/10 mAs in prone positions, followed by same-day colonoscopy. Images were reconstructed with filtered back projection (FBP), 80% adaptive statistical iterative reconstruction (ASIR80), and model-based iterative reconstruction (MBIR). A commercial CAD system was applied and per-polyp sensitivities and numbers of false-positives (FPs) were compared among algorithms. Results: Mean effective radiation dose of CTC was 1.02 mSv. Of 101 polyps detected and removed by colonoscopy, 61 polyps were detected on supine and on prone CTC datasets on consensus unblinded review, resulting in 122 visible polyps (32 polyps <6 mm, 52 polyps 6–9.9 mm, and 38 polyps ≥10 mm). Per-polyp sensitivity of CAD for all polyps was highest with MBIR (56/122, 45.9%), followed by ASIR80 (54/122, 44.3%) and FBP (43/122, 35.2%), with significant differences between FBP and IR algorithms (P < 0.017). Per-polyp sensitivity for polyps ≥10 mm was also higher with MBIR (25/38, 65.8%) and ASIR80 (24/38, 63.2%) than with FBP (20/38, 58.8%), albeit without statistical significance (P > 0.017). Mean number of FPs was significantly different among algorithms (FBP, 1.4; ASIR, 2.1; MBIR, 2.4) (P = 0.011). Conclusion: Although the performance of stand-alone CAD

  3. Signal filtering algorithm for depth-selective diffuse optical topography

    International Nuclear Information System (INIS)

    Fujii, M; Nakayama, K

    2009-01-01

    A compact filtered backprojection algorithm that suppresses the undesirable effects of skin circulation for near-infrared diffuse optical topography is proposed. Our approach centers around a depth-selective filtering algorithm that uses an inverse problem technique and extracts target signals from observation data contaminated by noise from a shallow region. The filtering algorithm is reduced to a compact matrix and is therefore easily incorporated into a real-time system. To demonstrate the validity of this method, we developed a demonstration prototype for depth-selective diffuse optical topography and performed both computer simulations and phantom experiments. The results show that the proposed method significantly suppresses the noise from the shallow region with a minimal degradation of the target signal.

  4. SU-E-J-119: Head-And-Neck Digital Phantoms for Geometric and Dosimetric Uncertainty Evaluation of CT-CBCT Deformable Image Registration

    International Nuclear Information System (INIS)

    Shen, Z; Koyfman, S; Xia, P; Bzdusek, K

    2015-01-01

    Purpose: To evaluate geometric and dosimetric uncertainties of CT-CBCT deformable image registration (DIR) algorithms using digital phantoms generated from real patients. Methods: We selected ten H&N cancer patients with adaptive IMRT. For each patient, a planning CT (CT1), a replanning CT (CT2), and a pretreatment CBCT (CBCT1) were used as the basis for digital phantom creation. Manually adjusted meshes were created for selected ROIs (e.g. PTVs, brainstem, spinal cord, mandible, and parotids) on CT1 and CT2. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was applied to CBCT1 to create a simulated mid-treatment CBCT (CBCT2). The CT-CBCT digital phantom consisted of CT1 and CBCT2, which were linked by the reference DVF. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten digital phantoms. The images, ROIs, and volumetric doses were mapped from CT1 to CBCT2 using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.83 to 0.94 for Demons, from 0.82 to 0.95 for B-Spline, and from 0.67 to 0.89 for intensity-based DIR. The average Hausdorff distances for selected ROIs were from 2.4 to 6.2 mm for Demons, from 1.8 to 5.9 mm for B-Spline, and from 2.8 to 11.2 mm for intensity-based DIR. The average absolute dose errors for selected ROIs were from 0.7 to 2.1 Gy for Demons, from 0.7 to 2.9 Gy for B- Spline, and from 1.3 to 4.5 Gy for intensity-based DIR. Conclusion: Using clinically realistic CT-CBCT digital phantoms, Demons and B-Spline were shown to have similar geometric and dosimetric uncertainties while intensity-based DIR had the worst uncertainties. CT-CBCT DIR has the potential to provide accurate CBCT-based dose verification for H&N adaptive radiotherapy. Z Shen: None; K Bzdusek: an employee of Philips Healthcare; S Koyfman: None; P Xia

  5. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) while simultaneously reducing the total number of X-ray views (sparse view) is an effective means of achieving a low-dose computed tomography (CT) scan. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to the excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), under the scanning protocol of reduced X-ray tube current, has been demonstrated to achieve significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization that retains image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can not only ensure a higher signal-to-noise ratio (SNR) of the reconstructed image but also better resolution compared with the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
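
    The median prior enters as a pull of each voxel toward the median of its neighbourhood, alternated with the data-fidelity and TV steps. The sketch below shows one conceptual outer iteration; the algebraic update (e.g. SART) is left as a callback, the TV gradient uses a standard smoothed isotropic form, and all step sizes are illustrative rather than the paper's tuned values.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def tv_grad(x, eps=1e-8):
        """Gradient of a smoothed isotropic total-variation term for a 2D image."""
        dx = np.diff(x, axis=1, append=x[:, -1:])
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        return -div

    def tv_mp_step(x, sart_update, lam=0.2, beta=0.5):
        """One outer iteration of a TV + median-prior ('TV_MP'-style) scheme.

        sart_update(x) stands in for the algebraic data-fidelity update
        computed from the sparse-view projections; the auxiliary image m
        draws each voxel toward its own local median.
        """
        x = sart_update(x)               # enforce consistency with measured data
        m = median_filter(x, size=3)     # auxiliary median image
        return x - lam * tv_grad(x) - beta * (x - m)
    ```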

  6. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to significantly higher radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reducing reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.

  7. The M-OLAP Cube Selection Problem: A Hyper-polymorphic Algorithm Approach

    Science.gov (United States)

    Loureiro, Jorge; Belo, Orlando

    OLAP systems depend heavily on the materialization of multidimensional structures to speed up queries; the appropriate selection of these structures constitutes the cube selection problem. However, the recently proposed distribution of OLAP structures, which emerged to answer new globalization requirements while capturing the known advantages of distributed databases, hardens the search for solutions, especially due to the inherent heterogeneity, and imposes an extra characteristic on the algorithm that must be used: adaptability. Here the emerging concept known as hyper-heuristic can be a solution. In fact, an algorithm in which several (meta-)heuristics may be selected under the control of a higher-level heuristic has an intrinsically adaptive behavior. This paper presents a hyper-heuristic polymorphic algorithm used to solve the extended cube selection and allocation problem that arises in M-OLAP architectures.

  8. Gene selection heuristic algorithm for nutrigenomics studies.

    Science.gov (United States)

    Valour, D; Hue, I; Grimard, B; Valour, B

    2013-07-15

    Large datasets from -omics studies need to be deeply investigated. The aim of this paper is to provide a new method (the LEM method) for the search of transcriptome and metabolome connections. The heuristic algorithm described here extends the classical canonical correlation analysis (CCA) to a high number of variables (without regularization) and combines well-conditioning and fast computation in R. Reduced CCA models are summarized in PageRank matrices, the product of which gives a stochastic matrix that resumes the self-avoiding walk covered by the algorithm. A homogeneous Markov process applied to this stochastic matrix then converges to the probabilities of interconnection between genes, providing a selection of disjoint subsets of genes. This is an alternative to regularized generalized CCA for the determination of blocks within the structure matrix. Each gene subset is thus linked to the whole metabolic or clinical dataset that represents the biological phenotype of interest. Moreover, this selection process reaches the aim of biologists, who often need small sets of genes for further validation or extended phenotyping. The algorithm is shown to work efficiently on three published datasets, resulting in meaningfully broadened gene networks.
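
    The final step, a homogeneous Markov process applied to the stochastic matrix, amounts to a power iteration. A minimal sketch, assuming the row-stochastic matrix P has already been assembled from the PageRank matrices of the reduced CCA models:

    ```python
    import numpy as np

    def stationary_distribution(P, tol=1e-10, max_iter=10000):
        """Converged interconnection probabilities of a homogeneous Markov chain.

        P : (n, n) row-stochastic matrix (here, the product of the PageRank
            matrices summarising the reduced CCA models).
        """
        n = P.shape[0]
        v = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            v_next = v @ P
            if np.abs(v_next - v).sum() < tol:
                return v_next
            v = v_next
        return v
    ```

    Thresholding or clustering the converged probabilities then yields the disjoint gene subsets mentioned above.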

  9. Hybrid feature selection algorithm using symmetrical uncertainty and a harmony search algorithm

    Science.gov (United States)

    Salameh Shreem, Salam; Abdullah, Salwani; Nazri, Mohd Zakree Ahmad

    2016-04-01

    Microarray technology can be used as an efficient diagnostic system to recognise diseases such as tumours or to discriminate between different types of cancers in normal tissues. This technology has received increasing attention from the bioinformatics community because of its potential in designing powerful decision-making tools for cancer diagnosis. However, the presence of thousands or tens of thousands of genes affects the predictive accuracy of this technology from the perspective of classification. Thus, a key issue in microarray data is identifying or selecting the smallest possible set of genes from the input data that can achieve good predictive accuracy for classification. In this work, we propose a two-stage selection algorithm for gene selection problems in microarray datasets called the symmetrical uncertainty filter and harmony search algorithm wrapper (SU-HSA). Experimental results show that the SU-HSA is better than HSA in isolation for all datasets in terms of accuracy and selects fewer genes on 6 out of 10 instances. Furthermore, the comparison with state-of-the-art methods shows that our proposed approach is able to obtain 5 (out of 10) new best results in terms of the number of selected genes and competitive results in terms of classification accuracy.

  10. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
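
    The 'Add N Remove R' wrapper is easy to restate generically: repeatedly add the N variables that most improve cross-validated accuracy, then drop the R whose removal hurts least. The sketch below is a generic re-implementation of that idea with scikit-learn, not SPR's exact procedure; the estimator and parameters are placeholders.

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score

    def add_n_remove_r(est, X, y, n_add=2, n_remove=1, target_size=8, cv=5):
        """Greedy wrapper selection in the spirit of 'Add N Remove R'."""
        selected, remaining = [], list(range(X.shape[1]))

        def score(cols):
            return cross_val_score(est, X[:, cols], y, cv=cv).mean()

        while len(selected) < target_size and remaining:
            for _ in range(n_add):                            # add phase
                if not remaining:
                    break
                gains = [score(selected + [j]) for j in remaining]
                selected.append(remaining.pop(int(np.argmax(gains))))
            for _ in range(n_remove):                         # remove phase
                if len(selected) <= 1:
                    break
                losses = [score([c for c in selected if c != j]) for j in selected]
                remaining.append(selected.pop(int(np.argmax(losses))))
        return selected
    ```

    With n_add greater than n_remove the loop terminates, gaining one variable per cycle while still allowing early choices to be revised.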

  11. Algorithms for selecting informative marker panels for population assignment.

    Science.gov (United States)

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.

  12. A study of metaheuristic algorithms for high dimensional feature selection on microarray data

    Science.gov (United States)

    Dankolo, Muhammad Nasiru; Radzi, Nor Haizan Mohamed; Sallehuddin, Roselina; Mustaffa, Noorfa Haszlinna

    2017-11-01

    Microarray systems enable experts to examine gene profiles at the molecular level using machine learning algorithms, increasing the potential for classification and diagnosis of many diseases at the gene expression level. However, numerous difficulties may affect the efficiency of machine learning algorithms, including the vast number of gene features comprised in the original data, many of which may be unrelated to the intended analysis. Therefore, feature selection needs to be performed during data pre-processing. Many feature selection algorithms have been developed and applied to microarray data, including metaheuristic optimization algorithms. This paper discusses the application of metaheuristic algorithms for feature selection in microarray datasets. This study reveals that the algorithms yield interesting results with limited resources, thereby saving the computational expense of machine learning algorithms.

  13. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: To assess the impact of General Electric's automated tube potential selection algorithm, kV Assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, the CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at 40% ASiR indicated a decrease in noise magnitude, as realized by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in images reconstructed at 40% ASiR were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.
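
    Two of the image-quality quantities used here are easy to state explicitly: the CNR is commonly computed as the mean HU difference between object and background ROIs divided by the background noise, and the PFD compares the location of the NPS peak between protocols. A hedged sketch follows (the exact ACR procedure and the study's PFD definition may differ in detail):

    ```python
    import numpy as np

    def cnr(roi_object, roi_background):
        """Contrast-to-noise ratio: HU contrast over background noise."""
        return (roi_object.mean() - roi_background.mean()) / roi_background.std()

    def peak_frequency(nps, fx, fy):
        """Spatial-frequency magnitude at which a 2D NPS peaks (for the PFD)."""
        iy, ix = np.unravel_index(np.argmax(nps), nps.shape)
        return np.hypot(fx[ix], fy[iy])
    ```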

  14. SU-E-I-89: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Pediatric Anthropomorphic and ACR Phantoms

    International Nuclear Information System (INIS)

    Mahmood, U; Erdi, Y; Wang, W

    2014-01-01

    Purpose: To assess the impact of General Electric's automated tube potential selection algorithm, kV Assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of a pediatric anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, 80 mA, 0.7 s rotation time. Image quality was assessed by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available adaptive statistical iterative reconstruction (ASiR) settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: For the baseline protocol, the CNR was found to decrease from 0.460 ± 0.182 to 0.420 ± 0.057 when kVa was activated. When compared against the baseline protocol, the PFD at 40% ASiR indicated a decrease in noise magnitude, as realized by the increase in CNR to 0.620 ± 0.040. The liver dose decreased by 30% with kVa activation. Conclusion: Application of kVa reduces the liver dose by up to 30%. However, a reduction in image quality for abdominal scans occurs when using the automated tube voltage selection feature at the baseline protocol. As demonstrated by the CNR and NPS analysis, the texture and magnitude of the noise in images reconstructed at 40% ASiR were found to be the same as in our baseline images. We have demonstrated that a 30% dose reduction is possible when using 40% ASiR with kVa in pediatric patients.

  15. Genetic Bee Colony (GBC) algorithm: A new gene selection method for microarray cancer classification.

    Science.gov (United States)

    Alshamlan, Hala M; Badr, Ghada H; Alohali, Yousef A

    2015-06-01

    Naturally inspired evolutionary algorithms have proven effective for solving feature selection and classification problems. Artificial Bee Colony (ABC) is a relatively new swarm intelligence method. In this paper, we propose a new hybrid gene selection method, the Genetic Bee Colony (GBC) algorithm. The proposed algorithm combines the use of a Genetic Algorithm (GA) with the Artificial Bee Colony (ABC) algorithm, with the goal of integrating the advantages of both. The proposed algorithm is applied to microarray gene expression profiles in order to select the most predictive and informative genes for cancer classification. In order to test the accuracy and performance of the proposed algorithm, extensive experiments were conducted. Three binary microarray datasets were used: colon, leukemia, and lung. In addition, three multi-class microarray datasets were used: SRBCT, lymphoma, and leukemia. Results of the GBC algorithm are compared with our recently proposed technique, mRMR combined with the Artificial Bee Colony algorithm (mRMR-ABC). We also compared the combination of mRMR with GA (mRMR-GA) and with Particle Swarm Optimization (mRMR-PSO). In addition, we compared the GBC algorithm with other related algorithms recently published in the literature, using all benchmark datasets. The GBC algorithm shows superior performance, achieving the highest classification accuracy along with the lowest average number of selected genes. This proves that the GBC algorithm is a promising approach for solving the gene selection problem in both binary and multi-class cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Motion-compensated PET image reconstruction with respiratory-matched attenuation correction using two low-dose inhale and exhale CT images

    International Nuclear Information System (INIS)

    Nam, Woo Hyun; Ahn, Il Jun; Ra, Jong Beom; Kim, Kyeong Min; Kim, Byung Il

    2013-01-01

    Positron emission tomography (PET) is widely used for diagnosis and follow up assessment of radiotherapy. However, thoracic and abdominal PET suffers from false staging and incorrect quantification of the radioactive uptake of lesion(s) due to respiratory motion. Furthermore, respiratory motion-induced mismatch between a computed tomography (CT) attenuation map and PET data often leads to significant artifacts in the reconstructed PET image. To solve these problems, we propose a unified framework for respiratory-matched attenuation correction and motion compensation of respiratory-gated PET. For the attenuation correction, the proposed algorithm manipulates a 4D CT image virtually generated from two low-dose inhale and exhale CT images, rather than a real 4D CT image which significantly increases the radiation burden on a patient. It also utilizes CT-driven motion fields for motion compensation. To realize the proposed algorithm, we propose an improved region-based approach for non-rigid registration between body CT images, and we suggest a selection scheme of 3D CT images that are respiratory-matched to each respiratory-gated sinogram. In this work, the proposed algorithm was evaluated qualitatively and quantitatively by using patient datasets including lung and/or liver lesion(s). Experimental results show that the method can provide much clearer organ boundaries and more accurate lesion information than existing algorithms by utilizing two low-dose CT images. (paper)

  17. Multi-material decomposition of spectral CT images

    Science.gov (United States)

    Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.

    2010-04-01

    Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
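
    Per voxel, the decomposition reduces to a small linear system: the dual-energy attenuation measurements must equal the volume-fraction-weighted sums of the basis attenuations, and the fractions must sum to one. A minimal sketch for a pre-selected triplet (the attenuation values below are placeholders, not calibrated data):

    ```python
    import numpy as np

    def volume_fractions(mu_low, mu_high, basis):
        """Decompose a dual-energy voxel into three material volume fractions.

        basis : (3, 2) attenuation of each basis material at (low, high) kVp.
        Solves sum_i a_i * mu_i(E) = mu(E) at both energies together with the
        volume-conservation constraint sum_i a_i = 1.
        """
        A = np.vstack([basis.T, np.ones(3)])   # rows: low kVp, high kVp, unity
        b = np.array([mu_low, mu_high, 1.0])
        return np.linalg.solve(A, b)

    # placeholder water / iodine-mix / air triplet, in 1/cm at (low, high) kVp
    basis = np.array([[0.20, 0.19],
                      [0.35, 0.26],
                      [0.00, 0.00]])
    print(volume_fractions(0.23, 0.20, basis))   # -> volume fractions summing to 1
    ```

    Virtual non-enhanced images then follow by zeroing the contrast-material fraction and recomputing the voxel attenuation.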

  18. Influence of model based iterative reconstruction algorithm on image quality of multiplanar reformations in reduced dose chest CT

    International Nuclear Information System (INIS)

    Barras, Heloise; Dunet, Vincent; Hachulla, Anne-Lise; Grimm, Jochen; Beigelman-Aubry, Catherine

    2016-01-01

    Model-based iterative reconstruction (MBIR) reduces image noise and improves image quality (IQ), but its influence on post-processing tools, including maximal intensity projection (MIP) and minimal intensity projection (mIP), remains unknown. The aim was to evaluate the influence of MBIR on the IQ of native slices and of mIP and MIP axial and coronal reformats of reduced-dose chest CT (RD-CT) acquisitions. Raw data of 50 patients, who underwent a standard-dose CT (SD-CT) and a follow-up RD-CT with a CT dose index (CTDI) of 2–3 mGy, were reconstructed with MBIR and FBP. Native slices, 4-mm-thick MIP, and 3-mm-thick mIP axial and coronal reformats were generated. The relative IQ, subjective IQ, image noise, and number of artifacts were determined in order to compare the different RD-CT reconstructions with the reference SD-CT. The lowest noise was observed with MBIR. RD-CT reconstructed by MBIR exhibited the best relative and subjective IQ on coronal views regardless of the post-processing tool. MBIR generated the lowest rate of artifacts on coronal mIP/MIP reformats and the highest on axial reformats, mainly represented by distortions and stair-step artifacts. The MBIR algorithm reduces image noise but generates more artifacts than FBP on axial mIP and MIP reformats of RD-CT. Conversely, it significantly improves IQ on coronal views, without increasing artifacts, regardless of the post-processing technique.

  19. A hybrid intelligent algorithm for portfolio selection problem with fuzzy returns

    Science.gov (United States)

    Li, Xiang; Zhang, Yang; Wong, Hau-San; Qin, Zhongfeng

    2009-11-01

    Portfolio selection theory with fuzzy returns has been well developed and widely applied. Within the framework of credibility theory, several fuzzy portfolio selection models have been proposed, such as the mean-variance model, the entropy optimization model, the chance constrained programming model, and so on. In order to solve these nonlinear optimization models, a hybrid intelligent algorithm is designed by integrating a simulated annealing algorithm, a neural network, and fuzzy simulation techniques, where the neural network is used to approximate the expected value and variance of the fuzzy returns and the fuzzy simulation is used to generate the training data for the neural network. Since these models have usually been solved by genetic algorithms, comparisons between the hybrid intelligent algorithm and the genetic algorithm are given in terms of numerical examples, which imply that the hybrid intelligent algorithm is robust and more effective. In particular, it reduces the running time significantly for large-size problems.

  20. SU-E-J-94: Geometric and Dosimetric Evaluation of Deformation Image Registration Algorithms Using Virtual Phantoms Generated From Patients with Lung Cancer

    International Nuclear Information System (INIS)

    Shen, Z; Greskovich, J; Xia, P; Bzdusek, K

    2015-01-01

    Purpose: To generate virtual phantoms with clinically relevant deformation and use them to objectively evaluate geometric and dosimetric uncertainties of deformable image registration (DIR) algorithms. Methods: Ten lung cancer patients undergoing adaptive 3DCRT planning were selected. For each patient, a pair of planning CT (pCT) and replanning CT (rCT) were used as the basis for virtual phantom generation. Manually adjusted meshes were created for selected ROIs (e.g. PTV, lungs, spinal cord, esophagus, and heart) on pCT and rCT. The mesh vertices were input into a thin-plate spline algorithm to generate a reference displacement vector field (DVF). The reference DVF was used to deform pCT to generate a simulated replanning CT (srCT) that was closely matched to rCT. Three DIR algorithms (Demons, B-Spline, and intensity-based) were applied to these ten virtual phantoms. The images, ROIs, and doses were mapped from pCT to srCT using the DVFs computed by these three DIRs and compared to those mapped using the reference DVF. Results: The average Dice coefficients for selected ROIs were from 0.85 to 0.96 for Demons, from 0.86 to 0.97 for intensity-based, and from 0.76 to 0.95 for B-Spline. The average Hausdorff distances for selected ROIs were from 2.2 to 5.4 mm for Demons, from 2.3 to 6.8 mm for intensity-based, and from 2.4 to 11.4 mm for B-Spline. The average absolute dose errors for selected ROIs were from 0.2 to 0.6 Gy for Demons, from 0.1 to 0.5 Gy for intensity-based, and from 0.5 to 1.5 Gy for B-Spline. Conclusion: Virtual phantoms were modeled after patients with lung cancer and were clinically relevant for adaptive radiotherapy treatment replanning. Virtual phantoms with known DVFs serve as references and can provide a fair comparison when evaluating different DIRs. Demons and intensity-based DIRs were shown to have smaller geometric and dosimetric uncertainties than B-Spline. Z Shen: None; K Bzdusek: an employee of Philips Healthcare; J Greskovich: None; P Xia

  1. Application of multislice spiral CT (MSCT) in multiple injured patients and its effect on diagnostic and therapeutic algorithms

    International Nuclear Information System (INIS)

    Boehm, T.; Alkadhi, H.; Schertler, T.; Baumert, B.; Roos, J.; Marincek, B.; Wildermuth, S.

    2004-01-01

    The initial diagnostic work-up of trauma victims with multiple injuries is currently a combination of conventional radiography (CR), ultrasound (US), and computed tomography (CT). This article reviews the diagnostic quality of the different imaging modalities regarding the detection and classification of injuries. CT performs better than US in detecting traumatic lesions of abdominal parenchymal organs. Furthermore, CT is better than CR in detecting therapeutically relevant chest and bone injuries. MSCT may replace CR and US under the condition that it is faster than, or at least as fast as, the conventional approach in diagnosing life-threatening injuries. This can be achieved only by changing the workflow for the entire trauma team, including the radiologist. Furthermore, certain prerequisites must be fulfilled, including the integration of an MSCT scanner into the emergency room. An optimized whole-body CT protocol for the assessment of trauma victims using MSCT, as well as a two-step algorithm for reporting the imaging findings depending on their clinical significance, is presented. (orig.)

  2. CT during selective arteriography: anatomical assessment of unruptured intracranial aneurysms before endovascular treatment

    International Nuclear Information System (INIS)

    Nomura, M.; Kida, S.; Uchiyama, N.; Yamashima, T.; Yamashita, J.; Sanada, J.; Yoshikawa, J.; Matsui, O.

    2001-01-01

    Our aim was to investigate the usefulness of helical CT during selective angiography (CT arteriography) in the pretreatment assessment of unruptured intracranial aneurysms. We studied 47 unruptured aneurysms in 34 prospectively recruited patients for whom endovascular embolisation was initially considered. As pretreatment assessment, we performed rotational digital subtraction angiography (DSA) followed by CT arteriography. The findings on axial source images (axial images) and reconstructed three-dimensional CT angiography (3D-CTA) of CT arteriography were compared to those of rotational DSA, with particular attention to the neck of the aneurysm and the arterial branches adjacent to it. Information provided by CT arteriography was more useful than that of rotational DSA as regards the neck in 25 (53%) of 47 aneurysms and as regards branches in 18 (49%) of 37 aneurysms. On axial images, small arteries such as the anterior choroidal artery were seen in some cases. CT arteriography can provide valuable additional information about unruptured aneurysms which cannot be obtained by rotational DSA alone. This technique is useful for obtaining information about aneurysm anatomy and for deciding the therapeutic strategy. (orig.)

  3. Different CT perfusion algorithms in the detection of delayed cerebral ischemia after aneurysmal subarachnoid hemorrhage.

    Science.gov (United States)

    Cremers, Charlotte H P; Dankbaar, Jan Willem; Vergouwen, Mervyn D I; Vos, Pieter C; Bennink, Edwin; Rinkel, Gabriel J E; Velthuis, Birgitta K; van der Schaaf, Irene C

    2015-05-01

    Tracer delay-sensitive perfusion algorithms in CT perfusion (CTP) result in an overestimation of the extent of ischemia in thromboembolic stroke. In diagnosing delayed cerebral ischemia (DCI) after aneurysmal subarachnoid hemorrhage (aSAH), delayed arrival of contrast due to vasospasm may also overestimate the extent of ischemia. We investigated the diagnostic accuracy of tracer delay-sensitive and tracer delay-insensitive algorithms for detecting DCI. From a prospectively collected series of aSAH patients admitted between 2007-2011, we included patients with any clinical deterioration other than rebleeding within 21 days after SAH who underwent NCCT/CTP/CTA imaging. Causes of clinical deterioration were categorized into DCI and no DCI. CTP maps were calculated with tracer delay-sensitive and tracer delay-insensitive algorithms and were visually assessed for the presence of perfusion deficits by two independent observers with different levels of experience. The diagnostic value of both algorithms was calculated for both observers. Seventy-one patients were included. For the experienced observer, the positive predictive values (PPVs) were 0.67 for the delay-sensitive and 0.66 for the delay-insensitive algorithm, and the negative predictive values (NPVs) were 0.73 and 0.74. For the less experienced observer, PPVs were 0.60 for both algorithms, and NPVs were 0.66 for the delay-sensitive and 0.63 for the delay-insensitive algorithm. Test characteristics are comparable for tracer delay-sensitive and tracer delay-insensitive algorithms for the visual assessment of CTP in diagnosing DCI. This indicates that both algorithms can be used for this purpose.

  4. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    Energy Technology Data Exchange (ETDEWEB)

    Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Rosario, T. [Department of Radiation Oncology, VU University Medical Center, De Boelelaan 1117, 1081 HZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Erasmus MC Cancer Institute, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2014-07-15

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure
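
    The error metrics used in this validation are straightforward to compute once the deformation vector field (DVF) is available. A minimal sketch, with dvf_at standing in for interpolation of the DVF at a physical point:

    ```python
    import numpy as np

    def target_registration_errors(markers_ref, markers_moving, dvf_at):
        """Spatial DIR error at fiducial markers, in the scan's units (e.g. mm).

        markers_ref, markers_moving : (n, 3) marker coordinates in the two scans
        dvf_at(p) : returns the DIR displacement vector at point p
        """
        mapped = markers_ref + np.array([dvf_at(p) for p in markers_ref])
        return np.linalg.norm(mapped - markers_moving, axis=1)

    def dice(mask_a, mask_b):
        """Dice overlap of two boolean structure masks (e.g. warped vs. true bladder)."""
        inter = np.logical_and(mask_a, mask_b).sum()
        return 2.0 * inter / (mask_a.sum() + mask_b.sum())
    ```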

  5. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers.

    Science.gov (United States)

    Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A

    2014-07-01

    The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on

  6. Validation of deformable image registration algorithms on CT images of ex vivo porcine bladders with fiducial markers

    International Nuclear Information System (INIS)

    Wognum, S.; Heethuis, S. E.; Bel, A.; Rosario, T.; Hoogeman, M. S.

    2014-01-01

    Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure

  7. Feature Selection Criteria for Real Time EKF-SLAM Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Auat Cheein

    2010-02-01

    Full Text Available This paper presents a selection procedure for environment features for the correction stage of a SLAM (Simultaneous Localization and Mapping) algorithm based on an Extended Kalman Filter (EKF). This approach decreases the computational time of the correction stage, which allows for real- and constant-time implementations of the SLAM. The selection procedure consists in choosing the features to which the SLAM system state covariance is most sensitive. The entire system is implemented on a mobile robot equipped with a laser range sensor. The features extracted from the environment correspond to lines and corners. Experimental results of the real-time SLAM algorithm and an analysis of the processing time consumed by the SLAM with the proposed feature selection procedure are shown. A comparison between the proposed feature selection approach and the classical sequential EKF-SLAM, along with an entropy-based feature selection approach, is also performed.
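
    The selection idea, ranking candidate features by how strongly a correction using them would shrink the state covariance, can be sketched as follows (the Jacobians, noise covariances, and scoring rule here are generic EKF quantities, not the paper's exact formulation):

```python
import numpy as np

def rank_features(P, H_list, R_list, k):
    """Rank candidate features for the EKF correction stage.

    P: state covariance; H_list: per-feature measurement Jacobians;
    R_list: per-feature measurement noise covariances. Features whose
    update would most reduce the covariance trace are ranked first.
    """
    scores = []
    for H, R in zip(H_list, R_list):
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        scores.append(np.trace(K @ S @ K.T))  # expected drop in trace(P)
    return np.argsort(scores)[::-1][:k]       # indices of the top-k features
```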

  8. A Fast Algorithm of Convex Hull Vertices Selection for Online Classification.

    Science.gov (United States)

    Ding, Shuguang; Nie, Xiangli; Qiao, Hong; Zhang, Bo

    2018-04-01

    Reducing samples through convex hull vertices selection (CHVS) within each class is an important and effective method for online classification problems, since the classifier can be trained rapidly with the selected samples. However, the process of CHVS is NP-hard. In this paper, we propose a fast algorithm to select the convex hull vertices, based on convex hull decomposition and the property of projection. In the proposed algorithm, the quadratic minimization problem of computing the distance between a point and a convex hull is converted into a linear equation problem with low computational complexity. When the data dimension is high, an approximate, rather than exact, convex hull is allowed to be selected by setting an appropriate termination condition, in order to delete more nonimportant samples. In addition, the impact of outliers is also considered, and the proposed algorithm is improved by deleting the outliers in the initial procedure. Furthermore, a dimension conversion technique via the kernel trick is used to deal with nonlinearly separable problems. An upper bound is theoretically proven for the difference between the support vector machines trained on the selected approximate convex hull vertices and those trained on all the training samples. Experimental results on both synthetic and real data sets show the effectiveness and validity of the proposed algorithm.
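
    The proposed fast algorithm itself is not reproduced here, but the underlying idea of replacing each class's training set by its convex hull vertices can be sketched with an exact (non-fast) computation in low dimension via scipy; the paper's contribution is doing this approximately and efficiently when exact hulls become intractable:

```python
import numpy as np
from scipy.spatial import ConvexHull

def hull_vertices(X):
    """Reduce a low-dimensional class sample set to its hull vertices."""
    return X[ConvexHull(X).vertices]

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))  # one class's 2-D training samples
V = hull_vertices(X)
print(f"kept {len(V)} of {len(X)} samples as convex hull vertices")
```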

  9. Selective chest imaging for blunt trauma patients: The national emergency X-ray utilization studies (NEXUS-chest algorithm).

    Science.gov (United States)

    Rodriguez, Robert M; Hendey, Gregory W; Mower, William R

    2017-01-01

    Chest imaging plays a prominent role in blunt trauma patient evaluation, but indiscriminate imaging is expensive, may delay care, and unnecessarily exposes patients to potentially harmful ionizing radiation. To improve diagnostic chest imaging utilization, we conducted 3 prospective multicenter studies over 12 years to derive and validate decision instruments (DIs) to guide the use of chest x-ray (CXR) and chest computed tomography (CT). The first DI, NEXUS Chest x-ray, consists of seven criteria (age > 60 years; rapid deceleration mechanism; chest pain; intoxication; altered mental status; distracting painful injury; and chest wall tenderness) and exhibits a sensitivity of 99.0% (95% confidence interval [CI] 98.2-99.4%) and a specificity of 13.3% (95% CI, 12.6%-14.0%) for detecting clinically significant injuries. We developed two NEXUS Chest CT DIs, which are both highly reliable in detecting clinically major injuries (sensitivity of 99.2%; 95% CI 95.4-100%). Designed primarily to detect major injuries, the NEXUS Chest CT-Major DI consists of six criteria (abnormal CXR; distracting injury; chest wall tenderness; sternal tenderness; thoracic spine tenderness; and scapular tenderness) and exhibits higher specificity (37.9%; 95% CI 35.8-40.1%). Designed to reliably detect both major and minor injuries (sensitivity 95.4%; 95% CI 93.6-96.9%) with resulting lower specificity (25.5%; 95% CI 23.5-27.5%), the NEXUS CT-All rule consists of seven elements (the six NEXUS CT-Major criteria plus rapid deceleration mechanism). The purpose of this review is to synthesize the three DIs into a novel, cohesive summary algorithm with practical implementation recommendations to guide selective chest imaging in adult blunt trauma patients. Copyright © 2016 Elsevier Inc. All rights reserved.
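
    The NEXUS Chest x-ray rule is an any-positive checklist over the seven listed criteria; a minimal sketch of that logic (the field names are hypothetical):

```python
NEXUS_CXR_CRITERIA = (
    "age_over_60", "rapid_deceleration", "chest_pain", "intoxication",
    "altered_mental_status", "distracting_injury", "chest_wall_tenderness",
)

def cxr_indicated(patient: dict) -> bool:
    """NEXUS Chest x-ray rule: image if any criterion is present;
    imaging may be deferred only when all seven are absent."""
    return any(patient.get(c, False) for c in NEXUS_CXR_CRITERIA)

print(cxr_indicated({"chest_pain": True}))  # True -> CXR indicated
```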

  10. The Research and Application of SURF Algorithm Based on Feature Point Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhang Fang Hu

    2014-04-01

    Full Text Available As the pixel information of a depth image is derived from distance information, mismatched pairs can occur in the palm area when implementing the SURF algorithm with a KINECT sensor for static sign language recognition. This paper proposes a feature point selection algorithm that filters the SURF feature points step by step, based on the number of feature points within an adaptive radius r and the distance between points. It not only greatly improves the recognition rate, but also ensures robustness against environmental factors such as skin color, illumination intensity, complex background, and angle and scale changes. The experimental results show that the improved SURF algorithm can effectively improve the recognition rate and has good robustness.
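
    The core filtering step, discarding keypoints that are weakly supported by neighbors within a radius, can be sketched as follows (the fixed radius and neighbor threshold are assumptions; the paper's adaptive-radius logic is not reproduced):

```python
import numpy as np

def filter_keypoints(pts, r, min_neighbors=3):
    """Keep keypoints with at least `min_neighbors` other keypoints
    within radius r; isolated points are treated as likely mismatches."""
    pts = np.asarray(pts, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    counts = (d < r).sum(axis=1) - 1  # exclude the point itself
    return pts[counts >= min_neighbors]
```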

  11. Effect of adult weight and CT-based selection on rabbit meat quality

    Directory of Open Access Journals (Sweden)

    Zsolt Szendrő

    2010-01-01

    Full Text Available This study compared the meat quality of different genotypes. Maternal (M; adult weight (AW) = 4.0-4.5 kg; selected for the number of kits born alive), Pannon White (P; AW = 4.3-4.8 kg) and Large type (L; AW = 4.8-5.4 kg) rabbits were analysed. P and L genotypes were selected for carcass traits based on CT (computed tomography) data. Rabbits were slaughtered at 11 wk of age, and hindleg (HL) meat and M. Longissimus dorsi (LD) were analysed for proximate composition and fatty acid (FA) profile. Proximate composition was unaffected by the selection programme, even though the meat of P rabbits was leaner and had higher ash content (P<0.10). The LD meat of P rabbits exhibited significantly lower MUFA content compared to M and L rabbits (25.4 vs 28.0 vs 27.7%; P<0.01) and higher PUFA content compared to M rabbits (31.9 vs 24.9%; P<0.05). This study revealed that long-term CT-based selection is effective in increasing meat leanness and PUFA content.

  12. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm.

    Science.gov (United States)

    Sofue, Keitaro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki; Sugimura, Kazuro

    2017-07-01

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI values on images with SEMAR were significantly lower than those without SEMAR, and SEMAR improved image quality in patients with small metal implants by reducing metallic artefacts. • SEMAR algorithm significantly reduces metallic artefacts from small implants in abdominal CT. • SEMAR can improve image quality of the liver in dynamic CECT. • Confident visualization of hepatic vascular anatomy can also be improved by SEMAR.
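
    The artefact index used in studies of this kind is commonly defined as the excess noise in an artefact-affected ROI over a reference ROI; a sketch under that common definition (the paper may differ in details):

```python
import numpy as np

def artifact_index(roi_artifact, roi_reference):
    """AI = sqrt(max(SD_artifact^2 - SD_reference^2, 0)): the excess
    image noise attributable to the metal artefact."""
    excess = np.std(roi_artifact) ** 2 - np.std(roi_reference) ** 2
    return float(np.sqrt(max(excess, 0.0)))
```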

  13. Value and clinical application of orthopedic metal artifact reduction algorithm in CT scans after orthopedic metal implantation

    International Nuclear Information System (INIS)

    Hu, Yi; Pan, Shinong; Zhao, Xudong; Guo, Wenli; He, Ming; Guo, Qiyong

    2017-01-01

    To evaluate orthopedic metal artifact reduction algorithm (O-MAR) in CT orthopedic metal artifact reduction at different tube voltages, identify an appropriate low tube voltage for clinical practice, and investigate its clinical application. The institutional ethical committee approved all the animal procedures. A stainless-steel plate and four screws were implanted into the femurs of three Japanese white rabbits. Preoperative CT was performed at 120 kVp without O-MAR reconstruction, and postoperative CT was performed at 80–140 kVp with O-MAR. Muscular CT attenuation, artifact index (AI) and signal-to-noise ratio (SNR) were compared between preoperative and postoperative images (unpaired t test), between paired O-MAR and non-O-MAR images (paired Student t test) and among different kVp settings (repeated measures ANOVA). Artifacts' severity, muscular homogeneity, visibility of inter-muscular space and definition of bony structures were subjectively evaluated and compared (Wilcoxon rank-sum test). In the clinical study, 20 patients underwent CT at low kVp with O-MAR after providing informed consent. The diagnostic satisfaction of clinical images was subjectively assessed. Animal experiments showed that the use of O-MAR resulted in accurate CT attenuation, lower AI, better SNR, and higher subjective scores (p < 0.010) at all tube voltages. O-MAR images at 100 kVp had almost the same AI and SNR as non-O-MAR images at 140 kVp. All O-MAR images were scored ≥ 3. In addition, 95% of clinical CT images performed at 100 kVp were considered satisfactory. O-MAR can effectively reduce orthopedic metal artifacts at different tube voltages, and facilitates low-tube-voltage CT for patients with orthopedic metal implants.

  14. The reconstruction algorithm used for ["6"8Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia

    International Nuclear Information System (INIS)

    Krohn, Thomas; Birmes, Anita; Winz, Oliver H.; Drude, Natascha I.; Mottaghy, Felix M.; Behrendt, Florian F.; Verburg, Frederik A.

    2017-01-01

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were constructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [68Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information. (orig.)

  15. The reconstruction algorithm used for [{sup 68}Ga]PSMA-HBED-CC PET/CT reconstruction significantly influences the number of detected lymph node metastases and coeliac ganglia

    Energy Technology Data Exchange (ETDEWEB)

    Krohn, Thomas [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Ulm University, Department of Nuclear Medicine, Ulm (Germany); Birmes, Anita; Winz, Oliver H.; Drude, Natascha I. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Mottaghy, Felix M. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Maastricht UMC+, Department of Nuclear Medicine, Maastricht (Netherlands); Behrendt, Florian F. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); Radiology Institute 'Aachen Land', Wuerselen (Germany); Verburg, Frederik A. [RWTH University Hospital Aachen, Department of Nuclear Medicine, Aachen (Germany); University Hospital Giessen and Marburg, Department of Nuclear Medicine, Marburg (Germany)

    2017-04-15

    To investigate whether the numbers of lymph node metastases and coeliac ganglia delineated on [68Ga]PSMA-HBED-CC PET/CT scans differ among datasets generated using different reconstruction algorithms. Data were constructed using the BLOB-OS-TF, BLOB-OS and 3D-RAMLA algorithms. All reconstructions were assessed by two nuclear medicine physicians for the number of pelvic/paraaortal lymph node metastases as well as the number of coeliac ganglia. Standardized uptake values (SUV) were also calculated in different regions. At least one [68Ga]PSMA-HBED-CC PET/CT-positive pelvic or paraaortal lymph node metastasis was found in 49 and 35 patients using the BLOB-OS-TF algorithm, in 42 and 33 patients using the BLOB-OS algorithm, and in 41 and 31 patients using the 3D-RAMLA algorithm, respectively, and a positive ganglion was found in 92, 59 and 24 of 100 patients using the three algorithms, respectively. Quantitatively, the SUVmean and SUVmax were significantly higher with the BLOB-OS algorithm than with either the BLOB-OS-TF or the 3D-RAMLA algorithm in all measured regions (p < 0.001 for all comparisons). The differences between the SUVs with the BLOB-OS-TF and 3D-RAMLA algorithms were not significant in the aorta (SUVmean, p = 0.93; SUVmax, p = 0.97) but were significant in all other regions (p < 0.001 in all cases). The SUVmean ganglion/gluteus ratio was significantly higher with the BLOB-OS-TF algorithm than with either the BLOB-OS or the 3D-RAMLA algorithm and was significantly higher with the BLOB-OS than with the 3D-RAMLA algorithm (p < 0.001 in all cases). The results of [68Ga]PSMA-HBED-CC PET/CT are affected by the reconstruction algorithm used. The highest number of lesions and physiological structures will be visualized using a modern algorithm employing time-of-flight information. (orig.)
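
    The SUV values compared across these reconstructions are conventionally computed as activity concentration normalized to injected dose per body weight; a minimal sketch (the numbers are hypothetical):

```python
import numpy as np

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue activity concentration divided by
    injected dose per gram (assumes tissue density of ~1 g/ml)."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

voxels = np.array([5200.0, 6100.0, 4800.0])  # Bq/ml within a ROI
suvs = suv(voxels, injected_dose_bq=150e6, body_weight_g=75e3)
print(f"SUVmean = {suvs.mean():.2f}, SUVmax = {suvs.max():.2f}")
```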

  16. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    International Nuclear Information System (INIS)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra; Kalogeropoulou, Christina; Pratikakis, Ioannis; Costaridou, Lena

    2015-01-01

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the

  17. Selecting registration schemes in case of interstitial lung disease follow-up in CT

    Energy Technology Data Exchange (ETDEWEB)

    Vlachopoulos, Georgios; Korfiatis, Panayiotis; Skiadopoulos, Spyros; Kazantzi, Alexandra [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Kalogeropoulou, Christina [Department of Radiology, School of Medicine, University of Patras, Patras 26504 (Greece); Pratikakis, Ioannis [Department of Electrical and Computer Engineering, Democritus University of Thrace, Xanthi 67100 (Greece); Costaridou, Lena, E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece)

    2015-08-15

    Purpose: Primary goal of this study is to select optimal registration schemes in the framework of interstitial lung disease (ILD) follow-up analysis in CT. Methods: A set of 128 multiresolution schemes composed of multiresolution nonrigid and combinations of rigid and nonrigid registration schemes are evaluated, utilizing ten artificially warped ILD follow-up volumes, originating from ten clinical volumetric CT scans of ILD affected patients, to select candidate optimal schemes. Specifically, all combinations of four transformation models (three rigid: rigid, similarity, affine and one nonrigid: third order B-spline), four cost functions (sum-of-square distances, normalized correlation coefficient, mutual information, and normalized mutual information), four gradient descent optimizers (standard, regular step, adaptive stochastic, and finite difference), and two types of pyramids (recursive and Gaussian-smoothing) were considered. The selection process involves two stages. The first stage involves identification of schemes with deformation field singularities, according to the determinant of the Jacobian matrix. In the second stage, evaluation methodology is based on distance between corresponding landmark points in both normal lung parenchyma (NLP) and ILD affected regions. Statistical analysis was performed in order to select near optimal registration schemes per evaluation metric. Performance of the candidate registration schemes was verified on a case sample of ten clinical follow-up CT scans to obtain the selected registration schemes. Results: By considering near optimal schemes common to all ranking lists, 16 out of 128 registration schemes were initially selected. These schemes obtained submillimeter registration accuracies in terms of average distance errors 0.18 ± 0.01 mm for NLP and 0.20 ± 0.01 mm for ILD, in case of artificially generated follow-up data. Registration accuracy in terms of average distance error in clinical follow-up data was in the
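
    The first-stage screen described in both records, rejecting schemes whose deformation fields contain singularities, can be sketched by checking the voxelwise determinant of the Jacobian of the transform x + u(x) (pure NumPy; a 3-D displacement field is assumed):

```python
import numpy as np

def jacobian_determinant(disp):
    """disp: displacement field of shape (X, Y, Z, 3), in voxels.
    Returns the voxelwise Jacobian determinant of x + u(x);
    values <= 0 indicate folding, i.e. a deformation singularity."""
    grads = [np.gradient(disp[..., i]) for i in range(3)]  # du_i/dx_j
    J = np.empty(disp.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

det = jacobian_determinant(np.zeros((16, 16, 16, 3)))  # identity transform
assert np.allclose(det, 1.0)
```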

  18. Evaluating applicability of metal artifact reduction algorithm for head and neck radiation treatment planning CT

    International Nuclear Information System (INIS)

    Son, Sang Jun; Park, Jang Pil; Kim, Min Jeong; Yoo, Suk Hyun

    2014-01-01

    Tissue HU values altered by incorrect correction were also found. Consequently, the O-MAR algorithm is not perfect at distinguishing air cavities from photon starvation artifacts. Nevertheless, the differences in HU and dose distribution are not so large as to make it unsuitable for clinical use, and there are clear clinical advantages: improved quality of CT images and DRRs, more precise contouring of OARs and tumors, and correction of artifact areas. Original and O-MAR CT should therefore be used together in the clinic for more accurate treatment planning.

  19. Log-Linear Model Based Behavior Selection Method for Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Zhehuang Huang

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.

  20. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, such as its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To address this, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. First, we propose a new behavior selection algorithm based on a log-linear model, which enhances the decision-making ability of behavior selection. Second, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and reasonable convergence speed compared with the standard artificial fish swarm algorithm.
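
    The log-linear selection rule at the heart of both records assigns each candidate behavior a probability proportional to the exponential of a weighted feature score; a minimal sketch (the features and weights are hypothetical, not the paper's):

```python
import numpy as np

def select_behavior(features, weights, rng=None):
    """features: (n_behaviors, n_features) scores for each candidate
    behavior (e.g. prey, swarm, follow); weights: (n_features,).
    Samples one behavior from the log-linear (softmax) distribution."""
    rng = rng or np.random.default_rng()
    logits = features @ weights
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(len(p), p=p)
```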

  1. Atlas ranking and selection for automatic segmentation of the esophagus from CT scans

    Science.gov (United States)

    Yang, Jinzhong; Haas, Benjamin; Fang, Raymond; Beadle, Beth M.; Garden, Adam S.; Liao, Zhongxing; Zhang, Lifei; Balter, Peter; Court, Laurence

    2017-12-01

    In radiation treatment planning, the esophagus is an important organ-at-risk that should be spared in patients with head and neck cancer or thoracic cancer who undergo intensity-modulated radiation therapy. However, automatic segmentation of the esophagus from CT scans is extremely challenging because of the structure’s inconsistent intensity, low contrast against the surrounding tissues, complex and variable shape and location, and random air bubbles. The goal of this study is to develop an online atlas selection approach to choose a subset of optimal atlases for multi-atlas segmentation to delineate the esophagus automatically. We performed atlas selection in two phases. In the first phase, we used the correlation coefficient of the image content in a cubic region between each atlas and the new image to evaluate their similarity and to rank the atlases in an atlas pool. A subset of atlases based on this ranking was selected, and deformable image registration was performed to generate deformed contours and deformed images in the new image space. In the second phase of atlas selection, we used Kullback-Leibler divergence to measure the similarity of local-intensity histograms between the new image and each of the deformed images, and the measurements were used to rank the previously selected atlases. Deformed contours were overlapped sequentially, from the most to the least similar, and the overlap ratio was examined. We further identified a subset of optimal atlases by analyzing the variation of the overlap ratio versus the number of atlases. The deformed contours from these optimal atlases were fused together using a modified simultaneous truth and performance level estimation algorithm to produce the final segmentation. The approach was validated with promising results using both internal data sets (21 head and neck cancer patients and 15 thoracic cancer patients) and external data sets (30 thoracic patients).
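
    The two selection phases map naturally onto (1) Pearson correlation over a cubic ROI and (2) Kullback-Leibler divergence between local intensity histograms; a sketch of both rankings (ROI handling and histogram bin count are assumptions):

```python
import numpy as np

def rank_by_correlation(new_roi, atlas_rois, keep):
    """Phase 1: rank atlases by Pearson correlation of a cubic ROI
    against the new image; keep the top `keep` for registration."""
    v = new_roi.ravel()
    r = [np.corrcoef(v, a.ravel())[0, 1] for a in atlas_rois]
    return np.argsort(r)[::-1][:keep]

def kl_divergence(new_img, deformed_atlas, bins=64, eps=1e-10):
    """Phase 2: KL divergence between local intensity histograms of
    the new image and a deformed atlas (smaller = more similar)."""
    lo = min(new_img.min(), deformed_atlas.min())
    hi = max(new_img.max(), deformed_atlas.max())
    p, _ = np.histogram(new_img, bins=bins, range=(lo, hi))
    q, _ = np.histogram(deformed_atlas, bins=bins, range=(lo, hi))
    p = p.astype(float) + eps; q = q.astype(float) + eps
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```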

  2. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi [Department of Radiology, University of Chicago, 5841 South Maryland Avenue, Chicago, Illinois 60637 (United States)

    2010-05-15

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time

  3. Computer-aided measurement of liver volumes in CT by means of geodesic active contour segmentation coupled with level-set algorithms

    International Nuclear Information System (INIS)

    Suzuki, Kenji; Kohlbrenner, Ryan; Epstein, Mark L.; Obajuluwa, Ademola M.; Xu Jianwu; Hori, Masatoshi

    2010-01-01

    Purpose: Computerized liver extraction from hepatic CT images is challenging because the liver often abuts other organs of a similar density. The purpose of this study was to develop a computer-aided measurement of liver volumes in hepatic CT. Methods: The authors developed a computerized liver extraction scheme based on geodesic active contour segmentation coupled with level-set contour evolution. First, an anisotropic diffusion filter was applied to portal-venous-phase CT images for noise reduction while preserving the liver structure, followed by a scale-specific gradient magnitude filter to enhance the liver boundaries. Then, a nonlinear grayscale converter enhanced the contrast of the liver parenchyma. By using the liver-parenchyma-enhanced image as a speed function, a fast-marching level-set algorithm generated an initial contour that roughly estimated the liver shape. A geodesic active contour segmentation algorithm coupled with level-set contour evolution refined the initial contour to define the liver boundaries more precisely. The liver volume was then calculated using these refined boundaries. Hepatic CT scans of 15 prospective liver donors were obtained under a liver transplant protocol with a multidetector CT system. The liver volumes extracted by the computerized scheme were compared to those traced manually by a radiologist, used as the "gold standard." Results: The mean liver volume obtained with our scheme was 1504 cc, whereas the mean gold standard manual volume was 1457 cc, resulting in a mean absolute difference of 105 cc (7.2%). The computer-estimated liver volumetrics agreed excellently with the gold-standard manual volumetrics (intraclass correlation coefficient was 0.95) with no statistically significant difference (F=0.77; p(F≤f)=0.32). The average accuracy, sensitivity, specificity, and percent volume error were 98.4%, 91.1%, 99.1%, and 7.2%, respectively. Computerized CT liver volumetry would require substantially less completion time
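
    The stage ordering described in these two records (diffusion denoising, gradient-magnitude edge map, nonlinear contrast/speed conversion, fast marching initialization, geodesic active contour refinement) matches the classical ITK level-set pipeline; a sketch with SimpleITK follows, in which the seed, sigmoid parameters, and iteration counts are illustrative assumptions rather than the authors' tuned values:

```python
import SimpleITK as sitk

def segment_liver(ct, seed, sigma=1.0, alpha=-30.0, beta=80.0):
    """Denoise -> edge map -> speed image -> fast marching ->
    geodesic active contour. seed is a voxel index (x, y, z);
    all numeric parameters are illustrative, not the authors'."""
    img = sitk.Cast(ct, sitk.sitkFloat32)
    smooth = sitk.CurvatureAnisotropicDiffusion(
        img, timeStep=0.0625, numberOfIterations=5)
    edges = sitk.GradientMagnitudeRecursiveGaussian(smooth, sigma=sigma)
    speed = sitk.Sigmoid(edges, alpha=alpha, beta=beta,
                         outputMaximum=1.0, outputMinimum=0.0)
    fm = sitk.FastMarchingImageFilter()
    fm.SetTrialPoints([seed])
    fm.SetStoppingValue(200)
    init = fm.Execute(speed) - 100.0  # zero level set at arrival time 100
    gac = sitk.GeodesicActiveContourLevelSetImageFilter()
    gac.SetPropagationScaling(1.0)
    gac.SetCurvatureScaling(0.5)
    gac.SetNumberOfIterations(500)
    contour = gac.Execute(sitk.Cast(init, sitk.sitkFloat32), speed)
    return contour < 0  # binary liver mask
```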

  4. FDG PET/CT patterns of treatment failure of malignant pleural mesothelioma: relationship to histologic type, treatment algorithm, and survival

    Energy Technology Data Exchange (ETDEWEB)

    Gerbaudo, Victor H.; Mamede, Marcelo [Brigham and Women's Hospital, Harvard Medical School, Division of Nuclear Medicine and Molecular Imaging, Boston, MA (United States); Trotman-Dickenson, Beatrice; Hatabu, Hiroto [Brigham and Women's Hospital, Harvard Medical School, Division of Thoracic Radiology, Boston, MA (United States); Sugarbaker, David J. [Brigham and Women's Hospital, Harvard Medical School, Division of Thoracic Surgery, Boston, MA (United States)

    2011-05-15

    This study investigated the diagnostic performance and prognostic value of fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT in suspected malignant pleural mesothelioma (MPM) recurrence, in the context of patterns and intensity of FDG uptake, histologic type, and treatment algorithm. Fifty patients with MPM underwent FDG PET/CT for restaging 11 ± 6 months after therapy. Tumor relapse was confirmed by histopathology, and by clinical evolution and subsequent imaging. Progression-free survival was defined as the time between treatment and the earliest clinical evidence of recurrence. Survival after FDG PET/CT was defined as the time between the scan and death or last follow-up. Overall survival was defined as the time between initial treatment and death or last follow-up date. Treatment failure was confirmed in 42 patients (30 epithelial and 12 non-epithelial MPM). Sensitivity, specificity, accuracy, negative predictive value, and positive predictive value for FDG PET/CT were 97.6, 75, 94, 86, and 95.3%, respectively. FDG PET/CT evidence of single site of recurrence was observed in the ipsilateral hemithorax in 18 patients (44%), contralaterally in 2 (5%), and in the abdomen in 1 patient (2%). Bilateral thoracic relapse was detected in three patients (7%). Simultaneous recurrence in the ipsilateral hemithorax and abdomen was observed in ten (24%) patients and in seven (17%) in all three cavities. Unsuspected distant metastases were detected in 11 patients (26%). Four patterns of uptake were observed in recurrent disease: focal, linear, mixed (focal/linear), and encasing, with a significant difference between the intensity of uptake in malignant lesions compared to benign post-therapeutic changes. Lesion uptake was lower in patients previously treated with more aggressive therapy and higher in intrathoracic lesions of patients with distant metastases. FDG PET/CT helped in the selection of 12 patients (29%) who benefited from additional previously

  5. FDG PET/CT patterns of treatment failure of malignant pleural mesothelioma: relationship to histologic type, treatment algorithm, and survival

    International Nuclear Information System (INIS)

    Gerbaudo, Victor H.; Mamede, Marcelo; Trotman-Dickenson, Beatrice; Hatabu, Hiroto; Sugarbaker, David J.

    2011-01-01

    This study investigated the diagnostic performance and prognostic value of fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT in suspected malignant pleural mesothelioma (MPM) recurrence, in the context of patterns and intensity of FDG uptake, histologic type, and treatment algorithm. Fifty patients with MPM underwent FDG PET/CT for restaging 11 ± 6 months after therapy. Tumor relapse was confirmed by histopathology, and by clinical evolution and subsequent imaging. Progression-free survival was defined as the time between treatment and the earliest clinical evidence of recurrence. Survival after FDG PET/CT was defined as the time between the scan and death or last follow-up. Overall survival was defined as the time between initial treatment and death or last follow-up date. Treatment failure was confirmed in 42 patients (30 epithelial and 12 non-epithelial MPM). Sensitivity, specificity, accuracy, negative predictive value, and positive predictive value for FDG PET/CT were 97.6, 75, 94, 86, and 95.3%, respectively. FDG PET/CT evidence of single site of recurrence was observed in the ipsilateral hemithorax in 18 patients (44%), contralaterally in 2 (5%), and in the abdomen in 1 patient (2%). Bilateral thoracic relapse was detected in three patients (7%). Simultaneous recurrence in the ipsilateral hemithorax and abdomen was observed in ten (24%) patients and in seven (17%) in all three cavities. Unsuspected distant metastases were detected in 11 patients (26%). Four patterns of uptake were observed in recurrent disease: focal, linear, mixed (focal/linear), and encasing, with a significant difference between the intensity of uptake in malignant lesions compared to benign post-therapeutic changes. Lesion uptake was lower in patients previously treated with more aggressive therapy and higher in intrathoracic lesions of patients with distant metastases. FDG PET/CT helped in the selection of 12 patients (29%) who benefited from additional previously

  6. Heuristic algorithms for feature selection under Bayesian models with block-diagonal covariance structure.

    Science.gov (United States)

    Foroughi Pour, Ali; Dalton, Lori A

    2018-03-21

    Many bioinformatics studies aim to identify markers, or features, that can be used to discriminate between distinct groups. In problems where strong individual markers are not available, or where interactions between gene products are of primary interest, it may be necessary to consider combinations of features as a marker family. To this end, recent work proposes a hierarchical Bayesian framework for feature selection that places a prior on the set of features we wish to select and on the label-conditioned feature distribution. While an analytical posterior under Gaussian models with block covariance structures is available, the optimal feature selection algorithm for this model remains intractable since it requires evaluating the posterior over the space of all possible covariance block structures and feature-block assignments. To address this computational barrier, in prior work we proposed a simple suboptimal algorithm, 2MNC-Robust, with robust performance across the space of block structures. Here, we present three new heuristic feature selection algorithms. The proposed algorithms outperform 2MNC-Robust and many other popular feature selection algorithms on synthetic data. In addition, enrichment analysis on real breast cancer, colon cancer, and Leukemia data indicates they also output many of the genes and pathways linked to the cancers under study. Bayesian feature selection is a promising framework for small-sample high-dimensional data, in particular biomarker discovery applications. When applied to cancer data, these algorithms output many genes already shown to be involved in cancer as well as potentially new biomarkers. Furthermore, one of the proposed algorithms, SPM, outputs blocks of heavily correlated genes, particularly useful for studying gene interactions and gene networks.

  7. TU-G-204-09: The Effects of Reduced- Dose Lung Cancer Screening CT On Lung Nodule Detection Using a CAD Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Young, S; Lo, P; Kim, G; Hsu, W; Hoffman, J; Brown, M; McNitt-Gray, M [UCLA School of Medicine, Los Angeles, CA (United States)

    2015-06-15

    Purpose: While Lung Cancer Screening CT is being performed at low doses, the purpose of this study was to investigate the effects of further reducing dose on the performance of a CAD nodule-detection algorithm. Methods: We selected 50 cases from our local database of National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data from the original scans. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. 10 of the cases had at least one nodule reported on the NLST reader forms. Based on a previously-published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e. approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, the CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST reports. Subject-level mean sensitivities and false-positive rates were calculated for each dose level. Results: The mean sensitivities of the CAD algorithm were 35% at the original dose, 20% at 50% dose, and 42.5% at 25% dose. The false-positive rates, in decreasing-dose order, were 3.7, 2.9, and 10 per case. In certain cases, particularly in larger patients, there were severe photon-starvation artifacts, especially in the apical region due to the high-attenuating shoulders. Conclusion: The detection task was challenging for the CAD algorithm at all dose levels, including the original NLST dose. However, the false-positive rate at 25% dose approximately tripled, suggesting a loss of CAD robustness somewhere between 0.5 and 1.0 mGy. NCI grant U01 CA181156 (Quantitative Imaging Network); Tobacco Related Disease Research Project grant 22RT-0131.

  8. TU-G-204-09: The Effects of Reduced- Dose Lung Cancer Screening CT On Lung Nodule Detection Using a CAD Algorithm

    International Nuclear Information System (INIS)

    Young, S; Lo, P; Kim, G; Hsu, W; Hoffman, J; Brown, M; McNitt-Gray, M

    2015-01-01

    Purpose: While Lung Cancer Screening CT is being performed at low doses, the purpose of this study was to investigate the effects of further reducing dose on the performance of a CAD nodule-detection algorithm. Methods: We selected 50 cases from our local database of National Lung Screening Trial (NLST) patients for which we had both the image series and the raw CT data from the original scans. All scans were acquired with fixed mAs (25 for standard-sized patients, 40 for large patients) on a 64-slice scanner (Sensation 64, Siemens Healthcare). All images were reconstructed with 1-mm slice thickness, B50 kernel. 10 of the cases had at least one nodule reported on the NLST reader forms. Based on a previously-published technique, we added noise to the raw data to simulate reduced-dose versions of each case at 50% and 25% of the original NLST dose (i.e. approximately 1.0 and 0.5 mGy CTDIvol). For each case at each dose level, the CAD detection algorithm was run and nodules greater than 4 mm in diameter were reported. These CAD results were compared to “truth”, defined as the approximate nodule centroids from the NLST reports. Subject-level mean sensitivities and false-positive rates were calculated for each dose level. Results: The mean sensitivities of the CAD algorithm were 35% at the original dose, 20% at 50% dose, and 42.5% at 25% dose. The false-positive rates, in decreasing-dose order, were 3.7, 2.9, and 10 per case. In certain cases, particularly in larger patients, there were severe photon-starvation artifacts, especially in the apical region due to the high-attenuating shoulders. Conclusion: The detection task was challenging for the CAD algorithm at all dose levels, including the original NLST dose. However, the false-positive rate at 25% dose approximately tripled, suggesting a loss of CAD robustness somewhere between 0.5 and 1.0 mGy. NCI grant U01 CA181156 (Quantitative Imaging Network); Tobacco Related Disease Research Project grant 22RT-0131
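
    The noise-injection step described in both records can be approximated with a simplified Poisson model on the line integrals; the published technique instead adds calibrated incremental noise and accounts for electronic noise and bowtie filtration, which this sketch omits:

```python
import numpy as np

def simulate_reduced_dose(sinogram, incident_photons, dose_fraction, rng=None):
    """Resample line integrals p = -ln(I/I0) as if acquired at
    dose_fraction of the original dose (simplified Poisson model)."""
    rng = rng or np.random.default_rng()
    I0 = incident_photons * dose_fraction         # reduced incident flux
    counts = rng.poisson(I0 * np.exp(-sinogram))  # detected photon counts
    counts = np.maximum(counts, 1)                # guard against log(0)
    return -np.log(counts / I0)
```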

  9. Variability and accuracy of coronary CT angiography including use of iterative reconstruction algorithms for plaque burden assessment as compared with intravascular ultrasound - an ex vivo study

    Energy Technology Data Exchange (ETDEWEB)

    Stolzmann, Paul [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Schlett, Christopher L.; Maurovich-Horvat, Pal; Scheffel, Hans; Engel, Leif-Christopher; Karolyi, Mihaly; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); Maehara, Akiko; Ma, Shixin; Mintz, Gary S. [Columbia University Medical Center, Cardiovascular Research Foundation, New York, NY (United States)

    2012-10-15

    To systematically assess inter-technique and inter-/intra-reader variability of coronary CT angiography (CTA) to measure plaque burden compared with intravascular ultrasound (IVUS) and to determine whether iterative reconstruction algorithms affect variability. IVUS and CTA data were acquired from nine human coronary arteries ex vivo. CT images were reconstructed using filtered back projection (FBPR) and iterative reconstruction algorithms: adaptive-statistical (ASIR) and model-based (MBIR). After co-registration of 284 cross-sections between IVUS and CTA, two readers manually delineated the cross-sectional plaque area in all images presented in random order. Average plaque burden by IVUS was 63.7 ± 10.7% and correlated significantly with all CTA measurements (r = 0.45-0.52; P < 0.001), while CTA overestimated the burden by 10 ± 10%. There were no significant differences among FBPR, ASIR and MBIR (P > 0.05). Increased overestimation was associated with smaller plaques, eccentricity and calcification (P < 0.001). Reproducibility of plaque burden by CTA and IVUS datasets was excellent with a low mean intra-/inter-reader variability of <1/<4% for CTA and <0.5/<1% for IVUS respectively (P < 0.05) with no significant difference between CT reconstruction algorithms (P > 0.05). In ex vivo coronary arteries, plaque burden by coronary CTA had extremely low inter-/intra-reader variability and correlated significantly with IVUS measurements. Accuracy as well as reader reliability were independent of CT image reconstruction algorithm. (orig.)
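
    Plaque burden per cross-section is conventionally defined from the external elastic membrane (EEM) and lumen areas; a one-line sketch of the definition this comparison relies on (the example areas are hypothetical):

```python
def plaque_burden(eem_area, lumen_area):
    """Cross-sectional plaque burden (%): plaque-plus-media area
    relative to the external elastic membrane (EEM) area."""
    return 100.0 * (eem_area - lumen_area) / eem_area

print(plaque_burden(eem_area=16.0, lumen_area=5.8))  # mm^2 -> ~63.7%
```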

  10. Deformable image registration based automatic CT-to-CT contour propagation for head and neck adaptive radiotherapy in the routine clinical setting.

    Science.gov (United States)

    Kumarasiri, Akila; Siddiqui, Farzan; Liu, Chang; Yechieli, Raphael; Shah, Mira; Pradhan, Deepak; Zhong, Hualiang; Chetty, Indrin J; Kim, Jinkoo

    2014-12-01

    To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H&N) adaptive radiotherapy. Ten H&N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3-4 week into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto corresponding CT2 images. Agreement of the propagated contours with manually drawn contours on CT2 was visually rated by four radiation oncologists in a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm(3). Organs with volumes <3 cm(3) and/or those with poorly defined boundaries showed Dice coefficients of ∼ 0.5-0.6. For the propagation of small organs (<3 cm(3)), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among investigated DIR algorithms. The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of transferred contours was "clinically acceptable with minor modification or major modification in a small number of contours." Use of DIR-based contour propagation in the routine

  11. Deformable image registration based automatic CT-to-CT contour propagation for head and neck adaptive radiotherapy in the routine clinical setting

    International Nuclear Information System (INIS)

    Kumarasiri, Akila; Siddiqui, Farzan; Liu, Chang; Yechieli, Raphael; Shah, Mira; Pradhan, Deepak; Zhong, Hualiang; Chetty, Indrin J.; Kim, Jinkoo

    2014-01-01

    Purpose: To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H and N) adaptive radiotherapy. Methods: Ten H and N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3–4 week into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto corresponding CT2 images. Agreement of the propagated contours with manually drawn contours on CT2 was visually rated by four radiation oncologists in a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. Results: All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm³. Organs with volumes <3 cm³ and/or those with poorly defined boundaries showed Dice coefficients of ∼0.5–0.6. For the propagation of small organs (<3 cm³), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among investigated DIR algorithms. The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of transferred contours was “clinically acceptable with minor modification or major modification in a small number of contours.” Conclusions

  12. Deformable image registration based automatic CT-to-CT contour propagation for head and neck adaptive radiotherapy in the routine clinical setting

    Energy Technology Data Exchange (ETDEWEB)

    Kumarasiri, Akila, E-mail: akumara1@hfhs.org; Siddiqui, Farzan; Liu, Chang; Yechieli, Raphael; Shah, Mira; Pradhan, Deepak; Zhong, Hualiang; Chetty, Indrin J.; Kim, Jinkoo [Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan 48202 (United States)

    2014-12-15

    Purpose: To evaluate the clinical potential of deformable image registration (DIR)-based automatic propagation of physician-drawn contours from a planning CT to midtreatment CT images for head and neck (H and N) adaptive radiotherapy. Methods: Ten H and N patients, each with a planning CT (CT1) and a subsequent CT (CT2) taken approximately 3–4 week into treatment, were considered retrospectively. Clinically relevant organs and targets were manually delineated by a radiation oncologist on both sets of images. Four commercial DIR algorithms, two B-spline-based and two Demons-based, were used to deform CT1 and the relevant contour sets onto corresponding CT2 images. Agreement of the propagated contours with manually drawn contours on CT2 was visually rated by four radiation oncologists in a scale from 1 to 5, the volume overlap was quantified using Dice coefficients, and a distance analysis was done using center of mass (CoM) displacements and Hausdorff distances (HDs). Performance of these four commercial algorithms was validated using a parameter-optimized Elastix DIR algorithm. Results: All algorithms attained Dice coefficients of >0.85 for organs with clear boundaries and those with volumes >9 cm³. Organs with volumes <3 cm³ and/or those with poorly defined boundaries showed Dice coefficients of ∼0.5–0.6. For the propagation of small organs (<3 cm³), the B-spline-based algorithms showed higher mean Dice values (Dice = 0.60) than the Demons-based algorithms (Dice = 0.54). For the gross and planning target volumes, the respective mean Dice coefficients were 0.8 and 0.9. There was no statistically significant difference in the Dice coefficients, CoM, or HD among investigated DIR algorithms. The mean radiation oncologist visual scores of the four algorithms ranged from 3.2 to 3.8, which indicated that the quality of transferred contours was “clinically acceptable with minor modification or major modification in a small number of contours
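
    The overlap and distance metrics used across these three records are standard; a sketch of the Dice coefficient and a symmetric Hausdorff distance on binary masks (mask-based rather than surface-based, which is an implementation choice):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance between two binary masks, via
    Euclidean distance transforms (distance to the nearest voxel of
    the other mask, maximized over each mask)."""
    a, b = a.astype(bool), b.astype(bool)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    return max(dist_to_a[b].max(), dist_to_b[a].max())
```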

  13. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed, the output of this site optimization gives the selected number of base station sites ...

  14. Selection of individual features of a speech signal using genetic algorithms

    Directory of Open Access Journals (Sweden)

    Kamil Kamiński

    2016-03-01

    Full Text Available The paper presents an automatic speaker recognition system, implemented in the Matlab environment, and demonstrates how to achieve and optimize various elements of the system. The main emphasis was put on feature selection for a speech signal using a genetic algorithm which takes into account the synergy of features. The results of optimizing selected elements of the classifier have also been shown, including the number of Gaussian distributions used to model each of the voices. In addition, a universal voice model has been used for creating the voice models. Keywords: biometrics, automatic speaker recognition, genetic algorithms, feature selection

  15. Value and clinical application of orthopedic metal artifact reduction algorithm in CT scans after orthopedic metal implantation

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Yi; Pan, Shinong; Zhao, Xudong; Guo, Wenli; He, Ming; Guo, Qiyong [Shengjing Hospital of China Medical University, Shenyang (China)

    2017-06-15

    To evaluate the orthopedic metal artifact reduction algorithm (O-MAR) in CT orthopedic metal artifact reduction at different tube voltages, identify an appropriate low tube voltage for clinical practice, and investigate its clinical application. The institutional ethical committee approved all the animal procedures. A stainless-steel plate and four screws were implanted into the femurs of three Japanese white rabbits. Preoperative CT was performed at 120 kVp without O-MAR reconstruction, and postoperative CT was performed at 80–140 kVp with O-MAR. Muscular CT attenuation, artifact index (AI) and signal-to-noise ratio (SNR) were compared between preoperative and postoperative images (unpaired t test), between paired O-MAR and non-O-MAR images (paired Student t test) and among different kVp settings (repeated measures ANOVA). Artifact severity, muscular homogeneity, visibility of inter-muscular space and definition of bony structures were subjectively evaluated and compared (Wilcoxon rank-sum test). In the clinical study, 20 patients underwent CT scans at low kVp with O-MAR after providing informed consent. The diagnostic satisfaction of the clinical images was subjectively assessed. The animal experiments showed that the use of O-MAR resulted in accurate CT attenuation, lower AI, better SNR, and higher subjective scores (p < 0.010) at all tube voltages. O-MAR images at 100 kVp had almost the same AI and SNR as non-O-MAR images at 140 kVp. All O-MAR images were scored ≥ 3. In addition, 95% of clinical CT images performed at 100 kVp were considered satisfactory. O-MAR can effectively reduce orthopedic metal artifacts at different tube voltages, and facilitates low-tube-voltage CT for patients with orthopedic metal implants.
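
    For reference, the two quantitative measures compared above can be computed directly from ROI statistics. A short Python sketch; the artifact index definition shown (excess noise over an artifact-free reference region) is the one commonly used in metal artifact studies and is assumed here rather than quoted from this paper:

        import numpy as np

        def artifact_index(roi_artifact, roi_reference):
            # AI as commonly defined: sqrt(SD_artifact^2 - SD_reference^2)
            return np.sqrt(max(roi_artifact.std() ** 2 - roi_reference.std() ** 2, 0.0))

        def snr(roi):
            # Signal-to-noise ratio: mean attenuation over its standard deviation
            return roi.mean() / roi.std()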

  16. Fully automated segmentation of callus by micro-CT compared to biomechanics.

    Science.gov (United States)

    Bissinger, Oliver; Götz, Carolin; Wolff, Klaus-Dietrich; Hapfelmeier, Alexander; Prodinger, Peter Michael; Tischer, Thomas

    2017-07-11

    A high percentage of closed femur fractures have slight comminution. Using micro-CT (μCT), multiple fragment segmentation is much more difficult than segmentation of unfractured or osteotomized bone. Manual or semi-automated segmentation has been performed to date. However, such segmentation is extremely laborious, time-consuming and error-prone. Our aim was therefore to apply a fully automated segmentation algorithm to determine μCT parameters and examine their association with biomechanics. The femora of 64 rats, randomised to medication with an inhibitory or neutral effect on fracture healing or to a control group, were subjected to closed fracture after a Kirschner wire was inserted. After 21 days, μCT and biomechanical parameters were determined by a fully automated method and correlated (Pearson's correlation). The fully automated segmentation algorithm automatically detected bone and simultaneously separated cortical bone from callus without requiring ROI selection for each single bony structure. We found an association between the structural callus parameters obtained by μCT and the biomechanical properties. However, the results were only explicable by additionally considering the callus location. A large number of slightly comminuted fractures in combination with therapies that influence the callus qualitatively and/or quantitatively considerably affects the association between μCT and biomechanics. In the future, contrast-enhanced μCT imaging of the callus cartilage might provide more information to improve the non-destructive and non-invasive prediction of callus mechanical properties. As studies evaluating such important drugs increase, fully automated segmentation appears to be clinically important.

  17. Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.

    Science.gov (United States)

    Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John

    2018-01-01

    In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected once the needle's hub visually overlaps its tip in the camera view. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smartphone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.

  18. Three-dimensional monochromatic x-ray CT

    Science.gov (United States)

    Saito, Tsuneo; Kudo, Hiroyuki; Takeda, Tohoru; Itai, Yuji; Tokumori, Kenji; Toyofuku, Fukai; Hyodo, Kazuyuki; Ando, Masami; Nishimura, Ktsuyuki; Uyama, Chikao

    1995-08-01

    In this paper, we describe 3D computed tomography (3D CT) using monochromatic x-rays generated by synchrotron radiation, which performs a direct reconstruction of the 3D volume image of an object from its cone-beam projections. For the development of 3D CT, the scanning orbit of the x-ray source needed to obtain complete 3D information about an object and a corresponding 3D image reconstruction algorithm are considered. Computer simulation studies demonstrate the validity of the proposed scanning method and reconstruction algorithm. A prototype experimental system for 3D CT was constructed. Basic phantom examinations and specific-material CT images obtained by energy subtraction with this experimental system are shown.

  19. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    Science.gov (United States)

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms (one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV), using MR information to resolve discrepancies between the algorithms and properly classify events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over the two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved.
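
    The triangulation step described above lends itself to a compact sketch: run the two complementary algorithms, accept concordant calls, and spend costly medical record review only on discordant cases. A minimal Python illustration; the callable name and list representation are hypothetical:

        def triangulate(high_sens_flags, high_spec_flags, review_record):
            """Combine a high-sensitivity and a high-specificity recurrence
            algorithm; only discordant cases go to (costly) MR review.
            `review_record` is a hypothetical callable returning true status."""
            results = []
            for i, (sens, spec) in enumerate(zip(high_sens_flags, high_spec_flags)):
                if sens == spec:          # both agree: accept the shared call
                    results.append(sens)
                else:                     # disagreement: resolve by MR review
                    results.append(review_record(i))
            return results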

  20. Multidetector CT: a new gold standard in the diagnosis of pulmonary embolism? State of the art and diagnostic algorithms

    International Nuclear Information System (INIS)

    Russo, Vincenzo; Piva, Tommaso; Lovato, Luigi; Fattori, Rossella; Gavelli, Giampaolo

    2005-01-01

    Purpose: From the early 90s, spiral CT technology has considerably changed the diagnostic capability for Pulmonary Embolism (PE), giving direct visualization of intravascular thrombi. Further technological progress has strengthened its diagnostic impact, leading to an essential role in clinical practice. The advent of Multi-Detector CT (MDCT) has subsequently increased the reliability of this technique to the point of undermining the role of pulmonary angiography as the gold standard and occupying a central position in diagnostic algorithms. The aim of this paper is to appraise this evolution by means of a meta-analysis of the relevant literature from 1995 to 2004. Results: The review of the literature showed the sensitivity and specificity of CT to have increased from 37-94% and 91-100% (single detector CT) to 87-94% and 94-100% (4-channel multidetector CT), especially thanks to the possibility of depicting subsegmental clots, with an interobserver agreement of 0.63-0.94 (k). Conclusions: CT is one of the most reliable and effective methods in the diagnosis of PE, with the advantage of being extremely fast and providing alternative diagnoses. Recent improvements in MDCT technology confer the highest diagnostic accuracy with respect to other imaging modalities such as scintigraphy, angiography, MRI, D-dimer assay and Doppler US.

  1. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We are inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. Our goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user; in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (snomed ct). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet the required response time. Performance is compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 snomed ct terms with the baseline algorithm and another set of 40 snomed ct terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
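
    The matching rule is easy to prototype. A minimal Python sketch of a greedy multi-prefix matcher, an illustration rather than the authors' optimized implementation (which the record only sketches):

        def multi_prefix_match(query, term):
            """True if every word in the query is a prefix of some distinct
            word in the term, e.g. 'opt ner me' matches 'optic nerve meningioma'."""
            words = term.lower().split()
            for q in query.lower().split():
                for i, w in enumerate(words):
                    if w.startswith(q):
                        del words[i]   # each term word may satisfy one query word
                        break
                else:
                    return False
            return True

        # 'opt ner me' matches; the baseline (right-extension only) would not.
        assert multi_prefix_match("opt ner me", "optic nerve meningioma")
        assert not multi_prefix_match("opt xyz", "optic nerve meningioma")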

  2. An algorithm for longitudinal registration of PET/CT images acquired during neoadjuvant chemotherapy in breast cancer: preliminary results.

    Science.gov (United States)

    Li, Xia; Abramson, Richard G; Arlinghaus, Lori R; Chakravarthy, Anuradha Bapsi; Abramson, Vandana; Mayer, Ingrid; Farley, Jaime; Delbeke, Dominique; Yankeelov, Thomas E

    2012-11-16

    By providing estimates of tumor glucose metabolism, 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) can potentially characterize the response of breast tumors to treatment. To assess therapy response, serial measurements of FDG-PET parameters (derived from static and/or dynamic images) can be obtained at different time points during the course of treatment. However, most studies track the changes in average parameter values obtained from the whole tumor, thereby discarding all spatial information manifested in tumor heterogeneity. Here, we propose a method whereby serially acquired FDG-PET breast data sets can be spatially co-registered to enable the spatial comparison of parameter maps at the voxel level. The goal is to optimally register normal tissues while simultaneously preventing tumor distortion. In order to accomplish this, we constructed a PET support device to enable PET/CT imaging of the breasts of ten patients in the prone position and applied a mutual information-based rigid body registration followed by a non-rigid registration. The non-rigid registration algorithm extended the adaptive bases algorithm (ABA) by incorporating a tumor volume-preserving constraint, which computed the Jacobian determinant over the tumor regions as outlined on the PET/CT images, into the cost function. We tested this approach on ten breast cancer patients undergoing neoadjuvant chemotherapy. By both qualitative and quantitative evaluation, our constrained algorithm yielded significantly less tumor distortion than the unconstrained algorithm: considering the tumor volume determined from standard uptake value maps, the post-registration median tumor volume changes, and the 25th and 75th quantiles were 3.42% (0%, 13.39%) and 16.93% (9.21%, 49.93%) for the constrained and unconstrained algorithms, respectively (p = 0.002), while the bending energy (a measure of the smoothness of the deformation) was 0.0015 (0.0005, 0.012) and 0.017 (0.005, 0
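
    The tumor volume-preserving constraint at the heart of this method can be illustrated compactly: compute the Jacobian determinant of the deformation over the tumor region and penalize deviation from unity. A 2D Python sketch under stated assumptions; the squared-deviation penalty and the helper name are illustrative, not the authors' exact cost term:

        import numpy as np

        def tumor_volume_penalty(disp_x, disp_y, tumor_mask):
            """Penalize local volume change inside the tumor: the Jacobian
            determinant of the deformation should stay close to 1 (2D case).
            disp_x, disp_y are displacement components on the (y, x) grid."""
            dux_dy, dux_dx = np.gradient(disp_x)
            duy_dy, duy_dx = np.gradient(disp_y)
            jac_det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
            return np.mean((jac_det[tumor_mask] - 1.0) ** 2)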

  3. Identification of dental root canals and their medial line from micro-CT and cone-beam CT records

    Directory of Open Access Journals (Sweden)

    Benyó Balázs

    2012-10-01

    Full Text Available Abstract Background Shape of the dental root canal is highly patient specific. Automated identification methods for the medial line of dental root canals and the reproduction of their 3D shape can be beneficial for planning endodontic interventions, as severely curved root canals or multi-rooted teeth may pose treatment challenges. Accurate shape information of the root canals may also be used by manufacturers of endodontic instruments in order to make more efficient clinical tools. Method Novel image processing procedures dedicated to the automated detection of the medial axis of the root canal from dental micro-CT and cone-beam CT records are developed. For micro-CT, the 3D model of the root canal is built up from several hundred parallel cross sections, using image enhancement, histogram-based fuzzy c-means clustering, center point detection in the segmented slice, three-dimensional inner surface reconstruction, and potential-field-driven curve skeleton extraction in three dimensions. Cone-beam CT records are processed with image enhancement filters and fuzzy chain based regional segmentation, followed by the reconstruction of the root canal surface and detection of its skeleton via a mesh contraction algorithm. Results The proposed medial line identification and root canal detection algorithms are validated on clinical data sets. 25 micro-CT and 36 cone-beam CT records are used in the validation procedure. The overall success rate of the automatic dental root canal identification was about 92% in both procedures. The algorithms proved to be accurate enough for endodontic therapy planning. Conclusions Accurate medial line identification and shape detection algorithms for the dental root canal have been developed. Different procedures are defined for micro-CT and cone-beam CT records. The automated execution of the subsequent processing steps allows easy application of the algorithms in dental care. The output data of the image processing procedures

  4. Effect of reconstruction algorithm on image quality and identification of ground-glass opacities and partly solid nodules on low-dose thin-section CT: Experimental study using chest phantom

    International Nuclear Information System (INIS)

    Koyama, Hisanobu; Ohno, Yoshiharu; Kono, Atsushi A.; Kusaka, Akiko; Konishi, Minoru; Yoshii, Masaru; Sugimura, Kazuro

    2010-01-01

    Purpose: The purpose of this study was to assess the influence of reconstruction algorithm on the identification and image quality of ground-glass opacities (GGOs) and partly solid nodules on low-dose thin-section CT. Materials and methods: A chest CT phantom including simulated GGOs and partly solid nodules was scanned with five different tube currents and reconstructed by using standard (A) and newly developed (B) high-resolution reconstruction algorithms, followed by visual assessment of the identification and image quality of the GGOs and partly solid nodules by two chest radiologists. Inter-observer agreement, ROC analysis and ANOVA were performed to compare the identification and image quality of each data set with those of the standard reference, which used 120 mA s in conjunction with reconstruction algorithm A. Results: Kappa values (κ) of overall identification and image quality were substantial or almost perfect (0.60 < κ). Assessment of identification showed that the area under the curve at 25 mA s reconstructed with reconstruction algorithm A was significantly lower than that of the standard reference (p < 0.05), while assessment of image quality indicated that 50 mA s reconstructed with reconstruction algorithm A and 25 mA s reconstructed with both reconstruction algorithms were significantly lower than the standard reference (p < 0.05). Conclusion: The reconstruction algorithm may be an important factor for the identification and image quality of ground-glass opacities and partly solid nodules on low-dose CT examination.

  5. CT perfusion-guided patient selection for endovascular recanalization in acute ischemic stroke: a multicenter study.

    Science.gov (United States)

    Turk, Aquilla S; Magarick, Jordan Asher; Frei, Don; Fargen, Kyle Michael; Chaudry, Imran; Holmstedt, Christine A; Nicholas, Joyce; Mocco, J; Turner, Raymond D; Huddle, Daniel; Loy, David; Bellon, Richard; Dooley, Gwendolyn; Adams, Robert; Whaley, Michelle; Fanale, Chris; Jauch, Edward

    2013-11-01

    The treatment of acute ischemic stroke is traditionally centered on time criteria, although recent evidence suggests that physiologic neuroimaging may be useful. In a multicenter study we evaluated the use of CT perfusion, regardless of time from symptom onset, in patients selected for intra-arterial treatment of ischemic stroke. Three medical centers retrospectively assessed stroke patients with a National Institutes of Health Stroke Scale score of ≥ 8, regardless of time from symptom onset. CT perfusion maps were qualitatively assessed. Patients with a defined salvageable penumbra underwent intra-arterial revascularization of their occlusion. Functional outcome using the modified Rankin Score (mRS) was recorded. Two hundred and forty-seven patients were selected to undergo intra-arterial treatment based on CT perfusion imaging. The median time from symptom onset to procedure was 6 h. Patients were divided into two groups for analysis: ≤ 8 h and >8 h from symptom onset to endovascular procedure. We found no difference in functional outcome between the two groups (42.8% and 41.9% achieved 90-day mRS ≤ 2, respectively (p=1.0), and 54.9% vs 55.4% (p=1.0) achieved 90-day mRS ≤ 3, respectively). Overall, 48 patients (19.4%) had hemorrhages, of which 20 (8.0%) were symptomatic, with no difference between the groups (p=1.0). In a multicenter study, we demonstrated similar rates of good functional outcome and intracranial hemorrhage in patients with ischemic stroke when endovascular treatment was performed based on CT perfusion selection rather than time-guided selection. Our findings suggest that physiologic imaging-guided patient selection, rather than time, for endovascular reperfusion in ischemic stroke may be effective and safe.

  6. Threshold-selecting strategy for best possible ground state detection with genetic algorithms

    Science.gov (United States)

    Lässig, Jörg; Hoffmann, Karl Heinz

    2009-04-01

    Genetic algorithms are a standard heuristic to find states of low energy in complex state spaces, as given by physical systems such as spin glasses, but also in combinatorial optimization. The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover. Many schemes have been considered in the literature as possible crossover selection strategies. We show for a large class of quality measures that the best possible probability distribution for selecting individuals in each generation of the algorithm execution is a rectangular distribution over the individuals sorted by their energy values. This means uniform probabilities are assigned to a group of the individuals with the lowest energies in the population, while zero probability is assigned to individuals whose energy lies above a fixed cutoff, defined by a certain rank in the energy-sorted vector of states in the current population. The considered strategy is dubbed threshold selecting. The proof applies basic arguments of Markov chains and linear optimization and makes only a few assumptions on the underlying principles, and hence applies to a large class of algorithms.
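
    Threshold selecting as described above amounts to very little code: sort by energy, keep the best r individuals, and draw crossover partners uniformly from that pool. A Python sketch with hypothetical argument names:

        import numpy as np

        def threshold_select(population, energies, cutoff_rank, n_pairs, rng=None):
            """Threshold selecting: uniform selection probability over the
            `cutoff_rank` lowest-energy individuals, zero for all others."""
            rng = rng or np.random.default_rng()
            order = np.argsort(energies)               # lowest energy first
            pool = [population[i] for i in order[:cutoff_rank]]
            picks = rng.integers(0, cutoff_rank, size=(n_pairs, 2))
            return [(pool[i], pool[j]) for i, j in picks]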

  7. Fast shading correction for cone beam CT in radiation therapy via sparse sampling on planning CT.

    Science.gov (United States)

    Shi, Linxi; Tsui, Tiffany; Wei, Jikun; Zhu, Lei

    2017-05-01

    The image quality of cone beam computed tomography (CBCT) is limited by severe shading artifacts, hindering its quantitative applications in radiation therapy. In this work, we propose an image-domain shading correction method using the planning CT (pCT) as prior information, which is highly adaptive to the clinical environment. We propose to perform shading correction via sparse sampling on the pCT. The method starts with a coarse mapping between the first-pass CBCT images obtained from the Varian TrueBeam system and the pCT. The scatter correction method embedded in the Varian commercial software removes some image errors, but the CBCT images still contain severe shading artifacts. The difference images between the mapped pCT and the CBCT are considered as shading errors, but only sparse shading samples are selected for correction using empirical constraints, to avoid carrying over false information from the pCT. A Fourier-transform-based technique, referred to as local filtration, is proposed to efficiently process the sparse data for effective shading correction. The performance of the proposed method is evaluated on one anthropomorphic pelvis phantom and 17 patients who were scheduled for radiation therapy. (The code of the proposed method and sample data can be downloaded from https://sites.google.com/view/linxicbct.) Results: The proposed shading correction substantially improves the CBCT image quality on both the phantom and the patients to a level close to that of the pCT images. On the phantom, the spatial nonuniformity (SNU) difference between CBCT and pCT is reduced from 74 to 1 HU. The root mean square difference of SNU between CBCT and pCT is reduced from 83 to 10 HU on the pelvis patients, and from 101 to 12 HU on the thorax patients. The robustness of the proposed shading correction is fully investigated with simulated registration errors between CBCT and pCT on the phantom and mis-registration on patients. The sparse sampling scheme of our method successfully
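
    The overall idea of estimating a smooth shading field from sparse pCT-CBCT difference samples can be sketched with a Gaussian low-pass stand-in (normalized convolution); note this substitutes for the paper's Fourier-based local filtration, which is only described at a high level here, and all names are illustrative:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def shading_field_from_sparse(diff_samples, sample_mask, sigma=30.0):
            """Estimate a smooth, low-frequency shading field from sparse
            difference samples; zeros outside the mask are compensated by
            dividing by the filtered mask (normalized convolution)."""
            num = gaussian_filter(np.where(sample_mask, diff_samples, 0.0), sigma)
            den = gaussian_filter(sample_mask.astype(float), sigma)
            return num / np.maximum(den, 1e-6)

        # corrected = cbct - shading_field_from_sparse(cbct - mapped_pct, mask)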

  8. Optimized hyperspectral band selection using hybrid genetic algorithm and gravitational search algorithm

    Science.gov (United States)

    Zhang, Aizhu; Sun, Genyun; Wang, Zhenjie

    2015-12-01

    The severe information redundancy in hyperspectral images (HIs) does not contribute to data analysis accuracy; instead, it requires expensive computational resources. Consequently, to identify the most useful and valuable information in HIs, and thereby improve the accuracy of data analysis, this paper proposes a novel hyperspectral band selection method using a hybrid genetic algorithm and gravitational search algorithm (GA-GSA). In the proposed method, the GA-GSA is first mapped to the binary space. Then, the accuracy of a support vector machine (SVM) classifier and the number of selected spectral bands are utilized to measure the discriminative capability of the band subset. Finally, the band subset with the smallest number of spectral bands that still covers the most useful and valuable information is obtained. To verify the effectiveness of the proposed method, studies conducted on an AVIRIS image against two recently proposed state-of-the-art GSA variants are presented. The experimental results revealed the superiority of the proposed method and indicated that the method can indeed considerably reduce data storage costs and efficiently identify band subsets with stable and high classification precision.
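
    The fitness measure described (SVM accuracy traded against band count) is the piece that couples the optimizer to the data. A Python sketch of such a fitness function using scikit-learn; the penalty weight alpha is a guessed placeholder, not a value from the paper:

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        def band_subset_fitness(bits, X, y, alpha=0.01):
            """Fitness of a binary band mask: cross-validated SVM accuracy
            minus a penalty proportional to the number of selected bands."""
            selected = np.flatnonzero(bits)
            if selected.size == 0:
                return 0.0
            acc = cross_val_score(SVC(), X[:, selected], y, cv=3).mean()
            return acc - alpha * selected.size / X.shape[1]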

  9. A New Adaptive Gamma Correction Based Algorithm Using DWT-SVD for Non-Contrast CT Image Enhancement.

    Science.gov (United States)

    Kallel, Fathi; Ben Hamida, Ahmed

    2017-12-01

    The performances of medical image processing techniques, in particular CT scans, are usually affected by poor contrast quality introduced by some medical imaging devices. This suggests the use of contrast enhancement methods as a solution to adjust the intensity distribution of the dark image. In this paper, an advanced adaptive and simple algorithm for dark medical image enhancement is proposed. This approach is principally based on adaptive gamma correction using discrete wavelet transform with singular-value decomposition (DWT-SVD). In a first step, the technique decomposes the input medical image into four frequency sub-bands by using DWT and then estimates the singular-value matrix of the low-low (LL) sub-band image. In a second step, an enhanced LL component is generated using an adequate correction factor and inverse singular value decomposition (SVD). In a third step, for an additional improvement of LL component, obtained LL sub-band image from SVD enhancement stage is classified into two main classes (low contrast and moderate contrast classes) based on their statistical information and therefore processed using an adaptive dynamic gamma correction function. In fact, an adaptive gamma correction factor is calculated for each image according to its class. Finally, the obtained LL sub-band image undergoes inverse DWT together with the unprocessed low-high (LH), high-low (HL), and high-high (HH) sub-bands for enhanced image generation. Different types of non-contrast CT medical images are considered for performance evaluation of the proposed contrast enhancement algorithm based on adaptive gamma correction using DWT-SVD (DWT-SVD-AGC). Results show that our proposed algorithm performs better than other state-of-the-art techniques.
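
    The pipeline above (DWT decomposition, SVD-based equalization of the LL band, gamma correction, inverse DWT) can be sketched briefly with PyWavelets. The fixed correction factor and gamma below are illustrative stand-ins for the paper's adaptive, class-dependent values:

        import numpy as np
        import pywt

        def dwt_svd_enhance(image, xi=1.2, gamma=0.8):
            """Sketch: scale the singular values of the LL sub-band, apply a
            gamma correction, and invert the DWT (assumes a nonzero image)."""
            ll, (lh, hl, hh) = pywt.dwt2(image.astype(float), 'haar')
            u, s, vt = np.linalg.svd(ll, full_matrices=False)
            ll_eq = np.clip(u @ np.diag(xi * s) @ vt, 0, None)  # SVD equalization
            peak = ll_eq.max()
            ll_g = peak * (ll_eq / peak) ** gamma if peak > 0 else ll_eq
            return pywt.idwt2((ll_g, (lh, hl, hh)), 'haar')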

  10. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps reduce the number of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used identified variables that appeared statistically less important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its
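
    Two of the three rankings above map directly onto standard library calls. A brief Python sketch using scikit-learn; CFS has no one-line scikit-learn equivalent, so only IG-style mutual information and RF importances are shown:

        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import mutual_info_classif

        def rank_features(X, y, names):
            """Rank predictors by mutual information (an Information Gain
            analogue) and by Random Forest impurity-based importance."""
            ig = mutual_info_classif(X, y)
            rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
            return sorted(zip(names, ig, rf.feature_importances_),
                          key=lambda t: -t[1])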

  11. Dose related, comparative evaluation of a novel bone-subtraction algorithm in 64-row cervico-cranial CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Siebert, E.; Bohner, G. [Department of Neuroradiology, Charite Universitary Medicine Berlin (Germany); Dewey, M.; Bauknecht, C. [Department of Radiology, Charite Universitary Medicine Berlin (Germany); Klingebiel, R. [Department of Neuroradiology, Charite Universitary Medicine Berlin (Germany)], E-mail: randolf.klingebiel@charite.de

    2010-01-15

    Purpose: Comparative evaluation of a low-dose scan protocol for a novel bone-subtraction (BS) algorithm applicable to 64-row cervico-cranial (cc) CT angiography (MSCTA). Methods and patients: BS algorithm assessment was performed in cadaveric phantom studies by stepwise variation of tube current and head malrotation using a 64-row CT scanner. In order to define minimum dose requirements and the rotation correction capacity, a low-dose BS MSCTA protocol was defined and evaluated in 12 patients in comparison to a common manual bone removal (MBR) algorithm. Standard MIPs of both modalities were evaluated in a blinded manner by two neuroradiologists for image quality, composed of vessel contour sharpness and bony vessel superposition, using a five-point score for each. Effective Dose (E) and data post-processing times were determined. Results: In the experimental studies, prescan tube current could be cut down to one-sixth of the post-contrast scan dose without compromising bone subtraction, whereas incomplete subtraction appeared from four degrees of head malrotation onward. The prescan E amounted to an additional 1.1 mSv (+25%) in the clinical studies. BS MSCTA performed significantly better in terms of bony vessel superposition for vascular segments C3-C7 (p < 0.001), V1-V2, V3-V4 (p < 0.05, p < 0.001 respectively) and the ophthalmic artery (p < 0.05), whereas vessel contour sharpness in BS MSCTA only proved superior for arterial segments V3-V4 (p < 0.001) and C3-C7 (p < 0.001). MBR MSCTA received higher ratings in vessel contour sharpness for C1-C2 (p < 0.001), the callosomarginal artery (p < 0.001), M1, M2, M3 (p < 0.001 each) and the basilar artery (p < 0.001). Reconstruction times amounted to an average of 1.5 (BS MSCTA) and 3 min (MBR MSCTA) respectively. Conclusion: The novel BS algorithm provides superior skull base artery visualisation as compared to common manual bone removal algorithms, increasing the Effective Dose by one-fourth. Yet, inferior vessel contour sharpness was noted intracranially, thus

  12. Dose related, comparative evaluation of a novel bone-subtraction algorithm in 64-row cervico-cranial CT angiography

    International Nuclear Information System (INIS)

    Siebert, E.; Bohner, G.; Dewey, M.; Bauknecht, C.; Klingebiel, R.

    2010-01-01

    Purpose: Comparative evaluation of a low-dose scan protocol for a novel bone-subtraction (BS) algorithm applicable to 64-row cervico-cranial (cc) CT angiography (MSCTA). Methods and patients: BS algorithm assessment was performed in cadaveric phantom studies by stepwise variation of tube current and head malrotation using a 64-row CT scanner. In order to define minimum dose requirements and the rotation correction capacity, a low-dose BS MSCTA protocol was defined and evaluated in 12 patients in comparison to a common manual bone removal (MBR) algorithm. Standard MIPs of both modalities were evaluated in a blinded manner by two neuroradiologists for image quality, composed of vessel contour sharpness and bony vessel superposition, using a five-point score for each. Effective Dose (E) and data post-processing times were determined. Results: In the experimental studies, prescan tube current could be cut down to one-sixth of the post-contrast scan dose without compromising bone subtraction, whereas incomplete subtraction appeared from four degrees of head malrotation onward. The prescan E amounted to an additional 1.1 mSv (+25%) in the clinical studies. BS MSCTA performed significantly better in terms of bony vessel superposition for vascular segments C3-C7 (p < 0.001), V1-V2, V3-V4 (p < 0.05, p < 0.001 respectively) and the ophthalmic artery (p < 0.05), whereas vessel contour sharpness in BS MSCTA only proved superior for arterial segments V3-V4 (p < 0.001) and C3-C7 (p < 0.001). MBR MSCTA received higher ratings in vessel contour sharpness for C1-C2 (p < 0.001), the callosomarginal artery (p < 0.001), M1, M2, M3 (p < 0.001 each) and the basilar artery (p < 0.001). Reconstruction times amounted to an average of 1.5 (BS MSCTA) and 3 min (MBR MSCTA) respectively. Conclusion: The novel BS algorithm provides superior skull base artery visualisation as compared to common manual bone removal algorithms, increasing the Effective Dose by one-fourth. Yet, inferior vessel contour sharpness was noted intracranially, thus

  13. Self-organized spectrum chunk selection algorithm for Local Area LTE-Advanced

    DEFF Research Database (Denmark)

    Kumar, Sanjay; Wang, Yuanye; Marchetti, Nicola

    2010-01-01

    This paper presents a self-organized spectrum chunk selection algorithm designed to minimize the mutual intercell interference among Home eNodeBs (HeNBs), aiming to improve system throughput performance compared to the existing frequency reuse-1 scheme. The proposed algorithm is useful

  14. The linear attenuation coefficients as features of multiple energy CT image classification

    International Nuclear Information System (INIS)

    Homem, M.R.P.; Mascarenhas, N.D.A.; Cruvinel, P.E.

    2000-01-01

    We present in this paper an analysis of the linear attenuation coefficients as useful features of single and multiple energy CT images with the use of statistical pattern classification tools. We analyzed four CT images through two pointwise classifiers (the first classifier is based on the maximum-likelihood criterion and the second classifier is based on the k-means clustering algorithm) and one contextual Bayesian classifier (ICM algorithm - Iterated Conditional Modes) using an a priori Potts-Strauss model. A feature extraction procedure using the Jeffries-Matusita (J-M) distance and the Karhunen-Loeve transformation was also performed. Both the classification and the feature selection procedures were found to be in agreement with the predicted discrimination given by the separation of the linear attenuation coefficient curves for different materials
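
    The Jeffries-Matusita (J-M) distance used above for feature extraction has a closed form for Gaussian classes, JM = 2(1 - exp(-B)) with B the Bhattacharyya distance. A brief Python sketch, assuming class statistics are available as mean vectors and covariance matrices:

        import numpy as np

        def jeffries_matusita(mean1, cov1, mean2, cov2):
            """J-M distance between two Gaussian classes via the
            Bhattacharyya distance B; JM saturates at 2 for full separability."""
            d = mean1 - mean2
            c = 0.5 * (cov1 + cov2)
            b = (d @ np.linalg.solve(c, d)) / 8.0 \
                + 0.5 * np.log(np.linalg.det(c)
                               / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
            return 2.0 * (1.0 - np.exp(-b))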

  15. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    Energy Technology Data Exchange (ETDEWEB)

    Dise, J; McDonald, D; Ashenafi, M; Peng, J; Mart, C; Koch, N; Vanek, K [Medical University of South Carolina, Charleston, SC (United States)

    2016-06-15

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MatLab v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with image curvature and divergence parameters similar to those of the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels, and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning. Future work to automatically place initial seed

  16. SU-F-T-20: Novel Catheter Lumen Recognition Algorithm for Rapid Digitization

    International Nuclear Information System (INIS)

    Dise, J; McDonald, D; Ashenafi, M; Peng, J; Mart, C; Koch, N; Vanek, K

    2016-01-01

    Purpose: Manual catheter recognition remains a time-consuming aspect of high-dose-rate brachytherapy (HDR) treatment planning. In this work, a novel catheter lumen recognition algorithm was created for accurate and rapid digitization. Methods: MatLab v8.5 was used to create the catheter recognition algorithm. Initially, the algorithm searches the patient CT dataset using an intensity-based k-means filter designed to locate catheters. Once the catheters have been located, seed points are manually selected to initialize digitization of each catheter. From each seed point, the algorithm searches locally in order to automatically digitize the remaining catheter. This digitization is accomplished by finding pixels with image curvature and divergence parameters similar to those of the seed pixel. Newly digitized pixels are treated as new seed positions, and Hessian image analysis is used to direct the algorithm toward neighboring catheter pixels, and to make the algorithm insensitive to adjacent catheters that are unresolvable on CT, air pockets, and high-Z artifacts. The algorithm was tested using 11 HDR treatment plans, including the Syed template, tandem and ovoid applicator, and multi-catheter lung brachytherapy. Digitization error was calculated by comparing manually determined catheter positions to those determined by the algorithm. Results: The digitization error was 0.23 mm ± 0.14 mm axially and 0.62 mm ± 0.13 mm longitudinally at the tip. The time of digitization, following initial seed placement, was less than 1 second per catheter. The maximum total time required to digitize all tested applicators was 4 minutes (Syed template with 15 needles). Conclusion: This algorithm successfully digitizes HDR catheters for a variety of applicators with or without CT markers. The minimal axial error demonstrates the accuracy of the algorithm, and its insensitivity to image artifacts and challenging catheter positioning. Future work to automatically place initial seed
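
    The seeded local search in this record (indexed above from two sources) reduces, in its simplest form, to region growing from manually placed seed points. A toy Python sketch using only intensity similarity; the curvature, divergence and Hessian terms that make the published algorithm robust are deliberately omitted:

        import numpy as np
        from collections import deque

        def grow_catheter(volume, seed, tol=80.0):
            """Grow a connected set of voxels from a seed, accepting
            26-connected neighbors whose intensity stays near the seed's."""
            target = float(volume[seed])
            visited, queue = {seed}, deque([seed])
            offsets = [(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
                       for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)]
            while queue:
                z, y, x = queue.popleft()
                for dz, dy, dx in offsets:
                    n = (z + dz, y + dy, x + dx)
                    if n in visited or not all(0 <= c < s
                                               for c, s in zip(n, volume.shape)):
                        continue
                    if abs(float(volume[n]) - target) < tol:
                        visited.add(n)
                        queue.append(n)
            return visited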

  17. An Efficient Cost-Sensitive Feature Selection Using Chaos Genetic Algorithm for Class Imbalance Problem

    Directory of Open Access Journals (Sweden)

    Jing Bian

    2016-01-01

    Full Text Available In the era of big data, feature selection is an essential process in machine learning. Although the class imbalance problem has recently attracted a great deal of attention, little effort has been undertaken to develop feature selection techniques for it. In addition, most applications involving feature selection focus on classification accuracy but not cost, although costs are important. To cope with imbalance problems, we developed a cost-sensitive feature selection algorithm that adds a cost-based evaluation function to a filter feature selection using a chaos genetic algorithm, referred to as CSFSG. The evaluation function considers both feature-acquiring costs (test costs) and misclassification costs in the field of network security, thereby weakening the influence of the many instances from the majority classes in large-scale datasets. The CSFSG algorithm reduces the total cost of feature selection and trades off both factors. The behavior of the CSFSG algorithm was tested on a large-scale dataset of network security, using two kinds of classifiers: C4.5 and k-nearest neighbor (KNN). The results of the experimental research show that the approach is efficient and able to effectively improve classification accuracy and to decrease classification time. In addition, the results of our method are more promising than those of other cost-sensitive feature selection algorithms.
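
    A cost-based evaluation of the kind described combines acquisition and misclassification costs into one score that the genetic search can maximize. An illustrative Python sketch; the cost values and weighting are placeholders, not the paper's calibrated figures:

        def csfsg_fitness(subset, accuracy, test_costs, n_fp, n_fn,
                          cost_fp=1.0, cost_fn=5.0, w=0.5):
            """Reward accuracy, penalize the summed acquisition (test) cost of
            the chosen features plus the expected misclassification cost."""
            acquisition = sum(test_costs[f] for f in subset)
            misclassification = cost_fp * n_fp + cost_fn * n_fn
            return accuracy - w * (acquisition + misclassification)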

  18. Analyzing radiation absorption difference of dental substance by using Dual CT

    Science.gov (United States)

    Yu, H.; Lee, H. K.; Cho, J. H.; Yang, H. J.; Ju, Y. S.

    2015-07-01

    The purpose of this study was to evaluate the changes in noise and computed tomography (CT) number for each dental substance when using a metal artefact reduction algorithm; we used dual-energy CT for this study. For the study, we prepared samples of resin, titanium, gypsum, and wax, materials that are widely used in dentistry. In addition, we included nickel to increase the artefacts. While making the study materials, we made sure that the substances could be inserted into the phantom without difficulty. We scanned before and after applying the metal artefact reduction algorithm and analysed the mean CT number and noise in each case. As a result, there was no difference in CT number and noise before and after using the metal artefact reduction algorithm. However, regarding the noise value of each substance, wax showed the lowest and titanium the highest noise value after applying the metal artefact reduction algorithm. For nickel, the noise value in the artefact area decreased when the metal artefact reduction algorithm was applied. In conclusion, we expect that the effectiveness of CT examinations can be increased by applying the dual-energy metal artefact reduction algorithm.

  19. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high-performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, both to assure fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best on the colon dataset as a feature selector (29 genes selected), and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifiers, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems

  1. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  2. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.
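
    The nested sampling core that the three copies of this record describe fits in a few lines. A minimal Python skeleton, assuming user-supplied log-likelihood and prior-sampling callables; the constrained replacement draw is done by naive rejection here, whereas the paper uses HMC for exactly this step:

        import numpy as np

        def nested_sampling(log_like, sample_prior, n_live=100, n_iter=1000, rng=None):
            """Skeleton nested sampler: the worst live point's weight is folded
            into the evidence Z, then the point is replaced by a prior draw
            satisfying the hard likelihood constraint."""
            rng = rng or np.random.default_rng()
            live = [sample_prior(rng) for _ in range(n_live)]
            log_l = np.array([log_like(p) for p in live])
            log_z, log_x = -np.inf, 0.0
            for i in range(n_iter):
                worst = int(np.argmin(log_l))
                log_x_new = -(i + 1) / n_live        # prior volume shrinks geometrically
                log_w = np.log(np.exp(log_x) - np.exp(log_x_new)) + log_l[worst]
                log_z = np.logaddexp(log_z, log_w)
                log_x, threshold = log_x_new, log_l[worst]
                while True:                           # constrained replacement draw
                    cand = sample_prior(rng)
                    if log_like(cand) > threshold:
                        break
                live[worst], log_l[worst] = cand, log_like(cand)
            return log_z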

  3. Design optimization and analysis of selected thermal devices using self-adaptive Jaya algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2017-01-01

    Highlights:
    • A self-adaptive Jaya algorithm is proposed for the optimal design of thermal devices.
    • Optimization of a heat pipe, cooling tower, heat sink and thermo-acoustic prime mover is presented.
    • Results of the proposed algorithm are better than those of the other optimization techniques.
    • The proposed algorithm may be conveniently used for the optimization of other devices.
    Abstract: The present study explores the use of an improved Jaya algorithm, called the self-adaptive Jaya algorithm, for the optimal design of selected thermal devices, viz., heat pipe, cooling tower, honeycomb heat sink and thermo-acoustic prime mover. Four different optimization case studies of the selected thermal devices are presented. Researchers had attempted the same design problems in the past using the niched Pareto genetic algorithm (NPGA), response surface method (RSM), leap-frog optimization program with constraints (LFOPC) algorithm, teaching-learning based optimization (TLBO) algorithm, grenade explosion method (GEM) and multi-objective genetic algorithm (MOGA). The results achieved using the self-adaptive Jaya algorithm are compared with those achieved using the NPGA, RSM, LFOPC, TLBO, GEM and MOGA algorithms. The self-adaptive Jaya algorithm proved superior to the other optimization methods in terms of results, computational effort and function evaluations.
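
    The basic Jaya update underlying the self-adaptive variant is parameter-free: each candidate moves toward the current best solution and away from the current worst. A Python sketch of one iteration for a minimization problem (the self-adaptive extension, which also tunes the population size, is not shown):

        import numpy as np

        def jaya_step(pop, fitness, lo, hi, rng=None):
            """One Jaya iteration: X' = X + r1*(best - |X|) - r2*(worst - |X|),
            with r1, r2 uniform in [0, 1) and bounds enforced by clipping."""
            rng = rng or np.random.default_rng()
            best = pop[np.argmin(fitness)]
            worst = pop[np.argmax(fitness)]
            r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
            new = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
            return np.clip(new, lo, hi)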

  4. Single-slice rebinning method for helical cone-beam CT

    International Nuclear Information System (INIS)

    Noo, F.; Defrise, M.; Clackdoyle, R.

    1999-01-01

    In this paper, we present reconstruction results from helical cone-beam CT data, obtained using a simple and fast algorithm, which we call the CB-SSRB algorithm. This algorithm combines the single-slice rebinning method of PET imaging with the weighting schemes of spiral CT algorithms. The reconstruction is approximate but can be performed using 2D multislice fan-beam filtered backprojection. The quality of the results is surprisingly good, far exceeding what one might expect, even when the pitch of the helix is large. In particular, this algorithm yields quality from helical cone-beam data with a normalized pitch of 10 comparable to that obtained with standard spiral CT reconstruction at a normalized pitch of 2. (author)
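
    A rough Python sketch of the single-slice rebinning idea: each cone-beam detector row at each view is assigned to the axial slice where its central ray crosses the rotation axis, producing per-slice fan-beam sinograms. The geometry is simplified, the CB-SSRB weighting scheme is omitted, and all names are illustrative:

        import numpy as np

        def ssrb_rebin(cone_proj, z_source, row_heights, z_slices, d_so, d_sd):
            """cone_proj: (n_views, n_rows, n_cols) projections; z_source: source
            z per view; row_heights: detector row offsets; d_so/d_sd: source-to-
            axis and source-to-detector distances. Returns per-slice sinograms."""
            n_views, n_rows, n_cols = cone_proj.shape
            sino = np.zeros((len(z_slices), n_views, n_cols))
            counts = np.zeros((len(z_slices), n_views, 1))
            for v in range(n_views):
                for r in range(n_rows):
                    # z where this row's ray crosses the rotation axis
                    z_cross = z_source[v] + row_heights[r] * d_so / d_sd
                    s = int(np.argmin(np.abs(z_slices - z_cross)))
                    sino[s, v, :] += cone_proj[v, r, :]
                    counts[s, v, 0] += 1
            return sino / np.maximum(counts, 1)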

  5. SU-D-202-04: Validation of Deformable Image Registration Algorithms for Head and Neck Adaptive Radiotherapy in Routine Clinical Setting

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L; Pi, Y; Chen, Z; Xu, X [University of Science and Technology of China, Hefei, Anhui (China); Wang, Z [University of Science and Technology of China, Hefei, Anhui (China); The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Long, T; Luo, W; Wang, F [The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui (China)

    2016-06-15

    Purpose: To evaluate the ROI contour and accumulated dose differences obtained with different deformable image registration (DIR) algorithms for head and neck (H&N) adaptive radiotherapy. Methods: Eight H&N cancer patients were randomly selected from the affiliated hospital. During treatment, patients were rescanned every week, with ROIs delineated by a radiation oncologist on each weekly CT. New weekly treatment plans were also re-designed, with a consistent dose prescription, on the rescanned CTs and executed for one week on a Siemens CT-on-rails accelerator. In the end, we had six weekly CT scans (CT1 to CT6) and six weekly treatment plans for each patient. The primary CT1 was set as the reference CT for DIR with the remaining five weekly CTs, using the ANACONDA and MORFEUS algorithms separately in RayStation; the external skin ROI was set as the controlling ROI in both. All calculated weekly doses were deformed and accumulated on the corresponding reference CT1 according to the deformation vector fields (DVFs) generated by the two DIR algorithms respectively. We thus obtained both the ANACONDA-based and MORFEUS-based accumulated total doses on CT1 for each patient. At the same time, we mapped the ROIs on CT1 to generate the corresponding ROIs on CT6 using the ANACONDA and MORFEUS DIR algorithms. DICE coefficients between the DIR-deformed and radiation-oncologist-delineated ROIs on CT6 were calculated. Results: For the DIR-accumulated dose, PTV D95 and left-eyeball Dmax show significant differences, of 67.13 cGy and 109.29 cGy respectively (Table 1). For the DIR-mapped ROIs, PTV, spinal cord and left optic nerve show differences of −0.025, −0.127 and −0.124 (Table 2). Conclusion: Even two excellent DIR algorithms can give divergent results for ROI deformation and dose accumulation. As more and more TPSs integrate DIR modules, there is an urgent need to recognize the potential risks of using DIR clinically.
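
    The dose accumulation step described above, warping each weekly dose grid onto the reference CT through a DVF and summing, can be sketched in a few lines of Python. This assumes voxel-displacement DVFs of shape (3, nz, ny, nx) in index coordinates; the helper name and data layout are assumptions, not the TPS's internal implementation:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def accumulate_weekly_dose(weekly_doses, dvfs):
            """Pull each weekly dose back onto the reference grid through its
            deformation vector field and sum (trilinear interpolation)."""
            total = np.zeros_like(weekly_doses[0])
            grid = np.indices(total.shape).astype(float)
            for dose, dvf in zip(weekly_doses, dvfs):
                coords = grid + dvf   # reference voxel -> weekly CT voxel
                total += map_coordinates(dose, coords, order=1, mode='nearest')
            return total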

  6. Algoritmi selektivnog šifrovanja - pregled sa ocenom performansi / Selective encryption algorithms: Overview with performance evaluation

    Directory of Open Access Journals (Sweden)

    Boriša Ž. Jovanović

    2010-10-01

    Digital multimedia content is becoming widely used and increasingly exchanged over computer networks and public channels (satellite communications, wireless networks, the Internet, etc.), which are insecure media for transmitting sensitive information. Mechanisms for the cryptographic protection of image and video content are becoming more and more significant. Traditional cryptographic techniques can guarantee a high level of security, but at the cost of expensive implementation and significant transmission delays. These shortcomings can be overcome using selective encryption algorithms. Introduction: In traditional image and video content protection schemes, called fully layered, the whole content is first compressed. Then, the compressed bitstream is entirely encrypted using a standard cipher (DES - Data Encryption Standard, IDEA - International Data Encryption Algorithm, AES - Advanced Encryption Standard, etc.). The specific characteristics of this kind of data, a high transmission rate with limited bandwidth, make standard encryption algorithms inadequate. Another limitation of traditional systems is that they alter the whole bitstream syntax, which may disable some codec functionalities at the coder on the delivery site and the decoder on the receiving site. Selective encryption is a new trend in image and video content protection. As its
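
    As a concrete illustration of the general idea, rather than of any specific scheme from this overview, the sketch below encrypts only selected byte ranges of a compressed bitstream (say, headers or DC-coefficient segments) with AES in CTR mode, leaving the rest in the clear so the container remains parseable. It assumes the cryptography Python package; the byte ranges are hypothetical placeholders.

        import os
        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def selectively_encrypt(bitstream: bytes, key: bytes, nonce: bytes, ranges):
            # Encrypt only the perceptually critical ranges; decryption applies
            # the same keystream to the same ranges in the same order.
            enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
            out = bytearray(bitstream)
            for start, end in ranges:
                out[start:end] = enc.update(bytes(out[start:end]))
            return bytes(out)

        key, nonce = os.urandom(16), os.urandom(16)
        stream = os.urandom(4096)          # stand-in for a compressed video bitstream
        protected = selectively_encrypt(stream, key, nonce, ranges=[(0, 64), (512, 640)])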

  7. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Sofue, Keitaro; Sugimura, Kazuro [Kobe University Graduate School of Medicine, Department of Radiology, Kobe, Hyogo (Japan); Yoshikawa, Takeshi; Ohno, Yoshiharu [Kobe University Graduate School of Medicine, Advanced Biomedical Imaging Research Center, Kobe, Hyogo (Japan); Kobe University Graduate School of Medicine, Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe, Hyogo (Japan); Negi, Noriyuki [Kobe University Hospital, Division of Radiology, Kobe, Hyogo (Japan); Inokawa, Hiroyasu; Sugihara, Naoki [Toshiba Medical Systems Corporation, Otawara, Tochigi (Japan)

    2017-07-15

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image quality and visualization of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)
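
    The artefact index reported here is conventionally computed as AI = √(SD_artifact² − SD_reference²) from the HU standard deviations of an artifact-affected ROI and an artifact-free reference ROI. A minimal Python sketch under that assumption (the paper's exact ROI protocol may differ):

        import numpy as np

        def artefact_index(roi_artifact_hu, roi_reference_hu):
            # AI = sqrt(SD_art^2 - SD_ref^2), clipped at zero for stability.
            sd_art = float(np.std(roi_artifact_hu))
            sd_ref = float(np.std(roi_reference_hu))
            return max(sd_art**2 - sd_ref**2, 0.0) ** 0.5

        # e.g. artefact_index(liver_roi_near_metal, liver_roi_far_from_metal)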

  8. Improved image quality in abdominal CT in patients who underwent treatment for hepatocellular carcinoma with small metal implants using a raw data-based metal artifact reduction algorithm

    International Nuclear Information System (INIS)

    Sofue, Keitaro; Sugimura, Kazuro; Yoshikawa, Takeshi; Ohno, Yoshiharu; Negi, Noriyuki; Inokawa, Hiroyasu; Sugihara, Naoki

    2017-01-01

    To determine the value of a raw data-based metal artifact reduction (SEMAR) algorithm for image quality improvement in abdominal CT for patients with small metal implants. Fifty-eight patients with small metal implants (3-15 mm in size) who underwent treatment for hepatocellular carcinoma were imaged with CT. CT data were reconstructed by filtered back projection with and without the SEMAR algorithm in axial and coronal planes. To evaluate metal artefact reduction, mean CT number (HU and SD) and artefact index (AI) values within the liver were calculated. Two readers independently evaluated image quality of the liver and pancreas and visualization of vasculature using a 5-point visual score. HU and AI values and image quality on images with and without SEMAR were compared using the paired Student's t-test and Wilcoxon signed rank test. Interobserver agreement was evaluated using the linear-weighted κ test. Mean HU and AI on images with SEMAR were significantly lower than those without SEMAR (P < 0.0001). Liver and pancreas image quality and visualization of vasculature were significantly improved on CT with SEMAR (P < 0.0001) with substantial or almost perfect agreement (0.62 ≤ κ ≤ 0.83). SEMAR can improve image quality in abdominal CT in patients with small metal implants by reducing metallic artefacts. (orig.)

  9. Automated segmentation of knee and ankle regions of rats from CT images to quantify bone mineral density for monitoring treatments of rheumatoid arthritis

    Science.gov (United States)

    Cruz, Francisco; Sevilla, Raquel; Zhu, Joe; Vanko, Amy; Lee, Jung Hoon; Dogdas, Belma; Zhang, Weisheng

    2014-03-01

    Bone mineral density (BMD) obtained from a CT image is an imaging biomarker used pre-clinically for characterizing the rheumatoid arthritis (RA) phenotype. We use this biomarker in animal studies for evaluating disease progression and for testing various compounds. In the current setting, BMD measurements are obtained manually by selecting the regions of interest from three-dimensional (3-D) CT images of rat legs, which results in a laborious and low-throughput process. Combining image processing techniques, such as intensity thresholding and skeletonization, with mathematical techniques in curve fitting and curvature calculations, we developed an algorithm for quick, consistent, and automatic detection of joints in large CT data sets. The implemented algorithm has reduced analysis time for a study with 200 CT images from 10 days to 3 days and has improved the robustness of region-of-interest detection compared with manual segmentation. This algorithm has been used successfully in over 40 studies.
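
    The two image-processing steps named in the abstract are easy to sketch for a single 2-D slice. The following is a minimal Python sketch with scikit-image, assuming Otsu thresholding as the intensity threshold and discrete curvature along an ordered centerline as the joint cue; the actual pipeline is 3-D and more elaborate.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.morphology import skeletonize

        def bone_skeleton(ct_slice):
            # Threshold to bone, then reduce the mask to a 1-pixel skeleton.
            return skeletonize(ct_slice > threshold_otsu(ct_slice))

        def curvature(x, y):
            # kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^1.5 along an ordered
            # centerline; curvature peaks suggest candidate joint locations.
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            return np.abs(dx * ddy - dy * ddx) / np.power(dx**2 + dy**2, 1.5)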

  10. CT and MRI techniques for imaging around orthopedic hardware

    Energy Technology Data Exchange (ETDEWEB)

    Do, Thuy Duong; Skornitzke, Stephan; Weber, Marc-Andre [Heidelberg Univ. (Germany). Dept. of Clinical Radiology; Sutter, Reto [Uniklinik Balgrist, Zurich (Switzerland). Radiology

    2018-01-15

    Orthopedic hardware impairs image quality in cross-sectional imaging. With an increasing number of orthopedic implants in an aging population, the need to mitigate metal artifacts in computed tomography and magnetic resonance imaging is becoming increasingly relevant. This review provides an overview of the major artifacts in CT and MRI and state-of-the-art solutions to improve image quality. All steps of image acquisition from device selection, scan preparations and parameters to image post-processing influence the magnitude of metal artifacts. Technological advances like dual-energy CT with the possibility of virtual monochromatic imaging (VMI) and new materials offer opportunities to further reduce artifacts in CT and MRI. Dedicated metal artifact reduction sequences contain algorithms to reduce artifacts and improve imaging of surrounding tissue and are essential tools in orthopedic imaging to detect postoperative complications in early stages.

  11. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using a k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
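
    Since random mutation hill climbing comes out on top, here is a minimal Python sketch of that selector with a 1-NN classifier from scikit-learn: flip one instance in or out of the selected subset and keep the flip only if hold-out accuracy does not drop. The initial subset fraction and iteration budget are illustrative choices, not the paper's settings; X, y, X_val, y_val are assumed to be numpy arrays.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def rmhc_select(X, y, X_val, y_val, n_iter=1000, seed=0):
            rng = np.random.default_rng(seed)
            mask = rng.random(len(X)) < 0.05    # start from a small random subset

            def score(m):
                if m.sum() == 0:
                    return 0.0
                knn = KNeighborsClassifier(n_neighbors=1).fit(X[m], y[m])
                return knn.score(X_val, y_val)

            best = score(mask)
            for _ in range(n_iter):
                i = rng.integers(len(X))
                mask[i] = ~mask[i]              # random mutation
                s = score(mask)
                if s >= best:
                    best = s                    # keep the improvement
                else:
                    mask[i] = ~mask[i]          # revert the flip
            return mask, best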

  12. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    We present a comparison of several feature ranking methods used on two real datasets. We considered six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models, namely, IB1, Naive Bayes, the C4.5 decision tree and the RBF network. We show that the selection of ranking methods can be important for classification accuracy. In our experiments, ranking methods with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that a subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.
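
    As a small illustration of the two categories, the sketch below ranks features once with a statistical index (chi-squared) and once with an entropy-based one (mutual information) using scikit-learn; these stand in for, rather than reproduce, the six methods compared in the paper.

        import numpy as np
        from sklearn.feature_selection import chi2, mutual_info_classif
        from sklearn.preprocessing import MinMaxScaler

        def rank_features(X, y):
            # chi2 needs non-negative inputs, so rescale features to [0, 1].
            chi_scores, _ = chi2(MinMaxScaler().fit_transform(X), y)
            mi_scores = mutual_info_classif(X, y, random_state=0)
            # Return feature indices, best first, for each ranking method.
            return np.argsort(chi_scores)[::-1], np.argsort(mi_scores)[::-1]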

  13. Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN

    Directory of Open Access Journals (Sweden)

    Jan Papaj

    2014-01-01

    The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network provides the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem of the MANET occurs if the communication path is broken or disconnected for some short time period. On the other hand, DTN allows sending data in a disconnected environment thanks to its higher tolerance to delay. The hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transport of data in a disconnected environment in the hybrid MANET-DTN. To achieve secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm enables the selection of reliable nodes based on collected routing information. The algorithm is implemented in the OPNET Modeler simulator.

  14. Immediate total-body CT scanning versus conventional imaging and selective CT scanning in patients with severe trauma (REACT-2): a randomised controlled trial.

    Science.gov (United States)

    Sierink, Joanne C; Treskes, Kaij; Edwards, Michael J R; Beuker, Benn J A; den Hartog, Dennis; Hohmann, Joachim; Dijkgraaf, Marcel G W; Luitse, Jan S K; Beenen, Ludo F M; Hollmann, Markus W; Goslings, J Carel

    2016-08-13

    Published work suggests a survival benefit for patients with trauma who undergo total-body CT scanning during the initial trauma assessment; however, level 1 evidence is absent. We aimed to assess the effect of total-body CT scanning compared with the standard work-up on in-hospital mortality in patients with trauma. We undertook an international, multicentre, randomised controlled trial at four hospitals in the Netherlands and one in Switzerland. Patients aged 18 years or older with trauma with compromised vital parameters, clinical suspicion of life-threatening injuries, or severe injury were randomly assigned (1:1) by ALEA randomisation to immediate total-body CT scanning or to a standard work-up with conventional imaging supplemented with selective CT scanning. Neither doctors nor patients were masked to treatment allocation. The primary endpoint was in-hospital mortality, analysed in the intention-to-treat population and in subgroups of patients with polytrauma and those with traumatic brain injury. The χ² test was used to assess differences in mortality. This trial is registered with ClinicalTrials.gov, number NCT01523626. Between April 22, 2011, and Jan 1, 2014, 5475 patients were assessed for eligibility, 1403 of whom were randomly assigned: 702 to immediate total-body CT scanning and 701 to the standard work-up. 541 patients in the immediate total-body CT scanning group and 542 in the standard work-up group were included in the primary analysis. In-hospital mortality did not differ between groups (total-body CT 86 [16%] of 541 vs standard work-up 85 [16%] of 542; p=0.92). In-hospital mortality also did not differ between groups in subgroup analyses in patients with polytrauma (total-body CT 81 [22%] of 362 vs standard work-up 82 [25%] of 331; p=0.46) and traumatic brain injury (68 [38%] of 178 vs 66 [44%] of 151; p=0.31). Three serious adverse events were reported in patients in the total-body CT group (1%), one in the standard work-up group (<1%), and
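
    As a worked check on the primary endpoint, the reported mortality comparison (86 of 541 vs 85 of 542) can be re-tested with a χ² test in a few lines of Python; the counts come from the abstract, and the uncorrected test is an assumption about the trial's exact procedure.

        from scipy.stats import chi2_contingency

        # In-hospital deaths vs survivors, per study group.
        table = [[86, 541 - 86],    # immediate total-body CT
                 [85, 542 - 85]]    # standard work-up
        chi2_stat, p, dof, _ = chi2_contingency(table, correction=False)
        print(f"chi2 = {chi2_stat:.4f}, p = {p:.2f}")   # p is ~0.92, as reported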

  15. SU-E-J-115: Correlation of Displacement Vector Fields Calculated by Deformable Image Registration Algorithms with Motion Parameters of CT Images with Well-Defined Targets and Controlled-Motion

    Energy Technology Data Exchange (ETDEWEB)

    Jaskowiak, J; Ahmad, S; Ali, I [University of Oklahoma Health Sciences Center, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)

    2015-06-15

    Purpose: To investigate the correlation of displacement vector fields (DVFs) calculated by deformable image registration algorithms with motion parameters in helical, axial and cone-beam CT images with motion artifacts. Methods: A mobile thorax phantom was used, with well-defined targets of different sizes made from water-equivalent material and inserted in foam to simulate lung lesions. The thorax phantom was imaged with helical, axial and cone-beam CT. The phantom was moved with a cyclic motion with different motion amplitudes and frequencies along the superior-inferior direction. Different deformable image registration algorithms, including demons, fast demons, Horn-Schunck and iterative optical flow from the DIRART software, were used to deform CT images of the phantom with different motion patterns. The CT images of the mobile phantom were deformed to CT images of the stationary phantom. Results: The values of displacement vectors calculated by the deformable image registration algorithms correlated strongly with motion amplitude: large displacement vectors were calculated for CT images with large motion amplitudes. For example, the maximal displacement vectors were nearly equal to the motion amplitudes (5 mm, 10 mm or 20 mm) at interfaces between the mobile targets and lung tissue, while the minimal displacement vectors were nearly equal to the negative of the motion amplitudes. The maximal and minimal displacement vectors matched the edges of the blurred targets along the Z-axis (motion direction), while DVFs were small in the other directions. This indicates that the edges blurred by phantom motion were shifted largely to match the actual target edges, and these shifts were nearly equal to the motion amplitude. Conclusions: The DVFs from deformable image registration algorithms correlated well with the motion amplitude of well-defined mobile targets. This can be used to extract motion parameters such as amplitude. However, as motion amplitudes increased, image artifacts increased

  16. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    Science.gov (United States)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of greenhouse gas (GHG) emission in Automated Storage and Retrieval Systems (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.

  17. Automatic motor task selection via a bandit algorithm for a brain-controlled button

    Science.gov (United States)

    Fruitet, Joan; Carpentier, Alexandra; Munos, Rémi; Clerc, Maureen

    2013-02-01

    Objective. Brain-computer interfaces (BCIs) based on sensorimotor rhythms use a variety of motor tasks, such as imagining moving the right or left hand, the feet or the tongue. Finding the tasks that yield best performance, specific to each user, is a time-consuming preliminary phase to a BCI experiment. This study presents a new adaptive procedure to automatically select (online) the most promising motor task for an asynchronous brain-controlled button. Approach. We develop for this purpose an adaptive algorithm UCB-classif based on the stochastic bandit theory and design an EEG experiment to test our method. We compare (offline) the adaptive algorithm to a naïve selection strategy which uses uniformly distributed samples from each task. We also run the adaptive algorithm online to fully validate the approach. Main results. By not wasting time on inefficient tasks, and focusing on the most promising ones, this algorithm results in a faster task selection and a more efficient use of the BCI training session. More precisely, the offline analysis reveals that the use of this algorithm can reduce the time needed to select the most appropriate task by almost half without loss in precision, or alternatively, allow us to investigate twice the number of tasks within a similar time span. Online tests confirm that the method leads to an optimal task selection. Significance. This study is the first to optimize the task selection phase with an adaptive procedure. By increasing the number of tasks that can be tested in a given time span, the proposed method could contribute to reducing ‘BCI illiteracy’.
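
    The UCB-classif algorithm itself is specified in the paper; the sketch below shows only the generic UCB1 skeleton it builds on, applied to motor-task selection, in Python. The reward function, number of tasks and budget are hypothetical stand-ins for one calibration trial's classification score.

        import math, random

        def ucb_select_task(trial_reward, n_tasks=4, budget=120):
            counts, sums = [0] * n_tasks, [0.0] * n_tasks
            for t in range(1, budget + 1):
                if t <= n_tasks:
                    arm = t - 1                # play every task once first
                else:                          # mean reward + exploration bonus
                    arm = max(range(n_tasks),
                              key=lambda a: sums[a] / counts[a]
                                            + math.sqrt(2 * math.log(t) / counts[a]))
                r = trial_reward(arm)          # run one calibration trial
                counts[arm] += 1
                sums[arm] += r
            return max(range(n_tasks), key=lambda a: sums[a] / counts[a])

        # Toy usage: task 2 is secretly this user's best motor task.
        best = ucb_select_task(lambda a: random.gauss((0.55, 0.60, 0.75, 0.50)[a], 0.1))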

  18. A Shearlet-based algorithm for quantum noise removal in low-dose CT images

    Science.gov (United States)

    Zhang, Aguan; Jiang, Huiqin; Ma, Ling; Liu, Yumin; Yang, Xiaopeng

    2016-03-01

    Low-dose CT (LDCT) scanning is a potential way to reduce the radiation exposure of X-rays in the population, so it is necessary to improve the quality of low-dose CT images. In this paper, we propose an effective algorithm for quantum noise removal in LDCT images using the shearlet transform. Because quantum noise can be modeled as a Poisson process, we first transform the quantum noise by using the Anscombe variance-stabilizing transform (VST), producing approximately Gaussian noise with unit variance. Second, the noise-free shearlet coefficients are obtained by adaptive hard-threshold processing in the shearlet domain. Third, we reconstruct the de-noised image using the inverse shearlet transform. Finally, an inverse Anscombe transform is applied to the de-noised image, producing the improved image. The main contribution is to combine the Anscombe VST with the shearlet transform. In this way, edge coefficients and noise coefficients can be separated from the high-frequency sub-bands effectively. A number of experiments are performed on LDCT images using the proposed method. Both quantitative and visual results show that the proposed method can effectively reduce the quantum noise while enhancing subtle details. It has certain value in clinical application.
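
    The Anscombe VST and its algebraic inverse are standard and easy to reproduce; shearlet implementations are less common in Python, so the sketch below substitutes a wavelet hard threshold (PyWavelets) for the shearlet stage purely as a stand-in. The wavelet choice, decomposition level and threshold are illustrative.

        import numpy as np
        import pywt

        def anscombe(x):
            # Maps Poisson data to approximately unit-variance Gaussian.
            return 2.0 * np.sqrt(x + 3.0 / 8.0)

        def inverse_anscombe(y):
            # Simple algebraic inverse of the forward transform.
            return (y / 2.0) ** 2 - 3.0 / 8.0

        def denoise_ldct(img, wavelet="db4", thresh=3.0):
            # Stand-in for the shearlet stage: hard-threshold the detail
            # coefficients of the variance-stabilised image.
            y = anscombe(img)
            coeffs = pywt.wavedec2(y, wavelet, level=3)
            approx, details = coeffs[0], coeffs[1:]
            details = [tuple(pywt.threshold(d, thresh, mode="hard") for d in lvl)
                       for lvl in details]
            y_hat = pywt.waverec2([approx] + details, wavelet)
            return inverse_anscombe(y_hat[:img.shape[0], :img.shape[1]])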

  19. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  20. Implementation techniques and acceleration of DBPF reconstruction algorithm based on GPGPU for helical cone beam CT

    International Nuclear Information System (INIS)

    Shen Le; Xing Yuxiang

    2010-01-01

    The derivative backprojection filtered (DBPF) algorithm for helical cone-beam CT is a newly developed exact reconstruction method. Due to its large computational complexity, the reconstruction is rather slow for practical use. A general-purpose graphics processing unit (GPGPU) is a SIMD parallel hardware architecture with powerful floating-point operation capacity. In this paper, we propose a new method for PI-line choice and sampling-grid selection, and a parallelized PI-line reconstruction algorithm implemented on NVIDIA's Compute Unified Device Architecture (CUDA). Numerical simulation studies are carried out to validate our method. Compared with a conventional CPU implementation, the CUDA-accelerated method provides images of the same quality with a speedup factor of 318. Optimization strategies for the GPU acceleration are presented. Finally, the influence of the parameters of the PI-line samples on reconstruction speed and image quality is discussed. (authors)

  1. CT image registration in sinogram space.

    Science.gov (United States)

    Mao, Weihua; Li, Tianfang; Wink, Nicole; Xing, Lei

    2007-09-01

    Object displacement in a CT scan is generally reflected in CT projection data or sinogram. In this work, the direct relationship between object motion and the change of CT projection data (sinogram) is investigated and this knowledge is applied to create a novel algorithm for sinogram registration. Calculated and experimental results demonstrate that the registration technique works well for registering rigid 2D or 3D motion in parallel and fan beam samplings. Problem and solution for 3D sinogram-based registration of metallic fiducials are also addressed. Since the motion is registered before image reconstruction, the presented algorithm is particularly useful when registering images with metal or truncation artifacts. In addition, this algorithm is valuable for dealing with situations where only limited projection data are available, making it appealing for various applications in image guided radiation therapy.
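
    The key property this method builds on — for parallel-beam sampling, a rigid in-plane shift (dx, dy) displaces the projection at angle θ by dx·cosθ + dy·sinθ along the detector — suggests a simple registration scheme. A minimal Python sketch, assuming parallel-beam sinograms with one detector row per angle; the per-view cross-correlation and least-squares fit are illustrative, not the authors' exact algorithm.

        import numpy as np

        def register_sinograms(sino_ref, sino_mov, thetas, det_spacing=1.0):
            # For a rigid shift (dx, dy), the view at angle theta moves by
            # s(theta) = dx*cos(theta) + dy*sin(theta) along the detector, so
            # the per-view shifts fit a two-parameter sinusoid by least squares.
            shifts = []
            for ref, mov in zip(sino_ref, sino_mov):   # one detector row per angle
                xc = np.correlate(mov - mov.mean(), ref - ref.mean(), mode="full")
                shifts.append((int(xc.argmax()) - (len(ref) - 1)) * det_spacing)
            A = np.column_stack([np.cos(thetas), np.sin(thetas)])
            (dx, dy), *_ = np.linalg.lstsq(A, np.asarray(shifts, float), rcond=None)
            return dx, dy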

  2. CT image registration in sinogram space

    International Nuclear Information System (INIS)

    Mao Weihua; Li Tianfang; Wink, Nicole; Xing Lei

    2007-01-01

    Object displacement in a CT scan is generally reflected in CT projection data or sinogram. In this work, the direct relationship between object motion and the change of CT projection data (sinogram) is investigated and this knowledge is applied to create a novel algorithm for sinogram registration. Calculated and experimental results demonstrate that the registration technique works well for registering rigid 2D or 3D motion in parallel and fan beam samplings. Problem and solution for 3D sinogram-based registration of metallic fiducials are also addressed. Since the motion is registered before image reconstruction, the presented algorithm is particularly useful when registering images with metal or truncation artifacts. In addition, this algorithm is valuable for dealing with situations where only limited projection data are available, making it appealing for various applications in image guided radiation therapy

  3. Automated measurement of CT noise in patient images with a novel structure coherence feature

    International Nuclear Information System (INIS)

    Chun, Minsoo; Kim, Jong Hyo; Choi, Young Hun

    2015-01-01

    While the assessment of CT noise constitutes an important task for the optimization of scan protocols in clinical routine, the majority of noise measurements in practice still rely on manual operation, thus limiting their efficiency and reliability. This study presents an algorithm for the automated measurement of CT noise in patient images with a novel structure coherence feature. The proposed algorithm consists of a four-step procedure including subcutaneous fat tissue selection, calculation of the structure coherence feature, determination of homogeneous ROIs, and estimation of the average noise level. In an evaluation with 94 CT scans (16 517 images) of pediatric and adult patients along with the participation of two radiologists, ROIs were placed on a homogeneous fat region with 99.46% accuracy, and the agreement of the automated noise measurements with the radiologists' reference noise measurements (PCC = 0.86) was substantially higher than the within- and between-rater agreements of noise measurements (PCC_within = 0.75, PCC_between = 0.70). In addition, the absolute noise level measurements matched closely the theoretical noise levels generated by a reduced-dose simulation technique. Our proposed algorithm has the potential to be used for examining the appropriateness of radiation dose and the image quality of CT protocols for research purposes as well as clinical routine. (paper)

  4. A Multiagent Evolutionary Algorithm for the Resource-Constrained Project Portfolio Selection and Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Yongyi Shou

    2014-01-01

    A multiagent evolutionary algorithm is proposed to solve the resource-constrained project portfolio selection and scheduling problem. The proposed algorithm has a dual level structure. In the upper level a set of agents make decisions to select appropriate project portfolios. Each agent selects its project portfolio independently. The neighborhood competition operator and self-learning operator are designed to improve the agent’s energy, that is, the portfolio profit. In the lower level the selected projects are scheduled simultaneously and completion times are computed to estimate the expected portfolio profit. A priority rule-based heuristic is used by each agent to solve the multiproject scheduling problem. A set of instances were generated systematically from the widely used Patterson set. Computational experiments confirmed that the proposed evolutionary algorithm is effective for the resource-constrained project portfolio selection and scheduling problem.

  5. Evaluation of deformable image registration for contour propagation between CT and cone-beam CT images in adaptive head and neck radiotherapy.

    Science.gov (United States)

    Li, X; Zhang, Y Y; Shi, Y H; Zhou, L H; Zhen, X

    2016-04-29

    Deformable image registration (DIR) is a critical technique in adaptive radiotherapy (ART), used to propagate contours between planning computerized tomography (CT) images and treatment CT/cone-beam CT (CBCT) images to account for organ deformation in treatment re-planning. To validate the ability and accuracy of DIR algorithms in mapping organ-at-risk (OAR) contours, seven intensity-based DIR strategies were tested on the planning CT and weekly CBCT images from six head and neck cancer patients who underwent 6-7 weeks of intensity-modulated radiation therapy (IMRT). Three similarity metrics, i.e. the Dice similarity coefficient (DSC), the percentage error (PE) and the Hausdorff distance (HD), are employed to measure the agreement between the propagated contours and the physician-delineated ground truths. It is found that the performance of all the evaluated DIR algorithms declines as the treatment proceeds. No statistically significant performance difference is observed between different DIR algorithms (p > 0.05), except for the double force demons (DFD), which yields the worst result in terms of DSC and PE. For the metric HD, all the DIR algorithms behaved unsatisfactorily, with no statistically significant performance difference (p = 0.273). These findings suggest that special care should be taken when utilizing the intensity-based DIR algorithms involved in this study to deform OAR contours between CT and CBCT, especially for organs with low contrast.
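
    Of the three metrics, HD is the one with a ready-made routine in SciPy; a minimal sketch, assuming the contours are given as (N, 3) point arrays in millimetres:

        from scipy.spatial.distance import directed_hausdorff

        def hausdorff(points_a, points_b):
            # Symmetric Hausdorff distance: the larger of the two directed ones.
            return max(directed_hausdorff(points_a, points_b)[0],
                       directed_hausdorff(points_b, points_a)[0])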

  6. TH-E-17A-01: Internal Respiratory Surrogate for 4D CT Using Fourier Transform and Anatomical Features

    Energy Technology Data Exchange (ETDEWEB)

    Hui, C; Suh, Y; Robertson, D; Pan, T; Das, P; Crane, C; Beddar, S [MD Anderson Cancer Center, Houston, TX (United States)

    2014-06-15

    Purpose: To develop a novel algorithm to generate internal respiratory signals for sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm extracted multiple time resolved features as potential respiratory signals. These features were taken from the 4D CT images and its Fourier transformed space. Several low-frequency locations in the Fourier space and selected anatomical features from the images were used as potential respiratory signals. A clustering algorithm was then used to search for the group of appropriate potential respiratory signals. The chosen signals were then normalized and averaged to form the final internal respiratory signal. Performance of the algorithm was tested in 50 4D CT data sets and results were compared with external signals from the real-time position management (RPM) system. Results: In almost all cases, the proposed algorithm generated internal respiratory signals that visibly matched the external respiratory signals from the RPM system. On average, the end inspiration times calculated by the proposed algorithm were within 0.1 s of those given by the RPM system. Less than 3% of the calculated end inspiration times were more than one time frame away from those given by the RPM system. In 3 out of the 50 cases, the proposed algorithm generated internal respiratory signals that were significantly smoother than the RPM signals. In these cases, images sorted using the internal respiratory signals showed fewer artifacts in locations corresponding to the discrepancy in the internal and external respiratory signals. Conclusion: We developed a robust algorithm that generates internal respiratory signals from 4D CT images. In some cases, it even showed the potential to outperform the RPM system. The proposed algorithm is completely automatic and generally takes less than 2 min to process. It can be easily implemented into the clinic and can potentially replace the use of external surrogates.
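
    A heavily simplified Python sketch of the core idea: trace the magnitudes of a few low-frequency 2-D Fourier coefficients of a cine stack over time as candidate respiratory signals, then average the mutually correlated ones. The clustering step is reduced here to a correlation threshold, and all constants are illustrative, not the authors' method.

        import numpy as np

        def candidate_signals(cine, k=3):
            # cine: (T, H, W) image stack at one couch position.
            F = np.fft.fft2(cine, axes=(1, 2))
            cands = [np.abs(F[:, u, v]) for u in range(k) for v in range(k)
                     if (u, v) != (0, 0)]
            return [(c - c.mean()) / (c.std() + 1e-9) for c in cands]

        def internal_signal(cine, corr_thresh=0.7):
            # Keep candidates that oscillate together, then average them.
            cands = np.array(candidate_signals(cine))
            corr = np.array([np.corrcoef(cands[0], c)[0, 1] for c in cands])
            group = cands[corr > corr_thresh]
            return (group if len(group) >= 2 else cands).mean(axis=0)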

  7. TH-E-17A-01: Internal Respiratory Surrogate for 4D CT Using Fourier Transform and Anatomical Features

    International Nuclear Information System (INIS)

    Hui, C; Suh, Y; Robertson, D; Pan, T; Das, P; Crane, C; Beddar, S

    2014-01-01

    Purpose: To develop a novel algorithm to generate internal respiratory signals for sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm extracted multiple time resolved features as potential respiratory signals. These features were taken from the 4D CT images and its Fourier transformed space. Several low-frequency locations in the Fourier space and selected anatomical features from the images were used as potential respiratory signals. A clustering algorithm was then used to search for the group of appropriate potential respiratory signals. The chosen signals were then normalized and averaged to form the final internal respiratory signal. Performance of the algorithm was tested in 50 4D CT data sets and results were compared with external signals from the real-time position management (RPM) system. Results: In almost all cases, the proposed algorithm generated internal respiratory signals that visibly matched the external respiratory signals from the RPM system. On average, the end inspiration times calculated by the proposed algorithm were within 0.1 s of those given by the RPM system. Less than 3% of the calculated end inspiration times were more than one time frame away from those given by the RPM system. In 3 out of the 50 cases, the proposed algorithm generated internal respiratory signals that were significantly smoother than the RPM signals. In these cases, images sorted using the internal respiratory signals showed fewer artifacts in locations corresponding to the discrepancy in the internal and external respiratory signals. Conclusion: We developed a robust algorithm that generates internal respiratory signals from 4D CT images. In some cases, it even showed the potential to outperform the RPM system. The proposed algorithm is completely automatic and generally takes less than 2 min to process. It can be easily implemented into the clinic and can potentially replace the use of external surrogates

  8. A review of channel selection algorithms for EEG signal processing

    Science.gov (United States)

    Alotaiby, Turky; El-Samie, Fathi E. Abd; Alshebeili, Saleh A.; Ahmad, Ishtiaq

    2015-12-01

    Digital processing of electroencephalography (EEG) signals has now been popularly used in a wide variety of applications such as seizure detection/prediction, motor imagery classification, mental task classification, emotion classification, sleep state classification, and drug effects diagnosis. With the large number of EEG channels acquired, it has become apparent that efficient channel selection algorithms are needed with varying importance from one application to another. The main purpose of the channel selection process is threefold: (i) to reduce the computational complexity of any processing task performed on EEG signals by selecting the relevant channels and hence extracting the features of major importance, (ii) to reduce the amount of overfitting that may arise due to the utilization of unnecessary channels, for the purpose of improving the performance, and (iii) to reduce the setup time in some applications. Signal processing tools such as time-domain analysis, power spectral estimation, and wavelet transform have been used for feature extraction and hence for channel selection in most of channel selection algorithms. In addition, different evaluation approaches such as filtering, wrapper, embedded, hybrid, and human-based techniques have been widely used for the evaluation of the selected subset of channels. In this paper, we survey the recent developments in the field of EEG channel selection methods along with their applications and classify these methods according to the evaluation approach.

  9. A new cone-beam X-ray CT system with a reduced size planar detector

    International Nuclear Information System (INIS)

    Li Liang; Chen Zhiqiang; Zhang Li; Xing Yuxiang; Kang Kejun

    2006-01-01

    In a traditional cone-beam CT system, the cost of production and computation is very high. The authors propose a transversely truncated cone-beam X-ray CT system with a reduced-size detector positioned off-center, in which the X-ray beams cover only half of the object. The reduced detector size cuts the cost and the X-ray dose of the CT system. Existing CT reconstruction algorithms are not directly applicable to this new CT system. Hence, the authors develop a BPF-type direct backprojection algorithm. Different from traditional rebinning methods, our algorithm directly backprojects the pretreated projection data without rebinning. This makes the algorithm compact and computationally more efficient. Finally, numerical simulations and practical experiments are done to validate the proposed algorithm. (authors)

  10. Association between the mean CT value on a scout view and the dependent mA selection method in coronary artery imaging on 64-row multi-slice spiral CT

    International Nuclear Information System (INIS)

    Gao Jianhua; Li Tao; Mi Fengtang; Li Na; Cui Ying; Dai Ruping; Li Jianying

    2009-01-01

    Objective: To characterize the association between the mean CT value on a scout view and the dependent mA selection method, and to evaluate the clinical value of a mA selection method based on the scout-view mean CT value in obtaining individualized scan protocols and consistent image quality for a patient population in 64-row MSCT coronary angiography (CTCA). Methods: One hundred patients (group A) underwent CTCA consecutively using a standard protocol with a fixed mA. The mean CT value of a fixed ROI (region of interest) on the scout AP view and the CTCA image noise (standard deviation at the root of the ascending aorta) were measured. The correlation between CT values and noise was studied to establish a formula and a look-up list giving the mA required to obtain a consistent CTCA image noise from the measured scout-view CT value. Another 100 patients (group B) were scanned using the same parameters as group A except the mA, and the CT value was also measured. The mA was determined by the list established previously. The CTCA image quality (IQ) as well as the image noise (IN) and the effective dose (ED) for the two groups were statistically analyzed using the t-test. The CT findings for 32 patients in group B were also compared with selective coronary angiography (SCA) results. The sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy of CTCA for the detection of significant stenosis were obtained. Results: The formula relating the required mA to the scout-view CT value was: X_mA = F_mA × [(K1 × CT_scout + C1)/IN_a]². The CTCA images in group B had statistically higher IN (27.66±2.57 vs 22.22±4.17, t=11.33, P=0.000), but there was no statistical difference between the IQ scores of the two groups (3.29±0.66 vs 3.37±0.67, t=0.009, P=0.990), and the ED [(8.72±2.51) versus (12.53±0.90) mSv] was 30% lower for group B (P<0.01). For the 32 patients in group B who had SCA, the CTCA sensitivity, specificity, positive predictive value, negative
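
    The reported relation translates directly into code. A minimal sketch, where K1 and C1 are the regression constants obtained from group A and the numerical values below are placeholders, not the paper's calibration:

        def required_ma(fixed_ma, ct_scout, k1, c1, target_noise):
            # X_mA = F_mA * ((K1 * CT_scout + C1) / IN_a) ** 2
            return fixed_ma * ((k1 * ct_scout + c1) / target_noise) ** 2

        # Hypothetical calibration, for illustration only.
        ma = required_ma(fixed_ma=600, ct_scout=-150.0, k1=-0.05, c1=15.0,
                         target_noise=27.7)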

  11. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    Science.gov (United States)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, under limitations on resource availability and available time, so that resources must be allocated to each project. These resources are human resources, machine resources and raw material resources. This is treated as a constraint so that the predetermined budget is not exceeded. The problem can thus be formulated mathematically as a multi-objective function with constraints to be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for the selection of the right projects. A multi-objective genetic algorithm is then described as one multi-objective combinatorial optimization method to simplify the project selection process at large scale.

  12. Natural selection and algorithmic design of mRNA.

    Science.gov (United States)

    Cohen, Barry; Skiena, Steven

    2003-01-01

    Messenger RNA (mRNA) sequences serve as templates for proteins according to the triplet code, in which each of the 4³ = 64 different codons (sequences of three consecutive nucleotide bases) in RNA either terminates translation or maps to one of the 20 different amino acids (or residues) which build up proteins. Because there are more codons than residues, there is inherent redundancy in the coding. Certain residues (e.g., tryptophan) have only a single corresponding codon, while other residues (e.g., arginine) have as many as six corresponding codons. This freedom implies that the number of possible RNA sequences coding for a given protein grows exponentially in the length of the protein. Thus nature has wide latitude to select among mRNA sequences which are informationally equivalent, but structurally and energetically divergent. In this paper, we explore how nature takes advantage of this freedom and how to algorithmically design structures more energetically favorable than have been built through natural selection. In particular: (1) Natural Selection--we perform the first large-scale computational experiment comparing the stability of mRNA sequences from a variety of organisms to random synonymous sequences which respect the codon preferences of the organism. This experiment was conducted on over 27,000 sequences from 34 microbial species with 36 genomic structures. We provide evidence that in all genomic structures highly stable sequences are disproportionately abundant, and in 19 of 36 cases highly unstable sequences are disproportionately abundant. This suggests that the stability of mRNA sequences is subject to natural selection. (2) Artificial Selection--motivated by these biological results, we examine the algorithmic problem of designing the most stable and unstable mRNA sequences which code for a target protein. We give a polynomial-time dynamic programming solution to the most stable sequence problem (MSSP), which is asymptotically no more complex

  13. A New Manufacturing Service Selection and Composition Method Using Improved Flower Pollination Algorithm

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2016-01-01

    With an increasing number of manufacturing services, the means by which to select and compose these manufacturing services have become a challenging problem. It can be regarded as a multiobjective optimization problem that involves a variety of conflicting quality of service (QoS) attributes. In this study, a multiobjective optimization model of manufacturing service composition is presented that is based on QoS and an environmental index. Next, the skyline operator is applied to reduce the solution space. Then a new method called the improved Flower Pollination Algorithm (FPA) is proposed for solving the problem of manufacturing service selection and composition. The improved FPA enhances the performance of the basic FPA by combining the latter with the crossover and mutation operators of the Differential Evolution (DE) algorithm. Finally, a case study is conducted to compare the proposed method with other evolutionary algorithms, including the Genetic Algorithm, DE, basic FPA, and extended FPA. The experimental results reveal that the proposed method performs best at solving the problem of manufacturing service selection and composition.
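
    The skyline-operator step is the easiest part to make concrete: it discards every candidate service that is dominated by another on all QoS attributes. A minimal Python sketch, assuming the attributes are oriented so that smaller is better; the example attribute values are invented for illustration.

        import numpy as np

        def skyline(points):
            # Keep the Pareto non-dominated services. q dominates p when
            # q <= p in every attribute and q < p in at least one.
            pts = np.asarray(points, float)
            keep = []
            for i, p in enumerate(pts):
                dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
                if not dominated:
                    keep.append(i)
            return keep

        # e.g. three candidate services with (cost, latency) attributes
        print(skyline([[1.0, 5.0], [2.0, 3.0], [2.5, 3.5]]))  # -> [0, 1]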

  14. Ultrasound and PET-CT image fusion for prostate brachytherapy image guidance

    International Nuclear Information System (INIS)

    Hasford, F.

    2015-01-01

    , indicating the system’s ability to visualize low-contrast objects 5.4 cm into a patient. The PET-CT system’s performance evaluation also produced satisfactory results in accordance with the tolerances recommended by IAEA Human Health Series 1. The computed tomography laser alignment test ensured that all CT gantry lasers were properly aligned with the patient bed. The image display width test ensured that the volume of the patient or organ being measured and displayed was equivalent to that selected on the CT scanner console, to a deviation of ±1 mm. Results from the CT image uniformity test showed that mean CT numbers in peripheral regions of interest deviated from the central mean to within the recommended tolerance level of ±5 HU, indicating a good level of uniformity. Computed tomographic dose indices for head and body phantoms were estimated as 44.30 mGy and 20.08 mGy, compared to console-displayed doses of 42.40 mGy and 19.49 mGy respectively. Registration accuracy for PET-CT images was found to have displacements of less than 1 mm in the x, y and z directions. Image quality testing of the PET-CT system was performed to produce images simulating those obtained in a total-body imaging study involving both hot and cold lesions. Percentage contrast estimates of 49.3% and 52.6% were obtained for hot spheres of diameters 1.3 cm and 2.2 cm respectively, while contrast estimates of 74.8% and 75.6% were obtained for cold spheres of diameters 2.8 cm and 3.7 cm respectively. The PET-CT system resolution was estimated as 0.5 ± 0.01 cm, indicating the system’s ability to image tumours of about 5 mm in size. The satisfactory results from the performance evaluation of the ultrasound and PET-CT systems paved the way for them to be used in acquiring prostatic images for the study. The developed MATLAB image enhancement algorithm enhanced the quality of prostatic images before fusion. The algorithm was developed by mapping the intensity values in raw images to new values in a modified image using the imadjust function. Contrast

  15. Adaptation of the Maracas algorithm for carotid artery segmentation and stenosis quantification on CT images

    International Nuclear Information System (INIS)

    Maria A Zuluaga; Maciej Orkisz; Edgar J F Delgado; Vincent Dore; Alfredo Morales Pinzon; Marcela Hernandez Hoyos

    2010-01-01

    This paper describes the adaptation of the MARACAS algorithm to the segmentation and quantification of vascular structures in CTA images of the carotid artery. The MARACAS algorithm, which is based on an elastic model and on a multi-scale eigen-analysis of the inertia matrix, was originally designed to segment a single artery in MRA images. The modifications are primarily aimed at addressing the specificities of CT images and the bifurcations. The algorithms implemented in this new version are classified into two levels. 1. Low-level processing (filtering of noise and directional artifacts, enhancement and pre-segmentation) to improve the quality of the image and to pre-segment it. These techniques are based on a priori information about noise, artifacts and the typical gray-level ranges of lumen, background and calcifications. 2. High-level processing to extract the centerline of the artery, to segment the lumen and to quantify the stenosis. At this level, we apply a priori knowledge of the shape and anatomy of vascular structures. The method was evaluated on 31 datasets from the carotid lumen segmentation and stenosis grading grand challenge 2009. The segmentation results obtained an average Dice similarity score of 80.4% compared to the reference segmentations, and the mean stenosis quantification error was 14.4%.

  16. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling

    Directory of Open Access Journals (Sweden)

    Hala Alshamlan

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.

  17. mRMR-ABC: A Hybrid Gene Selection Algorithm for Cancer Classification Using Microarray Gene Expression Profiling.

    Science.gov (United States)

    Alshamlan, Hala; Badr, Ghada; Alohali, Yousef

    2015-01-01

    An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to analyzing a microarray gene expression profile. In addition, we propose an innovative feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, mRMR-ABC, to select informative genes from microarray profiles. The new approach is based on a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR when combined with a genetic algorithm (mRMR-GA) and mRMR when combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results prove that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested using both datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems.

  18. Video error concealment using block matching and frequency selective extrapolation algorithms

    Science.gov (United States)

    P. K., Rajani; Khaparde, Arti

    2017-06-01

    Error concealment (EC) is a technique at the decoder side to hide transmission errors. It is done by analyzing the spatial or temporal information from available video frames. It is very important to recover distorted video because video is used in various applications such as video telephony, video conferencing, TV, DVD, internet video streaming, video games, etc. Retransmission-based and resilience-based methods are also used for error removal, but these methods add delay and redundant data, so error concealment is the best option for error hiding. In this paper, the block matching error concealment algorithm is compared with the frequency selective extrapolation algorithm. Both works are based on the concealment of manually corrupted video frames given as input. The parameters used for objective quality measurement were PSNR (peak signal-to-noise ratio) and SSIM (structural similarity index). The original video frames along with the corrupted frames are processed with both error concealment algorithms. According to the simulation results, frequency selective extrapolation shows better quality measures, with 48% higher PSNR and 94% higher SSIM than the block matching algorithm.
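
    Both quality metrics are available off the shelf in scikit-image; a minimal sketch for scoring a concealed frame against the undamaged original, assuming 8-bit grayscale frames:

        from skimage.metrics import peak_signal_noise_ratio, structural_similarity

        def concealment_quality(original, concealed):
            # Higher PSNR (dB) and SSIM (0..1) mean better concealment.
            psnr = peak_signal_noise_ratio(original, concealed, data_range=255)
            ssim = structural_similarity(original, concealed, data_range=255)
            return psnr, ssim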

  19. Automatic selection of optimal systolic and diastolic reconstruction windows for dual-source CT coronary angiography

    International Nuclear Information System (INIS)

    Seifarth, H.; Puesken, M.; Wienbeck, S.; Maintz, D.; Heindel, W.; Juergens, K.U.; Fischbach, R.

    2009-01-01

    The aim of this study was to assess the performance of a motion-map algorithm that automatically determines optimal reconstruction windows for dual-source coronary CT angiography. In datasets from 50 consecutive patients, optimal systolic and diastolic reconstruction windows were determined using the motion-map algorithm. For manual determination of the optimal reconstruction window, datasets were reconstructed in 5% steps throughout the RR interval. Motion artifacts were rated for each major coronary vessel using a five-point scale. Mean motion scores using the motion-map algorithm were 2.4 ± 0.8 for systolic reconstructions and 1.9 ± 0.8 for diastolic reconstructions. Using the manual approach, overall motion scores were significantly better (1.9 ± 0.5 and 1.7 ± 0.6, p < 0.05). Diagnostic image quality was achieved in > 90% of cases using either approach. Using the automated approach, there was a negative correlation between heart rate and motion scores for systolic reconstructions (ρ = -0.26, p < 0.05), with poorer results at heart rates > 80 bpm (systolic reconstruction). (orig.)

  20. Internal respiratory surrogate in multislice 4D CT using a combination of Fourier transform and anatomical features

    International Nuclear Information System (INIS)

    Hui, Cheukkai; Suh, Yelin; Robertson, Daniel; Beddar, Sam; Pan, Tinsu; Das, Prajnan; Crane, Christopher H.

    2015-01-01

    Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns. The clustered group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system on 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors’ proposed algorithm matched with the RPM signal based on their normalized cross correlation. For these data sets with matching respiratory signals, the average difference between the end inspiration times (Δt_ins) in the IRS and RPM signal was 0.11 s, and only 2.1% of Δt_ins were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s in the nonmatching couch positions, and 35.4% of them had a Δt_ins greater than 0.5 s. At couch positions in which the IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implied that, when the IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal. Conclusions: The authors’ proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data sets.

  1. Internal respiratory surrogate in multislice 4D CT using a combination of Fourier transform and anatomical features

    Energy Technology Data Exchange (ETDEWEB)

    Hui, Cheukkai; Suh, Yelin [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Robertson, Daniel; Beddar, Sam, E-mail: abeddar@mdanderson.org [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and Department of Radiation Physics, The University of Texas Graduate School of Biomedical Sciences, Houston, Texas 77030 (United States); Pan, Tinsu [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and Department of Imaging Physics, The University of Texas Graduate School of Biomedical Sciences, Houston, Texas 77030 (United States); Das, Prajnan; Crane, Christopher H. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)

    2015-07-15

    Purpose: The purpose of this study was to develop a novel algorithm to create a robust internal respiratory signal (IRS) for retrospective sorting of four-dimensional (4D) computed tomography (CT) images. Methods: The proposed algorithm combines information from the Fourier transform of the CT images and from internal anatomical features to form the IRS. The algorithm first extracts potential respiratory signals from low-frequency components in the Fourier space and selected anatomical features in the image space. A clustering algorithm then constructs groups of potential respiratory signals with similar temporal oscillation patterns. The clustered group with the largest number of similar signals is chosen to form the final IRS. To evaluate the performance of the proposed algorithm, the IRS was computed and compared with the external respiratory signal from the real-time position management (RPM) system on 80 patients. Results: In 72 (90%) of the 4D CT data sets tested, the IRS computed by the authors’ proposed algorithm matched the RPM signal based on their normalized cross correlation. For these data sets with matching respiratory signals, the average difference between the end inspiration times (Δt_ins) in the IRS and RPM signal was 0.11 s, and only 2.1% of Δt_ins were more than 0.5 s apart. In the eight (10%) 4D CT data sets in which the IRS and the RPM signal did not match, the average Δt_ins was 0.73 s in the nonmatching couch positions, and 35.4% of them had a Δt_ins greater than 0.5 s. At couch positions in which the IRS did not match the RPM signal, a correlation-based metric indicated poorer matching of neighboring couch positions in the RPM-sorted images. This implied that, when the IRS did not match the RPM signal, the images sorted using the IRS showed fewer artifacts than the clinical images sorted using the RPM signal. Conclusions: The authors’ proposed algorithm can generate robust IRSs that can be used for retrospective sorting of 4D CT data.

  2. SU-F-T-441: Dose Calculation Accuracy in CT Images Reconstructed with Artifact Reduction Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ng, C; Chan, S; Lee, F; Ngan, R [Queen Elizabeth Hospital (Hong Kong); Lee, V [University of Hong Kong, Hong Kong, HK (Hong Kong)

    2016-06-15

    Purpose: Accuracy of radiotherapy dose calculation in patients with surgical implants is complicated by two factors: first, the accuracy of the CT numbers; second, the accuracy of the dose calculation itself. We compared measured dose with dose calculated on CT images reconstructed with FBP and with an artifact reduction algorithm (OMAR, Philips) for a phantom with high-density inserts. Dose calculations were done with Varian AAA and AcurosXB. Methods: A phantom was constructed from solid water in which two titanium or stainless steel rods could be inserted. The phantom was scanned with the Philips Brilliance Big Bore CT. Image reconstruction was done with FBP and OMAR. Two 6 MV single-field photon plans were constructed for each phantom. Radiochromic films were placed at different locations to measure the dose deposited. One plan had normal incidence on the titanium/steel rods; in the second plan, the beam was at almost glancing incidence on the metal rods. Measurements were then compared with dose calculated with AAA and AcurosXB. Results: The use of OMAR images slightly improved the dose calculation accuracy. The agreement between measured and calculated dose was best with AcurosXB and images reconstructed with OMAR. Dose calculated on the titanium phantom had better agreement with measurement. Large discrepancies were seen at points directly above and below the high-density inserts. Both AAA and AcurosXB underestimated the dose directly above the metal surface and overestimated the dose below the metal surface. Doses measured downstream of the metal were all within 3% of calculated values. Conclusion: When planning treatment for patients with metal implants, care must be taken to acquire correct CT images to improve dose calculation accuracy. Moreover, large discrepancies between measured and calculated dose were observed at the metal/tissue interface. Care must be taken in estimating the dose to critical structures that come into contact with metal.

  3. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    International Nuclear Information System (INIS)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    2008-01-01

    There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist; furthermore, the recognition result is unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features by using the contribution analysis algorithm of the NN. Cluster analysis is applied to analyze the effectiveness of the selected features, and the time of feature extraction is evaluated. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time of feature extraction

  4. Histogram-driven cupping correction (HDCC) in CT

    Science.gov (United States)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of the spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30 × 40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus executing without time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. However, the method can also work in combination with other cupping correction algorithms or in a calibration manner.
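
    The core of HDCC as described is a simplex search over polynomial coefficients that minimizes the joint entropy of the image and its gradient. A minimal sketch follows, assuming the polynomial basis images (generated in the paper by GPU forward and backprojection) are supplied as precomputed arrays:

        import numpy as np
        from scipy.optimize import minimize

        def joint_entropy(image, bins=64):
            """Joint entropy of an image and its gradient magnitude."""
            grad = np.hypot(*np.gradient(image))
            hist, _, _ = np.histogram2d(image.ravel(), grad.ravel(), bins=bins)
            p = hist[hist > 0] / hist.sum()
            return -np.sum(p * np.log2(p))

        def correct_cupping(image, basis_images):
            """Optimize the linear combination of basis images by simplex search."""
            def cost(coeffs):
                return joint_entropy(image + sum(c * b for c, b in zip(coeffs, basis_images)))
            res = minimize(cost, x0=np.zeros(len(basis_images)), method='Nelder-Mead')
            return image + sum(c * b for c, b in zip(res.x, basis_images))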

  5. Improvement of the temporal resolution of cardiac CT reconstruction algorithms using an optimized filtering step

    International Nuclear Information System (INIS)

    Roux, S.; Desbat, L.; Koenig, A.; Grangeat, P.

    2005-01-01

    In this paper we study a property of the filtering step of the multi-cycle reconstruction algorithms used in the field of cardiac CT. We show that the common filtering procedure is not optimal in the case of divergent geometry and slightly decreases the temporal resolution. We propose to use the filtering procedure related to the work of Noo et al. (F. Noo, M. Defrise, R. Clackdoyle, and H. Kudo, "Image reconstruction from fan-beam projections on less than a short scan," Phys. Med. Biol., 47:2525-2546, July 2002) and show that this alternative reaches the optimal temporal resolution with the same computational effort. (N.C.)

  6. featsel: A framework for benchmarking of feature selection algorithms and cost functions

    OpenAIRE

    Marcelo S. Reis; Gustavo Estrela; Carlos Eduardo Ferreira; Junior Barrera

    2017-01-01

    In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions.

  7. Diagnostic performance of reduced-dose CT with a hybrid iterative reconstruction algorithm for the detection of hypervascular liver lesions: a phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Nakamoto, Atsushi; Tanaka, Yoshikazu; Juri, Hiroshi; Nakai, Go; Narumi, Yoshifumi [Osaka Medical College, Department of Radiology, Takatsuki, Osaka (Japan); Yoshikawa, Shushi [Osaka Medical College Hospital, Central Radiology Department, Takatsuki, Osaka (Japan)

    2017-07-15

    To investigate the diagnostic performance of reduced-dose CT with a hybrid iterative reconstruction (IR) algorithm for the detection of hypervascular liver lesions. Thirty liver phantoms with or without simulated hypervascular lesions were scanned with a 320-slice CT scanner at control-dose (40 mAs) and reduced-dose (30 and 20 mAs) settings. Control-dose images were reconstructed with filtered back projection (FBP), and reduced-dose images were reconstructed with FBP and a hybrid IR algorithm. Objective image noise and the lesion-to-liver contrast-to-noise ratio (CNR) were evaluated quantitatively. Images were interpreted independently by two blinded radiologists, and jackknife alternative free-response receiver-operating characteristic (JAFROC) analysis was performed. Hybrid IR images at reduced-dose settings (both 30 and 20 mAs) yielded significantly lower objective image noise and higher CNR than control-dose FBP images (P < .05). However, hybrid IR images at reduced-dose settings had a lower JAFROC1 figure of merit than control-dose FBP images, although only the difference between the 20 mAs images and the control-dose FBP images was significant for both readers (P < .01). An aggressive reduction of the radiation dose would impair the detectability of hypervascular liver lesions, even though objective image noise and CNR would be preserved by a hybrid IR algorithm. (orig.)
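
    For reference, the contrast-to-noise ratio used as the quantitative endpoint here is straightforward to compute from two regions of interest; a minimal sketch with made-up HU values:

        import numpy as np

        def cnr(lesion_roi, background_roi):
            """Lesion-to-liver CNR: absolute contrast over background noise."""
            return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

        lesion = np.random.normal(120, 15, 500)   # HU samples inside a simulated lesion ROI
        liver = np.random.normal(90, 15, 500)     # HU samples in adjacent liver parenchyma
        print(f"CNR = {cnr(lesion, liver):.2f}")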

  8. The admissible portfolio selection problem with transaction costs and an improved PSO algorithm

    Science.gov (United States)

    Chen, Wei; Zhang, Wei-Guo

    2010-05-01

    In this paper, we discuss the portfolio selection problem with transaction costs under the assumption that there exist admissible errors on expected returns and risks of assets. We propose a new admissible efficient portfolio selection model and design an improved particle swarm optimization (PSO) algorithm because traditional optimization algorithms fail to work efficiently for our proposed problem. Finally, we offer a numerical example to illustrate the proposed effective approaches and compare the admissible portfolio efficient frontiers under different constraints.

  9. Follow-up CT and CT angiography after intracranial aneurysm clipping and coiling - improved image quality by iterative metal artifact reduction

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Georg; Hempel, Johann-Martin; Oergel, Anja; Hauser, Till-Karsten; Ernemann, Ulrike; Hennersdorf, Florian [Eberhard Karls University Tuebingen, Department of Diagnostic and Interventional Neuroradiology, Tuebingen (Germany); Bongers, Malte Niklas [Eberhard Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany)

    2017-07-15

    This paper aims to evaluate a new iterative metal artifact reduction algorithm for post-interventional evaluation of brain tissue and intracranial arteries. The data of 20 patients who underwent follow-up cranial CT and cranial CT angiography after clipping or coiling of an intracranial aneurysm were retrospectively analyzed. After the images were processed using a novel iterative metal artifact reduction algorithm, images with and without metal artifact reduction were qualitatively evaluated by two readers using a five-point Likert scale. Moreover, artifact strength was quantitatively assessed in terms of CT attenuation and standard deviation alterations. The qualitative analysis yielded a significant increase in image quality (p = 0.0057) in iteratively processed images, with substantial inter-observer agreement (κ = 0.72), while the CTA image quality did not differ (p = 0.864) and even showed vessel contrast reduction in six cases (30%). The mean relative attenuation difference was 27% without metal artifact reduction vs. 11% for iterative metal artifact reduction images (p = 0.0003). The new iterative metal artifact reduction algorithm enhances non-enhanced CT image quality after clipping or coiling, but in CT angiography images the contrast of adjacent vessels can be compromised. (orig.)

  10. Follow-up CT and CT angiography after intracranial aneurysm clipping and coiling - improved image quality by iterative metal artifact reduction

    International Nuclear Information System (INIS)

    Bier, Georg; Hempel, Johann-Martin; Oergel, Anja; Hauser, Till-Karsten; Ernemann, Ulrike; Hennersdorf, Florian; Bongers, Malte Niklas

    2017-01-01

    This paper aims to evaluate a new iterative metal artifact reduction algorithm for post-interventional evaluation of brain tissue and intracranial arteries. The data of 20 patients who underwent follow-up cranial CT and cranial CT angiography after clipping or coiling of an intracranial aneurysm were retrospectively analyzed. After the images were processed using a novel iterative metal artifact reduction algorithm, images with and without metal artifact reduction were qualitatively evaluated by two readers using a five-point Likert scale. Moreover, artifact strength was quantitatively assessed in terms of CT attenuation and standard deviation alterations. The qualitative analysis yielded a significant increase in image quality (p = 0.0057) in iteratively processed images, with substantial inter-observer agreement (κ = 0.72), while the CTA image quality did not differ (p = 0.864) and even showed vessel contrast reduction in six cases (30%). The mean relative attenuation difference was 27% without metal artifact reduction vs. 11% for iterative metal artifact reduction images (p = 0.0003). The new iterative metal artifact reduction algorithm enhances non-enhanced CT image quality after clipping or coiling, but in CT angiography images the contrast of adjacent vessels can be compromised. (orig.)

  11. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    Energy Technology Data Exchange (ETDEWEB)

    Maier, Joscha, E-mail: joscha.maier@dkfz.de [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Sawall, Stefan; Kachelrieß, Marc [Medical Physics in Radiology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany and Institute of Medical Physics, University of Erlangen–Nürnberg, 91052 Erlangen (Germany)

    2014-05-15

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and in the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  12. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure

    International Nuclear Information System (INIS)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-01-01

    Purpose: Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and in the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Methods: Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Results: Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the

  13. Assessment of dedicated low-dose cardiac micro-CT reconstruction algorithms using the left ventricular volume of small rodents as a performance measure.

    Science.gov (United States)

    Maier, Joscha; Sawall, Stefan; Kachelrieß, Marc

    2014-05-01

    Phase-correlated microcomputed tomography (micro-CT) imaging plays an important role in the assessment of mouse models of cardiovascular diseases and in the determination of functional parameters such as the left ventricular volume. As the current gold standard, the phase-correlated Feldkamp reconstruction (PCF) shows poor performance in the case of low-dose scans, so more sophisticated reconstruction algorithms have been proposed to enable low-dose imaging. In this study, the authors focus on the McKinnon-Bates (MKB) algorithm, the low-dose phase-correlated (LDPC) reconstruction, and the high-dimensional total variation minimization reconstruction (HDTV), and investigate their potential to accurately determine the left ventricular volume at different dose levels from 50 to 500 mGy. The results were verified in phantom studies of a five-dimensional (5D) mathematical mouse phantom. Micro-CT data of eight mice, each administered an x-ray dose of 500 mGy, were acquired, retrospectively gated for cardiac and respiratory motion, and reconstructed using PCF, MKB, LDPC, and HDTV. Dose levels down to 50 mGy were simulated by using only a fraction of the projections. Contrast-to-noise ratio (CNR) was evaluated as a measure of image quality. Left ventricular volume was determined using different segmentation algorithms (Otsu, level sets, region growing). Forward projections of the 5D mouse phantom were performed to simulate a micro-CT scan. The simulated data were processed in the same way as the real mouse data sets. Compared to the conventional PCF reconstruction, the MKB, LDPC, and HDTV algorithms yield images of increased quality in terms of CNR. While the MKB reconstruction only provides small improvements, a significant increase of the CNR is observed in LDPC and HDTV reconstructions. The phantom studies demonstrate that left ventricular volumes can be determined accurately at 500 mGy. For lower dose levels which were simulated for real mouse data sets, the HDTV algorithm shows the

  14. An Improved SPEA2 Algorithm with Adaptive Selection of Evolutionary Operators Scheme for Multiobjective Optimization Problems

    Directory of Open Access Journals (Sweden)

    Fuqing Zhao

    2016-01-01

    Full Text Available A fixed evolutionary mechanism is usually adopted in multiobjective evolutionary algorithms, and their operators are static during the evolutionary process, which prevents the algorithm from fully exploiting the search space and makes it easy to trap in local optima. In this paper, a SPEA2 algorithm based on adaptive selection of evolution operators (AOSPEA) is proposed. The proposed algorithm can adaptively select the simulated binary crossover, polynomial mutation, and differential evolution operators during the evolutionary process according to their contribution to the external archive. Meanwhile, the convergence performance of the proposed algorithm is analyzed with a Markov chain. Simulation results on standard benchmark functions reveal that the proposed algorithm outperforms the other classical multiobjective evolutionary algorithms.
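
    A minimal sketch of the adaptive-selection idea, assuming operator credit is simply the count of offspring that entered the external archive (the paper's exact credit assignment is not given in the abstract):

        import random

        class OperatorSelector:
            """Pick an operator with probability proportional to its archive contribution."""
            def __init__(self, operators):
                self.credit = {op: 1.0 for op in operators}  # smoothing prior

            def choose(self):
                ops = list(self.credit)
                return random.choices(ops, weights=[self.credit[o] for o in ops])[0]

            def reward(self, op, entered_archive):
                if entered_archive:          # offspring survived into the archive
                    self.credit[op] += 1.0

        sel = OperatorSelector(["sbx_crossover", "polynomial_mutation", "differential_evolution"])
        op = sel.choose()
        sel.reward(op, entered_archive=True)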

  15. Core Business Selection Based on Ant Colony Clustering Algorithm

    Directory of Open Access Journals (Sweden)

    Yu Lan

    2014-01-01

    Full Text Available Core business is the most important business to an enterprise with diversified businesses. In this paper, we first introduce the definition and characteristics of core business and then describe the ant colony clustering algorithm. In order to test the effectiveness of the proposed method, Tianjin Port Logistics Development Co., Ltd. is selected as the research object. Based on the current situation of the company's development, its core business can be identified with the ant colony clustering algorithm. The results indicate that the proposed method is an effective way to determine the core business of a company.

  16. Accelerating statistical image reconstruction algorithms for fan-beam x-ray CT using cloud computing

    Science.gov (United States)

    Srivastava, Somesh; Rao, A. Ravishankar; Sheinin, Vadim

    2011-03-01

    Statistical image reconstruction algorithms potentially offer many advantages to x-ray computed tomography (CT), e.g. lower radiation dose. But their adoption in practical CT scanners requires extra computation power, which is traditionally provided by incorporating additional computing hardware (e.g. CPU clusters, GPUs, FPGAs, etc.) into a scanner. An alternative solution is to access the required computation power over the internet from a cloud computing service, which is orders of magnitude more cost-effective. This is because users only pay a small pay-as-you-go fee for the computation resources used (i.e. CPU time, storage, etc.), and completely avoid purchase, maintenance and upgrade costs. In this paper, we investigate the benefits and shortcomings of using cloud computing for statistical image reconstruction. We parallelized the most time-consuming parts of our application, the forward and back projectors, using MapReduce, the standard parallelization library on clouds. From preliminary investigations, we found that a large speedup is possible at a very low cost. But communication overheads inside MapReduce can limit the maximum speedup, and a better MapReduce implementation might become necessary in the future. All the experiments for this paper, including development and testing, were completed on the Amazon Elastic Compute Cloud (EC2) for less than $20.

  17. Semi-automatic delineation using weighted CT-MRI registered images for radiotherapy of nasopharyngeal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Fitton, I. [European Georges Pompidou Hospital, Department of Radiology, 20 rue Leblanc, 75015, Paris (France); Cornelissen, S. A. P. [Image Sciences Institute, UMC, Department of Radiology, P.O. Box 85500, 3508 GA Utrecht (Netherlands); Duppen, J. C.; Rasch, C. R. N.; Herk, M. van [The Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Department of Radiotherapy, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Steenbakkers, R. J. H. M. [University Medical Center Groningen, Department of Radiation Oncology, Hanzeplein 1, 9713 GZ Groningen (Netherlands); Peeters, S. T. H. [UZ Gasthuisberg, Herestraat 49, 3000 Leuven, Belgique (Belgium); Hoebers, F. J. P. [Maastricht University Medical Center, Department of Radiation Oncology (MAASTRO clinic), GROW School for Oncology and Development Biology Maastricht, 6229 ET Maastricht (Netherlands); Kaanders, J. H. A. M. [UMC St-Radboud, Department of Radiotherapy, Geert Grooteplein 32, 6525 GA Nijmegen (Netherlands); Nowak, P. J. C. M. [ERASMUS University Medical Center, Department of Radiation Oncology,Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)

    2011-08-15

    Purpose: To develop a delineation tool that refines physician-drawn contours of the gross tumor volume (GTV) in nasopharynx cancer, using combined pixel value information from x-ray computed tomography (CT) and magnetic resonance imaging (MRI) during delineation. Methods: Operator-guided delineation assisted by a so-called "snake" algorithm was applied to weighted CT-MRI registered images. The physician delineates a rough tumor contour that is continuously adjusted by the snake algorithm using the underlying image characteristics. The algorithm was evaluated on five nasopharyngeal cancer patients. Different linear weightings of CT and MRI were tested as input for the snake algorithm and compared with respect to contrast and tumor-to-noise ratio (TNR). The semi-automatic delineation was compared with manual contouring by seven experienced radiation oncologists. Results: A good compromise for TNR and contrast was obtained by weighting CT twice as strongly as MRI. The new algorithm did not notably reduce interobserver variability; it did, however, reduce the average delineation time by 6 min per case. Conclusions: The authors developed a user-driven tool for delineation and correction based on a snake algorithm and registered, weighted CT and MR images. The algorithm adds morphological information from CT during the delineation on MRI and accelerates the delineation task.
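
    The weighting found to be a good compromise (CT twice as strong as MRI) amounts to a simple linear blend of the registered images; a minimal sketch, in which the z-score intensity normalization is an assumption of this example rather than a detail from the paper:

        import numpy as np

        def blend(ct, mri, w_ct=2.0, w_mri=1.0):
            """Linearly blend registered CT and MR images after normalization."""
            ct = (ct - ct.mean()) / ct.std()      # assumed normalization step
            mri = (mri - mri.mean()) / mri.std()
            return (w_ct * ct + w_mri * mri) / (w_ct + w_mri)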

  18. The Parameters Selection of PSO Algorithm Influencing the Performance of Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    He Yan

    2016-01-01

    Full Text Available The particle swarm optimization (PSO) is an optimization algorithm based on intelligent optimization. Parameter selection of PSO plays an important role in the performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed as the control parameters vary, including particle number, acceleration constants, inertia weight and maximum limited velocity. Then, PSO with dynamic parameters is applied to the neural network training for gearbox fault diagnosis, and the results with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed to improve the performance of PSO.
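
    For context, the parameters discussed above appear in the canonical PSO update; a minimal sketch with commonly used (not paper-specific) default values:

        import numpy as np

        def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, v_max=0.5):
            """One PSO update: inertia w, acceleration constants c1/c2, velocity limit v_max."""
            r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -v_max, v_max)         # the "maximum limited velocity"
            return x + v, v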

  19. Automated tube potential selection for standard chest and abdominal CT in follow-up patients with testicular cancer: comparison with fixed tube potential

    Energy Technology Data Exchange (ETDEWEB)

    Gnannt, Ralph; Winklehner, Anna; Frauenfelder, Thomas; Alkadhi, Hatem [University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Eberli, Daniel [University Hospital Zurich, Clinic for Urology, Zurich (Switzerland); Knuth, Alexander [University Hospital Zurich, Clinic for Oncology, Zurich (Switzerland)

    2012-09-15

    To evaluate prospectively, in patients with testicular cancer, the radiation dose-saving potential and image quality of contrast-enhanced chest and abdominal CT with automated tube potential selection. Forty consecutive patients with testicular cancer underwent contrast-enhanced arterio-venous chest and portal-venous abdominal CT with automated tube potential selection (protocol B; tube potential 80-140 kVp), which is based on the attenuation of the CT topogram. All had a first CT at 120 kVp (protocol A) using the same 64-section CT machine and similar settings. Image quality was assessed; dose information (CTDIvol) was noted. Image noise and attenuation in the liver and spleen were significantly higher for protocol B (P < 0.05 each), whereas attenuation in the deltoid and erector spinae muscles was similar. In protocol B, tube potential was reduced to 100 kVp in 18 chest and 33 abdominal examinations, and to 80 kVp in 5 abdominal CT examinations; it increased to 140 kVp in one patient. Image quality of examinations using both CT protocols was rated as diagnostic. CTDIvol was significantly lower for protocol B compared to protocol A (reduction by 12%, P < 0.01). In patients with testicular cancer, the radiation dose of chest and abdominal CT can be reduced with automated tube potential selection, while image quality is preserved. (orig.)

  20. Computation-aware algorithm selection approach for interlaced-to-progressive conversion

    Science.gov (United States)

    Park, Sang-Jun; Jeon, Gwanggil; Jeong, Jechang

    2010-05-01

    We discuss deinterlacing results in a computationally constrained and varied environment. The proposed computation-aware algorithm selection approach (CASA) for fast interlaced-to-progressive conversion consists of three methods: the line-averaging (LA) method for plain regions, the modified edge-based line-averaging (MELA) method for medium regions, and the proposed covariance-based adaptive deinterlacing (CAD) method for complex regions. The proposed CASA uses two criteria, mean-squared error (MSE) and CPU time, for assigning the method. The principal idea of CAD is based on the correspondence between the high- and low-resolution covariances. We estimated the local covariance coefficients from an interlaced image using Wiener filtering theory and then used these optimal minimum-MSE interpolation coefficients to obtain a deinterlaced image. The CAD method, though more robust than most known methods, was not found to be very fast compared to the others. To alleviate this issue, we proposed an adaptive selection approach that uses a fast deinterlacing algorithm rather than relying on the CAD algorithm alone. The proposed hybrid approach of switching between the conventional schemes (LA and MELA) and our CAD reduces the overall computational load. A reliable condition for switching the schemes was derived from a wide set of initial training processes. The results of computer simulations showed that the proposed methods outperformed a number of methods presented in the literature.
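
    A minimal sketch of the dispatch logic implied by the three-method design; the variance-based activity measure and both thresholds are illustrative assumptions, since the paper assigns methods using MSE and CPU-time criteria learned during training:

        import numpy as np

        def choose_method(block, t_plain=25.0, t_medium=250.0):
            """Route an image block to LA, MELA, or CAD by local activity."""
            activity = np.var(block)
            if activity < t_plain:
                return "LA"      # cheap line averaging for plain regions
            if activity < t_medium:
                return "MELA"    # edge-based line averaging for medium regions
            return "CAD"         # covariance-based adaptation for complex regions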

  1. Lesion Detection in CT Images Using Deep Learning Semantic Segmentation Technique

    Science.gov (United States)

    Kalinovsky, A.; Liauchuk, V.; Tarasau, A.

    2017-05-01

    In this paper, the problem of automatic detection of tuberculosis lesions in 3D lung CT images is considered as a benchmark for testing algorithms based on the modern concept of Deep Learning. For training and testing of the algorithms, a domestic dataset of 338 3D CT scans of tuberculosis patients with manually labelled lesions was used. Algorithms based on deep convolutional networks were implemented and applied in three different ways: slice-wise lesion detection in 2D images using semantic segmentation, slice-wise lesion detection in 2D images using a sliding-window technique, and straightforward detection of lesions via semantic segmentation in whole 3D CT scans. The algorithms demonstrate superior performance compared to algorithms based on conventional image analysis methods.

  2. Global left ventricular function in cardiac CT. Evaluation of an automated 3D region-growing segmentation algorithm

    International Nuclear Information System (INIS)

    Muehlenbruch, Georg; Das, Marco; Hohl, Christian; Wildberger, Joachim E.; Guenther, Rolf W.; Mahnken, Andreas H.; Rinck, Daniel; Flohr, Thomas G.; Koos, Ralf; Knackstedt, Christian

    2006-01-01

    The purpose was to evaluate a new semi-automated 3D region-growing segmentation algorithm for functional analysis of the left ventricle in multislice CT (MSCT) of the heart. Twenty patients underwent contrast-enhanced MSCT of the heart (collimation 16 x 0.75 mm; 120 kV; 550 mAs_eff). Multiphase image reconstructions with 1-mm axial slices and 8-mm short-axis slices were performed. Left ventricular volume measurements (end-diastolic volume, end-systolic volume, ejection fraction and stroke volume) from manually drawn endocardial contours in the short-axis slices were compared to semi-automated region-growing segmentation of the left ventricle from the 1-mm axial slices. The post-processing time for both methods was recorded. Proper segmentation of the left ventricle with the new region-growing algorithm was feasible in 13/20 patients (65%). In these patients, the signal-to-noise ratio was higher than in the remaining patients (3.2±1.0 vs. 2.6±0.6). Volume measurements of both segmentation algorithms showed an excellent correlation (all P≤0.0001); the limits of agreement for the ejection fraction were 2.3±8.3 ml. In the patients with proper segmentation, the mean post-processing time using the region-growing algorithm was reduced by 44.2%. On the basis of a good contrast-enhanced data set, left ventricular volume analysis using the new semi-automated region-growing segmentation algorithm is technically feasible, accurate and more time-effective. (orig.)
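
    The abstract does not spell out the region-growing criterion, so the following is only a generic intensity-window sketch of 3D region growing from a seed placed in the contrast-filled left ventricle:

        import numpy as np
        from collections import deque

        def region_grow(volume, seed, low, high):
            """Accept 6-connected voxels whose intensity lies in [low, high]."""
            mask = np.zeros(volume.shape, dtype=bool)
            queue = deque([seed])
            while queue:
                p = queue.popleft()
                if mask[p] or not (low <= volume[p] <= high):
                    continue
                mask[p] = True
                for axis in range(volume.ndim):
                    for step in (-1, 1):
                        q = list(p)
                        q[axis] += step
                        if 0 <= q[axis] < volume.shape[axis]:
                            queue.append(tuple(q))
            return mask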

  3. CAnat: An algorithm for the automatic segmentation of anatomy of medical images

    International Nuclear Information System (INIS)

    Caon, M.; Gobert, L.; Mariusz, B.

    2011-01-01

    Full text: To develop a method to automatically categorise organs and tissues displayed in medical images. Dosimetry calculations using Monte Carlo methods require a mathematical representation of human anatomy, e.g. a voxel phantom. For a whole body, their construction involves processing several hundred images to identify each organ and tissue; the process is very time-consuming. This project is developing a Computational Anatomy (CAnat) algorithm to automatically recognise and classify the different tissues in a tomographic image. Methods: The algorithm utilizes the Statistical Region Merging (SRM) technique. The SRM depends on one estimated parameter. The parameter is a measure of the statistical complexity of the image and can be automatically adjusted to suit individual image features. This allows for automatic tuning of the coarseness of the overall segmentation as well as object-specific selection for further tasks. CAnat was tested on two CT images selected to represent different anatomical complexities. In the mid-thigh image, the tissues/regions of interest are air, fat, muscle, bone marrow and compact bone; in the pelvic image, fat, urinary bladder and anus/colon, muscle, cancellous bone, and compact bone. Segmentation results were evaluated using the Jaccard index, which is a measure of set agreement: an index of one indicates perfect agreement between CAnat and manual segmentation. The Jaccard indices for the mid-thigh CT were 0.99, 0.89, 0.97, 0.63 and 0.88, respectively, and for the pelvic CT were 0.99, 0.81, 0.77, 0.93, 0.53, 0.76, respectively. Conclusion: The highly accurate preliminary segmentation results demonstrate the feasibility of the CAnat algorithm.
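
    The Jaccard index used for evaluation is the ratio of the intersection to the union of the two segmentations; a minimal sketch:

        import numpy as np

        def jaccard(auto_mask, manual_mask):
            """|A intersect B| / |A union B|; 1.0 indicates perfect agreement."""
            a, b = auto_mask.astype(bool), manual_mask.astype(bool)
            return (a & b).sum() / (a | b).sum()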

  4. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
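
    For reference, the CT model with known turn rate has a closed-form state transition matrix; a minimal sketch for the state [x, vx, y, vy] with turn rate omega and sampling interval T:

        import numpy as np

        def ct_transition(omega, T):
            """Coordinated-turn transition matrix for a known turn rate."""
            s, c = np.sin(omega * T), np.cos(omega * T)
            return np.array([[1, s / omega,       0, -(1 - c) / omega],
                             [0, c,               0, -s],
                             [0, (1 - c) / omega, 1, s / omega],
                             [0, s,               0, c]])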

  5. The ANACONDA algorithm for deformable image registration in radiotherapy

    International Nuclear Information System (INIS)

    Weistrand, Ola; Svensson, Stina

    2015-01-01

    Purpose: The purpose of this work was to describe a versatile algorithm for deformable image registration with applications in radiotherapy and to validate it on thoracic 4DCT data as well as CT/cone beam CT (CBCT) data. Methods: The ANAtomically CONstrained Deformation Algorithm (ANACONDA) combines image information (i.e., intensities) with anatomical information as provided by contoured image sets. The registration problem is formulated as a nonlinear optimization problem and solved with an in-house developed solver, tailored to this problem. The objective function, which is minimized during optimization, is a linear combination of four nonlinear terms: 1. an image similarity term; 2. a grid regularization term, which aims at keeping the deformed image grid smooth and invertible; 3. a shape-based regularization term, which works to keep the deformation anatomically reasonable when regions of interest are present in the reference image; and 4. a penalty term, which is added to the optimization problem when controlling structures are used, aimed at deforming the selected structure in the reference image to the corresponding structure in the target image. Results: To validate ANACONDA, the authors have used 16 publicly available thoracic 4DCT data sets for which target registration errors from several algorithms have been reported in the literature. On average for the 16 data sets, the target registration error is 1.17 ± 0.87 mm, the Dice similarity coefficient is 0.98 for the two lungs, and the image similarity, measured by the correlation coefficient, is 0.95. The authors have also validated ANACONDA using two pelvic cases and one head and neck case with planning CT and daily acquired CBCT. Each image has been contoured by a physician (radiation oncologist) or experienced radiation therapist. The results are an improvement with respect to rigid registration. However, for the head and neck case, the sample set is too small to show statistical significance. Conclusions: ANACONDA
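
    Structurally, the objective described above is a weighted sum of four terms; a minimal sketch with stand-in term functions (the real terms operate on the deformation field and the contoured structures):

        import numpy as np

        def objective(deformation, terms, weights):
            """Weighted sum of similarity, grid, shape, and penalty terms."""
            return sum(w * f(deformation) for f, w in zip(terms, weights))

        terms = [lambda d: ((d - 1.0) ** 2).mean(),  # stand-in image similarity term
                 lambda d: (d ** 2).mean(),          # stand-in grid regularization term
                 lambda d: 0.0,                      # shape term (when ROIs are present)
                 lambda d: 0.0]                      # controlling-structure penalty term
        value = objective(np.zeros((4, 4)), terms, weights=[1.0, 0.1, 0.5, 0.5])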

  6. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Experimental Validation.

    Science.gov (United States)

    Hyodo, Tomoko; Hori, Masatoshi; Lamb, Peter; Sasaki, Kosuke; Wakayama, Tetsuya; Chiba, Yasutaka; Mochizuki, Teruhito; Murakami, Takamichi

    2017-02-01

    Purpose To assess the ability of fast-kilovolt-peak switching dual-energy computed tomography (CT) using the multimaterial decomposition (MMD) algorithm to quantify liver fat. Materials and Methods Fifteen syringes that contained various proportions of swine liver obtained from an abattoir, lard in food products, and iron (saccharated ferric oxide) were prepared. Approval of this study by the animal care and use committee was not required. Solid cylindrical phantoms consisting of a polyurethane epoxy resin, 20 and 30 cm in diameter, that held the syringes were scanned with dual- and single-energy 64-section multidetector CT. CT attenuation on single-energy CT images (in Hounsfield units) and the MMD-derived fat volume fraction (FVF; dual-energy CT FVF) were obtained for each syringe, as were magnetic resonance (MR) spectroscopy measurements using a 1.5-T imager (MR spectroscopy fat fraction [FF]). Reference values of FVF (FVF_ref) were determined by using the Soxhlet method. Iron concentrations were determined by inductively coupled plasma optical emission spectroscopy and divided into three ranges (0 mg per 100 g, 48.1-55.9 mg per 100 g, and 92.6-103.0 mg per 100 g). Statistical analysis included Spearman rank correlation and analysis of covariance. Results Both dual-energy CT FVF (ρ = 0.97) and MR spectroscopy FF correlated strongly with FVF_ref in the presence of iron. Phantom size had a significant effect on dual-energy CT FVF after controlling for FVF_ref. With increasing iron concentrations, the linear coefficients of dual-energy CT FVF decreased and those of MR spectroscopy FF increased. In the presence of iron, dual-energy CT FVF underestimated FVF_ref to a lesser degree than MR spectroscopy FF overestimated it. © RSNA, 2016. Online supplemental material is available for this article.

  7. Edge-oriented dual-dictionary guided enrichment (EDGE) for MRI-CT image reconstruction.

    Science.gov (United States)

    Li, Liang; Wang, Bigong; Wang, Ge

    2016-01-01

    In this paper, we formulate the joint/simultaneous X-ray CT and MRI image reconstruction problem. In particular, a novel algorithm is proposed for MRI image reconstruction from highly under-sampled MRI data and CT images. It consists of two steps. First, a training dataset is generated from a series of well-registered MRI and CT images of the same patients. Then, an initial MRI image of a patient can be reconstructed via edge-oriented dual-dictionary guided enrichment (EDGE) based on the training dataset and a CT image of the patient. Second, an MRI image is reconstructed using the dictionary learning (DL) algorithm from highly under-sampled k-space data and the initial MRI image. Our algorithm can establish a one-to-one correspondence between the two imaging modalities and obtain a good initial MRI estimate. Both noise-free and noisy simulation studies were performed to evaluate and validate the proposed algorithm. The results with different under-sampling factors show that the proposed algorithm performed significantly better than reconstruction using the DL algorithm from MRI data alone.

  8. Handoff Triggering and Network Selection Algorithms for Load-Balancing Handoff in CDMA-WLAN Integrated Networks

    Directory of Open Access Journals (Sweden)

    Khalid Qaraqe

    2008-10-01

    Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a two-stage handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS); to reduce the likelihood of unnecessary false handoffs, a distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion that uses both RSS and distance information, and a network selection method that uses context information such as the dropping probability, blocking probability, grade of service (GoS), and number of handoff attempts. As a decision-making criterion, a velocity threshold is determined to optimize system performance, and the optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.
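
    A minimal sketch of the two-part handoff trigger (RSS plus distance); the threshold, coverage radius, and hysteresis values are illustrative assumptions, not parameters from the paper:

        def should_handoff(rss_dbm, distance_m, rss_thresh=-75.0, max_dist=80.0, hysteresis=3.0):
            """Trigger handoff on weak RSS (with hysteresis) or on leaving coverage."""
            weak_signal = rss_dbm < rss_thresh - hysteresis
            out_of_range = distance_m > max_dist
            return weak_signal or out_of_range

        print(should_handoff(rss_dbm=-82.0, distance_m=40.0))  # True: weak signal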

  9. Handoff Triggering and Network Selection Algorithms for Load-Balancing Handoff in CDMA-WLAN Integrated Networks

    Directory of Open Access Journals (Sweden)

    Kim Jang-Sub

    2008-01-01

    Full Text Available This paper proposes a novel vertical handoff algorithm between WLAN and CDMA networks to enable the integration of these networks. The proposed vertical handoff algorithm assumes a two-stage handoff decision process (handoff triggering and network selection). The handoff trigger is decided based on the received signal strength (RSS); to reduce the likelihood of unnecessary false handoffs, a distance criterion is also considered. As a network selection mechanism, based on the wireless channel assignment algorithm, this paper proposes a context-based network selection algorithm and the corresponding communication algorithms between WLAN and CDMA networks. This paper focuses on a handoff triggering criterion that uses both RSS and distance information, and a network selection method that uses context information such as the dropping probability, blocking probability, grade of service (GoS), and number of handoff attempts. As a decision-making criterion, a velocity threshold is determined to optimize system performance, and the optimal velocity threshold is adjusted to assign the available channels to the mobile stations using four handoff strategies. The four handoff strategies are evaluated and compared with each other in terms of GoS. Finally, the proposed scheme is validated by computer simulations.

  10. A novel computer algorithm for modeling and treating mandibular fractures: A pilot study.

    Science.gov (United States)

    Rizzi, Christopher J; Ortlip, Timothy; Greywoode, Jewel D; Vakharia, Kavita T; Vakharia, Kalpesh T

    2017-02-01

    To describe a novel computer algorithm that can model mandibular fracture repair, and to evaluate the algorithm as a tool for modeling mandibular fracture reduction and hardware selection. Retrospective pilot study combined with a cross-sectional survey. A computer algorithm utilizing Aquarius Net (TeraRecon, Inc., Foster City, CA) and Adobe Photoshop CS6 (Adobe Systems, Inc., San Jose, CA) was developed to model mandibular fracture repair. Ten different fracture patterns were selected from nine patients who had already undergone mandibular fracture repair. The preoperative computed tomography (CT) images were processed with the computer algorithm to create virtual images that matched the actual postoperative three-dimensional CT images. A survey comparing the true postoperative image with the virtual postoperative images was created and administered to otolaryngology resident and attending physicians, who were asked to rate on a scale from 0 to 10 (0 = completely different; 10 = identical) the similarity between the two images in terms of the fracture reduction and fixation hardware. Ten mandible fracture cases were analyzed and processed. There were 15 survey respondents. The mean score for overall similarity between the images was 8.41 ± 0.91; the mean score for similarity of fracture reduction was 8.61 ± 0.98; and the mean score for hardware appearance was 8.27 ± 0.97. There were no significant differences between attending and resident responses, and no significant differences based on fracture location. This computer algorithm can accurately model mandibular fracture repair: images created by the algorithm are highly similar to true postoperative images, and the algorithm can potentially assist a surgeon planning mandibular fracture repair. Level of Evidence: 4. Laryngoscope, 127:331-336, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Human activity recognition based on feature selection in smart home using back-propagation algorithm.

    Science.gov (United States)

    Fang, Hongqing; He, Lei; Si, Hao; Liu, Peng; Xie, Xiaolei

    2014-09-01

    In this paper, the back-propagation (BP) algorithm has been used to train a feed-forward neural network for human activity recognition in smart home environments, and an inter-class distance method for feature selection of observed motion sensor events is discussed and tested. The human activity recognition performance of the neural network trained with the BP algorithm has been evaluated and compared with other probabilistic algorithms: the Naïve Bayes (NB) classifier and the Hidden Markov Model (HMM). The results show that different feature datasets yield different activity recognition accuracy; the selection of unsuitable feature datasets increases the computational complexity and degrades the activity recognition accuracy. Furthermore, the neural network using the BP algorithm has relatively better human activity recognition performance than the NB classifier and the HMM. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
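
    A minimal sketch of inter-class distance feature ranking under one common formulation (spread of per-class means relative to the pooled within-class spread); the exact distance measure used in the paper is not specified in the abstract:

        import numpy as np

        def interclass_distance(feature, labels):
            """Larger score = class means are farther apart relative to class spread."""
            classes = np.unique(labels)
            means = np.array([feature[labels == c].mean() for c in classes])
            within = np.mean([feature[labels == c].std() for c in classes]) + 1e-12
            return means.std() / within

        X = np.random.randn(200, 10)           # 200 sensor events, 10 candidate features
        y = np.random.randint(0, 3, 200)       # 3 activity classes
        scores = [interclass_distance(X[:, j], y) for j in range(X.shape[1])]
        top_k = np.argsort(scores)[::-1][:5]   # keep the 5 most discriminative features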

  12. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, Damodar; Murphy, Martin J.; Todor, Dorin A.; Weiss, Elisabeth; Williamson, Jeffrey F. [Department of Radiation Oncology, School of Medicine, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-09-15

    Purpose: To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. Methods: The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. Results: For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. Conclusions: The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.

  13. Clinical application and validation of an iterative forward projection matching algorithm for permanent brachytherapy seed localization from conebeam-CT x-ray projections.

    Science.gov (United States)

    Pokhrel, Damodar; Murphy, Martin J; Todor, Dorin A; Weiss, Elisabeth; Williamson, Jeffrey F

    2010-09-01

    To experimentally validate a new algorithm for reconstructing the 3D positions of implanted brachytherapy seeds from postoperatively acquired 2D conebeam-CT (CBCT) projection images. The iterative forward projection matching (IFPM) algorithm finds the 3D seed geometry that minimizes the sum of the squared intensity differences between computed projections of an initial estimate of the seed configuration and radiographic projections of the implant. In-house machined phantoms, containing arrays of 12 and 72 seeds, respectively, are used to validate this method. Also, four 103Pd postimplant patients are scanned using an ACUITY digital simulator. Three to ten x-ray images are selected from the CBCT projection set and processed to create binary seed-only images. To quantify IFPM accuracy, the reconstructed seed positions are forward projected and overlaid on the measured seed images to find the nearest-neighbor distance between measured and computed seed positions for each image pair. Also, the estimated 3D seed coordinates are compared to known seed positions in the phantom and clinically obtained VariSeed planning coordinates for the patient data. For the phantom study, seed localization error is (0.58 ± 0.33) mm. For all four patient cases, the mean registration error is better than 1 mm when compared against the measured seed projections. IFPM converges in 20-28 iterations, with a computation time of about 1.9-2.8 min/iteration on a 1 GHz processor. The IFPM algorithm avoids the need to match corresponding seeds in each projection as required by standard back-projection methods. The authors' results demonstrate approximately 1 mm accuracy in reconstructing the 3D positions of brachytherapy seeds from the measured 2D projections. This algorithm also successfully localizes overlapping clustered and highly migrated seeds in the implant.
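
    The quantity IFPM minimizes is stated explicitly: the sum of squared intensity differences between computed and measured projections. A minimal sketch of that cost, with the projector passed in as a user-supplied function:

        import numpy as np

        def ifpm_cost(seed_xyz, measured_projections, project):
            """Sum of squared differences over all views; `project` renders a binary
            seed-only image of the candidate 3D seed configuration for one view."""
            return sum(((project(seed_xyz, view) - img) ** 2).sum()
                       for view, img in enumerate(measured_projections))

    In the paper this cost is minimized iteratively over the seed positions, starting from an initial estimate of the seed configuration.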

  14. SeLeCT: a lexical cohesion based news story segmentation system

    OpenAIRE

    Stokes, Nicola; Carthy, Joe; Smeaton, Alan F.

    2004-01-01

    In this paper we compare the performance of three distinct approaches to lexical cohesion based text segmentation. Most work in this area has focused on the discovery of textual units that discuss subtopic structure within documents. In contrast our segmentation task requires the discovery of topical units of text i.e., distinct news stories from broadcast news programmes. Our approach to news story segmentation (the SeLeCT system) is based on an analysis of lexical cohesive strength between ...

  15. Improved CT-detection of acute bowel ischemia using frequency selective non-linear image blending.

    Science.gov (United States)

    Schneeweiss, Sven; Esser, Michael; Thaiss, Wolfgang; Boesmueller, Hans; Ditt, Hendrik; Nikolaou, Konstantin; Horger, Marius

    2017-07-01

    Computed tomography (CT) as a fast and reliable diagnostic technique is the imaging modality of choice for acute bowel ischemia. However, diagnosis is often difficult, mainly due to low attenuation differences between ischemic and perfused segments. To compare the diagnostic efficacy of a new post-processing tool based on frequency selective non-linear blending with that of conventional linear contrast-enhanced CT (CECT) image blending for the detection of bowel ischemia. Twenty-seven consecutive patients (19 women; mean age = 73.7 years, age range = 50-94 years) with acute bowel ischemia were scanned using multidetector CT (120 kV; 100-200 mAs). Pre-contrast and portal venous scans (65-70 s delay) were acquired. All patients underwent surgery for acute bowel ischemia, and the intraoperative diagnosis as well as histologic evaluation of explanted bowel segments were considered the "gold standard." First, two radiologists read the conventional CECT images in which linear blending was adapted for optimal contrast, and second (three weeks later) the frequency selective non-linear blending (F-NLB) images. Attenuation values were compared, both in the involved and non-involved bowel segments, creating ratios between unenhanced and CECT. The mean attenuation difference between ischemic and non-ischemic wall in the portal venous scan was 69.54 HU (reader 2 = 69.01 HU) higher for F-NLB compared with conventional CECT. Also, the attenuation ratio between contrast-enhanced and pre-contrast CT data for the non-ischemic walls showed significantly higher values for the F-NLB image [CECT: reader 1 = 2.11 (reader 2 = 3.36); F-NLB: reader 1 = 4.46 (reader 2 = 4.98)]. Sensitivity in detecting ischemic areas increased significantly for both readers using F-NLB (CECT: reader 1/2 = 53%/65% versus F-NLB: reader 1/2 = 62%/75%). Frequency selective non-linear blending improves detection of bowel ischemia compared with conventional CECT by increasing the attenuation difference between ischemic and perfused bowel segments.
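
    The vendor's F-NLB post-processing is proprietary, so the following is only a plausible sketch of the idea the abstract describes: apply a non-linear (sigmoid-like) contrast stretch around a chosen HU center to the low-spatial-frequency band, where the bowel-wall attenuation differences live, and add the high-frequency detail back unchanged. The center, width, gain and band-split parameters are invented for illustration.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def fnlb(image_hu, center=60.0, width=40.0, gain=3.0, sigma_px=3.0):
        low = gaussian_filter(image_hu, sigma_px)        # low-frequency band
        high = image_hu - low                            # high-frequency detail
        # sigmoid remap of the low band around 'center' (slope set by 'gain')
        stretched = center + width * np.tanh(gain * (low - center) / width)
        return stretched + high                          # recombine

    # toy example: 40 HU "ischemic" vs. 80 HU "perfused" bowel wall, plus noise
    img = np.full((64, 64), 40.0)
    img[:, 32:] = 80.0
    img += np.random.default_rng(1).normal(0, 5, img.shape)
    out = fnlb(img)
    contrast = out[:, 40:].mean() - out[:, :24].mean()
    print(f"wall contrast before/after: 40.0 / {contrast:.1f} HU")
    ```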

  16. Hybrid ECG-gated versus non-gated 512-slice CT angiography of the aorta and coronary artery: image quality and effect of a motion correction algorithm.

    Science.gov (United States)

    Lee, Ji Won; Kim, Chang Won; Lee, Geewon; Lee, Han Cheol; Kim, Sang-Pil; Choi, Bum Sung; Jeong, Yeon Joo

    2018-02-01

    Background Using the hybrid electrocardiogram (ECG)-gated computed tomography (CT) technique, assessment of the entire aorta, coronary arteries, and aortic valve is possible with single-bolus contrast administration within a single acquisition. Purpose To compare the image quality of hybrid ECG-gated and non-gated CT angiography of the aorta and evaluate the effect of a motion correction algorithm (MCA) on coronary artery image quality in a hybrid ECG-gated aorta CT group. Material and Methods In total, 104 patients (76 men; mean age = 65.8 years) prospectively randomized into two groups (Group 1 = hybrid ECG-gated CT; Group 2 = non-gated CT) underwent wide-detector array aorta CT. Image quality, assessed using a four-point scale, was compared between the groups. Coronary artery image quality was compared between the conventional reconstruction and motion correction reconstruction subgroups in Group 1. Results Group 1 showed significant advantages over Group 2 in aortic wall, cardiac chamber, aortic valve, coronary ostia, and main coronary arteries image quality (all P < 0.05). Conclusion Hybrid ECG-gated CT significantly improved the heart and aortic wall image quality, and the MCA can further improve the image quality and interpretability of coronary arteries.

  17. The diagnostic value of CT scan and selective venous sampling in Cushing's syndrome

    International Nuclear Information System (INIS)

    Negoro, Makoto; Kuwayama, Akio; Yamamoto, Naoto; Nakane, Toshichi; Yokoe, Toshio; Kageyama, Naoki; Ichihara, Kaoru; Ishiguchi, Tsuneo; Sakuma, Sadayuki

    1986-01-01

    We studied 24 patients with Cushing's syndrome in order to find the best way to confirm the pituitary adenoma preoperatively. At first, the sellar content was studied by means of a high-resolution CT scan in each patient. Second, by selective catheterization in the bilateral internal jugular vein and the inferior petrosal sinus, venous samples (C) were obtained for ACTH assay. Simultaneously, peripheral blood sampling (P) was performed at the antecubital vein for the same purpose, and the C/P ratio was carefully calculated in each patient. If the C/P ratio exceeded 2, it was highly suggestive of the presence of pituitary adenoma. Even by an advanced high-resolution CT scan with a thickness of 2 mm, pituitary adenomas were detected in only 32% of the patients studied. The result of imaging diagnosis in Cushing's disease was thus discouraging. As for the chemical diagnosis, the results were as follows. At the early stage of this study, the catheterization was terminated in the jugular veins of nine patients. Among these, in five patients the presence of pituitary adenoma was predicted correctly in the preoperative stage. Later, by means of inferior petrosal sinus samplings, pituitary microadenomas were detected in ten of twelve patients. Selective venous sampling for ACTH in the inferior petrosal sinus or jugular vein proved to be useful for the differential diagnosis of Cushing's syndrome when other diagnostic measures such as CT scan were inconclusive. (author)
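
    The decision rule reported above reduces to a one-line check; the sketch below encodes it (the sample ACTH values are invented).

    ```python
    def suggests_pituitary_adenoma(central_acth, peripheral_acth, cutoff=2.0):
        """C = petrosal-sinus/jugular sample, P = peripheral (antecubital) sample."""
        return central_acth / peripheral_acth > cutoff

    print(suggests_pituitary_adenoma(central_acth=184.0, peripheral_acth=52.0))  # True
    ```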

  18. An enhanced block matching algorithm for fast elastic registration in adaptive radiotherapy

    International Nuclear Information System (INIS)

    Malsch, U; Thieke, C; Huber, P E; Bendl, R

    2006-01-01

    Image registration has many medical applications in diagnosis, therapy planning and therapy. Especially for time-adaptive radiotherapy, an efficient and accurate elastic registration of images acquired for treatment planning, and at the time of the actual treatment, is highly desirable. Therefore, we developed a fully automatic and fast block matching algorithm which identifies a set of anatomical landmarks in a 3D CT dataset and relocates them in another CT dataset by maximization of local correlation coefficients in the frequency domain. To transform the complete dataset, a smooth interpolation between the landmarks is calculated by modified thin-plate splines with local impact. The concept of the algorithm allows separate processing of image discontinuities like temporally changing air cavities in the intestinal tract or rectum. The result is a fully transformed 3D planning dataset (planning CT as well as delineations of tumour and organs at risk) to a verification CT, allowing evaluation and, if necessary, changes of the treatment plan based on the current patient anatomy without time-consuming manual re-contouring. Typically the total calculation time is less than 5 min, which allows the use of the registration tool between acquiring the verification images and delivering the dose fraction for online corrections. We present verifications of the algorithm for five different patient datasets with different tumour locations (prostate, paraspinal and head-and-neck) by comparing the results with manually selected landmarks, visual assessment and consistency testing. It turns out that the mean error of the registration is better than the voxel resolution (2 × 2 × 3 mm³). In conclusion, we present an algorithm for fully automatic elastic image registration that is precise and fast enough for online corrections in an adaptive fractionated radiation treatment course.
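
    The core relocation step, maximizing correlation in the frequency domain, can be sketched as FFT-based cross-correlation of a landmark block against a search window. This is a 2D simplification; the paper's method works on 3D blocks, uses local correlation coefficients and adds thin-plate-spline interpolation on top.

    ```python
    import numpy as np

    def match_block(block, window):
        """Locate 'block' inside the larger 'window' via FFT cross-correlation."""
        b = block - block.mean()
        w = window - window.mean()
        # zero-pad the block to the window size and correlate in Fourier space
        corr = np.real(np.fft.ifft2(np.fft.fft2(w) *
                                    np.conj(np.fft.fft2(b, s=w.shape))))
        return np.unravel_index(np.argmax(corr), corr.shape)

    rng = np.random.default_rng(2)
    window = rng.normal(size=(64, 64))
    block = window[20:36, 28:44].copy()      # landmark block taken at (20, 28)
    print(match_block(block, window))        # -> (20, 28)
    ```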

  19. A Hybrid Feature Subset Selection Algorithm for Analysis of High Correlation Proteomic Data

    Science.gov (United States)

    Kordy, Hussain Montazery; Baygi, Mohammad Hossein Miran; Moradi, Mohammad Hassan

    2012-01-01

    Pathological changes within an organ can be reflected as proteomic patterns in biological fluids such as plasma, serum, and urine. The surface-enhanced laser desorption and ionization time-of-flight mass spectrometry (SELDI-TOF MS) has been used to generate proteomic profiles from biological fluids. Mass spectrometry yields redundant, noisy data in which most data points are irrelevant features for differentiating between cancer and normal cases. In this paper, we have proposed a hybrid feature subset selection algorithm based on maximum-discrimination and minimum-correlation coupled with peak scoring criteria. Our algorithm has been applied to two independent SELDI-TOF MS datasets of ovarian cancer obtained from the NCI-FDA clinical proteomics databank. The proposed algorithm has been used to extract a set of proteins as potential biomarkers in each dataset. We applied linear discriminant analysis to identify the important biomarkers. The selected biomarkers have been able to successfully discriminate the ovarian cancer patients from the noncancer control group with an accuracy of 100%, a sensitivity of 100%, and a specificity of 100% in the two datasets. The hybrid algorithm has the advantage of increasing the reproducibility of selected biomarkers and is able to find a small set of proteins with high discrimination power. PMID:23717808
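
    A minimal sketch of the maximum-discrimination/minimum-correlation idea (the paper's exact scoring, including the peak-scoring criteria, is not reproduced here): greedily add the feature with the highest discrimination score, penalized by its correlation with the features already chosen. The Fisher score and the weighting constant are assumptions.

    ```python
    import numpy as np

    def select_features(X, y, k=10, alpha=1.0):
        """X: (samples, features), y: binary labels; returns selected indices."""
        m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
        s0, s1 = X[y == 0].var(0), X[y == 1].var(0)
        fisher = (m0 - m1) ** 2 / (s0 + s1 + 1e-12)     # discrimination term
        corr = np.abs(np.corrcoef(X, rowvar=False))     # feature correlations
        selected = [int(np.argmax(fisher))]
        while len(selected) < k:
            penalty = corr[:, selected].mean(axis=1)    # redundancy term
            score = fisher - alpha * fisher.max() * penalty
            score[selected] = -np.inf
            selected.append(int(np.argmax(score)))
        return selected

    rng = np.random.default_rng(3)
    y = rng.integers(0, 2, 200)
    X = rng.normal(size=(200, 50))
    X[:, 7] += 2.0 * y                                  # one informative feature
    print(select_features(X, y, k=5))                   # index 7 comes first
    ```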

  20. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn [College of Medicine, Seoul National University, Seoul (Korea, Republic of)]; Yoon, Jeong Hee; Choi, Jin Woo [Dept. of Radiology, Seoul National University Hospital, Seoul (Korea, Republic of)]

    2014-04-15

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and the image quality compared to the filtered back projection (FBP) algorithm and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms with different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using the FBP and three different strengths of the AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm effectively reduces image noise and improves image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.
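
    For reference, the reported ROI-based metrics are conventionally computed as below; the ROI placement, background choice and toy numbers are assumptions, not the study's data.

    ```python
    import numpy as np

    def snr(roi):
        return roi.mean() / roi.std()

    def cnr(roi, background):
        return (roi.mean() - background.mean()) / background.std()

    rng = np.random.default_rng(4)
    fbp_roi, fbp_bg = rng.normal(100, 20, 500), rng.normal(50, 20, 500)
    ir_roi, ir_bg = rng.normal(100, 8, 500), rng.normal(50, 8, 500)  # lower noise
    print(f"FBP: SNR={snr(fbp_roi):.1f}  CNR={cnr(fbp_roi, fbp_bg):.1f}")
    print(f"IR:  SNR={snr(ir_roi):.1f}  CNR={cnr(ir_roi, ir_bg):.1f}")
    ```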

  1. Adaptive iterative dose reduction algorithm in CT: Effect on image quality compared with filtered back projection in body phantoms of different sizes

    International Nuclear Information System (INIS)

    Kim, Milim; Lee, Jeong Min; Son, Hyo Shin; Han, Joon Koo; Choi, Byung Ihn; Yoon, Jeong Hee; Choi, Jin Woo

    2014-01-01

    To evaluate the impact of the adaptive iterative dose reduction (AIDR) three-dimensional (3D) algorithm in CT on noise reduction and the image quality compared to the filtered back projection (FBP) algorithm and to compare the effectiveness of AIDR 3D on noise reduction according to the body habitus using phantoms with different sizes. Three different-sized phantoms with diameters of 24 cm, 30 cm, and 40 cm were built up using the American College of Radiology CT accreditation phantom and layers of pork belly fat. Each phantom was scanned eight times using different mAs. Images were reconstructed using the FBP and three different strengths of the AIDR 3D. The image noise, the contrast-to-noise ratio (CNR) and the signal-to-noise ratio (SNR) of the phantom were assessed. Two radiologists assessed the image quality of the 4 image sets in consensus. The effectiveness of AIDR 3D on noise reduction compared with FBP was also compared according to the phantom sizes. Adaptive iterative dose reduction 3D significantly reduced the image noise compared with FBP and enhanced the SNR and CNR (p < 0.05) with improved image quality (p < 0.05). When a stronger reconstruction algorithm was used, greater increase of SNR and CNR as well as noise reduction was achieved (p < 0.05). The noise reduction effect of AIDR 3D was significantly greater in the 40-cm phantom than in the 24-cm or 30-cm phantoms (p < 0.05). The AIDR 3D algorithm effectively reduces image noise and improves image-quality parameters compared with the FBP algorithm, and its effectiveness may increase as the phantom size increases.

  2. Qualitative and quantitative evaluation of rigid and deformable motion correction algorithms using dual-energy CT images in view of application to CT perfusion measurements in abdominal organs affected by breathing motion.

    Science.gov (United States)

    Skornitzke, S; Fritz, F; Klauss, M; Pahn, G; Hansen, J; Hirsch, J; Grenacher, L; Kauczor, H-U; Stiller, W

    2015-02-01

    To compare six different scenarios for correcting for breathing motion in abdominal dual-energy CT (DECT) perfusion measurements. Rigid [RRComm(80 kVp)] and non-rigid [NRComm(80 kVp)] registration of commercially available CT perfusion software, custom non-rigid registration [NRCustom(80 kVp), demons algorithm] and a control group [CG(80 kVp)] without motion correction were evaluated using 80 kVp images. Additionally, NRCustom was applied to dual-energy (DE)-blended [NRCustom(DE)] and virtual non-contrast [NRCustom(VNC)] images, yielding six evaluated scenarios. After motion correction, perfusion maps were calculated using a combined maximum slope/Patlak model. For qualitative evaluation, three blinded radiologists independently rated motion correction quality and resulting perfusion maps on a four-point scale (4 = best, 1 = worst). For quantitative evaluation, relative changes in metric values, R(2) and residuals of perfusion model fits were calculated. For motion-corrected images, mean ratings differed significantly [NRCustom(80 kVp) and NRCustom(DE), 3.3; NRComm(80 kVp), 3.1; NRCustom(VNC), 2.9; RRComm(80 kVp), 2.7; CG(80 kVp), 2.7; all p < 0.05], as did the relative changes in metric values, which were largest for the non-rigid scenarios [NRCustom(VNC), 22.8%; RRComm(80 kVp), 0.6%; CG(80 kVp), 0%]. Regarding perfusion maps, NRCustom(80 kVp) and NRCustom(DE) were rated highest [NRCustom(80 kVp), 3.1; NRCustom(DE), 3.0; NRComm(80 kVp), 2.8; NRCustom(VNC), 2.6; CG(80 kVp), 2.5; RRComm(80 kVp), 2.4] and had significantly higher R(2) and lower residuals. Correlation between qualitative and quantitative evaluation was low to moderate. Non-rigid motion correction improves spatial alignment of the target region and fit of CT perfusion models. Using DE-blended and DE-VNC images for deformable registration offers no significant improvement. Non-rigid algorithms improve the quality of abdominal CT perfusion measurements but do not benefit from DECT post-processing.
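
    The custom non-rigid registration above is based on the demons algorithm; a bare-bones single-resolution 2D sketch of the demons iteration is given below (smoothing strength, step count and the toy "breathing shift" are assumptions, not the paper's settings).

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, map_coordinates

    def demons_step(fixed, moving, u, v, sigma=2.0):
        yy, xx = np.mgrid[0:fixed.shape[0], 0:fixed.shape[1]].astype(float)
        warped = map_coordinates(moving, [yy + v, xx + u], order=1, mode="nearest")
        diff = warped - fixed
        gy, gx = np.gradient(fixed)
        denom = gx ** 2 + gy ** 2 + diff ** 2 + 1e-9
        u = gaussian_filter(u - diff * gx / denom, sigma)   # force + smoothing
        v = gaussian_filter(v - diff * gy / denom, sigma)
        return u, v

    fixed = np.zeros((64, 64)); fixed[24:40, 24:40] = 1.0
    moving = np.roll(fixed, 3, axis=1)                      # 3-pixel shift
    u = np.zeros_like(fixed); v = np.zeros_like(fixed)
    for _ in range(100):
        u, v = demons_step(fixed, moving, u, v)
    print(f"recovered shift ~ {u[24:40, 24:40].mean():.1f} px (true: 3)")
    ```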

  3. CT of portal vein tumor thrombosis. Usefulness of dynamic CT

    Energy Technology Data Exchange (ETDEWEB)

    Takemoto, Kazumasa; Inoue, Yuichi; Tanaka, Masahiro; Nemoto, Yutaka; Nakamura, Kenji [Osaka City Univ. (Japan). Faculty of Medicine]

    1983-08-01

    We evaluated CT findings of portal vein tumor thrombosis in 16 hepatomas by plain, contrast and dynamic CT. Plain and contrast CT findings were an enlargement of the portal vein (81%) and an intraluminal low-density area (63%). Dynamic CT enhanced the diagnostic capability for the tumor thrombus, seen as a relatively low-density area against the marked enhancement of the portal vein. In addition, dynamic CT newly demonstrated a hyperdense peripheral ring (35%) and arterioportal shunt (35%). It is advisable to select the scan level to include the portal vein when dynamic CT is performed in patients with hepatocellular carcinoma.

  4. Evaluation of living liver donors using contrast enhanced multidetector CT – The radiologists impact on donor selection

    International Nuclear Information System (INIS)

    Ringe, Kristina Imeen; Ringe, Bastian Paul; Falck, Christian von; Shin, Hoen-oh; Becker, Thomas; Pfister, Eva-Doreen; Wacker, Frank; Ringe, Burckhardt

    2012-01-01

    Living donor liver transplantation (LDLT) is a valuable and legitimate treatment for patients with end-stage liver disease. Computed tomography (CT) has proven to be an important tool in the process of donor evaluation. The purpose of this study was to evaluate the significance of CT in the donor selection process. Between May 1999 and October 2010, 170 candidate donors underwent biphasic CT. We retrospectively reviewed the results of the CT and liver volumetry, and assessed reasons for rejection. 89 candidates underwent partial liver resection (52.4%). Based on the results of liver CT and volumetry, 22 candidates were excluded as donors (31% of the cases). Reasons included fatty liver (n = 9), vascular anatomical variants (n = 4), incidental finding of hemangioma and focal nodular hyperplasia (n = 1) and small (n = 5) or large for size (n = 5) graft volume. CT-based imaging of the liver in combination with dedicated software plays a key role in the process of evaluation of candidates for LDLT. It may account for up to 1/3 of the contraindications for LDLT.

  5. Blind deblurring of spiral CT images - comparative studies on edge-to-noise ratios

    International Nuclear Information System (INIS)

    Jiang Ming; Wan Ge; Skinner, Margaret W.; Rubinstein, Jay T.; Vannier, Michael W.

    2002-01-01

    A recently developed blind deblurring algorithm based on the edge-to-noise ratio has been applied to improve the quality of spiral CT images. Since the discrepancy measure used to quantify the edge and noise effects is not symmetric, there are several ways to formulate the edge-to-noise ratio. This article investigates the performance of these ratios with phantom and patient data. In the phantom study, it is shown that all the ratios share similar properties, validating the blind deblurring algorithm. The image fidelity improvement varies from 29% to 33% for the different ratios, according to the root mean square error (RMSE) criterion; the optimal iteration number determined for each ratio varies from 25 to 35. The ratios associated with the most satisfactory performance are singled out, giving an image fidelity improvement of about 33% in the numerical simulation. After automatic blind deblurring with the selected ratios, the spatial resolution of CT is substantially refined in all the cases tested.

  6. Use of the CT component of PET-CT to improve PET-MR registration: demonstration in soft-tissue sarcoma

    International Nuclear Information System (INIS)

    Somer, Edward J; Benatar, Nigel A; O'Doherty, Michael J; Smith, Mike A; Marsden, Paul K

    2007-01-01

    We have investigated improvements to PET-MR image registration offered by PET-CT scanning. Ten subjects with suspected soft-tissue sarcomas were scanned with an in-line PET-CT and a clinical MR scanner. PET to CT, CT to MR and PET to MR image registrations were performed using a rigid-body external marker technique and rigid and non-rigid voxel-similarity algorithms. PET-MR registration was also performed using transformations derived from the registration of CT to MR. The external marker technique gave fiducial registration errors of 2.1 mm, 5.1 mm and 5.3 mm for PET-CT, PET-MR and CT-MR registration. Target registration errors were 3.9 mm, 9.0 mm and 9.3 mm, respectively. Voxel-based algorithms were evaluated by measuring the distance between corresponding fiducials after registration. Registration errors of 6.4 mm, 14.5 mm and 9.5 mm, respectively, for PET-CT, PET-MR and CT-MR were observed for rigid-body registration while non-rigid registration gave errors of 6.8 mm, 16.3 mm and 7.6 mm for the same modality combinations. The application of rigid and non-rigid CT to MR transformations to accompanying PET data gives significantly reduced PET-MR errors of 10.0 mm and 8.5 mm, respectively. Visual comparison by two independent observers confirmed the improvement over direct PET-MR registration. We conclude that PET-MR registration can be more accurately and reliably achieved using the hybrid technique described than through direct rigid-body registration of PET to MR
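
    The hybrid technique amounts to composing transforms: the PET-to-MR mapping is obtained by chaining the hardware PET-to-CT alignment with the CT-to-MR registration, rather than registering PET to MR directly. A sketch with 4x4 homogeneous matrices (the numeric transforms are invented):

    ```python
    import numpy as np

    def rigid(tx, ty, tz, rz_deg=0.0):
        """Rigid-body transform: rotation about z plus a translation (mm)."""
        c, s = np.cos(np.deg2rad(rz_deg)), np.sin(np.deg2rad(rz_deg))
        T = np.eye(4)
        T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
        T[:3, 3] = [tx, ty, tz]
        return T

    T_pet_to_ct = rigid(0.5, -0.3, 1.0)            # near-identity in-line alignment
    T_ct_to_mr = rigid(4.0, 2.5, -6.0, rz_deg=3)   # from CT-to-MR registration
    T_pet_to_mr = T_ct_to_mr @ T_pet_to_ct         # composed PET-to-MR transform

    p_pet = np.array([10.0, 20.0, 30.0, 1.0])      # a point in PET space (mm)
    print(T_pet_to_mr @ p_pet)
    ```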

  7. SU-F-J-214: Dose Reduction by Spatially Optimized Image Quality Via Fluence Modulated Proton CT (FMpCT)

    International Nuclear Information System (INIS)

    De Angelis, L; Landry, G; Dedes, G; Parodi, K; Hansen, D; Rit, S; Belka, C

    2016-01-01

    Purpose: Proton CT (pCT) is a promising imaging modality for reducing range uncertainty in image-guided proton therapy. Range uncertainties partially originate from X-ray CT number conversion to stopping power ratio (SPR) and are limiting the exploitation of the full potential of proton therapy. In this study we explore the concept of spatially dependent fluence modulated proton CT (FMpCT) for achieving optimal image quality in a clinical region of interest (ROI), while significantly reducing the imaging dose to the patient. Methods: The study was based on simulated ideal pCT using pencil beam (PB) scanning. A set of 250 MeV proton PBs was used to create 360 projections of a cylindrical water phantom and a head and neck cancer patient. The tomographic images were reconstructed using a filtered backprojection (FBP) as well as an iterative algorithm (ITR). Different fluence modulation levels were investigated and their impact on the image was quantified in terms of SPR accuracy as well as noise within and outside selected ROIs, as a function of imaging dose. The unmodulated image served as reference. Results: Both FBP reconstruction and ITR without total variation (TV) yielded image quality in the ROIs similar to the reference images, for modulation down to 0.1 of the full proton fluence. The average dose was reduced by 75% for the water phantom and by 40% for the patient. FMpCT does not improve the noise for ITR with TV and modulation 0.1. Conclusion: This is the first work proposing and investigating FMpCT for producing optimal image quality for treatment planning and image guidance, while simultaneously reducing imaging dose. Future work will address spatial resolution effects and the impact of FMpCT on the quality of proton treatment plans for a prototype pCT scanner capable of list mode data acquisition. Acknowledgement: DFG-MAP DFG - Munich-Centre for Advanced Photonics (MAP)

  8. SU-F-J-214: Dose Reduction by Spatially Optimized Image Quality Via Fluence Modulated Proton CT (FMpCT)

    Energy Technology Data Exchange (ETDEWEB)

    De Angelis, L; Landry, G; Dedes, G; Parodi, K [Ludwig-Maximilians-Universitaet Muenchen (LMU Munich), Garching b. Muenchen (Germany)]; Hansen, D [Aarhus University Hospital, Aarhus, Jutland (Denmark)]; Rit, S [University Lyon, Lyon, Auvergne-Rhone-Alpes (France)]; Belka, C [LMU Munich, Munich (Germany)]

    2016-06-15

    Purpose: Proton CT (pCT) is a promising imaging modality for reducing range uncertainty in image-guided proton therapy. Range uncertainties partially originate from X-ray CT number conversion to stopping power ratio (SPR) and are limiting the exploitation of the full potential of proton therapy. In this study we explore the concept of spatially dependent fluence modulated proton CT (FMpCT) for achieving optimal image quality in a clinical region of interest (ROI), while significantly reducing the imaging dose to the patient. Methods: The study was based on simulated ideal pCT using pencil beam (PB) scanning. A set of 250 MeV proton PBs was used to create 360 projections of a cylindrical water phantom and a head and neck cancer patient. The tomographic images were reconstructed using a filtered backprojection (FBP) as well as an iterative algorithm (ITR). Different fluence modulation levels were investigated and their impact on the image was quantified in terms of SPR accuracy as well as noise within and outside selected ROIs, as a function of imaging dose. The unmodulated image served as reference. Results: Both FBP reconstruction and ITR without total variation (TV) yielded image quality in the ROIs similar to the reference images, for modulation down to 0.1 of the full proton fluence. The average dose was reduced by 75% for the water phantom and by 40% for the patient. FMpCT does not improve the noise for ITR with TV and modulation 0.1. Conclusion: This is the first work proposing and investigating FMpCT for producing optimal image quality for treatment planning and image guidance, while simultaneously reducing imaging dose. Future work will address spatial resolution effects and the impact of FMpCT on the quality of proton treatment plans for a prototype pCT scanner capable of list mode data acquisition. Acknowledgement: DFG-MAP DFG - Munich-Centre for Advanced Photonics (MAP)

  9. Optimization of Protocol CT, PET-CT, whole body; Optimizacion de protocolo CT, en PET-CT, de cuerpo entero

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez, Fredys Santos, E-mail: fsantos@ccss.sa.cr [Caja Costarricense de Seguro Social (ACCPR/CCSS), San Jose (Costa Rica). Area Control de Calidade Y Proteccion Radiologica]; Namias, Mauro, E-mail: mnamias@gmail.com [Comision Nacional de Energia Atomica (FCDN/CNEA), Buenos Aires (Argentina). Fundacion Centro Diagnostico Nuclear]

    2013-11-01

    The objective of this study was to optimize the existing CT acquisition and processing protocols of the PET/CT scanner in clinical use at the Nuclear Diagnostic Center Foundation, so as to minimize the radiation dose while maintaining adequate diagnostic image quality. Dosimetric data of the PET/CT service were surveyed to obtain the baseline against which strategies and modifications were compared and defined. Transaxial slices at the levels of the pulmonary hilum and the liver were selected as the anatomical regions of interest, which led to the standardization of the study.

  10. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, or radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant.
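
    A minimal sketch of the constrained-Fourier idea behind CFR (the published algorithm's details are not reproduced; the constraints, wedge geometry and iteration count here are assumptions): keep the measured part of the spectrum fixed and fill the missing angular wedge by alternating image-domain constraints with re-insertion of the measured data.

    ```python
    import numpy as np

    def cfr(spectrum, measured, support, n_iter=200):
        f = np.real(np.fft.ifft2(spectrum))
        for _ in range(n_iter):
            f = np.clip(np.real(f), 0, None) * support   # image constraints
            F = np.fft.fft2(f)
            F[measured] = spectrum[measured]             # keep measured data
            f = np.fft.ifft2(F)
        return np.real(f)

    n = 64
    yy, xx = np.mgrid[:n, :n]
    truth = (((xx - 32) ** 2 + (yy - 32) ** 2) < 15 ** 2).astype(float)
    support = ((xx - 32) ** 2 + (yy - 32) ** 2) < 20 ** 2

    # one large data gap: spatial frequencies within 30 degrees of vertical
    fy, fx = np.fft.fftfreq(n)[:, None], np.fft.fftfreq(n)[None, :]
    angle = np.abs(np.degrees(np.arctan2(fy, fx)))
    measured = ~((angle > 60) & (angle < 120))
    spectrum = np.fft.fft2(truth) * measured

    recon = cfr(spectrum, measured, support)
    print(f"RMSE vs. truth: {np.sqrt(np.mean((recon - truth) ** 2)):.3f}")
    ```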

  11. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
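
    The flight algorithms themselves are not reproduced here; the sketch below only illustrates the general pattern the paper discusses -- compare each redundant channel against a robust ensemble estimate, require persistence before disqualifying a channel, then down-select the healthy ones. The threshold, persistence count and data are invented.

    ```python
    import numpy as np

    def down_select(readings, valid, strikes, threshold=0.5, max_strikes=3):
        """readings: latest angular-rate sample per channel (deg/s)."""
        ref = np.median(readings[valid])               # robust ensemble estimate
        deviant = np.abs(readings - ref) > threshold
        strikes = np.where(deviant, strikes + 1, 0)    # require persistence
        valid = valid & (strikes < max_strikes)        # disqualify faulted channels
        return readings[valid].mean(), valid, strikes

    valid = np.ones(4, bool)
    strikes = np.zeros(4, int)
    for t in range(6):
        fault = 2.0 if t >= 2 else 0.0                 # channel 3 fails at t = 2
        sample = np.array([1.00, 1.02, 0.99, 1.01 + fault])
        rate, valid, strikes = down_select(sample, valid, strikes)
        print(t, f"rate={rate:.2f}", "valid:", valid)
    ```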

  12. Comparison of air space measurement imaged by CT, small-animal CT, and hyperpolarized Xe MRI

    Science.gov (United States)

    Madani, Aniseh; White, Steven; Santyr, Giles; Cunningham, Ian

    2005-04-01

    Lung disease is the third leading cause of death in the western world. Lung air volume measurements are thought to be early indicators of lung disease and markers in pharmaceutical research. The purpose of this work is to develop a lung phantom for assessing and comparing the quantitative accuracy of hyperpolarized xenon-129 magnetic resonance imaging (HP 129Xe MRI), conventional computed tomography (HRCT), and high-resolution small-animal CT (μCT) in measuring lung gas volumes. We developed a lung phantom consisting of solid cellulose acetate spheres (1, 2, 3, 4 and 5 mm diameter) uniformly packed in circulated air or HP 129Xe gas. Air volume is estimated based on a simple thresholding algorithm. Truth is calculated from the sphere diameters and validated using μCT. While this phantom is not anthropomorphic, it enables us to directly measure air space volume and compare these imaging methods as a function of sphere diameter for the first time. HP 129Xe MRI requires partial volume analysis to distinguish regions with and without 129Xe gas, and results are within 5% of truth, but settling of the heavy 129Xe gas complicates this analysis. Conventional CT demonstrated partial-volume artifacts for the 1 mm spheres. μCT gives the most accurate air-volume results. Conventional CT and HP 129Xe MRI give similar results, although non-uniform densities of 129Xe require more sophisticated algorithms than simple thresholding. The threshold required to give the true air volume in both HRCT and μCT varies with sphere diameter, calling into question the validity of the thresholding method.
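
    The simple thresholding estimate is essentially a voxel count; a sketch is below (the threshold, voxel size and phantom values are placeholders, not the study's).

    ```python
    import numpy as np

    def air_volume_ml(image_hu, voxel_mm3, threshold_hu=-500):
        n_air = int(np.count_nonzero(image_hu < threshold_hu))
        return n_air * voxel_mm3 / 1000.0              # mm^3 -> mL

    rng = np.random.default_rng(5)
    phantom = np.where(rng.random((64, 64, 64)) < 0.4, -900.0, 60.0)  # ~40% air
    phantom += rng.normal(0, 30, phantom.shape)        # CT noise
    print(f"{air_volume_ml(phantom, voxel_mm3=0.5 ** 3):.1f} mL")
    ```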

  13. Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan

    2017-03-01

    The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in Duke Lesion Tool (Duke University), and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; Syngo.via, Siemens Healthcare; and IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (+/- standard error) shows -9.2+/-3.2% for real lesions versus -6.7+/-1.2% for virtual lesions with tool A, 3.9+/-2.5% and 5.0+/-0.9% for tool B, and 5.3+/-2.3% and 1.8+/-0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (p > 0.05) in most cases. Results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.

  14. Region-of-interest reconstruction for a cone-beam dental CT with a circular trajectory

    International Nuclear Information System (INIS)

    Hu, Zhanli; Zou, Jing; Gui, Jianbao; Zheng, Hairong; Xia, Dan

    2013-01-01

    Dental CT is the most appropriate and accurate device for preoperative evaluation of dental implantation. It can demonstrate the quantity of bone in three dimensions (3D), the location of important adjacent anatomic structures and the quality of available bone with minimal geometric distortion. Nevertheless, with the rapid increase of dental CT examinations, we are facing the problem of dose reduction without loss of image quality. In this work, the backprojection-filtration (BPF) and Feldkamp–Davis–Kress (FDK) algorithms were applied to reconstruct the full 3D image and the region-of-interest (ROI) image from complete and truncated circular cone-beam data, respectively, in computer simulation. In addition, the BPF algorithm was evaluated based on the 3D ROI-image reconstruction from real data, which was acquired from our developed circular cone-beam prototype dental CT system. The results demonstrated that the ROI-image quality reconstructed from truncated data using the BPF algorithm was comparable to that reconstructed from complete data. The FDK algorithm, however, created artifacts while reconstructing the ROI image. Thus, for circular cone-beam dental CT, reducing the scanning angular range with the BPF algorithm for ROI-image reconstruction helps reduce the radiation dose and scanning time. Finally, an analytical method was developed for estimation of the ROI projection area on the detector before CT scanning, which would help doctors to roughly estimate the total radiation dose before the CT examination. -- Highlights: ► BPF algorithm was applied to dental CT for the first time. ► A method was developed for estimation of the projection region before CT scanning. ► Roughly predict the total radiation dose before CT scans. ► Potential to reduce imaging radiation dose, scatter, and scanning time.

  15. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
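
    The B-H step at the heart of the method is compact enough to state directly. The sketch below assumes the upstream conversion of peak scores to p-values has already happened and draws toy p-values instead.

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return indices of hypotheses accepted at FDR level q."""
        p = np.asarray(pvals)
        order = np.argsort(p)
        m = len(p)
        below = p[order] <= q * np.arange(1, m + 1) / m
        if not below.any():
            return np.array([], dtype=int)
        k = np.max(np.nonzero(below)[0])     # largest rank passing the bound
        return order[:k + 1]

    rng = np.random.default_rng(6)
    true_peaks = rng.uniform(0, 1e-3, 40)    # strong candidates
    noise = rng.uniform(0, 1, 400)           # spurious candidates
    accepted = benjamini_hochberg(np.concatenate([true_peaks, noise]))
    print(f"{len(accepted)} peaks selected")
    ```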

  16. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed

    2013-01-07

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013

  17. EMD self-adaptive selecting relevant modes algorithm for FBG spectrum signal

    Science.gov (United States)

    Chen, Yong; Wu, Chun-ting; Liu, Huan-lin

    2017-07-01

    Noise may reduce the demodulation accuracy of the fiber Bragg grating (FBG) sensing signal and thus affect the quality of sensing detection. Thus, the recovery of a signal from observed noisy data is necessary. In this paper, a precise self-adaptive algorithm for selecting relevant modes is proposed to remove noise from the signal. Empirical mode decomposition (EMD) is first used to decompose a signal into a set of modes. Pseudo-mode cancellation is introduced to identify and eliminate false modes, and then the mutual information (MI) of partial modes is calculated. MI is used to estimate the critical point between high and low frequency components. Simulation results show that the proposed algorithm estimates the critical point more accurately than the traditional algorithms for the FBG spectral signal. Compared to similar algorithms, the signal-to-noise ratio of the signal can be improved by more than 10 dB after processing by the proposed algorithm, and the correlation coefficient can be increased by 0.5, demonstrating a better de-noising effect.

  18. Optimization of input parameters of supra-threshold stochastic resonance image processing algorithm for the detection of abdomino-pelvic tumors on PET/CT scan

    International Nuclear Information System (INIS)

    Pandey, Anil Kumar; Saroha, Kartik; Patel, C.D.; Bal, C.S.; Kumar, Rakesh

    2016-01-01

    Administration of diuretics increases the urine output to clear radioactive urine from the kidneys and bladder. Hence, a post-diuretic pelvic PET/CT scan enhances the probability of detection of abdomino-pelvic tumors. However, it causes discomfort in patients and has some side effects as well. Application of the supra-threshold stochastic resonance (SSR) image processing algorithm to the pre-diuretic PET/CT scan may also increase the probability of detection of these tumors. The amount of noise and the threshold are two variable parameters that affect the final image quality. This study was conducted to investigate the effect of these two variable parameters on the detection of abdomino-pelvic tumors.
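
    A minimal sketch of the supra-threshold stochastic resonance mechanism with the two parameters the abstract singles out, noise amplitude and threshold: the image is passed through N parallel threshold units with independent noise and their binary outputs are averaged. The unit count, parameter values and toy image are assumptions, not the study's protocol.

    ```python
    import numpy as np

    def ssr(image, n_units=32, noise_sigma=0.3, threshold=0.5, seed=0):
        rng = np.random.default_rng(seed)
        out = np.zeros_like(image, dtype=float)
        for _ in range(n_units):
            noisy = image + rng.normal(0, noise_sigma, image.shape)
            out += noisy > threshold          # binary threshold unit
        return out / n_units

    # weak "lesion" (0.45) vs. background (0.40): both below the 0.5 threshold,
    # hence invisible without noise; SSR makes the difference measurable.
    img = np.full((64, 64), 0.40)
    img[24:40, 24:40] = 0.45
    enhanced = ssr(img)
    diff = enhanced[28:36, 28:36].mean() - enhanced[:8, :8].mean()
    print(f"lesion-background difference after SSR: {diff:.3f}")
    ```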

  19. Segmentation algorithm of colon based on multi-slice CT colonography

    Science.gov (United States)

    Hu, Yizhong; Ahamed, Mohammed Shabbir; Takahashi, Eiji; Suzuki, Hidenobu; Kawata, Yoshiki; Niki, Noboru; Suzuki, Masahiro; Iinuma, Gen; Moriyama, Noriyuki

    2012-02-01

    CT colonography is a radiology test that looks at the large intestine (colon). CT colonography offers a screening option for colon cancer and is used to detect polyps or cancers of the colon. It is safe and reliable, and can be used if people are too sick to undergo other forms of colon cancer screening. In our research, we proposed a method for automatic segmentation of the colon from abdominal computed tomography (CT) images. Our multistage detection method extracts the colon and splits it into different parts according to colon anatomy information. We found that among the five segmented parts of the colon, the sigmoid (20%) and rectum (50%) are more prone to polyps and masses than the other three parts. Our research therefore focused on detecting colorectal lesions through individual diagnosis of the sigmoid and rectum. We believe this would enable rapid and easy diagnosis of the colon at an earlier stage and help doctors analyze the correct position of each part and detect colorectal cancer more easily.

  20. Extraction of airways from CT (EXACT’09)

    DEFF Research Database (Denmark)

    Lo, Pechin Chien Pau; Ginneken, Bram van; Reinhardt, Joseph M.

    2012-01-01

    Extracted branch segments are visually evaluated by trained observers, who judge for each segment whether or not it is a correctly segmented part of the airway tree. Finally, the reference airway trees are constructed by taking the union of all correctly extracted branch segments. Fifteen airway tree extraction algorithms from different research groups are evaluated on a diverse set of 20 chest computed tomography (CT) scans of subjects ranging from healthy volunteers to patients with severe pathologies, scanned at different sites, with different CT scanner brands, models, and scanning protocols. Three performance measures covering different aspects of segmentation quality were computed for all participating algorithms. Results...

  1. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
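
    As a concrete instance of the kind of CP prototyping the paper describes (this particular toy problem and its step sizes are chosen here for illustration, not taken from the paper), the sketch below solves non-negativity-constrained least squares, min over x >= 0 of (1/2)||Ax - b||^2, with the basic CP iteration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    A = rng.normal(size=(80, 50))                 # stand-in "system matrix"
    x_true = np.clip(rng.normal(size=50), 0, None)
    b = A @ x_true

    L = np.linalg.norm(A, 2)                      # operator norm of A
    sigma = tau = 1.0 / L                         # satisfies sigma*tau*L^2 <= 1
    theta = 1.0
    x = np.zeros(50); xbar = x.copy(); y = np.zeros(80)

    for _ in range(500):
        y = (y + sigma * (A @ xbar - b)) / (1 + sigma)   # prox of sigma*F*
        x_new = np.clip(x - tau * (A.T @ y), 0, None)    # prox of tau*G
        xbar = x_new + theta * (x_new - x)               # extrapolation
        x = x_new

    rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
    print(f"relative error: {rel_err:.2e}")
    ```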

  2. Naturally selecting solutions: the use of genetic algorithms in bioinformatics.

    Science.gov (United States)

    Manning, Timmy; Sleator, Roy D; Walsh, Paul

    2013-01-01

    For decades, computer scientists have looked to nature for biologically inspired solutions to computational problems, ranging from robotic control to scheduling optimization. Paradoxically, as we move deeper into the post-genomics era, the reverse is occurring, as biologists and bioinformaticians look to computational techniques to solve a variety of biological problems. Among the most common biologically inspired techniques are genetic algorithms (GAs), which take the Darwinian concept of natural selection as the driving force behind systems for solving real world problems, including those in the bioinformatics domain. Herein, we provide an overview of genetic algorithms and survey some of the most recent applications of this approach to bioinformatics based problems.
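
    A bare-bones GA of the kind surveyed -- tournament selection, one-point crossover and bit-flip mutation on a toy "ones counting" objective. All parameters are illustrative defaults, not taken from the paper.

    ```python
    import random

    def evolve(pop_size=60, genome_len=40, generations=100, p_mut=0.02):
        rnd = random.Random(0)
        pop = [[rnd.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        fitness = sum                                 # count of 1-bits
        for _ in range(generations):
            nxt = []
            while len(nxt) < pop_size:
                a, b = (max(rnd.sample(pop, 3), key=fitness)  # tournaments
                        for _ in range(2))
                cut = rnd.randrange(1, genome_len)    # one-point crossover
                child = a[:cut] + b[cut:]
                child = [g ^ (rnd.random() < p_mut) for g in child]  # mutation
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    best = evolve()
    print(f"best fitness: {sum(best)}/40")
    ```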

  3. A theoretically exact reconstruction algorithm for helical cone-beam differential phase-contrast computed tomography

    International Nuclear Information System (INIS)

    Li Jing; Sun Yi; Zhu Peiping

    2013-01-01

    Differential phase-contrast computed tomography (DPC-CT) reconstruction problems are usually solved by using parallel-, fan- or cone-beam algorithms. For rod-shaped objects, the x-ray beams cannot recover all the slices of the sample at the same time. Thus, if a rod-shaped sample is required to be reconstructed by the above algorithms, one should alternately perform translation and rotation on this sample, which leads to lower efficiency. The helical cone-beam CT may significantly improve scanning efficiency for rod-shaped objects over other algorithms. In this paper, we propose a theoretically exact filtered-backprojection algorithm for helical cone-beam DPC-CT, which can be applied to reconstruct the refractive index decrement distribution of the samples directly from two-dimensional differential phase-contrast images. Numerical simulations are conducted to verify the proposed algorithm. Our work provides a potential solution for inspecting rod-shaped samples using DPC-CT, which may become applicable with the evolution of DPC-CT equipment. (paper)

  4. Optimal and Suboptimal Finger Selection Algorithms for MMSE Rake Receivers in Impulse Radio Ultra-Wideband Systems

    Directory of Open Access Journals (Sweden)

    Chiang, Mung

    2006-01-01

    Full Text Available The problem of choosing the optimal multipath components to be employed at a minimum mean square error (MMSE) selective Rake receiver is considered for an impulse radio ultra-wideband system. First, the optimal finger selection problem is formulated as an integer programming problem with a nonconvex objective function. Then, the objective function is approximated by a convex function and the integer programming problem is solved by means of constraint relaxation techniques. The proposed algorithms are suboptimal due to the approximate objective function and the constraint relaxation steps. However, they perform better than the conventional finger selection algorithm, which is suboptimal since it ignores the correlation between multipath components, and they can get quite close to the optimal scheme that cannot be implemented in practice due to its complexity. In addition to the convex relaxation techniques, a genetic-algorithm (GA)-based approach is proposed, which does not need any approximations or integer relaxations. This iterative algorithm is based on the direct evaluation of the objective function, and can achieve near-optimal performance with a reasonable number of iterations. Simulation results are presented to compare the performance of the proposed finger selection algorithms with that of the conventional and the optimal schemes.

  5. Dual-Source Dual-Energy CT Angiography of the Supra-Aortic Arteries with Tin Filter: Impact of Tube Voltage Selection.

    Science.gov (United States)

    Korn, Andreas; Bender, Benjamin; Schabel, Christoph; Bongers, Malte; Ernemann, Ulrike; Claussen, Claus; Thomas, Christoph

    2015-06-01

    Automatic bone and plaque subtraction (BPS) in computed tomographic angiographic (CTA) examinations using dual-energy CT (DECT) remains challenging because of beam-hardening artifacts in the shoulder region and the close proximity of the internal carotid artery to the base of the skull. The selection of the tube voltage combination in dual-source CT influences the spectral separation and the susceptibility to artifacts. The purpose of this study was to assess which tube voltage combination leads to an optimal image quality of head and neck DECT angiograms after bone subtraction. Fifty-one patients received tin-filter-enhanced DECT angiograms of the supra-aortic arteries using two voltage protocols: 24 patients were studied using 80/Sn140 kV and 27 using a 100/Sn140 kV protocol, both protocols with an additional tin filter. A commercially available DE-CTA BPS algorithm was used. Artificial vessel erosions in BPS maximum intensity projections (four-level Likert scale with CTA source data as reference) and vessel signal-to-noise ratio (SNR) were assessed at the level of the shoulders and the base of the skull in each patient and compared. At the level of the shoulder, the 100/Sn140 kV protocol achieved higher SNR (23.4 ± 6.4 at 80/Sn140 kV vs. 35.1 ± 11.8 at 100/Sn140 kV; P < 0.05), and overall it provided better image quality of BPS CT angiograms of the supra-aortic arteries than the 80/Sn140 kV protocol. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.

  6. A Convergent Differential Evolution Algorithm with Hidden Adaptation Selection for Engineering Optimization

    Directory of Open Access Journals (Sweden)

    Zhongbo Hu

    2014-01-01

    Full Text Available Differential evolution (DE) emerged as a very competitive class of evolutionary computation more than a decade ago, and many improved DE algorithms have since been proposed. However, few improved DE algorithms guarantee global convergence in theory. This paper develops a theoretically convergent DE algorithm, which employs a self-adaptation scheme for the parameters and two operators, that is, uniform mutation and hidden adaptation selection (haS) operators. The parameter self-adaptation and uniform mutation operator enhance the diversity of populations and guarantee ergodicity. The haS can automatically remove some inferior individuals in the process of enhancing population diversity. The haS controls the proposed algorithm to break the loop of the current generation with a small probability. The breaking probability is a hidden adaptation and proportional to the change in the number of inferior individuals. The proposed algorithm is tested on ten engineering optimization problems taken from IEEE CEC2011.
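
    A sketch in the spirit of the scheme described (the haS bookkeeping is simplified here to an occasional uniform-mutation replacement that keeps the search ergodic; rates, bounds and the test function are illustrative): standard DE/rand/1/bin plus a uniform mutation step.

    ```python
    import numpy as np

    def de(f, lo, hi, pop_size=30, gens=200, F=0.7, CR=0.9, p_uni=0.05, seed=0):
        rng = np.random.default_rng(seed)
        dim = len(lo)
        pop = rng.uniform(lo, hi, (pop_size, dim))
        fit = np.apply_along_axis(f, 1, pop)
        for _ in range(gens):
            for i in range(pop_size):
                a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
                trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
                trial = np.clip(trial, lo, hi)
                if rng.random() < p_uni:              # uniform mutation
                    trial = rng.uniform(lo, hi, dim)
                f_trial = f(trial)
                if f_trial < fit[i]:                  # greedy selection
                    pop[i], fit[i] = trial, f_trial
        return pop[np.argmin(fit)], fit.min()

    sphere = lambda x: float(np.sum(x ** 2))
    best_x, best_f = de(sphere, np.full(5, -5.0), np.full(5, 5.0))
    print(f"best f = {best_f:.2e}")
    ```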

  7. Analysis of Different Feature Selection Criteria Based on a Covariance Convergence Perspective for a SLAM Algorithm

    Science.gov (United States)

    Auat Cheein, Fernando A.; Carelli, Ricardo

    2011-01-01

    This paper introduces several non-arbitrary feature selection techniques for a Simultaneous Localization and Mapping (SLAM) algorithm. The feature selection criteria are based on the determination of the most significant features from a SLAM convergence perspective. The SLAM algorithm implemented in this work is a sequential EKF (Extended Kalman Filter) SLAM. The feature selection criteria are applied in the correction stage of the SLAM algorithm, restricting it to correct the SLAM algorithm with the most significant features. This restriction also causes a reduction in the processing time of the SLAM algorithm. Several experiments with a mobile robot are shown in this work. The experiments concern map reconstruction and a comparison of the performance of the different proposed techniques. The experiments were carried out in an outdoor environment composed of trees, although the results shown herein are not restricted to a specific type of feature. PMID:22346568

  8. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    International Nuclear Information System (INIS)

    Puchner, Stefan B.; Ferencik, Maros; Maurovich-Horvat, Pal; Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu; Kauczor, Hans-Ulrich; Hoffmann, Udo; Schlett, Christopher L.

    2015-01-01

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was larger in cross-sections containing LCP than in those without (mm2: 5.78 ± 2.29 vs. 3.39 ± 1.68 for FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 for ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 for MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  9. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Vaishnav, J. Y., E-mail: jay.vaishnav@fda.hhs.gov; Jung, W. C. [Diagnostic X-Ray Systems Branch, Office of In Vitro Diagnostic Devices and Radiological Health, Center for Devices and Radiological Health, United States Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States)]; Popescu, L. M.; Zeng, R.; Myers, K. J. [Division of Imaging and Applied Mathematics, Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, United States Food and Drug Administration, 10903 New Hampshire Avenue, Silver Spring, Maryland 20993 (United States)]

    2014-07-15

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality.

  10. Objective assessment of image quality and dose reduction in CT iterative reconstruction

    International Nuclear Information System (INIS)

    Vaishnav, J. Y.; Jung, W. C.; Popescu, L. M.; Zeng, R.; Myers, K. J.

    2014-01-01

    Purpose: Iterative reconstruction (IR) algorithms have the potential to reduce radiation dose in CT diagnostic imaging. As these algorithms become available on the market, a standardizable method of quantifying the dose reduction that a particular IR method can achieve would be valuable. Such a method would assist manufacturers in making promotional claims about dose reduction, buyers in comparing different devices, physicists in independently validating the claims, and the United States Food and Drug Administration in regulating the labeling of CT devices. However, the nonlinear nature of commercially available IR algorithms poses challenges to objectively assessing image quality, a necessary step in establishing the amount of dose reduction that a given IR algorithm can achieve without compromising that image quality. This review paper seeks to consolidate information relevant to objectively assessing the quality of CT IR images, and thereby measuring the level of dose reduction that a given IR algorithm can achieve. Methods: The authors discuss task-based methods for assessing the quality of CT IR images and evaluating dose reduction. Results: The authors explain and review recent literature on signal detection and localization tasks in CT IR image quality assessment, the design of an appropriate phantom for these tasks, possible choices of observers (including human and model observers), and methods of evaluating observer performance. Conclusions: Standardizing the measurement of dose reduction is a problem of broad interest to the CT community and to public health. A necessary step in the process is the objective assessment of CT image quality, for which various task-based methods may be suitable. This paper attempts to consolidate recent literature that is relevant to the development and implementation of task-based methods for the assessment of CT IR image quality

  11. Advanced single-slice rebinning for tilted spiral cone-beam CT

    International Nuclear Information System (INIS)

    Kachelriess, Marc; Fuchs, Theo; Schaller, Stefan; Kalender, Willi A.

    2001-01-01

    Future medical CT scanners and today's micro CT scanners demand cone-beam reconstruction algorithms that are capable of reconstructing data acquired from a tilted spiral trajectory where the vector of rotation is not necessarily parallel to the vector of table increment. For the medical CT scanner this case of nonparallel object motion is met for nonzero gantry tilt: the table moves into a direction that is not perpendicular to the plane of rotation. Since this is not a special application of medical CT but rather a daily routine in head exams, there is a strong need for corresponding reconstruction algorithms. In contrast to medical CT, where the special case of nonperpendicular motion is used on purpose, micro CT scanners cannot avoid aberrations of the rotational axis and the table increment vector due to alignment problems. Especially for those micro CT scanners that have the lifting stage mounted on the rotation table (in contrast to setups where the lifting stage holds the rotation table), this kind of misalignment is equivalent to a gantry tilt. We therefore generalize the advanced single-slice rebinning algorithm (ASSR), which is considered a very promising approach for medical cone-beam reconstruction due to its high image quality and its high reconstruction speed [Med. Phys. 27, 754-772 (2000)], to the case of tilted gantries. We evaluate this extended ASSR approach (which we will denote as ASSR+, for convenience) in comparison to the original ASSR algorithm using simulated phantom data for reconstruction. For the case of nonparallel object motion ASSR+ shows significant improvements over ASSR; however, its computational complexity is slightly increased due to the broken symmetry of the spiral trajectory.

  12. An artificial bee colony algorithm for uncertain portfolio selection.

    Science.gov (United States)

    Chen, Wei

    2014-01-01

    Portfolio selection is an important issue for researchers and practitioners. In this paper, under the assumption that security returns are given by experts' evaluations rather than historical data, we discuss the portfolio adjusting problem which takes transaction costs and diversification degree of portfolio into consideration. Uncertain variables are employed to describe the security returns. In the proposed mean-variance-entropy model, the uncertain mean value of the return is used to measure investment return, the uncertain variance of the return is used to measure investment risk, and the entropy is used to measure diversification degree of portfolio. In order to solve the proposed model, a modified artificial bee colony (ABC) algorithm is designed. Finally, a numerical example is given to illustrate the modelling idea and the effectiveness of the proposed algorithm.
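
    As a rough illustration of the modelling idea, the sketch below optimizes a mean-variance-entropy trade-off with a bare-bones artificial bee colony loop. The return statistics, the weighting of the three terms, and the colony parameters are all invented for the example, and the paper's uncertain-variable formalism is replaced by plain expert-style numbers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical expert-given security returns: means and variances standing in
# for the paper's uncertain-variable estimates. The objective rewards mean
# return and weight entropy (diversification) and penalizes variance (risk).
mu = np.array([0.08, 0.12, 0.10, 0.06])
var = np.array([0.02, 0.06, 0.04, 0.01])

def objective(w):
    w = np.clip(w, 1e-9, None)
    w = w / w.sum()                                  # keep weights a valid portfolio
    entropy = -np.sum(w * np.log(w))                 # diversification term
    return w @ mu - 2.0 * (w ** 2) @ var + 0.5 * entropy

def abc_optimize(n_food=20, n_iter=200, limit=10):
    """Bare-bones artificial bee colony: employed bees perturb food sources,
    onlookers favor good sources, scouts replace exhausted ones."""
    foods = rng.random((n_food, len(mu)))
    fits = np.array([objective(w) for w in foods])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(n_iter):
        # Employed + onlooker phases (onlookers sample sources by fitness).
        probs = fits - fits.min() + 1e-12
        probs /= probs.sum()
        for i in list(range(n_food)) + list(rng.choice(n_food, n_food, p=probs)):
            k = rng.integers(n_food)
            cand = np.clip(foods[i] + rng.uniform(-1, 1, len(mu))
                           * (foods[i] - foods[k]), 0, 1)
            f = objective(cand)
            if f > fits[i]:
                foods[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # Scout phase: abandon sources that stopped improving.
        for i in np.where(trials > limit)[0]:
            foods[i] = rng.random(len(mu))
            fits[i] = objective(foods[i])
            trials[i] = 0
    best = foods[fits.argmax()]
    return best / best.sum()

print("portfolio weights:", np.round(abc_optimize(), 3))
```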

  13. Guidance, navigation, and control subsystem equipment selection algorithm using expert system methods

    Science.gov (United States)

    Allen, Cheryl L.

    1991-01-01

    Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than that which is currently achievable through the use of a standard database system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.

  14. A modified genetic algorithm with fuzzy roulette wheel selection for job-shop scheduling problems

    Science.gov (United States)

    Thammano, Arit; Teekeng, Wannaporn

    2015-05-01

    The job-shop scheduling problem is one of the most difficult production planning problems. Since it is in the NP-hard class, a recent trend in solving the job-shop scheduling problem is shifting towards the use of heuristic and metaheuristic algorithms. This paper proposes a novel metaheuristic algorithm, which is a modification of the genetic algorithm. This proposed algorithm introduces two new concepts to the standard genetic algorithm: (1) fuzzy roulette wheel selection and (2) the mutation operation with tabu list. The proposed algorithm has been evaluated and compared with several state-of-the-art algorithms in the literature. The experimental results on 53 JSSPs show that the proposed algorithm is very effective in solving the combinatorial optimization problems. It outperforms all state-of-the-art algorithms on all benchmark problems in terms of the ability to achieve the optimal solution and the computational time.
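
    The core novelty, fuzzy roulette wheel selection, can be sketched in a few lines. The membership function below is a guess at the general shape (the abstract does not give the paper's exact design), so treat this as an illustration of fuzzified selection pressure rather than the authors' operator.

```python
import numpy as np

rng = np.random.default_rng(2)

def fuzzy_roulette_select(population, fitness, n_parents):
    """Roulette wheel selection with fuzzified weights (illustrative only).
    Each individual's relative fitness is mapped through an S-shaped membership
    so that mid-range individuals keep a real chance of being drawn."""
    f = np.asarray(fitness, dtype=float)
    rel = (f - f.min()) / (f.max() - f.min() + 1e-12)      # relative fitness in [0, 1]
    membership = rel ** 2 / (rel ** 2 + (1 - rel) ** 2 + 1e-12)  # S-shaped grade
    probs = membership / membership.sum()
    idx = rng.choice(len(population), size=n_parents, p=probs)
    return [population[i] for i in idx]

# For a minimization problem such as job-shop makespan, convert makespan to fitness.
schedules = [f"schedule_{i}" for i in range(6)]
makespans = np.array([55, 60, 48, 72, 50, 64])
parents = fuzzy_roulette_select(schedules, makespans.max() - makespans + 1, 4)
print(parents)
```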

  15. Evaluation of living liver donors using contrast enhanced multidetector CT – The radiologist's impact on donor selection

    Directory of Open Access Journals (Sweden)

    Ringe Kristina

    2012-07-01

    Full Text Available Abstract Background Living donor liver transplantation (LDLT) is a valuable and legitimate treatment for patients with end-stage liver disease. Computed tomography (CT) has proven to be an important tool in the process of donor evaluation. The purpose of this study was to evaluate the significance of CT in the donor selection process. Methods Between May 1999 and October 2010, 170 candidate donors underwent biphasic CT. We retrospectively reviewed the results of the CT and liver volumetry, and assessed reasons for rejection. Results 89 candidates underwent partial liver resection (52.4%). Based on the results of liver CT and volumetry, 22 candidates were excluded as donors (31% of the cases). Reasons included fatty liver (n = 9), vascular anatomical variants (n = 4), incidental finding of hemangioma and focal nodular hyperplasia (n = 1), and small (n = 5) or large for size (n = 5) graft volume. Conclusion CT based imaging of the liver in combination with dedicated software plays a key role in the process of evaluation of candidates for LDLT. It may account for up to 1/3 of the contraindications for LDLT.

  16. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
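
    A minimal sketch of the chromosome-as-input-mask idea follows, with a linear-regression validation error standing in for the neural network fitness used in the paper (training an ANN inside the loop would work the same way, just slower). The data, penalty weight, and GA settings are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for the SSME setting: many candidate sensor channels, a target
# parameter that really depends on only a few of them.
n_samples, n_features = 300, 20
X = rng.normal(size=(n_samples, n_features))
y = 2 * X[:, 3] - X[:, 7] + 0.5 * X[:, 12] + 0.1 * rng.normal(size=n_samples)

def fitness(mask):
    """Negative validation MSE of a linear fit on the selected inputs; a small
    penalty per input discourages needlessly long parameter lists."""
    if not mask.any():
        return -np.inf
    Xs = X[:, mask]
    tr, va = slice(0, 200), slice(200, None)
    coef, *_ = np.linalg.lstsq(Xs[tr], y[tr], rcond=None)
    mse = np.mean((Xs[va] @ coef - y[va]) ** 2)
    return -mse - 0.01 * mask.sum()

pop = rng.random((30, n_features)) < 0.3          # initial random input masks
for _ in range(60):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, n_features)         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_features) < 0.02      # mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))
```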

  17. Reduction of metal artifacts from hip prostheses on CT images of the pelvis: value of iterative reconstructions.

    Science.gov (United States)

    Morsbach, Fabian; Bickelhaupt, Sebastian; Wanner, Guido A; Krauss, Andreas; Schmidt, Bernhard; Alkadhi, Hatem

    2013-07-01

    To assess the value of iterative frequency split-normalized (IFS) metal artifact reduction (MAR) for computed tomography (CT) of hip prostheses. This study had institutional review board and local ethics committee approval. First, a hip phantom with steel and titanium prostheses that had inlays of water, fat, and contrast media in the pelvis was used to optimize the IFS algorithm. Second, 41 consecutive patients with hip prostheses who were undergoing CT were included. Data sets were reconstructed with filtered back projection, the IFS algorithm, and a linear interpolation MAR algorithm. Two blinded, independent readers evaluated axial, coronal, and sagittal CT reformations for overall image quality, image quality of pelvic organs, and assessment of pelvic abnormalities. CT attenuation and image noise were measured. Statistical analysis included the Friedman test, Wilcoxon signed-rank test, and Levene test. Ex vivo experiments demonstrated an optimized IFS algorithm by using a threshold of 2200 HU with four iterations for both steel and titanium prostheses. Measurements of CT attenuation of the inlays were significantly (P < 0.05) more accurate with the IFS algorithm than with filtered back projection or linear interpolation MAR. The IFS algorithm for CT image reconstruction significantly reduces metal artifacts from hip prostheses, improves the reliability of CT number measurements, and improves the confidence for depicting pelvic abnormalities.

  18. Autocalibration method for non-stationary CT bias correction.

    Science.gov (United States)

    Vegas-Sánchez-Ferrero, Gonzalo; Ledesma-Carbayo, Maria J; Washko, George R; Estépar, Raúl San José

    2018-02-01

    Computed tomography (CT) is a widely used imaging modality for screening and diagnosis. However, the deleterious effects of radiation exposure inherent in CT imaging require the development of image reconstruction methods which can reduce exposure levels. The development of iterative reconstruction techniques is now enabling the acquisition of low-dose CT images whose quality is comparable to that of CT images acquired with much higher radiation dosages. However, the characterization and calibration of the CT signal due to changes in dosage and reconstruction approaches is crucial to provide clinically relevant data. Although CT scanners are calibrated as part of the imaging workflow, the calibration is limited to select global reference values and does not consider other inherent factors of the acquisition that depend on the subject scanned (e.g. photon starvation, partial volume effect, beam hardening) and result in a non-stationary noise response. In this work, we analyze the effect of reconstruction biases caused by non-stationary noise and propose an autocalibration methodology to compensate for it. Our contributions are: 1) the derivation of a functional relationship between observed bias and non-stationary noise; 2) a robust and accurate method to estimate the local variance; 3) an autocalibration methodology that does not necessarily rely on a calibration phantom, attenuates the bias caused by noise, and removes the systematic bias observed in devices from different vendors. The validation of the proposed methodology was performed with a physical phantom and clinical CT scans acquired with different configurations (kernels, doses, algorithms including iterative reconstruction). The results confirmed the suitability of the proposed methods for removing the intra-device and inter-device reconstruction biases. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Modified automatic term selection v2: A faster algorithm to calculate inelastic scattering cross-sections

    Energy Technology Data Exchange (ETDEWEB)

    Rusz, Ján, E-mail: jan.rusz@fysik.uu.se

    2017-06-15

    Highlights: • New algorithm for calculating double differential scattering cross-sections. • Shows good convergence properties. • Outperforms the older MATS algorithm, particularly in zone axis calculations. - Abstract: We present a new algorithm for calculating inelastic scattering cross-sections for fast electrons. Compared to the previous Modified Automatic Term Selection (MATS) algorithm (Rusz et al. [18]), it has far better convergence properties in zone axis calculations, and it allows the contributions of individual atoms to be identified. One can think of it as a blend of the MATS algorithm and the method described by Weickenmeier and Kohl [10].

  20. Spinal endoscopy combined with selective CT myelography for dural closure of the spinal dural defect with superficial siderosis: technical note.

    Science.gov (United States)

    Arishima, Hidetaka; Higashino, Yoshifumi; Yamada, Shinsuke; Akazawa, Ayumi; Arai, Hiroshi; Tsunetoshi, Kenzo; Matsuda, Ken; Kodera, Toshiaki; Kitai, Ryuhei; Awara, Kousuke; Kikuta, Ken-Ichiro

    2018-01-01

    The authors describe a new procedure to detect the tiny dural hole in patients with superficial siderosis (SS) and CSF leakage using a coronary angioscope system for spinal endoscopy and selective CT myelography using a spinal drainage tube. Under fluoroscopy, surgeons inserted the coronary angioscope into the spinal subarachnoid space, similar to the procedure of spinal drainage, and slowly advanced it to the cervical spine. The angioscope clearly showed the small dural hole and injured arachnoid membrane. One week later, the spinal drainage tube was inserted, and the tip of the drainage tube was located just below the level of the dural defect found by the spinal endoscopic examination. This selective CT myelography clarifies the location of the dural defect. During surgery, the small dural hole could be easily located, and it was securely sutured. It is sometimes difficult to detect the actual location of the small dural hole even with thin-slice MRI or dynamic CT myelography in patients with SS. The use of a coronary angioscope for the spinal endoscopy combined with selective CT myelography may provide an effective examination to assess dural closure of the spinal dural defect with SS in cases without obvious dural defects on conventional imaging.

  1. A MRI-CT prostate registration using sparse representation technique

    Science.gov (United States)

    Yang, Xiaofeng; Jani, Ashesh B.; Rossi, Peter J.; Mao, Hui; Curran, Walter J.; Liu, Tian

    2016-03-01

    Purpose: To develop a new MRI-CT prostate registration using a patch-based deformation prediction framework to improve MRI-guided prostate radiotherapy by incorporating multiparametric MRI into planning CT images. Methods: The main contribution is to estimate the deformation between prostate MRI and CT images in a patch-wise fashion by using the sparse representation technique. We assume that two image patches should follow the same deformation if their patch-wise appearance patterns are similar. Specifically, there are two stages in our proposed framework, i.e., the training stage and the application stage. In the training stage, each prostate MR image is carefully registered to the corresponding CT image, and all training MR and CT images are carefully registered to a selected CT template. Thus, we obtain the dense deformation field for each training MR and CT image. In the application stage, for registering a new subject MR image with the same subject CT image, we first select a small number of key points in the distinctive regions of this subject CT image. Then, for each key point in the subject CT image, we extract the image patch centered at the underlying key point. Then, we adaptively construct the coupled dictionary for the underlying point, where each atom in the dictionary consists of image patches and the respective deformations obtained from training pair-wise MRI-CT images. Next, the subject image patch can be sparsely represented by a linear combination of training image patches in the dictionary, where we apply the same sparse coefficients to the respective deformations in the dictionary to predict the deformation for the subject MR image patch. After we repeat the same procedure for each subject CT key point, we use B-splines to interpolate a dense deformation field, which is used as the initialization to allow the registration algorithm to estimate the remaining small deformations from MRI to CT image. Results: Our MRI-CT registration

  2. Preconditioned dynamic mode decomposition and mode selection algorithms for large datasets using incremental proper orthogonal decomposition

    Science.gov (United States)

    Ohmichi, Yuya

    2017-07-01

    In this letter, we propose a simple and efficient framework of dynamic mode decomposition (DMD) and mode selection for large datasets. The proposed framework explicitly introduces a preconditioning step using an incremental proper orthogonal decomposition (POD) to DMD and mode selection algorithms. By performing the preconditioning step, the DMD and mode selection can be performed with low memory consumption and therefore can be applied to large datasets. Additionally, we propose a simple mode selection algorithm based on a greedy method. The proposed framework is applied to the analysis of three-dimensional flow around a circular cylinder.
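
    The preconditioning idea can be sketched with standard exact DMD computed in POD coordinates. The batch SVD below stands in for the paper's incremental POD (the point of the incremental variant is memory, not mathematics), and the snapshot data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic snapshot matrix: two travelling/decaying waves plus noise, stored
# as columns x_0 ... x_m (a stand-in for a large flow dataset).
nx, m = 200, 120
xs = np.linspace(0, 2 * np.pi, nx)
t = np.arange(m) * 0.1
data = (np.outer(np.sin(2 * xs), np.cos(3 * t)) * np.exp(-0.05 * t)
        + np.outer(np.cos(5 * xs), np.sin(7 * t))
        + 0.01 * rng.normal(size=(nx, m)))

X, Y = data[:, :-1], data[:, 1:]            # snapshot pairs x_k -> x_{k+1}

# Preconditioning step: project everything onto a truncated POD basis first.
r = 10
U, s, Vh = np.linalg.svd(X, full_matrices=False)
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T

# Exact DMD on the reduced coordinates.
Atilde = Ur.conj().T @ Y @ Vr / sr          # r x r reduced operator
eigvals, W = np.linalg.eig(Atilde)
modes = Y @ Vr / sr @ W                     # DMD modes lifted to full space

# Continuous-time eigenvalues reveal each mode's growth rate and frequency;
# a mode-selection step would rank these modes by amplitude or growth rate.
omega = np.log(eigvals) / 0.1
print("dominant frequencies:", np.round(np.sort(np.abs(omega.imag))[-4:], 2))
```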

  3. A New Feature Selection Algorithm Based on the Mean Impact Variance

    Directory of Open Access Journals (Sweden)

    Weidong Cheng

    2014-01-01

    Full Text Available The selection of fewer or more representative features from multidimensional features is important when the artificial neural network (ANN) algorithm is used as a classifier. In this paper, a new feature selection method called the mean impact variance (MIVAR) method is proposed to determine the features that are more suitable for classification. Moreover, this method is constructed on the basis of the training process of the ANN algorithm. To verify the effectiveness of the proposed method, the MIVAR value is used to rank the multidimensional features of the bearing fault diagnosis. In detail, (1) 70-dimensional waveform features are extracted from a rolling bearing vibration signal with four different operating states, (2) the corresponding MIVAR values of all 70-dimensional features are calculated to rank all features, (3) 14 groups of 10-dimensional features are separately generated according to the ranking results and the principal component analysis (PCA) algorithm, and a back propagation (BP) network is constructed, and (4) the validity of the ranking result is proven by training this BP network with these seven groups of 10-dimensional features and by comparing the corresponding recognition rates. The results prove that the features with larger MIVAR values can lead to higher recognition rates.

  4. Dynamic angle selection in X-ray computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Dabravolski, Andrei, E-mail: andrei.dabravolski@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, Kees Joost, E-mail: joost.batenburg@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica (CWI), Science Park 123, 1098 XG Amsterdam (Netherlands); Sijbers, Jan, E-mail: jan.sijbers@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2014-04-01

    Highlights: • We propose the dynamic angle selection algorithm for CT scanning. • The approach is based on the concept of information gain over a set of solutions. • Projection angles are selected based on the already available projection data. • The approach can lead to more accurate results from fewer projections. - Abstract: In X-ray tomography, a number of radiographs (projections) are recorded from which a tomogram is then reconstructed. Conventionally, these projections are acquired equiangularly, resulting in an unbiased sampling of the Radon space. However, especially in the case when only a limited number of projections can be acquired, the selection of the angles has a large impact on the quality of the reconstructed image. In this paper, a dynamic algorithm is proposed, in which new projection angles are selected by maximizing the information gain about the object, given the set of possible new angles. Experiments show that this approach can select projection angles for which the accuracy of the reconstructed image is significantly higher compared to standard angle selection schemes.
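
    The concept of scoring candidate angles by information gain over a set of solutions can be illustrated with a toy ensemble: angles where the current candidate reconstructions disagree most about the projection are acquired first. Everything below (the rotate-and-sum projector, the crude ensemble update) is a stand-in for the paper's machinery, kept only to show the greedy selection loop.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(5)

def project(img, theta_deg):
    """Toy parallel-beam projection: rotate the image and sum along one axis."""
    return rotate(img, theta_deg, reshape=False, order=1).sum(axis=0)

# Ground-truth phantom and an ensemble of candidate "solutions": here simply
# noisy guesses, standing in for a set of reconstructions that are all
# consistent with the projections acquired so far.
truth = np.zeros((64, 64))
truth[20:44, 26:38] = 1.0
ensemble = [np.clip(truth + rng.normal(0, 0.3, truth.shape), 0, 1)
            for _ in range(8)]

candidates = list(np.arange(0.0, 180.0, 5.0))
chosen = []
for _ in range(6):
    # Score each remaining angle by how much the ensemble disagrees about its
    # projection: high disagreement ~ high expected information gain.
    scores = [np.var([project(e, th) for e in ensemble], axis=0).sum()
              for th in candidates]
    best = candidates.pop(int(np.argmax(scores)))
    chosen.append(best)
    # "Acquire" the projection at the chosen angle and nudge every ensemble
    # member toward agreement with it (a crude stand-in for re-reconstruction).
    meas = project(truth, best)
    for e in ensemble:
        diff = (meas - project(e, best)) / truth.shape[0]
        e += rotate(np.tile(diff, (64, 1)), -best, reshape=False, order=1)

print("selected angles:", chosen)
```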

  5. Dynamic angle selection in X-ray computed tomography

    International Nuclear Information System (INIS)

    Dabravolski, Andrei; Batenburg, Kees Joost; Sijbers, Jan

    2014-01-01

    Highlights: • We propose the dynamic angle selection algorithm for CT scanning. • The approach is based on the concept of information gain over a set of solutions. • Projection angles are selected based on the already available projection data. • The approach can lead to more accurate results from fewer projections. - Abstract: In X-ray tomography, a number of radiographs (projections) are recorded from which a tomogram is then reconstructed. Conventionally, these projections are acquired equiangularly, resulting in an unbiased sampling of the Radon space. However, especially in the case when only a limited number of projections can be acquired, the selection of the angles has a large impact on the quality of the reconstructed image. In this paper, a dynamic algorithm is proposed, in which new projection angles are selected by maximizing the information gain about the object, given the set of possible new angles. Experiments show that this approach can select projection angles for which the accuracy of the reconstructed image is significantly higher compared to standard angle selection schemes.

  6. Optimization of Protocol CT, PET-CT, whole body

    International Nuclear Information System (INIS)

    Gutierrez, Fredys Santos; Namias, Mauro

    2013-01-01

    The objective of this study was to optimize the existing acquisition and processing protocols of the PET/CT scanner in clinical use at the Nuclear Diagnostic Center Foundation, in a way that minimizes radiation dose while properly maintaining diagnostic image quality. Dosimetric data of the PET/CT service were surveyed to obtain the baseline against which strategies and modifications to be developed could be compared and defined. We selected transaxial slices at the levels of the pulmonary hilum and the liver as the anatomical regions of interest on which the standardization of the study was based.

  7. Road network selection for small-scale maps using an improved centrality-based algorithm

    Directory of Open Access Journals (Sweden)

    Roy Weiss

    2014-12-01

    Full Text Available The road network is one of the key feature classes in topographic maps and databases. In the task of deriving road networks for products at smaller scales, road network selection forms a prerequisite for all other generalization operators, and is thus a fundamental operation in the overall process of topographic map and database production. The objective of this work was to develop an algorithm for automated road network selection from a large-scale (1:10,000) to a small-scale database (1:200,000). The project was pursued in collaboration with swisstopo, the national mapping agency of Switzerland, with generic mapping requirements in mind. Preliminary experiments suggested that a selection algorithm based on betweenness centrality performed best for this purpose, yet also exposed problems. The main contribution of this paper thus consists of four extensions that address deficiencies of the basic centrality-based algorithm and lead to a significant improvement of the results. The first two extensions improve the formation of strokes concatenating the road segments, which is crucial since strokes provide the foundation upon which the network centrality measure is computed. Thus, the first extension ensures that roundabouts are detected and collapsed, thus avoiding interruptions of strokes by roundabouts, while the second introduces additional semantics in the process of stroke formation, allowing longer and more plausible strokes to be built. The third extension detects areas of high road density (i.e., urban areas) using density-based clustering and then locally increases the threshold of the centrality measure used to select road segments, such that more thinning takes place in those areas. Finally, since the basic algorithm tends to create dead-ends, which are not tolerated in small-scale maps, the fourth extension reconnects these dead-ends to the main network, searching for the best path in the main heading of the dead-end.
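
    The core selection step, ranking segments by betweenness centrality and keeping the most central ones, is easy to sketch with networkx; the four extensions (stroke building, roundabout collapsing, density-adaptive thresholds, dead-end repair) are deliberately omitted here, and the tiny graph is invented for illustration.

```python
import networkx as nx

# Toy road graph: nodes are junctions, edge weights are segment lengths.
G = nx.Graph()
edges = [("a", "b", 2.0), ("b", "c", 1.5), ("c", "d", 2.5), ("b", "e", 1.0),
         ("e", "f", 2.0), ("f", "c", 1.0), ("e", "g", 3.0), ("g", "h", 1.5)]
G.add_weighted_edges_from(edges, weight="length")

# Rank every segment by how many shortest paths pass through it, then keep
# the top fraction for the smaller-scale map.
centrality = nx.edge_betweenness_centrality(G, weight="length")
keep_fraction = 0.5
n_keep = max(1, int(keep_fraction * G.number_of_edges()))
selected = sorted(centrality, key=centrality.get, reverse=True)[:n_keep]

print("segments kept for the small-scale map:")
for u, v in selected:
    print(f"  {u}-{v}  (centrality {centrality[(u, v)]:.3f})")
```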

  8. WE-G-207-05: Relationship Between CT Image Quality, Segmentation Performance, and Quantitative Image Feature Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J; Nishikawa, R [University of Pittsburgh, Pittsburgh, PA (United States); Reiser, I [The University of Chicago, Chicago, IL (United States); Boone, J [UC Davis Medical Center, Sacramento, CA (United States)

    2015-06-15

    Purpose: Segmentation quality can affect quantitative image feature analysis. The objective of this study is to examine the relationship between computed tomography (CT) image quality, segmentation performance, and quantitative image feature analysis. Methods: A total of 90 pathology proven breast lesions in 87 dedicated breast CT images were considered. An iterative image reconstruction (IIR) algorithm was used to obtain CT images with different quality. With different combinations of 4 variables in the algorithm, this study obtained a total of 28 different qualities of CT images. Two imaging tasks/objectives were considered: 1) segmentation and 2) classification of the lesion as benign or malignant. Twenty-three image features were extracted after segmentation using a semi-automated algorithm and 5 of them were selected via a feature selection technique. Logistic regression was trained and tested using leave-one-out-cross-validation and its area under the ROC curve (AUC) was recorded. The standard deviation of a homogeneous portion and the gradient of a parenchymal portion of an example breast were used as an estimate of image noise and sharpness. The DICE coefficient was computed using a radiologist’s drawing on the lesion. Mean DICE and AUC were used as performance metrics for each of the 28 reconstructions. The relationship between segmentation and classification performance under different reconstructions were compared. Distributions (median, 95% confidence interval) of DICE and AUC for each reconstruction were also compared. Results: Moderate correlation (Pearson’s rho = 0.43, p-value = 0.02) between DICE and AUC values was found. However, the variation between DICE and AUC values for each reconstruction increased as the image sharpness increased. There was a combination of IIR parameters that resulted in the best segmentation with the worst classification performance. Conclusion: There are certain images that yield better segmentation or classification

  9. Optimal Parameter Selection of Power System Stabilizer using Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Hyeng Hwan; Chung, Dong Il; Chung, Mun Kyu [Dong-A University (Korea); Wang, Yong Peel [Canterbury University (New Zealand)

    1999-06-01

    In this paper, a method is suggested for selecting optimal parameters of a power system stabilizer (PSS) that are robust against low-frequency oscillation in power systems, using a real-variable elitism genetic algorithm (RVEGA). The optimal parameters were selected for power system stabilizers with one lead compensator and with two lead compensators. In addition, the frequency response characteristics of the PSS, the system eigenvalue criterion, and the dynamic characteristics under normal and heavy load were considered, which proved the usefulness of the RVEGA compared with Yu's compensator design theory. (author). 20 refs., 15 figs., 8 tabs.

  10. Automated CT-based segmentation and quantification of total intracranial volume

    Energy Technology Data Exchange (ETDEWEB)

    Aguilar, Carlos; Wahlund, Lars-Olof; Westman, Eric [Karolinska Institute, Department of Neurobiology, Care Sciences and Society (NVS), Division of Clinical Geriatrics, Stockholm (Sweden); Edholm, Kaijsa; Cavallin, Lena; Muller, Susanne; Axelsson, Rimma [Karolinska Institute, Department of Clinical Science, Intervention and Technology, Division of Medical Imaging and Technology, Stockholm (Sweden); Karolinska University Hospital in Huddinge, Department of Radiology, Stockholm (Sweden); Simmons, Andrew [King' s College London, Institute of Psychiatry, London (United Kingdom); NIHR Biomedical Research Centre for Mental Health and Biomedical Research Unit for Dementia, London (United Kingdom); Skoog, Ingmar [Gothenburg University, Department of Psychiatry and Neurochemistry, The Sahlgrenska Academy, Gothenburg (Sweden); Larsson, Elna-Marie [Uppsala University, Department of Surgical Sciences, Radiology, Akademiska Sjukhuset, Uppsala (Sweden)

    2015-11-15

    To develop an algorithm to segment and obtain an estimate of total intracranial volume (tICV) from computed tomography (CT) images. Thirty-six CT examinations from 18 patients were included. Ten patients were examined twice on the same day and eight patients twice six months apart (these patients also underwent MRI). The algorithm combines morphological operations, intensity thresholding and mixture modelling. The method was validated against manual delineation and its robustness assessed from repeated imaging examinations. Using automated MRI software, the comparability with MRI was investigated. Volumes were compared based on average relative volume differences and their magnitudes; agreement was shown by a Bland-Altman analysis graph. We observed good agreement between our algorithm and manual delineation of a trained radiologist: Pearson's correlation coefficient was r = 0.94, tICVml[manual] = 1.05 × tICVml[automated] − 33.78 (R² = 0.88). Bland-Altman analysis showed a bias of 31 mL and a standard deviation of 30 mL over a range of 1265 to 1526 mL. tICV measurements derived from CT using our proposed algorithm have shown to be reliable and consistent compared to manual delineation. However, it appears difficult to directly compare tICV measures between CT and MRI. (orig.)
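
    A stripped-down sketch of such a pipeline (thresholding plus morphology plus largest-component selection) is shown below. The HU window and structuring element are illustrative guesses, and the paper's intensity mixture modelling step is omitted.

```python
import numpy as np
from scipy import ndimage

def estimate_ticv(ct_hu, voxel_volume_ml, hu_low=-20, hu_high=100):
    """Rough intracranial-volume sketch: threshold head CT to soft-tissue/CSF
    HU, clean up with morphology, keep the largest connected component, and
    convert the voxel count to millilitres. Parameters are illustrative."""
    mask = (ct_hu > hu_low) & (ct_hu < hu_high)          # exclude bone and air
    mask = ndimage.binary_opening(mask, ndimage.generate_binary_structure(3, 1))
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0.0
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    mask = labels == (int(np.argmax(sizes)) + 1)         # largest component
    mask = ndimage.binary_fill_holes(mask)               # include ventricles etc.
    return float(mask.sum()) * voxel_volume_ml

# Synthetic demo volume: a "skull" shell of bone HU around soft tissue.
vol = np.full((60, 60, 60), -1000.0)                     # air
zz, yy, xx = np.mgrid[:60, :60, :60]
r = np.sqrt((zz - 30) ** 2 + (yy - 30) ** 2 + (xx - 30) ** 2)
vol[r < 25] = 1000.0                                     # bone shell
vol[r < 22] = 35.0                                       # brain-like tissue
print(f"tICV ~ {estimate_ticv(vol, voxel_volume_ml=0.001):.0f} mL")
```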

  11. Automated CT-based segmentation and quantification of total intracranial volume

    International Nuclear Information System (INIS)

    Aguilar, Carlos; Wahlund, Lars-Olof; Westman, Eric; Edholm, Kaijsa; Cavallin, Lena; Muller, Susanne; Axelsson, Rimma; Simmons, Andrew; Skoog, Ingmar; Larsson, Elna-Marie

    2015-01-01

    To develop an algorithm to segment and obtain an estimate of total intracranial volume (tICV) from computed tomography (CT) images. Thirty-six CT examinations from 18 patients were included. Ten patients were examined twice on the same day and eight patients twice six months apart (these patients also underwent MRI). The algorithm combines morphological operations, intensity thresholding and mixture modelling. The method was validated against manual delineation and its robustness assessed from repeated imaging examinations. Using automated MRI software, the comparability with MRI was investigated. Volumes were compared based on average relative volume differences and their magnitudes; agreement was shown by a Bland-Altman analysis graph. We observed good agreement between our algorithm and manual delineation of a trained radiologist: Pearson's correlation coefficient was r = 0.94, tICVml[manual] = 1.05 × tICVml[automated] − 33.78 (R² = 0.88). Bland-Altman analysis showed a bias of 31 mL and a standard deviation of 30 mL over a range of 1265 to 1526 mL. tICV measurements derived from CT using our proposed algorithm have shown to be reliable and consistent compared to manual delineation. However, it appears difficult to directly compare tICV measures between CT and MRI. (orig.)

  12. Scout-view assisted interior micro-CT

    International Nuclear Information System (INIS)

    Sharma, Kriti Sen; Narayanan, Shree; Agah, Masoud; Holzner, Christian; Vasilescu, Dragoş M; Jin, Xin; Hoffman, Eric A; Yu, Hengyong; Wang, Ge

    2013-01-01

    Micro computed tomography (micro-CT) is a widely-used imaging technique. A challenge of micro-CT is to quantitatively reconstruct a sample larger than the field-of-view (FOV) of the detector. This scenario is characterized by truncated projections and associated image artifacts. However, for such truncated scans, a low resolution scout scan with an increased FOV is frequently acquired so as to position the sample properly. This study shows that the otherwise discarded scout scans can provide sufficient additional information to uniquely and stably reconstruct the interior region of interest. Two interior reconstruction methods are designed to utilize the multi-resolution data without significant computational overhead. While most previous studies used numerically truncated global projections as interior data, this study uses truly hybrid scans where global and interior scans were carried out at different resolutions. Additionally, owing to the lack of standard interior micro-CT phantoms, we designed and fabricated novel interior micro-CT phantoms for this study to provide means of validation for our algorithms. Finally, two characteristic samples from separate studies were scanned to show the effect of our reconstructions. The presented methods show significant improvements over existing reconstruction algorithms. (paper)

  13. Image reconstruction design of industrial CT instrument for teaching

    International Nuclear Information System (INIS)

    Zou Yongning; Cai Yufang

    2009-01-01

    The industrial CT instrument for teaching is used for teaching and study in physics and radiology programs, and image reconstruction is an important part of the instrument's software. This paper expounds the physical theory of CT and the first-generation CT reconstruction algorithm, and describes the scanning process of the industrial CT instrument for teaching. It analyzes the image artifacts that result from displacement of the rotation center, implements a method for correcting the center displacement, and designs and completes the image reconstruction software. Application shows that the reconstructed images are very clear and of high quality. (authors)

  14. A Local Asynchronous Distributed Privacy Preserving Feature Selection Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — In this paper we develop a local distributed privacy preserving algorithm for feature selection in a large peer-to-peer environment. Feature selection is often used...

  15. New CT-aided stereotactic neurosurgery technique

    International Nuclear Information System (INIS)

    Shao, H.M.; Truong, T.K.; Reed, I.S.; Slater, R.A.

    1985-01-01

    In this communication, a new technique for CT-aided stereotactic neurosurgery is presented. The combination of specially designed hardware and software provides a fast, simple, and versatile tool for the accurate insertion of a probe into the human brain. This system is portable and can be implemented on any CT computer system. The complete procedure to perform the CT-aided stereotactic neurosurgery technique is presented. Experimental results are given which demonstrate the power of the method. Finally, the key algorithms for realizing this technique are described in the Appendix

  16. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on the technology of the Big Bang–Big Crunch Optimization (BBBCO) is proposed. The new proposed thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions for maximizing the entropy of the fuzzy 2-partition. BBBCO is inspired by the theory of the evolution of the universe, namely the Big Bang and Big Crunch Theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other algorithms, namely Genetic Algorithm (GA)-based, Biogeography-based Optimization (BBO)-based, and recursive approaches, are also implemented. From the experimental results, it is observed that the performance of the proposed algorithm is more effective than the GA-based, BBO-based, and recursion-based approaches.
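
    The objective being maximized is easy to state in code. The sketch below uses a simple linear membership ramp and a brute-force grid search in place of BBBCO; the optimizer is what the paper contributes, while the criterion itself is the standard fuzzy 2-partition entropy.

```python
import numpy as np

def fuzzy_2partition_entropy(hist, a, c):
    """Fuzzy 2-partition entropy for one (a, c) parameter pair, using a linear
    Z/S membership ramp between a and c (a common choice; the exact membership
    family used in the paper may differ)."""
    g = np.arange(256)
    mu_dark = np.clip((c - g) / max(c - a, 1e-9), 0.0, 1.0)   # "dark" partition
    p = hist / hist.sum()
    p_dark = np.sum(p * mu_dark)
    p_bright = 1.0 - p_dark
    eps = 1e-12
    return -(p_dark * np.log(p_dark + eps) + p_bright * np.log(p_bright + eps))

def best_threshold(hist):
    """Exhaustive grid search over (a, c); the paper replaces this search with
    Big Bang-Big Crunch optimization, which matters for speed, not the model."""
    best = (-np.inf, None)
    for a in range(0, 254, 4):
        for c in range(a + 2, 256, 4):
            h = fuzzy_2partition_entropy(hist, a, c)
            if h > best[0]:
                best = (h, (a + c) // 2)     # threshold at the crossover point
    return best[1]

# Bimodal synthetic histogram standing in for a grey-level image.
rng = np.random.default_rng(6)
pixels = np.concatenate([rng.normal(70, 12, 30000), rng.normal(180, 18, 20000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
print("selected threshold:", best_threshold(hist))
```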

  17. Mass preserving image registration for lung CT

    DEFF Research Database (Denmark)

    Gorbunova, Vladlena; Sporring, Jon; Lo, Pechin Chien Pau

    2012-01-01

    This paper presents a mass preserving image registration algorithm for lung CT images. To account for the local change in lung tissue intensity during the breathing cycle, a tissue appearance model based on the principle of preservation of total lung mass is proposed. This model is incorporated into the registration framework, and the method was evaluated on four groups of data: 44 pairs of longitudinal inspiratory chest CT scans with small differences in lung volume; 44 pairs of longitudinal inspiratory chest CT scans with large differences in lung volume; 16 pairs of expiratory and inspiratory CT scans; and 5 pairs of images extracted at end exhale and end inhale.

  18. Optimum location of external markers using feature selection algorithms for real-time tumor tracking in external-beam radiotherapy: a virtual phantom study.

    Science.gov (United States)

    Nankali, Saber; Torshabi, Ahmad Esmaili; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-08

    In external-beam radiotherapy, using external markers is one of the most reliable tools to predict tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the location of the external markers; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum location of external markers, in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using the four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) used as the prediction model was considered as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motion was predicted by ANFIS using the external motion data of the markers in each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately. For this, each tumor motion was predicted using the motion data of those external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results reflected that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature selection algorithm, in
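
    The flavour of the correlation-based variant can be sketched with synthetic traces: rank candidate external marker positions by the correlation of their motion with the internal tumor motion. A full correlation-based feature selection also penalizes redundancy among the selected markers, and the paper feeds the selection into an ANFIS predictor; both refinements are omitted in this illustration, and all data below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic breathing traces: one internal tumor trajectory and 49 candidate
# external marker traces whose coupling to the tumor varies with position
# (a stand-in for the anthropomorphic-phantom data used in the paper).
t = np.linspace(0, 30, 600)
tumor = np.sin(2 * np.pi * 0.25 * t)
coupling = rng.uniform(0.1, 1.0, 49)
markers = np.array([c * np.sin(2 * np.pi * 0.25 * t + rng.normal(0, 0.3))
                    + rng.normal(0, 0.2, t.size) for c in coupling])

# Correlation-based ranking: score each candidate marker location by the
# absolute correlation of its trace with the tumor trace, then keep the best
# few as inputs to the motion-prediction model.
scores = np.array([abs(np.corrcoef(m, tumor)[0, 1]) for m in markers])
best = np.argsort(scores)[::-1][:5]
print("best marker indices:", best, "scores:", np.round(scores[best], 2))
```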

  19. Ultrasensitive electrochemical aptasensor based on sandwich architecture for selective label-free detection of colorectal cancer (CT26) cells.

    Science.gov (United States)

    Hashkavayi, Ayemeh Bagheri; Raoof, Jahan Bakhsh; Ojani, Reza; Kavoosian, Saeid

    2017-06-15

    Colorectal cancer is one of the most common cancers in the world and has no effective treatment. Therefore, the development of new methods for early diagnosis is urgently required. Biological recognition probes such as synthetic receptors and aptamers are candidate recognition layers for detecting important biomolecules. In this work, an electrochemical aptasensor was developed by fabricating an aptamer-cell-aptamer sandwich architecture on an SBA-15-3-aminopropyltriethoxysilane (SBA-15-pr-NH2) and Au nanoparticle (AuNPs) modified graphite screen printed electrode (GSPE) surface for the selective, label-free detection of CT26 cancer cells. Upon incubation of the thiolated aptamer with CT26 cells, the electron-transfer resistance of the [Fe(CN)6]3−/4− redox couple increased considerably on the aptasensor surface. The results obtained from cyclic voltammetry and electrochemical impedance spectroscopy studies showed that the fabricated aptasensor can specifically identify CT26 cells in the concentration ranges of 10–1.0×10⁵ cells/mL and 1.0×10⁵–6.0×10⁶ cells/mL, respectively, with a detection limit of 2 cells/mL. Applying the thiol-terminated aptamer (5TR1) as a recognition layer led to a sensor with high affinity for CT26 cancer cells, compared to the control cancer cells AGS, VERO, PC3 and SKOV-3. Therefore, a simple, rapid, label-free, inexpensive, sensitive and selective electrochemical aptasensor based on a sandwich architecture was developed for the detection of CT26 cells. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Ultra-low dose CT attenuation correction for PET/CT

    International Nuclear Information System (INIS)

    Xia Ting; Kinahan, Paul E; Alessio, Adam M; De Man, Bruno; Manjeshwar, Ravindra; Asma, Evren

    2012-01-01

    A challenge for positron emission tomography/computed tomography (PET/CT) quantitation is patient respiratory motion, which can cause an underestimation of lesion activity uptake and an overestimation of lesion volume. Several respiratory motion correction methods benefit from longer duration CT scans that are phase matched with PET scans. However, even with the currently available, lowest dose CT techniques, extended duration cine CT scans impart a substantially high radiation dose. This study evaluates methods designed to reduce CT radiation dose in PET/CT scanning. We investigated selected combinations of dose reduced acquisition and noise suppression methods that take advantage of the reduced requirement of CT for PET attenuation correction (AC). These include reducing CT tube current, optimizing CT tube voltage, adding filtration, CT sinogram smoothing and clipping. We explored the impact of these methods on PET quantitation via simulations on different digital phantoms. CT tube current can be reduced much lower for AC than that in low dose CT protocols. Spectra that are higher energy and narrower are generally more dose efficient with respect to PET image quality. Sinogram smoothing could be used to compensate for the increased noise and artifacts at radiation dose reduced CT images, which allows for a further reduction of CT dose with no penalty for PET image quantitation. When CT is not used for diagnostic and anatomical localization purposes, we showed that ultra-low dose CT for PET/CT is feasible. The significant dose reduction strategies proposed here could enable respiratory motion compensation methods that require extended duration CT scans and reduce radiation exposure in general for all PET/CT imaging. (paper)
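
    Two of the conditioning steps mentioned, sinogram smoothing and clipping, are simple to sketch. The filter width and clip level below are illustrative assumptions, not values evaluated in the study; the point is only that attenuation correction tolerates far more smoothing than diagnostic imaging does.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def condition_ac_sinogram(sino, smooth_sigma=2.0, clip_max=None):
    """Sketch of conditioning a noisy low-dose sinogram used only for PET
    attenuation correction: smooth it (AC needs far less spatial resolution
    than diagnostic CT) and clip implausibly high line integrals caused by
    photon starvation. Parameter values are illustrative."""
    out = gaussian_filter(sino, sigma=smooth_sigma)
    if clip_max is not None:
        out = np.minimum(out, clip_max)
    return out

# Demo on a synthetic sinogram of line integrals with additive noise.
rng = np.random.default_rng(8)
clean = np.outer(np.hanning(180), np.hanning(256)) * 5.0
noisy = clean + rng.normal(0, 0.8, clean.shape)
conditioned = condition_ac_sinogram(noisy, smooth_sigma=2.0, clip_max=4.5)
print("noise std before/after:",
      round(float((noisy - clean).std()), 3),
      round(float((conditioned - clean).std()), 3))
```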

  1. Ultra-low dose CT attenuation correction for PET/CT

    Science.gov (United States)

    Xia, Ting; Alessio, Adam M.; De Man, Bruno; Manjeshwar, Ravindra; Asma, Evren; Kinahan, Paul E.

    2012-01-01

    A challenge for PET/CT quantitation is patient respiratory motion, which can cause an underestimation of lesion activity uptake and an overestimation of lesion volume. Several respiratory motion correction methods benefit from longer duration CT scans that are phase matched with PET scans. However, even with the currently-available, lowest dose CT techniques, extended duration CINE CT scans impart a substantially high radiation dose. This study evaluates methods designed to reduce CT radiation dose in PET/CT scanning. Methods We investigated selected combinations of dose reduced acquisition and noise suppression methods that take advantage of the reduced requirement of CT for PET attenuation correction (AC). These include reducing CT tube current, optimizing CT tube voltage, adding filtration, CT sinogram smoothing and clipping. We explored the impact of these methods on PET quantitation via simulations on different digital phantoms. Results CT tube current can be reduced much lower for AC than that in low dose CT protocols. Spectra that are higher energy and narrower are generally more dose efficient with respect to PET image quality. Sinogram smoothing could be used to compensate for the increased noise and artifacts at radiation dose reduced CT images, which allows for a further reduction of CT dose with no penalty for PET image quantitation. Conclusion When CT is not used for diagnostic and anatomical localization purposes, we showed that ultra-low dose CT for PET/CT is feasible. The significant dose reduction strategies proposed here could enable respiratory motion compensation methods that require extended duration CT scans and reduce radiation exposure in general for all PET/CT imaging. PMID:22156174

  2. MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I

    Energy Technology Data Exchange (ETDEWEB)

    Denison, K; Smith, S [GE Healthcare, Waukesha, WI (United States)

    2014-06-15

    DICOM Radiation Dose Structured Report (RDSR) generates a dose report at the conclusion of every examination. Dose Check preemptively notifies CT operators when scan parameters exceed user-defined dose thresholds. DoseWatch is an information technology application providing vendor-agnostic dose tracking and analysis for CT (and all other diagnostic x-ray modalities). SnapShot Pulse improves coronary CTA dose management. VolumeShuttle uses two acquisitions to increase coverage, decrease dose, and conserve on contrast administration. Color-Coding for Kids applies the Broselow-Luten Pediatric System to facilitate pediatric emergency care and reduce medical errors. FeatherLight achieves dose optimization through pediatric procedure-based protocols. Adventure Series scanners provide a child-friendly imaging environment promoting patient cooperation with resultant reduction in retakes and patient motion. Philips CT Dose Optimization Tools and Advanced Reconstruction Presentation Time: 11:45–12:15 PM The first part of the talk will cover “Dose Reduction and Dose Optimization Technologies” present in Philips CT scanners. The main technologies to be presented include: DoseRight and tube current modulation (DoseRight, Z-DOM, 3D-DOM, DoseRight Cardiac); special acquisition modes; beam filtration and beam shapers; the Eclipse collimator and ClearRay collimator; and the NanoPanel detector. DoseRight will cover automatic tube current selection that automatically adjusts the dose for the individual patient. The presentation will explore the modulation techniques currently employed in Philips CT scanners and will include the algorithmic concepts as well as illustrative examples. Modulation and current selection technologies to be covered include the Automatic Current Selection component of DoseRight, Z-DOM longitudinal dose modulation, 3D-DOM (a combination of longitudinal and rotational dose modulation), Cardiac DoseRight (an ECG-based dose modulation scheme), and the DoseRight Index (DRI) IQ

  3. MO-PIS-Exhibit Hall-01: Imaging: CT Dose Optimization Technologies I

    International Nuclear Information System (INIS)

    Denison, K; Smith, S

    2014-01-01

    DICOM Radiation Dose Structured Report (RDSR) generates a dose report at the conclusion of every examination. Dose Check preemptively notifies CT operators when scan parameters exceed user-defined dose thresholds. DoseWatch is an information technology application providing vendor-agnostic dose tracking and analysis for CT (and all other diagnostic x-ray modalities). SnapShot Pulse improves coronary CTA dose management. VolumeShuttle uses two acquisitions to increase coverage, decrease dose, and conserve on contrast administration. Color-Coding for Kids applies the Broselow-Luten Pediatric System to facilitate pediatric emergency care and reduce medical errors. FeatherLight achieves dose optimization through pediatric procedure-based protocols. Adventure Series scanners provide a child-friendly imaging environment promoting patient cooperation with resultant reduction in retakes and patient motion. Philips CT Dose Optimization Tools and Advanced Reconstruction Presentation Time: 11:45–12:15 PM The first part of the talk will cover “Dose Reduction and Dose Optimization Technologies” present in Philips CT scanners. The main technologies to be presented include: DoseRight and tube current modulation (DoseRight, Z-DOM, 3D-DOM, DoseRight Cardiac); special acquisition modes; beam filtration and beam shapers; the Eclipse collimator and ClearRay collimator; and the NanoPanel detector. DoseRight will cover automatic tube current selection that automatically adjusts the dose for the individual patient. The presentation will explore the modulation techniques currently employed in Philips CT scanners and will include the algorithmic concepts as well as illustrative examples. Modulation and current selection technologies to be covered include the Automatic Current Selection component of DoseRight, Z-DOM longitudinal dose modulation, 3D-DOM (a combination of longitudinal and rotational dose modulation), Cardiac DoseRight (an ECG-based dose modulation scheme), and the DoseRight Index (DRI) IQ

  4. Adaptive patch-based POCS approach for super resolution reconstruction of 4D-CT lung data

    International Nuclear Information System (INIS)

    Wang, Tingting; Cao, Lei; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu

    2015-01-01

    Image enhancement of lung four-dimensional computed tomography (4D-CT) data is highly important because image resolution remains a crucial point in lung cancer radiotherapy. In this paper, we propose a method for lung 4D-CT super resolution (SR) using an adaptive-patch-based projection onto convex sets (POCS) approach, in contrast with the global POCS SR algorithm, to recover fine details with fewer artifacts in images. The main contribution of this patch-based approach is that the interfering local structure from other phases can be rejected by employing a similar-patch adaptive selection strategy. The effectiveness of our approach is demonstrated through experiments on simulated images and real lung 4D-CT datasets. A comparison with previously published SR reconstruction methods highlights the favorable characteristics of the proposed method. (paper)

  5. Local anesthesia selection algorithm in patients with concomitant somatic diseases.

    Science.gov (United States)

    Anisimova, E N; Sokhov, S T; Letunova, N Y; Orekhova, I V; Gromovik, M V; Erilin, E A; Ryazantsev, N A

    2016-01-01

    The paper presents the basic principles of local anesthesia selection in patients with concomitant somatic diseases. These principles are: history taking; analysis of drug interactions with local anesthetic and sedation agents; determination of the functional status of the patient; correction of patient anxiety; and dental care with monitoring of hemodynamic parameters. It was found that adhering to this algorithm helps prevent urgent conditions in outpatient dental patients.

  6. Low dose reconstruction algorithm for differential phase contrast imaging.

    Science.gov (United States)

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Stampanoni, Marco

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel x-ray inspection method that reconstructs the distribution of the refractive index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT which benefits from compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way the compressed sensing reconstruction problem of DPCI can be transformed into a solved problem in transmission CT imaging. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data. Thus it can significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.

  7. ALGORITHM OF SELECTION EFFECTIVE SOLUTIONS FOR REPROFILING OF INDUSTRIAL BUILDINGS

    Directory of Open Access Journals (Sweden)

    MENEJLJUK A. I.

    2016-08-01

    Full Text Available Problem statement. Industrial enterprises built during the Soviet period no longer comply with today's requirements; substantial technical progress, economic reform, and the transition to market-based principles of performance evaluation make it necessary to change their purpose and functionality. The technical condition of many industrial buildings in Ukraine allows them to be exploited for decades to come. Redesigning manufacturing facilities not only reduces construction costs but also yields new facilities within the city. Despite the large number of industrial buildings that have lost their effectiveness and relevance, and despite significant investor interest in these objects, redevelopment remains an understudied area of construction. Analysis of research on the topic. The problem of reconstructing industrial buildings is considered by Topchy D. [3], Travin V. [9], and other scientists. However, regulatory documents contain no rules, and there are no systematic studies, for improving the organization of building reconstruction during reprofiling. The purpose of this work is to develop an algorithm for selecting effective organizational decisions at the planning stage of an industrial-building reprofiling project. The proposed algorithm makes it possible to select an effective organizational and technological solution for the reprofiling of industrial buildings, taking into account the features of the building, its location, the state of its structures, and existing restrictions. The most effective organizational solution allows the reprofiling project to be realized in the shortest possible time and with the lowest possible use of material resources, given the available features and restrictions. Conclusion. Each object has a number of unique features that must be considered when choosing an effective reprofiling variant. The developed algorithm for selecting

  8. A motion algorithm to extract physical and motion parameters of mobile targets from cone-beam computed tomographic images.

    Science.gov (United States)

    Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad

    2016-05-17

    A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses measurable parameters from the CBCT images, namely the apparent length and the blurred CT number distribution of the mobile target, to determine the length and CT number of the stationary target and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted from the CBCT images: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solves for the three unknowns using the measured length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation or the CT number distribution of the mobile target and could not be determined. In summary, a motion algorithm has been developed to extract three parameters (length, CT number level, and motion amplitude or speed) of mobile targets directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking or sorting of the images into breathing phases. The motion model developed here works well for tumors that have simple shapes, high contrast relative to surrounding tissues, and a nearly regular motion pattern.
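
    The abstract does not give the exact inversion formulas, but a closed-form sketch is possible under a simplified assumption: if the motion blur is modeled as a boxcar of width 2A (uniform-speed motion), the blurred profile of a uniform target of length L and stationary CT number H0 is a trapezoid with base width L + 2A, edge slope H0/(2A) and peak H0·min(1, L/(2A)). The following hypothetical sketch inverts those three measurements:

        def solve_motion_params(apparent_length, peak_hu, ramp_slope):
            """Recover target length L, stationary CT number H0 and motion
            amplitude A from a blurred 1-D profile, assuming a boxcar blur of
            width 2A (a simplification of the sinusoidal motion in the paper).
            Under that model the profile is a trapezoid with base width L + 2A,
            edge slope H0 / (2A) and peak H0 * min(1, L / (2A))."""
            # Case L >= 2A: the plateau reaches the full stationary value H0.
            h0 = peak_hu
            a = h0 / (2.0 * ramp_slope)
            length = apparent_length - 2.0 * a
            if length < 2.0 * a:
                # Case L < 2A: peak is attenuated, peak = H0 * L / (2A), while
                # slope = H0 / (2A) still holds, so L = peak / slope.
                length = peak_hu / ramp_slope
                a = (apparent_length - length) / 2.0
                h0 = 2.0 * a * ramp_slope
            return length, h0, a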

  9. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz

    2016-12-29

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.

  10. A Genetic Algorithm-based Antenna Selection Approach for Large-but-Finite MIMO Networks

    KAUST Repository

    Makki, Behrooz; Ide, Anatole; Svensson, Tommy; Eriksson, Thomas; Alouini, Mohamed-Slim

    2016-01-01

    We study the performance of antenna selection-based multiple-input-multiple-output (MIMO) networks with a large but finite number of transmit antennas and receivers. Considering continuous and bursty communication scenarios with different users’ data request probabilities, we develop an efficient antenna selection scheme using genetic algorithms (GA). As demonstrated, the proposed algorithm is generic in the sense that it can be used with different objective functions, precoding methods, levels of available channel state information and channel models. Our results show that the proposed GA-based algorithm reaches (almost) the same throughput as the exhaustive search-based optimal approach, with substantially less implementation complexity.

  11. SU-E-J-209: Verification of 3D Surface Registration Between Stereograms and CT Images

    Energy Technology Data Exchange (ETDEWEB)

    Han, T; Gifford, K [UT MD Anderson Cancer Center, Houston, TX (United States); Smith, B [MD Anderson Cancer Center, Houston, TX (United States); Salehpour, M [M.D. Anderson Cancer Center, Houston, TX (United States)

    2014-06-01

    Purpose: Stereography can provide a visualization of the skin surface for radiation therapy patients. The aim of this study was to verify the registration algorithm in a commercial image analysis software package, 3dMDVultus, for the fusion of stereograms and CT images. Methods: CT and stereographic scans were acquired of a head phantom and a deformable phantom. CT images were imported into 3dMDVultus and the surface contours were generated by threshold segmentation. Stereograms were reconstructed in 3dMDVultus. The resulting surfaces were registered with the Vultus algorithm, then exported to in-house registration software and compared with four algorithms: rigid, affine, non-rigid iterative closest point (ICP) and b-spline. The RMS error (root-mean-square residual of the surface point distances) between the registered CT and stereogram surfaces was calculated and analyzed. Results: For the head phantom, the maximum RMS error between the registered CT surfaces and the stereogram was 6.6 mm for the Vultus algorithm, whereas the mean RMS error was 0.7 mm. For the deformable phantom, the maximum RMS error was 16.2 mm for the Vultus algorithm, whereas the mean RMS error was 4.4 mm. Non-rigid ICP demonstrated the best registration accuracy, with mean RMS errors within 1 mm for both phantoms. Conclusion: The accuracy of the registration algorithm in 3dMDVultus was verified; its RMS error exceeded 2 mm for the deformable case. Non-rigid ICP and b-spline algorithms improve the registration accuracy for both phantoms, especially the deformable one. For patients whose body habitus deforms during radiation therapy, more advanced non-rigid algorithms need to be used.

  12. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    International Nuclear Information System (INIS)

    Mahmood, U; Erdi, Y; Wang, W

    2014-01-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential selection algorithm, kV Assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase to CNR = 0.903 ± 0.023. The difference in the central liver dose with and without kVa was 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with a noise texture similar to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.

  13. SU-E-I-81: Assessment of CT Radiation Dose and Image Quality for An Automated Tube Potential Selection Algorithm Using Adult Anthropomorphic and ACR Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Mahmood, U; Erdi, Y; Wang, W [Memorial Sloan Kettering Cancer Center, NY, NY (United States)

    2014-06-01

    Purpose: To assess the impact of General Electric's (GE) automated tube potential selection algorithm, kV Assist (kVa), on radiation dose and image quality, with an emphasis on optimizing protocols based on noise texture. Methods: Radiation dose was assessed by inserting optically stimulated luminescence dosimeters (OSLs) throughout the body of an adult anthropomorphic phantom (CIRS). The baseline protocol was: 120 kVp, Auto mA (180 to 380 mA), noise index (NI) = 14, adaptive statistical iterative reconstruction (ASiR) of 20%, 0.8 s rotation time. Image quality was evaluated by calculating the contrast-to-noise ratio (CNR) and noise power spectrum (NPS) from the ACR CT accreditation phantom. CNRs were calculated according to the steps described in the ACR CT phantom testing document. The NPS was determined by taking the 3D FFT of the uniformity section of the ACR phantom. NPS and CNR were evaluated with and without kVa and for all available ASiR settings, ranging from 0 to 100%. Each NPS was also evaluated for its peak frequency difference (PFD) with respect to the baseline protocol. Results: The CNR for the adult male was found to decrease from CNR = 0.912 ± 0.045 for the baseline protocol without kVa to CNR = 0.756 ± 0.049 with kVa activated. When compared against the baseline protocol, the PFD at ASiR of 40% yielded a decrease in noise magnitude, as realized by the increase to CNR = 0.903 ± 0.023. The difference in the central liver dose with and without kVa was 0.07%. Conclusion: Dose reduction was insignificant in the adult phantom. As determined by NPS analysis, ASiR of 40% produced images with a noise texture similar to the baseline protocol. However, the CNR at ASiR of 40% with kVa fails to meet the current ACR CNR passing requirement of 1.0.
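
    As a point of reference for the NPS computation described above, the 2D noise power spectrum of a uniform region is conventionally estimated as NPS(u,v) = (Δx·Δy)/(Nx·Ny) · ⟨|FFT(ROI − mean)|²⟩ over an ensemble of ROIs; the peak of its radial profile is what the PFD metric compares between protocols. A minimal sketch (the ROI handling is an assumption, not the authors' code):

        import numpy as np

        def noise_power_spectrum(rois, pixel_mm):
            """2-D NPS from an ensemble of uniform-region ROIs, shape (n, N, N):
            NPS = (dx * dy / (Nx * Ny)) * mean(|FFT2(roi - roi_mean)|^2)."""
            n, ny, nx = rois.shape
            detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
            spectra = np.abs(np.fft.fft2(detrended)) ** 2
            nps = spectra.mean(axis=0) * (pixel_mm * pixel_mm) / (nx * ny)
            return np.fft.fftshift(nps)   # zero frequency at the array center

        # the peak of a radial profile of this NPS is what PFD compares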

  14. Iterative CT shading correction with no prior information

    Science.gov (United States)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image in which each structure is filled with the same CT number for its specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes the shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is assisted only by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
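
    The correction loop above condenses to a few steps: segment, subtract the template, forward project the residual, low-pass filter the line integrals, reconstruct a compensation map, and iterate. A hedged 2D analogue using the Radon transform in place of the cone-beam/FDK pair of the paper (tissue class values, filter width and the sign convention of the compensation are assumptions):

        import numpy as np
        from scipy.ndimage import gaussian_filter1d
        from skimage.transform import radon, iradon

        def shading_correction(img, class_means, theta=np.arange(180.0), n_iter=5):
            """2-D analogue of the iterative loop above (the paper reconstructs
            the 3-D compensation map with FDK). class_means holds the nominal CT
            number of each tissue type; values and filter width are assumptions."""
            class_means = np.asarray(class_means, dtype=float)
            corrected = img.astype(float).copy()
            for _ in range(n_iter):
                # 1. template: snap each pixel to the nearest tissue class value
                idx = np.argmin(np.abs(corrected[..., None] - class_means), axis=-1)
                template = class_means[idx]
                # 2. residual image holds shading plus segmentation error
                residual = corrected - template
                # 3. forward project and low-pass filter along the detector axis
                sino = gaussian_filter1d(radon(residual, theta=theta), 20.0, axis=0)
                # 4. reconstruct the low-frequency error and remove it
                compensation = iradon(sino, theta=theta,
                                      output_size=corrected.shape[0])
                corrected -= compensation
            return corrected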

  15. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations that survive the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle using the same hard dose and dose-volume constraints, and the largest achievable PTV doses obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero-weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of the sequential search used by Cycle (which could get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans than FSA (which in theory finds the global optimum), especially in large-dimensional beam weight spaces.

  16. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction.

    Science.gov (United States)

    Kang, Eunhee; Min, Junhong; Ye, Jong Chul

    2017-10-01

    Due to the potential risk of inducing cancer, radiation exposure from X-ray CT devices should be reduced for routine patient scanning. However, in low-dose X-ray CT, severe artifacts typically occur due to photon starvation, beam hardening, and other causes, all of which decrease the reliability of the diagnosis. Thus, high-quality reconstruction from low-dose X-ray CT data has become a major research topic in the CT community. Conventional model-based de-noising approaches are, however, computationally very expensive, and image-domain de-noising approaches cannot readily remove CT-specific noise patterns. To tackle these problems, we develop a new low-dose X-ray CT algorithm based on a deep-learning approach. We propose an algorithm which applies a deep convolutional neural network (CNN) to the wavelet transform coefficients of low-dose CT images. More specifically, by using a directional wavelet transform to extract the directional component of the artifacts and exploit the intra- and inter-band correlations, our deep network can effectively suppress CT-specific noise. In addition, our CNN is designed with a residual learning architecture for faster network training and better performance. Experimental results confirm that the proposed algorithm effectively removes complex noise patterns from CT images derived from a reduced X-ray dose. In addition, we show that the wavelet-domain CNN is efficient at removing noise from low-dose CT compared to existing approaches. Our results were rigorously evaluated by several radiologists at the Mayo Clinic and won second place in the 2016 "Low-Dose CT Grand Challenge." To the best of our knowledge, this work is the first deep-learning architecture for low-dose CT reconstruction which has been rigorously evaluated and proven to be effective. In addition, the proposed algorithm, in contrast to existing model-based iterative reconstruction (MBIR) methods, has considerable potential to benefit from
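
    To make the wavelet-domain idea concrete, here is a minimal, hypothetical sketch: a small residual CNN applied to DWT subbands of a CT slice. It substitutes a separable db4 wavelet for the directional transform of the paper and a three-layer network for the authors' much deeper architecture, so it illustrates only the structure, not the performance:

        import numpy as np
        import pywt
        import torch
        import torch.nn as nn

        class WaveletDenoiser(nn.Module):
            """Tiny residual CNN over stacked wavelet subbands. The paper uses a
            directional transform and a far deeper network; a separable db4 DWT
            and three conv layers stand in here to show the structure only."""
            def __init__(self, channels=4):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, channels, 3, padding=1),
                )

            def forward(self, x):               # residual learning: estimate noise
                return x - self.net(x)

        def denoise_slice(model, img):
            cA, (cH, cV, cD) = pywt.dwt2(img, "db4")
            bands = torch.tensor(np.stack([cA, cH, cV, cD])[None], dtype=torch.float32)
            with torch.no_grad():
                out = model(bands)[0].numpy()
            return pywt.idwt2((out[0], (out[1], out[2], out[3])), "db4")

        # untrained here; the call only demonstrates the shape plumbing
        denoised = denoise_slice(WaveletDenoiser(), np.random.rand(128, 128))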

  17. SU-E-I-13: Evaluation of Metal Artifact Reduction (MAR) Software On Computed Tomography (CT) Images

    International Nuclear Information System (INIS)

    Huang, V; Kohli, K

    2015-01-01

    Purpose: A new commercially available metal artifact reduction (MAR) software package for computed tomography (CT) imaging was evaluated with phantoms in the presence of metals. The goal was to assess the ability of the software to restore the CT number in the vicinity of the metals without impacting the image quality. Methods: A Catphan 504 was scanned with a GE Optima RT 580 CT scanner (GE Healthcare, Milwaukee, WI) and the images were reconstructed with and without the MAR software. Both datasets were analyzed with Image Owl QA software (Image Owl Inc, Greenwich, NY). CT number sensitometry, MTF, low contrast, uniformity, noise and spatial accuracy were compared for scans with and without MAR software. In addition, an in-house phantom was scanned with and without a stainless steel insert at three different locations. The accuracy of the CT number and the metal insert dimension were investigated as well. Results: Comparisons between scans with and without the MAR algorithm on the Catphan phantom demonstrate similar image quality. However, noise was slightly higher for the MAR algorithm. Evaluation of the CT number at various locations of the in-house phantom was also performed. The baseline HU, obtained from the scan without the metal insert, was compared to scans with the stainless steel insert at three different locations. The HU difference relative to the baseline scan was improved when the MAR algorithm was applied. In addition, the physical diameter of the stainless steel rod was over-estimated by the MAR algorithm by 0.9 mm. Conclusion: This work indicates that, in the presence of metal in CT scans, the MAR algorithm is capable of providing a more accurate CT number without compromising the overall image quality. Future work will include the dosimetric impact of the MAR algorithm.

  18. Perfusion CT in acute stroke

    International Nuclear Information System (INIS)

    Eckert, Bernd; Roether, Joachim; Fiehler, Jens; Thomalla, Goetz

    2015-01-01

    Modern multislice CT scanners enable multimodal protocols including non-enhanced CT, CT angiography, and CT perfusion. A 64-slice CT scanner provides 4 cm of coverage; to cover the whole brain, a 128- to 256-slice scanner is needed. The use of perfusion CT requires an optimized scan protocol in order to reduce radiation exposure. Compared to non-enhanced CT and CT angiography, the use of CT perfusion increases detection rates of cerebral ischemia, especially small cortical ischemic lesions, while the detection of lacunar and infratentorial stroke lesions remains limited. Perfusion CT enables estimation of collateral flow in acute occlusion of large intra- or extracranial arteries. Currently, no established reliable thresholds are available for determining infarct core and penumbral tissue by CT perfusion. Moreover, perfusion parameters depend on the processing algorithms and the software used for calculation. However, a number of studies point towards a reduction of cerebral blood volume (CBV) below 2 ml/100 g as a critical threshold that identifies the infarct core. Large CBV lesions are associated with poor outcome even in the context of recanalization. The extent of early ischemic signs on non-enhanced CT remains the main CT imaging parameter for guiding acute reperfusion treatment. Nevertheless, perfusion CT increases diagnostic and therapeutic certainty in the acute setting. Similar to stroke MRI, perfusion CT enables the identification of tissue at risk of infarction via the mismatch between the infarct core and the larger area of critical hypoperfusion. Further insights into the validity of perfusion parameters are expected from ongoing trials of mechanical thrombectomy in stroke.

  19. Spontaneous Intramuscular Hematomas of the Abdomen and Pelvis: A New Multilevel Algorithm to Direct Transarterial Embolization and Patient Management

    Energy Technology Data Exchange (ETDEWEB)

    Popov, Milen [Lausanne University Hospital, Department of Medicine (Switzerland); Sotiriadis, Charalampos; Gay, Frederique; Jouannic, Anne-Marie; Lachenal, Yann; Hajdu, Steven D.; Doenz, Francesco; Qanadli, Salah D., E-mail: salah.qanadli@chuv.ch [Lausanne University Hospital, Cardio-Thoracic and Vascular Unit, Department of Radiology (Switzerland)

    2017-04-15

    Purpose: To report our experience using a multilevel patient management algorithm to direct transarterial embolization (TAE) in managing spontaneous intramuscular hematoma (SIMH). Materials and Methods: From May 2006 to January 2014, twenty-seven patients with SIMH were referred for TAE to our radiology department. The clinical status and coagulation characteristics of the patients are analyzed. An algorithm integrating CT findings is suggested to manage SIMH. Patients were classified into three groups: Type I, SIMH with no active bleeding (AB); Type II, SIMH with AB and no muscular fascia rupture (MFR); and Type III, SIMH with MFR and AB. Type II is further subcategorized as IIa, IIb and IIc. Types IIb, IIc and III were considered for TAE. The method of embolization as well as the materials used are described. Continuous variables are presented as mean ± SD. Categorical variables are reported as percentages. Technical success, clinical success, complications and 30-day mortality were analyzed. Results: Two patients (7.5%) had Type IIb, four (15%) Type IIc and 21 (77.5%) presented Type III. The detailed CT and CTA findings, embolization procedures and materials used are described. Technical success was 96% with a complication rate of 4%. Clinical success was 88%. The bleeding-related thirty-day mortality was 15% (all with Type III). Conclusion: TAE is a safe and efficient technique to control bleeding that should be considered in selected SIMH as soon as possible. The proposed algorithm integrating CT features provides a comprehensive chart to select patients for TAE. Level of Evidence: 4.

  20. Comparison of imaging selection criteria for intra-arterial thrombectomy in acute ischemic stroke with advanced CT

    International Nuclear Information System (INIS)

    Kim, Eung Yeop; Goh, Byeong Ho; Shin, Dong Hoon; Noh, Young; Lee, Yeong-Bae

    2016-01-01

    To compare two selection criteria (noncontrast CT [NCCT] with multi-phase CT angiography [MPCTA], and CT perfusion [CTP]) for determining eligibility for thrombectomy. We retrospectively enrolled 71 patients who underwent head NCCT, 9.6-cm CTP, and craniocervical single-phase CTA (SPCTA) within 6 hours of onset. A simulated MPCTA was reconstructed from 1-mm CTP images for assessment of collateral circulation. Infarct core (relative CBF < 30 %) and penumbra (Tmax > 6 seconds) volumes were measured. An infarct core < 70 mL with a mismatch ratio > 1.2 (CTP-A), an infarct core ≤ 40 mL with a mismatch ratio > 1.8 (CTP-B), and ASPECTS > 5 with good collaterals (≥ 50 % of the MCA territory) were used to determine eligibility for thrombectomy. SPCTA was compared with the simulated MPCTA for assessment of collaterals. CTP-B determined that 11 patients were ineligible for thrombectomy, of whom three were eligible by NCCT with MPCTA and six by CTP-A. CTP-A and CTP-B each showed discrepancy with NCCT plus MPCTA in determining eligibility for thrombectomy in three patients, with no statistically significant difference (P > 0.05). The number of patients with poor collaterals was significantly higher on SPCTA than MPCTA (n = 22 and 6, respectively; P < 0.0001). The two imaging selection criteria (NCCT with MPCTA, and CTP) were statistically comparable for determining eligibility for thrombectomy. (orig.)

  1. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space gradually from the low-likelihood region towards the high-likelihood region, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
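
    A minimal sketch of the nested sampling loop may help fix ideas: live points are drawn from the prior, the lowest-likelihood point is repeatedly replaced by a draw constrained to higher likelihood, and the evidence accumulates as likelihood times shrinking prior volume. The constrained re-draw below is a naive rejection step standing in for the M-H or DREAMzs moves discussed above:

        import numpy as np

        def nested_sampling(log_like, prior_sample, n_live=100, n_iter=600):
            """Minimal nested sampling evidence estimate. The final
            live-point contribution is neglected for brevity."""
            live = np.array([prior_sample() for _ in range(n_live)])
            live_logl = np.array([log_like(p) for p in live])
            log_z = -np.inf
            log_width = np.log(1.0 - np.exp(-1.0 / n_live))  # first shell volume
            for _ in range(n_iter):
                worst = int(np.argmin(live_logl))
                log_z = np.logaddexp(log_z, log_width + live_logl[worst])
                threshold = live_logl[worst]
                while True:                   # draw from prior, keep if better
                    cand = prior_sample()
                    if log_like(cand) > threshold:
                        break
                live[worst], live_logl[worst] = cand, log_like(cand)
                log_width -= 1.0 / n_live     # prior volume shrinks by e^(-1/N)
            return log_z

        # toy check: N(0,1) likelihood under a U(-5,5) prior gives Z ~= 1/10
        rng = np.random.default_rng(1)
        log_like = lambda x: -0.5 * x * x - 0.5 * np.log(2.0 * np.pi)
        print(nested_sampling(log_like, lambda: rng.uniform(-5.0, 5.0)))  # ~ -2.3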

  2. Comparison of distribution of lung aeration measured with EIT and CT in spontaneously breathing, awake patients.

    Science.gov (United States)

    Radke, Oliver C; Schneider, Thomas; Braune, Anja; Pirracchio, Romain; Fischer, Felix; Koch, Thea

    2016-09-28

    Both electrical impedance tomography (EIT) and computed tomography (CT) allow estimation of the lung area. We compared two algorithms for the detection of the lung area per quadrant from EIT images with the lung areas derived from CT images. 39 outpatients who were scheduled for an elective CT scan of the thorax were included in the study. For each patient we recorded EIT images immediately before the CT scan. The lung area per quadrant was estimated from both CT and EIT data, using two different algorithms for the EIT data. The data showed considerable variation during spontaneous breathing. Overall correlation between EIT and CT was poor (0.58-0.77); the correlation between the two EIT algorithms was better (0.90-0.92). Bland-Altman analysis revealed an absence of bias but wide limits of agreement. Lung area estimation from CT and EIT differs significantly, most probably because of the fundamental difference in image generation.

  3. The production route selection algorithm in virtual manufacturing networks

    Science.gov (United States)

    Krenczyk, D.; Skolud, B.; Olender, M.

    2017-08-01

    Increasing requirements and competition in the global market challenge companies' profitability in production and supply chain management. This situation has become the basis for the construction of virtual organizations, which are created in response to temporary needs. The problem of production flow planning in virtual manufacturing networks is considered. The paper proposes an algorithm that selects, from the set of admissible routes, a production route that meets the technology and resource requirements under the criterion of minimum cost.
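
    In its simplest form, such a selection reduces to filtering the admissible routes and taking the cost minimum; the data below are purely hypothetical:

        # Hypothetical data: each candidate route lists the resources it needs
        # and its cost; admissible routes are those whose requirements the
        # network can satisfy, and the cheapest admissible route is selected.
        routes = [
            {"id": "R1", "resources": {"milling", "welding"}, "cost": 120.0},
            {"id": "R2", "resources": {"milling", "casting"}, "cost": 95.0},
            {"id": "R3", "resources": {"turning"}, "cost": 110.0},
        ]
        available = {"milling", "welding", "turning"}

        admissible = [r for r in routes if r["resources"] <= available]
        best = min(admissible, key=lambda r: r["cost"])
        print(best["id"])   # -> R3 (R2 needs unavailable casting; R1 costs more)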

  4. Comparison of low-contrast detectability between two CT reconstruction algorithms using voxel-based 3D printed textured phantoms.

    Science.gov (United States)

    Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan

    2016-12-01

    To use novel voxel-based 3D-printed textured phantoms to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered back-projection) and SAFIRE (sinogram-affirmed iterative reconstruction), and to determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures reflective of the liver textures, accounting for the CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform; 165 mm in diameter and 30 mm in height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU, with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Objet Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare) and observer-model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. Also, fitted curves of detectability (averaged across contrast levels

  5. Optimum location of external markers using feature selection algorithms for real‐time tumor tracking in external‐beam radiotherapy: a virtual phantom study

    Science.gov (United States)

    Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin

    2016-01-01

    In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the location of the external markers; this issue is the objective of this study. Four commercially available feature selection algorithms, 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum location of external markers, in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using a four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were used to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) used as the prediction model was taken as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motion was predicted by ANFIS using the external motion data of the given markers at each small segment separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately. For this, each tumor motion was predicted using the motion data of the external markers selected by each feature selection algorithm. Duncan's statistical test, followed by an F-test, on the final results showed that all proposed feature selection algorithms have the same performance accuracy for lung tumors. But for liver tumors, a correlation-based feature

  6. Derivation and validation of two decision instruments for selective chest CT in blunt trauma: a multicenter prospective observational study (NEXUS Chest CT).

    Science.gov (United States)

    Rodriguez, Robert M; Langdorf, Mark I; Nishijima, Daniel; Baumann, Brigitte M; Hendey, Gregory W; Medak, Anthony J; Raja, Ali S; Allen, Isabel E; Mower, William R

    2015-10-01

    Unnecessary diagnostic imaging leads to higher costs, longer emergency department stays, and increased patient exposure to ionizing radiation. We sought to prospectively derive and validate two decision instruments (DIs) for selective chest computed tomography (CT) in adult blunt trauma patients. From September 2011 to May 2014, we prospectively enrolled blunt trauma patients over 14 years of age presenting to eight US urban level 1 trauma centers in this observational study. During the derivation phase, physicians recorded the presence or absence of 14 clinical criteria before viewing chest imaging results. We determined injury outcomes from CT radiology readings and categorized injuries as major or minor according to an expert-panel-derived clinical classification scheme. We then employed recursive partitioning to derive two DIs: Chest CT-All maximized sensitivity for all injuries, and Chest CT-Major maximized sensitivity for only major thoracic injuries (while increasing specificity). In the validation phase, we employed similar methodology to prospectively test the performance of both DIs. We enrolled 11,477 patients: 6,002 in the derivation phase and 5,475 in the validation phase. The derived Chest CT-All DI consisted of (1) abnormal chest X-ray, (2) rapid deceleration mechanism, (3) distracting injury, (4) chest wall tenderness, (5) sternal tenderness, (6) thoracic spine tenderness, and (7) scapular tenderness; the Chest CT-Major DI had the same criteria without the rapid deceleration mechanism. In the validation phase, Chest CT-All had a sensitivity of 99.2% (95% CI 95.4%-100%), a specificity of 20.8% (95% CI 19.2%-22.4%), and a negative predictive value (NPV) of 99.8% (95% CI 98.9%-100%) for major injury, and a sensitivity of 95.4% (95% CI 93.6%-96.9%), a specificity of 25.5% (95% CI 23.5%-27.5%), and an NPV of 93.9% (95% CI 91.5%-95.8%) for either major or minor injury. Chest CT-Major had a sensitivity of 99.2% (95% CI 95.4%-100%), a specificity of
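
    As a screening rule, the derived instrument amounts to a disjunction over its criteria: CT is indicated if any criterion is present, and a fully negative checklist supports omitting CT. A hypothetical sketch (criterion names paraphrased from the list above):

        # Hedged sketch of the Chest CT-All instrument as a screening rule.
        # Criterion names are paraphrased from the abstract, not official labels.
        CHEST_CT_ALL = (
            "abnormal_chest_xray", "rapid_deceleration", "distracting_injury",
            "chest_wall_tenderness", "sternal_tenderness",
            "thoracic_spine_tenderness", "scapular_tenderness",
        )

        def chest_ct_indicated(findings: dict) -> bool:
            """findings maps criterion name -> bool (present/absent);
            CT is indicated if any criterion is present."""
            return any(findings.get(criterion, False) for criterion in CHEST_CT_ALL)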

  7. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in medical decision-making, the direct use of a medical database without a prior analysis and preprocessing step is often counterproductive. Variable selection is the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the widely used stepwise approach adds the best variable in each cycle, generally producing an acceptable set of variables; nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best-subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
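
    Since the paper's own code is in R, here is a hedged Python sketch of the same idea: individuals are binary masks over candidate variables, fitness is the cross-validated accuracy of a logistic regression on the selected columns (the paper may use a different criterion, e.g. AIC), and the usual tournament selection, crossover and mutation operators evolve the population:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(42)
        X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                                   random_state=0)

        def fitness(mask):
            """Cross-validated accuracy of a logistic model on the masked columns."""
            if not mask.any():
                return 0.0
            model = LogisticRegression(max_iter=1000)
            return cross_val_score(model, X[:, mask], y, cv=5).mean()

        def genetic_select(n_gen=20, pop_size=24, p_mut=0.05):
            pop = rng.random((pop_size, X.shape[1])) < 0.5   # random binary masks
            for _ in range(n_gen):
                scores = np.array([fitness(ind) for ind in pop])
                # tournament selection of parents
                parents = pop[[max(rng.choice(pop_size, 3), key=lambda i: scores[i])
                               for _ in range(pop_size)]]
                # single-point crossover on consecutive pairs
                children = parents.copy()
                for k, cut in enumerate(rng.integers(1, X.shape[1], pop_size // 2)):
                    children[2 * k, cut:] = parents[2 * k + 1, cut:]
                    children[2 * k + 1, cut:] = parents[2 * k, cut:]
                # bit-flip mutation
                children ^= rng.random(children.shape) < p_mut
                pop = children
            scores = np.array([fitness(ind) for ind in pop])
            return pop[scores.argmax()]

        print("selected variables:", np.flatnonzero(genetic_select()))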

  8. A Novel Sensor Selection and Power Allocation Algorithm for Multiple-Target Tracking in an LPI Radar Network

    Directory of Open Access Journals (Sweden)

    Ji She

    2016-12-01

    Full Text Available Radar networks are proven to have numerous advantages over traditional monostatic and bistatic radar. With recent developments, radar networks have become an attractive platform due to their low probability of intercept (LPI) performance for target tracking. In this paper, a joint sensor selection and power allocation algorithm for multiple-target tracking in a radar network based on LPI is proposed. This algorithm minimizes the total transmitted power of the radar network subject to a predetermined mutual information (MI) threshold between the target impulse response and the reflected signal. The MI is required by the radar network system to estimate the target parameters, and it can be calculated predictively from the estimate of the target state. The optimization problem of sensor selection and power allocation, which contains two variables, is non-convex; it can be solved by separating the power allocation problem from the sensor selection problem. Specifically, the power allocation problem can be solved using the bisection method for each sensor selection scheme, and the sensor selection problem can then be solved by a lower-complexity algorithm based on the allocated powers. According to the simulation results, the proposed algorithm effectively reduces the total transmitted power of the radar network, which is conducive to improved LPI performance.
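
    The per-scheme power allocation step exploits the monotonicity of MI in transmit power, so bisection applies directly. A sketch with a hypothetical log-SNR MI model standing in for the radar MI expression of the paper:

        import math

        def min_power_for_mi(mi, threshold, p_max, tol=1e-6):
            """Bisection for the smallest transmit power meeting an MI
            threshold, assuming mi(p) is monotonically increasing in p."""
            lo, hi = 0.0, p_max
            if mi(hi) < threshold:
                return None                  # infeasible even at full power
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if mi(mid) >= threshold:
                    hi = mid
                else:
                    lo = mid
            return hi

        # hypothetical MI model: a Gaussian-channel-style log-SNR term
        mi_model = lambda p: 0.5 * math.log2(1.0 + 4.0 * p)
        print(min_power_for_mi(mi_model, threshold=2.0, p_max=100.0))  # -> 3.75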

  9. Whole-body CT. Spiral and multislice CT. 2. tot. rev. and enl. ed.; Ganzkoerper-Computertomographie. Spiral- und Multislice-CT

    Energy Technology Data Exchange (ETDEWEB)

    Prokop, M.; Galanski, M.; Schaefer-Prokop, C.; Molen, A.J. van der

    2007-07-01

    Spiral and multidetector techniques have improved the diagnostic possibilities of CT, so that image analysis and interpretation have become increasingly complex. This book represents the current state of the art in CT imaging, including the most recent technical scanner developments. The second edition comprises the current state of knowledge in CT imaging. There are new chapters on image processing, the application of contrast agents, and radiation dose. All organ-specific pathological findings are discussed in full. There are hints for the optimum use and interpretation of CT, including CT angiography, CT colonography, CT-IVPL, and 3D imaging. There is an introduction to cardiac CT, from calcium scoring and CTA of the coronary arteries to the assessment of cardiac morphology. There are detailed scan protocols with descriptions of parameter selection. Practical hints are given for better image quality and lower radiation exposure of patients, along with guidelines for patient preparation and complication management, and more than 1900 images in optimum quality. (orig.)

  10. A Semiautomatic Segmentation Algorithm for Extracting the Complete Structure of Acini from Synchrotron Micro-CT Images

    Directory of Open Access Journals (Sweden)

    Luosha Xiao

    2013-01-01

    Full Text Available The pulmonary acinus is the largest airway unit provided with alveoli, where blood/gas exchange takes place. Understanding the complete structure of the acinus is necessary to measure the pathway of gas exchange and to simulate various mechanical phenomena in the lungs. The usual manual segmentation of a complete acinus structure from experimentally obtained images is difficult and extremely time-consuming, which hampers statistical analysis. In this study, we develop a semiautomatic segmentation algorithm for extracting the complete structure of acini from synchrotron micro-CT images of the closed chest of mouse lungs. The algorithm uses a combination of conventional binary image processing techniques, exploiting the multiscale and hierarchical nature of lung structures. Specifically, larger structures are removed while smaller structures are isolated from the image by repeatedly applying erosion and dilation operators in order, adjusting the parameters with reference to previously obtained morphometric data. A cluster of isolated acini belonging to the same terminal bronchiole is obtained without floating voxels. The extracted acinar models agree with manually extracted models at above 98%. The run time is drastically shortened compared with manual methods. These findings suggest that our method may be useful for obtaining the samples used in the statistical analysis of acini.
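
    The erosion/dilation step can be sketched with standard morphology: structures thicker than a chosen radius survive an erosion and are rebuilt by a masked dilation; subtracting them isolates the finer airspaces. The radius and the final cluster selection below are assumptions for illustration:

        import numpy as np
        from scipy import ndimage

        def isolate_small_airspaces(binary_lung, large_radius=5):
            """Sketch of the erosion/dilation step: structures thicker than about
            large_radius voxels survive the erosion and are rebuilt by a masked
            dilation; removing them leaves the finer, acinus-scale structures.
            The radius would be tuned against morphometric data, as described."""
            ball = ndimage.generate_binary_structure(3, 1)
            eroded = ndimage.binary_erosion(binary_lung, ball,
                                            iterations=large_radius)
            large = ndimage.binary_dilation(eroded, ball, iterations=large_radius,
                                            mask=binary_lung)
            small = binary_lung & ~large
            # drop floating voxels by keeping the largest connected cluster
            labels, n = ndimage.label(small)
            if n == 0:
                return small
            sizes = ndimage.sum(small, labels, index=range(1, n + 1))
            return labels == (1 + int(np.argmax(sizes)))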

  11. Algorithm-enabled partial-angular-scan configurations for dual-energy CT.

    Science.gov (United States)

    Chen, Buxin; Zhang, Zheng; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan

    2018-05-01

    We seek to investigate an optimization-based one-step method for image reconstruction that explicitly compensates for the nonlinear spectral response (i.e., the beam-hardening effect) in dual-energy CT, to investigate the feasibility of the one-step method for enabling two dual-energy partial-angular-scan configurations, referred to as the short- and half-scan configurations, on standard CT scanners without additional hardware, and to investigate the potential of the short- and half-scan configurations for reducing imaging dose and scan time relative to a single-kVp-switch full-scan configuration in which two full rotations are made to collect dual-energy data. We use the one-step method to reconstruct images directly from dual-energy data by solving a nonconvex optimization program that specifies the images to be reconstructed in dual-energy CT. Dual-energy full-scan data are generated from numerical phantoms and collected from physical phantoms with the standard single-kVp-switch full-scan configuration, whereas dual-energy short- and half-scan data are extracted from the corresponding full-scan data. Besides visual inspection and profile-plot comparison, the reconstructed images are also analyzed in quantitative studies based upon tasks of linear-attenuation-coefficient and material-concentration estimation and of material differentiation. After a computer-simulation study verifying that the one-step method can reconstruct numerically accurate basis and monochromatic images of numerical phantoms, we reconstruct basis and monochromatic images using the one-step method from real data of physical phantoms collected with the full-, short-, and half-scan configurations. Subjective inspection based upon visualization and profile-plot comparison reveals that the monochromatic images, which are often used in practical applications, reconstructed from the full-, short-, and half-scan data are largely visually comparable except for some

  12. Skeletal scintigraphy and SPECT/CT in orthopedic imaging; Knochenszintigrafie und SPECT/CT bei orthopaedischen Fragestellungen

    Energy Technology Data Exchange (ETDEWEB)

    Klaeser, B.; Walter, M.; Krause, T. [Inselspital Bern (Switzerland). Universitaetsklinik fuer Nuklearmedizin

    2011-03-15

    Multi-modality imaging with SPECT/CT in orthopaedics combines the excellent sensitivity of scintigraphy with the morphological information of CT, which is key to the specific interpretation of findings in bone scans. The result is an imaging modality with the clear potential to prove valuable even in a competitive setting dominated by MRI, and to add significantly to diagnostic imaging in orthopaedics. SPECT/CT is of great value in the diagnostic evaluation after fractures, and, in contrast to MRI, it is well suited for imaging patients with osteosyntheses and metallic implants. In sports medicine, SPECT/CT allows sensitive and specific detection of osseous stress reactions before morphological changes become detectable by CT or MRI. In patients with osseous pain syndromes, actively evolving degenerative changes can be identified as a cause of pain and accurately localized. Further studies, particularly prospective diagnostic studies providing comparative data, are needed to strengthen the position of nuclear imaging in orthopaedics and sports medicine and to help implement SPECT/CT in diagnostic algorithms. (orig.)

  13. Evaluation of the use of automatic exposure control and automatic tube potential selection in low-dose cerebrospinal fluid shunt head CT

    Energy Technology Data Exchange (ETDEWEB)

    Wallace, Adam N.; Bagade, Swapnil; Chatterjee, Arindam; Hicks, Brandon; McKinstry, Robert C. [Barnes Jewish Hospital, Mallinckrodt Institute of Radiology, St. Louis, MO (United States); Washington University School of Medicine, St. Louis, MO (United States); Vyhmeister, Ross [Washington University School of Medicine, St. Louis, MO (United States); Ramirez-Giraldo, Juan Carlos [Siemens Healthcare, Malvern, PA (United States)

    2015-03-17

    Cerebrospinal fluid shunts are primarily used for the treatment of hydrocephalus. Shunt complications may necessitate multiple non-contrast head CT scans, resulting in potentially high levels of radiation dose starting at an early age. A new head CT protocol using automatic exposure control and automated tube potential selection has been implemented at our institution to reduce radiation exposure. The purpose of this study was to evaluate the reduction in radiation dose achieved by this protocol compared with a protocol with fixed parameters. A retrospective sample of 60 non-contrast head CT scans assessing for cerebrospinal fluid shunt malfunction was identified, 30 of which were performed with each protocol. The radiation doses of the two protocols were compared using the volume CT dose index and the dose-length product. The diagnostic acceptability and quality of each scan were evaluated by three independent readers. The new protocol lowered the average volume CT dose index from 15.2 to 9.2 mGy, representing a 39 % reduction (P < 0.01; 95 % CI 35-44 %), and lowered the dose-length product from 259.5 to 151.2 mGy·cm, representing a 42 % reduction (P < 0.01; 95 % CI 34-50 %). The new protocol produced diagnostically acceptable scans with image quality comparable to the fixed-parameter protocol. A pediatric shunt non-contrast head CT protocol using automatic exposure control and automated tube potential selection reduced patient radiation dose compared with a fixed-parameter protocol while producing diagnostic images of comparable quality. (orig.)

  14. Clinical applications of PET/CT

    International Nuclear Information System (INIS)

    Le Ngoc Ha

    2011-01-01

    The purpose of this article is to review the evolution of PET and PET/CT, focusing on technical aspects, PET radiopharmaceutical developments and current clinical applications. The newest technologic advances are reviewed, including improved crystal design, acquisition modes, and reconstruction algorithms. These advancements will continue to improve contrast, decrease noise, and increase resolution. A combined PET/CT system provides faster attenuation correction and useful anatomic correlation for the PET functional information. A number of new radiopharmaceuticals for PET imaging have been developed; however, FDG remains the principal PET radiotracer. The current clinical applications of PET and PET/CT are widespread and include oncology, cardiology and neurology. (author)

  15. Active contour based segmentation of resected livers in CT images

    Science.gov (United States)

    Oelmann, Simon; Oyarzun Laura, Cristina; Drechsler, Klaus; Wesarg, Stefan

    2015-03-01

    The majority of state-of-the-art segmentation algorithms are able to give proper results in healthy organs but not in pathological ones. However, many clinical applications require an accurate segmentation of pathological organs. The determination of target boundaries for radiotherapy and liver volumetry calculations are examples of this. Volumetry measurements are of special interest after tumor resection, for follow-up of liver regrowth. The segmentation of resected livers presents additional challenges that have not been addressed by state-of-the-art algorithms. This paper presents a snakes-based algorithm specially developed for the segmentation of resected livers. The algorithm is enhanced with a novel dynamic smoothing technique that allows the active contour to propagate with different speeds depending on the intensities visible in its neighborhood. The algorithm is evaluated on 6 clinical CT images as well as 18 artificial datasets generated from additional clinical CT images.
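
    For orientation, a plain snakes baseline is a few lines with scikit-image; the paper's contribution, the dynamic intensity-dependent smoothing, is not exposed by this implementation and would have to be added to the contour evolution itself. A hypothetical sketch:

        import numpy as np
        from skimage.filters import gaussian
        from skimage.segmentation import active_contour

        def segment_liver_slice(img, center, radius, n_points=200):
            """Plain snakes baseline with scikit-image. The paper's dynamic
            smoothing (contour speed modulated by local intensities) is not
            exposed here and would need a custom contour-evolution loop."""
            s = np.linspace(0.0, 2.0 * np.pi, n_points)
            init = np.column_stack([center[0] + radius * np.sin(s),
                                    center[1] + radius * np.cos(s)])
            smoothed = gaussian(img, sigma=3, preserve_range=True)
            return active_contour(smoothed, init,
                                  alpha=0.015, beta=10.0, gamma=0.001)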

  16. SU-F-J-23: Field-Of-View Expansion in Cone-Beam CT Reconstruction by Use of Prior Information

    Energy Technology Data Exchange (ETDEWEB)

    Haga, A; Magome, T; Nakano, M; Nakagawa, K [University of Tokyo Hospital, Tokyo (Japan); Kotoku, J [Teikyo University, Tokyo (Japan)

    2016-06-15

    Purpose: Cone-beam CT (CBCT) has become an integral part of online patient setup in image-guided radiation therapy (IGRT). In addition, the utility of CBCT for dose calculation has been actively investigated. However, the limited size of the field of view (FOV) and the resulting CBCT images, which lack the peripheral area of the patient body, compromise the reliability of dose calculation. In this study, we aim to develop an FOV-expanded CBCT in an IGRT system to allow dose calculation. Methods: Three lung cancer patients were selected for this study. We collected the cone-beam projection images in the CBCT-based IGRT system (X-ray volume imaging unit, ELEKTA), where the FOV size of the CBCT provided with these projections was 410 × 410 mm². Using these projections, a CBCT with a size of 728 × 728 mm² was reconstructed by an a posteriori estimation algorithm including prior image constrained compressed sensing (PICCS). The treatment planning CT was used as the prior image. To assess the effectiveness of the FOV expansion, a dose calculation was performed on the expanded CBCT image with a region-of-interest (ROI) density mapping method, and it was compared with that of the treatment planning CT as well as that of a CBCT reconstructed by the filtered back projection (FBP) algorithm. Results: The a posteriori estimation algorithm with PICCS clearly visualized the area outside the normal FOV, whereas the FBP algorithm yielded severe streak artifacts outside the normal FOV due to under-sampling. The dose calculation result using the expanded CBCT agreed very well with that using the treatment planning CT; the maximum dose difference was 1.3% for gross tumor volumes. Conclusion: With an a posteriori estimation algorithm, the FOV in CBCT can be expanded. The dose comparison results suggest that the use of expanded CBCTs is acceptable for dose calculation in adaptive radiation therapy. This study has been supported by KAKENHI (15K08691).
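
    For reference, the PICCS objective used in this kind of prior-constrained reconstruction is commonly written as follows (notation assumed, not quoted from the paper):

        \min_{x \ge 0} \; \alpha \, \mathrm{TV}(x - x_{\mathrm{prior}}) + (1 - \alpha) \, \mathrm{TV}(x)
        \quad \text{s.t.} \quad \lVert A x - b \rVert_2 \le \epsilon

    where x is the CBCT volume, x_prior the registered planning CT, A the cone-beam projection operator, b the measured log projections, and alpha balances fidelity to the prior against self-consistency of the reconstruction.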

  17. A temporal interpolation approach for dynamic reconstruction in perfusion CT

    International Nuclear Information System (INIS)

    Montes, Pau; Lauritsch, Guenter

    2007-01-01

    This article presents a dynamic CT reconstruction algorithm for objects with a time-dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a given reconstruction quality than the other algorithms discussed, or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally opens the possibility of using slowly rotating scanners for perfusion imaging purposes.
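
    The essence of the approach is that each ray is re-measured once per rotation, so its samples can be interpolated in time to synthesize a consistent sinogram at any reconstruction instant. A toy sketch with synthetic per-ray signals (the paper uses polynomial splines; SciPy's cubic splines stand in here):

        import numpy as np
        from scipy.interpolate import CubicSpline

        # Toy setup: each ray is re-measured once per rotation; interpolating
        # the per-ray samples in time gives a sinogram at any instant t.
        n_rotations, n_rays, rotation_time = 8, 100, 1.0
        times = np.arange(n_rotations) * rotation_time
        # synthetic time-varying attenuation signal per ray (stand-in for data)
        samples = np.sin(times[:, None] * 0.7 + np.linspace(0, np.pi, n_rays))

        def sinogram_at(t):
            """Spline-interpolated projection values at reconstruction time t."""
            return CubicSpline(times, samples, axis=0)(t)

        print(sinogram_at(3.4).shape)   # (100,) -> one interpolated projection set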

  18. An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids

    Energy Technology Data Exchange (ETDEWEB)

    Kuppannagari, Sanmukh R. [Univ. of Southern California, Los Angeles, CA (United States); Kannan, Rajgopal [Louisiana State Univ., Baton Rouge, LA (United States); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-12-07

    Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility’s expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection, but their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real-world data obtained from the University of Southern California’s SmartGrid and show that our algorithm achieves near-exact curtailment values with errors in the range of 10⁻⁷ to 10⁻⁵, which is within the range of numerical error. We compare our results against the state-of-the-art heuristic deployed in practice in the USC SmartGrid and show that, for the same set of available customer-strategy pairs, our algorithm performs 10³ to 10⁷ times better in terms of the curtailment errors incurred.
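
    The core of such a formulation is a binary pick variable per customer-strategy pair and a linearized absolute error. A hedged sketch with the open-source PuLP modeler and invented data (the paper's actual model also encodes strategy-specific constraints not shown here):

        import pulp

        # Hypothetical data: curtailment (kW) per (customer, strategy) pair
        curtail = {("c1", "hvac"): 40.0, ("c1", "lights"): 15.0,
                   ("c2", "hvac"): 55.0, ("c3", "lights"): 20.0}
        target = 70.0

        prob = pulp.LpProblem("dr_selection", pulp.LpMinimize)
        x = pulp.LpVariable.dicts("pick", curtail, cat="Binary")
        err = pulp.LpVariable("abs_error", lowBound=0)

        achieved = pulp.lpSum(curtail[k] * x[k] for k in curtail)
        # objective: minimize |achieved - target|, linearized via two bounds
        prob += err
        prob += achieved - target <= err
        prob += target - achieved <= err
        # at most one strategy per customer
        for c in {k[0] for k in curtail}:
            prob += pulp.lpSum(x[k] for k in curtail if k[0] == c) <= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([k for k in curtail if x[k].value() == 1], err.value())
        # -> picks c1 lights + c2 hvac (15 + 55 = 70), error 0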

  19. Computer-aided detection of early interstitial lung diseases using low-dose CT images

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sang Cheol; Kim, Soo Hyung [School of Electronics and Computer Engineering, Chonnam National University, Gwangju 500-757 (Korea, Republic of); Tan, Jun; Wang Xingwei; Lederman, Dror; Leader, Joseph K; Zheng Bin, E-mail: zhengb@upmc.edu [Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213 (United States)

    2011-02-21

    This study aims to develop a new computer-aided detection (CAD) scheme to detect early interstitial lung disease (ILD) using low-dose computed tomography (CT) examinations. The CAD scheme classifies each pixel depicted in the segmented lung areas as positive or negative for ILD using a mesh-grid-based region growth method and a multi-feature-based artificial neural network (ANN). A genetic algorithm was applied to select the optimal image features and ANN structure. In testing each CT examination, only pixels selected by the mesh-grid region growth method were analyzed and classified by the ANN, to improve computational efficiency; all unselected pixels were classified as negative for ILD. After classifying all pixels into the positive and negative groups, CAD computed a detection score based on the ratio of the number of positive pixels to all pixels in the segmented lung areas, which indicates the likelihood of the test case being positive for ILD. When applied to an independent testing dataset of 15 positive and 15 negative cases, the CAD scheme yielded an area under the receiver operating characteristic curve of AUC = 0.884 ± 0.064 and 80.0% sensitivity at 85.7% specificity. The results demonstrate the feasibility of applying the CAD scheme to automatically detect early ILD using low-dose CT examinations.

  20. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding comparable image quality as ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing

  1. Activity concentration measurements using a conjugate gradient (Siemens xSPECT) reconstruction algorithm in SPECT/CT.

    Science.gov (United States)

    Armstrong, Ian S; Hoffmann, Sandra A

    2016-11-01

    Quantitative single photon emission computed tomography (SPECT) shows potential in a number of clinical applications, and several vendors now provide software and hardware solutions that allow 'SUV-SPECT' to mirror metrics used in PET imaging. This brief technical report assesses the accuracy of activity concentration measurements using a new algorithm, 'xSPECT', from Siemens Healthcare. SPECT/CT data were acquired from a uniform cylinder with 5, 10, 15 and 20 s/projection and a NEMA image quality phantom with 25 s/projection. The NEMA phantom had hot spheres filled with an 8:1 activity concentration relative to the background compartment. Reconstructions were performed using parameters defined by the manufacturer presets available with the algorithm. The accuracy of activity concentration measurements was assessed. A dose calibrator-camera cross-calibration factor (CCF) was derived from the uniform phantom data. In uniform phantom images, a positive bias was observed, ranging from ∼6% in the lower-count images to ∼4% in the higher-count images. On the basis of the higher-count data, a CCF of 0.96 was derived. As expected, considerable negative bias was measured in the NEMA spheres using region mean values, whereas positive bias was measured in the four largest NEMA spheres. Nonmonotonically increasing recovery curves for the hot spheres suggested the presence of Gibbs edge enhancement from resolution modelling. Sufficiently accurate activity concentration measurements can easily be obtained from images reconstructed with the xSPECT algorithm without a CCF, although the use of a CCF is likely to improve accuracy further. A manual conversion of voxel values into SUV should be possible, provided that the patient weight, injected activity and time between injection and imaging are all known accurately.
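
    The closing remark about manual SUV conversion amounts to the standard body-weight SUV formula; below is a minimal sketch under stated assumptions (a 99mTc half-life of roughly six hours, unit tissue density, an optional CCF), with all names illustrative rather than taken from the report.

    ```python
    import math

    def suv_bw(conc_bq_per_ml, injected_mbq, weight_kg,
               minutes_post_injection, half_life_min=360.0, ccf=1.0):
        """Body-weight SUV from an activity-concentration voxel (Bq/mL)."""
        # Decay-correct the injected activity to scan time (~6 h for 99mTc).
        act_mbq = injected_mbq * math.exp(
            -math.log(2.0) * minutes_post_injection / half_life_min)
        # SUV = tissue concentration / (activity per gram of body weight),
        # assuming a tissue density of 1 g/mL.
        conc_mbq_per_ml = conc_bq_per_ml * ccf / 1e6
        return conc_mbq_per_ml / (act_mbq / (weight_kg * 1000.0))
    ```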

  2. Feature Selection of Network Intrusion Data using Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Iwan Syarif

    2016-12-01

    Full Text Available This paper describes the advantages of using Evolutionary Algorithms (EA) for feature selection on network intrusion datasets. Most current Network Intrusion Detection Systems (NIDS) are unable to detect intrusions in real time because of the high-dimensional data produced during daily operation. Extracting knowledge from huge data such as intrusion data requires new approaches. The more complex the datasets, the higher the computation time and the harder they are to interpret and analyze. This paper investigates the performance of feature selection algorithms on network intrusion data. We used Genetic Algorithms (GA) and Particle Swarm Optimization (PSO) as feature selection algorithms. When applied to network intrusion datasets, both GA and PSO significantly reduce the number of features. Our experiments show that GA successfully reduces the number of attributes from 41 to 15, while PSO reduces the number of attributes from 41 to 9. Using k-Nearest Neighbour (k-NN) as a classifier, the GA-reduced dataset, which consists of 37% of the original attributes, improves accuracy from 99.28% to 99.70%, and its execution time is 4.8 times faster than that of the original dataset. Using the same classifier, the PSO-reduced dataset, which consists of 22% of the original attributes, has the fastest execution time (7.2 times faster than that of the original dataset). However, its accuracy is slightly reduced by 0.02%, from 99.28% to 99.26%. Overall, both GA and PSO are good solutions for feature selection because they have shown very good performance in reducing the number of features significantly while maintaining, and sometimes improving, classification accuracy as well as reducing computation time.
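
    As a rough sketch of GA-based wrapper feature selection with a k-NN fitness function (not the paper's exact setup; the population size, rates, and use of scikit-learn are assumptions):

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)

    def fitness(mask, X, y):
        # Wrapper fitness: cross-validated k-NN accuracy on selected columns.
        if not mask.any():
            return 0.0
        return cross_val_score(KNeighborsClassifier(5), X[:, mask], y, cv=3).mean()

    def ga_select(X, y, pop_size=20, gens=30, p_mut=0.02):
        n = X.shape[1]
        pop = rng.random((pop_size, n)) < 0.5          # random bit masks
        for _ in range(gens):
            scores = np.array([fitness(m, X, y) for m in pop])
            nxt = []
            for _ in range(pop_size):
                # Binary tournament selection for each parent.
                i, j = rng.integers(pop_size, size=2)
                p1 = pop[i] if scores[i] >= scores[j] else pop[j]
                i, j = rng.integers(pop_size, size=2)
                p2 = pop[i] if scores[i] >= scores[j] else pop[j]
                # Uniform crossover, then bit-flip mutation.
                child = np.where(rng.random(n) < 0.5, p1, p2)
                child = child ^ (rng.random(n) < p_mut)
                nxt.append(child)
            pop = np.array(nxt)
        scores = np.array([fitness(m, X, y) for m in pop])
        return pop[scores.argmax()]                    # best feature mask
    ```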

  3. Image quality in thoracic 4D cone-beam CT: A sensitivity analysis of respiratory signal, binning method, reconstruction algorithm, and projection angular spacing

    OpenAIRE

    Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.

    2014-01-01

    Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An inv...

  4. Simulation of the radiography formation process from CT patient volume

    International Nuclear Information System (INIS)

    Bifulco, P.; Cesarelli, M.; Verso, E.; Roccasalva Firenze, M.; Sansone, M.; Bracale, M.

    1998-01-01

    The aim of this work is to develop an algorithm to simulate the radiographic image formation process using volumetric anatomical data of the patient, obtained from 3D diagnostic CT images. Many applications, including radiography-driven surgery, virtual reality in medicine, and radiologist teaching and training, may take advantage of such a technique. The designed algorithm simulates generic radiographic equipment, arbitrarily oriented with respect to the patient. The simulated radiography is obtained by considering a discrete number of X-ray paths departing from the focus, passing through the patient volume and reaching the radiographic plane. To evaluate a generic pixel of the simulated radiography, the cumulative absorption along the corresponding X-ray is computed. To estimate X-ray absorption at a generic point of the patient volume, 3D interpolation of CT data has been adopted. The proposed technique is quite similar to those employed in ray tracing. A computer-designed test volume has been used to assess the reliability of the radiography simulation algorithm as a measuring tool. The error analysis shows that the accuracy achieved by the radiographic simulation algorithm is largely confined within the sampling step of the CT volume. (authors)
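
    A bare-bones version of the described ray casting with trilinear interpolation might look as follows. The geometry is expressed in voxel coordinates, all names are illustrative, and the fixed per-ray sample count is a simplification of proper ray-volume clipping.

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def simulate_radiograph(mu_volume, source, pixel_centers, n_samples=256):
        """Line-integral radiograph from a CT-derived attenuation volume.

        mu_volume : 3-D array of linear attenuation coefficients.
        source : (3,) X-ray focus position, in voxel coordinates.
        pixel_centers : (H, W, 3) detector pixel positions, voxel coords.
        Each detector value is the cumulative attenuation along its ray,
        estimated by trilinear interpolation at n_samples points.
        """
        src = np.asarray(source, dtype=float)
        rays = pixel_centers - src                     # (H, W, 3)
        ts = np.linspace(0.0, 1.0, n_samples)          # parametric positions
        # Sample points for every ray: (H, W, n_samples, 3).
        pts = src + ts[None, None, :, None] * rays[:, :, None, :]
        coords = pts.reshape(-1, 3).T                  # (3, H*W*n_samples)
        mu = map_coordinates(mu_volume, coords, order=1, mode="constant")
        mu = mu.reshape(pixel_centers.shape[0], pixel_centers.shape[1],
                        n_samples)
        step = np.linalg.norm(rays, axis=-1) / (n_samples - 1)
        return mu.sum(axis=-1) * step                  # ~ integral of mu dl
    ```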

  5. Real-time interactive three-dimensional display of CT and MR imaging volume data

    International Nuclear Information System (INIS)

    Yla-Jaaski, J.; Kubler, O.; Kikinis, R.

    1987-01-01

    Real-time reconstruction of surfaces from CT and MR imaging volume data is demonstrated using a new algorithm and implementation in a parallel computer system. The display algorithm accepts noncubic 16-bit voxels directly as input. Operations such as interpolation, classification by thresholding, depth coding, simple lighting effects, and removal of parts of the volume by clipping planes are all supported on-line. An eight-processor implementation of the algorithm renders surfaces from typical CT data sets in real time to allow interactive rotation of the volume

  6. Cancer microarray data feature selection using multi-objective binary particle swarm optimization algorithm

    Science.gov (United States)

    Annavarapu, Chandra Sekhara Rao; Dara, Suresh; Banka, Haider

    2016-01-01

    Cancer investigations in microarray data play a major role in cancer analysis and treatment. Cancer microarray data consist of complex gene expression patterns of cancer. In this article, a Multi-Objective Binary Particle Swarm Optimization (MOBPSO) algorithm is proposed for analyzing cancer gene expression data. Due to its high dimensionality, a fast heuristic-based pre-processing technique is employed to remove some of the crude domain features from the initial feature set. Since these pre-processed and reduced features are still high dimensional, the proposed MOBPSO algorithm is used for finding further feature subsets. The objective functions are suitably modeled by optimizing two conflicting objectives, i.e., the cardinality of the feature subsets and the discriminative capability of the selected subsets. As these two objective functions are conflicting in nature, they are well suited to multi-objective modeling. The experiments are carried out on benchmark gene expression datasets, i.e., Colon, Lymphoma and Leukaemia, available in the literature. The selected feature subsets are evaluated by their classification accuracy, validated using 10-fold cross-validation. A detailed comparative study is also made to show the improvement or competitiveness of the proposed algorithm. PMID:27822174

  7. Partial volume and aliasing artefacts in helical cone-beam CT

    International Nuclear Information System (INIS)

    Zou Yu; Sidky, Emil Y; Pan, Xiaochuan

    2004-01-01

    A generalization of the quasi-exact algorithms of Kudo et al (2000 IEEE Trans. Med. Imaging 19 902-21) is developed that allows for data acquisition in a 'practical' frame for clinical diagnostic helical, cone-beam computed tomography (CT). The algorithm is investigated using data that model nonlinear partial volume averaging. This investigation leads to an understanding of aliasing artefacts in helical, cone-beam CT image reconstruction. An ad hoc scheme is proposed to mitigate artefacts due to nonlinear partial volume averaging and aliasing.

  8. Selection and Penalty Strategies for Genetic Algorithms Designed to Solve Spatial Forest Planning Problems

    International Nuclear Information System (INIS)

    Thompson, M.P.; Sessions, J.; Hamann, J.D.

    2009-01-01

    Genetic algorithms (GAs) have demonstrated success in solving spatial forest planning problems. We present an adaptive GA that incorporates population-level statistics to dynamically update penalty functions, a process analogous to strategic oscillation from the tabu search literature. We also explore the performance of various selection strategies. The GA identified feasible solutions within 96%, 98%, and 93% of a nonspatial relaxed upper bound calculated for landscapes of 100, 500, and 1000 units, respectively. The problem solved includes forest structure constraints limiting harvest opening sizes and requiring minimally sized patches of mature forest. Results suggest that the dynamic penalty strategy is superior to the more standard static penalty implementation. Results also suggest that tournament selection can be superior to the more standard implementation of proportional selection for smaller problems, but becomes susceptible to premature convergence as problem size increases. It is therefore important to balance selection pressure with appropriate disruption. We conclude that integrating intelligent search strategies into the context of genetic algorithms can yield improvements and should be investigated for future use in spatial planning with ecological goals.
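
    The two ingredients highlighted above, tournament selection and a dynamically updated penalty, can be sketched as follows; the oscillation rule and its constants are assumptions rather than the paper's exact update.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def tournament_select(population, fitness, k=2):
        """Return the best of k randomly drawn individuals; a small k keeps
        selection pressure moderate and delays premature convergence."""
        idx = rng.integers(len(population), size=k)
        return population[idx[np.argmax(fitness[idx])]]

    def update_penalty(weight, feasible_fraction, target=0.5, step=1.5):
        """Strategic-oscillation-style update: if too few individuals satisfy
        the spatial constraints, penalise violations more heavily; if most
        are feasible, relax the penalty so the search can explore."""
        return weight * step if feasible_fraction < target else weight / step
    ```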

  9. NUFFT-Based Iterative Image Reconstruction via Alternating Direction Total Variation Minimization for Sparse-View CT

    Directory of Open Access Journals (Sweden)

    Bin Yan

    2015-01-01

    Full Text Available Sparse-view imaging is a promising scanning method which can reduce the radiation dose in X-ray computed tomography (CT). Reconstruction algorithms for sparse-view imaging systems are of significant importance. Spatial-domain iterative algorithms for CT image reconstruction suffer from low efficiency and high computational requirements. A novel Fourier-based iterative reconstruction technique that utilizes the nonuniform fast Fourier transform (NUFFT) is presented in this study, along with advanced total variation (TV) regularization, for sparse-view CT. Combined with the alternating direction method, the proposed approach shows excellent efficiency and a rapid convergence property. Numerical simulations and real data experiments are performed on a parallel-beam CT. Experimental results validate that the proposed method has higher computational efficiency and better reconstruction quality than conventional algorithms, such as the simultaneous algebraic reconstruction technique with TV and the alternating direction total variation minimization approach, within the same time duration. The proposed method appears to have extensive applications in X-ray CT imaging.

  10. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
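
    For intuition, a toy nested-sampling loop is sketched below. It draws replacement points by naive rejection sampling from the prior, whereas practical implementations use far more efficient constrained moves; the shrinkage bookkeeping, however, is the standard one.

    ```python
    import numpy as np
    from scipy.special import logsumexp

    rng = np.random.default_rng(2)

    def nested_sampling(log_like, sample_prior, n_live=100, n_iter=2000):
        """Estimate log Z (Bayesian evidence) for model selection."""
        live = [sample_prior() for _ in range(n_live)]
        log_l = np.array([log_like(p) for p in live])
        log_z = -np.inf
        for i in range(n_iter):
            worst = int(np.argmin(log_l))
            # Prior-volume shell width, with X_i = exp(-i / n_live).
            log_w = -i / n_live + np.log1p(-np.exp(-1.0 / n_live))
            log_z = np.logaddexp(log_z, log_w + log_l[worst])
            # Replace the worst point with a prior draw of higher likelihood.
            while True:
                cand = sample_prior()
                cand_l = log_like(cand)
                if cand_l > log_l[worst]:
                    break
            live[worst], log_l[worst] = cand, cand_l
        # Remaining live points cover the final volume exp(-n_iter/n_live).
        log_z = np.logaddexp(
            log_z, logsumexp(log_l) - np.log(n_live) - n_iter / n_live)
        return log_z
    ```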

  11. Differences between head CT and MRI for selecting patients for intravenous rt-PA during hyperacute brain infarction. Comparative study of intracranial bleeding complications and prognosis

    International Nuclear Information System (INIS)

    Deguchi, Ichiro; Takeda, Hidetaka; Furuya, Daisuke

    2010-01-01

    The objective of this study was to investigate the differences in usefulness between head CT and MRI for selecting patients for intravenous injection of recombinant tissue plasminogen activator (rt-PA) during hyperacute brain infarction. Of a total of 1280 brain infarction patients who were admitted from October 2005 to March 2009, 45 patients (33 men and 12 women, average age 69.2±11.6 years) received intravenous rt-PA. Of these, 16 patients in whom only head CT was performed (593 inpatients from October 2005 to March 2007; CT standard group, 11 men and 5 women, average age 67.4±15.4 years) and 29 patients in whom head CT and MRI were performed (687 inpatients from April 2007 to March 2009; MRI standard group, 21 men and 7 women, average age 70.1±9.0 years) were studied. The median National Institutes of Health Stroke Scale (NIHSS) scores immediately before intravenous rt-PA were 19 for the CT standard group and 11 for the MRI standard group; disease severity was lower in the MRI standard group. Three months later, the modified Rankin Scale (mRS) scores of the MRI standard group (0-1: 31%, 2-3: 38%, 4-5: 24%, and 6: 12%) were better than those of the CT standard group (0-1: 25%, 2-3: 25%, 4-5: 38%, and 6: 12%). The frequency of symptomatic intracranial hemorrhage was lower in the MRI standard group (6.9%) than in the CT standard group (18.8%). However, there was no statistically significant difference in prognosis or incidence of intracranial hemorrhage between the 2 groups, owing to the small number of cases. When selecting patients for intravenous rt-PA, disease severity was lower, prognosis at three months was better, and the frequency of symptomatic intracranial hemorrhage was lower among patients selected based on MRI standards than among those selected based on CT standards. (author)

  12. Effect of Selection of Design Parameters on the Optimization of a Horizontal Axis Wind Turbine via Genetic Algorithm

    International Nuclear Information System (INIS)

    Alpman, Emre

    2014-01-01

    The effect of selecting the twist angle and chord length distributions on wind turbine blade design was investigated by performing aerodynamic optimization of a two-bladed, stall-regulated horizontal axis wind turbine. Twist angle and chord length distributions were defined using Bézier curves with 3, 5, 7 and 9 control points uniformly distributed along the span. Optimizations performed using a micro-genetic algorithm with populations of 5, 10, 15 and 20 individuals showed that the number of control points clearly affected the outcome of the process; however, the effects differed for different population sizes. The results also showed the superiority of the micro-genetic algorithm over a standard genetic algorithm for the selected population sizes. Optimizations were also performed using a macroevolutionary algorithm, and the resulting best blade design was compared with that yielded by the micro-genetic algorithm.

  13. Metal artifact reduction algorithm based on model images and spatial information

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jay [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Shih, Cheng-Ting [Department of Biomedical Engineering and Environmental Sciences, National Tsing-Hua University, Hsinchu, Taiwan (China); Chang, Shu-Jun [Health Physics Division, Institute of Nuclear Energy Research, Taoyuan, Taiwan (China); Huang, Tzung-Chi [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan (China); Sun, Jing-Yi [Institute of Radiological Science, Central Taiwan University of Science and Technology, Taichung, Taiwan (China); Wu, Tung-Hsin, E-mail: tung@ym.edu.tw [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, No.155, Sec. 2, Linong Street, Taipei 112, Taiwan (China)

    2011-10-01

    Computed tomography (CT) has become one of the most favorable choices for diagnosis of trauma. However, high-density metal implants can induce metal artifacts in CT images, compromising image quality. In this study, we proposed a model-based metal artifact reduction (MAR) algorithm. First, we built a model image using the k-means clustering technique with spatial information and calculated the difference between the original image and the model image. Then, the projection data of these two images were combined using an exponential weighting function. Finally, the corrected image was reconstructed using the filtered back-projection algorithm. Two metal-artifact-contaminated images were studied. For the cylindrical water phantom image, the metal artifact was effectively removed. The mean CT number of water was improved from -28.95±97.97 to -4.76±4.28. For the clinical pelvic CT image, the dark band and the metal line were removed, and the continuity and uniformity of the soft tissue were recovered as well. These results indicate that the proposed MAR algorithm is useful for reducing metal artifacts and could improve the diagnostic value of metal-artifact-contaminated CT images.
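
    The abstract does not give the exact exponential weighting function, so the sketch below shows only one plausible form: projections of the k-means model image replace the measured data near the metal trace and blend back smoothly away from it. The corrected image would then be the filtered back-projection of the blended sinogram.

    ```python
    import numpy as np

    def blend_projections(p_orig, p_model, metal_trace, alpha=5.0):
        """Combine original and model-image projections with an exponential
        weight.

        p_orig, p_model : sinograms of the original and k-means model images.
        metal_trace : distance (in sinogram bins) of each sample from the
            forward-projected metal; 0 on the trace itself.
        """
        w = np.exp(-metal_trace / alpha)   # 1 on the trace, -> 0 away from it
        return w * p_model + (1.0 - w) * p_orig
    ```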

  14. The combination of a reduction in contrast agent dose with low tube voltage and an adaptive statistical iterative reconstruction algorithm in CT enterography: Effects on image quality and radiation dose.

    Science.gov (United States)

    Feng, Cui; Zhu, Di; Zou, Xianlun; Li, Anqin; Hu, Xuemei; Li, Zhen; Hu, Daoyu

    2018-03-01

    To investigate the subjective and quantitative image quality and radiation exposure of CT enterography (CTE) performed at low tube voltage and low concentration of contrast agent with an adaptive statistical iterative reconstruction (ASIR) algorithm, compared with conventional CTE. One hundred thirty-seven patients with suspected or proven gastrointestinal diseases underwent contrast-enhanced CTE on a multidetector computed tomography (MDCT) scanner. All cases were assigned to 2 groups. Group A (n = 79) underwent CTE with low tube voltage selected according to patient body mass index (BMI) and a low concentration of contrast agent (270 mg I/mL); the images were reconstructed with both the standard filtered back projection (FBP) algorithm and a 50% ASIR algorithm. Group B (n = 58) underwent conventional CTE with 120 kVp and 350 mg I/mL contrast agent; the images were reconstructed with the FBP algorithm. The computed tomography dose index volume (CTDIvol), dose length product (DLP), effective dose (ED), and total iodine dosage were calculated and compared. The CT values, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) of the normal bowel wall, gastrointestinal lesions, and mesenteric vessels were assessed and compared. The subjective image quality was assessed independently and blindly by 2 radiologists using a 5-point Likert scale. CTDIvol was significantly lower in group A than in group B (8.64 ± 2.72 vs 11.55 ± 3.95, P < .05), and all image quality scores were greater than or equal to 3 (moderate). The 50% ASIR images of group A provided lower image noise but similar or higher quantitative image quality compared with the FBP images of group B. Compared with the conventional protocol, CTE performed at low tube voltage and low concentration of contrast agent with a 50% ASIR algorithm produces diagnostically acceptable image quality with a mean ED of 6.34 mSv and a total iodine dose reduction of 26.1%.

  15. PET/CT and radiotherapy

    International Nuclear Information System (INIS)

    Messa, C.; CNR, Milano; S. Gerardo Hospital, Monza; Di Muzio, N.; Picchio, M.; Bettinardi, V.; Gilardi, M.C.; CNR, Milano; San Raffaele Scientific Institute, Milano; Fazio, F.; CNR, Milano; San Raffaele Scientific Institute, Milano; San Raffaele Scientific Institute, Milano

    2006-01-01

    This article reviews the state of the art of PET/CT applications in radiotherapy, specifically their use in disease staging, patient selection, treatment planning and treatment evaluation. Diseases for which radiotherapy with radical intent is indicated will be considered, as well as those in which PET/CT may actually change the course of disease. The methodological and technological aspects of PET/CT in radiotherapy are discussed, focusing on the problem of target volume definition with CT and PET functional imaging and the problem of tumor motion with respect to imaging and dose delivery.

  16. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    Energy Technology Data Exchange (ETDEWEB)

    Ahunbay, E; Li, X [Medical College of Wisconsin, Milwaukee, WI (United States); Moreau, M [Elekta, Inc, Verona, WI (United States)

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired with an in-room CT during daily IGRT for representative prostate cancer cases, along with their planning CTs. The algorithm allows for a restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with delivery times similar to those of the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  17. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    International Nuclear Information System (INIS)

    Ahunbay, E; Li, X; Moreau, M

    2014-01-01

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filtered (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired with an in-room CT during daily IGRT for representative prostate cancer cases, along with their planning CTs. The algorithm allows for a restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with delivery times similar to those of the original plans. The execution of the SAM algorithm took < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.

  18. Planar scintigraphy with 123I/99mTc-sestamibi, 99mTc-sestamibi SPECT/CT, 11C-methionine PET/CT, or selective venous sampling before reoperation of primary hyperparathyroidism?

    Science.gov (United States)

    Schalin-Jäntti, Camilla; Ryhänen, Eeva; Heiskanen, Ilkka; Seppänen, Marko; Arola, Johanna; Schildt, Jukka; Väisänen, Mika; Nelimarkka, Lassi; Lisinen, Irina; Aalto, Ville; Nuutila, Pirjo; Välimäki, Matti J

    2013-05-01

    All patients with primary hyperparathyroidism should undergo localization studies before reoperation, but it is not known which method is most accurate. The purpose of this prospective study was to compare the performance of planar scintigraphy with (123)I/(99m)Tc-sestamibi, (99m)Tc-sestamibi SPECT (SPECT/CT), (11)C-methionine PET/CT, and selective venous sampling (SVS) in persistent primary hyperparathyroidism. Twenty-one patients referred for reoperation of persistent hyperparathyroidism were included and investigated with (123)I/(99m)Tc-sestamibi, SPECT/CT (n = 19), (11)C-methionine PET/CT, and SVS (n = 18) before reoperation. All patients had been operated on 1-2 times previously because of hyperparathyroidism. The results of the localization studies were compared with operative findings, histology, and biochemical cure. Eighteen (86%) of 21 patients were biochemically cured. Nineteen parathyroid glands (9 adenomas, 1 atypical adenoma, and 9 hyperplastic glands) were removed from 17 patients, and 1 patient who was biochemically cured had an unclear histology result. The accuracy for localizing a pathologic parathyroid gland to the correct side of the neck was 59% (95% confidence interval [CI], 36%-79%) for (123)I/(99m)Tc-sestamibi, 19% (95% CI, 5%-42%) for SPECT/CT, 65% (95% CI, 43%-84%) for (11)C-methionine PET/CT, and 40% (95% CI, 19%-65%) for SVS. Planar (123)I/(99m)Tc-sestamibi scintigraphy performs well in persistent primary hyperparathyroidism and is recommended as first-line imaging before reoperation. (11)C-methionine PET/CT provides valuable additional information if (123)I/(99m)Tc-sestamibi scan results remain negative. (99m)Tc-sestamibi SPECT/CT and SVS provide no additional information compared with the combined results of (123)I/(99m)Tc-sestamibi and (11)C-methionine PET/CT imaging.

  19. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences

    Energy Technology Data Exchange (ETDEWEB)

    Khalvati, Farzad, E-mail: farzad.khalvati@uwaterloo.ca; Tizhoosh, Hamid R. [Department of Systems Design Engineering, University of Waterloo, Waterloo, Ontario N2L 3G1 (Canada); Salmanpour, Aryan; Rahnamayan, Shahryar [Department of Engineering and Applied Science, University of Ontario Institute of Technology, Oshawa, Ontario L1H 7K4 (Canada); Rodrigues, George [Department of Radiation Oncology, London Regional Cancer Program, London, Ontario N6C 2R6, Canada and Department of Epidemiology/Biostatistics, University of Western Ontario, London, Ontario N6A 3K7 (Canada)

    2013-12-15

    Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in the diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.

  20. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences

    International Nuclear Information System (INIS)

    Khalvati, Farzad; Tizhoosh, Hamid R.; Salmanpour, Aryan; Rahnamayan, Shahryar; Rodrigues, George

    2013-01-01

    Purpose: Accurate segmentation and volume estimation of the prostate gland in magnetic resonance (MR) and computed tomography (CT) images are necessary steps in the diagnosis, treatment, and monitoring of prostate cancer. This paper presents an algorithm for prostate gland volume estimation based on the semiautomated segmentation of individual slices in T2-weighted MR and CT image sequences. Methods: The proposed Inter-Slice Bidirectional Registration-based Segmentation (iBRS) algorithm relies on interslice image registration of volume data to segment the prostate gland without the use of an anatomical atlas. It requires the user to mark only three slices in a given volume dataset, i.e., the first, middle, and last slices. Next, the proposed algorithm uses a registration algorithm to autosegment the remaining slices. We conducted comprehensive experiments to measure the performance of the proposed algorithm using three registration methods (i.e., rigid, affine, and nonrigid techniques). Results: The results with the proposed technique were compared with manual marking using prostate MR and CT images from 117 patients. Manual marking was performed by an expert user for all 117 patients. The median accuracies for individual slices measured using the Dice similarity coefficient (DSC) were 92% and 91% for MR and CT images, respectively. The iBRS algorithm was also evaluated regarding user variability, which confirmed that the algorithm was robust to interuser variability when marking the prostate gland. Conclusions: The proposed algorithm exploits the interslice data redundancy of the images in a volume dataset of MR and CT images and eliminates the need for an atlas, minimizing the computational cost while producing highly accurate results which are robust to interuser variability.

  1. Performance improvement of multi-class detection using greedy algorithm for Viola-Jones cascade selection

    Science.gov (United States)

    Tereshin, Alexander A.; Usilin, Sergey A.; Arlazarov, Vladimir V.

    2018-04-01

    This paper studies the problem of multi-class object detection in a video stream with Viola-Jones cascades. An adaptive algorithm for selecting a Viola-Jones cascade, based on a greedy choice strategy for the N-armed bandit problem, is proposed. The efficiency of the algorithm is shown on the problem of detection and recognition of bank card logos in a video stream. The proposed algorithm can be effectively used in document localization and identification, recognition of road scene elements, localization and tracking of lengthy objects, and other problems of rigid object detection in heterogeneous data flows. The computational efficiency of the algorithm makes it possible to use it both on personal computers and on mobile devices based on processors with low power consumption.
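
    A minimal epsilon-greedy bandit over cascades, in the spirit of the described greedy choice strategy, is sketched below; the reward definition and the epsilon value are assumptions, since the abstract does not specify them.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    class GreedyCascadeSelector:
        """Epsilon-greedy choice among per-class Viola-Jones cascades,
        treating each cascade as an arm of an N-armed bandit."""

        def __init__(self, n_cascades, eps=0.1):
            self.counts = np.zeros(n_cascades)
            self.values = np.zeros(n_cascades)   # running mean reward
            self.eps = eps

        def pick(self):
            if rng.random() < self.eps:          # explore
                return int(rng.integers(len(self.counts)))
            return int(np.argmax(self.values))   # exploit

        def update(self, arm, reward):
            # reward could be 1 if the cascade fired on this frame, else 0.
            self.counts[arm] += 1
            self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
    ```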

  2. Computer-aided pulmonary nodule detection. Performance of two CAD systems at different CT dose levels

    International Nuclear Information System (INIS)

    Hein, Patrick Alexander; Rogalla, P.; Klessen, C.; Lembcke, A.; Romano, V.C.

    2009-01-01

    Purpose: To evaluate the impact of dose reduction on the performance of computer-aided lung nodule detection (CAD) systems of two manufacturers by comparing respective CAD results on ultra-low-dose computed tomography (ULD-CT) and standard dose CT (SD-CT). Materials and Methods: Multi-slice computed tomography (MSCT) data sets of 26 patients (13 male and 13 female, aged 31-74 years) were retrospectively selected for CAD analysis. The indication for CT examination was staging of a known primary malignancy or suspected pulmonary malignancy. CT images were consecutively acquired at 5 mAs (ULD-CT) and 75 mAs (SD-CT) with 120 kV tube voltage (1 mm slice thickness). The standard of reference was determined by three experienced readers in consensus. CAD reading algorithms (pre-commercial CAD system, Philips, Netherlands: CAD-1; LungCARE, Siemens, Germany: CAD-2) were applied to the CT data sets. Results: Consensus reading identified 253 nodules on SD-CT and ULD-CT. Nodules ranged in diameter between 2 and 41 mm (mean diameter 4.8 mm). Detection rates were 72% and 62% (CAD-1 vs. CAD-2) for SD-CT and 73% and 56% for ULD-CT. Median false-positive rates per patient were 6 and 5 (CAD-1 vs. CAD-2) for SD-CT and 8 and 3 for ULD-CT. After separate statistical analysis of nodules with diameters of 5 mm and greater, the detection rates increased to 83% and 61% for SD-CT and to 89% and 67% for ULD-CT (CAD-1 vs. CAD-2). For both CAD systems there were no significant differences between the detection rates for standard and ultra-low-dose data sets (p>0.05). Conclusion: Dose reduction of the underlying CT scan did not significantly influence the nodule detection performance of the tested CAD systems. (orig.)

  3. Hitting times of local and global optima in genetic algorithms with very high selection pressure

    Directory of Open Access Journals (Sweden)

    Eremeev Anton V.

    2017-01-01

    Full Text Available The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant which is less than one.

  4. Virtual Non-Contrast CT Using Dual-Energy Spectral CT: Feasibility of Coronary Artery Calcium Scoring.

    Science.gov (United States)

    Song, Inyoung; Yi, Jeong Geun; Park, Jeong Hee; Kim, Sung Mok; Lee, Kyung Soo; Chung, Myung Jin

    2016-01-01

    To evaluate the feasibility of coronary artery calcium scoring based on three virtual noncontrast-enhanced (VNC) images derived from single-source spectral dual-energy CT (DECT), as compared with true noncontrast-enhanced (TNC) images. This prospective study was conducted with the approval of our Institutional Review Board. Ninety-seven patients underwent noncontrast CT followed by contrast-enhanced chest CT using single-source spectral DECT. Iodine-eliminated VNC images were reconstructed using two kinds of two-material decomposition algorithms (material density iodine-water pair [MDW], material density iodine-calcium pair [MDC]) and a material suppressed algorithm (material suppressed iodine [MSI]). Two readers independently quantified calcium on VNC and TNC images. The Spearman correlation coefficient test and Bland-Altman method were used for statistical analyses. Coronary artery calcium scores from all three VNC images showed excellent correlation with those from the TNC images (Spearman's correlation coefficient [ρ] = 0.94, 0.88, and 0.89 for MDW, MDC, and MSI, respectively). Calcium volumes from the VNC images also correlated well with those from the TNC images (ρ = 0.92, 0.87, and 0.91 for MDW, MDC, and MSI, respectively). Among the three VNC images, coronary calcium from MDW correlated best with that from TNC. The coronary artery calcium scores and volumes were significantly lower from the VNC images than from the TNC images. Generating VNC images from contrast-enhanced CT using dual-energy material decomposition/suppression is feasible for coronary calcium scoring, although the absolute value from VNC tends to be smaller than that from TNC.

  5. Evaluation of the use of automatic exposure control and automatic tube potential selection in low-dose cerebrospinal fluid shunt head CT.

    Science.gov (United States)

    Wallace, Adam N; Vyhmeister, Ross; Bagade, Swapnil; Chatterjee, Arindam; Hicks, Brandon; Ramirez-Giraldo, Juan Carlos; McKinstry, Robert C

    2015-06-01

    Cerebrospinal fluid shunts are primarily used for the treatment of hydrocephalus. Shunt complications may necessitate multiple non-contrast head CT scans, resulting in potentially high levels of radiation dose starting at an early age. A new head CT protocol using automatic exposure control and automated tube potential selection has been implemented at our institution to reduce radiation exposure. The purpose of this study was to evaluate the reduction in radiation dose achieved by this protocol compared with a protocol with fixed parameters. A retrospective sample of 60 non-contrast head CT scans assessing for cerebrospinal fluid shunt malfunction was identified, 30 of which were performed with each protocol. The radiation doses of the two protocols were compared using the volume CT dose index and dose length product. The diagnostic acceptability and quality of each scan were evaluated by three independent readers. The new protocol lowered the average volume CT dose index from 15.2 to 9.2 mGy, representing a 39% reduction (P < 0.01; 95% CI 35-44%), and lowered the dose length product from 259.5 to 151.2 mGy·cm, representing a 42% reduction (P < 0.01; 95% CI 34-50%). The new protocol produced diagnostically acceptable scans with image quality comparable to the fixed parameter protocol. A pediatric shunt non-contrast head CT protocol using automatic exposure control and automated tube potential selection reduced patient radiation dose compared with a fixed parameter protocol while producing diagnostic images of comparable quality.

  6. CTC-ask: a new algorithm for conversion of CT numbers to tissue parameters for Monte Carlo dose calculations applying DICOM RS knowledge

    International Nuclear Information System (INIS)

    Ottosson, Rickard O; Behrens, Claus F

    2011-01-01

    One of the building blocks in Monte Carlo (MC) treatment planning is to convert patient CT data to MC-compatible phantoms, consisting of density and media matrices. The resulting dose distribution is highly influenced by the accuracy of the conversion. Two major contributing factors are precise conversion of CT number to density and proper differentiation between air and lung. Existing tools do not address this issue specifically. Moreover, their density conversion may depend on the number of media used. Differentiation between air and lung is an important task in MC treatment planning, and misassignment may lead to local dose errors on the order of 10%. A novel algorithm, CTC-ask, is presented in this study. It enables locally confined constraints for the media assignment and is independent of the number of media used for the conversion of CT number to density. MC-compatible phantoms were generated for two clinical cases using a CT-conversion scheme implemented in both CTC-ask and the DICOM-RT toolbox. Full MC dose calculation was subsequently conducted and the resulting dose distributions were compared. The DICOM-RT toolbox inaccurately assigned lung in 9.9% and 12.2% of the voxels located outside of the lungs for the two cases studied, respectively. This was completely avoided by CTC-ask. CTC-ask is thus able to reduce anatomically irrational media assignment. The CTC-ask source code can be made available upon request to the authors. (note)
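
    A highly simplified sketch of the two conversion steps follows: a CT-number-to-density ramp and a media assignment that uses an anatomical lung mask (e.g., derived from DICOM-RT structures) to separate air from lung, so that low-density voxels outside the lungs are never labelled "lung". All thresholds and ramp slopes here are placeholders for a scanner-specific calibration, not the values used by CTC-ask.

    ```python
    import numpy as np

    def hu_to_density(hu):
        """Piecewise-linear CT-number-to-density ramp (g/cm^3); the exact
        calibration is scanner-specific and comes from a phantom scan."""
        hu = np.asarray(hu, dtype=float)
        return np.where(hu < 0, 1.0 + hu / 1000.0,
                        1.0 + hu / 1950.0).clip(min=0.001)

    def assign_media(hu, lung_mask):
        """Assign media indices with a locally confined lung constraint."""
        media = np.full(hu.shape, 2, dtype=np.int8)   # 2 = soft tissue
        media[hu >= 200] = 3                          # 3 = bone
        media[(hu < -700) & lung_mask] = 1            # 1 = lung
        media[(hu < -700) & ~lung_mask] = 0           # 0 = air
        return media
    ```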

  7. A Novel Approach of Cardiac Segmentation In CT Image Based On Spline Interpolation

    International Nuclear Information System (INIS)

    Gao Yuan; Ma Pengcheng

    2011-01-01

    Organ segmentation in CT images is the basis of organ model reconstruction; precisely detecting and extracting the organ boundary is therefore key for reconstruction. In CT images the heart is often adjacent to the surrounding tissues, and the gray-level gradient between them is slight, which makes classical segmentation methods difficult to apply. We propose a novel algorithm for cardiac segmentation in CT images in this paper, which combines gray-gradient methods with B-spline interpolation. This algorithm can reliably detect the boundaries of the heart, and at the same time it remains fast because the processing is automatic.
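
    The spline step can be illustrated with SciPy's parametric B-spline routines: fit a closed smoothing spline through the ordered edge candidates produced by the gray-gradient stage and resample it densely. The smoothing factor and point count are assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import splev, splprep

    def smooth_cardiac_boundary(points, n_out=400, smooth=5.0):
        """Fit a closed B-spline through candidate edge points and
        resample it densely.

        points : (N, 2) ordered boundary candidates (x, y)."""
        x, y = points[:, 0], points[:, 1]
        # per=True closes the curve; s controls the smoothing trade-off.
        tck, _ = splprep([x, y], s=smooth, per=True)
        u = np.linspace(0.0, 1.0, n_out)
        xs, ys = splev(u, tck)
        return np.column_stack([xs, ys])
    ```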

  8. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  9. Gene selection using hybrid binary black hole algorithm and modified binary particle swarm optimization.

    Science.gov (United States)

    Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin

    2018-04-14

    In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) model that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate the exploration and exploitation of the BPSO (4-2) algorithm, further improving its performance. The model is combined with the Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA), k-nearest neighbor and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarrays. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets.

  10. Pediatric CT angiography

    International Nuclear Information System (INIS)

    Siegel, M.J.

    2005-01-01

    Advances in CT technology are having a profound impact on imaging children and have made CT angiography possible even in neonates. Even with the tiny anatomy of neonates, small volumes of contrast material, and small venous access catheters, successful CT angiography can be performed with attention to detail. Meticulous attention to patient preparation, the proper selection of technical factors, and optimal delivery of contrast material are crucial. Data post-processing and the creation of 3-D reconstructions are also essential in establishing a correct diagnosis. The applications of CT angiography are different in children than in adults; most applications in children involve assessment of congenital and postoperative vascular and cardiac diseases. The use of CT angiography offers the opportunity to eliminate the long periods of sedation associated with MR and to reduce the radiation exposure associated with conventional angiography. Generally, the benefits of CT angiography in children outweigh the risk, namely that of radiation exposure. However, care must still be taken to minimize the radiation exposure. (orig.)

  11. Degree of contribution (DoC) feature selection algorithm for structural brain MRI volumetric features in depression detection.

    Science.gov (United States)

    Kipli, Kuryati; Kouzani, Abbas Z

    2015-07-01

    Accurate detection of depression at an individual level using structural magnetic resonance imaging (sMRI) remains a challenge. Brain volumetric changes at a structural level appear to have importance in depression biomarkers studies. An automated algorithm is developed to select brain sMRI volumetric features for the detection of depression. A feature selection (FS) algorithm called degree of contribution (DoC) is developed for selection of sMRI volumetric features. This algorithm uses an ensemble approach to determine the degree of contribution in detection of major depressive disorder. The DoC is the score of feature importance used for feature ranking. The algorithm involves four stages: feature ranking, subset generation, subset evaluation, and DoC analysis. The performance of DoC is evaluated on the Duke University Multi-site Imaging Research in the Analysis of Depression sMRI dataset. The dataset consists of 115 brain sMRI scans of 88 healthy controls and 27 depressed subjects. Forty-four sMRI volumetric features are used in the evaluation. The DoC score of forty-four features was determined as the accuracy threshold (Acc_Thresh) was varied. The DoC performance was compared with that of four existing FS algorithms. At all defined Acc_Threshs, DoC outperformed the four examined FS algorithms for the average classification score and the maximum classification score. DoC has a good ability to generate reduced-size subsets of important features that could yield high classification accuracy. Based on the DoC score, the most discriminant volumetric features are those from the left-brain region.

  12. Spinal CT scan, 2

    International Nuclear Information System (INIS)

    Nakagawa, Hiroshi

    1982-01-01

    Plain CT described fairly accurately the anatomy and lesions of the lumbar and sacral spine on transverse sections. Since herniation of the intervertebral disc could be diagnosed directly by CT, the indications for myelography could be restricted. Spinal-canal stenosis of the lumbar spine occurs because of various factors, and CT not only demonstrated the accurate size and morphology of the bony canal, but also elucidated thickening of the joints and yellow ligament. CT was also useful for the diagnosis of tumors in the lumbar and sacral spine, visualizing bone changes and soft tissues on transverse sections. The diagnosis of intradural tumors, however, required myelography and metrizamide CT. CT has become important for the diagnosis of spinal and spinal-cord diseases and for selection of the surgical approach. (Chiba, N.)

  13. Applications of machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet stars

    Science.gov (United States)

    Morello, Giuseppe; Morris, P. W.; Van Dyk, S. D.; Marston, A. P.; Mauerhan, J. C.

    2018-01-01

    We have investigated and applied machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet (WR) candidates. Objects taken from the Spitzer Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) catalogue of infrared objects in the Galactic plane can be classified into different stellar populations based on the colours inferred from their broad-band photometric magnitudes [J, H and Ks from the 2 Micron All Sky Survey (2MASS), and the four Spitzer/IRAC bands]. The algorithms tested in this pilot study are variants of the k-nearest neighbours approach, which is ideal for exploratory studies of classification problems where interrelations between variables and classes are complicated. The aims of this study are (1) to provide an automated tool to select reliable WR candidates and potentially other classes of objects, (2) to measure the efficiency of infrared colour selection at performing these tasks and (3) to lay the groundwork for statistically inferring the total number of WR stars in our Galaxy. We report the performance results obtained over a set of known objects and selected candidates for which we have carried out follow-up spectroscopic observations, and confirm the discovery of four new WR stars.
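
    A minimal k-NN colour classifier in this spirit is sketched below; the input files, their column layout, and the neighbour count are all hypothetical stand-ins for the catalogue data described above.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Hypothetical colour table: each row is a source, columns are colours
    # such as J-H, H-Ks, Ks-[3.6], [3.6]-[4.5], [4.5]-[5.8], [5.8]-[8.0].
    X_train = np.loadtxt("known_sources_colours.csv", delimiter=",")
    y_train = np.loadtxt("known_sources_labels.csv", dtype=str)  # 'WR', ...

    knn = KNeighborsClassifier(n_neighbors=7, weights="distance")
    knn.fit(X_train, y_train)

    # Score unclassified GLIMPSE sources and rank the most WR-like first.
    X_cand = np.loadtxt("glimpse_candidates_colours.csv", delimiter=",")
    proba = knn.predict_proba(X_cand)
    wr_idx = list(knn.classes_).index("WR")
    ranked = np.argsort(proba[:, wr_idx])[::-1]
    ```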

  14. A particle swarm optimization algorithm for beam angle selection in intensity-modulated radiotherapy planning

    International Nuclear Information System (INIS)

    Li Yongjie; Yao Dezhong; Yao, Jonathan; Chen Wufan

    2005-01-01

    Automatic beam angle selection is an important but challenging problem for intensity-modulated radiation therapy (IMRT) planning. Though many efforts have been made, it is still not very satisfactory in clinical IMRT practice because of the extensive computation required to solve the inverse problem. In this paper, a new technique named BASPSO (Beam Angle Selection with a Particle Swarm Optimization algorithm) is presented to improve the efficiency of the beam angle optimization problem. Originally developed as a tool for simulating social behaviour, the particle swarm optimization (PSO) algorithm is a relatively new population-based evolutionary optimization technique first introduced by Kennedy and Eberhart in 1995. In the proposed BASPSO, the beam angles are optimized using PSO by treating each beam configuration as a particle (individual), and the beam intensity maps for each beam configuration are optimized using the conjugate gradient (CG) algorithm. These two optimization processes are implemented iteratively. The performance of each individual is evaluated by a fitness value calculated with a physical objective function. A population of these individuals is evolved by cooperation and competition among the individuals themselves through generations. The optimization results of a simulated case with known optimal beam angles and two clinical cases (a prostate case and a head-and-neck case) show that PSO is valid and efficient and can speed up the beam angle optimization process. Furthermore, the performance comparisons based on the preliminary results indicate that, as a whole, the PSO-based algorithm seems to outperform, or at least compete with, the GA-based algorithm in computation time and robustness. In conclusion, the reported work suggests that the introduced PSO algorithm could act as a new promising solution to the beam angle optimization problem and potentially other optimization problems in IMRT, though further studies are needed.
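
    A generic PSO loop over beam configurations is sketched below; in BASPSO the `objective` would wrap the inner conjugate-gradient fluence-map optimization, which is omitted here, and the swarm constants are textbook defaults rather than the paper's values.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def pso_beam_angles(objective, n_beams=5, n_particles=20, iters=50):
        """Each particle is one beam configuration (a vector of gantry
        angles in degrees); `objective` returns the plan score after the
        inner fluence-map optimisation (lower is better)."""
        x = rng.uniform(0, 360, size=(n_particles, n_beams))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_f)].copy()
        w, c1, c2 = 0.7, 1.5, 1.5      # inertia and acceleration weights
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = (x + v) % 360.0        # angles wrap around the gantry
            f = np.array([objective(p) for p in x])
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()
        return gbest, pbest_f.min()
    ```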

  15. An improved fast and elitist multi-objective genetic algorithm-ANSGA-II for multi-objective optimization of inverse radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Cao Ruifen; Li Guoli; Song Gang; Zhao Pan; Lin Hui; Wu Aidong; Huang Chenyu; Wu Yican

    2007-01-01

    Objective: To provide a fast and effective multi-objective optimization algorithm for an inverse radiotherapy treatment planning system. Methods: The Non-dominated Sorting Genetic Algorithm NSGA-II is a representative multi-objective evolutionary optimization algorithm and outperforms many others. This paper presents ANSGA-II, which exploits the strengths of NSGA-II and adds adaptive crossover and mutation to improve its flexibility; in line with the character of inverse radiotherapy treatment planning, prior knowledge is used to generate the individuals of each generation during optimization, which speeds up convergence and improves efficiency. Results: An example optimizing the average dose on a CT slice, including the PTV, OAR and normal tissue (NT), shows that the algorithm can find satisfactory solutions within several minutes. Conclusions: The algorithm could offer clinical inverse radiotherapy treatment planning systems an additional choice of optimization algorithm. (authors)

  16. Simulation of the radiography formation process from CT patient volume

    Energy Technology Data Exchange (ETDEWEB)

    Bifulco, P; Cesarelli, M; Verso, E; Roccasalva Firenze, M; Sansone, M; Bracale, M [University of Naples, Federico II, Electronic Engineering Department, Bioengineering Unit, Via Claudio, 21 - 80125 Naples (Italy)

    1999-12-31

    The aim of this work is to develop an algorithm to simulate the radiographic image formation process using volumetric anatomical data of the patient, obtained from 3D diagnostic CT images. Many applications, including radiographically driven surgery, virtual reality in medicine and radiologist teaching and training, may take advantage of such a technique. The algorithm has been designed to simulate a generic radiographic set-up, arbitrarily oriented with respect to the patient. The simulated radiograph is obtained by considering a discrete number of X-ray paths departing from the focus, passing through the patient volume and reaching the radiographic plane. To evaluate a generic pixel of the simulated radiograph, the cumulative absorption along the corresponding X-ray is computed. To estimate the X-ray absorption at a generic point of the patient volume, 3D interpolation of the CT data is adopted. The proposed technique is quite similar to those employed in ray tracing. A computer-designed test volume has been used to assess the reliability of the radiography simulation algorithm as a measuring tool. The error analysis shows that the accuracy achieved by the simulation algorithm is largely confined within the sampling step of the CT volume. (authors) 16 refs., 12 figs., 1 tab.
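
    The sketch below mimics the described procedure: one ray per detector pixel is cast from the focus through a CT attenuation volume, and the cumulative absorption along the ray is computed with trilinear (3D) interpolation. The geometry, volume contents and units are toy values.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def simulate_radiograph(mu, focus, det_origin, det_u, det_v, nu, nv,
                              n_samples=256):
          """Integrate attenuation along focus-to-pixel rays (Beer-Lambert)."""
          image = np.zeros((nv, nu))
          ts = np.linspace(0.0, 1.0, n_samples)
          for j, vv in enumerate(np.linspace(0.0, 1.0, nv)):
              for i, uu in enumerate(np.linspace(0.0, 1.0, nu)):
                  pixel = det_origin + uu * det_u + vv * det_v
                  pts = focus[None, :] + ts[:, None] * (pixel - focus)[None, :]
                  vals = map_coordinates(mu, pts.T, order=1, mode="constant")
                  image[j, i] = vals.sum() * np.linalg.norm(pixel - focus) / n_samples
          return np.exp(-image)          # transmitted intensity per pixel

      mu = np.zeros((64, 64, 64))        # toy CT volume in voxel coordinates
      mu[20:44, 20:44, 20:44] = 0.02     # a mildly attenuating cube
      img = simulate_radiograph(mu, focus=np.array([32.0, 32.0, -200.0]),
                                det_origin=np.array([0.0, 0.0, 100.0]),
                                det_u=np.array([64.0, 0.0, 0.0]),
                                det_v=np.array([0.0, 64.0, 0.0]), nu=32, nv=32)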

  17. A fast method to emulate an iterative POCS image reconstruction algorithm.

    Science.gov (United States)

    Zeng, Gengsheng L

    2017-10-01

    Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem in which the nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity, while an iterative procedure, divided into segments, enforces edge-enhancing denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient. Low-dose CT data are used for algorithm feasibility studies, with the nonlinearity implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose X-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
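
    A schematic of the overall flow, assuming a windowed FBP and a bilateral filter are acceptable stand-ins for the paper's data-fidelity and edge-preserving steps:

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon
      from skimage.restoration import denoise_bilateral

      phantom = shepp_logan_phantom()
      angles = np.linspace(0.0, 180.0, 180, endpoint=False)
      sino = radon(phantom, theta=angles)            # data simulation only

      # The single backprojection: a windowed FBP enforces data fidelity.
      recon = np.clip(iradon(sino, theta=angles, filter_name="hann"), 0.0, 1.0)

      # Iterative "segments" of nonlinear edge-preserving filtering;
      # no forward projection is ever taken.
      for _ in range(3):
          recon = denoise_bilateral(recon, sigma_color=0.05, sigma_spatial=2)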

  18. Sub-Circuit Selection and Replacement Algorithms Modeled as Term Rewriting Systems

    Science.gov (United States)

    2008-12-16

    [Abstract not recoverable from the captured fragments. The record identifies Air Force Institute of Technology report AFIT/GCO/ENG/09-02 by Eric D. Simonaire on sub-circuit selection and replacement algorithms modeled as term rewriting systems.]

  19. Whole-body CT. Spiral and multislice CT. 2. tot. rev. and enl. ed.

    International Nuclear Information System (INIS)

    Prokop, M.; Galanski, M.; Schaefer-Prokop, C.; Molen, A.J. van der

    2007-01-01

    Spiral and multidetector techniques have improved the diagnostic possibilities of CT, so that image analysis and interpretation have become increasingly complex. This book represents the current state of the art in CT imaging, including the most recent technical scanner developments. The second edition comprises the current state of knowledge in CT imaging. There are new chapters on image processing, application of contrast agents and radiation dose. All organ-specific pathological findings are discussed in full. There are hints for optimum use and interpretation of CT, including CT angiography, CT colonography, CT-IVPL, and 3D imaging. There is an introduction to cardio-CT, from calcium scoring and CTA of the coronary arteries to assessment of cardiac morphology. There are detailed scan protocols with descriptions of how to go about parameter selection. Practical hints are given for better image quality and lower radiation exposure of patients, guidelines for patient preparation and complication management, and more than 1900 images in optimum RRR quality. (orig.)

  20. Indeterminate lesions on planar bone scintigraphy in lung cancer patients: SPECT, CT or SPECT-CT?

    International Nuclear Information System (INIS)

    Sharma, Punit; Kumar, Rakesh; Singh, Harmandeep; Bal, Chandrasekhar; Malhotra, Arun; Julka, Pramod Kumar; Thulkar, Sanjay

    2012-01-01

    The objective of the present study was to compare the role of single photon emission computed tomography (SPECT), computed tomography (CT) and SPECT-CT of a selected volume in lung cancer patients with indeterminate lesions on planar bone scintigraphy (BS). The data of 50 lung cancer patients (53 ± 10.3 years; range 30-75; male/female 38/12) with 65 indeterminate lesions on planar BS (January 2010 to November 2010) were retrospectively evaluated. All of them underwent SPECT-CT of a selected volume. SPECT, CT and SPECT-CT images were independently evaluated by two experienced readers (experience in musculoskeletal imaging, including CT: 5 and 7 years) in separate sessions. A scoring scale of 1 to 5 was used, in which 1 is definitely metastatic, 2 is probably metastatic, 3 is indeterminate, 4 is probably benign and 5 is definitely benign. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were calculated for each modality, taking a score ≤2 as metastatic. With receiver operating characteristic (ROC) curve analysis, areas under the curve (AUC) were calculated for each modality and compared. Clinical and imaging follow-up and/or histopathology were taken as reference standard. For both readers SPECT was inferior to CT (P = 0.004, P = 0.022) and SPECT-CT (P = 0.003, P = 0.037). However, no significant difference was found between CT and SPECT-CT for reader 1 (P = 0.847) and reader 2 (P = 0.592). The findings were similar for lytic as well as sclerotic lesions. Moderate inter-observer agreement was seen for SPECT images (κ = 0.426), while almost perfect agreement was seen for CT (κ = 0.834) and SPECT-CT (κ = 0.971). CT alone and SPECT-CT are better than SPECT for accurate characterisation of indeterminate lesions on planar BS in lung cancer patients. CT alone is not inferior to SPECT-CT for this purpose and might be preferred because of shorter acquisition time and wider availability. (orig.)

  1. Re-editing and Censoring of Detectors in Negative Selection Algorithm

    Directory of Open Access Journals (Sweden)

    X.Z. Gao

    2009-12-01

    The Negative Selection Algorithm (NSA) is a kind of novelty detection method inspired by the biological self/nonself discrimination principles. In this paper, we propose two new schemes for detector re-editing and censoring in the NSA. The detectors that fail to pass the negative selection phase are re-edited and updated to become qualified using the Differential Evolution (DE) method. In detector censoring, the qualification of all the detectors is evaluated, and only the appropriate ones are retained. Prior knowledge of the anomalous data is utilized to discriminate the detectors so that their anomaly detection performance can be improved. The effectiveness of our detector re-editing and censoring approaches is examined with both artificial signals and a practical bearing fault detection problem.
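
    A toy rendering of the re-editing idea, assuming real-valued detectors and the standard DE mutation rule d' = a + F(b - c); radii, sizes and data are illustrative.

      import numpy as np

      rng = np.random.default_rng(1)
      self_set = rng.normal(0.5, 0.05, size=(200, 2))   # "self" samples (toy)
      R = 0.15                                          # self-matching radius

      def matches_self(d):
          return np.min(np.linalg.norm(self_set - d, axis=1)) < R

      # Negative selection phase: keep only detectors that do NOT match self.
      cands = rng.random((100, 2))
      detectors = [d for d in cands if not matches_self(d)]
      failed = [d for d in cands if matches_self(d)]

      # DE-style re-editing of failed detectors: d' = a + F * (b - c).
      F = 0.8
      pool = np.array(detectors)
      if len(pool) >= 3:
          for d in failed:
              a, b, c = pool[rng.choice(len(pool), 3, replace=False)]
              trial = a + F * (b - c)
              if not matches_self(trial):
                  detectors.append(trial)               # re-edited, now qualified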

  2. Diagnostic value of contrast-enhanced CT combined with 18-FDG PET in patients selected for cytoreductive surgery and hyperthermic intraperitoneal chemotherapy (HIPEC).

    Science.gov (United States)

    Sommariva, Antonio; Evangelista, Laura; Pintacuda, Giovanna; Cervino, Anna Rita; Ramondo, Gaetano; Rossi, Carlo Riccardo

    2018-05-01

    The aim of this study is to assess the reliability, and the correlation with the surgical peritoneal cancer index (PCI), of combined PET/CT and ceCT scans (PET/ceCT) performed in a single session in patients with peritoneal carcinomatosis who are candidates for cytoreductive surgery (CS) and hyperthermic intraperitoneal chemotherapy (HIPEC). We retrospectively analyzed data collected from 27 patients with different types of peritoneal carcinomatosis, candidates for CS + HIPEC, who underwent FDG PET/ceCT in a single session. Two nuclear medicine physicians and two radiologists independently and blindly evaluated the PET/CT and ceCT imaging, respectively. In the case of discordance, consensus was reached by discussion between the specialists. Moreover, the combined images were evaluated by all the specialists in consensus. The PCIs obtained from surgical inspection, PET/CT, ceCT, and PET/ceCT were compared with each other, and the coefficients of correlation (r) were calculated. The study was conducted after approval by the local ethics committee. Surgical PCI was available in 21 patients. The coefficient of correlation between the PCI of PET/CT and surgery was 0.528, while it was higher between PET/ceCT and surgery (r = 0.878) and very similar between ceCT and surgery (r = 0.876). The r coefficient between surgical PCI and PET/CT was higher in patients with non-mucinous cancer (n = 12) than in the others (0.601 vs. 0.303), and the addition of ceCT significantly increased the correlation (r = 0.863), which is nevertheless similar to ceCT alone (r = 0.856). PET/ceCT as a single examination is more accurate than PET/CT, but not more accurate than ceCT alone, for the definition of PCI in this selected group of candidates for CS + HIPEC.

  3. Fast parallel algorithm for CT image reconstruction.

    Science.gov (United States)

    Flores, Liubov A; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2012-01-01

    In X-ray computed tomography (CT), X-rays are used to obtain the projection data needed to generate an image of the inside of an object. The image can be generated with different techniques. Iterative methods are more suitable for the reconstruction of images with high contrast and precision in noisy conditions and from a small number of projections, and their use may be important in portable scanners for their functionality in emergency situations. In practice, however, these methods are not widely used due to the high computational cost of their implementation. In this work we analyze iterative parallel image reconstruction with the Portable, Extensible Toolkit for Scientific Computation (PETSc).

  4. Optimal Selection of Clustering Algorithm via Multi-Criteria Decision Analysis (MCDA for Load Profiling Applications

    Directory of Open Access Journals (Sweden)

    Ioannis P. Panapakidis

    2018-02-01

    Due to the high implementation rates of smart meter systems, a considerable amount of research has been devoted to machine-learning tools for data handling and information retrieval. A key tool in load data processing is clustering. In recent years, a number of studies have proposed different clustering algorithms in the load profiling field. The present paper provides a methodology for addressing the aforementioned problem through Multi-Criteria Decision Analysis (MCDA), namely the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). A comparison of the algorithms is carried out. Next, a single test case on the selection of an algorithm is examined: user-specific weights are applied and, based on these weight values, the optimal algorithm is selected.
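
    For concreteness, a compact TOPSIS implementation over a toy decision matrix is sketched below; the scores, weights and benefit/cost labels are invented and do not reproduce the paper's clustering-validity criteria.

      import numpy as np

      def topsis(scores, weights, benefit):
          """Rank alternatives (rows) on criteria (columns); benefit[j] is True
          when higher is better for criterion j. Returns closeness to ideal."""
          v = scores / np.linalg.norm(scores, axis=0) * weights
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - worst, axis=1)
          return d_neg / (d_pos + d_neg)

      # Three clustering algorithms scored on three criteria, where only the
      # first criterion is "higher is better" (all numbers invented).
      scores = np.array([[0.72, 0.41, 120.0],
                         [0.65, 0.35, 240.0],
                         [0.80, 0.55,  90.0]])
      closeness = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
                         benefit=np.array([True, False, False]))
      print(closeness.argmax())   # index of the optimal algorithm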

  5. FRCA: A Fuzzy Relevance-Based Cluster Head Selection Algorithm for Wireless Mobile Ad-Hoc Sensor Networks

    Directory of Open Access Journals (Sweden)

    Taegwon Jeong

    2011-05-01

    Clustering is an important mechanism that efficiently provides information for mobile nodes and improves the processing capacity of routing, bandwidth allocation, and resource management and sharing. Clustering algorithms can be based on such criteria as the battery power of nodes, mobility, network size, distance, speed and direction. Above all, in order to achieve good clustering performance, overhead should be minimized, allowing mobile nodes to join and leave without perturbing the membership of the cluster while preserving current cluster structure as much as possible. This paper proposes a Fuzzy Relevance-based Cluster head selection Algorithm (FRCA) to solve problems found in existing wireless mobile ad hoc sensor networks, such as the node distribution found in dynamic properties due to mobility and flat structures and disturbance of the cluster formation. The proposed mechanism uses fuzzy relevance to select the cluster head for clustering in wireless mobile ad hoc sensor networks. In the simulation implemented on the NS-2 simulator, the proposed FRCA is compared with algorithms such as the Cluster-based Routing Protocol (CBRP), the Weighted-based Adaptive Clustering Algorithm (WACA), and the Scenario-based Clustering Algorithm for Mobile ad hoc networks (SCAM). The simulation results showed that the proposed FRCA achieves better performance than that of the other existing mechanisms.

  6. FRCA: a fuzzy relevance-based cluster head selection algorithm for wireless mobile ad-hoc sensor networks.

    Science.gov (United States)

    Lee, Chongdeuk; Jeong, Taegwon

    2011-01-01

    Clustering is an important mechanism that efficiently provides information for mobile nodes and improves the processing capacity of routing, bandwidth allocation, and resource management and sharing. Clustering algorithms can be based on such criteria as the battery power of nodes, mobility, network size, distance, speed and direction. Above all, in order to achieve good clustering performance, overhead should be minimized, allowing mobile nodes to join and leave without perturbing the membership of the cluster while preserving current cluster structure as much as possible. This paper proposes a Fuzzy Relevance-based Cluster head selection Algorithm (FRCA) to solve problems found in existing wireless mobile ad hoc sensor networks, such as the node distribution found in dynamic properties due to mobility and flat structures and disturbance of the cluster formation. The proposed mechanism uses fuzzy relevance to select the cluster head for clustering in wireless mobile ad hoc sensor networks. In the simulation implemented on the NS-2 simulator, the proposed FRCA is compared with algorithms such as the Cluster-based Routing Protocol (CBRP), the Weighted-based Adaptive Clustering Algorithm (WACA), and the Scenario-based Clustering Algorithm for Mobile ad hoc networks (SCAM). The simulation results showed that the proposed FRCA achieves better performance than that of the other existing mechanisms.

  7. Resolution enhancement of lung 4D-CT data using multiscale interphase iterative nonlocal means

    International Nuclear Information System (INIS)

    Zhang Yu; Yap, Pew-Thian; Wu Guorong; Feng Qianjin; Chen Wufan; Lian Jun; Shen Dinggang

    2013-01-01

    Purpose: Four-dimensional computed tomography (4D-CT) has been widely used in lung cancer radiotherapy due to its capability of providing important tumor motion information. However, the prolonged scanning duration required by 4D-CT causes a considerable increase in radiation dose. To minimize the radiation-related health risk, radiation dose is often reduced at the expense of interslice spatial resolution. However, inadequate resolution in 4D-CT causes artifacts and increases uncertainty in tumor localization, which eventually results in extra damage to healthy tissues during radiotherapy. In this paper, the authors propose a novel postprocessing algorithm to enhance the resolution of lung 4D-CT data. Methods: The authors' premise is that anatomical information missing in one phase can be recovered from the complementary information embedded in other phases. The authors employ a patch-based mechanism to propagate information across phases for the reconstruction of intermediate slices in the longitudinal direction, where resolution is normally the lowest. Specifically, structurally matching and spatially nearby patches are combined for the reconstruction of each patch. For greater sensitivity to anatomical details, the authors employ a quad-tree technique to adaptively partition the image for more fine-grained refinement, and further devise an iterative strategy for significant enhancement of anatomical details. Results: The authors evaluated their algorithm using publicly available lung data consisting of 10 4D-CT cases. The algorithm gives very promising results, with significantly enhanced image structures and much fewer artifacts. Quantitative analysis shows that the algorithm increases the peak signal-to-noise ratio by 3-4 dB and the structural similarity index by 3%-5% when compared with standard interpolation-based algorithms. Conclusions: The authors have developed a new algorithm to improve the resolution of 4D-CT.

  8. Feature-based US to CT registration of the aortic root

    Science.gov (United States)

    Lang, Pencilla; Chen, Elvis C. S.; Guiraudon, Gerard M.; Jones, Doug L.; Bainbridge, Daniel; Chu, Michael W.; Drangova, Maria; Hata, Noby; Jain, Ameet; Peters, Terry M.

    2011-03-01

    A feature-based registration was developed to align biplane and tracked ultrasound images of the aortic root with a preoperative CT volume. In transcatheter aortic valve replacement, a prosthetic valve is inserted into the aortic annulus via a catheter. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to significant morbidity and mortality. Registration of pre-operative CT to transesophageal ultrasound and fluoroscopy images is a major step towards providing augmented image guidance for this procedure. The proposed registration approach uses an iterative closest point algorithm to register a surface mesh generated from CT to 3D US points reconstructed from a single biplane US acquisition, or multiple tracked US images. The use of a single simultaneous-acquisition biplane image eliminates reconstruction error introduced by cardiac gating and TEE probe tracking, creating potential for real-time intra-operative registration. A simple initialization procedure is used to minimize changes to operating room workflow. The algorithm is tested on images acquired from excised porcine hearts. Results demonstrate a clinically acceptable accuracy of 2.6 mm and 5 mm for tracked US to CT and biplane US to CT registration, respectively.
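
    The core ICP step the registration relies on can be sketched as follows, using a KD-tree for closest-point lookup and an SVD-based (Kabsch) rigid update; the point sets and iteration count are placeholders.

      import numpy as np
      from scipy.spatial import cKDTree

      def icp_rigid(src, dst, iters=30):
          """Align 3D US points `src` to CT surface points `dst` rigidly."""
          tree = cKDTree(dst)
          R, t = np.eye(3), np.zeros(3)
          for _ in range(iters):
              moved = src @ R.T + t
              nn = dst[tree.query(moved)[1]]        # closest CT surface points
              mu_s, mu_d = moved.mean(0), nn.mean(0)
              H = (moved - mu_s).T @ (nn - mu_d)
              U, _, Vt = np.linalg.svd(H)
              D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflection
              R_step = Vt.T @ D @ U.T
              R, t = R_step @ R, R_step @ t + (mu_d - R_step @ mu_s)
          return R, t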

  9. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    Science.gov (United States)

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image quality sharper by preserving the edges or boundaries more accurately. In this work the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational complexity of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
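
    Of the per-iteration operations listed, the linear-time shrinkage (soft-thresholding) step and the discrete gradients it acts on are easy to show in isolation. The sketch below is a generic building block of ADAL/split-Bregman-type TV solvers, not the paper's full algorithm.

      import numpy as np

      def shrink(x, tau):
          # Soft-thresholding: sign(x) * max(|x| - tau, 0), elementwise.
          return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

      def grad2d(u):
          # Forward-difference gradients (zero at the far boundary).
          gx = np.diff(u, axis=1, append=u[:, -1:])
          gy = np.diff(u, axis=0, append=u[-1:, :])
          return gx, gy

      u = np.random.default_rng(0).random((8, 8))
      gx, gy = grad2d(u)
      gx_s, gy_s = shrink(gx, 0.1), shrink(gy, 0.1)   # one shrinkage sub-step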

  10. Automated tube voltage selection for radiation dose and contrast medium reduction at coronary CT angiography using 3rd generation dual-source CT

    Energy Technology Data Exchange (ETDEWEB)

    Mangold, Stefanie [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany); Wichmann, Julian L. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University Hospital Frankfurt, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Schoepf, U.J. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Medical University of South Carolina, Division of Cardiology, Department of Medicine, Charleston, SC (United States); Poole, Zachary B.; Varga-Szemes, Akos; De Cecco, Carlo N. [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); Canstein, Christian [Siemens Medical Solutions, Malvern, PA (United States); Caruso, Damiano [Medical University of South Carolina, Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Charleston, SC (United States); University of Rome 'Sapienza', Department of Radiological Sciences, Oncology and Pathology, Rome (Italy); Bamberg, Fabian; Nikolaou, Konstantin [Eberhard-Karls University Tuebingen, Department of Diagnostic and Interventional Radiology, Tuebingen (Germany)

    2016-10-15

    To investigate the relationship between automated tube voltage selection (ATVS) and body mass index (BMI) and its effect on image quality and radiation dose of coronary CT angiography (CCTA). We evaluated 272 patients who underwent CCTA with 3rd generation dual-source CT (DSCT). Prospectively ECG-triggered spiral acquisition was performed with automated tube current selection and advanced iterative reconstruction. Tube voltages were selected by ATVS (70-120 kV). BMI, effective dose (ED), and vascular attenuation in the coronary arteries were recorded. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used for subjective image quality analysis. Image quality was rated good to excellent in 98.9 % of examinations without significant differences for proximal and distal attenuation (all p ≥ .0516), whereas image noise was rated significantly higher at 70 kV compared to ≥100 kV (all p < .0266). However, no significant differences were observed in SNR or CNR at 70-120 kV (all p ≥ .0829). Mean ED at 70-120 kV was 1.5 ± 1.2 mSv, 2.4 ± 1.5 mSv, 3.6 ± 2.7 mSv, 5.9 ± 4.0 mSv, 7.9 ± 4.2 mSv, and 10.7 ± 4.1 mSv, respectively (all p ≤ .0414). Correlation analysis showed a moderate association between tube voltage and BMI (r = .639). ATVS allows individual tube voltage adaptation for CCTA performed with 3rd generation DSCT, resulting in significantly decreased radiation exposure while maintaining image quality. (orig.)

  11. ProSelection: A Novel Algorithm to Select Proper Protein Structure Subsets for in Silico Target Identification and Drug Discovery Research.

    Science.gov (United States)

    Wang, Nanyi; Wang, Lirong; Xie, Xiang-Qun

    2017-11-27

    Molecular docking is widely applied to computer-aided drug design and has become relatively mature in recent decades. Application of docking in modeling varies from single lead compound optimization to large-scale virtual screening. The performance of molecular docking is highly dependent on the protein structures selected. It is especially challenging for large-scale target prediction research when multiple structures are available for a single target. Therefore, we have established ProSelection, a docking preferred-protein selection algorithm, in order to generate the proper structure subset(s). By the ProSelection algorithm, protein structures of "weak selectors" are filtered out whereas structures of "strong selectors" are kept. Specifically, the structure which has a good statistical performance of distinguishing active ligands from inactive ligands is defined as a strong selector. In this study, 249 protein structures of 14 autophagy-related targets are investigated. Surflex-dock was used as the docking engine to distinguish active and inactive compounds against these protein structures. Both the t test and the Mann-Whitney U test were used to distinguish the strong from the weak selectors based on the normality of the docking score distribution. The suggested docking score threshold for active ligands (SDA) was generated for each strong selector structure according to the receiver operating characteristic (ROC) curve. The performance of ProSelection was further validated by predicting the potential off-targets of 43 U.S. Food and Drug Administration approved small molecule antineoplastic drugs. Overall, ProSelection will accelerate the computational work in protein structure selection and could be a useful tool for molecular docking, target prediction, and protein-chemical database establishment research.

  12. Topogram-based automated selection of the tube potential and current in thoraco-abdominal trauma CT - a comparison to fixed kV with mAs modulation alone

    International Nuclear Information System (INIS)

    Frellesen, Claudia; Stock, Wenzel; Kerl, J.M.; Lehnert, Thomas; Wichmann, Julian L.; Beeres, Martin; Schulz, Boris; Bodelle, Boris; Vogl, Thomas J.; Nau, Christoph; Geiger, Emanuel; Wutzler, Sebastian; Ackermann, Hanns; Bauer, Ralf W.

    2014-01-01

    To investigate the impact of automated attenuation-based tube potential selection on image quality and exposure parameters in polytrauma patients undergoing contrast-enhanced thoraco-abdominal CT. One hundred patients were examined on a 16-slice device at 120 kV with 190 ref.mAs and automated mA modulation only. Another 100 patients underwent 128-slice CT with automated mA modulation and topogram-based automated tube potential selection (autokV) at 100, 120 or 140 kV. Volume CT dose index (CTDIvol), dose-length product (DLP), body diameters, noise, signal-to-noise ratio (SNR) and subjective image quality were compared. In the autokV group, 100 kV was automatically selected in 82 patients, 120 kV in 12 patients and 140 kV in 6 patients. Patient diameters increased with higher kV settings. The median CTDIvol (8.3 vs. 12.4 mGy; -33 %) and DLP (594 vs. 909 mGy cm; -35 %) in the entire autokV group were significantly lower than in the group with fixed 120 kV (p < 0.05 for both). Image quality remained at a constantly high level at any selected kV level. Topogram-based automated selection of the tube potential allows for significant dose savings in thoraco-abdominal trauma CT while image quality remains at a constantly high level. (orig.)

  13. Topogram-based automated selection of the tube potential and current in thoraco-abdominal trauma CT - a comparison to fixed kV with mAs modulation alone

    Energy Technology Data Exchange (ETDEWEB)

    Frellesen, Claudia; Stock, Wenzel; Kerl, J.M.; Lehnert, Thomas; Wichmann, Julian L.; Beeres, Martin; Schulz, Boris; Bodelle, Boris; Vogl, Thomas J. [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Nau, Christoph; Geiger, Emanuel; Wutzler, Sebastian [Clinic of the Goethe University, Department of Trauma, Hand and Reconstructive Surgery, Frankfurt (Germany); Ackermann, Hanns [Clinic of the Goethe University, Department of Biostatistics and Mathematical Modelling, Frankfurt (Germany); Bauer, Ralf W. [Clinic of the Goethe University, Department of Diagnostic and Interventional Radiology, Frankfurt (Germany); Klinikum der Goethe-Universitaet, Institut fuer Diagnostische und Interventionelle Radiologie, Frankfurt am Main (Germany)

    2014-07-15

    To investigate the impact of automated attenuation-based tube potential selection on image quality and exposure parameters in polytrauma patients undergoing contrast-enhanced thoraco-abdominal CT. One hundred patients were examined on a 16-slice device at 120 kV with 190 ref.mAs and automated mA modulation only. Another 100 patients underwent 128-slice CT with automated mA modulation and topogram-based automated tube potential selection (autokV) at 100, 120 or 140 kV. Volume CT dose index (CTDIvol), dose-length product (DLP), body diameters, noise, signal-to-noise ratio (SNR) and subjective image quality were compared. In the autokV group, 100 kV was automatically selected in 82 patients, 120 kV in 12 patients and 140 kV in 6 patients. Patient diameters increased with higher kV settings. The median CTDIvol (8.3 vs. 12.4 mGy; -33 %) and DLP (594 vs. 909 mGy cm; -35 %) in the entire autokV group were significantly lower than in the group with fixed 120 kV (p < 0.05 for both). Image quality remained at a constantly high level at any selected kV level. Topogram-based automated selection of the tube potential allows for significant dose savings in thoraco-abdominal trauma CT while image quality remains at a constantly high level. (orig.)

  14. Clinical impact of SPECT-CT in the diagnosis and surgical management of hyper-parathyroidism.

    Science.gov (United States)

    Tokmak, Handan; Demirkol, Mehmet Onur; Alagöl, Faruk; Tezelman, Serdar; Terzioglu, Tarik

    2014-01-01

    Hyper-functioning parathyroid glands with autonomous overproduction of PTH are the most frequent cause of hypercalcemia in outpatient populations with primary hyper-parathyroidism, which is generally caused by a solitary adenoma in 80%-90% of patients. Despite the various methodologies available for preoperative localization of parathyroid lesions, there is still no established preoperative imaging algorithm to guide the surgical approach in the management of primary hyper-parathyroidism (P-HPT). Minimally invasive surgery has replaced traditional bilateral neck exploration (BNE) as the initial approach in parathyroidectomy at many referral hospitals worldwide. In our study, we investigated the diagnostic contribution of SPECT-CT combined with conventional planar scintigraphy in localizing hyper-functioning parathyroid glands, since planar imaging has limitations. We also evaluated the efficacy of adding preoperative USG to the initial diagnostic imaging algorithm to localize a parathyroid adenoma. A total of 256 consecutive surgically naive patients with a diagnosis of hyper-parathyroidism were included in this preoperative localization study, of whom a selected 154 patients had neck surgery with definitive histology reports. All patients had 99mTc-methoxyisobutylisonitrile (99mTc-MIBI) double-phase scintigraphy. SPECT-CT was combined with standard 99mTc-MIBI planar parathyroid scintigraphy (pinhole and parallel-hole collimator) to evaluate whether SPECT-CT provides additional information in localizing the pathology causing hyper-parathyroidism in both P-HPT and S-HPT. In the 154 P-HPT patients, 168 lesions (142 adenomas, including 2 intrathyroidal and 2 double adenomas, 2 carcinomas, and 22 hyperplastic glands (four patients had MEN I, each with four hyperplastic glands)) were found at surgery. SPECT-CT detected more lesions than

  15. A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm

    Directory of Open Access Journals (Sweden)

    Sajidha Syed Azimuddin

    2017-01-01

    Open issues with respect to the K-means algorithm include identifying the number of clusters, initial seed concept selection, clustering tendency, handling empty clusters, identifying outliers, etc. In this paper we propose a novel and simple technique that considers both the density and the distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single-pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are among the actual concepts and not the means of representative concepts, as is the case in many other algorithms. We have implemented our proposed algorithm and compared the results with the interval-based technique of Fouad Khan, and find that our method outperforms the interval-based method. We have also compared our method with the original random K-means and K-means++ algorithms.
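
    The paper's exact scoring rule is not given in the abstract; the sketch below implements one plausible density-with-distance seeding in that spirit, where seeds are actual data points chosen to be both dense and far from previously chosen seeds.

      import numpy as np
      from scipy.spatial.distance import cdist

      def density_distance_seeds(X, k, radius):
          """Densest point first, then repeatedly the point maximizing
          density * distance-to-chosen-seeds (scoring rule illustrative)."""
          D = cdist(X, X)
          density = (D < radius).sum(axis=1)       # neighbours within radius
          seeds = [int(np.argmax(density))]
          for _ in range(k - 1):
              d_to_seeds = D[:, seeds].min(axis=1)
              seeds.append(int(np.argmax(density * d_to_seeds)))
          return X[seeds]

      X = np.random.default_rng(2).random((300, 2))
      init = density_distance_seeds(X, k=3, radius=0.1)
      # `init` can be passed to scikit-learn as KMeans(n_clusters=3, init=init, n_init=1).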

  16. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    Science.gov (United States)

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms - genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA) - was compared for molecular descriptor selection in the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. The matrix with 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for their selection. Three performance criteria - prediction accuracy, computational cost, and the number of selected descriptors - were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, and that GA is superior because of its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external test set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
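
    A stripped-down version of the GA-PLS loop is sketched below, with RMSEP on a hold-out split as the fitness of a binary descriptor mask. For brevity it uses cloning plus bit-flip mutation only (no crossover), and the data are synthetic.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import train_test_split

      def rmsep(mask, X_tr, X_te, y_tr, y_te):
          """Fitness: RMSEP of a PLS model built on the selected descriptors."""
          if mask.sum() < 2:
              return np.inf
          pls = PLSRegression(n_components=2).fit(X_tr[:, mask], y_tr)
          pred = pls.predict(X_te[:, mask]).ravel()
          return float(np.sqrt(np.mean((y_te - pred) ** 2)))

      rng = np.random.default_rng(0)
      X = rng.normal(size=(83, 40))
      y = X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.1, size=83)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      pop = rng.random((30, X.shape[1])) < 0.2        # binary descriptor masks
      for _ in range(40):
          fit = np.array([rmsep(m, X_tr, X_te, y_tr, y_te) for m in pop])
          parents = pop[np.argsort(fit)[:10]]         # keep the 10 fittest
          children = parents[rng.integers(0, 10, 30)].copy()
          children ^= rng.random(children.shape) < 0.02   # bit-flip mutation
          pop = children
      best = pop[np.argmin([rmsep(m, X_tr, X_te, y_tr, y_te) for m in pop])]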

  17. A deformable-model approach to semi-automatic segmentation of CT images demonstrated by application to the spinal canal

    International Nuclear Information System (INIS)

    Burnett, Stuart S.C.; Starkschall, George; Stevens, Craig W.; Liao Zhongxing

    2004-01-01

    Because of the importance of accurately defining the target in radiation treatment planning, we have developed a deformable-template algorithm for the semi-automatic delineation of normal tissue structures on computed tomography (CT) images. We illustrate the method by applying it to the spinal canal. Segmentation is performed in three steps: (a) partial delineation of the anatomic structure is obtained by wavelet-based edge detection; (b) a deformable-model template is fitted to the edge set by chamfer matching; and (c) the template is relaxed away from its original shape into its final position. Appropriately chosen ranges for the model parameters limit the deformations of the template, accounting for interpatient variability. Our approach differs from those used in other deformable models in that it does not inherently require the modeling of forces. Instead, the spinal canal was modeled using Fourier descriptors derived from four sets of manually drawn contours. Segmentation was carried out, without manual intervention, on five CT data sets and the algorithm's performance was judged subjectively by two radiation oncologists. Two assessments were considered: in the first, segmentation on a random selection of 100 axial CT images was compared with the corresponding contours drawn manually by one of six dosimetrists, also chosen randomly; in the second assessment, the segmentation of each image in the five evaluable CT sets (a total of 557 axial images) was rated as either successful, unsuccessful, or requiring further editing. Contours generated by the algorithm were more likely than manually drawn contours to be considered acceptable by the oncologists. The mean proportions of acceptable contours were 93% (automatic) and 69% (manual). Automatic delineation of the spinal canal was deemed to be successful on 91% of the images, unsuccessful on 2% of the images, and requiring further editing on 7% of the images. Our deformable template algorithm thus gives a robust

  18. Mathematical filtering minimizes metallic halation of titanium implants in MicroCT images.

    Science.gov (United States)

    Ha, Jee; Osher, Stanley J; Nishimura, Ichiro

    2013-01-01

    Microcomputed tomography (MicroCT) images containing titanium implants suffer from X-ray scattering artifacts, and the implant surface is critically affected by metallic halation. To reduce the metallic halation artifact, a nonlinear total variation denoising algorithm, such as the Split Bregman algorithm, was applied to the digital data set of MicroCT images. This study demonstrated that the use of a mathematical filter could successfully reduce metallic halation, facilitating the evaluation of osseointegration at the bone-implant interface in the reconstructed images.
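
    The same filtering step is available off the shelf; a minimal sketch with scikit-image's Split Bregman TV denoiser on a toy "implant" slice:

      import numpy as np
      from skimage.restoration import denoise_tv_bregman

      rng = np.random.default_rng(0)
      img = np.zeros((128, 128))
      yy, xx = np.ogrid[:128, :128]
      img[(yy - 64) ** 2 + (xx - 64) ** 2 < 400] = 1.0      # bright "implant"
      noisy = np.clip(img + rng.normal(scale=0.2, size=img.shape), 0.0, 1.0)

      # Split Bregman TV denoising; a smaller `weight` smooths more strongly.
      clean = denoise_tv_bregman(noisy, weight=4.0)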

  19. Dose assessment according to changes in algorithm in cardiac CT

    Science.gov (United States)

    Jang, H. C.; Cho, J. H.; Lee, H. K.; Hong, I. S.; Cho, M. S.; Park, C. S.; Lee, S. Y.; Dong, K. R.; Goo, E. H.; Chung, W. K.; Ryu, Y. H.; Lim, C. S.

    2012-06-01

    The principal objective of this study was to determine the effects of the application of the adaptive statistical iterative reconstruction (ASIR) technique, in combination with two other factors (body mass index (BMI) and tube potential), on radiation dose in cardiac computed tomography (CT). For quantitative analysis, regions of interest were positioned on the central region of the great coronary artery, the right coronary artery, and the left anterior descending artery, after which the means and standard deviations of the measured CT numbers were obtained. For qualitative analysis, images taken of the major coronary arteries (right coronary, left anterior descending, and left circumflex) were graded on a scale of 1-5, with 5 indicating the best image quality. Effective dose, calculated by multiplying the dose-length product by a standard conversion factor of 0.017 for the chest, was employed as the measure of radiation exposure. In cardiac CT of patients with a BMI of less than 25 kg/m2, the use of 40% ASIR in combination with a low tube potential of 100 kVp resulted in a significant reduction in radiation dose without compromising diagnostic quality. Additionally, the combination of the 120 kVp protocol and the application of 40% ASIR for patients with a BMI higher than 25 kg/m2 yielded similar results.
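
    The effective-dose rule used in the study is a one-line conversion; with an illustrative DLP value:

      # Effective dose = DLP x chest conversion factor (DLP value invented).
      dlp_mgy_cm = 620.0                          # dose-length product (mGy*cm)
      k_chest = 0.017                             # mSv per mGy*cm, for the chest
      effective_dose_msv = dlp_mgy_cm * k_chest   # = 10.54 mSv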

  20. Your choice MATor(s) : large-scale quantitative anonymity assessment of Tor path selection algorithms against structural attacks

    OpenAIRE

    Backes, Michael; Meiser, Sebastian; Slowik, Marcin

    2015-01-01

    In this paper, we present a rigorous methodology for quantifying the anonymity provided by Tor against a variety of structural attacks, i.e., adversaries that compromise Tor nodes and thereby perform eavesdropping attacks to deanonymize Tor users. First, we provide an algorithmic approach for computing the anonymity impact of such structural attacks against Tor. The algorithm is parametric in the considered path selection algorithm and is, hence, capable of reasoning about variants of Tor and...

  1. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    Science.gov (United States)

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under the given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art Collection Tree Protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression accounting for frequent packet retransmissions saves up to 55% energy compared to the best-known protocol. PMID:24721763
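
    The selection problem can be written as a small binary program; the toy sketch below brute-forces the assignment of a compression option to each node under a latency budget (all energies and latencies are invented).

      import itertools
      import numpy as np

      # Rows: 4 nodes; columns: option 0 = no compression, options 1-2 =
      # two compression algorithms with different energy/latency trade-offs.
      energy = np.array([[5.0, 3.2, 2.9],
                         [4.0, 2.8, 2.6],
                         [6.0, 4.1, 3.5],
                         [3.0, 2.4, 2.2]])
      latency = np.array([[1.0, 1.8, 2.5],
                          [1.0, 1.6, 2.2],
                          [1.0, 2.0, 2.8],
                          [1.0, 1.5, 2.0]])
      L_MAX = 8.0                                 # network latency budget

      nodes = np.arange(4)
      best = min((a for a in itertools.product(range(3), repeat=4)
                  if latency[nodes, list(a)].sum() <= L_MAX),
                 key=lambda a: energy[nodes, list(a)].sum())
      print(best, energy[nodes, list(best)].sum())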

  2. A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.

    Science.gov (United States)

    Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V

    2014-01-01

    The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility

  3. Dose reduction in pediatric abdominal CT: use of iterative reconstruction techniques across different CT platforms

    International Nuclear Information System (INIS)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S.

    2015-01-01

    Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)

  4. Dose reduction in pediatric abdominal CT: use of iterative reconstruction techniques across different CT platforms

    Energy Technology Data Exchange (ETDEWEB)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S. [MGH Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)

    2015-07-15

    Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)

  5. CT-guided interventions in children

    International Nuclear Information System (INIS)

    Honnef, D.; Wildberger, J.E.; Schubert, H.; Hohl, C.; Guenther, R.W.; Mahnken, A.

    2007-01-01

    In pediatric CT-guided interventions, specific features have to be taken into account. Due to a lack of cooperation or a limited ability to cooperate, procedures are often performed under analgosedation or general anesthesia. To provide radiation protection, a justified indication for the CT-guided intervention is necessary, and sonography and MRI are to be preferred whenever possible. CT examinations also need to be dose-adapted: sequential scanning should be used, and a reduction of tube voltage and tube current compared to pediatric diagnostic CT studies must be ensured. Gonad shields are recommended for male patients. Biopsy device selection depends on the assumed tumor entity, since histology as well as immunohistochemical, molecular-pathological and cytogenetic analyses are necessary to differentiate pediatric tumors (small round blue cell tumors). In addition to diagnostic procedures, therapeutic interventions (drainage, injection therapies, neurolysis, and radiofrequency ablation) can also be used in children and can provide an alternative to surgery in selected cases. With justified indications and precise performance, CT-guided interventions can be successful in pediatric patients with limited risks. (orig.)

  6. New horizons in cardiac CT

    International Nuclear Information System (INIS)

    Harder, A.M. den; Willemink, M.J.; Jong, P.A. de; Schilham, A.M.R.; Rajiah, P.; Takx, R.A.P.; Leiner, T.

    2016-01-01

    Until recently, cardiovascular computed tomography angiography (CCTA) was associated with considerable radiation doses. The introduction of tube current modulation and automatic tube potential selection, as well as high-pitch prospective ECG-triggering and iterative reconstruction, offers the ability to decrease dose by approximately one order of magnitude, often to sub-millisievert dose levels. In parallel, advancements in computational technology have enabled the measurement of fractional flow reserve (FFR) from CCTA data (FFR-CT). This technique shows potential to replace invasively measured FFR to select patients in need of coronary intervention. Furthermore, developments in scanner hardware have led to the introduction of dual-energy and photon-counting CT, which offer the possibility of material decomposition imaging. Dual-energy CT reduces beam hardening, which enables CCTA in patients with a high calcium burden and more robust myocardial CT perfusion imaging. Future-generation CT systems will be capable of counting individual X-ray photons. Photon-counting CT is promising and may result in a substantial further radiation dose reduction, vastly increased spatial resolution, and the introduction of a whole new class of contrast agents.

  7. Plain Radiography May Be Safely Omitted for Selected Major Trauma Patients Undergoing Whole Body CT: Database Study

    Directory of Open Access Journals (Sweden)

    Sarah Hudson

    2012-01-01

    Introduction. Whole body CT is being used increasingly in the primary survey of major trauma patients. We evaluated whether omitting plain films of the chest and pelvis in the primary survey is safe, comparing the probability of survival and the time to CT of patients who had plain X-rays with those who did not. Method. We performed a database study on major trauma patients admitted between 2008 and 2010 using data from the Trauma Audit and Research Network (TARN) and our PACS system. We included adult major trauma patients who had an ISS of greater than 15 and underwent whole body CT. Results. 245 patients were included in the study; 44 (17.9%) did not undergo plain films. The median time from admission to whole body CT was longer in patients who had plain films (47 minutes) than in those who did not (30 minutes, P<0.005). Mortality was higher in the group who received plain films (9.5% compared to 4.5%), but this was not statistically significant (P=0.77). Conclusion. We conclude that plain films may be safely omitted during the primary survey of selected major trauma patients.

  8. Intraoperative validation of CT-based lymph nodal levels, sublevels IIa and IIb: Is it of clinical relevance in selective radiation therapy?

    International Nuclear Information System (INIS)

    Levendag, Peter; Gregoire, Vincent; Hamoir, Marc; Voet, Peter; Est, Henrie van der; Heijmen, Ben; Kerrebijn, Jeroen

    2005-01-01

    Purpose: The objectives of this study are to discuss the intraoperative validation of CT-based boundaries of lymph nodal levels in the neck and, in particular, the clinical relevance of the delineation of sublevels IIa and IIb in the case of selective radiation therapy (RT). Methods and Materials: To validate the radiologically defined level contours, clips were positioned intraoperatively at the level boundaries defined by surgical anatomy. In 10 consecutive patients undergoing neck dissection, clips were placed at the most cranial border of the neck, and anterior-posterior and lateral X-ray films were obtained intraoperatively. Next, in 3 patients, neck levels were contoured on preoperative contrast-enhanced CT scans according to the international consensus guidelines. From each of these 3 patients, an intraoperative CT scan was also obtained, with clips placed at the surgical-anatomy-based level boundaries. The preoperative (CT-based) and intraoperative (surgery-defined) CT scans were matched. Results: Clips placed at the most cranial part of the neck lined up at the caudal part of the transverse process of the cervical vertebra C-I. The posterior border of surgical level IIa (spinal accessory nerve [SAN]) did not match the posterior border of CT-based level IIa (internal jugular vein [IJV]). The other surgical boundaries and CT-based contours were in good agreement. Conclusions: The cranial border of the neck, i.e., the cranial border of level IIa/IIb, corresponds to the caudal edge of the lateral process of C-I. Except for the posterior border between level IIa and level IIb, a perfect match was observed between the other surgical-clip-identified level II-V boundaries (surgical anatomy) and the CT-based delineation contours. It is argued that (1) because of the parotid gland overlapping part of level II, and (2) the frequent infestation of occult metastatic cells in the lymph channels around the IJV, the division of level II into radiologic

  9. Toward optimal X-ray flux utilization in breast CT

    DEFF Research Database (Denmark)

    Jørgensen, Jakob Heide; Hansen, Per Christian; Sidky, Emil Y.

    2011-01-01

    A realistic computer simulation of a breast computed tomography (CT) system and subject is constructed. The model is used to investigate the optimal number of views for the scan given a fixed total X-ray fluence. The reconstruction algorithm is based on accurate solution of a constrained TV-minimization problem, which has received much interest recently for sparse-view CT data.

  10. FDG PET/CT in cancer

    DEFF Research Database (Denmark)

    Petersen, Henrik; Holdgaard, Paw Christian; Madsen, Poul Henning

    2016-01-01

    PURPOSE: The Region of Southern Denmark (RSD), covering 1.2 of Denmark's 5.6 million inhabitants, established a task force to (1) retrieve literature evidence for the clinical use of positron emission tomography (PET)/CT and provide consequent recommendations and further to (2) compare the actual...... use of PET/CT in the RSD with these recommendations. This article summarizes the results. METHODS: A Work Group appointed a professional Subgroup which made Clinician Groups conduct literature reviews on six selected cancers responsible for 5,768 (62.6 %) of 9,213 PET/CT scans in the RSD in 2012...... use of PET/CT and literature-based recommendations was high in the first five mentioned cancers in that 96.2 % of scans were made for grade A or B indications versus only 22.2 % in gynaecological cancers. CONCLUSION: Evidence-based usefulness was reported in five of six selected cancers; evidence...

  11. Dual scan CT image recovery from truncated projections

    Science.gov (United States)

    Sarkar, Shubhabrata; Wahi, Pankaj; Munshi, Prabhat

    2017-12-01

    There are computerized tomography (CT) scanners available commercially for imaging small objects and they are often categorized as mini-CT X-ray machines. One major limitation of these machines is their inability to scan large objects with good image quality because of the truncation of projection data. An algorithm is proposed in this work which enables such machines to scan large objects while maintaining the quality of the recovered image.

  12. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)

  13. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate values for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  14. Improved image quality with simultaneously reduced radiation exposure: Knowledge-based iterative model reconstruction algorithms for coronary CT angiography in a clinical setting.

    Science.gov (United States)

    André, Florian; Fortner, Philipp; Vembar, Mani; Mueller, Dirk; Stiller, Wolfram; Buss, Sebastian J; Kauczor, Hans-Ulrich; Katus, Hugo A; Korosoglou, Grigorios

    The aim of this study was to assess the potential for radiation dose reduction using knowledge-based iterative model reconstruction (K-IMR) algorithms in combination with ultra-low dose body mass index (BMI)-adapted protocols in coronary CT angiography (coronary CTA). Forty patients undergoing clinically indicated coronary CTA were randomly assigned to two groups with BMI-adapted (I: ...) ... quality was significantly better in the ULD group using K-IMR CR 1 compared to FBP, iD 2 and iD 5 in the LD group, resulting in fewer non-diagnostic coronary segments (2.4% vs. 11.6%, 9.2% and 6.1%; p ...) ... image quality compared to LD protocols with FBP or hybrid iterative algorithms. Therefore, K-IMR allows for coronary CTA examinations with high diagnostic value and very low radiation exposure in clinical routine. Copyright © 2017 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.

  15. Evaluation of pre and post-operative spinal plain CT and CT-myelography

    International Nuclear Information System (INIS)

    Nagase, Joji; Inoue, Shunichi; Miyasaka, Hitoshi; Kamata, Sakae; Shinohara, Hiroyasu.

    1983-01-01

    Confirmation of the level of the scan slices is essential for the CT diagnosis of spinal and spinal-cord diseases. Pre- and postoperative comparisons should be made at the same level. For reading of plain CT and CTM, window levels should be identical pre- and postoperatively. Both methods demonstrated the spinal canal, the morphology of the spinal cord, and three-dimensional pathologic pictures inside and outside the spinal cord. Preoperative CT contributed useful information on the pathologic conditions and the selection of surgical procedures and routes. Postoperative plain CT confirmed surgical results, and CTM revealed the spinal cord and the subarachnoid space, as well as the range and degree of decompression of the spinal cord. (Chiba, N.)

  16. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, A. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Ranallo, F. N. [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Judy, P. F. [Brigham and Women’s Hospital, Boston, Massachusetts 02115 (United States); Gierada, D. S. [Department of Radiology, Washington University, St. Louis, Missouri 63110 (United States); Fain, S. B., E-mail: sfain@wisc.edu [Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53705 (United States); Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin 53792 (United States); Department of Biomedical Engineering,University of Wisconsin School of Engineering, Madison, Wisconsin 53706 (United States)

    2014-11-01

    Purpose: To determine the impact of constrained reconstruction techniques on quantitative CT (qCT) of the lung parenchyma and airways for low x-ray radiation dose. Methods: Measurement of small airways with qCT remains a challenge, especially for low x-ray dose protocols. Images of the COPDGene quality assurance phantom (CTP698, The Phantom Laboratory, Salem, NY) were obtained using a GE discovery CT750 HD scanner for helical scans at x-ray radiation dose-equivalents ranging from 1 to 4.12 mSv (12–100 mA s current–time product). Other parameters were 40 mm collimation, 0.984 pitch, 0.5 s rotation, and 0.625 mm thickness. The phantom was sandwiched between 7.5 cm thick water attenuating phantoms for a total length of 20 cm to better simulate the scatter conditions of patient scans. Image data sets were reconstructed using STANDARD (STD), DETAIL, BONE, and EDGE algorithms for filtered back projection (FBP), 100% adaptive statistical iterative reconstruction (ASIR), and Veo reconstructions. Reduced (half) display field of view (DFOV) was used to increase sampling across airway phantom structures. Inner diameter (ID), wall area percent (WA%), and wall thickness (WT) measurements of eight airway mimicking tubes in the phantom, including a 2.5 mm ID (42.6 WA%, 0.4 mm WT), 3 mm ID (49.0 WA%, 0.6 mm WT), and 6 mm ID (49.0 WA%, 1.2 mm WT) were performed with Airway Inspector (Surgical Planning Laboratory, Brigham and Women’s Hospital, Boston, MA) using the phase congruency edge detection method. The average of individual measures at five central slices of the phantom was taken to reduce measurement error. Results: WA% measures were greatly overestimated while IDs were underestimated for the smaller airways, especially for reconstructions at full DFOV (36 cm) using the STD kernel, due to poor sampling and spatial resolution (0.7 mm pixel size). Despite low radiation dose, the ID of the 6 mm ID airway was consistently measured accurately for all methods other than STD
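
    The nominal wall-area percentages quoted for the phantom tubes follow directly from circular-tube geometry: WA% is the wall's share of the total (lumen plus wall) cross-sectional area. The short check below reproduces the quoted 42.6 % and 49.0 % values from the stated inner diameters and wall thicknesses:

```python
import math

def wall_area_percent(inner_diameter_mm, wall_thickness_mm):
    """WA% = wall area / (lumen + wall area) * 100 for a circular tube."""
    r_in = inner_diameter_mm / 2.0
    r_out = r_in + wall_thickness_mm
    lumen = math.pi * r_in ** 2
    total = math.pi * r_out ** 2
    return 100.0 * (total - lumen) / total

# Nominal phantom tubes from the study:
for d_in, wt in [(2.5, 0.4), (3.0, 0.6), (6.0, 1.2)]:
    print(f"ID {d_in} mm, WT {wt} mm -> WA% = {wall_area_percent(d_in, wt):.1f}")
# -> 42.6, 49.0, 49.0, matching the quoted values
```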

  17. CT reconstruction techniques for improved accuracy of lung CT airway measurement

    International Nuclear Information System (INIS)

    Rodriguez, A.; Ranallo, F. N.; Judy, P. F.; Gierada, D. S.; Fain, S. B.

    2014-01-01

    Purpose: To determine the impact of constrained reconstruction techniques on quantitative CT (qCT) of the lung parenchyma and airways for low x-ray radiation dose. Methods: Measurement of small airways with qCT remains a challenge, especially for low x-ray dose protocols. Images of the COPDGene quality assurance phantom (CTP698, The Phantom Laboratory, Salem, NY) were obtained using a GE discovery CT750 HD scanner for helical scans at x-ray radiation dose-equivalents ranging from 1 to 4.12 mSv (12–100 mA s current–time product). Other parameters were 40 mm collimation, 0.984 pitch, 0.5 s rotation, and 0.625 mm thickness. The phantom was sandwiched between 7.5 cm thick water attenuating phantoms for a total length of 20 cm to better simulate the scatter conditions of patient scans. Image data sets were reconstructed using STANDARD (STD), DETAIL, BONE, and EDGE algorithms for filtered back projection (FBP), 100% adaptive statistical iterative reconstruction (ASIR), and Veo reconstructions. Reduced (half) display field of view (DFOV) was used to increase sampling across airway phantom structures. Inner diameter (ID), wall area percent (WA%), and wall thickness (WT) measurements of eight airway mimicking tubes in the phantom, including a 2.5 mm ID (42.6 WA%, 0.4 mm WT), 3 mm ID (49.0 WA%, 0.6 mm WT), and 6 mm ID (49.0 WA%, 1.2 mm WT) were performed with Airway Inspector (Surgical Planning Laboratory, Brigham and Women’s Hospital, Boston, MA) using the phase congruency edge detection method. The average of individual measures at five central slices of the phantom was taken to reduce measurement error. Results: WA% measures were greatly overestimated while IDs were underestimated for the smaller airways, especially for reconstructions at full DFOV (36 cm) using the STD kernel, due to poor sampling and spatial resolution (0.7 mm pixel size). Despite low radiation dose, the ID of the 6 mm ID airway was consistently measured accurately for all methods other than STD

  18. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata.

    Science.gov (United States)

    Liu, Aiming; Chen, Kun; Liu, Quan; Ai, Qingsong; Xie, Yi; Chen, Anqi

    2017-11-08

    Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain-computer interface competition data and real-time data acquired in our designed experiments were used to verify the validation of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain-computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain-computer interface systems.
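
    For readers unfamiliar with the FA, the sketch below is a much-simplified binary variant for feature selection: brighter (fitter) fireflies attract dimmer ones, with attractiveness decaying with Hamming distance. It is illustrative only; the learning-automata component the authors add to escape local optima, the CSP/LCD feature extraction and the SRDA classifier are all omitted, and the fitness function, `gamma` and `alpha` shown are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_feature_selection(fitness, n_features, n_fireflies=20,
                              n_iter=50, gamma=1.0, alpha=0.1):
    """Minimal binary firefly algorithm: fitness maps a boolean feature
    mask to a score to maximise."""
    pop = rng.random((n_fireflies, n_features)) < 0.5
    scores = np.array([fitness(m) for m in pop], dtype=float)
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if scores[j] > scores[i]:
                    # attractiveness decays with normalised Hamming distance
                    r = np.count_nonzero(pop[i] ^ pop[j])
                    beta = np.exp(-gamma * (r / n_features) ** 2)
                    # move i towards j: copy differing bits with prob. beta,
                    # then flip random bits with prob. alpha (exploration)
                    pop[i] ^= (pop[i] ^ pop[j]) & (rng.random(n_features) < beta)
                    pop[i] ^= rng.random(n_features) < alpha
                    scores[i] = fitness(pop[i])
    best = int(np.argmax(scores))
    return pop[best], scores[best]

# Toy usage: recover a known 'good' subset of 5 features out of 30.
target = np.zeros(30, dtype=bool); target[:5] = True
mask, score = firefly_feature_selection(
    lambda m: np.count_nonzero(m == target), n_features=30)
print(score)   # approaches 30 as the population converges on `target`
```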

  19. Feature Selection for Motor Imagery EEG Classification Based on Firefly Algorithm and Learning Automata

    Directory of Open Access Journals (Sweden)

    Aiming Liu

    2017-11-01

    Full Text Available Motor Imagery (MI) electroencephalography (EEG) is widely studied for its non-invasiveness, easy availability, portability, and high temporal resolution. As for MI EEG signal processing, the high dimensions of features represent a research challenge. It is necessary to eliminate redundant features, which not only create an additional overhead of managing the space complexity, but also might include outliers, thereby reducing classification accuracy. The firefly algorithm (FA) can adaptively select the best subset of features, and improve classification accuracy. However, the FA is easily entrapped in a local optimum. To solve this problem, this paper proposes a method of combining the firefly algorithm and learning automata (LA) to optimize feature selection for motor imagery EEG. We employed a method of combining common spatial pattern (CSP) and local characteristic-scale decomposition (LCD) algorithms to obtain a high dimensional feature set, and classified it by using the spectral regression discriminant analysis (SRDA) classifier. Both the fourth brain–computer interface competition data and real-time data acquired in our designed experiments were used to verify the validation of the proposed method. Compared with genetic and adaptive weight particle swarm optimization algorithms, the experimental results show that our proposed method effectively eliminates redundant features, and improves the classification accuracy of MI EEG signals. In addition, a real-time brain–computer interface system was implemented to verify the feasibility of our proposed methods being applied in practical brain–computer interface systems.

  20. Local curvature analysis for classifying breast tumors: Preliminary analysis in dedicated breast CT

    International Nuclear Information System (INIS)

    Lee, Juhun; Nishikawa, Robert M.; Reiser, Ingrid; Boone, John M.; Lindfors, Karen K.

    2015-01-01

    Purpose: The purpose of this study is to measure the effectiveness of local curvature measures as novel image features for classifying breast tumors. Methods: A total of 119 breast lesions from 104 noncontrast dedicated breast computed tomography images of women were used in this study. Volumetric segmentation was done using a seed-based segmentation algorithm and then a triangulated surface was extracted from the resulting segmentation. Total, mean, and Gaussian curvatures were then computed. Normalized curvatures were used as classification features. In addition, traditional image features were also extracted and a forward feature selection scheme was used to select the optimal feature set. Logistic regression was used as a classifier and leave-one-out cross-validation was utilized to evaluate the classification performances of the features. The area under the receiver operating characteristic curve (AUC, area under curve) was used as a figure of merit. Results: Among curvature measures, the normalized total curvature (C_T) showed the best classification performance (AUC of 0.74), while the others showed no classification power individually. Five traditional image features (two shape, two margin, and one texture descriptors) were selected via the feature selection scheme and its resulting classifier achieved an AUC of 0.83. Among those five features, the radial gradient index (RGI), which is a margin descriptor, showed the best classification performance (AUC of 0.73). A classifier combining RGI and C_T yielded an AUC of 0.81, which showed similar performance (i.e., no statistically significant difference) to the classifier with the above five traditional image features. Additional comparisons in AUC values between classifiers using different combinations of traditional image features and C_T were conducted. The results showed that C_T was able to replace the other four image features for the classification task. Conclusions: The normalized curvature measure

  1. Determination of Selection Method in Genetic Algorithm for Land Suitability

    Directory of Open Access Journals (Sweden)

    Irfianti Asti Dwi

    2016-01-01

    Full Text Available Genetic Algorithm is one alternative solution in the fields of optimization modelling, automatic programming and machine learning. The purpose of the study was to compare several selection methods in a Genetic Algorithm for land suitability. The contribution of this research is to apply the best method to develop region-based horticultural commodities. The testing is done by comparing three selection methods: Roulette Wheel, Tournament Selection and Stochastic Universal Sampling. Parameters of the locations used in the first test scenario include Temperature = 27°C, Rainfall = 1200 mm, Humidity = 30%, Cluster fruit = 4, Crossover Probability (Pc) = 0.6, Mutation Probability (Pm) = 0.2 and Epoch = 10. The second test scenario includes Temperature = 30°C, Rainfall = 2000 mm, Humidity = 35%, Cluster fruit = 5, Crossover Probability (Pc) = 0.7, Mutation Probability (Pm) = 0.3 and Epoch = 10. The conclusion of this study is that the Roulette Wheel is the best method because it produces a more stable fitness value than the other two methods.
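
    Of the three selection methods compared, the winning Roulette Wheel scheme is the simplest to state: each individual is selected with probability proportional to its fitness. A minimal NumPy sketch (assuming non-negative fitness values; not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def roulette_wheel_select(population, fitnesses, n_parents):
    """Fitness-proportionate (roulette wheel) selection: each individual's
    selection probability is its share of the total fitness."""
    fitnesses = np.asarray(fitnesses, dtype=float)
    probs = fitnesses / fitnesses.sum()
    idx = rng.choice(len(population), size=n_parents, p=probs)
    return [population[i] for i in idx]

# Example: the fittest individual ('c') is drawn most often.
parents = roulette_wheel_select(['a', 'b', 'c'], [1.0, 2.0, 7.0], n_parents=5)
print(parents)
```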

  2. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
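
    The alpha-trimmed mean filter at the core of the first algorithm sorts the pixels in each window, discards a fraction alpha of the extreme values, and averages the rest; sharpening then adds back the detail the filter removes. A minimal SciPy sketch of that idea (window size, alpha and gain are illustrative assumptions, not the authors' exact cascaded scheme):

```python
import numpy as np
from scipy.ndimage import generic_filter

def alpha_trimmed_mean(image, size=5, alpha=0.25):
    """Alpha-trimmed mean: in each size x size window, drop the alpha/2
    fraction of lowest and highest pixels and average the remainder."""
    def trimmed(window):
        w = np.sort(window)
        k = int(alpha / 2 * w.size)      # pixels trimmed from each end
        return w[k:w.size - k].mean()
    return generic_filter(image.astype(float), trimmed, size=size)

def sharpen(image, size=5, alpha=0.25, amount=1.0):
    """Unsharp-mask style sharpening: add back the detail removed by
    the trimmed-mean smoothing."""
    smoothed = alpha_trimmed_mean(image, size, alpha)
    return image + amount * (image - smoothed)
```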

  3. A modification of the successive projections algorithm for spectral variable selection in the presence of unknown interferents.

    Science.gov (United States)

    Soares, Sófacles Figueredo Carreiro; Galvão, Roberto Kawakami Harrop; Araújo, Mário César Ugulino; da Silva, Edvan Cirino; Pereira, Claudete Fernandes; de Andrade, Stéfani Iury Evangelista; Leite, Flaviano Carvalho

    2011-03-09

    This work proposes a modification to the successive projections algorithm (SPA) aimed at selecting spectral variables for multiple linear regression (MLR) in the presence of unknown interferents not included in the calibration data set. The modified algorithm favours the selection of variables in which the effect of the interferent is less pronounced. The proposed procedure can be regarded as an adaptive modelling technique, because the spectral features of the samples to be analyzed are considered in the variable selection process. The advantages of this new approach are demonstrated in two analytical problems, namely (1) ultraviolet-visible spectrometric determination of tartrazine, allura red and sunset yellow in aqueous solutions under the interference of erythrosine, and (2) near-infrared spectrometric determination of ethanol in gasoline under the interference of toluene. In these case studies, the performance of conventional MLR-SPA models is substantially degraded by the presence of the interferent. This problem is circumvented by applying the proposed Adaptive MLR-SPA approach, which results in prediction errors smaller than those obtained by three other multivariate calibration techniques, namely stepwise regression, full-spectrum partial-least-squares (PLS) and PLS with variables selected by a genetic algorithm. An inspection of the variable selection results reveals that the Adaptive approach successfully avoids spectral regions in which the interference is more intense. Copyright © 2011 Elsevier B.V. All rights reserved.
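
    For context, the classic (unmodified) SPA picks variables so as to minimise collinearity: starting from one wavelength, it repeatedly projects the remaining columns of the spectra matrix onto the orthogonal complement of the chosen ones and takes the column with the largest residual norm. A minimal NumPy sketch of that baseline (the paper's adaptive, interferent-aware modification is not reproduced here):

```python
import numpy as np

def spa_select(X, n_vars, start=0):
    """Minimal successive projections algorithm on a samples-by-wavelengths
    matrix X: greedy selection of the least collinear columns."""
    X = np.asarray(X, dtype=float)
    selected = [start]
    P = X.copy()
    for _ in range(n_vars - 1):
        v = P[:, selected[-1]]
        # deflate: project every column onto the complement of the last pick
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0           # never re-pick a column
        selected.append(int(np.argmax(norms)))
    return selected

# Toy usage: pick 3 low-collinearity wavelengths from random spectra.
X = np.random.default_rng(0).random((20, 50))
print(spa_select(X, n_vars=3))
```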

  4. Phase-contrast X-ray CT

    Energy Technology Data Exchange (ETDEWEB)

    Momose, Atsushi [Hitachi Ltd., Saitama (Japan). Advanced Research Laboratory; Takeda, Tohoru; Itai, Yuji

    1995-12-01

    Phase-contrast X-ray computed tomography (CT), enabling the observation of biological soft tissues without contrast enhancement, has been developed. The X-ray phase shift caused by an object is measured and input to a standard CT reconstruction algorithm. A thousand-fold increase in image sensitivity to soft tissues is achieved compared with conventional CT using absorption contrast. This is because the X-ray phase-shift cross section of light elements is about a thousand times larger than the absorption cross section. The phase shift is detected using an X-ray interferometer and computer analysis of the interference patterns. Experiments were performed using a synchrotron X-ray source. Excellent image sensitivity is demonstrated in the observation of cancerous rabbit liver. The CT images distinguish the cancer lesion from normal liver tissue and, moreover, visualize the pathological condition within the lesion. Although the X-ray energy employed and the present observation area size are not yet suitable for medical applications, phase-contrast X-ray CT is promising for investigating the internal structure of soft tissue, which is almost transparent to X-rays. The high sensitivity also offers the advantage of reduced X-ray doses. (author).
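
    The key point, that measured phase shifts can be fed to a standard CT reconstruction, can be illustrated in a few lines: to first order the phase shift is a line integral of the refractive-index decrement, so phase projections form an ordinary sinogram. A sketch using scikit-image's `radon`/`iradon` as a stand-in for the interferometric measurement and reconstruction (the phantom and its decrement value are hypothetical):

```python
import numpy as np
from skimage.transform import radon, iradon

# Simulated map of refractive-index decrements; the 'measured' phase
# sinogram is its set of line integrals over projection angles.
delta = np.zeros((128, 128))
delta[40:90, 50:80] = 1e-7          # hypothetical 'soft tissue' block

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
phase_sinogram = radon(delta, theta=theta)   # stands in for interferometer data

# Standard filtered back-projection recovers the decrement map.
reconstruction = iradon(phase_sinogram, theta=theta, filter_name='ramp')
```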

  5. An automatic fuzzy-based multi-temporal brain digital subtraction angiography image fusion algorithm using curvelet transform and content selection strategy.

    Science.gov (United States)

    Momeni, Saba; Pourghassem, Hossein

    2014-08-01

    Image fusion has recently come to play a prominent role in medical image processing and is useful in the diagnosis and treatment of many diseases. Digital subtraction angiography is one of the most widely used imaging techniques for diagnosing brain vascular diseases and for radiosurgery of the brain. This paper proposes an automatic fuzzy-based multi-temporal fusion algorithm for 2-D digital subtraction angiography images. In this algorithm, for blood vessel map extraction, the valuable frames of the brain angiography video are automatically determined to form the digital subtraction angiography images, based on a novel definition of the vessel dispersion generated by the injected contrast material. Our proposed fusion scheme contains different fusion methods for high and low frequency contents, based on the coefficient characteristics of the wrapping second generation curvelet transform and a novel content selection strategy. Our proposed content selection strategy is defined based on the sample correlation of the curvelet transform coefficients. In our proposed fuzzy-based fusion scheme, the selection of curvelet coefficients is optimized by applying weighted averaging and maximum selection rules for the high frequency coefficients. For low frequency coefficients, the maximum selection rule based on a local energy criterion is applied for better visual perception. Our proposed fusion algorithm is evaluated on a brain angiography image dataset consisting of one hundred 2-D internal carotid rotational angiography videos. The obtained results demonstrate the effectiveness and efficiency of our proposed fusion algorithm in comparison with common and basic fusion algorithms.
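
    The fusion rules themselves are easy to state independently of the curvelet transform. The sketch below applies them to generic coefficient arrays with NumPy/SciPy: a blend of weighted averaging and per-coefficient maximum selection for the high-frequency (detail) bands, and local-energy-based maximum selection for the low-frequency band. The transform step, the correlation-based content selection and the fuzzy weighting are omitted; the blend weights and window size are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high_freq(c1, c2, w1=0.5):
    """Detail-band rule: blend a weighted average with per-coefficient
    maximum-magnitude selection."""
    avg = w1 * c1 + (1.0 - w1) * c2
    mx = np.where(np.abs(c1) >= np.abs(c2), c1, c2)
    return 0.5 * (avg + mx)

def fuse_low_freq(a1, a2, win=3):
    """Approximation-band rule: per-coefficient maximum selection based
    on local energy in a win x win neighbourhood."""
    e1 = uniform_filter(a1 ** 2, size=win)
    e2 = uniform_filter(a2 ** 2, size=win)
    return np.where(e1 >= e2, a1, a2)
```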

  6. An optimisation algorithm for determination of treatment margins around moving and deformable targets

    International Nuclear Information System (INIS)

    Redpath, Anthony Thomas; Muren, Ludvig Paul

    2005-01-01

    Purpose: Determining treatment margins for inter-fractional motion of moving and deformable clinical target volumes (CTVs) remains a major challenge. This paper describes and applies an optimisation algorithm designed to derive such margins. Material and methods: The algorithm works by expanding the CTV, as determined from a pre-treatment or planning scan, to enclose the CTV positions observed during treatment. CTV positions during treatment may be obtained using, for example, repeat CT scanning and/or repeat electronic portal imaging (EPI). The algorithm can be applied to both individual patients and to a set of patients. The margins derived will minimise the excess volume outside the envelope that encloses all observed CTV positions (the CTV envelope). Initially, margins are set such that the envelope is more than adequately covered when the planning CTV is expanded. The algorithm uses an iterative method where the margins are sampled randomly and are then either increased or decreased randomly. The algorithm is tested on a set of 19 bladder cancer patients that underwent weekly repeat CT scanning and EPI throughout their treatment course. Results: From repeated runs on individual patients, the algorithm produces margins within a range of ±2 mm that lie among the best results found with an exhaustive search approach, and that agree within 3 mm with margins determined by a manual approach on the same data. The algorithm could be used to determine margins to cover any specified geometrical uncertainty, and allows for the determination of reduced margins by relaxing the coverage criteria, for example disregarding extreme CTV positions, or an arbitrarily selected volume fraction of the CTV envelope, and/or patients with extreme geometrical uncertainties. Conclusion: An optimisation approach to margin determination is found to give reproducible results within the accuracy required. The major advantage with this algorithm is that it is completely empirical, and it is
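
    A much-simplified version of the optimisation loop can be written directly on voxel masks: expand the planning CTV by a trial margin vector, and accept a random ±1 voxel perturbation only if the CTV envelope stays covered and the excess volume does not grow. This sketch uses an axis-aligned box expansion and voxel units purely for illustration, and starts from deliberately generous margins, as the paper does; it is not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(1)

def expand(ctv, margins):
    """Expand a boolean CTV mask by (mz, my, mx) voxels per axis
    (axis-aligned box expansion)."""
    struct = np.ones(tuple(2 * int(m) + 1 for m in margins), dtype=bool)
    return binary_dilation(ctv, structure=struct)

def optimise_margins(planning_ctv, envelope, n_iter=500, start=(10, 10, 10)):
    """Random search over margins: keep a +/-1 voxel perturbation only if
    the expanded CTV still covers the envelope and the excess volume
    outside the envelope does not increase."""
    margins = np.array(start)
    best_excess = np.inf
    for _ in range(n_iter):
        trial = margins.copy()
        axis = rng.integers(3)
        trial[axis] = max(0, trial[axis] + rng.choice([-1, 1]))
        expanded = expand(planning_ctv, trial)
        if np.all(expanded[envelope]):            # envelope fully covered
            excess = np.count_nonzero(expanded & ~envelope)
            if excess <= best_excess:
                margins, best_excess = trial, excess
    return margins
```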

  7. Algorithms of control parameters selection for automation of FDM 3D printing process

    Directory of Open Access Journals (Sweden)

    Kogut Paweł

    2017-01-01

    Full Text Available The paper presents algorithms for the selection of control parameters of the Fused Deposition Modelling (FDM) technology in the case of an open printing solutions environment and the 3DGence ONE printer. The following parameters were distinguished: model mesh density, material flow speed, cooling performance, retraction and printing speeds. These parameters are in principle independent of the printing system, but in practice depend to a certain degree on the features of the selected printing equipment. This is the first step towards automation of the 3D printing process in FDM technology.

  8. Estimation of skull table thickness with clinical CT and validation with microCT.

    Science.gov (United States)

    Lillie, Elizabeth M; Urban, Jillian E; Weaver, Ashley A; Powers, Alexander K; Stitzel, Joel D

    2015-01-01

    Brain injuries resulting from motor vehicle crashes (MVC) are extremely common, yet the details of the injury mechanism remain poorly characterized. Skull deformation is believed to be a contributing factor in some types of traumatic brain injury (TBI). Understanding the biomechanical contributors to skull deformation would provide further insight into the mechanism of head injury resulting from blunt trauma. In particular, skull thickness is thought to be a very important factor governing deformation of the skull and its propensity for fracture. Current computed tomography (CT) technology is limited in its ability to accurately measure cortical thickness using standard techniques. A method to evaluate cortical thickness using cortical density measured from CT data has been developed previously. This effort validates this technique for the measurement of skull table thickness in clinical head CT scans using two postmortem human specimens. Bone samples were harvested from the skulls of two cadavers and scanned with microCT to evaluate the accuracy of the cortical thickness estimated from clinical CT. Clinical scans were collected at 0.488 and 0.625 mm in-plane resolution with 0.625 mm slice thickness. The overall cortical thickness error was determined to be 0.078 ± 0.58 mm for cortical samples thinner than 4 mm, and 91.3% of these differences fell within the scanner resolution. Color maps of clinical CT thickness estimates are comparable to color maps of microCT thickness measurements, indicating good quantitative agreement. These data confirm that the cortical density algorithm successfully estimates skull table thickness from clinical CT scans. The application of this technique to clinical CT scans enables the evaluation of cortical thickness in population-based studies. © 2014 Anatomical Society.

  9. Simultaneous Reduction in Noise and Cross-Contamination Artifacts for Dual-Energy X-Ray CT

    Directory of Open Access Journals (Sweden)

    Baojun Li

    2013-01-01

    Full Text Available Purpose. Dual-energy CT imaging tends to suffer from much lower signal-to-noise ratio than single-energy CT. In this paper, we propose an improved anticorrelated noise reduction (ACNR) method that avoids cross-contamination artifacts. Methods. The proposed algorithm diffuses both basis material density images (e.g., water and iodine) at the same time using a novel correlated diffusion algorithm. The algorithm has been compared to the original ACNR algorithm in a contrast-enhanced, IRB-approved patient study. Material density accuracy and noise reduction are quantitatively evaluated by the percent density error and the percent noise reduction. Results. Both algorithms significantly reduced the noise of the basis material density images in all cases. The average percent noise reduction is 69.3% and 66.5% with the ACNR algorithm and the proposed algorithm, respectively. However, the ACNR algorithm alters the original material density by an average of 13% (or 2.18 mg/cc), with a maximum of 58.7% (or 8.97 mg/cc), in this study. This is evident in the water density images, as massive cross-contamination is seen in all five clinical cases. On the contrary, the proposed algorithm only changes the mean density by 2.4% (or 0.69 mg/cc), with a maximum of 7.6% (or 1.31 mg/cc). The cross-contamination artifacts are significantly minimized or absent with the proposed algorithm. Conclusion. The proposed algorithm can significantly reduce the image noise present in basis material density images from dual-energy CT imaging, with minimized cross-contamination compared to the ACNR algorithm.
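
    One common formulation of classic ACNR, the baseline this paper improves on, exploits the anticorrelation directly: adding a weighted high-pass of the companion basis image cancels the anticorrelated noise, but also leaks the companion's structure into the result, which is exactly the cross-contamination the paper targets. A hedged sketch of that baseline (the weight and Gaussian high-pass are illustrative assumptions, not parameters from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def acnr(water, iodine, weight=0.5, sigma=1.5):
    """Sketch of anticorrelated noise reduction: since basis-image noise
    is anticorrelated, adding a weighted high-pass of the companion image
    cancels noise -- at the price of cross-contaminating its structure."""
    hp_water = water - gaussian_filter(water, sigma)
    hp_iodine = iodine - gaussian_filter(iodine, sigma)
    return water + weight * hp_iodine, iodine + weight * hp_water
```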

  10. A standard deviation selection in evolutionary algorithm for grouper fish feed formulation

    Science.gov (United States)

    Cai-Juan, Soong; Ramli, Razamin; Rahman, Rosshairy Abdul

    2016-10-01

    Malaysia is one of the major producer countries of fishery products due to its location in the equatorial environment. Grouper fish is one of the potential markets contributing to the income of the country due to its desirable taste, high demand and high price. However, the supply of grouper fish from the wild catch is still insufficient to meet demand, so there is a need to farm grouper fish to cater to the market. In order to farm grouper fish, prior knowledge of the proper nutrients needed is required, because no exact data are available. Therefore, in this study, primary and secondary data are collected, despite the limited number of related papers, and 30 samples are investigated using standard deviation selection in an evolutionary algorithm. Thus, this study would unlock frontiers for extensive research on grouper fish feed formulation. The results show that standard deviation selection in an evolutionary algorithm is applicable: a feasible, low-fitness solution can be obtained quickly. This fitness can further be used to minimise the cost of farming grouper fish.

  11. A stereo remote sensing feature selection method based on artificial bee colony algorithm

    Science.gov (United States)

    Yan, Yiming; Liu, Pigang; Zhang, Ye; Su, Nan; Tian, Shu; Gao, Fengjiao; Shen, Yi

    2014-05-01

    To improve the efficiency of using stereo information for remote sensing classification, a stereo remote sensing feature selection method based on the artificial bee colony algorithm is proposed in this paper. Remote sensing stereo information can be described by a digital surface model (DSM) and an optical image, which contain information on the three-dimensional structure and the optical characteristics, respectively. Firstly, the three-dimensional structure characteristic can be analyzed by 3D Zernike descriptors (3DZD). However, different parameters of the 3DZD describe different complexities of the three-dimensional structure, and they need to be optimally selected for the various objects on the ground. Secondly, the features representing the optical characteristics also need to be optimized. If not properly handled, a stereo feature vector composed of 3DZD and image features would contain a large amount of redundant information, and this redundancy may not improve the classification accuracy and may even cause adverse effects. To reduce information redundancy while maintaining or improving the classification accuracy, an optimization frame for this stereo feature selection problem is created, and the artificial bee colony algorithm is introduced to solve this optimization problem. Experimental results show that the proposed method can effectively improve the computational efficiency and the classification accuracy.

  12. Advances in CT imaging for urolithiasis

    Directory of Open Access Journals (Sweden)

    Yasir Andrabi

    2015-01-01

    Full Text Available Urolithiasis is a common disease with increasing prevalence worldwide and an estimated lifetime recurrence risk of over 50%. Imaging plays a critical role in the initial diagnosis, follow-up and urological management of urinary tract stone disease. Unenhanced helical computed tomography (CT) is highly sensitive (>95%) and specific (>96%) in the diagnosis of urolithiasis and is the imaging investigation of choice for the initial assessment of patients with suspected urolithiasis. The emergence of multi-detector CT (MDCT) and technological innovations in CT such as dual-energy CT (DECT) have widened the scope of MDCT in stone disease management, from initial diagnosis to treatment planning and monitoring of treatment success. DECT has been shown to enhance pre-treatment characterization of stone composition in comparison with conventional MDCT and is being increasingly used. Although CT-related radiation dose exposure remains a valid concern, the use of low-dose MDCT protocols and the integration of newer iterative reconstruction algorithms into routine CT practice have resulted in a substantial decrease in ionizing radiation exposure. In this review article, our intent is to discuss the role of MDCT in the diagnosis and post-treatment evaluation of urolithiasis and to review the impact of emerging CT technologies such as dual energy in clinical practice.

  13. Usefulness of multi-plane dynamic subtraction CT (MPDS-CT) for intracranial high density lesions

    Energy Technology Data Exchange (ETDEWEB)

    Takagi, Ryo; Kumazaki, Tatsuo [Nippon Medical School, Tokyo (Japan)

    1996-02-01

    We present a new CT technique using a high-speed CT scanner for the detection and evaluation of temporal and spatial contrast enhancement of intracranial high-density lesions. Multi-plane dynamic subtraction CT (MPDS-CT) was performed in 21 patients with intracranial high-density lesions. These lesions consisted of 10 brain tumors, 7 intracerebral hemorrhages and 4 vascular malformations (2 untreated, 2 post-embolization). A baseline study was performed first, and 5 sequential planes covering the entire high-density lesion were selected. After obtaining the 5 sequential CT images as mask images, three series of multi-plane dynamic CT were performed for the same 5 planes with an intravenous bolus injection of contrast medium. MPDS-CT images were reconstructed by subtracting the dynamic CT images from the mask images. MPDS-CT was compared with conventional contrast-enhanced CT. MPDS-CT images showed definite contrast enhancement of high-density brain tumors and vascular malformations that was not clearly identified on conventional contrast-enhanced CT images because of calcified or hemorrhagic lesions and embolic materials, enabling us to discriminate enhanced abnormalities from non-enhanced areas such as unusual intracerebral hemorrhages. MPDS-CT will provide more accurate and objective information and will be greatly helpful for interpreting the pathophysiologic condition. (author).

  14. Feature selection using genetic algorithms for fetal heart rate analysis

    International Nuclear Information System (INIS)

    Xu, Liang; Redman, Christopher W G; Georgieva, Antoniya; Payne, Stephen J

    2014-01-01

    The fetal heart rate (FHR) is monitored on a paper strip (cardiotocogram) during labour to assess fetal health. If necessary, clinicians can intervene and assist with a prompt delivery of the baby. Data-driven computerized FHR analysis could help clinicians in the decision-making process. However, selecting the best computerized FHR features that relate to labour outcome is a pressing research problem. The objective of this study is to apply genetic algorithms (GA) as a feature selection method to select the best feature subset from 64 FHR features and to integrate these best features to recognize unfavourable FHR patterns. The GA was trained on 404 cases and tested on 106 cases (both balanced datasets) using three different classifiers. Regularization methods and backward selection were used to optimize the GA. Reasonable classification performance is shown on the testing set for the best feature subset (Cohen's kappa values of 0.45 to 0.49 using the different classifiers). This is, to our knowledge, the first time that a feature selection method for FHR analysis has been developed on a database of this size. This study indicates that different FHR features, when integrated, can show good performance in predicting labour outcome. It also gives the importance of each feature, which will be a valuable reference point for further studies. (paper)

  15. Selective epidemic broadcast algorithm to suppress broadcast storm in vehicular ad hoc networks

    Directory of Open Access Journals (Sweden)

    M. Chitra

    2018-03-01

    Full Text Available Broadcasting in Vehicular Ad Hoc Networks is the best way to spread emergency messages all over the network. Given the dynamic nature of vehicular ad hoc networks, simple broadcast or flooding faces what is known as the Broadcast Storm Problem (BSP). The BSP degrades the performance of the message broadcasting process through increased overhead, collisions and dissemination delay. The paper is motivated by the problems of existing Broadcast Storm Suppression Algorithms (BSSAs) such as p-Persistence, TLO, VSPB, G-SAB and SIR. This paper proposes to suppress the Broadcast Storm Problem and to improve the Emergency Safety Message dissemination rate through a new BSSA based on a Selective Epidemic Broadcast algorithm (SEB). The simulation results clearly show that SEB outperforms the existing algorithms in terms of ESM Delivery Ratio, Message Overhead, Collision Ratio, Broadcast Storm Ratio and Redundant Rebroadcast Ratio, with decreased Dissemination Delay.
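
    As background, weighted p-persistence, one of the suppression schemes SEB is compared against, makes the rebroadcast decision probabilistic: nodes farther from the sender rebroadcast with higher probability, so redundant rebroadcasts near the source are suppressed. A minimal sketch of that baseline (the SEB algorithm itself is not specified in the abstract and is not reproduced here):

```python
import random

def p_persistence_rebroadcast(distance_to_sender, radio_range, p_min=0.1):
    """Weighted p-persistence decision: rebroadcast with probability
    proportional to the node's distance from the sender, floored at
    p_min so close-by nodes are not silenced entirely."""
    p = max(p_min, min(1.0, distance_to_sender / radio_range))
    return random.random() < p

# Example: a node near the edge of a 300 m radio range almost always relays.
print(p_persistence_rebroadcast(distance_to_sender=280.0, radio_range=300.0))
```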

  16. A parallel implementation of 3-d CT image reconstruction on a hypercube multiprocessor

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.; Cho, Z.H.

    1990-01-01

    In this paper, the authors describe how image reconstruction in computerized tomography (CT) can be parallelized on a message-passing multiprocessor. In particular, the results obtained from a parallel implementation of 3-D CT image reconstruction for parallel-beam geometries on the Intel hypercube, iPSC/2, are presented. A two-stage pipelining approach is employed for filtering (convolution) and backprojection. The conventional sequential convolution algorithm is modified such that the symmetry of the filter kernel is fully utilized for parallelization. In the backprojection stage, the 3-D incremental algorithm, the authors' recently developed backprojection scheme, which is shown to be faster than the conventional algorithm, is parallelized

  17. Suspected acute pulmonary emboli: cost-effectiveness of chest helical computed tomography versus a standard diagnostic algorithm incorporating ventilation-perfusion scintigraphy

    International Nuclear Information System (INIS)

    Larcos, G.; Chi, K.K.G.; Berry, G.; Westmead Hospital, Sydney, NSW; Shiell, A.

    2000-01-01

    There is controversy regarding the investigation of patients with suspected acute pulmonary embolism (PE). To compare the cost-effectiveness of alternative methods of diagnosing acute PE, chest helical computed tomography (CT), alone and in combination with venous ultrasound (US) of the legs and pulmonary angiography (PA), was compared to a conventional algorithm using ventilation-perfusion (V/Q) scintigraphy supplemented in selected cases by US and PA. A decision-analytical model was constructed to model the costs and effects of the three diagnostic strategies in a hypothetical cohort of 1000 patients each. Transition probabilities were based on published data. Life years gained by each strategy were estimated from published mortality rates. Schedule fees were used to estimate costs. The V/Q protocol is both more expensive and more effective than CT alone, resulting in 20.1 additional lives saved at a (discounted) cost of $940 per life year gained. An additional 2.5 lives can be saved if CT replaces V/Q scintigraphy in the diagnostic algorithm, but at a cost of $23,905 per life year saved. The more effective diagnostic strategies thus proved also to be the more expensive ones. In patients with suspected PE, the incremental cost-effectiveness of the V/Q-based strategy over CT alone is reasonable in comparison with other health interventions; the cost-effectiveness of the supplemented CT strategy is more questionable. Copyright (2000) The Australasian College of Physicians
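
    The figures quoted are incremental cost-effectiveness ratios (ICERs): extra cost divided by extra effect. A worked check, with hypothetical cohort totals chosen only to be consistent with the abstract's 20.1 additional life-years and ~$940 per life-year gained:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect (here, dollars per life-year gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical totals: the V/Q strategy costs $18,894 more than CT alone
# and gains 20.1 life-years, reproducing the reported ~$940 per life-year.
print(icer(cost_new=1_018_894, cost_old=1_000_000,
           effect_new=20.1, effect_old=0.0))   # -> 940.0
```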

  18. Brain tumor and CT, 1

    International Nuclear Information System (INIS)

    Suzuki, Nobuyuki; Katada, Kazuhiro; Shinomiya, Youichi; Sano, Hirotoshi; Kanno, Tetsuo

    1981-01-01

    It is very important for a neurosurgeon to know the consistency of a brain tumor preoperatively, since this information is of much use in indicating the likely difficulty of the operation, which operative tools should be selected, the amount of bleeding to be expected from the tumor, and so on. The authors therefore tried to evaluate the consistency of brain tumors preoperatively. 27 cases in which the margin of the tumor was made clear with a homogeneous stain were studied concerning the relationship between tumor consistency and CT findings. The results are as follows: 1) A higher CT number on plain CT indicated a harder tumor consistency. 2) A smaller contrast index (CT number on enhanced CT / CT number on plain CT) indicated a harder tumor consistency. (author)
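
    The contrast index in finding 2) is just the ratio of the tumor's CT number on enhanced CT to that on plain CT; per the study, the lower the ratio, the harder the tumor. A trivial worked example with hypothetical CT numbers:

```python
def contrast_index(ct_enhanced, ct_plain):
    """Contrast index = CT number on enhanced CT / CT number on plain CT.
    Per the study, lower values indicated a harder tumor."""
    return ct_enhanced / ct_plain

# Hypothetical CT numbers (HU): plain 40, enhanced 70 -> index 1.75
print(contrast_index(70.0, 40.0))
```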

  19. A randomised clinical trial of routine versus selective CT imaging in acute abdomen: Impact of patient age on treatment costs and hospital resource use

    Energy Technology Data Exchange (ETDEWEB)

    Lehtimäki, Tiina T., E-mail: tiina.lehtimaki@kuh.fi [Department of Clinical Radiology, Kuopio University Hospital, Puijonlaaksontie 2, FI-70210, Kuopio (Finland); Valtonen, Hannu, E-mail: hannu.valtonen@uef.fi [University of Eastern Finland, Department of Health and Social Management, Yliopistonranta 1, FI-70211 Kuopio (Finland); Miettinen, Pekka, E-mail: pekka.miettinen@satucon.fi [Department of Gastrointestinal Surgery, Kuopio University Hospital, Puijonlaaksontie 2, FI-70210 Kuopio (Finland); Juvonen, Petri, E-mail: petri.juvonen@kuh.fi [Department of Gastrointestinal Surgery, Kuopio University Hospital, Puijonlaaksontie 2, FI-70210 Kuopio (Finland); Paajanen, Hannu, E-mail: hannu.paajanen@kuh.fi [Department of Gastrointestinal Surgery, Kuopio University Hospital, Puijonlaaksontie 2, FI-70210 Kuopio (Finland); University of Eastern Finland, Department of Clinical Medicine, Unit of Surgery, Yliopistonranta 1, FI-70211 Kuopio (Finland); Vanninen, Ritva, E-mail: ritva.vanninen@kuh.fi [Department of Clinical Radiology, Kuopio University Hospital, Puijonlaaksontie 2, FI-70210, Kuopio (Finland); University of Eastern Finland, Department of Clinical Medicine, Unit of Radiology, Yliopistonranta 1, FI-70211 Kuopio (Finland)

    2017-02-15

    Objectives: To evaluate the impact of patient age on hospital resource use and treatment costs of acute abdominal pain (AAP). Materials and methods: A total of 300 adult patients with AAP were randomised to either computed tomography (CT, n = 150) or selective imaging practice (SIP, n = 150) groups. Final analysis included 254 patients, 143 (42 patients ≥65 years) in the CT and 111 (32 patients ≥65 years) in the SIP group. All CT group patients underwent abdominal CT whereas in the SIP group, imaging was based on the clinical assessment. For each patient, the hospital length of stay (LOS), the numbers and costs of diagnostic and treatment procedures arising from AAP were calculated and registered. The incremental cost-effectiveness ratio (ICER) and bootstrapped cost-effectiveness acceptability curve (CEAC) were estimated for routine CT. Results: Treatment costs, imaging costs and LOS increased in conjunction with aging in both study groups, and were generally higher in the CT group compared to the SIP group. In the SIP group, CT was undertaken in 34% (27/79) of the <65 year olds but in 59% (19/32) of the older patients (≥65 years) (p = 0.02). The proportion of patients with non-specific abdominal pain was significantly lower in patients ≥65 years than in their younger counterparts (p = 0.04). In the routine CT group, the ICER of obtaining a specific diagnosis was 1682 € for patients <65 years and 1055 € for patients ≥65 years. According to CEAC estimation, routine CT for every patient with AAP has a 95% probability of being cost-effective if society is willing to pay 14087 € for an additional specific diagnosis for patients <65 years but only 4204 € in those ≥65 years. Conclusion: Treatment costs of AAP increase in parallel with aging, and the costs are generally higher with routine CT compared to selective imaging. The probability of obtaining a specific diagnosis of AAP increases with aging. If obtaining a specific diagnosis is deemed crucial

  20. Application of multi-objective optimization based on genetic algorithm for sustainable strategic supplier selection under fuzzy environment

    Energy Technology Data Exchange (ETDEWEB)

    Hashim, M.; Nazam, M.; Yao, L.; Baig, S.A.; Abrar, M.; Zia-ur-Rehman, M.

    2017-07-01

    The incorporation of environmental objectives into conventional supplier selection practices is crucial for corporations seeking to promote green supply chain management (GSCM). The challenges and risks associated with green supplier selection have been broadly recognized by procurement and supplier management professionals. This paper aims to solve a Tetra "S" (SSSS) problem based on fuzzy multi-objective optimization with a genetic algorithm in a holistic supply chain environment. In this empirical study, a mathematical model with fuzzy coefficients is considered for the sustainable strategic supplier selection (SSSS) problem, and a corresponding model is developed to tackle it. Design/methodology/approach: Sustainable strategic supplier selection (SSSS) decisions are typically multi-objective in nature and are an important part of green production and supply chain management for many firms. The proposed uncertain model is transformed into a deterministic model by applying the expected value model (EVM), and a genetic algorithm with a weighted-sum approach is used for solving the multi-objective problem. This research focuses on a multi-objective optimization model for minimizing lean cost and maximizing sustainable service and greener product quality level. Finally, a mathematical case from the textile sector is presented to exemplify the effectiveness of the proposed model, with a sensitivity analysis. Findings: This study makes a contribution by introducing the Tetra 'S' concept in both the theoretical and practical research related to multi-objective optimization, as well as in the study of sustainable strategic supplier selection (SSSS) under an uncertain environment. Our results suggest that decision makers tend to select the strategic supplier first and then enhance sustainability. Research limitations/implications: Although the fuzzy expected value model (EVM) with fuzzy coefficients constructed in the present research should be helpful for solving real world